\section*{Introduction} At the very origin of the theory of quantum groups is the search for an algebraic procedure for constructing solutions of the quantum Yang-Baxter equation \begin{equation}\label{YB} R_{12}(u)R_{13}(uv)R_{23}(v) = R_{23}(v)R_{13}(uv)R_{12}(u). \end{equation} The unknown $R(u)$ of this equation is an endomorphism of $V\otimes V$ for some finite-dimensional vector space $V$. This endomorphism depends on a parameter $u\in{\mathbb C}^*$, and we denote by $R_{ij}(u)$ the endomorphism of $V\otimes V\otimes V$ acting via $R(u)$ on the tensor product of the $i$th and $j$th factors, and via $\operatorname{Id}_V$ on the remaining factor. The Yang-Baxter equation appeared in several different guises in the literature on integrable systems (see \cite{J1} for a nice selection of early papers on the subject). As is well known, Drinfeld and Jimbo showed how to associate a solution of (\ref{YB}) with every irreducible finite-dimensional representation $V$ of a quantum loop algebra $U_q(L\mathfrak g)$. Here $\mathfrak g$ is a simple complex Lie algebra, $L\mathfrak g = \mathfrak g\otimes{\mathbb C}[t,t^{-1}]$ is its loop algebra, and $U_q(L\mathfrak g)$ denotes the quantum analogue of the enveloping algebra of $L\mathfrak g$ with parameter $q\in{\mathbb C}^*$ not a root of unity. This gives a strong motivation for studying the category $\mathcal{C}$ of finite-dimensional representations of $U_q(L\mathfrak g)$, and many authors have made important contributions (see \emph{e.g.\,} \cite{AK,CP2,CP,FR,GV,N1}). In particular, the simple objects of $\mathcal{C}$ have been classified by Chari and Pressley, an appropriate notion of $q$-character has been introduced by Frenkel and Reshetikhin, and, when $\mathfrak g$ is of simply laced type, Nakajima has calculated the irreducible $q$-characters in terms of the cohomology of certain quiver varieties. After reviewing these results in the first two lectures, we will turn to more recent attempts to understand the tensor structure of $\mathcal{C}$. In the case of $\mathfrak g = \mathfrak{sl}_2$, Chari and Pressley \cite{CP2} proved that every simple object is a tensor product of Kirillov-Reshetikhin modules. These irreducible representations (which are defined for every $\mathfrak g$) are quite special. They also come from the theory of integrable systems \cite{KR, KNS}, where they were first studied in connection with row-to-row transfer matrices. However, if $\mathfrak g$ is different from $\mathfrak{sl}_2$, the Kirillov-Reshetikhin modules are not the only prime simple objects and the situation is far more complicated. Thus, already for $\mathfrak g=\mathfrak{sl}_3$ no factorization theorem for simple objects is known. In the last lecture, we will explain a general conjecture (for $\mathfrak g$ of simply laced type) which would imply that the prime tensor factorization of many simple objects can be described as the factorization of a cluster monomial into a product of cluster variables. In retrospect, this should not be too surprising, since one of the first applications of cluster combinatorics given by Fomin and Zelevinsky was the proof of Zamolodchikov's periodicity conjecture for $Y$-systems \cite{FZ1}, which is also intimately related with the representation theory of $U_q(L\mathfrak g)$ \cite{KNS}. Our conjecture, which involves a positive integer $\ell$, is now proved when $\ell = 1$ \cite{HL, N5}, but the general case remains open.
\section{Representations of quantum loop algebras} In this first lecture, we review the definition of quantum loop algebras, the classification of their finite-dimensional irreducible representations, and the notion of $q$-character. For simplicity, we only consider quantum loop algebras of simply laced type $A, D, E$. We also formulate the $T$-systems satisfied by the $q$-characters of the Kirillov-Reshetikhin modules. All this is illustrated in the case of $U_q(L\mathfrak{sl}_2)$. We briefly explain the connection with the Yang-Baxter equation. Finally, we introduce some interesting tensor subcategories of the category of finite-dimensional representations, which will be used in \S\ref{sect_4}. For a recent and more complete survey on these topics, see \cite{CH}. \subsection{The quantum loop algebra $U_q(L\mathfrak g)$} \label{subsect_2.1} Let $\mathfrak g$ be a simple Lie algebra over ${\mathbb C}$ of type $A_n, D_n$ or $E_n$. We denote by $I=[1,n]$ the set of vertices of the Dynkin diagram, by $A=[a_{ij}]_{i,j\in I}$ the Cartan matrix, and by $\Pi=\{\alpha_i\mid i\in I\}$ the set of simple roots. Let $L\mathfrak g = \mathfrak g\otimes{\mathbb C}[t,t^{-1}]$ be the loop algebra of $\mathfrak g$. This is a Lie algebra with bracket \begin{equation} [x\otimes t^k,\, y\otimes t^l] = [x,y] \otimes t^{k+l}, \quad (x,y\in\mathfrak g,\ k,l\in {\mathbb Z}). \end{equation} Following Drinfeld \cite{D}, the enveloping algebra $U(L\mathfrak g)$ has a quantum deformation $U_q(L\mathfrak g)$. This is an algebra over ${\mathbb C}$ defined by a presentation with infinitely many generators \begin{equation} x^+_{i,r},\ x^-_{i,r},\ h_{i,m},\ k_i,\ k_i^{-1}, \qquad (i\in I,\ r\in{\mathbb Z},\ m\in{\mathbb Z}\setminus \{0\}), \end{equation} and a list of relations which we will not repeat (see \emph{e.g.\,} \cite{FR}). These relations depend on $q\in{\mathbb C}^*$, which we assume is not a root of unity. If $x^+_i,\,x^-_i,\,h_i\ (i\in I)$ denote the Chevalley generators of $\mathfrak g$, then $x^\pm_{i,r}$ is a $q$-analogue of $x^\pm_i\otimes t^r$, $h_{i,m}$ is a $q$-analogue of $h_i\otimes t^m$, and $k_i$ stands for the $q$-exponential of $h_i\equiv h_i\otimes 1$. In fact, $U_q(L\mathfrak g)$ is isomorphic to a quotient of the quantum enveloping algebra $U_q(\widehat{\mathfrak g})$ attached by Drinfeld and Jimbo to the affine Kac-Moody algebra $\widehat{\mathfrak g}$. It thus inherits from $U_q(\widehat{\mathfrak g})$ the structure of a Hopf algebra. Our main object of study is the category $\mathcal{C}$ of finite-dimensional $U_q(L\mathfrak g)$-modu\-les\footnote{We only consider modules \emph{of type 1}, a mild technical condition, see \emph{e.g.\,} \cite[\S12.2 B]{CP}.}. Since $U_q(L\mathfrak g)$ is a Hopf algebra, $\mathcal{C}$ is an abelian monoidal category. It is well known that $\mathcal{C}$ is not semisimple. We denote by $R$ its Grothendieck ring. Given two objects $M$ and $N$ of $\mathcal{C}$, the tensor products $M\otimes N$ and $N\otimes M$ are in general not isomorphic. However, they have the same composition factors with the same multiplicities, so $R$ is a commutative ring. For every $a\in{\mathbb C}^*$ there exists an automorphism $\tau_a$ of $U_q(L\mathfrak g)$ given by \[ \tau_a(x^\pm_{i,r}) = a^r x^\pm_{i,r},\quad \tau_a(h_{i,m}) = a^m h_{i,m},\quad \tau_a(k_i^{\pm1}) = k_i^{\pm1}. \] (This is a quantum analogue of the automorphism $x\otimes t^k \mapsto a^k(x\otimes t^k)$ of $L\mathfrak g$.)
Each automorphism $\tau_a$ induces an auto-equivalence $\tau_a^*$ of $\mathcal{C}$, which maps an object $M$ to its pullback $M(a)$ under $\tau_a$. \subsection{$q$-characters} By Drinfeld's presentation, the generators $k_i^{\pm1}$ and $h_{i,m}$ are pairwise commutative. So every object $M$ of $\mathcal{C}$ can be written as a finite direct sum of common generalized eigenspaces for the simultaneous action of the $k_i$ and of the $h_{i,m}$. These common generalized eigenspaces are called the {\em l}-weight spaces of $M$. (Here {\em l} stands for ``loop''.) The $q$-character of $M$, introduced by Frenkel and Reshetikhin \cite{FR}, is a Laurent polynomial with positive integer coefficients in some indeterminates $Y_{i,a}\ (i\in I, a\in{\mathbb C}^*)$, which encodes the decomposition of~$M$ as the direct sum of its {\em l}-weight spaces. More precisely, the eigenvalues of the $h_{i,m}\ (m>0)$ in an {\em l}-weight space $V$ of $M$ are always of the form \begin{equation}\label{eq24} \frac{q^m-q^{-m}}{m(q-q^{-1})} \left( \sum_{r=1}^{k_i}(a_{i,r})^m-\sum_{s=1}^{l_i}(b_{i,s})^m \right) \end{equation} for some nonzero complex numbers $a_{i,r}, b_{i,s}$. Moreover, they completely determine the eigenvalues of the $h_{i,m}\ (m<0)$ and of the $k_i$ on $V$. We encode this collection of eigenvalues with the Laurent monomial \begin{equation}\label{eq23} m_V = \prod_{i\in I}\left( \prod_{r=1}^{k_i} Y_{i,a_{i,r}} \prod_{s=1}^{l_i} Y_{i,b_{i,s}}^{-1} \right). \end{equation} The collection of eigenvalues (\ref{eq24}), or equivalently the monomial (\ref{eq23}), will be called the {\em l}-weight of $V$. Let $\mathcal{Y} = {\mathbb Z}[Y_{i,a}^{\pm1} ; i\in I, a\in{\mathbb C}^*]$. One then defines the $q$-character of $M\in\mathcal{C}$ by \begin{equation} \chi_q(M) = \sum_V \dim V \, m_V \in \mathcal{Y}, \end{equation} where the sum is over all {\em l}-weight spaces $V$ of $M$ (see \cite[Prop. 2.4]{FM}). \begin{theorem}[\cite{FR}] The Laurent polynomial $\chi_q(M)$ depends only on the class of $M$ in $R$, and the induced map $\chi_q : R \to \mathcal{Y}$ is an injective ring homomorphism. \end{theorem} The subalgebra of $U_q(L\mathfrak g)$ generated by \[ x^+_{i,0},\ x^-_{i,0},\ k_i,\ k_i^{-1}, \qquad (i\in I), \] is isomorphic to $U_q(\mathfrak g)$. Hence every $M\in\mathcal{C}$ can be regarded as a $U_q(\mathfrak g)$-module by restriction. The {\em l}-weight space decomposition of $M$ is a refinement of its decomposition as a direct sum of $U_q(\mathfrak g)$-weight spaces. Let $P$ be the weight lattice of $\mathfrak g$, with basis given by the fundamental weights $\varpi_i\ (i\in I)$. Let ${\mathbb Z}[P]$ be the group ring of $P$. As usual, $\lambda\in P$ is written in ${\mathbb Z}[P]$ as a formal exponential $e^\lambda$ to allow multiplicative notation. We denote by $\omega$ the ring homomorphism from $\mathcal{Y}$ to ${\mathbb Z}[P]$ defined by \begin{equation} \omega\left(Y_{i,a}\right) = e^{\varpi_i}. \end{equation} If $V$ is an {\em l}-weight space of $M$ with {\em l}-weight the Laurent monomial $m\in \mathcal{Y}$, then $V$ is a subspace of the $U_q(\mathfrak g)$-weight space with weight $\lambda$ such that $e^\lambda=\omega(m)$. Hence, the image $\omega(\chi_q(M))$ of the $q$-character of $M$ is the ordinary character of the underlying $U_q(\mathfrak g)$-module. For $i\in I$ and $a\in{\mathbb C}^*$ define \begin{equation}\label{eqrootA} A_{i,a} = Y_{i,aq}Y_{i,aq^{-1}}\prod_{j\not = i}Y_{j,a}^{a_{ij}}.
\end{equation} Thus $\omega(A_{i,a})=e^{\alpha_i}$, and the $A_{i,a}\ (a\in {\mathbb C}^*)$ should be viewed as affine analogues of the simple root~$\alpha_i$\footnote{In the non-simply-laced case, the definition of $A_{i,a}$ is more complicated, see \cite{FR}.}. (For instance, in type $A_2$ we have $A_{1,a} = Y_{1,aq}Y_{1,aq^{-1}}Y_{2,a}^{-1}$, since $a_{12}=-1$.) Following \cite{FR}, we define a partial order on the set $\mathcal{M}$ of Laurent monomials in the variables $Y_{i,a}$ by setting: \begin{equation}\label{eqorder} m \le m'\quad \Longleftrightarrow \quad \mbox{$\displaystyle\frac{m'}{m}$ is a monomial in the $A_{i,a}$ with exponents $\ge 0$.} \end{equation} This is an affine analogue of the usual partial order on $P$, defined by $\lambda \le \lambda'$ if and only if $\lambda'-\lambda$ is a sum of simple roots $\alpha_i$. A monomial $m\in\mathcal{M}$ is called dominant if it does not contain negative powers of the variables $Y_{i,a}$. We will denote by $\mathcal{M}_+$ the set of dominant monomials. They parametrize simple objects of $\mathcal{C}$, as was first shown by Chari and Pressley \cite{CP}\footnote{The original parametrization of \cite{CP} is in terms of Drinfeld polynomials, but in these notes we will rather use the equivalent parametrization by dominant monomials.}. More precisely, we have \begin{theorem}[\cite{FM}] Let $S$ be a simple object of $\mathcal{C}$. The $q$-character of $S$ is of the form \begin{equation}\label{eq_q_char} \chi_q(S) = m_S\left(1 + \sum_p M_p\right), \end{equation} where $m_S\in\mathcal{M}_+$, and every $M_p\not = 1$ is a monomial in the variables $A_{i,a}^{-1}$ with nonnegative exponents. Moreover, the map $S \mapsto m_S$ induces a bijection from the set of isoclasses of irreducible modules in $\mathcal{C}$ to $\mathcal{M}_+$. \end{theorem} The dominant monomial $m_S$ is called the highest {\em l}-weight of $\chi_q(S)$, since every other monomial $m_SM_p$ of (\ref{eq_q_char}) is less than $m_S$ in the partial order (\ref{eqorder}). The one-dimensional {\em l}-weight space of $S$ with {\em l}-weight $m_S$ consists of the highest-weight vectors of $S$, that is, the {\em l}-weight vectors $v\in S$ such that $x_{i,r}^+v=0$ for every $i\in I$ and $r\in{\mathbb Z}$. For $m\in\mathcal{M}_+$, we denote by $L(m)$ the corresponding simple object of $\mathcal{C}$. In particular, the modules \[ L(Y_{i,a}),\qquad (i\in I,\ a\in {\mathbb C}^*), \] are called the fundamental modules. It is known \cite[Cor. 2]{FR} that the Grothendieck ring $R$ is the polynomial ring over ${\mathbb Z}$ in the classes of the fundamental modules. \subsection{Kirillov-Reshetikhin modules}\label{subsect_2.3} For $i\in I$, $k\in {\mathbb N}^*$ and $a\in{\mathbb C}^*$, the simple object $W^{(i)}_{k,a}$ with highest {\em l}-weight \[ m_{k,a}^{(i)} = \prod_{j=0}^{k-1}Y_{i,\,aq^{2j}} \] is called a Kirillov-Reshetikhin module. Thus $W^{(i)}_{k,a}$ is an affine analogue of the irreducible representation of $U_q(\mathfrak g)$ with highest weight $k\varpi_i$. In particular, for $k=1$, $W^{(i)}_{1,a}$ coincides with the fundamental module $L(Y_{i,a})$. By convention, $W^{(i)}_{0,a}$ is the trivial representation for every $i$ and $a$.
The classes $[W^{(i)}_{k,a}]$ in $R$, or equivalently the $q$-characters $\chi_q(W^{(i)}_{k,a})$, satisfy the following system of equations indexed by $i\in I$, $k\in{\mathbb N}^*$, and $a\in{\mathbb C}^*$, called the $T$-system\footnote{In the non-simply-laced case, the $T$-systems are more complicated, see \cite{KNS,H2}.}: \begin{equation}\label{eqTsystem} [W^{(i)}_{k,a}][W^{(i)}_{k,aq^2}] = [W^{(i)}_{k+1,a}][W^{(i)}_{k-1,aq^2}] + \prod_{j\not = i} [W^{(j)}_{k,aq}]^{-a_{ij}}. \end{equation} This was conjectured in \cite{KNS} and proved in \cite{N2, H2}. Using these equations, one can inductively express any $[W^{(i)}_{k,a}]$ as a polynomial in the classes $[W^{(j)}_{1,b}]$ of the fundamental modules. Thus, one can obtain the $q$-characters of all the Kirillov-Reshetikhin modules once the $q$-characters of the fundamental modules are known. \subsection{The case of $\mathfrak{sl}_2$}\label{subsect_2.4} Let us illustrate the previous statements for $\mathfrak g=\mathfrak{sl}_2$. Here $I=\{1\}$, and we may drop the index~$i$ in $\alpha_i$, $\varpi_i$, $Y_{i,a}$, $A_{i,a}$, $[W^{(i)}_{k,a}]$. We have $A_a = Y_{aq}Y_{aq^{-1}}$. The fundamental modules $W_{1,a}\ (a\in{\mathbb C}^*)$ are the affine analogues of the vector representation ${\mathbb C}^2$ of $U_q(\mathfrak{sl}_2)$, whose character is equal to \[ e^{\varpi}+e^{-\varpi} = e^\varpi(1+e^{-\alpha}). \] Since the {\em l}-weight spaces are subspaces of the one-dimensional $U_q(\mathfrak{sl}_2)$-weight spaces, $W_{1,a}$ also decomposes as a sum of two {\em l}-weight spaces, and the two {\em l}-weights are easily checked to be $Y_{a}$ and $Y_{aq^2}^{-1}$. Hence \[ \chi_q(W_{1,a})=Y_a+Y_{aq^2}^{-1}=Y_a(1+A_{aq}^{-1}). \] The $T$-system (\ref{eqTsystem}) reads \[ [W_{k,a}][W_{k,aq^2}] = [W_{k+1,a}][W_{k-1,aq^2}] + 1, \qquad (a\in{\mathbb C}^*,\ k\in{\mathbb N}^*). \] From the identity \begin{equation}\label{eqKR1} [W_{1,a}][W_{1,aq^2}] = [W_{2,a}][W_{0,aq^2}]+ 1=[W_{2,a}]+1, \end{equation} one deduces that \[ \chi_q(W_{2,a})=Y_aY_{aq^2}+Y_aY_{aq^4}^{-1}+Y_{aq^2}^{-1}Y_{aq^4}^{-1} = Y_aY_{aq^2}\left(1 + A_{aq^3}^{-1} + A_{aq}^{-1}A_{aq^3}^{-1}\right). \] More generally, we have \begin{equation}\label{eqKR} \chi_q(W_{k,a})=\prod_{j=0}^{k-1}Y_{aq^{2j}}\left(1+A_{aq^{2k-1}}^{-1}\left(1+A_{aq^{2k-3}}^{-1} \left(1+\cdots\left(1+A_{aq}^{-1}\right)\cdots \right)\right)\right). \end{equation} Following Chari and Pressley \cite{CP2}, we now describe the $q$-characters of all the simple objects of $\mathcal{C}$. We call the string of complex numbers \[ \Sigma(k,a)=\{a,\ aq^2,\ \ldots, aq^{2k-2}\} \] the $q$-segment of origin $a$ and length $k$. Two $q$-segments are said to be in special position if neither contains the other and their union is a $q$-segment. Otherwise, we say that they are in general position. It is easy to check that every finite multiset $\{b_1,\ldots, b_s\}$ of elements of ${\mathbb C}^*$ can be written uniquely as a union of $q$-segments $\Sigma(k_i,a_i)$ in such a way that every pair $(\Sigma(k_i,a_i),\,\Sigma(k_j,a_j))$ is in general position. Then, Chari and Pressley have proved that the simple module $S$ with highest {\em l}-weight \[ m_S =\prod_{j=1}^s Y_{b_j} \] is isomorphic to the tensor product of Kirillov-Reshetikhin modules $\bigotimes_i W_{k_i,a_i}$. Hence $\chi_q(S)$ can be calculated using (\ref{eqKR}).
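For instance, the multiset $\{a,\ aq^2,\ aq^2,\ aq^4\}$ decomposes uniquely as $\Sigma(3,a)\cup\Sigma(1,aq^2)$, a pair in general position since $\Sigma(3,a)=\{a,aq^2,aq^4\}$ contains $\Sigma(1,aq^2)=\{aq^2\}$; the other conceivable decomposition $\Sigma(2,a)\cup\Sigma(2,aq^2)$ is in special position, because neither segment contains the other and their union is the $q$-segment $\Sigma(3,a)$. Hence \[ L\left(Y_aY_{aq^2}^2Y_{aq^4}\right) \,\cong\, W_{3,a}\otimes W_{1,aq^2}. \] \subsection{Trigonometric solutions of the Yang-Baxter equation} Let us briefly indicate how quantum loop algebras give rise to families of solutions of the quantum Yang-Baxter equation.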
A nice introduction to these ideas is given by Jimbo in \cite{J2}. Consider the tensor product $W_{k,a}\otimes W_{k,b}$ of Kirillov-Reshetikhin modules for $U_q(L\mathfrak{sl}_2)$. For a generic choice of $u=a/b\in{\mathbb C}^*$, the $q$-segments $\Sigma(k,a)$ and $\Sigma(k,b)$ are in general position, and therefore $W_{k,a}\otimes W_{k,b}$ is irreducible. Moreover, since the Grothendieck ring is commutative, $W_{k,a}\otimes W_{k,b}$ and $W_{k,b}\otimes W_{k,a}$ have the same composition factors; being irreducible, they are therefore isomorphic. Hence there exists a unique isomorphism, up to normalization, \[ I(a,b) : W_{k,a}\otimes W_{k,b} \stackrel{\sim}{\longrightarrow} W_{k,b}\otimes W_{k,a}. \] Now, if $\Sigma(k,c)$ is another $q$-segment in general position with $\Sigma(k,a)$ and $\Sigma(k,b)$, then $I_{12}(b,c)I_{23}(a,c)I_{12}(a,b)$ and $I_{23}(a,b)I_{12}(a,c)I_{23}(b,c)$ are two isomorphisms between the irreducible modules $W_{k,a}\otimes W_{k,b}\otimes W_{k,c}$ and $W_{k,c}\otimes W_{k,b}\otimes W_{k,a}$, hence they are proportional. These intertwiners can be normalized in such a way that \[ I_{12}(b,c)I_{23}(a,c)I_{12}(a,b) = I_{23}(a,b)I_{12}(a,c)I_{23}(b,c). \] Putting $R(a,b) = P\cdot I(a,b)$, where $P$ is the linear map from $W_{k,b}\otimes W_{k,a}$ to $W_{k,a}\otimes W_{k,b}$ defined by $P(w\otimes w') = w'\otimes w$, it follows that \[ R_{23}(b,c)R_{13}(a,c)R_{12}(a,b) = R_{12}(a,b)R_{13}(a,c)R_{23}(b,c). \] Moreover, it can be seen that $R(a,b)$ only depends on $a/b$, so setting $u=a/b$ and $v=b/c$ we obtain \[ R_{23}(v)R_{13}(uv)R_{12}(u) = R_{12}(u)R_{13}(uv)R_{23}(v), \] that is, $R(u)$ is a solution of the Yang-Baxter equation (\ref{YB}). These solutions were obtained by Tarasov \cite{T1,T2}. For example, the solution coming from the 2-dimensional representation $W_{1,a}$ can be written in matrix form as \[ R(u) = \left( \begin{matrix} 1& 0 & 0 & 0 \\[2mm] 0& \displaystyle\frac{q(u-1)}{u-{q}^2} & \displaystyle\frac{1-{q}^2}{u-q^2} & 0\\[3mm] 0& \displaystyle\frac{u(1-{q}^2)}{u-{q}^2} & \displaystyle\frac{{q}(u-1)}{u-{q}^2} & 0 \\[3mm] 0& 0 & 0 & 1 \end{matrix} \right). \] This is the $R$-matrix associated with two famous integrable models: the spin-$1/2$ XXZ chain and the six-vertex model. Note that it is well-defined and invertible if and only if $u\not = q^{\pm 2}$. In fact, for $u = q^{\pm 2}$, Eq.~(\ref{eqKR1}) shows that the tensor product $W_{1,au}\otimes W_{1,a}$ is not irreducible, and one can check that $W_{1,au}\otimes W_{1,a}$ is not isomorphic to $W_{1,a}\otimes W_{1,au}$. More generally, the same method can be applied to any finite-dimensional irreducible representation $W$ of $U_q(L\mathfrak g)$, using the general fact that $W(u)\otimes W$ is irreducible for all but a finite number of values of $u\in{\mathbb C}^*$. We shall return to this special feature of quantum loop algebras in \S\ref{sect_4}.
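As a concrete sanity check, the Yang-Baxter equation (\ref{YB}) for this matrix can be verified numerically. The following short Python sketch is an illustration of ours (not part of the original sources): it uses only NumPy, realizes $R_{12}$, $R_{13}$, $R_{23}$ on $({\mathbb C}^2)^{\otimes 3}$ via Kronecker products, and obtains $R_{13}$ by conjugating with the flip of the last two factors.
\begin{verbatim}
import numpy as np

q = 1.7                          # any q which is not a root of unity
u, v = 0.3 + 0.4j, 2.1 - 0.2j    # generic spectral parameters

def R(u):
    # the 4x4 R-matrix displayed above, acting on C^2 (x) C^2
    d = u - q**2
    return np.array(
        [[1, 0,             0,           0],
         [0, q*(u-1)/d,     (1-q**2)/d,  0],
         [0, u*(1-q**2)/d,  q*(u-1)/d,   0],
         [0, 0,             0,           1]], dtype=complex)

I2 = np.eye(2)
P = np.array([[1, 0, 0, 0],      # flip w (x) w' -> w' (x) w on C^2 (x) C^2
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])
P23 = np.kron(I2, P)             # flip of the factors 2 and 3

R12 = np.kron(R(u), I2)                 # R(u)  on factors 1, 2
R23 = np.kron(I2, R(v))                 # R(v)  on factors 2, 3
R13 = P23 @ np.kron(R(u*v), I2) @ P23   # R(uv) on factors 1, 3

print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))   # True
\end{verbatim}
The check prints \texttt{True} for generic parameters; for $u$ or $v$ equal to $q^{\pm2}$ it breaks down, in accordance with the discussion above. \subsection{Subcategories}\label{subsect_2.5} Since the Dynkin diagram of $\mathfrak g$ is a bipartite graph, we have a partition $I=I_0\sqcup I_1$ such that every edge connects a vertex of $I_0$ with a vertex of $I_1$. For $i\in I$ we set \begin{equation} \xi_i = \left\{ \begin{array}{ll} 0& \mbox{if $i\in I_0$,}\\ 1& \mbox{if $i\in I_1$.} \end{array} \right. \end{equation} Let $\mathcal{M}_{\mathbb Z}$ be the subset of $\mathcal{M}$ consisting of all monomials in the variables \[ Y_{i,q^{\xi_i + 2k}},\qquad (i\in I,\ k\in{\mathbb Z}).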
\] Let $\mathcal{C}_{\mathbb Z}$ be the full subcategory of $\mathcal{C}$ whose objects $V$ have all their composition factors of the form $L(m)$ with $m\in\mathcal{M}_{\mathbb Z}$. One can show that $\mathcal{C}_{\mathbb Z}$ is an abelian subcategory, stable under tensor products. Its Grothendieck ring $R_{\mathbb Z}$ is the subring of $R$ generated by the classes of the fundamental modules \[ L(Y_{i,q^{2k+\xi_i}})\qquad (i\in I,\ k\in{\mathbb Z}). \] It is known that every simple object $S$ of $\mathcal{C}$ can be written as a tensor product $S_1(a_1)\otimes\cdots\otimes S_k(a_k)$ for some simple objects $S_1, \ldots, S_k$ of $\mathcal{C}_{\mathbb Z}$ and some complex numbers $a_1,\ldots,a_k$ such that \[ \frac{a_i}{a_j} \not \in q^{2{\mathbb Z}}, \qquad (1\le i < j \le k). \] (Here $S_j(a_j)$ denotes the image of $S_j$ under the auto-equivalence $\tau_{a_j}^*$, see \S\ref{subsect_2.1}.) Therefore, the description of the simple objects of $\mathcal{C}$ essentially reduces to the description of the simple objects of $\mathcal{C}_{\mathbb Z}$. We will now introduce, following \cite{HL}, an increasing sequence of subcategories of $\mathcal{C}_{\mathbb Z}$. Let $\ell\in{\mathbb N}$. Let $\mathcal{M}_{\ell}$ be the subset of $\mathcal{M}_{\mathbb Z}$ consisting of all monomials in the variables \[ Y_{i,q^{\xi_i + 2k}},\qquad (i\in I,\ 0\le k\le \ell). \] Define $\mathcal{C}_\ell$ to be the full subcategory of $\mathcal{C}$ whose objects $V$ have all their composition factors of the form $L(m)$ with $m\in\mathcal{M}_\ell$. \begin{proposition}[\cite{HL}]\label{propCCl} $\mathcal{C}_\ell$ is an abelian monoidal category, with Grothendieck ring the polynomial ring \[ R_\ell = {\mathbb Z}\left[[L(Y_{i,q^{2k+\xi_i}})];\ i\in I,\ 0\le k\le \ell\right]. \] \end{proposition} The simple objects of the category $\mathcal{C}_0$ are easy to describe. Indeed, it follows from \cite[Prop. 6.15]{FM} that every simple object of $\mathcal{C}_0$ is a tensor product of fundamental modules of $\mathcal{C}_0$, and conversely any tensor product of fundamental modules of $\mathcal{C}_0$ is simple. We will see in \S\ref{sect_4} that the simple objects of the subcategory $\mathcal{C}_1$ are already nontrivial, and that they have a nice description involving cluster algebras. Clearly, every simple object of $\mathcal{C}_{\mathbb Z}$ is of the form $S(q^k)$ for some $k\in{\mathbb Z}$ and some simple object $S$ in $\mathcal{C}_\ell$ with $\ell$ large enough. Therefore, the description of the simple objects of $\mathcal{C}$ eventually reduces to the description of the simple objects of $\mathcal{C}_\ell$ for arbitrary $\ell\in{\mathbb N}$. \section{Nakajima quiver varieties and irreducible $q$-characters} \label{sect_3} The characters of the irreducible finite-dimensional $U_q(\mathfrak g)$-modules are identical to those of the corresponding $\mathfrak g$-modules, and are thus given by the classical Weyl character formula. Moreover, Kashiwara's theory of crystal bases gave rise to a uniform combinatorial description of these characters, generalizing the Young tableaux descriptions available for $\mathfrak g = \mathfrak{sl}_n$. The situation is much more complicated for $U_q(L\mathfrak g)$. Indeed, there is no analogue of Weyl's formula in this case, and it is believed that only Kirillov-Reshetikhin modules (and their irreducible tensor products) have a crystal basis (see \cite{OS} for the existence of crystals of Kirillov-Reshetikhin modules for $\mathfrak g$ of classical type).
However, inspired by earlier work on Springer theory for affine Hecke algebras, Ginzburg and Vasserot \cite{GV} gave a geometric description of the irreducible $q$-characters of $U_q(L\mathfrak g)$ for $\mathfrak g=\mathfrak{sl}_n$ in terms of intersection cohomology of closures of graded nilpotent orbits. This was extended to all simply-laced types by Nakajima \cite{N1}, using a graded version of his quiver varieties. In this second lecture, we shall review Nakajima's geometric approach. \subsection{Graded vector spaces}\label{subsect_3.1} Recall the partition $I=I_0\sqcup I_1$ of \S\ref{subsect_2.5}. Define the sets of ordered pairs: \[ \widehat{I}= I \times {\mathbb Z},\quad \widehat{I}_0=(I_0\times 2{\mathbb Z}) \sqcup (I_1\times (2{\mathbb Z} + 1)),\quad \widehat{I}_1=(I_0\times (2{\mathbb Z}+1)) \sqcup (I_1\times 2{\mathbb Z}). \] We will consider finite-dimensional $\widehat{I}$-graded ${\mathbb C}$-vector spaces. More precisely, we will use the letters $V, V', \ldots$ for $\widehat{I}_1$-graded vector spaces, and the letters $W, W', \ldots $ for $\widehat{I}_0$-graded vector spaces. We shall write \[ V = \bigoplus_{(i,r)\in \widehat{I}_1} V_i(r),\qquad W = \bigoplus_{(i,r)\in \widehat{I}_0} W_i(r), \] where the spaces $V_i(r)$, $W_i(r)$ are finite-dimensional, and nonzero only for a finite number of $(i,r)$. We write $V \le V'$ if and only if $\dim V_i(r) \ge \dim V'_i(r)$ for every $(i,r)\in\widehat{I}_1$. Consider a pair $(V,W)$ where $V$ is $\widehat{I}_1$-graded and $W$ is $\widehat{I}_0$-graded. We say that $(V,W)$ is $l$-dominant if \begin{equation}\label{def_d} d_i(r,V,W):=\dim W_i(r) - \dim V_i(r+1) - \dim V_i(r-1) - \sum_{j\not = i} a_{ij} \dim V_j(r) \ge 0 \end{equation} for every $(i,r)\in \widehat{I}_0$. The pair $(0,W)$ is always $l$-dominant, and for a given $W$ there are finitely many isoclasses of $\widehat{I}_1$-graded spaces $V$ such that $(V,W)$ is $l$-dominant. \subsection{ADHM equations} Let $(V,W)$ be a pair of vector spaces, where $V$ is $\widehat{I}_1$-graded and $W$ is $\widehat{I}_0$-graded. Define \begin{align*} &L^\bullet(V,W)= \bigoplus_{(i,r)\in \widehat{I}_1} \operatorname{Hom}(V_i(r),W_i(r-1)), \\ &L^\bullet(W,V)= \bigoplus_{(i,r)\in \widehat{I}_0} \operatorname{Hom}(W_i(r),V_i(r-1)), \\ &E^\bullet(V)= \bigoplus_{(i,r)\in \widehat{I}_1;\ j:\,a_{ij}=-1} \operatorname{Hom}(V_i(r),V_j(r-1)). \end{align*} Put $M^\bullet(V,W) = E^\bullet(V) \oplus L^\bullet(W,V) \oplus L^\bullet(V,W)$. An element of $M^\bullet(V,W)$ is written $(B,\alpha,\beta)$, and its components are denoted by: \begin{align*} &B_{ij}(r)\in \operatorname{Hom}(V_i(r),V_j(r-1)), \\ &\alpha_i(r)\ \in \operatorname{Hom}(W_i(r),V_i(r-1)), \\ &\beta_i(r)\ \in \operatorname{Hom}(V_i(r),W_i(r-1)). \end{align*} We define a map $\mu:\ M^\bullet(V,W) \to \displaystyle\bigoplus_{(i,r)\in\widehat{I}_1} \operatorname{Hom}(V_i(r),V_i(r-2))$ by \[ \mu_{(i,r)}(B,\alpha,\beta) = \alpha_i(r-1)\beta_i(r) + \sum_{j:\, a_{ij}=-1} B_{ji}(r-1)B_{ij}(r), \qquad ((i,r)\in\widehat{I}_1). \] We can then introduce $\Lambda^\bullet(V,W) := \mu^{-1}(0) \subset M^\bullet(V,W)$. In other words, $\Lambda^\bullet(V,W)$ is the subvariety of the affine space $M^\bullet(V,W)$ defined by the so-called complex Atiyah-Drinfeld-Hitchin-Manin equations (or ADHM, in short): \begin{equation}\label{ADHM} \alpha_i(r-1)\beta_i(r) + \sum_{j:\,a_{ij}=-1} B_{ji}(r-1)B_{ij}(r) = 0, \qquad ((i,r)\in\widehat{I}_1). 
\end{equation} \subsection{Graded quiver varieties} A point $(B,\alpha,\beta)$ of $\Lambda^\bullet(V,W)$ is called stable if the following condition holds: for every $\widehat{I}_1$-graded subspace $V'$ of $V$, if $V'$ is $B$-invariant and contained in $\operatorname{Ker}\beta$ then $V'=0$. The stable points form an open subset of $\Lambda^\bullet(V,W)$ denoted by $\Lambda^{\bullet}_s(V,W)$. Let \[ G_V := \prod_{(i,r)\in\widehat{I}_1}GL(V_i(r)). \] This reductive group acts on $M^\bullet(V,W)$ by base change in $V$: \[ g\cdot (B,\alpha,\beta) = \left((g_j(r-1)B_{ij}(r) g_i(r)^{-1}),\ (g_i(r-1)\alpha_i(r)),\ (\beta_i(r)g_i(r)^{-1})\right). \] Note that there is no action on the space $W$. This action preserves the subvariety $\Lambda^\bullet(V,W)$ and the open subset $\Lambda^{\bullet}_s(V,W)$. Moreover, the action on $\Lambda^{\bullet}_s(V,W)$ is free. One can then define, following Nakajima, \[ \mathfrak{M}^\bullet(V,W) := \Lambda^{\bullet}_s(V,W) \slash G_V. \] This set-theoretic quotient coincides with a quotient in the geometric invariant theory sense. The $G_V$-orbit through $(B,\alpha,\beta)$, considered as a point of $\mathfrak{M}^\bullet(V,W)$, will be denoted by $[B,\alpha,\beta]$. Note that $\mathfrak{M}^\bullet(V,W)$ may be empty (if there is no stable point). One also defines the affine quotient \[ \mathfrak{M}^\bullet_0(V,W) := \Lambda^{\bullet}(V,W) \sslash G_V. \] By definition, the coordinate ring of $\mathfrak{M}^\bullet_0(V,W)$ is the ring of $G_V$-invariant functions on $\Lambda^{\bullet}(V,W)$, and $\mathfrak{M}^\bullet_0(V,W)$ parametrizes the closed $G_V$-orbits. Since the orbit $\{0\}$ is always closed, $\mathfrak{M}^\bullet_0(V,W)$ is never empty. We have a projective morphism \[ \pi_V : \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}^\bullet_0(V,W), \] mapping the orbit $[B,\alpha,\beta]$ to the unique closed $G_V$-orbit in its closure. Finally, the third quiver variety is \[ \mathfrak{L}^\bullet(V,W) := \pi_V^{-1}(0). \] \subsection{Properties}\label{subsect_3.4} If it is not empty, the variety $\mathfrak{M}^\bullet(V,W)$ is smooth of dimension \[ \dim \mathfrak{M}^\bullet(V,W) = \sum_{(i,r)\in \widehat{I}_0} \left( \dim V_i(r+1)\, d_i(r,V,W) + \dim W_i(r)\dim V_i(r-1) \right), \] where $d_i(r,V,W)$ is defined by (\ref{def_d}). The coordinate ring of $\mathfrak{M}^\bullet_0(V,W)$ is generated by the following $G_V$-invariant functions on $\Lambda^\bullet(V,W)$: \begin{equation}\label{gen_f} (B,\alpha,\beta) \mapsto \<\psi \ ,\ \beta_j(r-n-1)B_{j_{n-1}j}(r-n)\cdots B_{j_1j_2}(r-2)B_{ij_1}(r-1)\alpha_i(r)\>, \end{equation} where $(i,r)\in\widehat{I}_0$, $(i,j_1,j_2,\ldots,j_{n-1},j)$ is a path (possibly of length 0) on the (unoriented) Dynkin diagram, and $\psi$ is a linear form on $\operatorname{Hom}(W_i(r),W_j(r-n-2))$ \cite{L}. When the path is the trivial path at vertex $i$, the function (\ref{gen_f})~is \[ (B,\alpha,\beta) \mapsto \<\psi \ ,\ \beta_i(r-1)\alpha_i(r)\>, \] where $\psi$ is a linear form on $\operatorname{Hom}(W_i(r),W_i(r-2))$. In particular, if $W=W_i(r)$ is supported on a single vertex $(i,r)\in\widehat{I}_0$ then every function of the form (\ref{gen_f}) is equal to 0, so for every $V$ the coordinate ring of $\mathfrak{M}^\bullet_0(V,W)$ is ${\mathbb C}$, and $\mathfrak{M}^\bullet_0(V,W)= \{0\}$. It follows that $\mathfrak{M}^\bullet(V,W) = \mathfrak{L}^\bullet(V,W)$ in this case. Let $\mathfrak{M}^{\bullet\, {\rm reg}} _0(V,W)$ be the subset of $\mathfrak{M}^\bullet_0(V,W)$ consisting of the closed {\em free} $G_V$-orbits. This subset is open, but possibly empty.
In fact, $\mathfrak{M}^{\bullet\, {\rm reg}} _0(V,W) \not = \emptyset$ if and only if $\mathfrak{M}^\bullet(V,W) \not = \emptyset$ and the pair $(V,W)$ is $l$-dominant. In this case, the restriction of $\pi_V$ to $\pi_V^{-1}(\mathfrak{M}^{\bullet\, {\rm reg}} _0(V,W))$ is an isomorphism, and in particular $\mathfrak{M}^{\bullet\, {\rm reg}} _0(V,W)$ is nonsingular of dimension equal to $\dim \mathfrak{M}^\bullet(V,W)$. If $V \ge V'$, that is, if $V_i(r) \subseteq V'_i(r)$ for every $(i,r)\in \widehat{I}_1$, then we have a natural closed embedding $\mathfrak{M}^\bullet_0(V,W) \subset \mathfrak{M}^\bullet_0(V',W)$. One defines \[ \mathfrak{M}^\bullet_0(W) = \bigcup_V \mathfrak{M}^\bullet_0(V,W). \] In fact, one has a stratification \[ \mathfrak{M}^\bullet_0(W) = \bigsqcup_{[V]} \mathfrak{M}^{\bullet\,{\rm reg}}_0(V,W), \] where $V$ runs through the $\widehat{I}_1$-graded spaces such that $(V,W)$ is $l$-dominant, and $[V]$ denotes the isomorphism class of $V$ as a graded space. It follows from \S\ref{subsect_3.1} that $\mathfrak{M}^\bullet_0(W)$ has finitely many strata. \subsection{Examples}\label{subsect_3.5} \textbf{1.}\ Take $\mathfrak g=\mathfrak{sl}_2$ of type $A_1$. Assume that $I_0 = I = \{1\}$. Since $I$ is a singleton, we can drop indices $i$ in the notation and write $\widehat{I}_0 = 2{\mathbb Z}$, $\widehat{I}_1 = 2{\mathbb Z}+1$. Hence \[ W = \bigoplus_{r\in 2{\mathbb Z}} W(r), \qquad V = \bigoplus_{s\in 2{\mathbb Z}+1} V(s), \] and $M^\bullet(V,W) = L^\bullet(W,V) \oplus L^\bullet(V,W)$ consists of pairs $(\alpha,\beta)$: the $B$-component is zero in this case. In particular, any subspace $V'$ of $V$ is $B$-invariant, so $(\alpha,\beta)$ is stable if and only if $\beta(s)$ is injective for every $s\in 2{\mathbb Z}+1$. The ADHM equations reduce to \[ \alpha(s-1)\beta(s) = 0, \qquad (s\in 2{\mathbb Z}+1). \] With any pair $(\alpha,\beta)$ we associate $x = \beta\alpha \in \operatorname{End}(W)$, and $E=\beta(V) \subseteq W$. Clearly, $x$ and $E$ depend only on the $G_V$-orbit of $(\alpha,\beta)$. Let $\mathcal{N}(W)$ denote the subvariety of $\operatorname{End}(W)$ consisting of degree $-2$ endomorphisms of $W$ (that is, $x(W(r)) \subseteq W(r-2)$ for every $r$) satisfying $x^2 = 0$. Let $\mathcal{F}(V,W)$ denote the variety of pairs $(x,E)$, where $E$ is a graded subspace of $W$ with $\dim E(r) = \dim V(r+1)$, and $x\in \mathcal{N}(W)$ is such that \[ \operatorname{Im} x \subseteq E \subseteq \operatorname{Ker} x. \] Then one can check that the map $[\alpha,\beta]\mapsto(x,E)$ establishes an isomorphism \[ \mathfrak{M}^\bullet(V,W) \stackrel{\sim}\longrightarrow \mathcal{F}(V,W). \] Moreover, the affine variety $\mathfrak{M}^\bullet_0(W)$ is isomorphic to $\mathcal{N}(W)$, and the projective morphism $\pi_V : \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}^\bullet_0(V,W) \subseteq \mathfrak{M}^\bullet_0(W)$ is the first projection \[ \pi_V(x,E) = x. \] Thus the fiber $\mathfrak{L}^\bullet(V,W) = \pi_V^{-1}(0)$ is the Grassmannian of graded subspaces $E$ of $W$ with $\dim E(r) = \dim V(r+1)$. Finally, $\mathfrak{M}^\bullet_0(V,W)$ is isomorphic to the subvariety of $\mathcal{N}(W)$ defined by the rank conditions: \[ \dim x(W(r)) \le \dim V(r-1), \qquad (r\in 2{\mathbb Z}).
\] \begin{figure}[t] \[ \xymatrix{ && &\ar[ld]^{B_{32}(3)} V_3(3) \\ &&\ar[ld]^{B_{21}(2)} V_2(2) && \\ &V_1(1)\ar[d]^{\beta_1(1)} && \\ &W_1(0) } \quad \xymatrix{ &&\ar[ld]^{B_{21}(4)} V_2(4) \ar[rd]^{B_{23}(4)}&& \\ &{V_1(3)}\ar[rd]^{B_{12}(3)}& &\ar[ld]^{B_{32}(3)} V_3(3) \\ && V_2(2)\ar[d]^{\beta_2(2)} && \\ &&W_2(1) } \] \caption{\label{Fig1} {\it The cases $W=W_1(0)={\mathbb C}$ and $W=W_2(1)={\mathbb C}$, in type $A_3$.}} \end{figure} \noindent \textbf{2.}\ Take $Q$ of type $A_3$, with sink set $I_0=\{1,3\}$ and source set $I_1=\{2\}$. Choose first \[ W = W_1(0) = {\mathbb C}. \] One can check that the only choices of $V$ such that $\mathfrak{M}^\bullet(V,W) \not = \emptyset$ are \begin{itemize} \item[(a)] $V= 0$. \item[(b)] $V= V_1(1) = {\mathbb C}$. \item[(c)] $V= V_1(1) \oplus V_2(2)= {\mathbb C} \oplus {\mathbb C}$. \item[(d)] $V= V_1(1) \oplus V_2(2) \oplus V_3(3) = {\mathbb C} \oplus {\mathbb C} \oplus {\mathbb C}$. \end{itemize} For instance, this can be checked by using the dimension formula of \S\ref{subsect_3.4}: in all other cases, this formula produces a negative number. (In case (b), for example, the only potentially nonzero contribution is $\dim V_1(1)\,d_1(0,V,W) = 1\cdot(1-1) = 0$, so $\mathfrak{M}^\bullet(V,W)$ is a point, in agreement with the list below.) Case (d) is illustrated in Figure~\ref{Fig1}. Let us determine $\Lambda^\bullet_s(V,W)$ in each case. In cases (b), (c), (d), the map $\beta_1(1): V_1(1) \to W_1(0)$ has to be injective. Indeed, its kernel is $B$-invariant, and so the stability condition forces it to be trivial. Similarly, in cases (c), (d), the map $B_{21}(2): V_2(2) \to V_1(1)$ has to be injective. Indeed, its kernel is $B$-invariant and contained in $\operatorname{Ker}\beta_2(2) = V_2(2)$, and so the stability condition forces it again to be trivial. Finally, in case (d), for the same reasons, the map $B_{32}(3): V_3(3) \to V_2(2)$ has to be injective. Thus we have \begin{itemize} \item[(b)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*$ and $\mathfrak{M}^\bullet(V,W)={\mathbb C}^*/{\mathbb C}^* = \{{\rm pt}\}$. \item[(c)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*$ and $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \item[(d)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*$, $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \end{itemize} As a second choice, take \[ W = W_2(1) = {\mathbb C}. \] One can check that the only choices of $V$ such that $\mathfrak{M}^\bullet(V,W) \not = \emptyset$ are \begin{itemize} \item[(a)] $V= 0$. \item[(b)] $V= V_2(2) = {\mathbb C}$. \item[(c)] $V= V_2(2) \oplus V_1(3)= {\mathbb C} \oplus {\mathbb C}$. \item[(d)] $V= V_2(2) \oplus V_3(3)= {\mathbb C} \oplus {\mathbb C}$. \item[(e)] $V= V_2(2) \oplus V_1(3) \oplus V_3(3) = {\mathbb C} \oplus {\mathbb C} \oplus {\mathbb C}$. \item[(f)] $V= V_2(2) \oplus V_1(3) \oplus V_3(3) \oplus V_2(4) = {\mathbb C} \oplus {\mathbb C} \oplus {\mathbb C} \oplus {\mathbb C}$. \end{itemize} Case (f) is illustrated in Figure~\ref{Fig1}. Let us determine $\Lambda^\bullet_s(V,W)$ in each case. In cases (b), (c), (d), (e) the map $\beta_2(2): V_2(2) \to W_2(1)$ has to be injective. Indeed, its kernel is $B$-invariant, and so the stability condition forces it to be trivial. Similarly, in cases (c), (e), (f) the map $B_{12}(3)$ has to be injective. Indeed, its kernel is $B$-invariant and contained in $\operatorname{Ker}\beta_1(3) = V_1(3)$, and so the stability condition forces it again to be trivial.
Similarly, in cases (d), (e), (f) the map $B_{32}(3)$ has to be injective. Finally, in case (f), the ADHM equations imply the relation \begin{equation}\label{rel} B_{12}(3)B_{21}(4) + B_{32}(3)B_{23}(4) = 0. \end{equation} Since $B_{12}(3)$ and $B_{32}(3)$ are injective, this implies that $B_{21}(4)$ and $B_{23}(4)$ are both injective or both equal to 0. If they are both equal to 0 then $V_2(4)$ is $B$-invariant and contained in $\operatorname{Ker}\beta_2(4)$, so $(B,\alpha,\beta)$ is not stable. Hence $B_{21}(4)$ and $B_{23}(4)$ are both injective. Thus we have \begin{itemize} \item[(b)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*$ and $\mathfrak{M}^\bullet(V,W)={\mathbb C}^*/{\mathbb C}^* = \{{\rm pt}\}$. \item[(c)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*$ and $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \item[(d)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*$ and $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \item[(e)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*$, $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \item[(f)] $\Lambda^\bullet_s(V,W)={\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*$, because $B_{23}(4)$ can be expressed in terms of $B_{12}(3), B_{21}(4)$, and $B_{32}(3)$ in view of (\ref{rel}). Hence, we find again that $\mathfrak{M}^\bullet(V,W)=({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^* \times {\mathbb C}^*)/({\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*\times {\mathbb C}^*) = \{{\rm pt}\}$. \end{itemize} \noindent \textbf{3.}\ \begin{figure}[t] \[ \xymatrix{ &\qquad\qquad\qquad&& \\ && \ar[ld]^{B_{32}(7)} V_3(7) \\ &\ar[ld]^{B_{21}(6)} V_2(6)\ar[rd]^{B_{23}(6)} \\ \ar[d]^{\beta_1(5)}V_1(5)\ar[rd]^{B_{12}(5)}&&\ar[ld]^{B_{32}(5)} V_3(5) \\ \ar[d]^{\alpha_1(4)}W_1(4)&\ar[ld]^{B_{21}(4)} V_2(4)\ar[rd]^{B_{23}(4)} \\ \ar[d]^{\beta_1(3)}V_1(3)\ar[rd]^{B_{12}(3)}&&\ar[ld]^{B_{32}(3)} V_3(3) \\ \ar[d]^{\alpha_1(2)} W_1(2)&\ar[ld]^{B_{21}(2)} V_2(2) && \\ V_1(1)\ar[d]^{\beta_1(1)} && \\ W_1(0) } \] \caption{\label{Fig2} {\it The case $W=W_1(0)\oplus W_1(2)\oplus W_1(4)$ in type $A_3$.}} \end{figure} Assume that $Q$ is of type $A_n$ and $1\in I_0$. Take $W$ of the form \[ W = W_1(r) \oplus W_1(r+2) \oplus \cdots \oplus W_1(r+2k), \qquad (r\in 2{\mathbb Z},\ k\in{\mathbb N}), \] where $\dim W_1(r+2j)=d^{(j)}\ (0\le j\le k)$. Thus, $W$ can be regarded as a $2{\mathbb Z}$-graded vector space. One can check that the map \[ (B,\alpha,\beta)\mapsto x:=(\beta_1(r+1)\alpha_1(r+2),\,\beta_1(r+3)\alpha_1(r+4),\,\ldots,\,\beta_1(r+2k-1)\alpha_1(r+2k)) \] induces an isomorphism from $\mathfrak{M}^\bullet_0(W)$ to the variety of degree $-2$ endomorphisms $x$ of $W$ satisfying $x^{n+1}=0$. In other words, $\mathfrak{M}^\bullet_0(W)$ is the affine variety of representations in $W$ of the quiver \[ r \stackrel{\gamma_1}{\leftarrow} r+2 \stackrel{\gamma_2}{\leftarrow} r+4 \stackrel{\gamma_3}{\leftarrow} \cdots \stackrel{\gamma_k}{\leftarrow} r+2k \] bound by the relations \[ \gamma_i\gamma_{i+1}\cdots\gamma_{i+n} = 0,\qquad (1\le i \le k-n). \] Let us now determine $\Lambda^\bullet_s(V,W)$ for a given $\widehat{I}_1$-graded space $V$. First, the stability condition implies that $V_i(j) = 0$ for $j < r+i$.
Next, it is easy to show by induction that if $(B,\alpha,\beta)$ is stable then the following maps are injective: \[ \beta_1(r+1+2j)\quad (0\le j\le k),\qquad B_{i,i-1}(r+i+2j)\quad (2\le i\le n,\ 0\le j\le k). \] Moreover, we have $V_i(j) = 0$ for $j > r+i+2k$. Therefore a typical example of $(B,\alpha,\beta)$ in $\Lambda_s^\bullet(V,W)$ looks as in Figure~\ref{Fig2}, with all maps $\beta_1(j)$ and $B_{i,i-1}(j)$ injective. Put \[ E_1^{(j)} := \beta_1(r+1+2j)(V_1(r+1+2j)) \subseteq W_1(r+2j), \qquad (0\le j\le k), \] and for $i=2,\ldots,n$, \[ E_i^{(j)} := \beta_1(r+1+2j)B_{21}(r+2+2j)\cdots B_{i,i-1}(r+i+2j)(V_i(r+i+2j)). \] The vector spaces $E_i=\bigoplus_{j} E_i^{(j)}$ form an $n$-step flag \[ F^\bullet = \left(E_0 = W\supseteq E_1 \supseteq \cdots \supseteq E_n \supseteq 0 = E_{n+1}\right) \] of graded subspaces of $W = E_0$, with graded dimension \[ \bd_i = (\dim V_i(r+i),\, \dim V_i(r+i+2),\, \ldots), \qquad (i=1,\ldots n). \] We get a well-defined map $(B,\alpha,\beta) \mapsto (x,F^\bullet)$ from $\Lambda_s^\bullet(V,W)$ to the variety $\mathcal{F}(V,W)$ of pairs consisting of a graded flag $F^\bullet$ of dimension $(\bd_1,\ldots,\bd_n)$ in $W$, together with a graded nilpotent endomorphism $x$ preserving this flag (that is, such that $x(E_i^{(j)})\subseteq E_{i+1}^{(j-1)}$). This map is $G_V$-equivariant, hence induces a map $\mathfrak{M}^\bullet(V,W) \to \mathcal{F}(V,W)$, and one can check that this is an isomorphism. Moreover, in this identification the map $\pi_V : \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}_0^\bullet(V,W)$ becomes the projection $(x,F^\bullet) \mapsto x$. Finally, the zero fiber $\mathfrak{L}^\bullet(V,W)=\pi_V^{-1}(0)$ is isomorphic to the variety of graded flags of $W$ of dimension $(\bd_1,\ldots,\bd_n)$. More generally, the fiber $\pi_V^{-1}(x)$ is the variety of graded flags of $W$ of dimension $(\bd_1,\ldots,\bd_n)$ which are preserved by $x$. Ginzburg and Vasserot \cite{GV} have shown that the Borel-Moore homology of the varieties \[ M_x = \bigsqcup_{V} \pi_V^{-1}(x),\qquad (x\in \mathfrak{M}_0^\bullet(W)), \] (where $V$ runs over isoclasses of $\widehat{I}_1$-graded spaces) carries natural structures of $U_q(\widehat{\mathfrak{sl}}_{n+1})$-modules, called standard modules. These modules are not simple, but can be decomposed into simple ones using the decomposition theorem for perverse sheaves. In the next sections, we review Nakajima's extension of these results to all simply laced root systems. \subsection{Quiver varieties and standard modules for $U_q(L\mathfrak g)$} To a pair $(V,W)$ of graded spaces as in \S\ref{subsect_3.1} we attach two monomials in $\mathcal{Y}$ given by \begin{equation} Y^W = \prod_{(i,\,r)\in \widehat{I}_0} Y_{i,q^r}^{\dim W_i(r)}, \qquad A^V = \prod_{(j,\,s)\in \widehat{I}_1} A_{j,q^s}^{-\dim V_j(s)}. \end{equation} One can check that the monomial $Y^WA^V$ is dominant if and only if the pair $(V,W)$ is {\em l}-dominant in the sense of \S\ref{subsect_3.1}. Let us associate with $W$ the simple $U_q(L\mathfrak g)$-module $L(W):=L(Y^W)$, which belongs to the subcategory $\mathcal{C}_{\mathbb Z}$. We can also attach to $W$ the tensor product \[ M(W) = \bigotimes_{(i,\,r)\in \widehat{I}_0} L(Y_{i,q^r})^{\otimes \dim W_i(r)}. \] This product is not simple in general. Moreover, its isomorphism class may depend on the chosen ordering of the factors. However, we will only be interested in its $q$-character (or in its class in $R_{\mathbb Z}$), which is independent of this ordering. The modules $M(W)$ are called standard modules.
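As an illustration of the equivalence between dominant monomials $Y^WA^V$ and {\em l}-dominant pairs $(V,W)$: in type $A_1$, take $W=W(0)={\mathbb C}$ and $V=V(1)={\mathbb C}$. Then $Y^W=Y_{q^0}$ and $A^V=A_{q}^{-1}=Y_{q^0}^{-1}Y_{q^2}^{-1}$, so $Y^WA^V=Y_{q^2}^{-1}$ is not dominant; accordingly, the pair $(V,W)$ is not {\em l}-dominant, since (\ref{def_d}) gives $d(2,V,W)=-\dim V(1)=-1<0$.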
The morphism $\pi_V : \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}^\bullet_0(V,W)$ being projective, its zero fiber $\mathfrak{L}^\bullet(V,W)$ is a complex projective variety. Let $\chi(\mathfrak{L}^\bullet(V,W))$ denote its Euler characteristic. Note that $\mathfrak{L}^\bullet(V,W)$ has no odd cohomology \cite[\S7]{N1}, so $\chi(\mathfrak{L}^\bullet(V,W))$ is equal to the total dimension of the cohomology. \begin{theorem}[Nakajima \cite{N1}]\label{theo_std} The $q$-character of the standard module $M(W)$ is given by \[ \chi_q(M(W)) = Y^W \sum_{[V]} \chi(\mathfrak{L}^\bullet(V,W))\, A^V, \] where the sum runs over all isomorphism classes $[V]$ of $\widehat{I}_1$-graded spaces $V$. \end{theorem} If $W$ has dimension 1, $M(W)=L(W)$ is a fundamental module, hence Theorem~\ref{theo_std} describes in particular the $q$-characters of all fundamental modules. On the other hand, $\chi_q(M(W))$ is the product of the $q$-characters of the factors of $M(W)$, so it can also be expressed as a product of $q$-characters of fundamental modules. \subsection{Examples}\label{subsect_3.7} We illustrate Theorem~\ref{theo_std} with the examples of \S\ref{subsect_3.5}. \medskip \noindent \textbf{1.}\ We take $\mathfrak g = \mathfrak{sl}_2$. The variety $\mathfrak{L}^\bullet(V,W)$ is isomorphic to the product of ordinary Grassmannians \[ \prod_{r\in 2{\mathbb Z}} \operatorname{Gr}\left(\dim V(r+1), \dim W(r)\right), \] so its Euler characteristic is \[ \prod_{r\in 2{\mathbb Z}} \left( \begin{matrix} \dim W(r) \\[1.5mm] \dim V(r+1) \end{matrix} \right). \] Hence \[ \chi_q(M(W)) = Y^W \sum_{[V]} \prod_{r\in 2{\mathbb Z}} \left( \begin{matrix} \dim W(r) \\[1.5mm] \dim V(r+1) \end{matrix} \right) A^V. \] On the other hand, recall that $\chi_q(L(Y_{q^r})) = Y_{q^r}(1 + A_{q^{r+1}}^{-1})$. We can check that the above value of $\chi_q(M(W))$ is equal to \[ \prod_{r\in 2{\mathbb Z}} \chi_q(L(Y_{q^r}))^{\dim W(r)}, \] as it should. \medskip \noindent \textbf{2.}\ Take $\mathfrak g$ of type $A_3$. It follows from the calculations of \S\ref{subsect_3.5}.2 that \begin{align*} \chi_q(L(Y_{1,q^0}))&= Y_{1,q^0}\left(1 + A_{1,q}^{-1} + A_{1,q}^{-1}A_{2,q^2}^{-1} + A_{1,q}^{-1}A_{2,q^2}^{-1} A_{3,q^3}^{-1}\right), \\ \chi_q(L(Y_{2,q^1}))&= Y_{2,q^1}\left(1 + A_{2,q^2}^{-1} + A_{2,q^2}^{-1}A_{1,q^3}^{-1} + A_{2,q^2}^{-1}A_{3,q^3}^{-1} + A_{2,q^2}^{-1}A_{1,q^3}^{-1}A_{3,q^3}^{-1} \right. \\ &\ \ \ \left.+\ A_{2,q^2}^{-1}A_{1,q^3}^{-1} A_{3,q^3}^{-1}A_{2,q^4}^{-1}\right). \end{align*} \medskip \noindent \textbf{3.}\ Assume that $\mathfrak g$ is of type $A_n$. Choosing $W$ of dimension 1, it follows from \S\ref{subsect_3.5}.3 that \begin{align*} \chi_q(L(Y_{1,q^r}))&= Y_{1,q^r}\left(1 + A_{1,q^{r+1}}^{-1} + A_{1,q^{r+1}}^{-1}A_{2,q^{r+2}}^{-1} \right. \\ &\ \ \ \left.+\ \cdots\ +\ A_{1,q^{r+1}}^{-1}A_{2,q^{r+2}}^{-1}\cdots A_{n,q^{r+n}}^{-1}\right). \end{align*} \subsection{Standard modules and the graded preprojective algebra} \begin{figure}[t] \[ \def\objectstyle{\scriptstyle} \def\labelstyle{\scriptstyle} \xymatrix{ &&&&\\ &{(1,4)}\ar[rd]&{}\save[]+<0cm,2ex>*{\vdots}\restore &\ar[ld] (3,4) \\ &&\ar[ld] (2,3) \ar[rd]&& \\ &{(1,2)}\ar[rd]& &\ar[ld] (3,2) \\ &&\ar[ld] (2,1) \ar[rd]&& \\ &(1,0) \ar[rd] &&\ar[ld] (3,0) \\ &&\ar[ld] (2,-1) \ar[rd]&& \\ &(1,-2) &{}\save[]+<0cm,-2ex>*{\vdots}\restore& (3,-2) \\ } \] \caption{\label{Fig3} {\it The quiver ${\mathbb Z} Q$ in type $A_3$.}} \end{figure} Let $Q$ be the sink-source orientation of the Dynkin diagram of $\mathfrak g$, with set of sinks $I_0$ and set of sources $I_1$.
Define the repetition quiver ${\mathbb Z} Q$ as the infinite quiver with set of vertices $\widehat{I}_0=(I_0\times 2{\mathbb Z}) \sqcup (I_1\times (2{\mathbb Z} + 1))$, and two types of arrows: \begin{itemize} \item[(i)] for every arrow $i\to j$ in $Q$ we have arrows $(i,2m+1)\to (j,2m)$ in ${\mathbb Z} Q$ for all $m\in{\mathbb Z}$; \item[(ii)] for every arrow $i\to j$ in $Q$ we have arrows $(j,2m)\to (i,2m-1)$ in ${\mathbb Z} Q$ for all $m\in{\mathbb Z}$. \end{itemize} As an example, the quiver ${\mathbb Z} Q$ for $Q$ of type $A_3$ is shown in Figure~\ref{Fig3}. We then introduce a set of degree two elements in the path algebra of ${\mathbb Z} Q$: for every $(i,r)\in\widehat{I}_0$, let $\sigma_{i,r}$ be the sum of all paths from $(i,r)$ to $(i,r-2)$. The graded preprojective algebra of $Q$ is by definition the quotient $\widehat{\Lambda}$ of the path algebra of ${\mathbb Z} Q$ by the two-sided ideal generated by the $\sigma_{i,r}\ ((i,r) \in \widehat{I}_0)$. The algebra $\widehat{\Lambda}$ is well known to be the universal cover of the preprojective algebra $\Lambda$ of $Q$, in the sense of \cite{G}. It turns out that the quiver variety $\mathfrak{L}^\bullet(V,W)$ is homeomorphic to a quiver Grassmannian of an injective $\widehat{\Lambda}$-module. To state this precisely, let us denote by $S_{i,r}$ the one-dimensional simple $\widehat{\Lambda}$-module supported on vertex $(i,r)$ of ${\mathbb Z} Q$. Let $\Delta_{i,r}$ be the injective hull of $S_{i,r}$. (This is a finite-dimensional module.) To the $\widehat{I}_0$-graded vector space $W$ we attach the injective module \[ \Delta_W = \bigoplus_{(i,r)\in \widehat{I}_0} \Delta_{i,r}^{\oplus \dim W_i(r)}. \] To the $\widehat{I}_1$-graded vector space $V$ we attach the dimension vector \[ d_V = (\dim V_i(r+1);\ (i,r)\in\widehat{I}_0). \] We then have the following result, due to Lusztig \cite{L} in the ungraded case, and extended to the graded case by Savage and Tingley \cite{ST}. \begin{proposition} The complex variety $\mathfrak{L}^\bullet(V,W)$ is homeomorphic to the Grassmannian $\operatorname{Gr}(d_V,\Delta_W)$ of $\widehat{\Lambda}$-submodules of $\Delta_W$ with dimension vector $d_V$. \end{proposition} It follows that we can rewrite Nakajima's formula for standard modules of $\mathcal{C}_{\mathbb Z}$ as \begin{equation}\label{eqstandard} \chi_q(M(W)) = Y^W \sum_{[V]} \chi(\operatorname{Gr}(d_V,\Delta_W))\, A^V. \end{equation} In particular, for the fundamental modules of $\mathcal{C}_{\mathbb Z}$ we get \begin{equation}\label{eqfund} \chi_q(L(Y_{i,q^r})) = Y_{i,q^r} \sum_{[V]} \chi(\operatorname{Gr}(d_V,\Delta_{i,r}))\, A^V. \end{equation} \begin{figure}[t] \[ \def\objectstyle{\scriptstyle} \def\labelstyle{\scriptstyle} \xymatrix{ && &\ar[lld]\ar[ld] {(3,4)} \ar[rd] \\ &(1,3)\ar[rrd]&(2,3) \ar[rd]&&(4,3)\ar[ld] \\ & &&\ar[lld]\ar[ld] (3,2)\ar[rd] \\ &(1,1)\ar[rrd]& (2,1) \ar[rd]&&(4,1)\ar[ld] \\ & && (3,0) \\ } \] \caption{\label{FigD} {\it The skeleton of the injective module $\Delta_{3,0}$ in type $D_4$}.} \end{figure} \subsection{Examples}\label{subsect_newexamples} \textbf{1.}\ One can easily recover the formulas of \S\ref{subsect_3.7} using the well-known description of the indecomposable injective $\widehat{\Lambda}$-modules for a Dynkin quiver $Q$ of type $A_n$. Thus, \S\ref{subsect_3.7}.3 follows immediately from the fact that the injective module $\Delta_{1,r}$ is an $n$-dimensional module with a unique composition series.
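In the simplest case $A_1$, the quiver ${\mathbb Z} Q$ has no arrows at all, so $\widehat{\Lambda}$-modules are just $\widehat{I}_0$-graded vector spaces and $\Delta_{1,r}=S_{1,r}$ is one-dimensional. Its only submodules are $0$ and $\Delta_{1,r}$, so (\ref{eqfund}) gives back $\chi_q(L(Y_{q^r})) = Y_{q^r}\left(1+A_{q^{r+1}}^{-1}\right)$, in agreement with \S\ref{subsect_2.4}.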
More generally, in type $A_n$ all the quiver Grassmannians of submodules of the indecomposable injective $\widehat{\Lambda}$-modules are reduced to points. This implies that all fundamental $U_q(L\mathfrak{sl}_{n+1})$-modules have multiplicity-free $q$-characters, that is, all their {\em l}-weight spaces have dimension 1. \medskip \noindent \textbf{2.}\ Let $\mathfrak g= \mathfrak{so}_8$ be of type $D_4$. We label by $3$ the central node of the Dynkin diagram, and we set $I_0 = \{3\}$, $I_1 = \{1,2,4\}$. The injective module $\Delta_{3,0}$ has total dimension 10, and its dimension vector is supported on the finite strip of ${\mathbb Z} Q$ displayed in Figure~\ref{FigD}. More precisely, every vertex of the picture carries a 1-dimensional vector space, except $(3,2)$ which has a 2-dimensional vector space. There are 28 non-trivial quiver Grassmannians $\operatorname{Gr}(d,\Delta_{3,0})$, and it is easy to check that all of them are points, except for the following $d$: \[ d_{3,0}=d_{1,1}=d_{2,1}=d_{4,1}=d_{3,2}=1,\quad d_{1,3}=d_{2,3}=d_{4,3}=d_{3,4}=0, \] for which $\operatorname{Gr}(d,\Delta_{3,0}) \simeq {\mathbb P}^1({\mathbb C})$. It follows that the fundamental module $L(Y_{3,q^0})$ has dimension 29. More precisely, writing for short $v_{i,s}:=A_{i,q^s}^{-1}$, we have \begin{align*} \chi_q(L(Y_{3,q^0}))=&\ Y_{3,q^0}(1 + v_{3,1} + v_{3,1}v_{1,2} + v_{3,1}v_{2,2} + v_{3,1}v_{4,2} + v_{3,1}v_{1,2}v_{2,2} \\ &+ v_{3,1}v_{1,2}v_{4,2} + v_{3,1}v_{2,2}v_{4,2}+v_{3,1}v_{1,2}v_{2,2}v_{4,2} + v_{3,1}v_{1,2}v_{2,2}v_{3,3} \\&+ v_{3,1}v_{1,2}v_{4,2}v_{3,3} + v_{3,1}v_{2,2}v_{4,2}v_{3,3} + 2\, v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3} \\ &+ v_{3,1}v_{1,2}v_{2,2}v_{3,3}v_{4,4} + v_{3,1}v_{1,2}v_{4,2}v_{3,3}v_{2,4} + v_{3,1}v_{2,2}v_{4,2}v_{3,3}v_{1,4} \\ &+v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2 + v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}v_{1,4} \\ &+ v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}v_{2,4} +v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}v_{4,4} \\ &+v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{1,4} + v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{2,4} \\ &+v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{4,4} +v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{1,4}v_{2,4} \\ &+v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{1,4}v_{4,4} +v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{2,4}v_{4,4} \\ &+v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{1,4}v_{2,4}v_{4,4} +v_{3,1}v_{1,2}v_{2,2}v_{4,2}v_{3,3}^2v_{1,4}v_{2,4}v_{4,4}v_{3,5}). \end{align*} The restriction of $L(Y_{3,q^0})$ to $U_q(\mathfrak{so}_8)$ decomposes into the direct sum of the fundamental module with highest weight $\varpi_3$, of dimension 28, and of a copy of the trivial representation. \subsection{Perverse sheaves}\label{perverse} Recall from \S\ref{subsect_3.4} the stratification \[ \mathfrak{M}^\bullet_0(W) = \bigsqcup_{[V']} \mathfrak{M}^{\bullet\,{\rm reg}}_0(V',W), \] where $V'$ runs through the $\widehat{I}_1$-graded spaces such that the pair $(V',W)$ is $l$-dominant. We denote by $IC_W(V')$ the intersection cohomology complex associated with the trivial local system on the stratum $\mathfrak{M}^{\bullet\,{\rm reg}}_0(V',W)$. Consider an arbitrary $\widehat{I}_1$-graded space $V$ such that $\mathfrak{M}^\bullet(V,W) \not = \emptyset$. The map \[ \pi_V : \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}^\bullet_0(V,W) \] is projective, and the variety $\mathfrak{M}^\bullet(V,W)$ is smooth.
Hence, by the decomposition theorem, the push-down $(\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)})$ of the constant sheaf on $\mathfrak{M}^\bullet(V,W)$ is a direct sum of shifts of simple perverse sheaves in the derived category $\mathcal{D}(\mathfrak{M}^\bullet_0(V,W))$. We can regard these perverse sheaves as objects of $\mathcal{D}(\mathfrak{M}^\bullet_0(W))$ by extending them by $0$ on the complement $\mathfrak{M}^\bullet_0(W) \setminus \mathfrak{M}^\bullet_0(V,W)$. Nakajima has shown that all these perverse sheaves are of the form $IC_W(V')$ for some $l$-dominant pair $(V',W)$ with $V' \ge V$. So we can write in the Grothendieck group of $\mathcal{D}(\mathfrak{M}^\bullet_0(W))$ \begin{equation}\label{eqIC} [(\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)}[\dim\mathfrak{M}^\bullet(V,W)])] = \sum_{V'\ge V} a_{V,V';W}(t) [IC_W(V')]. \end{equation} Here $t$ is a formal variable implementing the action of the shift functor: \[ t^j[L] = [L[j]], \] and $a_{V,V';W}(t) \in {\mathbb N}[t^{\pm 1}]$ is the graded multiplicity. (The additional shift in degree by $\dim\mathfrak{M}^\bullet(V,W)$ makes the left-hand side invariant under Verdier duality.) Note that in (\ref{eqIC}), the pair $(V,W)$ is not necessarily $l$-dominant. We can now state the main result of this lecture. \begin{theorem}[Nakajima \cite{N1}]\label{theo_sim} Let $W$ be an $\widehat{I}_0$-graded space, and let $L(W)$ be the corresponding simple module in $\mathcal{C}_{\mathbb Z}$. The coefficient of the monomial $Y^WA^V$ in $\chi_q(L(W))$ is equal to $a_{V,0;W}(1)$. \end{theorem} In other words, the $l$-weight multiplicities of $L(W)$ are calculated by the (ungraded) multiplicities of the skyscraper sheaf $IC_W(0) = 1_{\{0\}}$ in the expansions of the push-downs $[(\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)})]$ on the basis $\{[IC_W(V')]\}$. \subsection{Examples} \textbf{1.}\ Let $L(W) = L(Y_{i,q^r})$ be a fundamental module. Then, by \S\ref{subsect_3.4}, $\mathfrak{M}^\bullet_0(W) = \{0\}$, and $\mathfrak{M}^\bullet(V,W) = \mathfrak{L}^\bullet(V,W)$. So $a_{V,0;W}(t)$ is the Poincar\'e polynomial of the cohomology of $\mathfrak{L}^\bullet(V,W)$ (up to some shift). Since the odd cohomology groups of $\mathfrak{L}^\bullet(V,W)$ vanish, we recover that the coefficient of $Y^WA^V$ in $\chi_q(L(W))$ is the Euler characteristic of $\mathfrak{L}^\bullet(V,W)$, in agreement with Theorem~\ref{theo_std}. \medskip \noindent \textbf{2.}\ Take $\mathfrak g$ of type $A_1$, and \[ W=W(r)\oplus W(r+2) \oplus \cdots \oplus W(r+2k), \] with $\dim W(r+2i) = 1$ for every $i=0,1,\ldots,k$. The corresponding simple $U_q(L\mathfrak g)$-module is the Kirillov-Reshetikhin module $L(W) = W_{k+1,q^r}$. Recall from \S\ref{subsect_3.5}.1 the description of the quiver varieties $\mathfrak{M}^\bullet(V,W)$ and $\mathfrak{M}^\bullet_0(W)$. The variety $\mathfrak{M}^\bullet(V,W)$ is non-empty if and only if $\dim V(r+2j+1) \le \dim W(r+2j)$ for every $j=0,\ldots,k$. For any such choice, since $\dim V(r+2j+1)$ is equal to $0$ or $1$, there is a unique graded subspace $E$ of $W$ satisfying $\dim E(r+2j) = \dim V(r+2j+1)$. Moreover, the set of $x$ such that $\operatorname{Im} x \subseteq E \subseteq \operatorname{Ker} x$ is isomorphic to the vector space of degree $-2$ linear maps from $W/E$ to $E$. We have two cases: (i)\ \ If $V$ is such that there exists $s\in\{0,\ldots,k\}$ with \[ \dim V(r+2j+1) = \left\{ \begin{array}{lc} 0&\mbox{if } j< s,\\ 1&\mbox{if } j\ge s, \end{array} \right.
\] then this space is reduced to $\{0\}$, and \[ (\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)}) = 1_{\{0\}}, \qquad a_{V,0;W}(1)=1. \] (ii)\ \ Otherwise, if there is $s\in\{0,\ldots,k-1\}$ with \[ \dim V(r+2s+1)= 1,\quad \dim V(r+2s+3)=0, \] then this space has positive dimension, and $(\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)})$ is a simple perverse sheaf $\not = 1_{\{0\}}$. So $a_{V,0;W}(1)=0$. In conclusion, the $q$-character of the Kirillov-Reshetikhin module $W_{k+1,q^r}$ is given by \begin{align*} \chi_q(W_{k+1,q^r})&= Y_{q^r} Y_{q^{r+2}} \cdots Y_{q^{r+2k}} \left(1+A_{q^{r+2k+1}}^{-1}+A_{q^{r+2k-1}}^{-1}A_{q^{r+2k+1}}^{-1} \right. \\ &\left.\ \ \ +\ \cdots +A_{q^{r+1}}^{-1}A_{q^{r+3}}^{-1}\cdots A_{q^{r+2k+1}}^{-1}\right), \end{align*} in agreement with Eq.~(\ref{eqKR}). \subsection{Algorithms} Let $m\in\mathcal{M}_+$. We say that the simple module $L(m)$ is \emph{minuscule} if $m$ is the only dominant monomial of $\chi_q(L(m))$. (In \cite{N3} these modules are called \emph{special}.) There exists an algorithm due to Frenkel and Mukhin \cite{FM} which attaches to any $m\in\mathcal{M}_+$ a polynomial $\operatorname{FM}(m)\in\mathcal{Y}$, and when $L(m)$ is minuscule it is proved that $\operatorname{FM}(m) = \chi_q(L(m))$. Moreover, all fundamental modules $L(Y_{i,a})$ are minuscule, so this algorithm makes it possible to calculate their $q$-characters. It was proved in \cite{N2} that Kirillov-Reshetikhin modules are also minuscule. But there also exist simple modules for which the Frenkel-Mukhin algorithm fails. For example, in type~$A_2$, $ \chi_q(L(Y_{1,1}^2Y_{2,q^3})) \not = \operatorname{FM}(Y_{1,1}^2Y_{2,q^3})$ \cite[Example 5.6]{HL}. (For an earlier example in type $C_3$ see \cite{NN}.) In \cite{N3}, Nakajima has introduced a $t$-analogue $\chi_{q,t}$ of the $q$-character $\chi_q$. This is obtained by keeping the $t$-grading in the graded multiplicities $a_{V,0;W}(t)$ of Theorem~\ref{theo_sim}. Imitating the Kazhdan-Lusztig algorithm for calculating the intersection cohomology of a Schubert variety, he has described an algorithm for computing the $(q,t)$-character of an arbitrary simple module in terms of the $(q,t)$-characters of the fundamental modules. The $(q,t)$-characters of the fundamental modules can in turn be obtained using a $t$-version of the Frenkel-Mukhin algorithm. We therefore have, in principle, a way of calculating $\chi_q(L(m))$ for every $m\in\mathcal{M}_+$. \section{Tensor structure}\label{sect_4} In the category of finite-dimensional $U_q(\mathfrak g)$-modules, tensor products of irreducible modules are almost never irreducible. This is in sharp contrast with what happens for tensor products of finite-dimensional $U_q(L\mathfrak g)$-modules. Indeed, if $M$ and $N$ are simple objects of $\mathcal{C}$, the tensor product $M\otimes N(a)$ is simple for all but a finite number of $a\in{\mathbb C}^*$. (Here $N(a)$ is the image of $N$ under the auto-equivalence $\tau_a^*$ of \S\ref{subsect_2.1}.) Hence many tensor products of simple $U_q(L\mathfrak g)$-modules are simple, or equivalently, many simple modules can be factored as tensor products of smaller simple modules. The following questions are therefore natural: \begin{itemize} \item[(i)] what are the \emph{prime} simple modules, \emph{i.e.\,} the simple modules which have no factorization as a tensor product of smaller modules? \item[(ii)] which tensor products of prime simples are simple?
\end{itemize} We have seen in \S\ref{subsect_2.4} that these questions have a simple answer when $\mathfrak g=\mathfrak{sl}_2$, namely, the prime simples are the Kirillov-Reshetikhin modules, and a tensor product of Kirillov-Reshetikhin modules is simple if and only if the corresponding $q$-segments are pairwise in general position. In this third lecture, we will report on some recent progress in trying to extend these results to an arbitrary simply-laced~$\mathfrak g$. \subsection{The cluster algebra $\mathcal{A}_\ell$} \begin{figure}[t] \[ \def\objectstyle{\scriptstyle} \def\labelstyle{\scriptstyle} \xymatrix{ &&\ar[ld] {(2,5)} \ar[rd]&& \\ &{(1,4)}\ar[rd]& &\ar[ld] {(3,4)} \\ &&\ar[ld] (2,3)\ar[uu] \ar[rd]&& \\ &(1,2)\ar[uu] \ar[rd] &&\ar[ld] (3,2)\ar[uu] \\ &&\ar[ld] (2,1)\ar[uu] \ar[rd]&& \\ &(1,0)\ar[uu] && (3,0)\ar[uu] \\ } \] \caption{\label{Fig4} {\it The quiver $\Gamma_2$ in type $A_3$.}} \end{figure} We will assume the reader has some familiarity with cluster algebras. Nice introductions to this theory with pointers to the literature have been written by Zelevinsky \cite{Z} and Fomin \cite{F}. All the necessary material for understanding this lecture can also be found in \cite[\S2]{Le}. For $\ell\in{\mathbb N}$, we define a new quiver $\Gamma_\ell$. Put \[ \widehat{I}_0(\ell)=\{(i,\xi_i+2k)\mid i\in I,\ 0\le k \le \ell\}. \] The quiver $\Gamma_\ell$ is obtained by taking the full subquiver of ${\mathbb Z} Q$ with vertex set $\widehat{I}_0(\ell)$, and by adding to it new vertical up-arrows corresponding to the natural translation $(i,r) \mapsto (i,r+2)$. For example, if $\mathfrak g$ has type $A_3$ and $I_0 = \{1,3\}$, the quiver $\Gamma_2$ is shown in Figure~\ref{Fig4}. \begin{table}[t] \begin{center} \begin{tabular} {|c|c|c|} \hline Type of $\mathfrak g$ & $\ell$ & Type of $\mathcal{A}_\ell$\\ \hline $A_1$ & $\ell$ & $A_\ell$ \\ \hline $X_n$ & $1$ & $X_n$ \\ \hline $A_2$ & $2$ & $D_4$ \\ $A_2$ & $3$ & $E_6$\\ $A_2$ & $4$ & $E_8$ \\ \hline $A_3$ & $2$ & $E_6$ \\ \hline $A_4$ & $2$ & $E_8$ \\ \hline \end{tabular} \end{center} \caption{\small \it Algebras $\mathcal{A}_\ell$ of finite cluster type. \label{table1}} \end{table} Let $\mathbf{z} = \{z_{(i,r)} \mid (i,r)\in \widehat{I}_0(\ell)\}$ be a set of indeterminates corresponding to the vertices of $\Gamma_\ell$, and consider the seed $(\mathbf{z} , \Gamma_\ell)$ in which the variables $z_{(i,\xi_i)}\ (i\in I)$ are frozen. This is the initial seed of a cluster algebra $\mathcal{A}_\ell \subset {\mathbb Q}(\mathbf{z})$. It follows easily from \cite{FZ2} that $\mathcal{A}_\ell$ has in general infinitely many cluster variables. The exceptional pairs $(\mathfrak g,\ell)$ for which $\mathcal{A}_\ell$ has finite cluster type are listed in Table~\ref{table1}. \subsection{Conjectural relation between $\mathcal{A}_\ell$ and $\mathcal{C}_\ell$} Recall the subcategory $\mathcal{C}_\ell$ from \S\ref{subsect_2.5}, and its Grothendieck ring $R_\ell$. We say that a simple object $S$ of $\mathcal{C}_\ell$ is \emph{real} if $S\otimes S$ is simple. \begin{conjecture}[Hernandez-Leclerc \cite{HL}]\label{conjec} The assignment \[ z_{(i,\xi_i+2k)} \mapsto \left[W^{(i)}_{\ell+1-k,\,q^{\xi_i+2k}}\right] \] extends to a ring isomorphism $\iota_\ell: \mathcal{A}_{\ell} \to R_\ell$. The map $\iota_\ell$ induces a bijection between cluster monomials and classes of real simple objects of $\mathcal{C}_\ell$, and between cluster variables and classes of real prime simple objects of $\mathcal{C}_\ell$.
\end{conjecture} Note that in \cite{HL} we have chosen a different initial seed for defining $\mathcal{A}_\ell$, so the Kirillov-Reshetikhin modules assigned to the initial cluster variables have different spectral parameters\footnote{We take this opportunity to correct a typo in \cite{Le}: in the statement of Conjecture 9.1, one should replace $W^{(i)}_{k,\,q^{\xi_i+2(\ell+1-k)}}$ by $W^{(i)}_{k,\,q^{\xi_i}}$.}. For $\mathfrak g=\mathfrak{sl}_2$, Conjecture~\ref{conjec} holds, as it is just a reformulation of the classical results of \S\ref{subsect_2.4}. If true in general, Conjecture~\ref{conjec} would give a combinatorial description in terms of cluster algebras of the prime tensor factorization of every real simple module of~$\mathcal{C}$. Note that, by definition, the square of a cluster monomial is again a cluster monomial. This explains why cluster monomials can only correspond to real simple modules. For $\mathfrak g=\mathfrak{sl}_2$, all simple $U_q(L\mathfrak g)$-modules are real. However, for $\mathfrak g\not = \mathfrak{sl}_2$ there exist \emph{imaginary} simple $U_q(L\mathfrak g)$-modules (\emph{i.e.\,} simple modules whose tensor square is not simple), as shown in \cite{Le1}. This is consistent with the expectation that a cluster algebra with infinitely many cluster variables is not spanned by its set of cluster monomials. We arrived at Conjecture~\ref{conjec} by noting that the $T$-system equations satisfied by Kirillov-Reshetikhin modules (see \S\ref{subsect_2.3}) are of the same form as the exchange relations of a cluster algebra. This was inspired by the seminal work \cite{FZ1}, in which cluster algebra combinatorics is used to prove Zamolodchikov's periodicity conjecture for $Y$-systems attached to Dynkin diagrams. \subsection{The case $\ell = 1$} Our main evidence for Conjecture~\ref{conjec} is the following: \begin{theorem}[\cite{HL,N5}]\label{th_l_1} Conjecture~\ref{conjec} holds for $\mathfrak g$ of type $A, D, E$ and $\ell = 1$. In this case, all simple modules are real. \end{theorem} This was first proved in \cite{HL} for type $A$ and $D_4$ by combinatorial and represen\-ta\-tion-theoretic methods, and soon after, by Nakajima \cite{N5} in the general case, by using the geometric description of the irreducible $q$-characters explained in \S\ref{sect_3}. These two different proofs will be explained in \S\ref{proofHL} and \S\ref{proofN}. Let us illustrate Theorem~\ref{th_l_1} for $\mathfrak g = \mathfrak{sl}_4$. As the cluster algebra $\mathcal{A}_1$ has finite cluster type $A_3$, the cluster variables, and therefore the non-frozen prime simple modules, are in bijection with the almost positive roots of $A_3$ \cite{FZ2}. Of course, there can be several such bijections. The bijection chosen in \cite{HL} is as follows: \begin{align*} &S(-\alpha_1)= L(Y_{1,q^2}),\ S(-\alpha_2) = L(Y_{2,q}),\ S(-\alpha_3) = L(Y_{3,q^2}),\\ &S(\alpha_1) = L(Y_{1,q^0}),\ \ \ S(\alpha_2) = L(Y_{2,q^3}),\ \ \ S(\alpha_3) = L(Y_{3,q^0}),\\ &S(\alpha_1+\alpha_2) = L(Y_{1,q^0}Y_{2,q^3}),\ \ S(\alpha_2+\alpha_3) = L(Y_{2,q^3}Y_{3,q^0}),\\ &S(\alpha_1+\alpha_2+\alpha_3) = L(Y_{1,q^0}Y_{2,q^3}Y_{3,q^0}). \end{align*} Note that the last three modules are not Kirillov-Reshetikhin modules. There are three more prime simples corresponding to the three frozen variables of $\mathcal{A}_1$, namely \[ F_1 = L(Y_{1,q^0}Y_{1,q^2}),\ \ F_2 = L(Y_{2,q}Y_{2,q^3}),\ \ F_3 = L(Y_{3,q^0}Y_{3,q^2}).
\] \begin{figure}[t] \begin{center} \setlength{\unitlength}{1.70pt} \begin{picture}(90,110)(0,0) \thicklines \put(0,0){\line(1,0){60}} \put(0,0){\line(0,1){40}} \put(60,0){\line(0,1){20}} \put(60,0){\line(1,1){30}} \put(0,40){\line(1,0){40}} \put(0,40){\line(1,3){20}} \put(60,20){\line(-1,1){20}} \put(60,20){\line(1,3){10}} \put(90,30){\line(0,1){40}} \put(40,40){\line(1,3){10}} \put(70,50){\line(1,1){20}} \put(70,50){\line(-1,1){20}} \put(50,70){\line(-1,3){10}} \put(90,70){\line(-1,1){40}} \put(20,100){\line(1,0){20}} \put(20,100){\line(1,1){10}} \put(30,110){\line(1,0){20}} \put(40,100){\line(1,1){10}} \thinlines \multiput(0,0)(1.5,1.5){20}{\circle*{0.5}} \multiput(30,30)(2,0){30}{\circle*{0.5}} \multiput(30,30)(0,2){40}{\circle*{0.5}} \put(35,105){\makebox(0,0){$\alpha_2$}} \put(28,70){\makebox(0,0){$\alpha_1+\alpha_2$}} \put(63,80){\makebox(0,0){$\alpha_2+\alpha_3$}} \put(55,45){\makebox(0,0){$\scriptstyle \alpha_1+\alpha_2+\alpha_3$}} \put(32,18){\makebox(0,0){$\alpha_1$}} \put(77,37){\makebox(0,0){$\alpha_3$}} \put(0,0){\circle*{2}} \put(60,0){\circle*{2}} \put(60,20){\circle*{2}} \put(30,30){\circle*{2}} \put(90,30){\circle*{2}} \put(0,40){\circle*{2}} \put(40,40){\circle*{2}} \put(70,50){\circle*{2}} \put(50,70){\circle*{2}} \put(90,70){\circle*{2}} \put(20,100){\circle*{2}} \put(40,100){\circle*{2}} \put(30,110){\circle*{2}} \put(50,110){\circle*{2}} \end{picture} \end{center} \caption{\label{Figassocia} {\it The associahedron of type $A_3$.}} \end{figure} The cluster algebra $\mathcal{A}_1$ has fourteen clusters, in bijection with the vertices of the associahedron shown in Figure~\ref{Figassocia} \cite{FZ2}. The faces of the associahedron are naturally labeled by the almost positive roots (the rear, bottom, and leftmost faces are labeled by $-\alpha_1$, $-\alpha_2$, and $-\alpha_3$, respectively). Each vertex corresponds to the cluster consisting of its three adjacent faces: \[ \begin{array}{c} \{-\alpha_1,\,-\alpha_2,\,-\alpha_3\},\ \{\alpha_1,\,-\alpha_2,\,-\alpha_3\},\ \{-\alpha_1,\,\alpha_2,\,-\alpha_3\},\ \{-\alpha_1,\,-\alpha_2,\,\alpha_3\}, \\[2mm] \{\alpha_1,\,-\alpha_2,\,\alpha_3\},\, \{-\alpha_1,\,\alpha_2,\,\alpha_2+\alpha_3\},\, \{-\alpha_1,\,\alpha_3,\,\alpha_2+\alpha_3\},\, \{-\alpha_3,\,\alpha_2,\,\alpha_1+\alpha_2\}, \\[2mm] \{-\alpha_3,\,\alpha_1,\,\alpha_1+\alpha_2\},\ \{\alpha_1+\alpha_2,\,\alpha_2,\,\alpha_2+\alpha_3\},\ \{\alpha_1,\,\alpha_3,\,\alpha_1+\alpha_2+\alpha_3\}, \\[2mm] \{\alpha_1,\,\alpha_1+\alpha_2,\,\alpha_1+\alpha_2+\alpha_3\},\ \{\alpha_3,\,\alpha_2+\alpha_3,\,\alpha_1+\alpha_2+\alpha_3\}, \\[2mm] \{\alpha_1+\alpha_2,\,\alpha_2+\alpha_3,\,\alpha_1+\alpha_2+\alpha_3\}. \end{array} \] The simple modules of $\mathcal{C}_1$ are exactly the tensor products of the form \[ S(\beta_1)^{\otimes k_1}\otimes S(\beta_2)^{\otimes k_2}\otimes S(\beta_3)^{\otimes k_3}\otimes F_1^{\otimes l_1}\otimes F_2^{\otimes l_2}\otimes F_3^{\otimes l_3}, \ (k_1,k_2,k_3,l_1,l_2,l_3)\in{\mathbb N}^6, \] in which $\{\beta_1,\,\beta_2,\,\beta_3\}$ runs over the 14 clusters listed above. Note that by Gabriel's theorem, positive roots are in one-to-one correspondence with indecomposable representations of $Q$. By inspection of the above list of clusters, one can check that two roots belong to a common cluster if and only if the corresponding representations of $Q$ have no extensions between them. This is true in general, as will be explained below (see Corollary~\ref{thgeom}, and the end of \S\ref{proofN}).
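As a quick combinatorial cross-check (our illustration, not taken from \cite{HL}), the number 14 can be recovered by enumerating the triangulations of a hexagon, whose 9 diagonals are well known to correspond to the 9 almost positive roots of $A_3$ \cite{FZ2}. A minimal Python sketch: \begin{verbatim}
from itertools import combinations

vertices = range(6)
# diagonals of a convex hexagon = pairs of non-adjacent vertices (9 of them)
diagonals = [(a, b) for a, b in combinations(vertices, 2)
             if (b - a) % 6 not in (1, 5)]

def crossing(d1, d2):
    # two diagonals of a convex polygon cross iff their endpoints
    # strictly interleave along the boundary
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return a < c < b < d or c < a < d < b

# a triangulation of the hexagon = 3 pairwise non-crossing diagonals
triangulations = [T for T in combinations(diagonals, 3)
                  if not any(crossing(u, v) for u, v in combinations(T, 2))]
print(len(diagonals), len(triangulations))   # prints: 9 14
\end{verbatim} Each of the 14 triangulations corresponds to a vertex of the associahedron of Figure~\ref{Figassocia}, that is, to a cluster.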
\subsection{Proof of Theorem~\ref{th_l_1}: approach of \cite{HL}}\label{proofHL} Write for short $z_i = z_{(i,\xi_i+2)}$ for the cluster variables of the initial cluster $\mathbf{z}$. We know that $R_1$ is the polynomial ring in the classes of the fundamental modules $L(Y_{i,q^{\xi_i}})$ and $L(Y_{i,q^{\xi_i+2}})$. On the other hand, it is not difficult to show that, because of the presence of the frozen variables, $\mathcal{A}_1$ is the polynomial ring in the variables $z_i$, together with the variables $z_i'$ of the cluster $\mathbf{z}'$ obtained from $\mathbf{z}$ by applying the product of mutations \[ \mathbf{z}' = \left(\prod_{i\in I_0}\mu_i\right)\left( \prod_{i\in I_1} \mu_i\right)\mathbf{z}. \] Therefore, the assignment \[ z_i\mapsto [L(Y_{i,q^{\xi_i+2}})],\quad z'_i \mapsto [L(Y_{i,q^{\xi_i}})],\qquad (i\in I), \] extends to a ring isomorphism $\iota$ from $\mathcal{A}_1$ to $R_1$. To calculate the images under $\iota$ of the remaining cluster variables, we use the fact \cite{FZ4} that every cluster variable is entirely determined by its $F$-polynomial (and its $g$-vector) with respect to the reference cluster $\mathbf{z}$. For an almost positive root $\beta=\sum_i b_i\alpha_i$, let $\mathbf{z}[\beta]$ be the cluster variable whose cluster expansion with respect to $\mathbf{z}$ has denominator $\prod_i z_i^{b_i}$. In particular $z_i = \mathbf{z}[-\alpha_i]$. Denote by $F_\beta$ the $F$-polynomial of $\mathbf{z}[\beta]$. By convention $F_{-\alpha_i}=1$. On the other hand, the $q$-character of an object $M$ of $\mathcal{C}_1$ is uniquely determined by its truncation obtained by specializing $A_{i,q^{k}}^{-1}$ to $0$ for $k>\xi_i+1$. The truncated $q$-character of a simple object $L(m)$ of $\mathcal{C}_1$ is of the form \begin{equation}\label{qrest} \chi_q(L(m))_{\le 2} = m\, P_m(v_1,\ldots,v_n) \end{equation} where $P_m$ is a polynomial in the variables $v_i := A_{i,q^{\xi_i+1}}^{-1}\ (i\in I)$ with constant term~1. Moreover, the map $\tau: [L(m)]\mapsto \chi_q(L(m))_{\le 2}$ is an injective ring homomorphism from $R_1$ to its image in $\mathcal{Y}$. The injectivity comes from the fact that the truncated $q$-character of a module of $\mathcal{C}_1$ already contains all its dominant monomials. It is thus enough to determine the images of the cluster variables of $\mathcal{A}_1$ under $\iota':=\tau\iota$. Let $s_i\ (i\in I)$ be the Coxeter generators of the Weyl group. In \cite{HL}, it is proved that for $\beta>0$, \begin{equation}\label{imF} \iota'(\mathbf{z}[\beta]) = Y^{\alpha} F_\beta(v_1,\ldots,v_n), \end{equation} where \begin{equation}\label{albe} \alpha = \sum_ia_i\alpha_i := \left(\prod_{i\in I_1} s_i\right)\beta \end{equation} and \begin{equation}\label{Yal} Y^\alpha = \left\{ \begin{array}{ll} \displaystyle\prod_{i\in I} Y_{i,3\xi_i}^{a_i}&\quad \mbox{if}\quad \alpha>0, \\[5mm] Y_{i,2-\xi_i}&\quad \mbox{if} \quad \alpha=-\alpha_i. \end{array} \right. \end{equation} Thus, setting $m=Y^\alpha$ and comparing (\ref{imF}) with (\ref{qrest}), we see that an important step in proving Theorem~\ref{th_l_1} is to show that the two polynomials $P_m$ and $F_\beta$ coincide. This last statement is verified in \cite{HL} for every root $\beta$ in types $A_n$ and $D_n$, and for every multiplicity-free root $\beta$ in type $E_n$. The proof uses the Frenkel-Mukhin algorithm for evaluating $P_m$, and the combinatorial description of the Fibonacci polynomials of Fomin and Zelevinsky \cite{FZ1} for evaluating $F_\beta$.
Thus, except for these missing roots in type $E_n$, this shows that all cluster variables of $\mathcal{A}_1$ are mapped by $\iota$ to the classes of some simple modules in $R_1$. The second main step is the following tensor product theorem, proved for all types $A_n$, $D_n$, $E_n$. Let $S_1,\ldots,S_k$ be simple modules of $\mathcal{C}_1$, and suppose that for every $1\le i<j\le k$ the tensor product $S_i\otimes S_j$ is simple. Then it is shown \cite[Th. 8.1]{HL} that $S_1\otimes\cdots\otimes S_k$ is simple\footnote{This theorem was later extended by Hernandez \cite{H3} to the whole category $\mathcal{C}$.}. Thus, to show that the image of a cluster monomial by $\iota$ is the class of a simple module, it is enough to prove it when the monomial is the product of \emph{two} cluster variables. Finally, the third step consists in proving that if $\mathbf{z}[\beta]$ and $\mathbf{z}[\gamma]$ are two compatible cluster variables of $\mathcal{A}_1$, that is, if $\mathbf{z}[\beta]\mathbf{z}[\gamma]$ is a cluster monomial, then the tensor product of the corresponding simple modules of $\mathcal{C}_1$ is simple. Since, for a given $\mathfrak g$, there are only finitely many cluster variables in $\mathcal{A}_1$, and so finitely many compatible pairs, this is in principle only a ``finite check''. Unfortunately it is not easy in general to decide whether a product of (truncated) irreducible $q$-characters is again the (truncated) $q$-character of a simple module, and in \cite{HL} this was only settled completely in types $A_n$ and~$D_4$. Although this (partial) proof is combinatorial and representation-theoretic, it has an interesting geometric consequence. Indeed, it shows that the truncated $q$-characters of the prime simple objects of $\mathcal{C}_1$ coincide, after dividing out the highest {\em l}-weight monomial, with the $F$-polynomials of the cluster variables of $\mathcal{A}_1$. But the $F$-polynomials have a geometric description due to Fu and Keller \cite{FK} in terms of quiver Grassmannians, inspired by a similar formula of Caldero and Chapoton for cluster expansions of cluster variables \cite{CC}. Therefore we get the following geometric description of the truncated $q$-characters. Let $M[\beta]$ be the indecomposable representation of the Dynkin quiver $Q$ attached to a positive root $\beta$, and denote by $\operatorname{Gr}_\nu(M[\beta])$ the quiver Grassmannian of subrepresentations of $M[\beta]$ with dimension vector $\nu$. Let $\alpha$ and $Y^\alpha$ be related to~$\beta$ as in Eqs.~(\ref{albe}) and (\ref{Yal}). Finally, recall the notation $v_i := A_{i,q^{\xi_i+1}}^{-1}$. \begin{corollary}[\cite{HL}]\label{thgeom} Conjecture~\ref{conjec} for $\mathcal{C}_1$ implies that \begin{equation}\label{quivgrass} \chi_q(L(Y^\alpha))_{\le 2}\ = Y^\alpha \sum_\nu \chi(\operatorname{Gr}_\nu(M[\beta])) \,v_1^{\nu_1}\cdots v_n^{\nu_n}. \end{equation} More generally, we have a similar truncated $q$-character formula for every simple module of $\mathcal{C}_1$, in which the indecomposable representation $M[\beta]$ of the right-hand side is replaced by a generic representation of $Q$, \emph{i.e.\,} a representation without self-extensions. \end{corollary} Corollary~\ref{thgeom} should be compared to Eqs.~(\ref{eqstandard}) and (\ref{eqfund}) for $q$-characters of standard modules. What is remarkable here is that we obtain a similar formula for \emph{simple} modules of $\mathcal{C}_1$: for all these modules, we do not need to use the decomposition theorem for perverse sheaves, as was done in \S\ref{perverse}.
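As a concrete illustration of Corollary~\ref{thgeom} (our own example, assuming the orientation of $Q$ in which the vertices of $I_1$ are sources, i.e.\ arrows $2\to 1$ and $2\to 3$ in the $\mathfrak{sl}_4$ case above, cf.\ \S\ref{proofN}): take $\beta=\alpha_1+\alpha_2$. The indecomposable representation $M[\beta]$ has dimension vector $(1,1,0)$ and a nonzero map along the arrow $2\to 1$; a subrepresentation must be stable under this map, so the nonempty quiver Grassmannians have dimension vectors $(0,0,0)$, $(1,0,0)$ and $(1,1,0)$, and each of them is a single point. Since $\alpha=s_2\beta=\alpha_1$ by (\ref{albe}), formula (\ref{quivgrass}) predicts \[ \chi_q(L(Y_{1,q^0}))_{\le 2}\ =\ Y_{1,q^0}\,\big(1+v_1+v_1v_2\big), \] which indeed agrees with the $q$-character of the $4$-dimensional fundamental module $S(\alpha_1)=L(Y_{1,q^0})$ of the list above: of its four {\em l}-weight monomials, only the one involving $A_{3,q^3}^{-1}$ is killed by the truncation.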
\subsection{Proof of Theorem~\ref{th_l_1}: approach of \cite{N5}}\label{proofN} \begin{figure}[t] \[ \xymatrix{ &\qquad\qquad\qquad&& \\ &\ar[d]^{\alpha_2(3)}W_2(3)& \\ \ar[d]^{\alpha_1(2)} W_1(2)&\ar[ld]^{B_{21}(2)} V_2(2)\ar[d]^{\beta_2(2)} \ar[rd]^{B_{23}(2)} & \ar[d]^{\alpha_3(2)}W_3(2)& \\ V_1(1)\ar[d]^{\beta_1(1)} &W_2(1) & V_3(1)\ar[d]^{\beta_3(1)} \\ W_1(0) && W_3(0) } \] \caption{\label{Fig5} {\it The graded spaces $W$ and $V$ associated with a simple object of $\mathcal{C}_1$ in type $A_3$.}} \end{figure} In \cite{N5}, Nakajima reverses the logic of \cite{HL}, and first proves the formula of Corollary~\ref{thgeom} for all simple modules of $\mathcal{C}_1$ and for all Dynkin types, by means of his description in terms of perverse sheaves. This is made possible because of the following simple description of the quiver varieties $\mathfrak{M}^\bullet(V,W)$ and $\mathfrak{M}_0^\bullet(W)$ when $W$ corresponds to the highest {\em l}-weight of a simple object of $\mathcal{C}_1$, and the monomial $Y^WA^V$ contributes to its truncated $q$-character. One first notes that $L(W)$ is in $\mathcal{C}_1$ if and only if the $\widehat{I}_0$-graded space $W$ satisfies: \begin{equation} W_i(r) \not = 0 \quad\mbox{ only if }\quad r\in\{\xi_i, \xi_i+2\}. \end{equation} Moreover, if $Y^WA^V$ appears in $\chi_q(L(W))_{\le 2}$ then \begin{equation}\label{eqV} V_i(r) \not = 0 \quad\mbox{ only if }\quad r=\xi_i+1. \end{equation} Thus $W$ and $V$ are supported on a zig-zag strip of height 2, as shown in Figure~\ref{Fig5}. Therefore the ADHM equations are trivially satisfied in this case, and we have $M^\bullet(V,W)=\Lambda^\bullet(V,W)$. \begin{figure}[t] \[ \xymatrix{ &\qquad\qquad\qquad&& \\ &\ar[lddd]^{\mathbf{y}_{21}}2\ar[dd]^{\mathbf{x}_2}\ar[rddd]^{\mathbf{y}_{23}}& \\ 1'\ar[dd]^{\mathbf{x}_1}& & 3'\ar[dd]^{\mathbf{x}_3}& \\ &2' & \\ 1&& 3 } \] \caption{\label{Fig6} {\it The decorated quiver $\widetilde{Q}$ in type $A_3$.}} \end{figure} Since every dominant monomial appears in the truncated $q$-character, and since $\mathfrak{M}_0^\bullet(W)$ is equal to $\mathfrak{M}_0^\bullet(V,W)$ for some {\em l}-dominant pair $(V,W)$, we see that $\mathfrak{M}_0^\bullet(W) = M^\bullet(V,W)\sslash G_V$ for some $V$ satisfying (\ref{eqV}). Define the following $G_V$-invariant maps on $M^\bullet(V,W)$: \begin{align} \mathbf{x}_i &= \beta_i(\xi_i+1)\alpha_i(\xi_i+2),\quad (i\in I),\label{eq28} \\ \mathbf{y}_{ij} &= \beta_j(1)B_{ij}(2)\alpha_i(3),\quad\quad (i\in I_1,\ j\in I_0,\ a_{ij}=-1).\label{eq29} \end{align} The data $(\mathbf{x}_i,\mathbf{y}_{ij})$ amount to a representation of a decorated quiver $\widetilde{Q}$ obtained by attaching to every vertex $i$ of $Q$ a new vertex $i'$ and an arrow $i'\to i$ (\emph{resp.\,} $i\to i'$) if $i\in I_0$ (\emph{resp.\,} $i\in I_1$) (see Figure~\ref{Fig6}). Let \[ E_W = \bigoplus_{i\in I} \operatorname{Hom}(W_i(\xi_i + 2),W_i(\xi_i)) \oplus \bigoplus_{i\in I_1,\, j\in I_0,\, a_{ij}=-1} \operatorname{Hom}(W_i(3),W_j(0)) \] be the space of representations of $\widetilde{Q}$ based on $W$. Nakajima shows that the map $(B,\alpha,\beta) \mapsto (\mathbf{x}_i, \mathbf{y}_{ij})$ induces an isomorphism from $\mathfrak{M}_0^\bullet(W)$ to $E_W$. Hence, for $L(W)$ in $\mathcal{C}_1$ the affine variety $\mathfrak{M}_0^\bullet(W)$ is isomorphic to a vector space. Put $\nu_i = \dim V_i$ and $\nu = (\nu_i) \in {\mathbb N}^I$.
Let $\mathcal{F}(\nu,W)$ be the variety of $n$-tuples $X=(X_i)$ of subspaces of $W$ satisfying $\dim X_i = \nu_i$ and \[ X_i \subseteq W_i(0) \quad (i\in I_0), \qquad X_i \subseteq W_i(1) \oplus \bigoplus_{j:\, a_{ij}=-1} X_j \quad (i\in I_1). \] Define $\widetilde{\F}(\nu,W)$ as the closed subvariety of $E_W \times \mathcal{F}(\nu,W)$ consisting of all elements $((\mathbf{x}_i,\,\mathbf{y}_{ij}),\, X)$ such that \begin{equation}\label{fiber} \operatorname{Im} \mathbf{x}_i \subseteq X_i\ (i\in I_0),\qquad \operatorname{Im}\left(\mathbf{x}_i\oplus \bigoplus_{j:\, a_{ij}=-1} \mathbf{y}_{ij}\right) \subseteq X_i \ (i\in I_1). \end{equation} Nakajima shows that $(B,\alpha,\beta)\in M^\bullet(V,W)$ is stable if and only if all maps $\beta_i(1)\ (i\in I_0)$ and $\sigma_i(2) := \beta_i(2) + \sum_{j:\,a_{ij}=-1} B_{ij}(2)\ (i\in I_1)$ are injective. Clearly the collection $X$ of spaces \begin{align} X_i& = \beta_i(1)(V_i(1))\ (i\in I_0),\label{eq30} \\ X_i& = \sigma_i(2)(V_i(2))\ (i\in I_1), \label{eq31} \end{align} is $G_V$-invariant, and $\dim X_i = \nu_i$ if $(B,\alpha,\beta)$ is stable. Therefore, the map $(B,\alpha,\beta)\mapsto ((\mathbf{x}_i,\,\mathbf{y}_{ij}),\, X)$ defined by (\ref{eq28})--(\ref{eq31}) induces a map from $\mathfrak{M}^\bullet(V,W)$ to $\widetilde{\F}(\nu,W)$, and Nakajima shows that this is an isomorphism \cite[Proposition 4.6]{N5}. Moreover, when $\mathfrak{M}^\bullet(V,W)$ and $\mathfrak{M}_0^\bullet(W)$ are realized as $\widetilde{\F}(\nu,W)$ and $E_W$, respectively, then the projective morphism $\pi_V: \mathfrak{M}^\bullet(V,W) \to \mathfrak{M}_0^\bullet(W)$ becomes the first projection. (Compare this description with the prototypical example of \S\ref{subsect_3.5}.1.) By Theorem~\ref{theo_sim}, to calculate the truncated $q$-character of a simple module of~$\mathcal{C}_1$, one must now compute the multiplicity $a_{V,0;W}(1)$ of the skyscraper sheaf $IC_W(0)$ in the expansion of $[(\pi_V)_!(1_{\mathfrak{M}^\bullet(V,W)})]$ on the basis $\{[IC_W(V')]\}$. Since $\mathfrak{M}_0^\bullet(W) \simeq E_W$ is a vector space, one can use a Fourier transform for this purpose. Let $E_W^*$ denote the dual space, and let $\psi$ be the Fourier-Sato-Deligne functor from the derived category $\mathcal{D}(E_W)$ to $\mathcal{D}(E_W^*)$. The functor $\psi$ maps every simple perverse sheaf $IC_W(V)$ on $E_W$ to a simple perverse sheaf on $E_W^*$. In particular, the image of the skyscraper sheaf is \[ \psi(IC_W(0)) = 1_{E_W^*}[\dim E_W], \] the constant sheaf on $E_W^*$, with degree shifted by $\dim E_W$. We can regard the product $E_W \times \mathcal{F}(\nu,W)$ as a trivial vector bundle on $\mathcal{F}(\nu,W)$ with fiber $E_W$. By (\ref{fiber}), the fibers of the restriction of the second projection to $\widetilde{\F}(\nu,W)$ are vector spaces of constant dimension, and $\widetilde{\F}(\nu,W)$ can be seen as a subbundle of $E_W \times \mathcal{F}(\nu,W)$. Denote by $\widetilde{\F}(\nu,W)^\perp$ the annihilator of $\widetilde{\F}(\nu,W)$ in the dual trivial bundle $E_W^* \times \mathcal{F}(\nu,W)$. We also have a Fourier-Sato-Deligne functor $\psi'$ from the derived category of the trivial bundle $E_W \times \mathcal{F}(\nu,W)$ to that of $E_W^* \times \mathcal{F}(\nu,W)$. It satisfies \[ \psi'\left(1_{\widetilde{\F}(\nu,W)}[\dim \widetilde{\F}(\nu,W)]\right) = 1_{\widetilde{\F}(\nu,W)^\perp}[\dim \widetilde{\F}(\nu,W)^\perp].
\] Moreover, denoting by $\pi: \widetilde{\F}(\nu,W)\to E_W$ and $\pi^\perp: \widetilde{\F}(\nu,W)^\perp\to E_W^*$ the bundle maps, we have the commutation relation \[ \pi_!^\perp\circ \psi' = \psi \circ \pi_!. \] It follows that the required (ungraded) multiplicity $a_{V,0;W}(1)$ is equal to the multiplicity of the constant sheaf $1_{E_W^*}$ in the expansion of $\pi_!^\perp (1_{\widetilde{\F}(\nu,W)^\perp})$ in terms of the $\{\psi(IC_W(V))\}$. The advantage of this Fourier transformation is that we can now evaluate this new multiplicity by looking at the stalk of $\pi_!^\perp (1_{\widetilde{\F}(\nu,W)^\perp})$ over a generic point of $E_W^*$. At this point we remark that we can without loss of generality assume that $W_i(2-\xi_i)=0$ for every $i\in I$. In other words, we suppose that $E_W$ is a space of representations of the quiver $Q$ \emph{without decoration}. (One can easily reduce the general case to this one by factoring out from $L(W)$ a tensor product of frozen Kirillov-Reshetikhin modules $L(Y_{i,\xi_i} Y_{i,\xi_i+2})$, as in \cite[\S9.2]{HL} or \cite[\S6.3]{N5}.) So $E_W^*$ is a space of representations of the quiver $Q^*$ obtained from $Q$ by changing the orientation of every arrow. Let \[ G_W := \prod_{i\in I} GL(W_i(3\xi_i)). \] Since $Q^*$ is a Dynkin quiver, $E_W^*$ has an open dense $G_W$-orbit corresponding to the generic representation of dimension vector $(\dim W_i(3\xi_i))$, and all other $G_W$-orbits have strictly smaller dimension. Now, all the simple perverse sheaves $\psi(IC_W(V))$ are $G_W$-equivariant, hence they are supported on a union of $G_W$-orbits, so the only one having a nonzero stalk over a generic point of $E_W^*$ is $\psi(IC_W(0)) = 1_{E_W^*}[\dim E_W]$. Therefore, by definition of the push-down functor~$\pi_!^\perp$, the multiplicity $a_{V,0;W}(1)$ is precisely the dimension of the total cohomology of a generic fiber of $\pi^\perp$. It remains to describe this generic fiber. Because of our simplifying assumption, a point of $E_W$ is now just a collection of maps $\mathbf{y}_{ij} \in \operatorname{Hom}(W_i(3),W_j(0))\ (a_{ij}=-1)$, and a point in $\mathcal{F}(\nu,W)$ is a collection of subspaces $X=(X_i)$ of $W$ such that \[ X_i \subseteq W_i(0)\ \ (i\in I_0), \qquad X_i \subseteq \bigoplus_{j: a_{ij}=-1} X_j \ \ (i\in I_1). \] The pair $((\mathbf{y}_{ij}),X)$ belongs to $\widetilde{\F}(\nu,W)$ if and only if $\operatorname{Im}(\oplus_j \mathbf{y}_{ij}) \subseteq X_i$ for all $i\in I_1$. Clearly, the annihilator $\widetilde{\F}(\nu,W)^\perp$ consists of pairs $((\mathbf{y}_{ij}^*),X)$ in $E_W^*\times\mathcal{F}(\nu,W)$ such that $X_i \subseteq \operatorname{Ker} (\oplus_j \mathbf{y}_{ij}^*)$ for every $i\in I_1$. To get a nicer description of the fibers of $\pi^\perp$ we consider the product $\sigma$ of Gelfand-Ponomarev reflection functors at every sink $i\in I_1$ of $Q^*$. The functor $\sigma$ sends $(\mathbf{y}_{ij}^*)\in E_W^*$ to $(\mathbf{y}_{ij}^\sigma)\in E_{W^\sigma}$ defined by \[ W_i^\sigma(0)=W_i(0),\ \ (i\in I_0),\qquad W_i^\sigma(3)= \operatorname{Ker} (\oplus_j \mathbf{y}_{ij}^*),\ \ (i\in I_1), \] and, for $i\in I_1$, $\mathbf{y}_{ik}^\sigma$ is the composition of the embedding of $\operatorname{Ker} (\oplus_j \mathbf{y}_{ij}^*)$ in $\oplus_j W_j(0)$ followed by the projection onto $W_k(0)$. The collection of linear maps $\mathbf{y}^\sigma=(\mathbf{y}_{ij}^\sigma)$ is a representation of the original quiver $Q$. By construction, $X_i \subseteq W_i^\sigma(3\xi_i)$ for every $i\in I$.
Moreover, one can easily check that $((\mathbf{y}_{ij}^*),X)\in\widetilde{\F}(\nu,W)^\perp$ if and only if $\mathbf{y}_{ij}^\sigma(X_i) \subseteq X_j$ for every $i\in I_1$. In other words, $X$ belongs to the fiber of $\pi^\perp$ above $(\mathbf{y}_{ij}^*)$ if and only if $X$ is a point of the quiver Grassmannian $\operatorname{Gr}_{\nu}(\mathbf{y}^\sigma)$. It now follows that the multiplicity $a_{V,0;W}(1)$ of the monomial $Y^WA^V$ in $\chi_q(L(Y^W))_{\le 2}$ is the total dimension of the cohomology of $\operatorname{Gr}_{\nu}(\mathbf{y}^\sigma)$ for a generic representation $\mathbf{y}^\sigma$ of $Q$ in $E_{W^\sigma}$. Note that the product of reflection functors $\sigma$ categorifies the product $\prod_{i\in I_1} s_i$ in the Weyl group, so if we denote by $\beta$ the graded dimension of $W^\sigma$, and if we assume that $\beta$ is a positive root, then the graded dimension $\alpha$ of $W$ is related to $\beta$ by (\ref{albe}), in perfect agreement with (\ref{quivgrass}). Moreover, Nakajima explains that the vanishing of the odd cohomology of $\mathfrak{L}^\bullet(V,W)$ implies that this generic fiber has no odd cohomology, and therefore $a_{V,0;W}(1)$ is also equal to the Euler characteristic of the quiver Grassmannian $\operatorname{Gr}_{\nu}(\mathbf{y}^\sigma)$. Thus, Corollary~\ref{thgeom} follows in full generality. After this $q$-character formula is established, Nakajima proceeds to show that the tensor product factorization of the simple modules $L(W)$ of $\mathcal{C}_1$ is given by the canonical direct sum decomposition of the corresponding generic quiver representation $\mathbf{y}$ of $E_W$ into indecomposable summands. The proof uses the geometric realization given by Varagnolo and Vasserot \cite{VV} of the $t$-deformed product of $(q,t)$-characters in terms of convolution of perverse sheaves. Finally, to relate the $q$-character formula with cluster algebras, Nakajima makes use of the cluster category of Buan, Marsh, Reineke, Reiten and Todorov \cite{BMRRT}, and of the Caldero-Chapoton formula for cluster variables \cite{CC}. It is worth noting that Nakajima's approach is more general: most of his results work for the quantum affinization $U_q(L\mathfrak g)$ of a possibly infinite-dimensional symmetric Kac-Moody algebra $\mathfrak g$. This yields important positivity results for the cluster algebras attached to arbitrary bipartite quivers. However, when $\mathfrak g$ is infinite-dimensional, $U_q(L\mathfrak g)$ is no longer a Hopf algebra, and the meaning of the multiplicative structure of the Grothendieck group is less clear (see \cite{H4}). \subsection{The case $\ell>1$} If $\mathfrak g = \mathfrak{sl}_2$, Conjecture~\ref{conjec} holds for every $\ell$. Otherwise, Conjecture~\ref{conjec} has only been proved for $\mathfrak g = \mathfrak{sl}_3$ and $\ell = 2$ \cite[\S13]{HL}. In that small-rank case, $\mathcal{A}_2$ still has finite cluster type $D_4$ (see Table~\ref{table1}), and this implies that $\mathcal{C}_2$ has only real simple objects. There are 18 explicit prime simple objects with respective dimensions \[ 3,\ 3,\ 3,\ 3,\ 3,\ 3,\ 6,\ 6,\ 6,\ 6,\ 8,\ 8,\ 8,\ 10,\ 10,\ 15,\ 15,\ 35, \] and 50 factorization patterns (corresponding to the 50 vertices of a generalized associahedron of type $D_4$ \cite{FZ2}).
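These numbers can be checked against the general enumeration of finite-type cluster algebras \cite{FZ2} (our remark): a cluster algebra of finite type with exponents $e_1,\dots,e_n$ and Coxeter number $h$ has $\prod_{i=1}^n (e_i+h+1)/(e_i+1)$ clusters. In type $D_4$, where $(e_1,\dots,e_4)=(1,3,3,5)$ and $h=6$, this gives \[ \frac{8}{2}\cdot\frac{10}{4}\cdot\frac{10}{4}\cdot\frac{12}{6} = 50. \] The count of prime simple objects matches as well: the $12$ positive roots and $4$ negative simple roots of $D_4$ yield $16$ cluster variables, which together with the $2$ frozen variables of $\mathcal{A}_2$ account for the $18$ prime simple objects listed above.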
Our proof in this case is quite indirect and uses several ingredients: the quantum affine Schur-Weyl duality, Ariki's theorem for type $A$ affine Hecke algebras \cite{A}, the coincidence of Lusztig's dual canonical and dual semicanonical bases of ${\mathbb C}[N]$ in type $A_4$ \cite{GLS1}, and the results of \cite{GLS2} on cluster algebras and dual semicanonical bases. This proof could be extended to $\mathfrak g = \mathfrak{sl}_n$ and every $\ell$ if the general conjecture of \cite{GLS2} about the relationship between Lusztig's dual canonical and dual semicanonical bases were established.
\section{Introduction} Spontaneous synchronization of coupled non-identical oscillators is a well-known form of collective behavior \cite{book:strog,sync,book:pik}. The problem has been intensively studied since Huygens. If the legend is true, he was presumably the first to notice and report the synchronized swinging of pendulum clocks. By simple experiments he found that the synchronized state is a stable limit cycle of the system: even after perturbing the system, the pendulums returned to this dynamic state. Originally, Huygens thought that this \emph{"odd kind of sympathy"}, as he named it, occurs due to shared air currents between the pendulums, and he performed several tests to confirm this idea. His experimental setup was very simple, with two pendulum clocks hung from a common suspension beam which was placed between two chairs \cite{book:pik}. After performing some additional tests, Huygens observed a stable and reproducible anti-phase synchronization, and attributed this to imperceptible vibrations in the suspension beam. He summarized his observations in a letter to the Royal Society of London \cite{huy}, thereby launching the study of synchronization phenomena and coupled oscillators. Recent studies have aimed at reconsidering various forms of Huygens' two pendulum-clock experiment as well as realistically modelling the system. Bennet and his group \cite{bennet} investigated the same two pendulum-clock system as Huygens did and came to the conclusion that several types of collective dynamics are observable as a function of the system's parameters. For strong coupling, a "beat death" phenomenon usually occurs, where one pendulum oscillates and the other does not. For weak coupling, synchronization does not occur, and a quasi-periodic oscillation is observed. There is, however, an intermediate coupling strength interval where the anti-phase synchronization observed by Huygens appears. Hence, Huygens had some luck with his setup, as the coupling was just in the right interval: strong enough to cause synchronization, but also weak enough to avoid the "beat death" phenomenon. Dilao \cite{dilao} came to the conclusion that the periods of two synchronized nonlinear oscillators (pendulum clocks) differ from the natural frequencies of the oscillators. Kumon and his group \cite{kumon} studied a similar system consisting of two pendulums, one of them having an excitation mechanism, and the two pendulums being coupled by a weak spring. Fradkov and Andrievsky \cite{fradkov} developed a model for such a system, and found from numerical solutions that both in- and anti-phase synchronization are possible, depending on the initial conditions. Kapitaniak and his group \cite{czo} revisited Huygens' original experiment and found that anti-phase synchronization usually emerges, although in rare cases in-phase synchronization is also possible. They also developed a more realistic model for this experiment \cite{kapit}. Pantaleone \cite{panta} considered a similar system, but he used metronomes placed on a freely moving base (supported on two cylinders) instead of pendulum clocks. He modeled the metronomes as van der Pol oscillators \cite{vanderpol} and came to the conclusion that anti-phase synchronization occurs in some rare cases only. He proposed this setup as an easy classroom demonstration of the Kuramoto model \cite{kuramoto} and extended the study to larger systems containing up to seven globally coupled metronomes.
He also made quantitative investigations by tracking the motion of the metronomes' pendulums, acoustically registering the ticks with a microphone. Ulrichs and his group \cite{ulric} examined the case when the number of metronomes was even larger. The present state of this quite old field of physics was recently reviewed by Kapitaniak et al.\ \cite{Kap2012}. Our work is intended to continue this line of study, showing that it is still possible to find interesting aspects of this old problem in physics. In contrast to previous works, we consider an ensemble of metronomes arranged symmetrically on the perimeter of a freely rotating disk, as illustrated in Figure \ref{fig1}. The free rotation of the disk acts as a coupling mechanism between the metronomes and, for high enough ticking frequencies, synchronization emerges. Our aim here is to investigate the conditions favoring such spontaneous synchronization by using a realistic model and model parameters. In order to achieve this, we first study the dynamics of the system by well-controlled experiments. Contrary to earlier studies that investigated only the final stable dynamic state of the system, here we also consider and describe the transient dynamics leading to synchronization. The synchronization level is quantified and measured. This is achieved by using an optical phase-detection mechanism for each metronome separately. We then construct a realistic model for the system, and its modeling power is demonstrated by comparing its results with the experimental ones. We discuss the reasons behind the fact that only in-phase synchronization is observed in our experiments. Finally, the model is used to investigate the emergence of synchronization in large ensembles of coupled metronomes. \section{Experimental setup} The experimental setup is sketched in Figure \ref{fig1}. The main units are the metronomes (Figures \ref{fig1} and \ref{fig3}a), which are devices that produce regular, metrical beats. They were patented by Johann Maelzel in 1815 as a timekeeping tool for musicians \cite{urlmet}. The oscillating element of the metronome is a physical pendulum, which consists of a rod with two weights on it (Figure \ref{fig2}): a fixed one at the lower end of the rod, whose mass is denoted by $W_1$, and a movable one, $W_2$, attached to the upper part of the rod. In general, $W_1>W_2$, and the rod is suspended on a horizontal axis between the two weights in a stable manner, so that the center of mass lies below the suspension axis. \begin{figure}[h] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG1.eps} } \caption{Schematic view of the experimental setup: metronomes are placed on the perimeter of a disk that can rotate around a vertical axis.} \label{fig1} \end{figure} By sliding the $W_2$ weight along the rod, the oscillation frequency can be tuned. There are several marked places on the rod where the $W_2$ weight has a stable position, yielding standard ticking frequencies for the metronome. These $\omega_0$ frequencies are marked on the metronome in units of Beats Per Minute (BPM). Another key part of the metronome is the excitation mechanism, which compensates for the energy lost to friction. This mechanism gives additional momentum to the physical pendulum in the form of pulses delivered at a given phase of the oscillation period. For a more detailed analysis of this excitation mechanism we recommend the work of Kapitaniak \emph{et al.} \cite{kapit}.
\begin{figure}[h] \centering \resizebox{0.30\textwidth}{!}{% \includegraphics{FIG2.eps} } \caption{Schematic view of the metronome's bob. The dotted line denotes the horizontal suspension axis, and the white dot illustrates the center of mass.} \label{fig2} \end{figure} For the experiments, we used the commercially available Thomann 330 metronomes (Figure \ref{fig3}a). From the 10 metronomes we had bought, the 7 with the most similar frequencies were selected. Naturally, since no two units are identical, we have to deal with a non-zero standard deviation of the natural frequencies in the experiments. In order to globally couple the metronomes, we placed them on a disk-shaped platform which could rotate with very little friction around a vertical axis, as sketched in Figure \ref{fig1} and illustrated in the photo in Figure \ref{fig3}a. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG3a.eps} } \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG3b.eps} } \caption{(a) The experimental setup, with the metronomes placed on the platform and the wiring that carries information on the metronomes' phases. (b) One of the light-gates (Kingbright KTIR 0611 S), composed of an infrared LED and a photo-transistor.} \label{fig3} \end{figure} In order to monitor the dynamics of all metronomes separately, photo-cell detectors (Figure \ref{fig3}b) were mounted on them. These detectors were commercial ones (Kingbright KTIR 0611 S), containing an infrared light-emitting diode and a photo-transistor. They were mounted on the bottom of the metronomes. The wires starting from each metronome (seen in Figure \ref{fig3}) connect the photo-cells with a circuit board, allowing data collection through the USB port of a computer. The data was collected using the free, open-source program \emph{MINICOM} \cite{minicom}. The data was saved in log files, and could be processed in real time. It was possible to simultaneously follow the states of up to $8$ metronomes. The circuit board only sent data when there was a change in the signal from the photo-cell system (i.e., when a metronome's bob passed the light-gate). At that point, it would record a string such as $0-0-1-1-0-1-0$ $1450$, where the first $7$ numbers characterize the metronome bobs' positions relative to their photo-cells (whether the gate is open or closed) and the eighth number is the time, one time unit corresponding to 64 microseconds. Since we are interested in the dynamics of this system from the perspective of synchronization, we computed the classical order parameter, $r$, of the Kuramoto model \cite{kuramoto} in our numerical evaluations: \begin{equation} r\exp(i\phi)=\frac{1}{N}\sum_j{\exp(i\theta_j)}. \label{op} \end{equation} Here, $\phi$ is the average phase of the whole ensemble, $\theta_j$ is the phase of the $j$-th metronome, $N$ is the number of metronomes, and $i$ is the imaginary unit. The recorded data only tell us the exact moment at which a metronome's bob passes through the light-gate, so some additional steps are needed in order to get the phases $\theta_j$ of all metronomes and to compute the Kuramoto order parameter at a given moment in time. In order to achieve this, we first excluded from the data those time-moments when the metronome's bob passed through the light-gate for the second time in a period, and after that we retained the pass-times corresponding to a given directional motion only.
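Together with the interpolation step detailed in the text below, this cleaning yields a simple phase-reconstruction pipeline. As an illustration, it could be implemented along the following lines (a minimal Python sketch with hypothetical array names, not our actual analysis code): \begin{verbatim}
import numpy as np

def phase_from_ticks(ticks, t):
    # ticks: sorted pass-times of ONE metronome, already cleaned so
    # that there is one tick per period (same crossing direction).
    # Between consecutive ticks the phase is interpolated linearly,
    # growing by 2*pi per cycle (uniform angular velocity assumed).
    k = np.arange(len(ticks))
    return np.interp(t, ticks, 2.0 * np.pi * k)

def kuramoto_r(thetas):
    # thetas: array of shape (N, n_times); returns r(t) for the
    # Kuramoto order parameter defined in Eq. (op) of the text
    return np.abs(np.mean(np.exp(1j * thetas), axis=0))

# usage sketch: tick_lists[j] holds the cleaned pass-times of metronome j
# t = np.arange(0.0, 600.0, 0.1)        # a 10-minute measurement
# thetas = np.array([phase_from_ticks(tj, t) for tj in tick_lists])
# r = kuramoto_r(thetas)
\end{verbatim}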
With this "cleaned data", we calculated the period of each cycle and interpolated this time-interval for the $\theta_j$ phases (between $0$ and $2 \pi$, corresponding to the state of a Kuramoto rotator) assuming a uniform angular-velocity. This way, the phase $\theta_j$ of each metronome (considered here as a rotator) could be uniquely determined at each moment in time, and the Kuramoto order parameter (\ref{op}) could be computed. Before starting the experiments we monitored each metronome separately and recorded their exact frequency, $\omega_i$, for all the standardly marked rhythms. These frequencies had a small, but finite fluctuation around the nominal frequency, $\omega_0$. We have selected those $7$ met\-ro\-no\-mes that had their $\omega_i$ standard frequencies relatively close to each other, and precisely measured these values. From these values the standard deviation, $\sigma$, of the used metronomes' natural frequencies could be determined (Table \ref{table2}). \section{Experimental Results} As already described in the introductory section, the met- metronomes oscillate with different natural frequencies, depending on the position of the adjustable weight on the metronomes' rod. For our experiments we have used the standard frequencies marked on the metronome. These frequencies are given in $BPM$ units. Before discussing the experimental results in detail, we have to emphasize that, independently of the chosen initial condition, only in-phase synchronization of the metronomes was observed. The reasons for this will be given in a separate section (Section \ref{sync}). In the very first experiments we were studying how the chosen frequency influences the detected synchronization level. We fixed all the metronomes' frequencies on the same marked $\omega_0$ value and placed them symmetrically on the perimeter of the rotating platform as indicated in Figure \ref{fig3}a. In reality, of course, this does not mean that their frequencies were exactly the same since no two macroscopic physical systems can be exactly identical. We initialized the system by starting the metronomes randomly, and let the system composed of the metronomes and platform evolve freely. For each considered frequency value we made $10$ measurements, collecting data for 10 minutes. The dynamics of the computed Kuramoto order parameter averaged across the 10 independent experiments are presented in Figure \ref{fig5}a. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG4a.eps} } \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG4b.eps} } \caption{Dynamics of the order parameter for different natural frequencies and numbers of metronomes. (top) Results for seven metronomes, different curves corresponding to different frequencies as indicated in the legends. (bottom) Results are for the fixed frequency ($\omega_0=192\ BPM$) and different numbers of metronomes as indicated in the legend. On both graphs, the results are averaged across $10$ independent measurements. } \label{fig5} \end{figure} The results suggest that the obtained degree of synchronization increases as the metronomes' natural frequencies increase. The standard deviations of the natural frequencies of the independent oscillators are indicated in Table \ref{table2}. 
\begin{table} \centering \begin{tabular}{|l|l|l|l|l|l|l|}\hline $\omega_0$ ($BPM$) & 160 & 168 & 176 & 184 & 192 & 208\\ \hline $\sigma$ ($BPM \cdot 10^{-7}$) &8.4 &7.9 &7.8 &9.8& 8.5 &8.7\\ \hline \end{tabular} \caption{Standard deviation of the natural frequencies of the seven metronomes used, for different nominal frequencies.} \label{table2} \end{table} Since there is no clear trend in this data as a function of $\omega_0$, the obtained result suggests that the observed effect is not due to a decreasing trend in the metronomes' standard deviation. We also found that, for standard metronome frequencies below $160$ BPM, the system did not synchronize. It is interesting to note, however, that if one inspects the system visually or by ear, one observes no synchronization already for frequencies below $184$ BPM. This means that human observation cannot detect partial synchronization with an order parameter below $r \approx 0.75$. In a second experiment we investigated the influence of the number of metronomes on the synchronization level. In order to study this, we fixed the metronomes at the same frequency ($\omega_0=192$ BPM) and repeated the previous experiment with increasing numbers of metronomes placed on the rotating platform. Again, we performed 10 measurements for each configuration so as to obtain accurate results, and averaged the observed order parameter. The averaged results are presented in Figure \ref{fig5}b. Although the standard deviation of the metronomes' natural frequencies (Table \ref{table1}) does not present a clear trend as a function of the number of metronomes, $N$, we see a clear trend in the detected synchronization level: increasing the number of metronomes results in a decrease of the synchronization level. \begin{table} \centering \begin{tabular}{|l|l|l|l|l|l|l|}\hline N & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline $\sigma $ ($BPM \cdot 10^{-7}$) &5.1 &8.1 &7.5 &7.1 &6.7 &8.5\\ \hline \end{tabular} \caption{Standard deviation of the metronomes' natural frequencies for different numbers of metronomes on the rotating platform ($\omega_0=192$ BPM).} \label{table1} \end{table} \section{Theoretical model} \label{theo} Inspired by the model described in \cite{kapit}, we consider a simple mechanical model for the system investigated here. The model is composed of a rotating platform and physical pendulums attached to its perimeter, as sketched in Figure \ref{Fig7}. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG5.eps} } \caption{Schematic view and notations for the considered mechanical model. The white dots denote the centers of mass of the physical pendulums and the gray dots are the suspension axes.} \label{Fig7} \end{figure} The Lagrange function of such a system is written as: \begin{eqnarray}\nonumber L&=&\frac{J}{2}\dot \phi^2 + \sum_{i=1}^N \frac{J_i \omega_i^2}{2}+\sum_{i=1}^N \frac{m_i}{2} \Big \{ \Big [ \frac{d}{dt}\Big(x_i+h_i \sin \theta_i\Big)\Big]^2 + \\ && +\Big[\frac{d}{dt}\Big(h_i \cos \theta_i\Big)\Big]^2\Big\} - \sum_{i=1}^N m_i g h_i(1-\cos\theta_i) \label{lagrange} \end{eqnarray} The first term is the kinetic energy of the platform, the second is the kinetic energy due to the rotation of the pendulums around their centers of mass, the third one is the kinetic energy of the pendulums' centers of mass, and the last term is the potential energy of the pendulums.
In the Lagrangian we have used the following notations: the index $i$ labels the pendulums, $J$ is the moment of inertia of the platform with the metronomes on it -- taken relative to the vertical rotation axis, $\phi$ is the angular displacement of the platform, $m_i$ is the total mass of the pendulum ($m_i \approx W_1^{(i)}+W_2^{(i)}$, neglecting the mass of the rod), $h_i$ is the distance between the center of mass and the suspension point of the pendulum, $x_i$ is the horizontal displacement of the center of mass of the pendulum due to the rotation of the platform, $\theta_i$ is the angular displacement of the $i$-th pendulum, in radians, $J_i$ is the moment of inertia of the pendulum relative to its center of mass and $\omega_i$ is the angular velocity of the rotation of the pendulum relative to its center of mass. It is easy to see that $x_i=R \phi$ (hence $\dot x_i=R\dot \phi$) and $\omega_i=\dot \theta_i$. Assuming now that the masses of all the weights suspended on the metronomes' bobs are the same ($W_1^{(i)}=w_1$, $W_2^{(i)}=w_2$, and consequently $m_i=m$), and disregarding the constant terms $m_igh_i$, one obtains: \begin{eqnarray}\nonumber L'&=&\Big(\frac{J+N m R^2}{2}\Big)\dot \phi^2 + \sum_i \Big(\frac{m h_i^2}{2} + \frac{J_i}{2}\Big)\dot \theta_i^2+\\ && + m R \dot \phi \sum_i h_i \cos\theta_i \cdot \dot \theta_i + mg\sum_ih_i \cos\theta_i \end{eqnarray} The Euler-Lagrange equations of motion yield: \begin{equation} (J+NmR^2)\ddot \phi + mR\sum_ih_i[\ddot \theta_i \cos\theta_i-\dot \theta_i^2 \sin\theta_i] = 0 \label{eqm-ndf1} \\ \end{equation} \begin{equation} [m h_i^2+J_i] \ddot \theta_i + mR \ddot \phi h_i \cos \theta_i+mgh_i \sin\theta_i = 0. \label{eqm-ndf2} \end{equation} The above equations of motion are those of a Hamiltonian system, with no damping (friction) and no driving (excitation). Friction and excitation from the metronomes' driving mechanism have to be taken into account through extra terms. The system of equations of motion may then be written as \begin{eqnarray}\nonumber (J+NmR^2)\ddot \phi &+& mR\sum_ih_i[\ddot \theta_i \cos\theta_i-\dot \theta_i^2 \sin\theta_i] +\\ && + c_{\phi} \dot \phi + \sum_i \mathbb{M}_i= 0 \label{metron2a} \end{eqnarray} \begin{eqnarray}\nonumber [mh_i^2+J_i]\ddot \theta_i &+& mR\ddot \phi h_i \cos \theta_i+\\ && +mgh_i \sin\theta_i + c_{\theta} \dot \theta_i= \mathbb{M}_i. \label{metron2} \end{eqnarray} where $c_{\phi}$ and $c_{\theta}$ are coefficients characterizing the friction in the rotation of the platform and of the pendulums, respectively, and $\mathbb{M}_i$ are instantaneous excitation terms defined as \begin{equation} \mathbb{M}_i=M \delta(\theta_i) \dot\theta_i, \label{excitation} \end{equation} where $\delta$ denotes the Dirac delta function and $M$ is a fixed parameter characterizing the driving mechanism of the metronomes. The choice of the form of $\mathbb{M}_i$ in Equation (\ref{excitation}) means that excitations are given only when the metronome's bob passes the $\theta=0$ position. The term $\dot\theta_i$ is needed in order to ensure a constant momentum input, independently of the metronomes' amplitude. It also ensures that the excitation is given in the correct direction (the direction of motion).
It is easy to see that the total momentum transferred, $\mathbb{M}_{trans}$, to a metronome in a half period ($T/2$) is always $M$: \[ \mathbb{M}_{trans}=\int_{t}^{t+T/2} M \delta({\theta_i}) \dot\theta_i dt=\int_{-\theta_{max}}^{\theta_{max}} M \delta({\theta_i}) d\theta_i = M \] This driving is implemented in the numerical solution as \[ \mathbb{M}_i = \left\{ \begin{array}{l l l} M/dt & \quad \text{if $ \theta_i{(t-dt)} < 0$ and $ \theta_i{(t)} > 0$ }\\ -M/dt & \quad \text{if $ \theta_i{(t-dt)} > 0$ and $ \theta_i{(t)} < 0$}\\ 0 & \quad \text{in any other case}\\ \end{array} \right. \] where $dt$ is the time-step in the numerical integration of the equations of motion. Clearly, this driving leads to the same total momentum transfer $M$ as the one defined by Equation (\ref{excitation}). The coupled system of equations (\ref{metron2a}),(\ref{metron2}) can be written in a form more suitable for numerical integration: \begin{equation} \ddot \phi =\frac{ mR \sum_ih_i\dot \theta_i^2 \sin\theta_i - c_{\phi} \dot \phi - \sum_i \mathbb{M}_i +A + B - C}{ D }, \label{eqm1} \end{equation} \begin{equation} \ddot \theta_i =\frac{ \mathbb{M}_i - mR\ddot \phi h_i \cos \theta_i - mgh_i \sin\theta_i - c_{\theta} \dot \theta_i}{mh_i^2+J_i} \label{eqm2} \end{equation} where \begin{align} \nonumber A &= m^2gR\sum_i\frac{h^2_i \sin\theta_i \cos\theta_i}{mh_i^2+J_i}, \\ \nonumber B &= mRc_{\theta} \sum_i\frac{h_i \dot \theta_i \cos \theta_i}{mh_i^2+J_i}, \\ \nonumber C &= mR\sum_i\frac{h_i\mathbb{M}_i \cos\theta_i}{mh_i^2+J_i} \text{,} \\ \nonumber D &= \Big (J+NmR^2-m^2R^2\sum_i\frac{ h^2_i \cos^2 \theta_i}{mh_i^2+J_i}\Big). \nonumber \end{align} Taking into account that the metronomes' bobs have the form sketched in Figure \ref{fig2}, and that the $L_1$ distances are fixed and assumed to be identical for all the metronomes, the $h_i$ and $J_i$ terms of the physical pendulums in our model are calculated as: \begin{align} h_i=\frac{1}{w_1+w_2} (w_1 L_1-w_2 L_2^{(i)}) \\ J_i=w_1 L_1^2+w_2 (L_2^{(i)})^2. \end{align} \section{Realistic model parameters} \label{mparam} The parameters were chosen to match the experimental device: $w_1 = 0.025$ $kg$, $w_2 = 0.0069$ $kg$, $L_1 = 0.0358$ $m$, $L_2 \in [0.019,0.049]$ $m$ depending on the chosen natural frequency, $R = 0.27$ $m$, and $J \in [0.0729,0.25515]$ $kg~m^2$ depending on the number of metronomes placed on the platform. The damping and excitation coefficients were estimated as follows. For the estimation of $c_{\theta}$, a single metronome on a rigid support was considered. With the excitation mechanism switched off, a quasi-harmonic damped oscillation of the metronome took place. The exponential decay of its amplitude as a function of time uniquely defines the damping coefficient, hence a simple fit of the amplitude as a function of time allowed the determination of $c_{\theta}$. Switching the excitation mechanism on led to a steady-state oscillation regime with constant amplitude. Since $c_{\theta}$ has already been measured, this amplitude value is determined by the excitation coefficient $M$. Solving Equations (\ref{metron2a}) and (\ref{metron2}) for a single metronome and tuning $M$ until the same steady-state amplitude is obtained as in the experiments makes the estimation of $M$ possible. Now that both $c_{\theta}$ and $M$ are known, the following scenario is considered: all the metronomes are placed on the platform and synchronization is reached. Then the platform exhibits a constant-amplitude oscillatory motion.
In order to determine $c_{\phi}$, its value is tuned while solving Equations (\ref{metron2a})-(\ref{metron2}) until the same amplitude of the disk's oscillations is obtained as in the experiments. This way, all the parameters in the model can be related to the experimental quantities. Using the method defined above, we estimated the following parameter values: $c_{\theta} = 0.00005$ $kg$ $m^2/s$, $c_{\phi} = 0.00001$ $kg$ $m^2/s$ and $M = 0.0006$ $Nm/s$. \section{In-phase synchronization versus anti-phase synchronization of two metronomes} \label{sync} As described in the introductory section, many previous works have reported a stable anti-phase synchronized state in the case of two coupled oscillators \cite{bennet,fradkov,czo}. Since no such stable state was observed in our experiments (independently of the starting conditions), investigating this issue is important. Starting from our theoretical model described in Section \ref{theo}, we will show that in-phase synchronization is favored whenever sufficiently strong and balanced damping and driving forces act on the metronomes. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG6.eps} } \caption{Dynamics of two metronomes as a function of time, and the quantities used for defining the $z$ order parameter.} \label{fig-opn} \end{figure} First, let us investigate the case without any damping and with no driving forces. The equations of motion for such a system are given by Equations (\ref{eqm-ndf1}) and (\ref{eqm-ndf2}). Considering the case of two identical metronomes ($N=2$) with small-angle deviations ($\theta_{1,2}^{max} \ll \pi/2$), we investigate the synchronization properties of such a system. The synchronization level will be studied here by an appropriately chosen order parameter for two metronomes, $z$, that indicates whether we have in-phase or anti-phase synchronization. Although we could have used the Kuramoto order parameter for this purpose, we decided to introduce a new, more suitable order parameter. Note that this new order parameter is only useful for small ensembles, because its calculation would be very time-consuming for large systems. In order to introduce a proper order parameter, let us consider the dynamics of two metronomes as a function of time by plotting $\theta_{1,2}(t)$ (Figure \ref{fig-opn}). Let us denote the time-moments where metronome 1 reaches its $i$-th local minimum and maximum of $\theta_1$ by $t_i^{min}$ and $t_i^{max}$, respectively. We denote the time-moments where metronome 2 reaches its $j$-th local maximum of $\theta_2$ by $T_j^{max}$. With these notations, we define two time-like quantities that characterize the average time-interval of the maximum position of $\theta_2(t)$ relative to the maximum and minimum positions of $\theta_1(t)$, respectively: \begin{eqnarray} t_1=\langle \min_{\{i\}}\{|t_i^{max}-T_j^{max}|\}\rangle_j \\ t_2=\langle \min_{\{i\}}\{|t_i^{min}-T_j^{max}|\}\rangle_j \end{eqnarray} In the above equations the averages are taken over all $j$ maximum positions of $\theta_2(t)$, and the ``$\min$'' notation refers to the minimal value of the quantity in the brackets. Now, the $z$ order parameter is defined as: \begin{equation} z=\frac{t_2-t_1}{t_2+t_1} \end{equation} It is easy to see that $z$ is bounded between $-1$ and $1$. For totally in-phase synchronized dynamics we have $t_1=0$, leading to $z=1$. For totally anti-phase synchronized dynamics $t_2=0$, and we get $z=-1$. 
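For concreteness, the following Python sketch computes $z$ from sampled trajectories $\theta_1(t)$ and $\theta_2(t)$ given on a common time grid; the function names and the neighbor-comparison rule for detecting local extrema are our own assumptions, not taken from the original code.

\begin{verbatim}
import numpy as np

def z_order_parameter(theta1, theta2, t):
    """z from sampled trajectories theta1(t), theta2(t) on a grid t."""
    # interior local extrema found by comparing neighboring samples
    def local_max(x):
        return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    def local_min(x):
        return np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1

    t_max = t[local_max(theta1)]   # t_i^{max} of metronome 1
    t_min = t[local_min(theta1)]   # t_i^{min} of metronome 1
    T_max = t[local_max(theta2)]   # T_j^{max} of metronome 2

    # average distance of each maximum of theta2 to the nearest
    # maximum (t1) and minimum (t2) of theta1
    t1 = np.mean([np.abs(t_max - Tj).min() for Tj in T_max])
    t2 = np.mean([np.abs(t_min - Tj).min() for Tj in T_max])
    return (t2 - t1) / (t2 + t1)
\end{verbatim}

For perfectly in-phase trajectories the maxima of the two signals coincide, so $t_1=0$ and the function returns $z=1$; for perfectly anti-phase trajectories $t_2=0$ and it returns $z=-1$, in agreement with the definition above.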
Negative $z$ values suggest a dynamic where the anti-phase synchronized states are dominant, while positive $z$ values suggest a dynamic with more pronounced in-phase synchronized states. The $z$ order parameter was estimated numerically for different initial conditions. A velocity Verlet-type algorithm was used, and simulations were performed up to a $t=4000$ $s$ time interval, with a $dt=0.01$ $s$ time-step. Initially, the deviation angle of the first metronome was chosen as $\theta_1(0)=\theta_{max}=0.1$ rad and $\theta_2(0)$ was chosen in the interval $[-0.1,0.1]$ rad, leading to various initial phase-differences between them. The computed $z$ values as a function of $\theta_2(0)$ are plotted in Figure \ref{q-op}. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG7.eps} } \caption{The $z$ order parameter as a function of the initial phase $\theta_2(0)$. The results obtained without damping ($c_{\theta}=0$, $c_{\phi}=0$) and driving ($M=0$) indicate that phase-locking and complete synchronization are possible only if the system starts from such a situation. For small damping and driving values ($M=6 \times 10^{-5}$ Nm/s), both in-phase and anti-phase synchronization are possible. For large driving intensities ($M=6 \times 10^{-4}$ Nm/s), only the in-phase synchronization is stable. The latter is the realistic parameter for the experimental metronome setup.} \label{q-op} \end{figure} The above results suggest that for the friction-free and undriven case (Figure \ref{q-op}), synchronization and phase-locking of coupled identical metronomes are possible only if they start either in completely in-phase or completely anti-phase configurations. Depending on how the phases are initialized, the ticking dynamics statistically resemble either the in-phase or the anti-phase states, but no phase-locking or synchronization is observable. Starting from an arbitrary initial condition, a complete in-phase or anti-phase synchronization is possible only if there is dissipation and driving. For small dissipation and driving values both the in-phase and the anti-phase synchronization are possible, as the results obtained for $M=6 \times 10^{-5}$ Nm/s suggest. In this limit, in-phase synchronization will emerge if the initial phases are closer to such a configuration. Alternatively, if the initial conditions resemble the anti-phase configuration, a stable anti-phase synchronization emerges. For higher dissipation and driving values (characteristic of our experimental setup, $M=6 \times 10^{-4}$ Nm/s), this apparently symmetric picture breaks down, and the in-phase synchronization is the one that remains stable. Anti-phase synchronization is unlikely to be observed; it will appear only when the two metronomes are started exactly in anti-phase ($\theta_2(0)=-\theta_1(0)$). In view of these results, one can understand why only the stable in-phase synchronized dynamics was observed in our experiments. The results also emphasize the importance of using realistic model parameters in order to reproduce the observed dynamics. \section{Simulation results for several metronomes} Using the model defined in Section \ref{theo}, our aim here is to theoretically understand the experimentally obtained trends. The equations of motion (\ref{eqm1}) and (\ref{eqm2}) were numerically integrated using a velocity Verlet-type algorithm. A time-step of $dt=0.01$ $s$ was chosen. First, we intended to explain the experimental results presented in Figure \ref{fig5}. 
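As an illustration of the quantities entering the numerical integration, the following Python sketch implements the right-hand sides of Equations (\ref{eqm1}) and (\ref{eqm2}), the impulse-like driving rule defined earlier, and the Kuramoto order parameter. The time-stepping loop of our velocity Verlet-type scheme is omitted, and all names, as well as the parameter-dictionary layout, are our own assumptions.

\begin{verbatim}
import numpy as np

def accelerations(phi_d, th, th_d, Mi, p):
    """Right-hand sides of Eqs. (eqm1)-(eqm2). p is a parameter dict with
    scalars m, R, g, J, N, c_phi, c_theta and arrays h, Ji of length N."""
    m, R, g = p['m'], p['R'], p['g']
    den = m * p['h']**2 + p['Ji']                     # m h_i^2 + J_i
    A = m**2 * g * R * np.sum(p['h']**2 * np.sin(th) * np.cos(th) / den)
    B = m * R * p['c_theta'] * np.sum(p['h'] * th_d * np.cos(th) / den)
    C = m * R * np.sum(p['h'] * Mi * np.cos(th) / den)
    D = p['J'] + p['N'] * m * R**2 \
        - m**2 * R**2 * np.sum(p['h']**2 * np.cos(th)**2 / den)
    phi_dd = (m * R * np.sum(p['h'] * th_d**2 * np.sin(th))
              - p['c_phi'] * phi_d - np.sum(Mi) + A + B - C) / D
    th_dd = (Mi - m * R * phi_dd * p['h'] * np.cos(th)
             - m * g * p['h'] * np.sin(th) - p['c_theta'] * th_d) / den
    return phi_dd, th_dd

def driving(th_prev, th, M, dt):
    """Impulse-like driving: +/- M/dt when a bob crosses theta = 0."""
    Mi = np.zeros_like(th)
    Mi[(th_prev < 0) & (th > 0)] = M / dt
    Mi[(th_prev > 0) & (th < 0)] = -M / dt
    return Mi

def kuramoto_r(phases):
    """Kuramoto order parameter r = |<exp(i*phase)>| of the ensemble."""
    return np.abs(np.mean(np.exp(1j * phases)))
\end{verbatim}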
Seven metronomes with the same $\omega_i$ natural frequencies as the experimentally measured ones were considered, and the time-evolution of the Kuramoto order parameter was computed. Results obtained for different $\omega_0$ frequency values are presented in the top panel of Figure \ref{fig8}. For the sake of better statistics, we averaged the results of $100$ simulations. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG8a.eps} } \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG8b.eps} } \caption{Simulation results for the time-evolution of the Kuramoto order parameter. (top) Results for the same natural frequencies as the ones used in the experiments. (bottom) Results for the same number of pendulums and nominal beat frequency ($\omega_0=192$ BPM) as the ones used in the experiments. For both graphs, the presented results are averages over $100$ independent simulations. The corresponding experimental results are presented in Figure \ref{fig5}. } \label{fig8} \end{figure} The obtained results are in good agreement with the experimental results presented in Figure \ref{fig5}. Following our experiments, we have also studied the time-evolution of the order parameter for different numbers of pendulums, setting the same $\omega_0=192$ BPM natural frequency as in the experiments. Again, we averaged the results over $100$ independent simulations. The obtained trend is sketched in the bottom panel of Figure \ref{fig8}. The trend of the simulation results is in agreement with the experimental one: increasing the number of metronomes results in a decrease in the observed synchronization level. In simulations, however, this decrease is not as evident as in the experiments. The reason for this could be the oversimplified manner in which we have handled the differences between the metronomes. In our model, the only difference between the metronomes is in the $L_2^{(i)}$ values (the distance of the movable weight from the horizontal suspension axis, see Figure \ref{fig2}). In our simulations, the non-zero spread of these values is the sole source of the $\sigma$ standard deviation of the frequencies $\omega_i$. However, in reality many other parameters of the metronomes differ, leading to further differences in the model parameters entering their equations of motion. As a result, a more pronounced variation in the synchronization level is to be expected. In spite of the discrepancy discussed above, the simulation results suggest that our model with realistic model parameters works well for describing the dynamics of the coupled metronome system. In order to illustrate the effectiveness of our approach more quantitatively, we have plotted the simulated equilibrium synchronization level, $r_{sim}$, as a function of the experimentally determined value, $r_{exp}$, for the case of $N=7$ metronomes. The plot in Figure \ref{comp} suggests that there is a satisfactory correlation. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG9.eps} } \caption{Comparison between the simulated and experimentally obtained equilibrium synchronization level. Circles represent the results obtained for the $\omega_0$ frequencies shown in Table 1. The straight dashed line indicates the ideal $r_{sim}=r_{exp}$ limit.} \label{comp} \end{figure} One can also investigate through simulations several interesting cases that are not feasible experimentally. Many interesting questions can be formulated this way. 
Here, however, we focus only on clarifying the problems that we have investigated experimentally, namely the influence of the number of oscillators and the chosen natural frequency on the observed synchronization level. Computer simulations allow us to consider a higher number of metronomes and also allow a continuous variation of the metronomes' natural frequencies. In particular, we are interested in clarifying whether, in the thermodynamic limit ($N\rightarrow \infty$), there is a clear $\omega_0=\omega_c$ frequency threshold below which there is no synchronization in a system with fixed standard deviation ($\sigma$) of the metronomes' frequencies. Also, we would like to show that the reason for not obtaining complete synchronization ($r=1$) of the metronomes is the finite value of $\sigma$. Considering a normal distribution of the metronomes' natural frequencies $\omega_i$ with a fixed standard deviation equal to the mean of the standard deviations presented in Table 1 ($\sigma=8 \times 10^{-7}$ BPM), we first studied how the Kuramoto order parameter, $r$, varies as a function of $\omega_0$. Results obtained for a wide range of the number of metronomes, $N$, are plotted in the top panel of Figure \ref{fre}. The results plotted in Figure \ref{fre} suggest that, in the $N\rightarrow \infty$ limit, a clear phase-transition-like phenomenon emerges. Around the value of $\omega_c=185$ BPM the order parameter exhibits a sharp variation, which becomes sharper and sharper as the number of metronomes is increased. This is a clear sign of phase-transition-like behavior. Plotting the standard deviation of the order parameter values obtained from different experiments, we get a characteristic peak around the $\omega_c=185$ BPM value. As is expected for a phase-transition-like phenomenon, this peak narrows as the number of metronomes on the disk increases. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG10a.eps} } \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG10b.eps} } \caption{Simulation results for the Kuramoto order parameter (top) and its standard deviation, $\sigma_r$, over the $100$ computational experiments (bottom) as a function of the $\omega_0$ frequency. Different curves are for different numbers of metronomes as indicated in the legends. In each case, the standard deviation of the metronomes' natural frequency $\omega_i$ is fixed as close as possible to $\sigma=8 \times 10^{-7}$ BPM.} \label{fre} \end{figure} Our next aim is to prove that the reason for not reaching the $r=1$ complete synchronization is the finite standard deviation of the metronomes' natural frequencies. Simulations with up to $64$ identical metronomes with $\omega_0=192$ BPM were considered, and the $r(t)$ dynamics of the Kuramoto order parameter was investigated. Results for different numbers of metronomes are plotted in Figure \ref{fig11}. From these graphs one can readily observe that in each case the completely synchronized state emerged. This proves that the lack of complete synchronization is due to the finite spread in the metronomes' natural frequencies. From this simulation we have also learned that the variations of the equilibrium order parameter value as a function of $N$ are also due to the finite $\sigma$ value. \begin{figure}[ht!] \centering \resizebox{0.40\textwidth}{!}{% \includegraphics{FIG11.eps} } \caption{Simulation results for identical metronomes. 
The time evolution of the order parameter for various numbers of metronomes, ranging from 2 to 64, is plotted ($\omega_0=192$ BPM).} \label{fig11} \end{figure} \section{Conclusions} The dynamics of a system composed of coupled metronomes was investigated both by simple experiments and computer simulations. We were interested in finding the conditions for the emergence of synchronization. Contrary to many previous studies, here the problem was analyzed not from the viewpoint of dynamical systems, but from the viewpoint of collective behavior and emerging synchronization. The experiments suggest that there is a limiting natural frequency of the metronomes below which spontaneous synchronization is not possible. By increasing the frequency above this limit, partial synchronization emerges. The obtained synchronization level increases monotonically as the natural frequency of the oscillators increases. The experiments also suggest that increasing the number of metronomes in the system leads to a decrease of the observed synchronization level. In order to better understand the dynamics of the system, a realistic model was built. We have shown that damping due to friction forces and the presence of driving are both important for understanding the emerging synchronization. The parameters of the model were fixed in agreement with the experimental conditions, and the equations of motion were integrated numerically. The model proved to be successful in describing the experimental results, and reproduced the experimentally observed trends. The model allowed a fine verification of our findings regarding the conditions under which spontaneous synchronization emerges and the trends in the observed synchronization level. Computer simulations suggested that, for an ensemble of metronomes with a fixed standard deviation of their natural frequencies, the order parameter increases as a function of the metronomes' average frequency, $\omega_0$. The model also suggests that this increase happens sharply for large ensembles, closely resembling a phase-transition-like phenomenon. With the help of the simulations we have also shown that the reason behind the incomplete synchronization ($r<1$) is the finite spread of the metronomes' natural frequencies ($\sigma \ne 0$). The success of the discussed model opens the way for many further studies regarding the dynamics of this simple system. Indeed, many other interesting questions can be formulated regarding the influence of the metronome and rotating-platform parameters on the obtained synchronization level and the observed trends. Also, one can study systems where the metronomes or groups of metronomes are set to different natural frequencies, or where there is an external driving force acting on the system. The discussed model has the advantage that the equations of motion are easily integrable and the model parameters are realistic, with a direct connection to the parameters of an experimentally realizable system. Finally, we hope that the novel experimental setup and the results presented here will help in clarifying some aspects of one of the oldest problems in physics, namely the spontaneous synchronization of coupled pendulum clocks. Although several similar problems have been considered in previous studies, we have shown that there are still many fascinating aspects that one can investigate in this simple mechanical system. 
\nocite{*} \section{Acknowledgments} Work supported by the Romanian IDEAS research grant PN-II-ID-PCE-2011-3-0348. The work of B.Sz. is supported by the POSDRU/107/1.5/S/76841 PhD fellowship. B. Ty. acknowledges the support of Collegium Talentum Hungary. We thank E. Kaptalan from SC Rulmenti Suedia for constructing the rotating platform.
\section{Introduction} Over the past few decades, artificial neural networks (ANNs) have been employed for various classification tasks, many of them previously performed by humans. One popular example is computer-assisted diagnosis (CAD), in which the output of the ANN may assist doctors in making more accurate decisions \cite{eadie2012systematic}. Most applications in CAD consist of a machine learning model performing tasks that would otherwise be carried out by hand by specialists, reducing the financial and human costs while also avoiding possible mistakes caused by fatigue \cite{pereira2019survey,qian1995computer}. Moreover, computers have been repeatedly reported to outperform unaided professionals in these tasks \cite{qian1995computer,leaper1972computer}. CAD applications have been effectively used, for example, for leukemia identification \cite{Salah2019MachineDirections}. Acute lymphoblastic leukemia (ALL) is a blood pathology that can be lethal in only a few weeks if left unchecked. ALL is a type of blood cancer identified by the presence of immature lymphocytes, known as lymphoblasts, in the blood and bone marrow. The peak incidence lies in children aged 2--5 years, and one of the primary forms of ALL detection is through microscopic blood sample inspection. Computer-aided leukemia diagnosis has been achieved using many different approaches in the past years, including deep learning models combined with transfer learning and unsharpening techniques \cite{Bibi2020IoMT-Based,Genovese2021HistopathologicalDetection,Genovese2021AcuteLearning,Zolfaghari2022ACells}. Also, it has been reported in the literature that a quaternion-valued convolutional neural network exhibited better generalization capability than its corresponding real-valued model when classifying white cells as lymphoblasts \cite{Granero2021Quaternion-ValuedDiagnosis}. Furthermore, the quaternion-valued CNN has only about 34\% of the total number of parameters of the corresponding real-valued network. This paper further investigates the application of hypercomplex-valued convolutional neural networks (HvCNNs) to ALL detection. Like complex numbers, quaternions are hypercomplex numbers with a wide range of applications in science and engineering. For example, quaternions provide an efficient tool for describing 3D rotations used in computer graphics. Furthermore, quaternion-valued neural networks have been effectively applied for signal processing and control \cite{Talebi2020Quaternion-ValuedControl,Takahashi2021ComparisonControl}, image classification \cite{shang14,Valle2020Quaternion-valuedQuaternions}, and many other pattern recognition tasks \cite{Parcollet2020ANetworks,Bayro-Corrochano2020QuaternionApplications}. However, besides the quaternions, many other hypercomplex-valued algebras exist, including the coquaternions, the tessarines, the Clifford algebras, and the Cayley-Dickson algebras. The tessarines, introduced by Cockle a few years after the introduction of the quaternions, form a commutative hypercomplex algebra that has been effectively used for signal processing and the development of neural networks \cite{Navarro-Moreno2020TessarineCondition,Navarro-Moreno2021Wide-senseConditions,Ortolani2017OnProcessing,Carniello2021UniversalApproximationTessarineNetworks,Senna2021TessarineQuaternionImageClassification}. The Clifford algebras comprise a broad family of associative hypercomplex algebras with interesting geometric properties \cite{hestenes12,hitzer13,vaz16}. 
A family of hypercomplex algebras, obtained from the recursive Cayley-Dickson process, can be used to develop effective HvNNs \cite{Brown1967OnAlgebras.,Vieira2020ExtremeAuto-Encoding}. Furthermore, four-dimensional hypercomplex algebras have been used for designing neural networks for controlling a robot manipulator \cite{Takahashi2021ComparisonControl}. This paper considers the hypercomplex algebra framework proposed by Kantor and Solodovnikov \cite{Kantor1989HypercomplexAlgebras}. This approach furnishes a broad class of hypercomplex algebras which comprises the tessarines, the Clifford algebras, and the Cayley-Dickson algebras. On the downside, the general framework of Kantor and Solodovnikov also contains many hypercomplex algebras with no attractive features. Thus, we shall focus only on eight four-dimensional associative hypercomplex algebras, four of which are commutative. Among the commutative algebras, we consider the tessarines, the bicomplex numbers, and the Klein 4-group algebra \cite{Takahashi2021ComparisonControl,Kobayashi2020HopfieldFour-group}. The four non-commutative hypercomplex algebras include the quaternions and the coquaternions; they are isomorphic to Clifford as well as Cayley-Dickson algebras \cite{Vieira2020ExtremeAuto-Encoding,Vieira2022AMachines}. The paper is organized as follows: The next section presents the basic concepts of hypercomplex algebras and explores notable four-dimensional algebras. Section \ref{sec:hvcnn} addresses hypercomplex-valued CNN models and shows how to emulate them using real-valued convolutional networks. Section \ref{sec:Applications} describes the computational experiments conducted using the ALL-IDB dataset. The paper finishes with some concluding remarks in Section \ref{sec:concluding-remarks}. \section{Hypercomplex Numbers and Some Notable Associative Four-Dimensional Algebras} \label{sec:hc-matrix-algebra} Let us present a few basic definitions regarding hypercomplex numbers. Hypercomplex algebras can be defined over any field, but in this work we focus on algebras over the real numbers. As pointed out in the introduction, we consider the general framework proposed by Kantor and Solodovnikov, which includes the most widely used hypercomplex algebras \cite{Kantor1989HypercomplexAlgebras}. We finish this section by presenting eight notable associative four-dimensional hypercomplex algebras. \subsection{A Brief Review on Hypercomplex Number Systems} \label{sec:review} A hypercomplex algebra $\mathbb{A}$ of dimension $n+1$ over $\mathbb{R}$ is a set \begin{equation} \mathbb{A} = \{ x = x_0 + x_1 \boldsymbol{i}_1 + x_2 \boldsymbol{i}_2 + \dots + x_n \boldsymbol{i}_n: x_\mu \in \mathbb{R}, \forall \mu \}, \end{equation} equipped with two operations, namely addition and multiplication \cite{Kantor1989HypercomplexAlgebras}. The symbols $\boldsymbol{i}_\mu$, with $\boldsymbol{i}_\mu \notin \mathbb{R}$ for all $\mu = 1,\ldots,n$, denote the so-called hyperimaginary units of $\mathbb{A}$. The addition of two hypercomplex numbers $x = \hyperN{x}$ and $y = \hyperN{y}$ is simply defined as \begin{equation} \label{eq:addition} x + y = (x_0+y_0)+(x_1+y_1)\boldsymbol{i}_1+\ldots+(x_n+y_n)\boldsymbol{i}_n. 
\end{equation} The multiplication operation is carried out distributively, replacing the product of two hypercomplex units $\boldsymbol{i}_\mu$ and $\boldsymbol{i}_\nu$ by a hypercomplex number $p_{\mu\nu} := \boldsymbol{i}_\mu \boldsymbol{i}_\nu$, where \begin{equation} \label{eq:multiplication_rule} p_{\mu\nu} = \hyperN{(p_{\mu\nu})} \in \mathbb{A}, \end{equation} for all $\mu,\nu=1,\ldots,n$. Precisely, the multiplication of two hypercomplex numbers is given by the equation \begin{equation} \begin{aligned} \label{eq:multiplication} xy &= \left(x_0 y_0 + \sum_{\mu,\nu=1}^n x_\mu y_\nu (p_{\mu\nu})_0 \right) \\ & + \left(x_0y_1 + x_1 y_0+ \sum_{\mu,\nu=1}^n x_\mu y_\nu (p_{\mu\nu})_1 \right) \boldsymbol{i}_1 + \ldots \\ &+ \left( x_0 y_n + x_ny_0 + \sum_{\mu,\nu=1}^n x_\mu y_\nu (p_{\mu\nu})_n \right)\boldsymbol{i}_n. \end{aligned} \end{equation} Note that the products of the hyperimaginary units characterize a hypercomplex algebra $\mathbb{A}$. In other words, the hypercomplex numbers $p_{\mu\nu}$ given by \eqref{eq:multiplication_rule} determine $\mathbb{A}$. It is common practice to arrange the hypercomplex numbers $p_{\mu\nu}$ in a table, called the multiplication table. Examples of multiplication tables are given in Tables \ref{tab:four-dimension-alg} and \ref{tab:non_cd_algebras}. Algebraic properties of a hypercomplex algebra $\mathbb{A}$ can be inferred from its multiplication table. For example, symmetric multiplication tables represent commutative hypercomplex algebras \cite{Kantor1989HypercomplexAlgebras}. Note that the four multiplication tables in Table \ref{tab:non_cd_algebras} are symmetric. Thus, they represent commutative hypercomplex algebras. We will discuss the multiplication tables provided in Tables \ref{tab:four-dimension-alg} and \ref{tab:non_cd_algebras} in detail in the following subsection. A scalar $\alpha \in \mathbb{R}$ can be identified with the hypercomplex number $\alpha = \alpha + 0 \boldsymbol{i}_1 + \dots + 0 \boldsymbol{i}_n$, and vice-versa. Furthermore, from \eqref{eq:multiplication}, the product by a scalar in any algebra $\mathbb{A}$ satisfies \begin{equation} \label{eq:scalar_product} \alpha x = \hyperN{\alpha x}. \end{equation} This remark shows that a hypercomplex algebra $\mathbb{A}$ can be identified with an $(n+1)$-dimensional vector space with the vector addition and the product by a scalar given by \eqref{eq:addition} and \eqref{eq:scalar_product}, respectively. Moreover, the basis elements of such an $(n+1)$-dimensional vector space are $1,\boldsymbol{i}_1,\ldots,\boldsymbol{i}_n$. Besides the addition and the product by a scalar, a hypercomplex algebra is equipped with the multiplication of vectors given by \eqref{eq:multiplication}. We would like to point out that different multiplication tables do not necessarily yield different hypercomplex algebras. Precisely, we know from linear algebra that a vector space can be represented using different bases. Similarly, a hypercomplex algebra can be obtained from different bases or, equivalently, using different hypercomplex units. Since the multiplication table determines the outcome of the product of any two hypercomplex units, a change of basis results in a different multiplication table. Because of this remark, we say two hypercomplex algebras $\mathbb{A}$ and $\mathbb{A}'$ are isomorphic if they differ by a change of basis. 
In other words, $\mathbb{A}$ and $\mathbb{A}'$ are isomorphic hypercomplex algebras if the multiplication table of $\mathbb{A}'$ can be obtained from the multiplication table of $\mathbb{A}$ through a change of basis. \subsection{Four-dimensional Hypercomplex Algebras} \label{sec:4dim} Let us now focus on four-dimensional hypercomplex algebras, i.e., hypercomplex algebras of the form \begin{equation} \label{eq:4D} \mathbb{A} = \{x = \quat{x}: x_0,\ldots,x_3 \in \mathbb{R}\}, \end{equation} where $\boldsymbol{i} \equiv \boldsymbol{i}_1$, $\boldsymbol{j} \equiv \boldsymbol{i}_2$, and $\boldsymbol{k} \equiv \boldsymbol{i}_3$ are the three hypercomplex units. Four-dimensional hypercomplex algebras are particularly useful for image processing because a color can be represented using a single hypercomplex number. In other words, four-dimensional hypercomplex algebras allow representing a color by a single entity. Note that the freedom in choosing the hypercomplex numbers $p_{\mu\nu} = \quat{(p_{\mu\nu})}$ in the multiplication table results in a large number of algebras. Some of these algebras are isomorphic, and many of them have no attractive features. Therefore, we present in the following a construction rule that yields eight associative four-dimensional hypercomplex algebras. As in the construction of Clifford algebras \cite{Renaud2020CLIFFORDPHYSICS}, we assume the product of hypercomplex units is associative. Furthermore, we let the identity \begin{equation} \label{eq:k=ij} \boldsymbol{k} = \boldsymbol{i} \boldsymbol{j}, \end{equation} hold true. Finally, we assume the four-dimensional hypercomplex algebra is either commutative or anticommutative. \subsubsection{Anticommutative Algebras} We obtain an anticommutative four-dimensional hypercomplex algebra by imposing $\boldsymbol{i} \boldsymbol{j} = -\boldsymbol{j} \boldsymbol{i}$. From the associativity and \eqref{eq:k=ij}, we obtain \begin{align} \boldsymbol{k}^2 &= (\boldsymbol{i}\boldsymbol{j})(\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{i} (\boldsymbol{j} \boldsymbol{i}) \boldsymbol{j} = \boldsymbol{i} (-\boldsymbol{i} \boldsymbol{j}) \boldsymbol{j} = - \boldsymbol{i}^2 \boldsymbol{j}^2, \label{eq:A-k2}\\ \boldsymbol{i} \boldsymbol{k} &= \boldsymbol{i} (\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{i}^2 \boldsymbol{j}, \\ \boldsymbol{j} \boldsymbol{k} &= \boldsymbol{j} (\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{j} (-\boldsymbol{j}\boldsymbol{i}) = - \boldsymbol{j}^2 \boldsymbol{i}. \label{eq:A-jk} \end{align} To simplify the exposition, let $\boldsymbol{i}^2 = \gamma_1$ and $\boldsymbol{j}^2 = \gamma_2$. From \eqref{eq:k=ij}-\eqref{eq:A-jk}, we obtain an associative and anticommutative four-dimensional algebra denoted by $A[\gamma_1,\gamma_2]$, whose multiplication table is \begin{equation} \begin{tabular}{c|rrr} $A[\gamma_1,\gamma_2]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $\gamma_1$ & $\boldsymbol{k}$ & $\gamma_1 \boldsymbol{j}$ \\ $\boldsymbol{j}$ & $-\boldsymbol{k}$ & $\gamma_2$ & $-\gamma_2 \boldsymbol{i}$ \\ $\boldsymbol{k}$ & $-\gamma_1 \boldsymbol{j}$ & $\gamma_2 \boldsymbol{i}$ & $-\gamma_1\gamma_2$ \end{tabular} \end{equation} By considering $\gamma_1,\gamma_2 \in \{-1,+1\}$, we obtain the four-dimensional hypercomplex algebras $A[-1,-1]$, $A[-1,+1]$, $A[+1,-1]$, and $A[+1,+1]$ whose multiplication tables are depicted in Table \ref{tab:four-dimension-alg}. 
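The construction rule can also be summarized programmatically. The following Python sketch (our own naming, not from the original source) returns the products of the units $\boldsymbol{i}$, $\boldsymbol{j}$, $\boldsymbol{k}$ for the anticommutative family $A[\gamma_1,\gamma_2]$ and, with the opposite sign choice, for the commutative family $B[\gamma_1,\gamma_2]$ constructed below; products with $1$ are trivial and omitted.

\begin{verbatim}
def mul_table(g1, g2, commutative):
    """Products of the units i, j, k in A[g1,g2] (commutative=False)
    or B[g1,g2] (commutative=True); each entry is (coefficient, unit)."""
    s = 1 if commutative else -1
    return {
        ('i', 'i'): (g1, '1'),      ('j', 'j'): (g2, '1'),
        ('k', 'k'): (s * g1 * g2, '1'),
        ('i', 'j'): (1, 'k'),       ('j', 'i'): (s, 'k'),
        ('i', 'k'): (g1, 'j'),      ('k', 'i'): (s * g1, 'j'),
        ('j', 'k'): (s * g2, 'i'),  ('k', 'j'): (g2, 'i'),
    }

# e.g., mul_table(-1, -1, commutative=False) reproduces the quaternion
# table of Table 1, and mul_table(-1, +1, commutative=True) the tessarines.
\end{verbatim}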
\begin{table*}[t] \centering \caption{Multiplication tables of the anticommutative algebras.} \label{tab:four-dimension-alg} \begin{tabular}{cccc} Quaternions & $C\ell_{2,0}$ & Coquaternions & $C\ell_{1,1}$ \\ \begin{tabular}{c|rrr} $A[-1,-1]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $-1$ & $\boldsymbol{k}$ & $-\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $-\boldsymbol{k}$ & $-1$ & $\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $\boldsymbol{j}$ & $-\boldsymbol{i}$ & $-1$\\ \end{tabular} & \begin{tabular}{c|rrr} $A\left[+1,+1\right]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $1$ & $\boldsymbol{k}$ & $\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $-\boldsymbol{k}$ & $1$ & $-\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $-\boldsymbol{j}$ & $\boldsymbol{i}$ & $-1$\\ \end{tabular} & \begin{tabular}{c|rrr} $A\left[-1,+1 \right]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $-1$ & $\boldsymbol{k}$ & $-\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $-\boldsymbol{k}$ & $1$ & $-\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $\boldsymbol{j}$ & $\boldsymbol{i}$ & $1$\\ \end{tabular} & \begin{tabular}{c|rrr} $A\left[ +1,-1 \right]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $1$ & $\boldsymbol{k}$ & $\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $-\boldsymbol{k}$ & $-1$ & $\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $-\boldsymbol{j}$ & $-\boldsymbol{i}$ & $1$\\ \end{tabular} \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Multiplication tables of commutative algebras.} \label{tab:non_cd_algebras} \begin{tabular}{cccc} Bicomplex numbers & & Tessarines & Klein 4-group \\ \begin{tabular}{c|rrr} $B[-1,-1]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $-1$ & $\boldsymbol{k}$ & $-\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $\boldsymbol{k}$ & $-1$ & $-\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $-\boldsymbol{j}$ & $-\boldsymbol{i}$ & $1$\\ \end{tabular} & \begin{tabular}{c|rrr} $B[+1,-1]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $1$ & $\boldsymbol{k}$ & $\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $\boldsymbol{k}$ & $-1$ & $-\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $\boldsymbol{j}$ & $-\boldsymbol{i}$ & $-1$\\ \end{tabular} & \begin{tabular}{c|rrr} $B[-1,+1]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $-1$ & $\boldsymbol{k}$ & $-\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $\boldsymbol{k}$ & $1$ & $\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $-\boldsymbol{j}$ & $\boldsymbol{i}$ & $-1$\\ \end{tabular} & \begin{tabular}{c|rrr} $B[+1,+1]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $1$ & $\boldsymbol{k}$ & $\boldsymbol{j}$ \\ $\boldsymbol{j}$ & $\boldsymbol{k}$ & $1$ & $\boldsymbol{i}$ \\ $\boldsymbol{k}$ & $\boldsymbol{j}$ & $\boldsymbol{i}$ & $1$\\ \end{tabular} \end{tabular} \end{table*} \begin{remark} The hypercomplex algebra $A[-1,-1]$ coincides with the quaternions because they have the same multiplication table. The algebra $A[-1,+1]$ corresponds to the co-quaternions, also known as split-quaternions \cite{Takahashi2021ComparisonControl}. Similarly, $A[+1,+1]$ and $A[+1,-1]$ can be identified with the Clifford algebras $C\ell_{2,0}$ and $C\ell_{1,1}$, respectively. Furthermore, the algebras $A[-1,+1]$, $A[+1,-1]$, and $A[+1,+1]$ are all isomorphic. 
Finally, we would like to remark that the algebras $A[-1,-1]$, $A[-1,+1]$, $A[+1,-1]$, and $A[+1,+1]$ can be derived using the generalized Cayley-Dickson process \cite{Brown1967OnAlgebras.,Vieira2020ExtremeAuto-Encoding}. \end{remark} \subsubsection{Commutative Algebras} In a similar fashion, we impose the condition $\boldsymbol{i} \boldsymbol{j} = \boldsymbol{j} \boldsymbol{i}$ to obtain the commutative four-dimensional hypercomplex algebras. Again, using associativity and \eqref{eq:k=ij}, we are able to write the identities \begin{align} \boldsymbol{k}^2 &= (\boldsymbol{i}\boldsymbol{j})(\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{i} (\boldsymbol{j} \boldsymbol{i}) \boldsymbol{j} = \boldsymbol{i} (\boldsymbol{i} \boldsymbol{j}) \boldsymbol{j} = \boldsymbol{i}^2 \boldsymbol{j}^2, \label{eq:B-k2}\\ \boldsymbol{i} \boldsymbol{k} &= \boldsymbol{i} (\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{i}^2 \boldsymbol{j}, \\ \boldsymbol{j} \boldsymbol{k} &= \boldsymbol{j} (\boldsymbol{i}\boldsymbol{j}) = \boldsymbol{j} (\boldsymbol{j}\boldsymbol{i}) = \boldsymbol{j}^2 \boldsymbol{i}. \label{eq:B-jk} \end{align} By expressing $\boldsymbol{i}^2 = \gamma_1$ and $\boldsymbol{j}^2 = \gamma_2$, we obtain a commutative hypercomplex algebra $B[\gamma_1,\gamma_2]$ whose multiplication table is \begin{equation} \begin{tabular}{c|rrr} $B[\gamma_1,\gamma_2]$ & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $\gamma_1$ & $\boldsymbol{k}$ & $\gamma_1 \boldsymbol{j}$ \\ $\boldsymbol{j}$ & $\boldsymbol{k}$ & $\gamma_2$ & $\gamma_2 \boldsymbol{i}$ \\ $\boldsymbol{k}$ & $\gamma_1 \boldsymbol{j}$ & $\gamma_2 \boldsymbol{i}$ & $\gamma_1\gamma_2$ \end{tabular} \end{equation} Analogously, we end up with the four-dimensional algebras $B[-1,-1]$, $B[-1,+1]$, $B[+1,-1]$, and $B[+1,+1]$ by taking $\gamma_1,\gamma_2 \in \{ -1, +1\}$. Table \ref{tab:non_cd_algebras} contains the multiplication tables of these four algebras. \begin{remark} Because they have the same multiplication table, the hypercomplex algebra $B[-1,+1]$ corresponds to the tessarines, also known as commutative quaternions \cite{Ortolani2017OnProcessing,Navarro-Moreno2020TessarineCondition}. Similarly, the algebra $B[-1,-1]$ corresponds to the bicomplex numbers \cite{Takahashi2021ComparisonControl}, while $B[+1,+1]$ is equivalent to the Klein 4-group, a commutative algebra of great interest in symmetric group theory \cite{Kobayashi2020HopfieldFour-group}. Finally, we would like to point out that the algebras $B[-1,-1]$ and $B[+1,-1]$ can both be obtained from $B[-1,+1]$ by a change of basis. Therefore, the three algebras $B[-1,-1]$, $B[+1,-1]$, and $B[-1,+1]$ are all isomorphic. \end{remark} In conclusion, we have a total of eight four-dimensional hypercomplex algebras. All of them are associative; four of them are anticommutative and the remaining four are commutative. Furthermore, the products of the hypercomplex units are well structured in their multiplication tables. 
Precisely, their multiplication table can be written as follows \begin{equation} \label{eq:s_table} \begin{tabular}{c|rrr} & $\boldsymbol{i}$ & $\boldsymbol{j}$ & $\boldsymbol{k}$ \\ \hline $\boldsymbol{i}$ & $s_{11}$ & $s_{12}\boldsymbol{k}$ & $s_{13} \boldsymbol{j}$ \\ $\boldsymbol{j}$ & $s_{21}\boldsymbol{k}$ & $s_{22}$ & $s_{23} \boldsymbol{i}$ \\ $\boldsymbol{k}$ & $s_{31} \boldsymbol{j}$ & $s_{32} \boldsymbol{i}$ & $s_{33}$ \end{tabular} \end{equation} where $s_{ij} \in \{-1,+1\}$, for all $i,j = 1,2,3$, depends on the parameters $\gamma_1$ and $\gamma_2$ as well as on the commutativity or anticommutativity of the multiplication. The multiplication table \eqref{eq:s_table} allowed us to efficiently implement convolutional neural networks based on the eight hypercomplex algebras presented in this section. We detail this remark in the following section. \section{Hypercomplex-valued Convolutional Neural Networks (HvCNN)} \label{sec:hvcnn} Convolutional layers are crucial building blocks of convolutional neural networks. The main strength of convolutional layers is their ability to process data locally and, thus, learn local patterns. Convolutional neural networks have been widely applied to image processing tasks \cite{Geron19HandsOn}. Let us briefly describe a real-valued convolutional layer. \subsection{Real-valued Convolutional Layers} Suppose a convolutional layer is fed by a real-valued image $\mathbf{I}$ with $C$ feature channels. Let $\mathbf{I}(p,c) \in \mathbb{R}$ denote the intensity of the $c$th channel of the image $\mathbf{I}$ at pixel $p$. The neurons of a convolutional layer are parameterized structures called filters. The filters have the same number $C$ of channels as the image $\mathbf{I}$. Let $D$, commonly a rectangular grid, denote the domain of the filters. Also, let the weights of a convolutional layer with $K$ real-valued filters be arranged in an array $\mathbf{F}$ such that $\mathbf{F}(q,c,k)$ denotes the value of the $c$th channel of the $k$th filter at the point $q \in D$, for $c=1,\ldots,C$ and $k = 1,\ldots,K$. A convolutional layer with $K$ filters yields a real-valued image $\mathbf{J}$ with $K$ feature channels obtained by evaluating an activation function on the sum of a bias term and the convolution of the image $\mathbf{I}$ with each of the $K$ filters. Precisely, let $(\mathbf{I}\ast \mathbf{F})(p,k)$ denote the convolution of the image $\mathbf{I}$ with the $k$th filter at pixel $p$. Intuitively, $(\mathbf{I}\ast \mathbf{F})(p,k)$ is the sum of the products of the weights of the $k$th filter with the intensities of the image pixels in a window obtained by translating $p$ by $S(q)$, for $q \in D$. The term $S(q)$, for $q$ in the domain $D$ of the filter, represents a translation that can take vertical and horizontal strides into account. In mathematical terms, the convolution of the image $\mathbf{I}$ with the $k$th filter at pixel $p$ is given by \begin{equation} \label{eq:real-conv} (\mathbf{I} \ast \mathbf{F})(p,k) = \sum_{c=1}^C \sum_{q \in D} \mathbf{I}\big(p+S(q),c\big)\mathbf{F}(q,c,k). \end{equation} Moreover, the intensity of the $k$th feature channel of the output of a convolutional layer at pixel $p$ is defined by \begin{equation} \label{eq:real-conv-layer} \mathbf{J}(p,k) = \varphi \left( b(k) + (\mathbf{I}\ast \mathbf{F})(p,k)\right), \end{equation} where $\varphi:\mathbb{R} \to \mathbb{R}$ denotes the activation function. 
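Before turning to hypercomplex-valued layers, we illustrate how the unified multiplication table \eqref{eq:s_table} translates into a single vectorized implementation of the product \eqref{eq:multiplication} for all eight algebras. The following numpy sketch uses our own naming; the $3\times 3$ array \texttt{S} is assumed to satisfy \texttt{S[a-1, b-1]} $= s_{ab}$.

\begin{verbatim}
import numpy as np

def hmul(x, y, S):
    """Product of four-dimensional hypercomplex numbers x, y (arrays of
    shape (..., 4)) for the algebra with sign table S (3x3, entries +-1)."""
    x0, x1, x2, x3 = np.moveaxis(x, -1, 0)
    y0, y1, y2, y3 = np.moveaxis(y, -1, 0)
    r0 = x0*y0 + S[0,0]*x1*y1 + S[1,1]*x2*y2 + S[2,2]*x3*y3
    r1 = x0*y1 + x1*y0 + S[1,2]*x2*y3 + S[2,1]*x3*y2
    r2 = x0*y2 + S[0,2]*x1*y3 + x2*y0 + S[2,0]*x3*y1
    r3 = x0*y3 + S[0,1]*x1*y2 + S[1,0]*x2*y1 + x3*y0
    return np.stack([r0, r1, r2, r3], axis=-1)

# sign table of the quaternions A[-1,-1] (Table 1):
S_quaternion = np.array([[-1,  1, -1],
                         [-1, -1,  1],
                         [ 1, -1, -1]])
\end{verbatim}

Swapping \texttt{S} is the only change needed to move between the eight algebras, which is precisely what makes the unified table \eqref{eq:s_table} convenient for implementations.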
\subsection{Hypercomplex-valued Convolutional Layers} The hypercomplex-valued convolutional layer is defined analogously to the real-valued convolutional layer by replacing the real numbers and operations with their corresponding hypercomplex versions in \eqref{eq:real-conv} and \eqref{eq:real-conv-layer} \cite{Trabelsi17complex,Gaudet2017DeepNetworks}. Precisely, the ``intensity'' of the $k$th channel of the hypercomplex-valued output image $\mathbf{J}^{(h)}$ at pixel $p$ is given by \begin{equation} \label{eq:hyper-conv-layer} \mathbf{J}^{(h)}(p,k) = \varphi_{\mathbb{A}} \left( b^{(h)}(k) + (\mathbf{I}^{(h)} \ast \mathbf{F}^{(h)})(p,k) \right), \end{equation} where $\varphi_{\mathbb{A}}:\mathbb{A} \to \mathbb{A}$ is a hypercomplex-valued activation function, $b^{(h)}(k) \in \mathbb{A}$ is the bias term, and \begin{equation} \label{eq:hyper-conv} (\mathbf{I}^{(h)} \ast \mathbf{F}^{(h)})(p,k) = \sum_{c=1}^C \sum_{q \in D} \mathbf{I}^{(h)}(p+S(q),c)\mathbf{F}^{(h)}(q,c,k), \end{equation} is the convolution of $\mathbf{I}^{(h)}$ with the $k$th hypercomplex-valued filter at pixel $p$. In this paper, we only consider split-functions defined using a real-valued function $\varphi:\mathbb{R} \to \mathbb{R}$ as follows for all $x = \hyperN{x} \in \mathbb{A}$: \begin{equation} \label{eq:split-function} \varphi_{\mathbb{A}}(x) = \varphi(x_0) + \varphi(x_1)\boldsymbol{i}_1+ \ldots + \varphi(x_n)\boldsymbol{i}_n. \end{equation} \subsection{Emulating Hypercomplex-valued Convolutional Layers} Since most deep neural network libraries are designed for real-valued inputs, we show how to emulate a four-dimensional hypercomplex-valued convolutional layer using a real-valued convolutional layer. The reasoning is quite similar to the approaches reported in the literature for complex- and quaternion-valued deep networks \cite{Trabelsi17complex,Gaudet2017DeepNetworks}. First, an image $\mathbf{I}^{(h)}$ with $C$ channels defined on a four-dimensional hypercomplex algebra can be represented by \begin{equation} \mathbf{I}^{(h)} = \quat{\mathbf{I}}, \end{equation} where $\mathbf{I}_0$, $\mathbf{I}_1$, $\mathbf{I}_2$, and $\mathbf{I}_3$ are real-valued images with $C$ channels each. Similarly, a bank of $K$ hypercomplex-valued filters can be represented by \begin{equation} \mathbf{F}^{(h)} = \mathbf{F}_{0} + \mathbf{F}_{1} \boldsymbol{i} + \mathbf{F}_{2} \boldsymbol{j} + \mathbf{F}_{3} \boldsymbol{k}, \end{equation} where $\mathbf{F}_{0}$, $\mathbf{F}_{1}$, $\mathbf{F}_{2}$, and $\mathbf{F}_{3}$ are real-valued arrays, each representing a bank of $K$ filters with domain $D$ and $C$ feature channels. 
From the multiplication table \eqref{eq:s_table} and omitting the arguments $(p+S(q),c)$ and $(q,c,k)$ to simplify the notation, we obtain \begin{align*} & \mathbf{I}^{(h)}(p+S(q),c)\mathbf{F}^{(h)}(q,c,k) \\ &= (\quat{\mathbf{I}})(\mathbf{F}_{0} + \mathbf{F}_{1} \boldsymbol{i} + \mathbf{F}_{2} \boldsymbol{j} + \mathbf{F}_{3} \boldsymbol{k}) \\ &= \mathbf{I}_0 \mathbf{F}_{0}+ s_{11} \mathbf{I}_1 \mathbf{F}_{1} + s_{22} \mathbf{I}_2 \mathbf{F}_{2} + s_{33} \mathbf{I}_3 \mathbf{F}_{3} \\ &\;+(\mathbf{I}_0 \mathbf{F}_{1}+ \mathbf{I}_1 \mathbf{F}_{0} + s_{23} \mathbf{I}_2 \mathbf{F}_{3} + s_{32} \mathbf{I}_3 \mathbf{F}_{2})\boldsymbol{i} \\ &\;+(\mathbf{I}_0 \mathbf{F}_{2}+ s_{13} \mathbf{I}_1 \mathbf{F}_{3} + \mathbf{I}_2 \mathbf{F}_{0} + s_{31} \mathbf{I}_3 \mathbf{F}_{1})\boldsymbol{j} \\ &\;+(\mathbf{I}_0 \mathbf{F}_{3}+ s_{12} \mathbf{I}_1 \mathbf{F}_{2} + s_{21} \mathbf{I}_2 \mathbf{F}_{1} + \mathbf{I}_3 \mathbf{F}_{0})\boldsymbol{k}. \end{align*} Therefore, the real part of the convolution given by \eqref{eq:hyper-conv} satisfies the equation \begin{align} \big(\mathbf{I}^{(h)} \ast \mathbf{F}^{(h)}\big)_0 (p,k) &= \sum_{c=1}^C \sum_{q \in D} \Big[ \mathbf{I}_0(p+S(q),c) \mathbf{F}_{0}(q,c,k) \nonumber \\ & \;+s_{11} \mathbf{I}_1(p+S(q),c) \mathbf{F}_{1}(q,c,k) \label{eq:I*F_0} \\ & \;+s_{22} \mathbf{I}_2(p+S(q),c) \mathbf{F}_{2}(q,c,k) \nonumber \\ & \; +s_{33} \mathbf{I}_3(p+S(q),c) \mathbf{F}_{3}(q,c,k)\Big].\nonumber \end{align} Equivalently, the real part of the convolution of the image $\mathbf{I}^{(h)}$ with the $k$th hypercomplex-valued filter at pixel $p$ can be computed using the real-valued convolution \begin{equation} \big(\mathbf{I}^{(h)} \ast \mathbf{F}^{(h)}\big)_0 (p,k) = \big(\mathbf{I}^{(r)} \ast \mathbf{F}_0^{(r)}\big)(p,k), \end{equation} where $\mathbf{I}^{(r)}$ is the real-valued image with $4C$ feature channels obtained by concatenating the real and imaginary parts of $\mathbf{I}^{(h)}$ as follows \begin{equation} \label{eq:I^r} \mathbf{I}^{(r)}(p,:) = [\mathbf{I}_0(p,:),\mathbf{I}_1(p,:),\mathbf{I}_2(p,:),\mathbf{I}_3(p,:)], \end{equation} for all pixels $p$, and $\mathbf{F}_{0}^{(r)}$ is the real-valued filter defined by \begin{align} & \mathbf{F}^{(r)}_{0}(q,1:C,k) = \mathbf{F}_{0}(q,1:C,k), \\ & \mathbf{F}^{(r)}_{0}(q,C+1:2C,k) = s_{11}\mathbf{F}_{1}(q,1:C,k), \\ & \mathbf{F}^{(r)}_{0}(q,2C+1:3C,k) = s_{22}\mathbf{F}_{2}(q,1:C,k), \\ & \mathbf{F}^{(r)}_{0}(q,3C+1:4C,k) = s_{33} \mathbf{F}_{3}(q,1:C,k), \end{align} for all $q \in D$ and $k=1,\ldots,K$. In short, the notation \begin{equation} \label{eq:F_0^r} \mathbf{F}^{(r)}_{0} = [\mathbf{F}_{0}, s_{11} \mathbf{F}_{1}, s_{22} \mathbf{F}_{2}, s_{33} \mathbf{F}_{3}], \end{equation} means that $\mathbf{F}^{(r)}_{0}$ is obtained by concatenating $\mathbf{F}_0$, $s_{11} \mathbf{F}_{1}$, $s_{22} \mathbf{F}_{2}$, and $s_{33} \mathbf{F}_{3}$ as prescribed above. 
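As an illustration, the following numpy sketch (our own naming; filters are assumed stored in Keras-style \texttt{(height, width, channels, filters)} order) assembles the concatenated real-valued filter banks; the layouts of $\mathbf{F}^{(r)}_1$, $\mathbf{F}^{(r)}_2$, and $\mathbf{F}^{(r)}_3$ anticipate the expressions derived below for the imaginary parts.

\begin{verbatim}
import numpy as np

def real_filters(F, S):
    """Assemble the real-valued banks F0^(r)..F3^(r) from the hypercomplex
    filter components F = (F0, F1, F2, F3), each of shape (kh, kw, C, K),
    given the 3x3 sign table S with S[a-1, b-1] = s_ab."""
    F0, F1, F2, F3 = F
    cat = lambda *blocks: np.concatenate(blocks, axis=2)  # channel axis
    Fr0 = cat(F0, S[0,0]*F1, S[1,1]*F2, S[2,2]*F3)  # real part
    Fr1 = cat(F1, F0, S[1,2]*F3, S[2,1]*F2)         # i part
    Fr2 = cat(F2, S[0,2]*F3, F0, S[2,0]*F1)         # j part
    Fr3 = cat(F3, S[0,1]*F2, S[1,0]*F1, F0)         # k part
    return Fr0, Fr1, Fr2, Fr3
\end{verbatim}

The emulated layer then amounts to four standard real-valued convolutions of the stacked image $\mathbf{I}^{(r)}$ with these banks.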
Note from \eqref{eq:real-conv} that \begin{align*} & \big(\mathbf{I}^{(r)} \ast \mathbf{F}_0^{(r)}\big)(p,k) = \sum_{c=1}^{4C} \sum_{q \in D} \mathbf{I}^{(r)}\big(p+S(q),c\big)\mathbf{F}_0^{(r)}(q,c,k) \\ & \quad = \sum_{q \in D}\Bigg[ \sum_{c=1}^{C} \mathbf{I}^{(r)}\big(p+S(q),c\big)\mathbf{F}_0^{(r)}(q,c,k) + \ldots \\ & \qquad + \sum_{c=3C+1}^{4C} \mathbf{I}^{(r)}\big(p+S(q),c\big)\mathbf{F}_0^{(r)}(q,c,k) \Bigg] \\ & \quad = \sum_{q \in D}\Bigg[ \sum_{c=1}^{C} \mathbf{I}_0\big(p+S(q),c\big)\mathbf{F}_0(q,c,k) + \ldots \\ & \qquad + \sum_{c'=1}^{C} s_{33} \mathbf{I}_3\big(p+S(q),c'\big)\mathbf{F}_3(q,c',k) \Bigg], \end{align*} which coincides with $\big(\mathbf{I}^{(h)} \ast \mathbf{F}^{(h)}\big)_0 (p,k)$ given by \eqref{eq:I*F_0}. Therefore, using a split-function, the real part $\mathbf{J}_0(p,k)$ of the output $\mathbf{J}^{(h)}(p,k)$ of the hypercomplex-valued convolutional layer given by \eqref{eq:hyper-conv} can be computed using a real-valued convolutional layer as follows \begin{equation} \mathbf{J}_0(p,k) = \varphi\Big(b_0(k) + \big(\mathbf{I}^{(r)} \ast \mathbf{F}_0^{(r)}\big)(p,k)\Big), \end{equation} where the bias term $b_0(k)$ corresponds to the real part of $b(k)$ while $\mathbf{I}^{(r)}$ and $\mathbf{F}_0^{(r)}$ are given by \eqref{eq:I^r} and \eqref{eq:F_0^r}, respectively. In a similar vein, the three imaginary parts $\mathbf{J}_1(p,k)$, $\mathbf{J}_2(p,k)$, and $\mathbf{J}_3(p,k)$ of the $k$th channel of the hypercomplex-valued image $\mathbf{J}^{(h)}$ at pixel $p$ can be computed using real-valued convolutional layers with bias terms $b_{1}(k)$, $b_{2}(k)$, and $b_{3}(k)$ and real-valued filters \begin{align} & \mathbf{F}^{(r)}_{1} = [\mathbf{F}_{1}, \mathbf{F}_{0}, s_{23} \mathbf{F}_{3}, s_{32} \mathbf{F}_{2}], \\ & \mathbf{F}^{(r)}_{2} = [\mathbf{F}_{2}, s_{13} \mathbf{F}_{3}, \mathbf{F}_{0}, s_{31} \mathbf{F}_{1}], \\ & \mathbf{F}^{(r)}_{3} = [\mathbf{F}_{3}, s_{12} \mathbf{F}_{2}, s_{21} \mathbf{F}_{1}, \mathbf{F}_{0}], \end{align} respectively. We would like to finish this section with a few remarks. First, emulating a hypercomplex-valued convolutional layer allows us to take advantage of open-source deep learning libraries such as \texttt{Tensorflow} and \texttt{PyTorch} for \texttt{python} and \texttt{Flux} for the \texttt{Julia Language}. Consequently, hypercomplex-valued versions of well-known deep neural networks can be implemented in current deep learning libraries. On top of that, although not specifically designed for hypercomplex-valued models, pooling layers, sophisticated optimizers, and speed-up training techniques such as batch normalization and weight initialization can, as a first attempt, be incorporated into hypercomplex-valued deep networks. In the following section, we consider a hypercomplex-valued deep neural network for the classification of lymphocytes from blood smear images that resembles the LeNet architecture \cite{LeCun1998Gradient-basedRecognition}. \section{Lymphoblast Image Classification Task} \label{sec:Applications} One of the primary forms of diagnosis of acute lymphoblastic leukemia (ALL) is through microscopic blood smear image inspection \cite{Bibi2020IoMT-Based,Genovese2021HistopathologicalDetection,Genovese2021AcuteLearning,Zolfaghari2022ACells}. Precisely, physicians diagnose ALL by the presence of many lymphoblasts in a blood smear image in which white cells are stained with bluish-purple coloration. 
A candidate cell image is selected, cut from the blood smear image, and fed into a machine learning classifier for a computer-aided leukemia diagnosis. For illustrative purposes, Fig. \ref{fig:lymphobast} shows two candidate cell images used in such a classification task. \begin{figure} \centering \begin{tabular}{cc} \footnotesize{a) Probable lymphoblast} & \footnotesize{b) Healthy cell} \\ \includegraphics[width=0.4\columnwidth]{Im001_1.jpg} & \includegraphics[width=0.4\columnwidth]{Im142_0.jpg} \end{tabular} \caption{Examples of candidate cells for the classification task. Images from the ALL-IDB dataset \cite{Labati2011All-IDB:Processing}.} \label{fig:lymphobast} \end{figure} In the simplest binary case, the classifier predicts whether the candidate image is a lymphoblast. In our model, the candidate images have $100 \times 100$ pixels and have been saved using either the RGB (red-green-blue) or the HSV (hue-saturation-value) color encoding. Using the RGB encoding, an image is characterized by the intensities of red (R), green (G), and blue (B). An RGB-encoded image has been mapped into a hypercomplex-valued image $\mathbf{I}^{(h)}_{RGB}$ by means of the equation \begin{equation} \mathbf{I}^{(h)}_{RGB} = R\boldsymbol{i} + G\boldsymbol{j} + B\boldsymbol{k}. \end{equation} Similarly, an HSV-encoded image is characterized by the hue $H \in [0,2\pi)$, the saturation $S \in [0,1]$, and the value $V \in [0,1]$. A hypercomplex-valued image $\mathbf{I}^{(h)}_{HSV}$ is derived from an HSV-encoded image as follows \cite{Granero2021Quaternion-ValuedDiagnosis}: \begin{equation} \label{eq:hyper-HSV} \mathbf{I}^{(h)}_{HSV} = \big(\cos(H) + \sin(H) \boldsymbol{i}\big)(S + V\boldsymbol{j}). \end{equation} Note that, because $\boldsymbol{i} \boldsymbol{j} = \boldsymbol{k}$ in all the considered hypercomplex algebras, \eqref{eq:hyper-HSV} is equivalent to \begin{equation} \mathbf{I}^{(h)}_{HSV} = S \cos(H) + S \sin(H)\boldsymbol{i} + V \cos(H) \boldsymbol{j} + V \sin(H)\boldsymbol{k}. \end{equation} We performed the lymphocyte classification task using real-valued CNNs (RvCNNs) and the proposed HvCNNs. Similar architectures are adopted for both the real- and hypercomplex-valued models, as suggested in \cite{Granero2021Quaternion-ValuedDiagnosis}. The RvCNN features three convolutional layers with $3\times 3$ filters, each followed by a max-pooling layer with $2\times2$ kernels. A dense layer with 1 unit yields the output. The hypercomplex-valued models have the same layer layout but a much smaller number of filters per convolutional layer because each hypercomplex-valued channel is equivalent to four real-valued feature channels. The activation function adopted for all convolutional layers is the rectified linear unit (ReLU). The dense layer for both architectures is a single neuron that outputs the label $0$ for a healthy white cell and $1$ for a lymphoblast image. Table \ref{tab:params} shows a breakdown of the total number of parameters per layer for each architecture. All deep network models in this work were implemented using the python libraries \texttt{Keras} and \texttt{Tensorflow}. The source codes are available at \url{https://github.com/mevalle/Hypercomplex-valued-Convolutional-Neural-Networks}. 
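For reference, the two color encodings can be implemented as in the following numpy sketch (our own naming; images are assumed normalized, with $H$ already expressed in radians and $R$, $G$, $B$, $S$, $V$ in $[0,1]$):

\begin{verbatim}
import numpy as np

def encode_rgb(img):
    """Map an RGB image of shape (H, W, 3) to a hypercomplex image
    (H, W, 4) with zero real part: 0 + R i + G j + B k."""
    h, w, _ = img.shape
    return np.concatenate([np.zeros((h, w, 1)), img], axis=-1)

def encode_hsv(hsv):
    """Map an HSV image (hue in radians) to the hypercomplex image
    S cos(H) + S sin(H) i + V cos(H) j + V sin(H) k."""
    Hc, S, V = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return np.stack([S * np.cos(Hc), S * np.sin(Hc),
                     V * np.cos(Hc), V * np.sin(Hc)], axis=-1)
\end{verbatim}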
\begin{table} \centering \caption{Parameter distribution per layer for each architecture.} \label{tab:params} \begin{tabular}{|c|c|c|c|} \hline & & RvCNN & HvCNN \\ \hline \multirow{2}*{Conv Layer 1} & (3,3) filters & {32} & {8} \\ \cline{2-4} & Parameters & {896} & {320} \\ \hline \multirow{2}*{Conv Layer 2} & (3,3) filters & {64} & {16} \\ \cline{2-4} & Parameters & {18,496} & {4,672} \\ \hline \multirow{2}*{Conv Layer 3} & (3,3) filters & {128} & {32} \\ \cline{2-4} & Parameters & {73,856} & {18,560} \\ \hline \multirow{2}*{Dense Layer} & Units & 1 & 1 \\ \cline{2-4} & Parameters & 12,801 & 12,801 \\ \hline \textbf{Total} & & 106,049 & 36,353 \\ \hline \end{tabular} \end{table} \subsection{Computational Experiments} Let us now describe the outcome of the computational experiments performed using the acute lymphoblastic leukemia image database (ALL-IDB), a popular public dataset for segmentation and classification tasks directed at ALL detection \cite{Labati2011All-IDB:Processing}. The database consists of two sets of images. The ALL-IDB1 set contains 108 blood smear images for segmentation and classification tasks. The ALL-IDB2 set contains 260 segmented images, each containing a single blood element like the images depicted in Fig. \ref{fig:lymphobast}, and is aimed exclusively at the classification task \cite{Labati2011All-IDB:Processing}. We used all 260 images from the ALL-IDB2 dataset in our computational experiments. Like Genovese et al. \cite{Genovese2021HistopathologicalDetection,Genovese2021AcuteLearning}, we randomly split the dataset into training and test sets, each containing 50\% of the total number of images. The training set is enlarged using horizontal and vertical flips. We conducted 100 simulations per experiment. Each simulation consists of splitting the data into training and test sets, augmenting the training set, initializing the network parameters, training for 100 epochs using the Adam optimizer and the binary cross-entropy loss function, and predicting the test images. We evaluate the performances using the accuracy score on the test set. We derive a total of 18 network configurations using the real numbers and the eight hypercomplex algebras detailed in Section \ref{sec:hc-matrix-algebra}. Namely, each of the nine models (1 RvCNN and 8 HvCNNs) is used to classify both the RGB- and the HSV-encoded image sets. The box plots depicted in Fig. \ref{fig:boxplot} summarize the outcome of the computational experiment. \begin{figure*} \centering \begin{tabular}{cc} \footnotesize{a) RGB-encoded images} & \footnotesize{b) HSV-encoded images} \\ \includegraphics[width=0.48\textwidth]{BoxPlot_RGB.pdf} & \includegraphics[width=0.48\textwidth]{BoxPlot_HSV.pdf} \end{tabular} \caption{Boxplot of test set accuracies produced by real-valued and hypercomplex-valued neural networks.} \label{fig:boxplot} \end{figure*} Note that all models performed well on the RGB-encoded images, with medians close to $95\%$ accuracy, except for the HvCNN based on the algebra $B[-1,+1]$, which yields a median accuracy rate of $93.8\%$. In the HSV case, however, performances vary more drastically. The real-valued model exhibits poor performance compared to all the hypercomplex-valued ones, with a median accuracy rate of $91.5\%$. The HvCNNs based on the algebras $A[-1,-1]$, $B[-1,-1]$, and $B[-1,+1]$ all yielded a median accuracy score of $96.2\%$, while the HvCNN based on the algebra $B[+1,-1]$ achieved a median accuracy score of $96.5\%$. 
Moreover, the HvCNNs based on the isomorphic algebras $A[-1,+1]$, $A[+1,-1]$, and $A[+1,+1]$, as well as the HvCNN based on $B[+1,+1]$, all yielded a median accuracy score of $96.9\%$. The significant improvement in the accuracy scores of the HvCNN models indicates they can take advantage of the locally cohesive structure of the HSV encoding. \begin{figure} \hspace{-1.5em} \includegraphics[width=1.1\columnwidth]{HoTDiagram_Selection.pdf} \caption{Hasse diagram of representative experiment configurations.} \label{fig:hot} \end{figure} To better depict the outcome of this computational experiment, we summarize the performance of the network classifiers in the Hasse diagram shown in Fig. \ref{fig:hot}. To simplify the exposition, this figure depicts a single HvCNN model from each group of isomorphic hypercomplex algebras. Precisely, Fig. \ref{fig:hot} compares the performance of the HvCNNs based on the hypercomplex algebras $A[-1,-1]$ (quaternions), $A[-1,+1]$ (coquaternions), $B[-1,+1]$ (tessarines), and $B[+1,+1]$ (Klein 4-group), along with the real-valued models. In this diagram, a solid line connecting two models indicates that the one above achieved better performance than the one below, with a confidence level of 0.99 according to a Student's t-test. In other words, models higher up in the diagram perform significantly better than those on the lower end. Furthermore, the solid lines indicate a transitive relation. Thus, if model A is better than B and B is better than C, then A is better than C. First, Fig. \ref{fig:hot} confirms that the HvCNN models performed better using the HSV than the RGB encoding, and HvCNNs on HSV-encoded images outperform the real-valued models on both encodings. In addition, it shows the real-valued model on HSV-encoded images as the poorest performer. Moreover, coquaternion- and tessarine-based HvCNNs outperformed the HvCNN based on quaternions, the four-dimensional algebra most widely used in applications. At this point, we would like to recall that superior performance of neural networks based on less common hypercomplex algebras has been previously reported in the literature \cite{Vieira2020ExtremeAuto-Encoding,Vieira2022AMachines}, despite quaternion-based neural networks yielding better performance in applications like controlling a robot manipulator \cite{Takahashi2021ComparisonControl}. Finally, we would like to point out that Genovese et al. obtained average accuracy rates of $97.92\%$ using a ResNet18 combined with histopathological transfer learning \cite{Genovese2021HistopathologicalDetection}. The top-performing HvCNNs achieved average accuracy scores of $96.51\%$ and $96.39\%$. However, in contrast to the ResNet18 network with approximately 11.4M parameters, the HvCNNs have only 36K trainable parameters. In particular, the coquaternion-valued HvCNN with HSV encoding achieved $98\%$ of the average accuracy score of the ResNet18 with transfer learning, but with only $0.3\%$ of its total number of trainable parameters. \section{Concluding Remarks} \label{sec:concluding-remarks} In this work, we extended the concept of a quaternion-valued CNN to general hypercomplex algebras. We constructed eight such algebras with desirable properties according to the framework of Kantor and Solodovnikov. These algebras are isomorphic to well-known four-dimensional algebras: quaternions, coquaternions, tessarines, the Klein 4-group, Cayley-Dickson algebras, and Clifford algebras. The hypercomplex-valued neural networks have been applied for lymphoblast image classification. 
Using the public dataset ALL-IDB2, we conducted experiments featuring real-valued and hypercomplex-valued models. We observed that the HvCNNs performed similarly to the real-valued models on the RGB-encoded images while outperforming the RvCNNs on HSV-encoded images. The superior performance of the HvCNNs indicates that they take better advantage of the HSV color system, which is more akin to human vision and more widely used in computer vision applications \cite{cheng2001color}. Consequently, HvCNN models seem more efficient than real-valued models at encoding information in their filters as four-dimensional entities. Moreover, the coquaternion- and tessarine-valued models outperformed the quaternion-valued CNNs. Lastly, the performance attained by the top-performing HvCNN models is significant and comparable to notable results in the literature. Indeed, Genovese et al. observed an average accuracy score of $97.92\%$ with the ResNet18, a deep network containing around 11 million parameters \cite{Genovese2021HistopathologicalDetection}. The coquaternion-valued CNN yielded an average accuracy of $96.51\%$ with approximately $0.3\%$ of the total number of trainable parameters of the real-valued ResNet18.
\section{Introduction} Image retrieval is a key computer vision problem, and it has made great progress due to deep learning \cite{chopra2005learning,gordo2016deep,cakir2019deep,cao2019hybrid,tian2020bootstrap}. Cross-modal image retrieval allows using other types of queries, such as text-to-image retrieval \cite{wang2016learning,hu2019multi}, sketch-to-image retrieval \cite{sangkloy2016sketchy,pang2019generalising}, and cross-view image retrieval \cite{lin2015learning,hu2018cvm}. In this paper, we consider the case where the input query is formulated as an input image plus a text that describes desired modifications to the image. Different from attribute-based image retrieval \cite{zhao2017memory}, our input text can be multi-word instead of a single attribute. For instance, if the input image is a women's clog, the text could be ``have a buckle and strap, no patterns''. The desired image should meet the requirements of the two input modalities (Figure \ref{fig1}). \begin{figure} \centering \includegraphics[scale=0.42]{Fig1.pdf} \caption{Example of image retrieval based on image and text fusion. The text states the desired modification to the image, and the information from both input modalities is conveyed to the system.} \label{fig1} \end{figure} To solve this problem, TIRG \cite{vo2019composing} first extracts the features of the source image and the modification text by a ResNet-18 and an LSTM, respectively, then fuses them via a gated residual connection, and finally learns the similarity metric using a softmax cross-entropy loss to minimize the distance between the fusion feature and the feature of the desired image. Similarly, another work \cite{guo2018dialog} obtains the features of the source image and the text by a ResNet-101 and a GRU, fuses them through concatenation, and improves the rank of the desired image by deep metric learning based on reinforcement learning. Although there is much other related work \cite{socher2014grounded,nagarajan2018attributes,noh2016image,perez2018film}, most of it focuses on designing new feature fusion networks and employing different DML loss functions. These methods align the feature vectors and assess the similarity between the desired image and the fusion of the source image and the modification text by common retrieval losses. However, since features of different modalities usually have inconsistent distributions and representations, a modality gap exists that significantly affects the retrieval performance \cite{wang2017adversarial}. Mutual information (MI) can capture non-linear statistical dependencies between random variables and act as a measure of true dependence \cite{kinney2014equitability}. Recent research \cite{belghazi2018mine,bachman2019learning,hjelm2018learning} offers various general-purpose parametric neural estimators of the mutual information between different representations in a deep neural network. Thus, we align the feature distributions of the text, the image, and their fusion by applying Deep InfoMax (DIM) \cite{hjelm2018learning} between the representations in the encoders of these modalities. Specifically, we maximize the MI between the low-level representation in the text encoder and the high-level representation in the desired image encoder (ITDIM) to project these two modalities into a common subspace. As the text and the desired image are not exactly semantically the same, we realize ITDIM by estimating their overlapping semantic information.
Compared with two modalities with independent distributions (like text and images), the image modality is the key component of the fusion modality, so their feature distributions already align with each other to some extent. Our method achieves a better alignment by maximizing the MI between the low-level representation of the desired image and the fusion's high-level representation (IFDIM). Here the semantic information of the different-level representations is identical. A handful of works in the text-to-image retrieval field narrow the modality gap using an adversarial loss \cite{wang2017adversarial,wang2019learning}, which attempts to make the features of different modalities indistinguishable. In essence, these methods can be treated as special cases of MINE (the basic version of Deep InfoMax) \cite{belghazi2018mine}, which maximize the MI between the last layers of different encoders using a minimax objective \cite{belghazi2018mine,hjelm2018learning,nowozin2016f}. Maximizing the mutual information between two complete feature vectors is often insufficient for increasing the dependence of two modalities. Therefore, DIM also maximizes the average MI between the high-level representation and local regions of the low-level representation (e.g., patches rather than the complete image) to make the alignment better. Because our method uses Text Image Residual Gating (TIRG) \cite{vo2019composing} as its basic network architecture, we call it TIRG-DIM. The experiments show that the proposed method achieves higher retrieval accuracy than existing methods on three standard benchmark datasets, namely Fashion200K \cite{han2017automatic}, MIT-States \cite{isola2015discovering} and CSS \cite{vo2019composing}. To summarize, our contributions are threefold: \begin{itemize} \item We design a novel framework for cross-modal image retrieval based on Deep InfoMax. By using ITDIM, which maximizes the MI by estimating the overlapping semantic information between the representations of the text modality and the image modality, we project the features of these two semantically different modalities with independent distributions into a common subspace, which improves retrieval accuracy by learning a higher-quality fusion feature. \item We accurately align the distributions of the features of the fusion modality and its main component, the image modality, by IFDIM, which maximizes the mutual information between the semantically identical representations in the fusion network and the desired image encoder, leading to more competitive retrieval results. \item The empirical results show that our method outperforms the state-of-the-art approaches for cross-modal image retrieval on three public benchmarks, Fashion200K, MIT-States and CSS. \end{itemize} \section{Related Work} In this section, we briefly review the methods of cross-modal image retrieval based on feature fusion and concisely introduce deep mutual information maximization. \subsection{Cross-modal Image Retrieval Based on Feature Fusion} In addition to the methods mentioned before, the previous work \cite{vo2019composing} also provides seven baselines which use the same system pipeline as TIRG except for the feature fusion module. For simplicity, we denote the feature of the source image, that of the modification text, and the fusion feature by $\phi_s$, $\phi_t$ and $\phi_{st}$, respectively. The feature fusion methods of these baselines are as follows (a code sketch of two representative schemes is given after the list): \begin{itemize} \item Image Only: $\phi_{st} = \phi_s$. \item Text Only: $\phi_{st} = \phi_t$.
\item Concatenation: $\phi_{st} = f_{\mathrm{MLP}}([\phi_s,\phi_t])$ \cite{socher2014grounded,antol2015vqa}. In our experiments, it is implemented using two MLP layers with ReLU, batch normalization, and a dropout rate of 0.1. \item Show and Tell \cite{vinyals2015show}: $\phi_{st}$ is the final state of an LSTM which encodes the image and the words of the text in turn. \item Attribute as Operator \cite{nagarajan2018attributes}: embeds each text as a transformation and applies it to $\phi_s$ to obtain $\phi_{st}$. \item Parameter Hashing \cite{noh2016image}: $\phi_{st}$ is the output of the image CNN in which the weights of a fully connected layer are replaced by a transformation matrix, i.e., the hash of $\phi_t$. \item Relationship \cite{santoro2017simple}: first constructs relationship features by concatenating the text feature $\phi_t$ with the feature-map vectors from the convolved image; these features then pass through an MLP, and the result is averaged to obtain $\phi_{st}$. \item FiLM \cite{perez2018film}: outputs $\phi_{st}$ by a feature-wise affine transformation of the image feature, $\phi_{st} = \gamma_i \phi_s + \beta_i$, where $\gamma_i,\beta_i \in \mathbb{R}^C$ are the modulation features predicted from $\phi_t$, $i$ is the index of the layer, and $C$ is the number of features or feature maps. \end{itemize}
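As referenced above, the following is a minimal PyTorch sketch of two representative fusion schemes from the list: feature concatenation through an MLP, and a FiLM-style modulation. The 512-dimensional feature size matches our setting, while the remaining details are illustrative assumptions rather than the original implementations.
\begin{verbatim}
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    # phi_st = MLP([phi_s, phi_t]) with ReLU, batch norm, dropout 0.1.
    def __init__(self, dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.BatchNorm1d(dim),
            nn.ReLU(), nn.Dropout(0.1), nn.Linear(dim, dim))
    def forward(self, phi_s, phi_t):
        return self.mlp(torch.cat([phi_s, phi_t], dim=-1))

class FiLMFusion(nn.Module):
    # phi_st = gamma * phi_s + beta, with gamma, beta predicted from phi_t.
    def __init__(self, dim=512):
        super().__init__()
        self.film = nn.Linear(dim, 2 * dim)  # predicts [gamma, beta]
    def forward(self, phi_s, phi_t):
        gamma, beta = self.film(phi_t).chunk(2, dim=-1)
        return gamma * phi_s + beta
\end{verbatim}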
The above approaches learn the text feature, the desired image feature, and the fusion feature separately. This leads to a modality gap due to the inconsistent distributions of these features, which greatly affects the retrieval accuracy. To alleviate this problem, we align these distributions by maximizing the mutual information between the representations of the different modalities. \subsection{Deep Mutual Information Maximization} Mutual information (MI) is a fundamental quantity across data science for measuring the relationship between random variables \cite{becker1992information,becker1996mutual,wiskott2002slow}. Unlike correlation, MI captures non-linear statistical dependence between variables and thus can act as a measure of true dependence \cite{kinney2014equitability}. Though the infomax principle, i.e., the idea of maximizing the MI between input and output, has been proposed in many traditional feature learning methods \cite{bell1995information,linsker1988self}, MI is often hard to compute \cite{paninski2003estimation}, especially for the high-dimensional and continuous variables arising in deep neural networks. Fortunately, recent research has made a theoretical breakthrough in deep mutual information estimation and provides methods for computing and optimizing the MI between the input and output of a deep neural network. Mutual Information Neural Estimation (MINE) \cite{belghazi2018mine} is the first general-purpose estimator of the MI of continuous variables. Furthermore, Deep InfoMax \cite{hjelm2018learning} leverages local structure in addition to the global MI utilized in MINE to improve the suitability of representations for classification, and provides various MI estimators. Moreover, mutual information maximization between features extracted from multiple views has also drawn much attention \cite{bachman2019learning,tian2019contrastive}, and these studies demonstrate that the quality of the representation improves as the number of views increases. As a member of the self-supervised learning family \cite{oord2018representation,henaff2019data,he2019momentum,chen2020simple}, deep mutual information maximization exploits dual optimization to estimate divergences and goes beyond the minimax objective formalized in GANs \cite{goodfellow2014generative,arjovsky2017towards,arjovsky2017wasserstein}. Many deep learning tasks have adopted this method for estimating MI via back-propagation and proven its effectiveness, such as text generation \cite{zhang2018generating,qian2019enhancing,mccarthy2019improved} and representation learning \cite{kong2019mutual,tschannen2019mutual,wen2020mutual}. In the cross-modal field, the uses of deep mutual information are diverse. Since the mutual information between different modalities usually carries higher semantic meaning compared to information that is modality-specific, one can verify whether two inputs correspond to each other by capturing the mutual information between the two modalities \cite{sayed2018cross,jing2020self}. Researchers have also utilized mutual information estimation to improve the quality of representations \cite{guo2019learning,vemulapalli2017deep}. In the visual question answering field, the Information Maximization Visual Question Generator \cite{krishna2019information} employs mutual information maximization to guarantee the relevance of the generated question to the image and the expected answer. To the best of our knowledge, there has been little research to date utilizing deep mutual information maximization to align the feature distributions of different modalities. \begin{figure} \centering \includegraphics[scale=0.3]{Fig2.pdf} \caption{The system pipeline for training based on the abstract modification text.} \label{fig2} \end{figure} \section{Text Image Residual Gating Based on Deep Information Maximization} Since the modality gap caused by the distributional difference of features between modalities significantly influences the cross-modal image retrieval accuracy, we erase this gap by applying mutual information maximization between the representations of the image, the text, and their fusion. Figure \ref{fig2} shows the system pipeline of our model. \subsection{Feature Fusion Based on Deep Mutual Information Maximization} In our task, there are two main modality gaps influencing the retrieval accuracy: 1) the gap between the source image and the text, which makes the feature fusion insufficient; 2) the gap between the fusion and the desired image, which directly affects the similarity learning. We utilize deep mutual information maximization to narrow these two gaps in this and the next section. The main task of the feature fusion module is to compose the semantic information extracted from the source image and the modification text. Since the fusion network's inputs are the features of the source image and the modification text, we first encode each input by a corresponding classical encoder. For the input images, we adopt a ResNet-18 whose output dimension of the last fully connected layer is changed to 512. For the modification text, we first embed it into a distributed embedding space to get the word embeddings; then we employ a widely used sequence learning model, the Long Short-Term Memory (LSTM) network, to learn the sentence representation. We define the text's feature as the hidden state at the final time step. After obtaining these features, we fuse them by the basic network of our method, Text Image Residual Gating (TIRG) \cite{vo2019composing}.
TIRG is composed of a gating feature and a residual feature, and it fuses the image and text features by the following approach, \begin{align} \phi_{st}^{rg} = w_g f_{g}(\phi_s,\phi_t) + w_r f_{r}(\phi_s,\phi_t) \label{eqution1} \end{align} \noindent where $f_g$ and $f_r$ denote the gating and residual features presented in Figure \ref{fig2}, and $w_g$ and $w_r$ are learnable weights balancing these two features. The gating feature in TIRG can be formulated as follows, \begin{align} f_{g}(\phi_s,\phi_t) = \sigma(\mathrm{FC_{g2}}(\mathrm{ReLU}(\mathrm{FC_{g1}}([\phi_s,\phi_t])))) \odot \phi_s \label{eqution2} \end{align} \noindent where $\sigma$ is the sigmoid function, $\odot$ is the element-wise product, $\mathrm{FC}_{g1}$ and $\mathrm{FC}_{g2}$ are fully connected layers, and $[\phi_s,\phi_t]$ denotes the concatenation of $\phi_s$ and $\phi_t$. This feature is utilized to judge whether the modification text is helpful to the query image or not, and to retain the image feature if these two inputs are sufficiently different. The residual feature is computed as follows, \begin{align} f_{r}(\phi_s,\phi_t) =\mathrm{FC_{r2}}( \mathrm{ReLU}(\mathrm{FC_{r1}}([\phi_s,\phi_t]))) \label{eqution3} \end{align} \noindent where $\mathrm{FC}_{r1}$ and $\mathrm{FC}_{r2}$ are fully connected layers. Figure \ref{fig2} presents the model for datasets with abstract modification text, such as Fashion200k and MIT-States, and we call it TIRG$_A$. For datasets whose modification text is more concrete, like CSS, we use the TIRG$_C$ model. TIRG$_C$ changes the feature extraction module so that the output of the image encoder keeps its spatial properties: it replaces the source image encoder with a ResNet-17 and broadcasts the text's feature along the height and width dimensions to make it match the source image's feature map. Accordingly, the fully connected layers in the fusion network are replaced by convolutional layers with $3 \times 3$ filter kernels.
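The following is a minimal PyTorch sketch of the gated residual fusion of Equations (\ref{eqution1})--(\ref{eqution3}) for feature vectors (the TIRG$_A$ case); the hidden sizes and the initialization of the weights $w_g$ and $w_r$ are our assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class TIRG(nn.Module):
    # Gated residual fusion: phi_st = w_g * f_g + w_r * f_r (Eq. 1).
    def __init__(self, dim=512):
        super().__init__()
        self.w = nn.Parameter(torch.tensor([1.0, 1.0]))  # [w_g, w_r]
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, 2 * dim), nn.ReLU(),
            nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.residual = nn.Sequential(
            nn.Linear(2 * dim, 2 * dim), nn.ReLU(),
            nn.Linear(2 * dim, dim))
    def forward(self, phi_s, phi_t):
        x = torch.cat([phi_s, phi_t], dim=-1)
        f_g = self.gate(x) * phi_s  # Eq. (2): gating, element-wise product
        f_r = self.residual(x)      # Eq. (3): residual feature
        return self.w[0] * f_g + self.w[1] * f_r
\end{verbatim}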
In our framework, distance metric learning optimizes the whole network by measuring the similarity between the desired image and the fusion of the source image and the modification text. As a classical encoder, the ResNet-18 can extract a high-quality feature of the desired image on a fully labeled dataset \cite{he2016deep,shwartz2017opening}. Thus, the quality of the fusion feature is crucial to the retrieval accuracy. However, as the source image and the modification text are from different modalities, their features usually have inconsistent distributions and representations, which leads to a modality gap. The fusion network cannot sufficiently compose the semantic information captured from the source image and the modification text without erasing the modality gap before fusing. Inspired by the recent advances of Deep InfoMax, we use mutual information maximization between the image modality and the text modality (ITDIM) to narrow their modality gap. Considering that the source image and the modification text are completely semantically different, it is hard to narrow their modality gap by capturing the non-linear statistical dependencies using MI maximization between their representations \cite{belghazi2018mine}. Thus, ITDIM maximizes the mutual information between the representation of the desired image and that of the modification text, which contain partially overlapping semantic information. In previous work on image-to-text retrieval, researchers attempted to obtain a better alignment of the distributions of item representations across modalities by adversarial learning. They treat the input query encoder as the ``generator'' in a GAN \cite{goodfellow2014generative} and design a modality classifier, which acts as the ``discriminator''. Given an unknown feature projection, this classifier detects the modality of an item as reliably as possible \cite{wang2017adversarial,wang2019learning}. In essence, these methods are equivalent to maximizing the mutual information between the feature vectors of the different modalities carrying the same semantic information. Further, narrowing the modality gap by an adversarial loss can be viewed as a special case of MINE \cite{belghazi2018mine}, the basic version of Deep InfoMax. However, it is often insufficient to quantify the dependency between two modalities by estimating the mutual information between the two complete representations (i.e., feature vectors), namely global MI maximization. Rather, combining it with the average MI maximization between the high-level representation and local regions of the low-level representation (e.g., patches rather than the complete representation) \cite{hjelm2018learning}, namely local MI maximization, yields a better distribution alignment. When the representations are the outputs of different layers of the same encoder, local MI maximization makes the encoder prefer information that is shared among patches and filter out noise specific to local patches. For our task, each modality uses a different encoder, and the semantic information contained in the modification text is part of that contained in the desired image. If we maximized the local MI between the high-level representation of the modification text and the low-level representation of the desired image, the desired image encoder would discard some image-specific semantic information as noise. Thus, our ITDIM maximizes the mutual information between the high-level representation in the desired image encoder and the low-level representation in the modification text encoder. We verify this analysis through the experiments in Section 4.5.3. In this paper, we maximize MI using different MI($X;Y$) objectives, where $X$ is a low-level feature map and $Y$ is a high-level feature vector \cite{hjelm2018learning}. Generally speaking, the mutual information quantifies the dependence of $X$ and $Y$. We formulate it as follows, \begin{equation} I(X;Y)=\int_{\mathcal{X} \times \mathcal{Y}} \log \frac{d \mathbb{P}_{XY}}{d \mathbb{P}_{X} \otimes \mathbb{P}_{Y}} d \mathbb{P}_{XY} = \int_{\mathcal{X} \times \mathcal{Y}} \log \frac{d \mathbb{P}_{Y|X}}{d \mathbb{P}_{Y}} d \mathbb{P}_{XY} \label{eqution4} \end{equation} where $ \mathbb{P}_{XY}$ is the joint probability distribution, and $\mathbb{P}_X= \int_{\mathcal{Y}} \mathbb{P}_{XY}$ and $\mathbb{P}_Y= \int_{\mathcal{X}} \mathbb{P}_{XY}$ are the marginals \cite{belghazi2018mine}. In the original Deep InfoMax, $X$ and $Y$ are different-level representations in one encoder and contain the same semantic information. According to Equation (\ref{eqution4}), the mutual information maximization makes $Y$ capture as much of the representative information of $X$ as possible. When we set $X$ and $Y$ as the representations of the modification text and the desired image, ITDIM actually encourages the image representation to hold as much of the semantic information related to the modification text as possible. To make the following section clearer, we introduce some notation. We denote the modification text, the source image, the desired image and their features by $t$, $s$, $d$, $\phi_t$, $\phi_s$ and $\phi_d$, respectively.
The fusion feature is denoted by $\phi_{st}$. We further define the input image as $I \in \{s,d\}$ and its feature as $\phi_I \in \{\phi_s,\phi_d\}$. In the image encoder, a representation is defined as $i = I_{m}(I,\theta_{im})$, where $I_{m}$ denotes the image CNN up to $i$ and $\theta_{im}$ denotes the parameters of this CNN. To obtain the best retrieval accuracy, we set $i$ to different layers depending on the MI objective and dataset. In the text encoder, we set each representation as $e = E_m(t,\theta_{tm})$, where $E_m$ is the LSTM network with parameters $\theta_{tm}$. The text encoder in our method provides two representations: the final output and the concatenation of the word embeddings. The key to maximizing the MI is to design an appropriate MI estimator. The Noise-Contrastive Estimation \cite{gutmann2010noise,gutmann2012noise} and Donsker-Varadhan \cite{donsker1983asymptotic} estimators require a large number of negative samples to be competitive and quickly become cumbersome with increasing batch size. By contrast, the Jensen-Shannon MI estimator \cite{hjelm2018learning,nowozin2016f} performs well with a small number of negative samples, so we apply this estimator in our model. Our estimator $\widehat{\mathcal{I}}_{\theta_d,\theta_{tm},\theta_{im}}^{\mathrm{JSD}}$ for ($e;i$), $i = \phi_d$, can be formulated as, \begin{align} \widehat{\mathcal{I}}_{\theta_d,\theta_{tm},\theta_{im}}^{\mathrm{JSD}}(e;i) & := \mathbb{E}_{\mathbb{P}_e}[\mathrm{-sp}(-T_{\theta_d,\theta_{tm},\theta_{im}}(e,i))] - {\mathbb{E}_{\mathbb{P}_e \times \mathbb{P}_{e^{'}}}} [\mathrm{sp} (T_{\theta_d,\theta_{tm},\theta_{im}}(e^{'},i))] \label{eqution5} \end{align} \noindent where $\mathbb{P}_e$ is the empirical probability distribution of the text representation $e$, $e^{'}$ is a low-level representation sampled from $\mathbb{P}_{e^{'}} = \mathbb{P}_e$, $\mathrm{sp}(z) = \log(1+e^z)$, and $T$ is a discriminator function modeled by a deep neural network with parameters $\theta_d$. To compute the mutual information between high-dimensional representation pairs effectively and sufficiently, we maximize the MI by adopting a global MI objective and a local MI objective, which maximize the MI between the complete $X$ and $Y$, and the MI between $Y$ and local regions of $X$, respectively. Based on our estimator, we define the global MI and local MI objectives as follows, \begin{align} MI_E^G(e;i) = \max \limits_{\theta_{dg},\theta_{tm},\theta_{im}} \widehat{\mathcal{I}}_{\theta_{dg},\theta_{tm},\theta_{im}}^{\mathrm{JSD}} (e;i) \label{eqution6} \end{align} \begin{align} MI_E^L(e;i) = \max \limits_{\theta_{dl},\theta_{tm},\theta_{im}} \frac{1}{M^2} \sum_{p=1}^{M^2} \widehat{\mathcal{I}}_{\theta_{dl},\theta_{tm},\theta_{im}}^{\mathrm{JSD}} (e^p;i) \label{eqution7} \end{align} \noindent where $\theta_{dg}$ and $\theta_{dl}$ are the parameters of the discriminators for the global and local MI objectives, respectively, and $e^p$ is the $p$th patch of the $M \times M$ feature map $e$.
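The following sketch shows how the estimator of Equation (\ref{eqution5}) and the objectives of Equations (\ref{eqution6}) and (\ref{eqution7}) translate into code; the discriminator \texttt{T} scoring (text, image) representation pairs is assumed to be given, and negative pairs are formed by shuffling the batch, one of several possible sampling strategies.
\begin{verbatim}
import torch
import torch.nn.functional as F

def jsd_mi(T, e, i):
    # Jensen-Shannon MI lower bound of Eq. (5).
    # T: discriminator scoring (low-level, high-level) pairs.
    # e: low-level text representations; i: paired image features.
    e_fake = e[torch.randperm(e.size(0))]  # mismatched ('fake') pairs
    pos = -F.softplus(-T(e, i)).mean()     # E_P[-sp(-T(e, i))]
    neg = F.softplus(T(e_fake, i)).mean()  # E_{P x P'}[sp(T(e', i))]
    return pos - neg  # maximized during training

def local_mi(T_local, e_map, i, M):
    # Eq. (7): average the bound over the M*M patches of the map e.
    patches = e_map.flatten(2).unbind(-1)  # (B, D, M, M) -> M*M of (B, D)
    return sum(jsd_mi(T_local, p, i) for p in patches) / (M * M)
\end{verbatim}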
Besides increasing the dependence between the text modality and the image modality, we also improve the compactness of the image feature by imposing a prior matching objective, which makes $Y$ match a prior distribution. This objective can be formulated as, \begin{align} MI_E^P(i) = \mathbb{E}_{\mathbb{V}_y}[\log\mathcal{D}_{\theta_{dp}}(y)] + \mathbb{E}_{\mathbb{P}_e}[\log(1-\mathcal{D}_{\theta_{dp}}(i))] \label{eqution8} \end{align} \noindent where $y$ denotes a random variable with prior probability distribution $\mathbb{V}_y$ and ${\theta_{dp}}$ are the parameters of the discriminator function $\mathcal{D}_{\theta_{dp}}$ used in this objective. Finally, we combine these three objectives to obtain the complete objective, \begin{align} L_{E} = MI_E(e;i) = \alpha MI_E^G(e;i) + \beta MI_E^L(e;i) + \gamma MI_E^P(i) \label{eqution9} \end{align} \noindent where $\alpha$, $\beta$ and $\gamma$ are trade-off parameters. The discriminators in the MI objectives vary according to the application scenario \cite{hjelm2018learning}. In our method, we define the discriminators for the global, local, and prior matching objectives as in Table \ref{table1}. We set the number of units of most hidden layers to the dimension of each encoder's output, $512$. Since the semantic information in the modification text is a portion of that in the desired image, different texts may correspond to the same image, and it is hard to determine which text-image pairs should have more mutual information. Thus, the ``fake'' sample $e^{'}$ in Equation (\ref{eqution5}) is set as the same low-level feature as the ``real'' sample $e$, but extracted from another text that does not describe the desired image. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline Objective & Operation & Output size & Activation\\ \hline \multirow{3}{*}{Global} & Input $\rightarrow$ Linear layer & 512 & ReLU\\ & Linear layer & 512 & ReLU\\ & Linear layer & 1 &\\ \hline \multirow{3}{*}{Local} & Input $\rightarrow$ $1 \times 1$ conv & 512 & ReLU\\ & $1 \times 1$ conv & 512 & ReLU\\ & $1 \times 1$ conv & 1 &\\ \hline \multirow{3}{*}{Prior} & Input $\rightarrow$ $1 \times 1$ conv & 512 & ReLU\\ & $1 \times 1$ conv & 300 & ReLU\\ & $1 \times 1$ conv & 1 &\\ \hline \end{tabular} \caption{Network architectures for global DIM, local DIM, and prior matching.} \label{table1} \end{table} \subsection{Distance Metric Learning based on Deep Mutual Information Maximization} The goal of deep metric learning (DML) is to push the fusion feature $\phi_{st}$ and the desired image's feature $\phi_d$ closer while pulling apart the features $\phi_n$ of non-similar images. More precisely, suppose the training minibatch consists of $B$ queries. We select one fusion feature $\phi_{st}^t$ and create a corresponding set $\mathcal{N}_i$ that consists of the desired image feature $\phi_d^t$ and $K-1$ non-similar image features $\phi_n^1,\ldots,\phi_n^{K-1}$. We repeat this selection $M$ times and denote the $m$th selection by $\mathcal{N}_i^m$. We adopt the following softmax cross-entropy loss, \begin{align} L_{T} = -\frac{1}{MB}\sum_{t=1}^B \sum_{m=1}^M \log\left\{\frac{\exp\{\kappa(\phi_{st}^t,\phi_d^t)\}}{\sum_{\phi_a \in \mathcal{N}_i^m}\exp\{\kappa(\phi_{st}^t,\phi_a)\}} \right\} \label{eqution10} \end{align} \noindent where $\kappa$ is a similarity kernel that can be implemented as the dot product or the negative $\ell_2$ distance. If we use a large $K$ in the above equation, each desired image is contrasted with many other non-similar images. The model becomes more discriminative and converges faster than with a small $K$, but can be more prone to overfitting. Hence, we adopt this setting for datasets on which training is otherwise slow to converge.
In our experiments, we use $K=B$ and $M=1$ for Fashion200k, and Equation (\ref{eqution10}) can then be rewritten as follows, \begin{align} L_{T} = -\frac{1}{B} \sum_{t=1}^B \log\left\{\frac{\exp\{\kappa(\phi_{st}^t,\phi_d^t)\}}{\sum_{j=1}^B \exp\{\kappa(\phi_{st}^t,\phi_d^j)\}} \right\}. \label{eqution11} \end{align} By contrast, $K$ can also be set very small. In the extreme case, when we use the smallest value $K=2$, the loss coincides with the soft triplet loss of the previous literature \cite{vo2016localizing,hermans2017defense}. The loss function can then be formulated as follows, \begin{align} L_{T} = \frac{1}{MB} \sum_{t=1}^B \sum_{m=1}^M \log \{1+\exp\{\kappa(\phi_{st}^t,\phi_n^{m,t})-\kappa(\phi_{st}^t,\phi_d^t)\}\} \label{eqution12} \end{align} where $\phi_n^{m,t}$ denotes the non-similar feature paired with the $t$th query in $\mathcal{N}_i^m$. This loss function is applied to the other two datasets, namely MIT-States and CSS.
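For concreteness, the following is a sketch of the batch version of the loss in Equation (\ref{eqution11}), with $K=B$, $M=1$, and the dot product as the similarity kernel $\kappa$; the choice of kernel is one of the options mentioned above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def batch_softmax_loss(phi_st, phi_d):
    # Eq. (11): softmax cross-entropy over the batch, K = B, M = 1.
    # phi_st: (B, D) fusion features; phi_d: (B, D) desired features.
    sim = phi_st @ phi_d.t()  # kappa = dot product, all query/target pairs
    targets = torch.arange(phi_st.size(0), device=sim.device)
    return F.cross_entropy(sim, targets)  # row t should match column t
\end{verbatim}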
Compared with DML in the unimodal scenario \cite{weinberger2009distance,gu2019local,oh2016deep,wang2019multi}, the precondition of similarity learning between different modalities is to learn a common subspace where the items of different modalities can be directly compared to each other. As the inputs of the DML are the fusion feature and the desired image feature, a modality gap also exists here due to their inconsistent distributions. Compared to the modality gap between the image and the text discussed in the last section, the modality gap between the fusion and the image is smaller, because most of the semantic information in the fusion modality comes from the image modality. Hence, the distributions of the features of the image and the fusion are already similar to some extent. To get better retrieval performance, we need to improve this similarity until the two distributions are highly consistent. We achieve this goal by maximizing the mutual information between the representations of the fusion and the desired image, which contain the same semantic information. Since TIRG obtains the fusion feature by adding the gating feature and the residual feature, no feature map containing all the semantic information of the fusion network can be used as the low-level representation in the local MI objective. If we maximized the MI between a high-level layer in the gating or the residual network, which contains only partial semantic information of the desired image, and a low-level layer in the desired image encoder, the local MI objective would discard the remaining semantic information unique to the desired image as noise. Thus, we maximize the mutual information between the low-level representation in the desired image encoder and the high-level representation in the fusion network. Furthermore, the experiments in Section 4.5.3 demonstrate that using different layers of the desired image encoder as $X$ in the global and local MI objectives is much better than using the fusion feature ($\phi_{st}$) as $X$ in the global MI objective. Thus, we set $Y$ in MI($X;Y$) as the high-level layer in the fusion network and $X$ as the low-level layer in the desired image's encoder, $i = I_{m}(I,\theta_{im})$, as defined in the last section. The mutual information maximization here is between the image modality and the fusion modality, so we call it IFDIM. This setting of $X$ and $Y$ makes IFDIM optimize the parameters of the whole architecture. We define $\theta_a$ as the parameters of the entire model, and our cross-modal Jensen-Shannon MI estimator can be written as, \begin{align} \widehat{\mathcal{I}}_{\theta_d,\theta_{a}}^{\mathrm{CJ}}(i;\phi_{st}) := \mathbb{E}_{\mathbb{P}_i}[\mathrm{-sp}(-T_{\theta_d,\theta_{a}}(i,\phi_{st}))] - {\mathbb{E}_{\mathbb{P}_i \times \mathbb{P}_{i^{'}}}} [\mathrm{sp} (T_{\theta_d,\theta_{a}}(i^{'},\phi_{st}))] \label{eqution13} \end{align} where $\mathbb{P}_i$ is the empirical probability distribution of $i$, $\mathbb{P}_{i'} = \mathbb{P}_i$ is the distribution of $i'$, and $T$ stands for a discriminator function with parameters $\theta_d$, as in Equation (\ref{eqution5}). As in Section 3.1, we increase the dependence between the fusion modality and the image modality by the global MI, local MI, and prior matching objectives. Since the formulas of these objectives are similar to the previous ones and can be obtained by substituting the corresponding parameters, we directly provide the complete objective, \begin{align} MI_F(i;\phi_{st}) = \alpha MI_F^G(i;\phi_{st}) + \beta MI_F^L(i;\phi_{st}) + \gamma MI_F^P(\phi_{st}) \label{eqution14} \end{align} \noindent where $\alpha$, $\beta$ and $\gamma$ are the trade-off parameters defined in Equation (\ref{eqution9}). The loss function for the cross-modal Deep InfoMax is $L_F = MI_F(i;\phi_{st})$. Finally, we train our model with the overall loss function defined as, \begin{align} L_{ALL} =\mu (L_{E} + L_{F}) + L_{T} \label{eqution15} \end{align} \noindent where $\mu$ is a dynamic trade-off hyperparameter. \section{Experiments} This section consists of three parts: 1) we introduce the experimental settings; 2) we compare our method with state-of-the-art algorithms on different datasets; 3) we provide ablation experiments to study the effect of ITDIM and IFDIM in our model. \subsection{Experimental Settings} We compare our method with TIRG \cite{vo2019composing} and the seven baselines mentioned in Section 2.1 on three datasets: Fashion200k \cite{han2017automatic}, MIT-States \cite{isola2015discovering} and CSS \cite{vo2019composing}. Our main retrieval metric is recall at rank $k$ (R@$k$), computed as the percentage of test queries where at least one desired or correctly labeled image is within the top $k$ retrieved images. In order to get stable retrieval results, we repeat each experiment $5$ times, and both the mean and the standard deviation are reported. We use PyTorch in our experiments. For all datasets, the low-level representation in the MI($X;Y$) objectives used in the image encoder is set to the last convolutional layer for its better performance. By default, training is run for 160k iterations with an initial learning rate of 0.01. We will release the code to the public. The weights $\alpha$, $\beta$ and $\gamma$ are set to 0.5, 1 and 0.1, respectively. We set $\mu = {L_T}/{(15(L_E+L_F))}$ with initial value 0.001 and update it every 10k iterations. We apply our method TIRG-DIM$_A$ to Fashion200k and MIT-States, and TIRG-DIM$_C$ to CSS, according to the nature of their modification text.
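As a summary of the training objective, the sketch below combines the loss terms according to Equation (\ref{eqution15}) and the schedule for $\mu$ described above; the individual loss terms are assumed to be computed elsewhere, and, following the formulation above, $L_E$ and $L_F$ denote the MI objectives.
\begin{verbatim}
def total_loss(L_E, L_F, L_T, mu):
    # Eq. (15): L_ALL = mu * (L_E + L_F) + L_T.
    return mu * (L_E + L_F) + L_T

def update_mu(L_E, L_F, L_T):
    # Dynamic trade-off, initialized to 0.001 and re-evaluated
    # every 10k iterations: mu = L_T / (15 * (L_E + L_F)).
    return L_T / (15.0 * (L_E + L_F))
\end{verbatim}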
\subsection{Fashion200k} \begin{table}[t] \centering \begin{tabular}{llll} \toprule Method&R@1&R@10&R@50\\ \hline \cite{han2017automatic} &6.3 &19.9 &38.3\\ Image only & 3.5 & 22.7 & 43.7\\ Text only & 1.0 & 12.3 & 21.8\\ Concatenation & $11.9^{\pm 1.0}$ & $39.7^{\pm 1.0}$ & $62.6^{\pm 0.7}$\\ Show and Tell & $12.3^{\pm 1.1}$ & $40.2^{\pm 1.7}$ & $61.8^{\pm 0.9}$\\ Param Hashing & $12.2^{\pm 1.1}$ & $40.0^{\pm 1.1}$ & $61.7^{\pm 0.8}$\\ Relationship & $13.0^{\pm 0.6}$ & $40.5^{\pm 0.7}$ & $62.4^{\pm 0.6}$\\ FiLM & $12.9^{\pm 0.7}$ & $39.5^{\pm 2.1}$ & $61.9^{\pm 1.9}$\\ TIRG & $\underline{14.1}^{\pm 0.6}$ & $\underline{42.5}^{\pm 0.7}$ & $\underline{63.8}^{\pm 0.8}$\\ \hline TIRG-DIM$_A$ & $\mathbf{17.4}^{\pm 0.3}$ & $\mathbf{43.4}^{\pm 0.4}$ & $\mathbf{64.5}^{\pm 0.6}$\\ \bottomrule \end{tabular} \caption{Retrieval performance on Fashion200k. The best result is in bold and the second best is underlined.} \label{table2} \end{table} \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{visualization-fashion200k.pdf} \caption{Qualitative results of image retrieval with modification text on Fashion200k. Blue/green boxes: source/desired images.} \label{fig3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{visualization-MIT.pdf} \caption{Qualitative results of image retrieval with modification text on MIT-States. Blue/green boxes: source/desired images.} \label{fig4} \end{figure} Fashion200k is a widely-used dataset in the field of cross-modal image retrieval. It is composed of 200k images of fashion products, each with a compact attribute-like description (such as ``mini and short dress'' or ``knee length skirt''). Following the previous work \cite{han2017automatic}, queries are generated as follows: a query image and its desired image have a one-word difference in their descriptions, and the modification text is this differing word. We adopt the same training split as TIRG \cite{vo2019composing} and generate queries on the fly for training. We randomly sample $10$ validation sets of $3167$ test queries and report the mean. Figure \ref{fig3} illustrates some qualitative results, and Table \ref{table2} shows the retrieval accuracy on this dataset. From the results, we have the following observations: 1) our method outperforms all the other approaches by a large margin, especially on the R@1 performance, which shows a more than 23 percent relative increase over the best competitor; 2) the standard deviations of the proposed method are smaller than those of the others. These observations demonstrate that we can get more accurate and stable retrieval performance by improving the quality of each input's feature and of the fusion feature using mutual information maximization. \begin{table}[t] \centering \begin{tabular}{llll} \toprule Method & R@1 & R@5 & R@10\\ \hline Image only & 3.3 & 12.8 & 20.9\\ Text only & 7.4 & 21.5 & 32.7\\ Concatenation & $11.8^{\pm 0.2}$ & $30.8^{\pm 0.2}$ & $42.1^{\pm 0.3}$\\ Show and Tell & $11.9^{\pm 0.1}$ & $31.0^{\pm 0.5}$ & $42.0^{\pm 0.8}$\\ Att. as Operator & $8.8^{\pm 0.1}$ & $27.3^{\pm 0.3}$ & $39.1^{\pm 0.3}$\\ Relationship & $\underline{12.3}^{\pm 0.5}$ & $\underline{31.9}^{\pm 0.7}$ & $42.9^{\pm 0.9}$\\ FiLM & $10.1^{\pm 0.3}$ & $27.7^{\pm 0.7}$ & $38.3^{\pm 0.7}$\\ TIRG & $12.2^{\pm 0.4}$ & $\underline{31.9}^{\pm 0.3}$ & $\underline{43.1}^{\pm 0.3}$\\ \hline TIRG-DIM$_A$ & $\mathbf{14.1}^{\pm 0.3}$ & $\mathbf{33.8}^{\pm 0.5}$ & $\mathbf{45.0}^{\pm 0.5}$ \\ \bottomrule \end{tabular} \caption{Retrieval performance on MIT-States.
The best result is in bold and the second best is underlined.} \label{table3} \end{table} \subsection{MIT-States} MIT-States has $63440$ images, each described by an object/noun word and a state/adjective word (such as ``wide belt'' or ``tiny island''). In total, this dataset contains $245$ nouns and $115$ adjectives, and each individual noun is modified by only about 9 of the adjectives it affords. For image retrieval, we create query and desired image pairs by sampling pairs of images with the same object label but different state labels. The state of the desired image is taken as the modification text. Therefore, our method has to retrieve an image which possesses the same object but a new state compared with the query image. In the experiments, we select 80 nouns for training, and the others are used for testing. With these settings, models are trained with different states/adjectives (modification texts) and tested on unseen objects. A number of qualitative results are shown in Figure \ref{fig4}, and the quantitative retrieval results can be seen in Table \ref{table3}. Our proposed method obtains the highest retrieval accuracy at every R@$k$ on this dataset. More specifically, we achieve about $15\%$ and $6\%$ relative improvement on R@1 and R@10, respectively, compared with the second best algorithm, namely Relationship \cite{santoro2017simple}. Because the same object with varied states can look extremely different, the modification text becomes more significant; therefore, the ``Text only'' baseline outperforms ``Image only''. \subsection{CSS dataset} \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{visualization-css.pdf} \caption{Qualitative results of image retrieval with modification text on CSS. Blue/green boxes: source/desired images.} \label{fig5} \end{figure} \begin{table}[t] \centering \begin{tabular}{llll} \toprule Method & R@1 & R@5 & R@10\\ \hline Image only & $6.3$ & $29.3$ & $54.0$\\ Text only & $0.1$ & $0.5$ & $0.8$\\ Concatenation & $60.6^{\pm 0.5}$ & $88.2^{\pm 0.4}$ & $92.8^{\pm 0.4}$\\ Show and Tell & $33.0^{\pm 3.2}$ & $75.0^{\pm 1.3}$ & $83.0^{\pm 0.9}$\\ Param. Hashing & $60.5^{\pm 1.9}$ & $88.1^{\pm 0.8}$ & $92.9^{\pm 0.6}$\\ Relationship & $62.1^{\pm 1.2}$ & $89.1^{\pm 0.4}$ & $93.5^{\pm 0.7}$\\ FiLM & $65.6^{\pm 0.5}$ & $89.7^{\pm 0.6}$ & $94.1^{\pm 0.5}$\\ TIRG & $\underline{73.7}^{\pm 0.4}$ & $\underline{90.7}^{\pm 0.4}$ & $\underline{94.6}^{\pm 0.4}$\\ \hline TIRG-DIM$_C$ & $\mathbf{77.0}^{\pm 0.2}$ & $\mathbf{95.6}^{\pm 0.4}$ & $\mathbf{97.6}^{\pm 0.3}$ \\ \bottomrule \end{tabular} \caption{Retrieval performance on CSS. The best result is in bold and the second best is underlined.} \label{table4} \end{table} CSS consists of 32k synthesized images of a 3-by-3 grid scene generated with the CLEVR toolkit. Objects in the images are rendered with different colors, shapes, and sizes. Each image comes in a simple 2D blobs version and a 3D version, and we utilize the latter in this paper. There are 16k queries for training and 16k queries for testing in this dataset. Each query is composed of a source image, a modification text and a desired image (Figure \ref{fig1}). The modification text follows three templates: adding, removing or changing object attributes, such as ``add small green rectangle to top-right'', ``remove bottom-center small red circle'' or ``make bottom-left large green object gray''. We provide a stronger test of generalization by making certain object shape and color combinations appear only in training and not in testing, and vice versa.
We can find the qualitative and quantitative results in Figure \ref{fig5} and Table \ref{table4}, respectively. All the methods except the Image Only and the Text Only approaches achieve much higher retrieval accuracy on this dataset than on the other two. We believe this is because the image queries are simple and the text queries contain more information. Compared to the second best method, TIRG, TIRG-DIM$_{C}$ improves the retrieval accuracy by 3.3, 4.9 and 3.0 percentage points on the R@1, R@5 and R@10 scores, respectively. \subsection{Ablation Studies} In this section, we first provide the R@1 accuracy of various ablation studies to gain insight into which parts of our method matter the most. The results are shown in Table \ref{table5}. We then discuss the loss values and the visualization of the feature distributions in Figure \ref{fig6} and Figure \ref{fig7}. \begin{table}[ht] \centering \begin{tabular}{llll} \toprule Method & Fashion & MIT-State & CSS\\ \hline TIRG$_A$ & $14.1^{\pm 0.6}$ & $12.2^{\pm 0.4}$ & $71.2^{\pm 0.4}$\\ TIRG$_A$ + DIM$_{TextSour}$ & $13.6^{\pm 0.6}$ & $11.6^{\pm 0.5}$ & $70.5^{\pm 0.6}$\\ TIRG$_A$ + DIM$_{SourText}$ & $13.7^{\pm 0.4}$ & $11.7^{\pm 0.6}$ & $70.7^{\pm 0.5}$\\ TIRG$_A$ + DIM$_{SourDes}$ & $13.5^{\pm 0.5}$ & $11.4^{\pm 0.4}$ & $70.4^{\pm 0.5}$\\ TIRG$_A$ + DIM$_{DesSour}$ & $13.3^{\pm 0.7}$ & $11.3^{\pm 0.5}$ & $70.2^{\pm 0.4}$\\ TIRG$_A$ + DIM$_{DesText}$ & $13.1^{\pm 0.6}$ & $11.2^{\pm 0.4}$ & $70.1^{\pm 0.5}$\\ TIRG$_A$ + DIM$_{TextFus}$ & $14.2^{\pm 0.6}$ & $12.3^{\pm 0.4}$ & $71.4^{\pm 0.3}$\\ TIRG$_A$ + DIM$_{SourFus}$ & $14.3^{\pm 0.5}$ & $12.4^{\pm 0.3}$ & $71.3^{\pm 0.2}$\\ TIRG$_A$ + DIM$_{FusDes}$ & $14.5^{\pm 0.4}$ & $12.5^{\pm 0.3}$ & $71.6^{\pm 0.3}$\\ TIRG$_A$ + DIM$_{ResiDes}$ & $14.7^{\pm 0.5}$ & $12.6^{\pm 0.4}$ & $71.7^{\pm 0.3}$\\ TIRG$_A$ + DIM$_{GatingDes}$ & $14.8^{\pm 0.4}$ & $12.6^{\pm 0.3}$ & $71.8^{\pm 0.3}$\\ TIRG$_A$ + ITDIM & $15.4^{\pm 0.4}$ & $12.7^{\pm 0.3}$ & $72.1^{\pm 0.2}$\\ TIRG$_A$ + IFDIM & $16.5^{\pm 0.3}$ & $13.7^{\pm 0.2}$ & $73.2^{\pm 0.3}$\\ TIRG-DIM$_A$ & $17.4^{\pm 0.3}$ & $14.1^{\pm 0.3}$ & $73.8^{\pm 0.2}$\\ \hline TIRG$_C$ & $12.4^{\pm 0.5}$ & $10.3^{\pm 0.5}$ & $73.7^{\pm 0.4}$\\ TIRG$_C$ + DIM$_{TextSour}$ & $11.8^{\pm 0.6}$ & $9.9^{\pm 0.5}$ & $73.1^{\pm 0.5}$\\ TIRG$_C$ + DIM$_{SourText}$ & $12.0^{\pm 0.5}$ & $10.0^{\pm 0.4}$ & $73.3^{\pm 0.6}$\\ TIRG$_C$ + DIM$_{SourDes}$ & $11.6^{\pm 0.5}$ & $9.8^{\pm 0.5}$ & $73.1^{\pm 0.5}$\\ TIRG$_C$ + DIM$_{DesSour}$ & $11.5^{\pm 0.6}$ & $9.6^{\pm 0.4}$ & $72.9^{\pm 0.5}$\\ TIRG$_C$ + DIM$_{DesText}$ & $11.4^{\pm 0.5}$ & $9.5^{\pm 0.4}$ & $72.7^{\pm 0.5}$\\ TIRG$_C$ + DIM$_{TextFus}$ & $12.6^{\pm 0.4}$ & $10.4^{\pm 0.4}$ & $74.0^{\pm 0.4}$\\ TIRG$_C$ + DIM$_{SourFus}$ & $12.6^{\pm 0.3}$ & $10.5^{\pm 0.5}$ & $73.9^{\pm 0.3}$\\ TIRG$_C$ + DIM$_{FusDes}$ & $12.7^{\pm 0.4}$ & $10.7^{\pm 0.5}$ & $74.1^{\pm 0.3}$\\ TIRG$_C$ + DIM$_{ResiDes}$ & $12.9^{\pm 0.3}$ & $10.8^{\pm 0.4}$ & $74.3^{\pm 0.3}$\\ TIRG$_C$ + DIM$_{GatingDes}$ & $13.0^{\pm 0.3}$ & $11.0^{\pm 0.3}$ & $74.5^{\pm 0.3}$\\ TIRG$_A$ + ITDIM & $15.4^{\pm 0.4}$ & $12.7^{\pm 0.3}$ & $72.1^{\pm 0.2}$\\ TIRG$_C$ + IFDIM & $13.9^{\pm 0.3}$ & $12.3^{\pm 0.2}$ & $76.5^{\pm 0.3}$\\ TIRG-DIM$_C$ & $14.8^{\pm 0.2}$ & $12.9^{\pm 0.1}$ & $77.0^{\pm 0.2}$ \\ \hline Our Full Model & $\mathbf{17.4}^{\pm 0.3}$ & $\mathbf{14.1}^{\pm 0.3}$ & $\mathbf{77.0}^{\pm 0.2}$ \\ \bottomrule \end{tabular} \caption{Retrieval performance (R@1) of the ablation studies.} \label{table5} \end{table} \subsubsection{Effect of ITDIM} We study the effect of deep mutual information
maximization between the low-level representation in the text encoder and the high-level representation in the desired image encoder (ITDIM) in both TIRG$_A$ and TIRG$_C$. The results in Table \ref{table5} show that the performance of the two models improves remarkably when ITDIM is added. This demonstrates that ITDIM helps to obtain a better alignment of the distributions of item representations between the modification text and the image, even though the semantic information in the text modality is much less than that in the image modality. \subsubsection{Effect of IFDIM} Comparing the results obtained with deep mutual information maximization between the low-level representation in the desired image encoder and the high-level representation in the fusion network (IFDIM) with those obtained with ITDIM, we can see that the models based on IFDIM lead to a more considerable improvement. We believe this is because the representations of these two modalities both contain rich semantic information; the performance of DIM in our model is partly related to the quantity of the semantic information contained in its two inputs. \subsubsection{Effect of other Deep InfoMaxes} In order to give a more detailed comparison between different Deep InfoMaxes, Table \ref{table5} also reports experiments based on deep mutual information maximization using other low-level and high-level representations. To obtain better fusion features, we attempt to make use of DIM$_{TextSour}$, which maximizes the mutual information between the low-level representation in the text encoder and the high-level representation in the source image encoder, and DIM$_{SourText}$, which swaps the positions of the representations in the MI objectives. As the low-level and high-level representations in these InfoMaxes are semantically different, forcing Deep InfoMax to capture non-linear statistical dependencies between two semantically independent modalities is harmful to retrieval. We also employ DIM$_{SourDes}$, DIM$_{DesSour}$ and DIM$_{DesText}$ (Des in the subscripts denotes the representation in the desired image encoder), whose low-level representations contain partially different semantic information compared to their high-level representations. In these InfoMaxes, the local mutual information objective discards part of the semantic information that is unique to the low-level representation, as mentioned in Section 3.1, which harms the retrieval results. Moreover, we try to optimize the fusion network by aligning the distributions of the text representation and the fusion feature with DIM$_{TextFus}$, and by narrowing the modality gap between the source image and the fusion feature with DIM$_{SourFus}$. The experimental results show that these two InfoMaxes only marginally improve the performance. We think this is because the effect of mutual information maximization between the low-level and the high-level representations of the same deep neural network is limited in fully supervised learning. In contrast, ITDIM can significantly improve the retrieval accuracy by estimating the mutual information between representations from different encoders. For distance metric learning, learning a common subspace between the fusion feature and the desired image feature is the key to guaranteeing that the items of different modalities can be directly compared to each other. Apart from IFDIM, we also exploit Deep InfoMax between the low-level representation in the fusion network and the high-level representation in the desired image encoder.
Considering that the fusion feature in TIRG is the weighted sum of the residual feature and the gating feature, no feature map containing all the semantic information of the fusion network can be used as the low-level representation in the local MI objective. Here we conduct experiments based on DIM$_{FusDes}$, DIM$_{ResiDes}$ and DIM$_{GatingDes}$, which give up the local MI objective, apply the feature map of the residual network to the local MI objective, and apply the feature map of the gating network to the local MI objective, respectively. Although these Deep InfoMaxes cannot compete with ITDIM and IFDIM, they outperform the others. \subsubsection{Qualitative Analyses of the Effects of ITDIM and IFDIM} Apart from the quantitative analysis of the DIM effect using retrieval accuracy, we also provide a qualitative analysis based on the trend of the loss and the visualization of the feature distributions. From Figure \ref{fig6}, we can see that the loss trends of the models using different DIMs differ from each other. Overall, the models with DIM perform better than those without DIM, and the larger the number of iterations, the greater the difference becomes. In detail, TIRG-DIM is superior to the methods with only ITDIM or IFDIM. This shows that aligning the distributions of all three modalities performs better than aligning two of them, which is in line with intuition. TIRG-IFDIM is slightly better than TIRG-ITDIM in terms of the training loss. This is because the semantic information of the inputs in IFDIM is rich and identical, while the text modality in ITDIM is semantically deficient. We also visualize the distributions of the features of the desired image and of the fusion of the source image and the text on the MIT-States dataset using the t-SNE tool (1280 sample points for each modality) in Figure \ref{fig7}. An ideal distribution alignment across modalities should project the features into a compact common subspace, as in Figure \ref{fig7:d}, and vice versa. Figures \ref{fig7:a}, \ref{fig7:b}, \ref{fig7:c} and \ref{fig7:d} show the distributions of the features learned by TIRG, TIRG-ITDIM, TIRG-IFDIM and TIRG-DIM (ITDIM+IFDIM), respectively. The alignment of the distributions of these two features realized by the models with mutual information maximization is much better than that realized by the model without it. Clearly, TIRG-DIM reaches the best alignment of the distributions of item representations across modalities. It should also be noted that the model based on IFDIM learns a better common subspace for the image modality and the fusion modality than the model based on ITDIM, which corresponds to the trends of the loss in Figure \ref{fig6}. \begin{figure}[htbp] \centering \includegraphics[scale=0.7]{MIT_loss.pdf} \caption{The training loss of the distance metric learning using different models on the MIT-States dataset.} \label{fig6} \end{figure} \begin{figure}[htbp] \subfigure[The cross-modal invariance preserved without DIM]{ \includegraphics[width = 0.5\textwidth]{Distribution_origin.pdf} \label{fig7:a} } \subfigure[The cross-modal invariance preserved with ITDIM]{ \includegraphics[width=0.5\textwidth]{Distribution_TDIM.pdf} \label{fig7:b} } \subfigure[The cross-modal invariance preserved with IFDIM]{ \includegraphics[width=0.5\textwidth]{Distribution_IDIM.pdf} \label{fig7:c} } \subfigure[The cross-modal invariance preserved with DIM]{ \includegraphics[width=0.5\textwidth]{Distribution_DIM.pdf} \label{fig7:d} } \caption{The cross-modal invariance preserved.
We visualize the distributions of the fusion modality and the image modality using dots of different colors. The red and green solid dots represent the sampled fusion features and desired image features, respectively. We assign different lightness values to dots of the same color (as shown in the colorbar) to make them easier to distinguish.} \label{fig7} \end{figure} \section{Conclusion} In this paper, we have proposed a new method for cross-modal image retrieval based on the contrastive self-supervised learning method Deep InfoMax \cite{belghazi2018mine,hjelm2018learning}. Our approach makes retrieval more accurate by aligning the feature distributions of the text, the image, and their fusion. We maximize the MI between semantically different representations of the image modality and the text modality to project the features of these two modalities into a common subspace. Moreover, our method achieves a precise alignment of the distributions of the image modality and the fusion modality by maximizing the MI between the semantically identical representations in the desired image encoder and the fusion network. The proposed method gives improved performance on three benchmark datasets. In the future, we would like to apply our work to other areas of cross-modal retrieval. \section{Acknowledgments} This work is supported by the Alibaba-Zhejiang University Joint Institute of Frontier Technologies, the National Key R$\&$D Program of China (No. 2018YFC2002603, 2018YFB1403202), the Zhejiang Provincial Natural Science Foundation of China (No. LZ13F020001), the National Natural Science Foundation of China (No. 61972349, 61173185, 61173186) and the National Key Technology R$\&$D Program of China (No. 2012BAI34B01, 2014BAK15B02).
\section{Introduction} The definition of causality given by Halpern and Pearl \cite{HP01b}, like other definitions of causality in the philosophy literature going back to Hume \cite{Hum39}, is based on {\em counterfactual dependence}. Essentially, event $A$ is a cause of event $B$ if, had $A$ not happened (this is the counterfactual condition, since $A$ did in fact happen), then $B$ would not have happened. Unfortunately, this definition does not capture all the subtleties involved with causality. For example, suppose that Suzy and Billy both pick up rocks and throw them at a bottle (the example is due to Hall \cite{Hall98}). Suzy's rock gets there first, shattering the bottle. Since both throws are perfectly accurate, Billy's would have shattered the bottle had it not been preempted by Suzy's throw. Thus, according to the counterfactual condition, Suzy's throw is not a cause of shattering the bottle. This problem is dealt with in \cite{HP01b} by, roughly speaking, taking $A$ to be a cause of $B$ if $B$ counterfactually depends on $A$ under some contingency. For example, Suzy's throw is a cause of the bottle shattering because the bottle shattering counterfactually depends on Suzy's throw under the contingency that Billy doesn't throw. It may seem that this solves one problem only to create another. While this allows Suzy's throw to be a cause of the bottle shattering, it also seems to allow Billy's throw to be a cause, which seems counter-intuitive to most people. As is shown in \cite{HP01}, it is possible to build a more sophisticated model that expresses the subtlety of preemption in this case, using auxiliary variables to express the order in which the rocks hit the bottle and preventing Billy's throw from being a cause of the bottle shattering. One moral of this example is that, according to the \cite{HP01} definitions, whether or not $A$ is a cause of $B$ depends in part on the model used. Halpern and Pearl's definition of causality, while extending and refining the counterfactual definition, still treats causality as an all-or-nothing concept. That is, $A$ is either a cause of $B$ or it is not. The concept of responsibility, introduced in~\cite{CH04}, presents a way to quantify causality, and hence the ability to measure the degree of influence (aka ``the degree of responsibility'') of different causes on the outcome. For example, suppose that Mr.~B wins an election against Mr.~G. If the vote is 11--0, it is clear that each of the people who voted for Mr.~B is a cause of him winning, but the degree of responsibility of each voter for Mr.~B is lower than in a 6--5 vote. Thinking in terms of causality and responsibility was shown to be beneficial for a seemingly unrelated area of research, namely formal verification (model checking) of computerized systems. In {\em model checking}, we verify the correctness of a finite-state system with respect to a desired behavior by checking whether a labeled state-transition graph that models the system satisfies a specification of this behavior \cite{CGP99}. If the answer to the correctness query is negative, the tool provides a counterexample to the satisfaction of the specification in the system. These counterexamples are used for debugging the system, as they demonstrate an example of an erroneous behavior~\cite{CGMZ95}.
As counterexamples can be very long and complex, there is a need for an additional tool explaining ``what went wrong'', that is, pinpointing the \emph{causes} of an error on the counterexample. As I describe in more detail in the following sections, the causal analysis of counterexamples, described in~\cite{BBCOT12}, is an integral part of RuleBase, an industrial hardware verification platform of IBM~\cite{RBurl}. On the other hand, if the answer is positive, the tools usually perform some type of sanity check, to verify that the positive result was not caused by an error or underspecification (see~\cite{Kup06} for a survey). \emph{Vacuity} check, which is the most common sanity check and is a standard in industrial model checking tools, was first defined by Beer et al.~\cite{BBER01} as a situation where the property passes in a ``non-interesting way'', that is, a part of the property does not affect the model checking procedure in the system. Beer et al. state that vacuity was a serious problem in verification of hardware designs at IBM: ``our experience has shown that typically 20\% of specifications pass vacuously during the first formal-verification runs of a new hardware design, and that vacuous passes always point to a real problem in either the design or its specification or environment'' \cite{BBER01}. The general vacuity problem was formalised by Kupferman and Vardi, who defined a vacuous pass as a pass where some subformula of the original property can be replaced by its $\bot$ value without affecting the satisfaction of the property in the system \cite{KV03a}, and there is a plethora of subsequent papers and definitions addressing different aspects and nuances of vacuous satisfaction. Note that vacuity, viewed from the point of view of causal analysis, is \emph{counterfactual causality}. Indeed, a property passes non-vacuously if each of its subformulas, when replaced by $\bot$, causes falsification of the property in the system. Since specifications are by nature more general than implementations, it is quite natural to expect that the specification allows some degree of freedom in how it is going to be satisfied. Consider, for example, a specification $\G(p \vee q)$, meaning that either $p$ or $q$ should hold in each state. If in the system under verification both $p$ and $q$ are \T in all states, the standard vacuity check will alert the verification engineer to vacuity in $p$ and in $q$. In contrast, introducing a contingency where $p$ is set to \F causes the result of model checking to counterfactually depend on $q$ (and similarly for $q$ and $p$), indicating that both $p$ and $q$ play some role in the satisfaction of the specification in the system. \emph{Coverage} check is a concept ``borrowed'' from testing and simulation-based verification, where various coverage metrics are traditionally used as heuristic measures of exhaustiveness of the verification procedure \cite{TK01}. In model checking, the suitable coverage concept is \emph{mutation coverage}, introduced by Hoskote et al.~\cite{HKHZ99}, and formalized in \cite{CKV01,CKKV01,CK02a,CKV03}. In this definition, an element of the system under verification is considered \emph{covered} by the specification if changing (\emph{mutating}) this element falsifies the specification in the system. Note, again, that this definition is, essentially, counterfactual causality.
As a motivating example for the necessity of a finer-grained analysis, consider the property $\cF req$, meaning that every computation contains at least one request. Now consider a system in which requests are sent frequently, resulting in several requests on each of the computational paths. All these requests will be considered not covered by the mutation coverage metric, as changing each one of them separately to \F does not falsify the property; and yet, each of these requests plays a role in the satisfaction of the specification (or, using the notions from causality, for each request there exists a contingency that creates a counterfactual dependence between this request and the satisfaction of the specification) \cite{CHK08}. While causality allows us to extend the concepts of vacuity and coverage to include elements that \emph{affect} the satisfaction of the specification in the system in some way, it is still an all-or-nothing concept. Harnessing the notion of responsibility to measure the influence of different elements on the success or failure of the model checking process introduces the quantification aspect, providing a finer-grained analysis. While, as I discuss in more detail below, the full-blown responsibility computation is intractable for all but very small systems, introducing a \emph{threshold} on the value of responsibility, in order to detect only the most influential causes, reduces the complexity and makes the computation manageable for real systems \cite{CHK08}. The quantification provided by the notion of responsibility and the distinction between influential and non-influential causes have been applied to symbolic trajectory evaluation, where ordering the causes by their degree of responsibility was demonstrated to be a good heuristic for instantiating a minimal number of variables that is sufficient to determine the output value of the circuit~\cite{CGY08}. In the next sections I provide a brief overview of the relevant concepts and describe the applications of causality to formal verification in more detail. \section{Causality and Responsibility -- Definitions}\label{sec:definitions} \label{sec:prelim} In this section, I briefly review the details of the Halpern and Pearl (HP) definition of causal models and causality \cite{HP01b} and the definitions of responsibility and blame \cite{CH04} in causal models. \subsection{Causal models} A {\em signature\/} is a tuple $\cS = \zug{\U,\V,\R}$, where $\U$ is a finite set of {\em exogenous\/} variables, $\V$ is a finite set of {\em endogenous\/} variables, and $\R$ associates with every variable $Y \in \U \cup \V$ a finite nonempty set $\R(Y)$ of possible values for $Y$. Intuitively, the exogenous variables are ones whose values are determined by factors outside the model, while the endogenous variables are ones whose values are ultimately determined by the exogenous variables. A (recursive) {\em causal model\/} over signature $\cS$ is a tuple $M = \zug{\cS,\cF}$, where $\cF$ associates with every endogenous variable $X \in \V$ a function $F_X$ such that $F_X: (\times_{U \in \U} \R(U)) \times (\times_{Y \in \V \setminus \{ X \}} \R(Y)) \rightarrow \R(X)$, and the functions have no circular dependencies. That is, $F_X$ describes how the value of the endogenous variable $X$ is determined by the values of all other variables in $\U \cup \V$. If all variables have only two values, we say that $M$ is a {\em binary causal model}.
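As a concrete illustration, the rock-throwing story from the introduction can be captured by a binary causal model (this is the standard model from \cite{HP01}; the variable names below are the conventional ones rather than taken from the discussion above). Let $\mathit{ST}$ and $\mathit{BT}$ stand for ``Suzy (Billy) throws'', $\mathit{SH}$ and $\mathit{BH}$ for ``Suzy's (Billy's) rock hits the bottle'', and $\mathit{BS}$ for ``the bottle shatters'', with the equations
\[
\mathit{SH} = \mathit{ST}, \qquad \mathit{BH} = \mathit{BT} \wedge \neg\mathit{SH}, \qquad \mathit{BS} = \mathit{SH} \vee \mathit{BH}.
\]
The auxiliary variables $\mathit{SH}$ and $\mathit{BH}$ encode the order in which the rocks arrive: the contingency $\mathit{BT} \gets 0$ makes $\mathit{BS}$ counterfactually depend on $\mathit{ST}$, whereas no analogous contingency works for $\mathit{BT}$, since freezing $\mathit{BH}$ at its original value $0$ blocks Billy's throw from affecting $\mathit{BS}$.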
We can describe (some salient features of) a causal model $M$ using a {\em causal network}: a graph with nodes corresponding to the variables in $\V$ and an edge from a node labeled $X$ to one labeled $Y$ if $F_Y$ depends on the value of $X$. We focus our attention on \emph{recursive models} -- those whose associated causal network is a directed acyclic graph. Intuitively, variables can have a causal effect only on their descendants in the causal network; if $Y$ is not a descendant of $X$, then a change in the value of $X$ has no effect on the value of $Y$. A setting $\vec{u}$ for the variables in $\U$ is called a {\em context}. It should be clear that if $M$ is a recursive causal model, then there is always a unique solution to the equations in $M$, given a context. Given a causal model $M = (\cS,\cF)$, a (possibly empty) vector $\vec{X}$ of variables in $\V$, and a vector $\vec{x}$ of values for the variables in $\vec{X}$, a new causal model, denoted $M_{\vec{X} \gets \vec{x}}$, is defined as identical to $M$, except that the equations for the variables in $\vec{X}$ in $\cF$ are replaced by $\vec{X} = \vec{x}$. Intuitively, this is the causal model that results when the variables in $\vec{X}$ are set to $\vec{x}$ by some external action that affects only the variables in $\vec{X}$ (and overrides the effects of the causal equations). A causal formula $\phi$ -- a Boolean combination of events capturing the values of variables in the model -- is \T or \F in a causal model, given a \emph{context}. We write $(M,\vec{u}) \models \phi$ if $\phi$ is \T in the causal model $M$, given the context $\vec{u}$. $(M,\vec{u}) \models [\vec{Y} \gets \vec{y}](X = x)$ if the variable $X$ has value $x$ in the unique solution to the equations in $M_{\vec{Y} \gets \vec{y}}$ in context $\vec{u}$. We now review the HP definition of causality. \begin{definition}[Cause \cite{HP01b}]\label{def-cause} $\vec{X} = \vec{x}$ is a {\em cause\/} of $\varphi$ in $(M,\vec{u})$ if the following three conditions hold: \begin{description} \item[AC1.] $(M,\vec{u}) \models (\vec{X} = \vec{x}) \wedge \varphi$. \item[AC2.] There exists a partition $(\vec{Z},\vec{W})$ of $\V$ with $\vec{X} \subseteq \vec{Z}$ and some setting $(\vec{x}',\vec{w})$ of the variables in $(\vec{X},\vec{W})$ such that if $(M,\vec{u}) \models Z = z^*$ for $Z \in \vec{Z}$, then \be \item[(a)] $(M,\vec{u}) \models [ \vec{X} \leftarrow \vec{x}', \vec{W} \leftarrow \vec{w}]\neg{\varphi}$. \item[(b)] $(M,\vec{u}) \models [ \vec{X} \leftarrow \vec{x}, \vec{W}' \leftarrow \vec{w}, \vec{Z}' \leftarrow \vec{z}^*]\varphi$ for all subsets $\vec{Z}'$ of $\vec{Z} \setminus \vec{X}$ and all subsets $\vec{W}'$ of $\vec{W}$. \ee The tuple $(\vec{W}, \vec{w}, \vec{x}')$ is said to be a \emph{witness} to the fact that $\vec{X} = \vec{x}$ is a cause of $\varphi$. \item[AC3.] $(\vec{X} = \vec{x})$ is minimal; no subset of $\vec{X}$ satisfies AC2. \end{description} \end{definition} Essentially, Definition~\ref{def-cause} extends the counterfactual definition of causality by considering contingencies $\vec{W}$ -- changes in the current context that by themselves do not change the value of $\varphi$, but create a counterfactual dependence between the value of $\vec{X}$ and the value of $\varphi$.
The variables in $\vec{Z}$ should be thought of as describing the ``active causal process'' from $\vec{X}$ to $\varphi$, and are needed in order to make sure that, while introducing contingencies, we preserve the causal process from $\vec{X}$ to $\varphi$. The minimality requirement AC3 is needed in order to avoid adding irrelevant variables to $\vec{X}$. We note that Halpern recently updated the definition of causality, changing the concept to focus on the variables that are \emph{frozen} at their original values, rather than considering contingencies \cite{Hal15b}. Since the existing work on the applications of causality to formal verification uses the previous definition, we continue using it in this paper. \vspace{-0.3cm} \subsection{Responsibility and Blame}\label{sec-def-resp} Causality is a ``0--1'' concept; $\vec{X} = \vec{x}$ is either a cause of $\phi$ or it is not. Now consider two voting scenarios: in the first, Mr.~G beats Mr.~B by a vote of 11--0. In the second, Mr.~G beats Mr.~B by a vote of 6--5. According to the HP definition, all the people who voted for Mr.~G are causes of him winning. While this does not seem so unreasonable, it does not capture the intuition that each voter for Mr.~G is more critical to the victory in the case of the 6--5 vote than in the case of the 11--0 vote. The notion of \emph{degree of responsibility}, introduced by Chockler and Halpern \cite{CH04}, extends the notion of causality to capture the differences in the degree of criticality of causes. In the case of the 6--5 vote, no changes have to be made to make each voter for Mr.~G critical for Mr.~G's victory; if he had not voted for Mr.~G, Mr.~G would not have won. Thus, each voter has degree of responsibility $1$ (that is, $k=0$ in Definition~\ref{def-resp} below). On the other hand, in the case of the 11--0 vote, for a particular voter to be critical, five other voters have to switch their votes; thus, $k=5$, and each voter's degree of responsibility is $1/6$. This notion of degree of responsibility has been shown to capture (at a qualitative level) the way people allocate responsibility \cite{LGZ13}. \begin{definition}[Degree of Responsibility \cite{CH04}]\label{def-resp} The {\em degree of responsibility of $\vec{X}=\vec{x}$ for $\phi$ in $(M,\vec{u})$\/}, denoted $\dr((M,\vec{u}), (\vec{X} \gets \vec{x}), \phi)$, is $0$ if $\vec{X}=\vec{x}$ is not a cause of $\phi$ in $(M,\vec{u})$; it is $1/(k+1)$ if $\vec{X}=\vec{x}$ is a cause of $\phi$ in $(M,\vec{u})$ according to Definition~\ref{def-cause}, with a smallest witness in which $\vec{W}$ has size $k$. \end{definition} When determining responsibility, it is assumed that everything relevant about the facts of the world and how the world works (which we characterize in terms of {\em structural equations\/}) is known. But this misses an important component of determining what Chockler and Halpern call {\em blame}: the epistemic state. Formally, the degree of blame, introduced by Chockler and Halpern, is the expected degree of responsibility \cite{CH04}. This is perhaps best understood by considering a firing squad with ten excellent marksmen (the example is due to Tim Williamson). Only one of them has live bullets in his rifle; the rest have blanks. The marksmen do not know which of them has the live bullets. The marksmen shoot at the prisoner and he dies. The only marksman that is the cause of the prisoner's death is the one with the live bullets.
That marksman has degree of responsibility $1$ for the death; all the rest have degree of responsibility $0$. However, each of the marksmen has degree of blame $1/10$. An agent's uncertainty is modelled by a pair $(\K,\Pr)$, where $\K$ is a set of pairs of the form $(M,\vec{u})$, in which $M$ is a causal model and $\vec{u}$ is a context, and $\Pr$ is a probability distribution over $\K$. Note that probability is used here in a rather non-traditional sense, to capture the epistemic state of an agent, rather than an actual probability over values of variables. \begin{definition}[Blame \cite{CH04}]\label{def-blame} The {\em degree of blame of setting $\vec{X}$ to $\vec{x}$ for $\phi$ relative to epistemic state $(\K,\Pr)$\/}, denoted $\db(\K,\Pr,\vec{X} \gets \vec{x}, \phi)$, is defined as the \emph{expected value} of the degree of responsibility over the probability space $(\K,\Pr)$. \end{definition} \section{Coverage in the framework of causality}\label{cover-cause} The following definition of coverage is based on the study of {\em mutant coverage\/} in simulation-based verification~\cite{MLS78,MO91,AB01}, and is the one that is adopted in all (or almost all) papers on coverage metrics in formal verification today (see, for example, \cite{HKHZ99,CKV01,CKKV01,CK02a,CKV03}). For a Kripke structure $K$, an atomic proposition $q$, and a state $w$, we denote by $\dKw$ the Kripke structure obtained from $K$ by flipping the value of $q$ in $w$. \begin{definition}[Coverage]\label{cov-def} Consider a Kripke structure $K$, a specification $\varphi$ that is satisfied in $K$, and an atomic proposition $q \in AP$. A state $w$ of $K$ is {\em $q$-covered by $\varphi$\/} if $\dKw$ does not satisfy $\varphi$. \end{definition} It is easy to see that coverage corresponds to the simple counterfactual-dependence approach to causality. Indeed, a state $w$ of $K$ is {\em $q$-covered by $\varphi$\/} if $\varphi$ holds in $K$ and, had $q$ had a different value in $w$, $\varphi$ would not have been true in $K$. The following example illustrates the notion of coverage and shows that the counterfactual approach to coverage misses some important insights into how the system satisfies the specification. Let $K$ be a Kripke structure consisting of one path, in which one request is followed by three grants in subsequent states, and let $\phi = \G(req \rightarrow \cF grant)$ (every request is eventually granted). It is easy to see that $K$ satisfies $\phi$, but that none of the states are covered with respect to \emph{grant}, as flipping the value of \emph{grant} in one of them does not falsify $\phi$ in $K$. On the other hand, representing the model checking procedure as a causal model with a context corresponding to the actual values of atomic propositions in states (see \cite{CHK08} for a formal description of this representation) demonstrates that for each state there exists a contingency where the result counterfactually depends on the value of \emph{grant} in this state; the contingency is removing the grants in the two other states. Hence, while none of the states is covered with respect to \emph{grant}, they are all causes of $\phi$ in $K$ with responsibility $1/3$. In the example above, and typically in the applications of causality to formal verification, there are no structural equations over the internal endogenous variables.
However, if we want to express the temporal characteristics of our model -- for example, to say that the first \emph{grant} is important, whereas subsequent ones are not -- the way to do so is by introducing internal auxiliary variables, expressing the order between the grants in the system. \section{Explanation of Counterexamples Using Causality} Explanation of counterexamples addresses a basic aspect of understanding a counterexample: the task of finding the failure in the trace itself. To motivate this approach, consider a verification engineer who is formally verifying a hardware design written by a logic designer. The verification engineer writes a specification -- a temporal logic formula -- and runs a model checker, in order to check the formula on the design. If the formula fails on the design-under-test (DUT), a counterexample trace is produced and displayed in a trace viewer. The verification engineer does not attempt to debug the DUT implementation (since that is the responsibility of the logic designer who wrote it). Her goal is to look for some basic information about the manner in which the formula fails on the specific trace. If the formula is a complex combination of several conditions, she needs to know which of these conditions has failed. These basic questions are prerequisites to deeper investigations of the failure. Ben-David et al. present a method and a tool for explaining the trace, without involving the model from which it was extracted \cite{BBCOT12}. This gives the approach the advantage of being light-weight, as its running time depends only on the sizes of the specification and the counterexample, which are much smaller than the size of the system. An additional advantage of the tool is that it is independent of the verification procedure, and can be added as an external layer to any tool, or even applied as an explanation of simulation traces. The main idea of the algorithm is to represent the trace and the property that fails on this trace as a causal model and context (see Section~\ref{cover-cause} for the description of the transformation). Then, the values of signals in specific cycles are viewed as variables, and the set of causes for the failure is marked as red dots on the trace that is shown to the user graphically, in the form of a timing diagram. The tool is a part of the IBM RuleBase verification platform~\cite{RBurl} and is used extensively by its users. While the trace is small compared to the system, the complexity of the exact algorithm for computing causality leads to time-consuming computations; in order to keep the interactive nature of the tool, the algorithm operates in one pass on the trace and computes an \emph{approximate} set of causes, which coincides with the actual set of causes on all but contrived examples. The reader is referred to \cite{BBCOT12} for more details of the implementation. \section{Responsibility in Symbolic Trajectory Evaluation} Symbolic Trajectory Evaluation (STE)~\cite{STEBryant} is a powerful model checking technique for hardware verification, which combines symbolic simulation with 3-valued abstraction. Consider a circuit $M$, described as a directed acyclic graph of nodes that represent gates and latches. For such a circuit, an STE assertion is of the form $A\rightarrow C$, where the \emph{antecedent} $A$ imposes constraints over nodes of $M$ at different times, and the \emph{consequent} $C$ imposes requirements on $M$'s nodes at different times.
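For instance (a schematic toy example of our own, not taken from \cite{STEBryant}): for an AND gate with inputs \texttt{in1} and \texttt{in2}, output \texttt{out}, and a unit delay, a typical STE assertion is
\[
\underbrace{(\texttt{in1} \mbox{ is } a) \wedge (\texttt{in2} \mbox{ is } b) \mbox{ at time } 0}_{A} \;\rightarrow\; \underbrace{(\texttt{out} \mbox{ is } a \wedge b) \mbox{ at time } 1}_{C},
\]
where $a$ and $b$ are symbolic Boolean variables.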
The antecedent may introduce symbolic Boolean variables on some of the nodes. The nodes that are not restricted by $A$ are initialized by STE to the value $X$ (``unknown''), thus obtaining an \emph{abstraction} of the checked model. STE is successfully used in the hardware industry for verifying very large models with wide data paths~\cite{forte,HighSchubert,CaseStudy}. The common method for performing STE is by representing the values of each node in the circuit by \emph{Binary Decision Diagrams (BDDs)}. To avoid the potential state explosion resulting from instantiating all unconstrained nodes, typically the circuit is refined manually in iterations, until the value of the circuit is determined. To avoid the need for manual refinement (which requires a close familiarity with the structure of the circuit), \cite{CGY08} suggests computing an approximation of the \emph{degree of responsibility} of each node for the value of the circuit's output. Then, the instantiation can proceed in the order of decreasing degree of responsibility. The idea behind this algorithm is that the nodes with the highest degree of responsibility are more likely to influence the value of the circuit, and hence we will avoid instantiating too many nodes. The algorithm was implemented in Intel's STE framework for hardware verification and demonstrated better results than manual refinement~\cite{CGY08}. \section{\ldots and Beyond} In formal verification, there is no natural application for the notion of blame (Def.~\ref{def-blame}), since the model and the property are assumed to be known. On the other hand, in legal applications it is quite natural to talk about the epistemic state of an agent, representing what the agent knew or should have known. As~\cite{HP01b} points out, the legal system does not agree with the structural definitions of causality, responsibility, and blame. However, it is still possible to apply our methodology of representing a problem using causal models in order to improve our understanding and analysis of particular situations. In \cite{CFKL15}, we make the case for using the framework of actual causality in order to guide legal inquiry. In fact, the concepts of responsibility and blame fit the procedure of legal inquiry very well, since we can capture both the limited knowledge of the participants in the case, and the unknown factors in the case. We use the case of baby P. -- a baby that died from continuous neglect and abuse, which was completely missed by the social services and the doctor who examined him -- to demonstrate how we can capture the known and unknown factors in the case and attempt to quantify the blame of the different parties involved. \bibliographystyle{eptcs}
\section{Introduction} \label{Sect1} The standard cosmological model describes a universe with homogeneous and isotropic geometry. The matter content is described by a set of cosmic fluids satisfying some equations of state (EoS). The inhomogeneities are allowed only in the form of small perturbations, which define most of the relevant observables. The CMB spectrum, Large Scale Structure, BAO and other observations demonstrate the correctness of this description; that is, the dynamics of perturbations proves that the expanding universe is very close to homogeneous and isotropic at sufficiently large scales. The question is whether the universe was ``born'' isotropic and homogeneous or became such due to some internal mechanism at the early stage of its evolution. The complete analytical description of the possible anisotropies and non-homogeneities of the early universe is impossible. Therefore the standard approach is to assume a certain symmetry of the metric tensor. For instance, homogeneity and isotropy are possible symmetries. In formulating a more general metric, one possibility is to consider an anisotropic but homogeneous space-time. The pioneering work \cite{Kantowski:1966te} explored the case of a metric anisotropic in two space directions with a given group of symmetries, in a universe filled by dust, while \cite{Ellis:1966ta} dealt with the special cases of locally rotationally symmetric and shear-free dust. The homogeneous models can be grouped by the possible space symmetries given by the Bianchi classification, which is based on the Lie algebras satisfied by the Killing vectors or, equivalently, the structure constants of the hypersurface's tetrad system \cite{landau1987}. The first possible space of this classification, called Bianchi type I, has three Killing vectors corresponding to the three spatial translations. Due to the simplicity of the Bianchi-I metric, it was extensively used in anisotropic cosmological models, including the Kasner vacuum solution \cite{Kasner:1921zz}. More sophisticated cosmological models, based on other types of the Bianchi classification, are possible. One can mention, for instance, the renowned works on the Bianchi type IX models by Belinskii, Khalatnikov and Lifshitz \cite{Belinsky:1970ew,Belinsky:1982pk}, and the Mixmaster universe model by Misner \cite{Misner:1967uu,Misner:1969hg}, which shows a chaotic behaviour. One of the main questions concerning anisotropic cosmological models is whether the universe could have been anisotropic in the early epoch and evolved to become isotropic. It is certainly interesting to identify a mechanism which could be responsible for such an isotropization. It is highly desirable to have a maximally simple description of such a universe, such that further analysis of the perturbations could provide observational evidence of isotropization. Some mechanisms of this kind come from the analysis of the asymptotic behaviour of solutions with isotropic classical fluids \cite{Jacobs:1968zz,Ellis:1968vb} (see also \cite{Ellis2012}), viscous and anisotropic stress tensor \cite{hervik2007}, primordial magnetic field \cite{Jacobs:1969qca,Thorne:1967zz} and quantum effects in the primordial universe \cite{Lukash:1976kr,Hu:1978zd}. In what follows we concentrate on the Bianchi-I model.
From the mentioned references we know that the speed of isotropization of the metric may depend on the EoS of the content of the universe and, in particular, is different for matter or radiation. The situation is qualitatively similar to the dynamics of metric and density perturbations on the isotropic background, but we do not need to treat anisotropies as small perturbations. Usually, the EoS is assumed to be a linear relation between pressure and energy density, $p = \omega\rho$, with a constant $\omega$. The value of $\omega$ corresponds to the type of fluid. For example, $\omega = -1$ means cosmological constant, $\omega=0$ dust and $\omega=1/3$ radiation. According to the recent data (see, e.g., \cite{Riess:1998cb} and \cite{Bergstrom:2000pn}) the present-day universe is dominated by non-luminous sources, such as Dark Matter (DM) and Dark Energy. It is most likely that the DM is a gas of weakly interacting massive particles, while the main candidate for Dark Energy is the cosmological constant. The observational data show that for most of its history the universe was very isotropic, and therefore the isotropization should occur very early. Since in the past the universe was much hotter than now, the contribution of the cosmological constant to the overall energy density balance at the epoch of isotropization was very small \cite{Bludman}. At the same time, although the mass and warmness of the DM particles are unknown, the DM is supposed to be very hot in the early universe and to become relatively cold at a later stage. Therefore it makes sense to explore the isotropization mechanism for the case of a universe filled by baryonic and dark matter, which are hot in the early universe and dust-like in the present epoch. The simplest appropriate description for the particles in a very early universe is the ideal relativistic gas of massive particles. Perhaps the most useful representation of such a gas is through the Reduced Relativistic Gas model (RRG), which provides a simplified approximation to the Maxwell distribution. The EoS of RRG was originally invented by A.D. Sakharov in the famous 1966 paper \cite{Sakharov:1966aja}, to interpolate between radiation and dust regimes. In this work the interpolating EoS was used for the first derivation of the CMB spectrum, but the details of how to obtain the EoS of the model were not given. More recently the RRG model was reinvented by our group in Refs.~\cite{FlaFlu,Fabris:2008qg}. The main advantage of this model is that the solutions for the background cosmology can be obtained in a closed, analytic form for a wide class of models including RRG and other fluids \cite{Medeiros:2012ud}, while the EoS is very close to that of the relativistic gas of ideal particles \cite{FlaFlu}. Consequently RRG has been used for a simplified evaluation of the bounds on the warmness of DM \cite{Fabris:2008qg,Hipolito-Ricaldi:2017kar}, for the description of energy exchange between matter and radiation, and for an overall rough estimate of the cosmological observables in the model with the running cosmological constant \cite{Fabris:2011am}. In the present work we apply RRG to describe the isotropization of the universe in the period when the matter content of the universe makes the transition from the radiation to the dust EoS.
We will follow the classical works \cite{Jacobs:1968zz,Jacobs:1969qca}, but instead of dealing with the radiation and dust cases separately, we consider the RRG fluid, which interpolates smoothly between the two regimes. The paper is organized as follows. In Sec. \ref{Sect2} we present a new derivation of the EoS of the RRG \cite{Sakharov:1966aja}. This new derivation is instructive and more formal than the previous one in \cite{FlaFlu}. In Sec. \ref{Sect3} we formulate the equations describing the dynamics of the Bianchi type I model in the universe filled by RRG. Sec. \ref{Sect4} describes the simplest approximations for solving these equations. In particular it is shown that the previously known radiation and dust cases represent the limiting cases of the new system of equations. The solution in the general case of RRG is possible only by means of numerical methods, as described in Sec. \ref{Sect5}. Finally, in Sec. \ref{Sect6} we draw our conclusions and describe the perspectives for further work. \section{Reduced relativistic gas: equation of state} \label{Sect2} Let us consider the EoS for the RRG model in a way different from \cite{FlaFlu}. The model describes an ideal relativistic gas of massive identical particles. The main simplification compared to the J$\ddot{\rm u}$ttner model \cite{Juttner} (see also the book \cite{Pauli}) is that within RRG the particles have identical kinetic energies. This assumption makes the EoS very simple and, in particular, provides great simplification in cosmology, both at the background and perturbations level \cite{Sakharov:1966aja}. At the same time, the difference with the EoS of the J$\ddot{\rm u}$ttner model, derived on the basis of the Maxwell distribution, does not exceed $2.5\%$ \cite{FlaFlu}. For the cosmological applications, since the J$\ddot{\rm u}$ttner model and, in general, an ideal gas of identical particles, is itself just an approximation, the RRG is a perfectly justified and useful model. The derivation of the EoS in \cite{FlaFlu} is very simple; one can say it is at the high-school level. Let us present a somewhat more formal scheme of deriving this equation in the flat Minkowski metric, which enables one, in principle, to evaluate the difference with the J$\ddot{\rm u}$ttner model analytically. The number of particles $N$ is evaluated on a three-dimensional space-like hypersurface with the normal vector $n^\mu$, with the hypersurface element area $d\sigma$. The general expression for a non-degenerate gas composed of identical particles is \cite{Hakim2011} \begin{eqnarray} N = \int d\sigma\,d^{4}p\,\, n_{\mu} p^{\mu}\,f(x,p)\,\delta(p^2-m^2), \label{equationN} \end{eqnarray} where $p^2=(p^0)^2-\delta_{ij}\,p^i\,p^j$. The distribution function $f(x,p)$ depends on the space-time coordinates and momenta, denoted by $x$ and $p$. Taking the integral over $dp_{0}$ and using the properties of the delta function, we get \begin{eqnarray} N= \int d\sigma\,\frac{d^{3}p}{p^0}\,n_{\mu}\,p^{\mu}\,f(x,p). \end{eqnarray} For the constant time hypersurface $n^{\mu}=\delta^\mu_0$ and $d\sigma=d^{3}x$ we arrive at the expression \begin{eqnarray} N=\int d^{3}x\,d^{3}p\,\,f(x,p). \label{equationNconstantTime} \end{eqnarray} The RRG corresponds to the following \textit{ansatz} for the distribution function, \begin{eqnarray} f(x,p) \,=\, C\,\delta(E-E_{0}), \end{eqnarray} where $C$ is a normalization constant, $E=p^0=\sqrt{{\vec p}^2+m^2}$ and $E_{0}$ is a constant energy of a gas particle.
Using this expression for the distribution function in (\ref{equationNconstantTime}), one can easily obtain \begin{eqnarray} N \,= C\int d^3x\,d\Omega\,dE\,\,E\,\sqrt{E^2-m^2}\,\delta(E-E_{0}), \end{eqnarray} where $d\Omega$ is the solid angle element. From the last expression, one can determine the constant $C$, leading to the final form of the distribution function, \begin{eqnarray} f\,=\,\frac{n}{4\pi E_0 \,\sqrt{E_{0}^2-m^2}}\,\delta(E-E_{0}), \label{distrubutionfunction} \end{eqnarray} where $n=N/V$ is the concentration (number of particles per unit volume) of the gas. The expression for the energy-momentum tensor is \cite{landau1987,Hakim2011} (see also the brief derivation in the Appendix) \begin{eqnarray} T^{\mu\nu} \,=\, \int\,d^{3}p\,\,\frac{p^{\mu}p^{\nu}}{p^{0}}\,f(x,p). \label{energymomentumtensor} \end{eqnarray} In the reference frame of an observer with four-velocity $u^{\mu}$, the projection of the energy-momentum tensor onto the hypersurface with normal vector $u^{\mu}$ leads to the energy density $\rho$ and pressure $p$. According to Ref.~\cite{Ellis2012}, \begin{eqnarray} \rho &=& u_{\mu}u_{\nu} T^{\mu\,\nu}, \qquad p \,=\, -\frac{1}{3}\,h_{\mu\nu}\,T^{\mu\,\nu}, \label{projections} \end{eqnarray} where $\,h_{\mu\nu}= \eta_{\mu\nu} - u_{\mu} u_{\nu}$. In the case of a comoving reference frame, in which the observer has four-velocity $u^{\mu}=\delta^{\mu}_{0}$, with the distribution function (\ref{distrubutionfunction}) the expressions (\ref{projections}) become \begin{eqnarray} \rho = n E_{0} \quad \mbox{and} \quad p = \frac{n(E_0^2-m^2)}{3E_{0}}. \label{pressure} \end{eqnarray} Defining the rest energy density $\,\rho_d=nm$, the pressure and energy density are related by the expression \begin{eqnarray} p &=& \frac{\rho}{3}\,\Big(1-\frac{\rho_d^2}{\rho^2}\Big), \label{rrgsateequation} \end{eqnarray} which is nothing else but the EoS of the RRG model \cite{Sakharov:1966aja,FlaFlu}. It is easy to see that this EoS interpolates between radiation, $p \sim \rho/3$, at high energies, when $\rho^2 \gg \rho_d^2$, and dust, $p \sim 0$, at low energies, when $\rho^2 \approx \rho_d^2$. \section{Bianchi-I type cosmology with RRG} \label{Sect3} Consider the anisotropic cosmology with the RRG fluid. Our starting point will be the space of Bianchi-I type, with the metric of the form \cite{landau1987}, \begin{eqnarray} ds^2 = dt^2 - a^2_{1}(t)\,dx^2-a_{2}^2(t)\,dy^2-a_{3}^2(t)\,dz^2. \label{lineelement} \end{eqnarray} A useful parametrization of the anisotropic metric was introduced by Misner in \cite{Misner:1967uu,Misner:1969hg}, \begin{eqnarray} a_{1/2}(t) = a(t)\,e^{\beta_{+}(t) \pm \sqrt{3}\,\beta_{-}(t)}\,, \qquad a_{3}(t) = a(t)\,e^{-2\,\beta_{+}(t)}, \label{MisnerParametrization} \end{eqnarray} where $a(t)$, $\beta_{+}(t)$ and $\beta_{-}(t)$ are unknown functions of time. In this parametrization $\sqrt{-g}=a_{1} a_{2} a_{3}= a^3\,$ and the relation between $\,\beta_{\pm}$ and $a_{i}$ is \begin{eqnarray} \beta_{+} =\frac{1}{6}\,\,\mbox{ln}\,\Big(\,\frac{a_{1}\,a_{2}}{a_{3}^2} \,\Big)\,, \qquad \beta_{-} =\frac{1}{2\,\sqrt{3}}\,\,\mbox{ln}\,\Big(\,\frac{a_{1}}{a_{2}} \,\Big). \end{eqnarray} In Ref. \cite{Ellis2012}, within the \textit{1+3} covariant formalism of a system of time-like geodesic congruence, the change of a connecting vector between geodesics, expressing the relative distance, is split into the irreducible parts called \textit{shear}, \textit{vorticity} and \textit{expansion}.
In particular, the functions $\beta_{\pm}$ are the independent components of a traceless tensor, which represents the shear. In what follows, we analyse the dynamics of the gravitational field for the metric (\ref{MisnerParametrization}), governed by the Einstein equations. The matter content of the universe is modelled by an isotropic RRG, where the pressure is assumed to be the same in all spatial directions. The energy-momentum tensor is \begin{eqnarray} T_{\mu\nu} = (\,\rho + p\,)\,u_{\mu}\,u_{\nu}+ p\,g_{\mu\nu}. \label{energymomentumtensor-1} \end{eqnarray} The conservation equation $\nabla_{\mu}\,T^{\mu\nu}=0$ leads to \begin{eqnarray} \dot{\rho}+ 3\,H\,(\,\rho+p\,)=0, \qquad H=\frac{\dot{a}}{a}\,, \label{conservationequation} \end{eqnarray} where the dot means the derivative with respect to the physical time. Let us note that the anisotropy of the metric does not affect the last equation because of the isotropic pressure. Eq.~(\ref{conservationequation}) can be integrated by using the EoS of the RRG, yielding the same result as in the isotropic case \cite{FlaFlu}, \begin{eqnarray} \rho = \sqrt{\rho^2_{1}\,\Big(\,\frac{a_{0}}{a} \,\Big)^6 +\rho^2_{2}\,\Big(\,\frac{a_{0}}{a} \,\Big)^8 }, \label{energydensity-rho} \end{eqnarray} where $\rho_{1}$, $\rho_{2}$ and $a_{0}$ are integration constants. As usual, one can distinguish two extreme regimes in the solution (\ref{energydensity-rho}). In the case $\rho_1 \ll \rho_2$, one meets the ultra-relativistic case, that is, RRG demonstrates radiation-like behaviour. On the other hand, for $\rho_1 \gg \rho_2$, RRG behaves like dust. Now we are in a position to consider the Einstein equations for the Bianchi-I metric. According to Ref.~\cite{hervik2007}, the Einstein tensor $G_{\mu\nu}$ for the metric (\ref{MisnerParametrization}) assumes the form \begin{eqnarray} G_{0\,0} &=& 3\,H^2 \,-\, 3\big(\,\dot{\beta}_+^2 + \dot{\beta}_-^2\,\big) \,, \nonumber \\ G_{1\,1} &=& -3H^2 - 2\dot{H} \,-\, 3\big(\,\dot{\beta}_+^2 + \dot{\beta}_-^2\,\big) \,+\nonumber\\ & &+\Big(\frac{d^2}{dt^2} +3H\frac{d}{dt}\Big)\,\big(\,\beta_{+} +\sqrt{3}\,\beta_{-}\,\big),\nonumber \\ G_{2\,2} &=& -3H^2 - 2\dot{H} \,-\, 3\big(\,\dot{\beta}_+^2 + \dot{\beta}_-^2\,\big) \,+\nonumber\\ & &+\Big(\frac{d^2}{dt^2}+3H\frac{d}{dt}\Big) \,\big(\,\beta_{+}-\sqrt{3}\,\beta_{-}\,\big),\nonumber \\ G_{3\,3} &=& - 3H^2 - 2\dot{H} \,-\, 3\big(\,\dot{\beta}_+^2 + \dot{\beta}_-^2\,\big) \,-\nonumber\\ & &-2\, \Big(\frac{d^2}{dt^2}+3H\frac{d}{dt}\Big)\,\beta_{+}. \label{eisnteintensor} \end{eqnarray} The Einstein equations are given by $G_{\mu\nu}= 8\pi G T_{\mu\nu}$. For an isotropic $T_{\mu\nu}$ tensor, the Einstein equations can be rewritten such that the pressure of matter does not enter the equations. Following \cite{hervik2007}, we define the new quantities \begin{eqnarray} G_{+} &=& \frac{1}{6}\, \big(\,G_{11}+G_{22}-2\,G_{33}\,\big), \nonumber \\ G_{-} &=& \frac{1}{2\,\sqrt{3}}\, \big(\,G_{11}-G_{22}\,\big), \label{einsteinmaismenosdefinition} \end{eqnarray} yielding significant simplifications compared to (\ref{eisnteintensor}), \begin{eqnarray} G_{\pm} &=& \ddot{\beta}_{\pm} + 3\,H\dot{\beta}_{\pm}. \label{einsteinmaismenorcalculated} \end{eqnarray} The Einstein equations boil down to\footnote{ Eqs.~(\ref{einsteintensormaismenos}) also follow from the variation of the Einstein-Hilbert action with respect to $\,\beta_{\pm}$.} \begin{eqnarray} G_{+} &=& \frac{8\,\pi\,G}{6}\,\big(T_{11}+T_{22}-2\,T_{33}\big), \nonumber \\ G_{-} &=& \frac{8\,\pi\,G}{2\sqrt{3}}\,\big(T_{11} - T_{22}\,\big).
\label{einsteintensormaismenos} \end{eqnarray} Furthermore, since we assume an isotropic energy-momentum tensor, $T_{11}=T_{22}=T_{33}$ and \begin{eqnarray} G_{\pm} = 0. \label{anisotropicequation} \end{eqnarray} Finally, the $00$-component of the Einstein equations, together with Eqs.~(\ref{anisotropicequation}) and (\ref{einsteinmaismenorcalculated}), yields \begin{eqnarray} H^2 \,-\, \big( \dot{\beta}_{+}^2 + \dot{\beta}_{-}^2 \big) &=& \frac{8\pi G}{3}\,\rho \label{eisnteinequations1} \\ \mbox{and } \qquad 3H\,\dot{\beta_{\pm}}+ \ddot{\beta_{\pm}}&=& 0. \label{eisnteinequations2} \end{eqnarray} A first integral of (\ref{eisnteinequations2}) can be easily found in the form \begin{eqnarray} \dot{\beta_{\pm}} = \gamma_{\pm}\,a^{-3}, \label{firsintegral} \end{eqnarray} where $\gamma_{\pm}$ are integration constants. The last result transforms (\ref{eisnteinequations1}) into an equation for the conformal factor of isotropic expansion, $a(t)$. Defining the useful constants $\Gamma$ and $\phi$, \begin{eqnarray} \gamma_{+} &=& \Gamma \cos \phi \,,\qquad \gamma_{-} \,=\, \Gamma\sin \phi\,, \label{angularrelations} \end{eqnarray} we arrive at the generalized form of the Friedmann equation for the anisotropic Bianchi-I metric with RRG matter content, \begin{eqnarray} H^2\,=\, \frac{\dot{a}^2}{a^2} &=& \Gamma^2\,\Big(\,\frac{a_{0}}{a}\,\Big)^6 \,+\nonumber\\ & & +\frac{8\pi G}{3}\,\rho_1\Big(\,\frac{a_{0}}{a}\,\Big)^4 \,\sqrt{\Big(\,\frac{a_{0}}{a}\,\Big)^{-2}+b^2}\,, \label{prefriedmann} \end{eqnarray} where $b = \rho_2/\rho_1$ is the warmness parameter \cite{Fabris:2008qg}. The specific new element compared to the isotropic cosmological model is the first term on the {\it r.h.s.}. This term has the stiff-matter $a^{-6}$ scaling and hence it is irrelevant for the late cosmology. At the same time, it may be quite relevant in the early universe. Due to the new term, caused by the anisotropy, the very early universe behaves according to \begin{eqnarray} a \,\sim t^{1/3}\,, \label{13} \end{eqnarray} different from the radiation-dominated universe. It is well-known that the same dynamics of the conformal factor can be achieved in the isotropic flat universe with an ideal fluid with EoS $p=\rho$. In order to see this, consider the EoS $p=w\rho$, with constant $w$. Using the conservation law results in $\rho = \rho_0 a^{-3\,(w+1)}$. By comparing this result to the first term on the \textit{r.h.s} of (\ref{prefriedmann}), we arrive at $w=1$. A fluid of this kind was called \textit{stiff matter} when first introduced by Zel'dovich \cite{Zeldovich:1972zz}. We have seen that this EoS effectively results from integrating the anisotropies at the early stage of the evolution of the universe. The solution of Eqs.~(\ref{firsintegral}) can be expressed as \begin{eqnarray} \beta_{\pm}(t)-\beta^{0}_{\pm} = \gamma_{\pm}W(t), \qquad W(t)=\int_{t_0}^t \frac{dt'}{a^3(t')}. \label{firstintegral2} \end{eqnarray} In this expression $t_0$ corresponds to the initial moment of time and $\beta^{0}_{\pm}$ are integration constants. One can notice that both $\beta_{\pm}$, with the exception of the integration constants, have the same functional form. After $W(t)$ is found, the parameters $\,\gamma_{\pm}\,$ determine $\beta_{\pm}$ and consequently the metric components via Eqs.~(\ref{MisnerParametrization}). \section{Approximations} \label{Sect4} It is easy to present the solution for (\ref{prefriedmann}) and (\ref{firstintegral2}) in the form of quadratures; however, the integrals are not elementary functions and the qualitative analysis becomes cumbersome.
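For completeness (a direct rewriting of (\ref{prefriedmann}), added here for clarity), the quadrature reads \begin{eqnarray} t - t_{0} \,=\, \int_{a(t_{0})}^{a(t)} \frac{da'}{a'}\, \Bigg[\,\Gamma^2\,\Big(\,\frac{a_{0}}{a'}\,\Big)^6 + \frac{8\pi G}{3}\,\rho_1\Big(\,\frac{a_{0}}{a'}\,\Big)^4 \sqrt{\Big(\,\frac{a_{0}}{a'}\,\Big)^{-2}+b^2}\,\Bigg]^{-1/2}, \nonumber \end{eqnarray} after which $W(t)$, and hence $\beta_{\pm}$, follow from (\ref{firstintegral2}).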
Therefore, in order to have a better idea of the physical output of these equations, we split the derivation of the scale factor dependence into two different considerations. In the present section we consider three approximations, namely, vacuum, radiation and dust. In the next section we present the results of a numerical solution in the general case. The radiation and dust approximations come from the limits of the RRG EoS, depending on the value of the parameter $b$. The approximation for vacuum will be explained below. The considerations in this section are almost completely non-original and are presented so as to serve as a reference for the subsequent numerical solutions. When $a(t)$ is very small, one can keep only the first (stiff matter of anisotropic origin) term on the \textit{r.h.s} of (\ref{prefriedmann}). This procedure is equivalent to taking a vacuum solution, because in this regime we are disregarding the terms coming from the matter content. Indeed, it is known that for the evolution of homogeneous and anisotropic models in the vicinity of the singularity the matter content has little relevance \cite{Lifshitz:1963ps} (see also \cite{Zeldovich:1983cr}). The vacuum metric of Bianchi type I is called the Kasner solution. Following \cite{hervik2007}, we arrive at \begin{eqnarray} \frac{\dot{a}^2}{a^2} = \Gamma^2\Big(\frac{a_{0}}{a}\Big)^6, \end{eqnarray} which can be solved in the form \begin{eqnarray} \Big(\frac{a}{a_{0}}\Big)^3 = 3\Gamma (t-t_{0}). \label{kasner1} \end{eqnarray} Setting $a_{0}=1$, the equations (\ref{firstintegral2}) can be integrated, yielding \begin{eqnarray} \beta_{\pm}(t)\,=\,\beta^{(0)}_{\pm} \,+\,\frac{\gamma_{\pm}}{3\,\Gamma}\,\,\mbox{ln}\,\Big(\,\frac{t}{t_{0}}\,\Big). \label{kasner2} \end{eqnarray} Here $\beta^{(0)}_{\pm}$ and $t_{0}$ are integration constants. From the angular relations (\ref{angularrelations}) the functions $a_k(t)$ can be presented as \begin{eqnarray} a_k(t) &=& (3\Gamma)^{1/3}\,t^{p_k}\,,\qquad k=1,2,3\,. \label{kasner3} \end{eqnarray} The parameters $p_k$ can be written using the notation (\ref{angularrelations}), \begin{eqnarray} p_{1/2} &=& \frac{1}{3}\,\big(1+ \cos\,\phi \pm \sqrt{3}\,\sin\,\phi\big)\,, \nonumber \\ p_{3} &=& \frac{1}{3}\,\big(1- 2\, \cos\,\phi \big)\,, \label{kasnerparameters1} \end{eqnarray} and the line element as \begin{eqnarray} ds^2 = dt^2 - \big(3\Gamma\big)^{2/3}\,\big[t^{2p_{1}}\,dx^2 + t^{2p_{2}}\,dy^2 + t^{2p_{3}}\,dz^2\,\big]. \label{kasnerelement1} \end{eqnarray} The multiplicative constant can be absorbed into the spatial coordinates, providing the standard form \cite{landau1987}, \begin{eqnarray} ds^2 = dt^2 - t^{2p_{1}}\,dx^2- t^{2p_{2}}\,dy^2-t^{2p_{3}}\,dz^2, \label{kasnerelement2} \end{eqnarray} where the parameters $p_{1}$, $p_{2}$ and $p_{3}$ satisfy the algebraic constraints \begin{eqnarray} p_{1}^2+p_{2}^2+ p_{3}^2 = 1 ,\quad p_{1}+p_{2}+ p_{3} =1. \label{kasnerrelations} \end{eqnarray} Finally, in the Kasner solution \begin{eqnarray} a(t) = \big[\,a_{1}(t)\,a_{2}(t)\,a_{3}(t)\,\big]^{1/3} =t^{\frac{1}{3}\,(p_{1}+p_{2}+p_{3})}=t^{1/3}. \end{eqnarray} The approximations for which the analytic solution can be easily obtained correspond to the ultra-relativistic ($b^{-1}\to 0$) or dust ($b\to 0$) regimes. In what follows we consider these two cases separately. Let us note that the general form of the solutions (\ref{firstintegral2}) remains the same independently of the approximation used for the isotropic energy-momentum tensor.
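Before turning to these limits, let us note, as a simple consistency check (added here for clarity), that the parametrization (\ref{kasnerparameters1}) automatically satisfies the constraints (\ref{kasnerrelations}): \begin{eqnarray} p_{1}+p_{2}+p_{3} &=& \frac{1}{3}\,\big[(1+\cos\phi+\sqrt{3}\sin\phi)+(1+\cos\phi-\sqrt{3}\sin\phi)+(1-2\cos\phi)\big] \,=\, 1\,, \nonumber \\ p_{1}^2+p_{2}^2+p_{3}^2 &=& \frac{2}{9}\,\big[(1+\cos\phi)^2+3\sin^2\phi\big] \,+\, \frac{1}{9}\,\big(1-2\cos\phi\big)^2 \,=\, 1\,. \nonumber \end{eqnarray}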
In the ultra-relativistic case one can perform the expansion up to the first order in $b^{-1}$ in (\ref{prefriedmann}). Taking $a_{0}=1$, we arrive at \begin{eqnarray} \dot{a}^2 \,=\, \frac{\Gamma^2}{a^4} + \frac{8 \pi G\rho_{1}b}{3a^2} \,\Big(1+\frac{a^2}{2b^2}\Big). \label{radiationaprox1} \end{eqnarray} Disregarding the $a^2/(2b^2)$-term in the parenthesis, this is the Bianchi type I model with radiation, with initial density expressed by $\rho_{2}=\rho_1 b$. This is exactly the classical result of \cite{Jacobs:1968zz} for radiation, but we obtained it as a limit of the RRG solution. It proves useful to make the change of variables \begin{eqnarray} a &=& \frac{1}{\kappa_{\rm rad}}\,\sinh\,\xi, \quad \kappa_{\rm rad}^2 \,=\, \frac{8\pi G\,\rho_2}{3\Gamma^2}, \quad 0\leq\,\xi\,<\infty\,, \label{implicit1} \end{eqnarray} in the radiation part of Eq.~(\ref{radiationaprox1}). This results in the relation \begin{eqnarray} dt \,=\, \frac{1}{\Gamma\kappa_{\rm rad}^3}\,\sinh^2\xi \,d\xi\,. \end{eqnarray} Then Eqs.~(\ref{radiationaprox1}) and (\ref{firstintegral2}) become the parametric relations \begin{eqnarray} \Gamma t &=& \frac{1}{4\kappa_{\rm rad}^3}\,(\sinh 2\xi \,-\, 2\xi)\,, \quad \beta_{\pm} \,=\, \frac{\gamma_{\pm}}{\Gamma}\,\,\mbox{ln}\, \big(\tanh \tfrac{\xi}{2}\big)\,. \label{implicitequationsrad} \end{eqnarray} From the relation between $a$ and $\xi$ in (\ref{implicit1}), one can obtain \begin{eqnarray} \tanh \frac{\xi}{2} &=& \frac{1}{a\,\kappa_{\rm rad}}\, \Big[\sqrt{1+ a^2\,\kappa_{\rm rad}^2}\,-\,1\Big]. \label{tgh} \end{eqnarray} In case $(a\,\kappa_{\rm rad})^{-1}$ is very small, one gets the relation \begin{eqnarray} \tanh \frac{\xi}{2} &=& 1 - \frac{1}{a\,\kappa_{\rm rad}} \,+\, \dots\,. \end{eqnarray} Consequently, due to (\ref{implicitequationsrad}), \begin{eqnarray} \beta_{\pm} \,=\, -\,\frac{\gamma_{\pm}}{\Gamma}\, \Big[\,\frac{1}{a\,\kappa_{\rm rad}} \,+\, \dots \Big]. \end{eqnarray} In the radiation approximation, if we disregard the term $\big(a\,\kappa_{\rm rad}\big)^{-1}$, then $\beta_{\pm}$ tend to zero for large values of $a$, and effectively there is isotropization. Another way to arrive at the same conclusion is by observing that when $\xi \to \infty$, we have $\beta_{\pm}\to 0$. In the same limit \begin{eqnarray} \sinh\xi \sim \cosh\xi \sim \frac{1}{2}\,e^\xi \end{eqnarray} and the term $\sinh 2\xi$ dominates in Eq.~(\ref{implicitequationsrad}). Using (\ref{implicit1}), \begin{eqnarray} t \,\approx\, \frac{a^2}{2\Gamma\kappa_{\rm rad}}, \label{time} \end{eqnarray} which yields the standard expression for the isotropic radiation-dominated universe, \begin{eqnarray} a = \Big(\frac{32\pi G\,\rho_2}{3}\,\Big)^{1/4} \,\sqrt{t}. \n{a} \end{eqnarray} This expression means that the role of anisotropy is negligible for the evolution of the scale factor and hence we have isotropization. \vskip 2mm Let us now consider the limit $a\gg b$, which means a dust-dominated universe. The solution of the dynamical equations (\ref{prefriedmann}) and (\ref{firstintegral2}) for dust is simpler than for the radiation-dominated case \cite{hervik2007} and was originally obtained in \cite{heckmann1958}. Here we will try to arrive at the same result by taking the corresponding limit in the general solution for RRG, which interpolates between radiation and dust. The solutions of Eqs.~(\ref{prefriedmann},\ref{firstintegral2}) for $a(t)$ and $\beta_{\pm}(t)$ are given by \begin{eqnarray} a^3 = \frac{3\Gamma}{t_{I}}\,t\,\big(\,t+t_{I}\,\big), \qquad \beta_{\pm} = \frac{\gamma_{\pm}}{3\Gamma}\,\,\mbox{ln}\,\Big(\frac{t}{t + t_I} \Big).
\label{dustaprox2} \end{eqnarray} Here \begin{eqnarray} t_{I}=\frac{4}{3\Gamma\kappa_{\rm dust}^2} \qquad \mbox{and} \qquad \kappa_{\rm dust}^2= \frac{8\pi G \rho_1}{3\Gamma^2} \label{ka} \end{eqnarray} are constants. The solutions (\ref{dustaprox2}) in the dust-dominated approximation can be considered in two different asymptotic situations. The first one is $t \ll t_{I}$, which implies the expansions \begin{eqnarray} a^3 &=& 3\Gamma\,\Big[\,\frac{t^2}{t_I} + t\,\Big], \nonumber \\ \beta_{\pm} &=& \frac{\gamma_{\pm}}{3\Gamma}\, \,\mbox{ln}\,\Big[\frac{t}{t_I}\,\Big(\,1-\frac{t}{t_I}+\dots\,\Big)\Big]. \label{dustkasner} \end{eqnarray} Disregarding the higher-order terms, the solutions tend to the Kasner expressions (\ref{kasner1}) and (\ref{kasner2}). In the second case, $t \gg t_{I}$, one can use the same scheme as before, but now making the expansion in powers of $t_I/t$. In this way we obtain the standard solution for dust, with $a \sim t^{2/3}$ and $\beta_{\pm}\to 0$. Following the same logic as in the radiation case, we conclude that the behaviour at late times demonstrates isotropization. \section{Numerical Solution} \label{Sect5} Let us consider the numerical solution of the dynamical system of Eqs.~(\ref{prefriedmann}) and (\ref{firstintegral2}) without assuming the high- or low-energy approximations. Exactly as it was done in the previous section, we consider a simplified model with one fluid described by RRG and the anisotropy which enters the general energy balance by means of the stiff-matter energy density. It proves useful to express the solution in terms of the initial values of the relative energy density parameters $\Omega_{an}^{(i)}$ and $\Omega_{RRG}^{(i)}$, defined by \begin{eqnarray} \Omega_{an} = \frac{\Gamma^2}{H^2}, \quad \Omega_{RRG} = \frac{8\,\pi\,G\,\rho_{1}}{3\,H^2}\,\sqrt{1+b^2}=1-\Omega_{an}. \label{Omega} \end{eqnarray} The superscript $(i)$ denotes the values of the parameters at the initial moment of time. Our purpose is to evaluate the isotropization of the universe starting from the initial moment of time $t=0$, when $a(0)=a_{i}=1$ and $H(0)=H_{i}$. Therefore, at the initial instant of time the values are $\,\Omega_{an}^{(i)}\,$ and $\,\Omega_{RRG}^{(i)}$, corresponding to $H=H_i$ in (\ref{Omega}). It proves useful to define the dimensionless time variable $\tau = H_i t$. In this way we arrive at the equations \begin{eqnarray} \frac{\dot{a}^2}{a^2} &=& \frac{\Omega_{an}^{(i)}}{a^6} + \frac{\Omega_{RRG}^{(i)}}{a^4\,\sqrt{1+b^2}}\,\sqrt{a^2+b^2}, \nonumber \\ \dot{\beta}_{\pm} &=& \frac{\sqrt{\Omega_{an}^{(i)}}\,\gamma_{\pm}}{\Gamma a^3}, \label{newsystem} \end{eqnarray} where the dots mean derivatives with respect to $\tau$. The value of $\Omega_{an}(\tau)$ measures the amount of anisotropy, such that greater values correspond to a higher degree of anisotropy. As before, $b$ is the warmness parameter of the RRG matter. In the present-day universe the value of $\Gamma$ is very small, implying a very small value of $\Omega^0_{an}$. The warmness $b$ today is bounded from above by approximately $0.001$ for the dominating fluid, namely for the Dark Matter \cite{Fabris:2011am}. On the other hand, in the early universe, when $\Omega_{an}$ was significant, the warmness could have a large value. The framework of RRG enables one to see how the warmness affects the time of isotropization, that is, the typical time of transition from a large value of $\Omega_{an}^{(i)}$ to a small value at a later period.
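Before discussing the results, it may be helpful to sketch how the system (\ref{newsystem}) can be integrated in practice. The following is a minimal illustration in Python with SciPy, not the Mathematica setup \cite{M9} actually used for the plots below; the angle $\phi$, the final time and the tolerances are assumptions made for the illustration.
\begin{verbatim}
# Minimal sketch: integrate the system for a(tau) and beta_{+-}(tau).
# Illustrative only: phi, the final time and the tolerances are
# assumptions, not the values used for the figures below.
import numpy as np
from scipy.integrate import solve_ivp

b = 10.0                                 # warmness parameter
Omega_an_i, Omega_rrg_i = 0.99, 0.01     # initial density parameters
phi = np.pi / 4                          # gamma_{+-}/Gamma = (cos, sin)(phi)

def rhs(tau, y):
    a, beta_p, beta_m = y
    # Hubble rate in units of H_i, from the first equation of the system
    H = np.sqrt(Omega_an_i / a**6
                + Omega_rrg_i * np.sqrt(a**2 + b**2)
                  / (a**4 * np.sqrt(1.0 + b**2)))
    pref = np.sqrt(Omega_an_i) / a**3    # common factor in dbeta/dtau
    return [a * H, pref * np.cos(phi), pref * np.sin(phi)]

# initial conditions a = 1, beta_+ = 10, beta_- = 15 (as in the text)
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 10.0, 15.0], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])                      # a, beta_+, beta_- at tau = 50
\end{verbatim}
One can check that in such a run $\beta_{\pm}$ freeze at constant values once the stiff-matter term becomes subdominant, in accordance with the isotropization discussed below.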
The second equation in (\ref{newsystem}) can be expressed via the angular parameter in (\ref{angularrelations}). This angle becomes relevant only in the vicinity of the singularity, when the metric can be approximated by the Kasner solution, and in the subsequent numerical analysis it does not play much of a role. Let us present the numerical solutions for different values of the warmness parameter $b$, obtained using the Mathematica software \cite{M9}. We used the initial conditions $a=a_{i}=1$, $\beta_{+}=10$, $\beta_{-}=15$, such that $\Omega_{an}^{(i)}=0.99$ and $\Omega_{RRG}^{(i)}=0.01$ at $\tau=0$. In all plots the scale factor and the anisotropy measure $\Omega_{an}$ are compared with the plots for the cases of vacuum, radiation and dust, assuming the same initial values of the $\Omega^{(i)}$'s. Figs.~\ref{fig1} and \ref{fig2} clearly show that the RRG behaviour tends to Kasner at the early stage of evolution, and stays very close to radiation for some time, both for the scale factor and for $\Omega_{an}(\tau)$. In Figs.~\ref{fig3} and \ref{fig4} the isotropization can be observed, as $\beta_{+}$ and $\beta_{-}$ tend to constants. It is easy to see that the isotropization for RRG occurs faster than for the dust-like content, at a rate close to the radiation case. \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{a_1.eps} \caption{Plots of the scale factors for $b=10$ and $\Omega_{an}^{(i)}=0.99$.} \label{fig1} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{omega_1.eps} \caption{Plots of $\Omega_{an}(\tau)$ for $b=10$ and $\Omega_{an}^{(i)}=0.99$.} \label{fig2} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{betamais_1.eps} \caption{Plots of $\beta_{+}$ corresponding to the parameters $b=10$ and $\Omega_{an}^{(i)}=0.99$, with the initial condition $\beta_{+}=10$.} \label{fig3} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{betamenos_1.eps} \caption{Plots of $\beta_{-}$ corresponding to the parameters $b=10$ and $\Omega_{an}^{(i)}=0.99$, with the initial condition $\beta_{-}=15$.} \label{fig4} \end{figure} For smaller warmness, $b=0.5$, one can observe in Figs.~\ref{fig5}-\ref{fig8} a different behaviour, where the RRG plot is (quite naturally) close to dust. \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{a_2.eps} \caption{Plots of the scale factors for the moderate warmness $b=0.5$ and $\Omega_{an}^{(i)}=0.99$.} \label{fig5} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{omega_2.eps} \caption{Plots of $\Omega_{an}(\tau)$ for the moderate warmness $b=0.5$ and $\Omega_{an}^{(i)}=0.99$.} \label{fig6} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{betamais_2.eps} \caption{Plots of $\beta_{+}$ for the moderate warmness $b=0.5$ and $\Omega_{an}^{(i)}=0.99$, with the initial condition $\beta_{+}=10$.} \label{fig7} \end{figure} \begin{figure}[H] \centering \includegraphics[height= 5.0 cm,width=\columnwidth]{betamenos_2.eps} \caption{Plots of $\beta_{-}$ for the moderate warmness
Parameters are as follows: $b=0.5$, $\Omega_{an}^{(i)}=0.99$, with the initial condition $\beta_{-}=15$.} \label{fig8} \end{figure} The plots presented above show that the RRG interpolates between the radiation and dust regimes perfectly well, as should be expected. The asymptotic behaviour of $\beta_{\pm}(\tau)$ is constant, which means an effective isotropization of the solutions. Concerning the time of isotropization, depending on the warmness, the RRG model can be closer to dust or to radiation. \section{Conclusions} \label{Sect6} We formulated the framework of the RRG model applied to the dynamics of anisotropy in the early epoch, when the universe was filled by radiation and by matter (baryonic and dark) which was so hot that its EoS interpolates between those of radiation and pressureless matter. For the Bianchi-I universe away from the singularity region, the gravitational theory based on the Einstein-Hilbert action provides an isotropization mechanism for RRG, exactly as for both radiation and dust matter contents with isotropic EoS. This physical situation is a subject of current interest, see, e.g., \cite{Khalatnikov:2003ph}. More complicated spaces may require more complicated gravitational theories to explain the isotropization mechanism. We believe that the simple and efficient RRG model can be useful for describing the realistic matter contents in these complicated cases, as it was for the rather simple Bianchi-I universe described above. Another potentially interesting application of our results is related to the cosmic perturbations in the anisotropic universe, which are not sufficiently well explored. Since the problem is technically complicated, it may be very useful to have a simple albeit realistic description of the matter contents in the early universe in the epoch when the isotropization occurs. In this respect the RRG framework looks perfect, since it is extremely simple and enables one to quantify the transition from radiation to matter epochs, exactly as in the pioneering work of Sakharov \cite{Sakharov:1966aja}. As expected from the previous works on this model \cite{FlaFlu,Fabris:2008qg,Fabris:2011am}, the RRG shows a behaviour which is intermediate between radiation and dust and approaches one or the other depending on the value of the warmness parameter. We have shown that this feature can be extended to the simplest anisotropic Bianchi-I model. One of the natural further developments can be related to the derivation and analysis of density and metric perturbations in the universe filled by RRG, including the case with interaction between RRG and radiation \cite{FAW}. One can expect that RRG will eventually be useful as a model which helps to explore the observables that can tell us about the dynamics of anisotropies in the early universe. The formalism developed in the present work can be useful for the description of the Bianchi-I phase between the two FLRW phases of the history of the universe in the models proposed in \cite{Comer:2011ss,Comer:2011dn}. In this case the matter content of the universe is supposed to be hot and, therefore, the RRG can be helpful for its efficient description. One can also use the same description of the hot or warm matter in other approaches to anisotropy, like the recent considerations of gravity with an $R^2$ term \cite{Muller:2017nxg} or with a sigma-model-like scalar field \cite{Kamenshchik:2017ojc}, or even in loop quantum gravity \cite{Ashtekar:2009vc}. \section*{Appendix.
Brief derivation of Eq.~(\ref{energymomentumtensor})} Let us present a very brief derivation of the main expression for the energy-momentum tensor which was used in the main text to arrive at the EoS of the RRG model. More details can be found in \cite{Hakim2011} and also in \cite{landau1987}. Consider a gas of free massive relativistic particles with equal masses in the equilibrium state. Since, in the comoving frame of each particle, $T^{00}$ is the energy density, standard arguments show that the energy-momentum tensor of the gas can be expressed as a sum over particles which are labeled by the subscript $a$, \begin{eqnarray} T^{\mu\nu}(x) \,=\,\sum_{a}\,\int\,ds\,\, \delta^{4}\big(x-x_{a}(s)\big)\,\, \frac{p_{a}^{\mu}(s)\,p_{a}^{\nu}(s)}{m_{a}}, \label{em1} \end{eqnarray} where the integration variable $s$ is the proper time of the individual particles. Using the definition of Dirac's delta function, one can rewrite this expression in the form \begin{eqnarray} T^{\mu\nu}(x) &=& \int d^4p\,\,\,p^{\mu}p^{\nu}\,f(x,p), \label{em2} \end{eqnarray} where \begin{eqnarray} f(x,p)\,=\, \sum_{a}\int ds\, \frac{\delta^4\big(p-p_{a}(s)\big)\,\delta^4\big(x-x_{a}(s)\big)}{m_{a}}. \end{eqnarray} The expression (\ref{em2}) includes an integral over four-momenta. Since each of the free particles satisfies the dispersion relation $p^2=m^2$ with $p^0\geq 0$, one can replace $d^4p$ by the expression \begin{eqnarray} d^3p\,dp^0\,\,\delta \big(p_0^2 - {\vec p}^2 - m^2\big). \label{delta} \end{eqnarray} Taking the integral over $\,p^0\,$, one has to replace the invariant four-dimensional integration element $d^4p$ by the invariant integration element in the space sector, $(m/p^0)\,d^3p$, because the normal vector to the mass shell $p^2=m^2$ has the same direction as $p^\mu$ \cite{landau1987}. Finally, using the properties of the delta function leads us to Eq.~(\ref{energymomentumtensor}), where the distribution function $f$ depends only on ${\vec p}$. Let us stress that the distribution function $f(x,p)$ is defined to be dependent on the motion of all particles. For a many-body system, the use of the methods of Statistical Mechanics in the case of thermal equilibrium in Minkowski space leads to the distribution function of the J\"{u}ttner model. The simplifying assumption of the RRG is that all particles have the same kinetic energy, and that is why the distribution function is chosen as a delta function. As we know from the previous work \cite{FlaFlu}, this approach provides an excellent approximation to the complicated EoS of the J\"{u}ttner model. \section*{Acknowledgments} The authors are very grateful to Patrick Peter for useful discussions. S.C.R. is grateful to CAPES for supporting his Ph.D. project. I.Sh. was partially supported by CNPq, FAPEMIG and ICTP.
\section{Introduction} In December 2019, a novel strain of Coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province, China, causing a severe and potentially fatal respiratory syndrome, i.e., COVID-19. It was declared a pandemic by the World Health Organization (WHO) on March 11, 2020, and has since spread around the globe \cite{Who2019,Wu2020,Novel19,Tang2020,Kraemer2020}. WHO published on its website preliminary guidelines with public health care measures for countries to deal with the pandemic \cite{WHO19}. Since then, the infectious disease has become a major public health threat. Italy and the USA have been severely affected by COVID-19 \cite{Song2020,Wang2020,Dan2020}. Millions of people have been forced by national governments to stay in self-isolation and in difficult conditions. The disease is growing fast in many countries around the world. In the absence of a proper medicine or vaccine, social distancing, self-quarantine and the wearing of face masks have currently emerged as the most widely-used strategies for the mitigation and control of the pandemic. In this context, mathematical models are required to estimate disease transmission, recovery, deaths and other significant parameters separately for various countries, that is, for different, specific regions of high to low reported cases of COVID-19. Different countries have already taken precise and differentiated measures that are important to control the spread of the disease. However, still now, important factors such as population density, insufficient evidence for different symptoms, the transmission mechanism and the unavailability of a proper vaccine make it difficult to deal with such a highly infectious and deadly disease, especially in high population density countries such as India \cite{Ranjan2020,Pulla2020,MOH2020}. Recently, many research articles have adopted the modelling approach, using real incidence datasets from affected countries, and have investigated different characteristics as a function of various parameters of the outbreak and the effects of intervention strategies in different countries, with respect to their current situations. It is imperative that mathematical models are developed to provide insights and make predictions about the pandemic, to plan effective control strategies and policies \cite{Scarpino2019,Chinazzi2020,Rud2020}. Modelling approaches \cite{Wang2020,Kuchar2020,Yang2020,Fan2020,Xue2020,Post2020,Li2020} are helpful to understand and predict the possibility and severity of the disease outbreak and provide key information to determine the intensity of COVID-19 disease intervention. The susceptible-infected-removed (SIR) model and its extended modifications \cite{Heth1989,Heth2000,Heth2009,Weiss2013}, such as the extended-susceptible-infected-removed (eSIR) mathematical model in various forms, have been used in previous studies \cite{Amaro2020,Cala2020,Ndairou2020} to model the spread of COVID-19 within communities. Here, we propose the use of a novel SIR model with different characteristics. One of the major assumptions of the classic SIR model is that there is a homogeneous mixing of the infected and susceptible populations and that the total population is constant in time. In the classic SIR model, the susceptible population decreases monotonically towards zero. However, these assumptions are not valid in the case of the spread of the COVID-19 virus, since new epicentres spring up around the globe at different times.
To account for this, the SIR model that we propose here does not consider the total population and takes the susceptible population as a variable that can be adjusted at various times to account for new infected individuals spreading throughout a community, resulting in an increase in the susceptible population, i.e., in the so-called surges. The SIR model we introduce here is given by the same simple system of three ordinary differential equations (ODEs) as the classic SIR model and can be used to gain a better understanding of how the virus spreads within a community of variable population in time, when surges occur. Importantly, it can be used to make predictions of the number of infections and deaths that may occur in the future and provide an estimate of the time scale for the duration of the virus within a community. It also provides us with insights on how we might lessen the impact of the virus, something that is nearly impossible to discern from the recorded data alone. Consequently, our SIR model can provide a theoretical framework and predictions that can be used by government authorities to control the spread of COVID-19. In our study, we used COVID-19 datasets from \cite{Corona} in the form of time-series, spanning January to June, 2020. In particular, the time series are composed of three columns which represent the total cases $I^{d}_{tot}$, active cases $I^d$ and deaths $D^d$ in time (rows). These datasets were used to update the parameters of the SIR model to understand the effects and estimate the trend of the disease in various communities, represented by China, South Korea, India, Australia, USA, Italy and the state of Texas in the USA. This allowed us to estimate the development of COVID-19 spread in these communities by obtaining estimates for the number of deaths $D$, susceptible $S$, infected $I$ and removed $R_m$ populations in time. Consequently, we have been able to estimate its characteristics for these communities and assess the effectiveness of modelling the disease. The paper is organised as follows: In Sec. \ref{sec_SIR_model}, we introduce the SIR model and discuss its various aspects. In Sec. \ref{sec_methodology_and_results}, we explain the approach we used to study the data in \cite{Corona} and in Sec. \ref{sec_results}, we present the results of our analysis for China, South Korea, India, Australia, USA, Italy and the state of Texas in the USA. Section \ref{sec_flattening_the_curve} discusses the implications of our study for the ``flattening the curve'' approach. Finally, in Sec. \ref{sec_conclusions}, we conclude our work and discuss the outcomes of our analysis and its connection to the evidence that has been already collected on the spread of COVID-19 worldwide. \section{The SIR model that can accommodate surges in the susceptible population}\label{sec_SIR_model} The world around us is highly complicated. For example, how a virus spreads, including the novel strain of Coronavirus (SARS-CoV-2) that was identified in Wuhan, Hubei Province, China, depends upon many factors, only some of which are considered by the classic SIR model, which is rather simplistic and cannot take into consideration surges in the number of susceptible individuals. Here, we propose the use of a modified SIR model, based upon the classic SIR model. In particular, one of the major assumptions of the classic SIR model is that there is a homogeneous mixing of the infected $I$ and susceptible $S$ populations and that the total population $N$ is constant in time.
Also, in the classic SIR model, the susceptible population $S$ decreases monotonically towards zero. These assumptions, however, are not valid in the case of the spread of the COVID-19 virus, since new epicentres spring up around the globe at different times. To account for this, we introduce here a SIR model that does not consider the total population $N$, but rather takes the susceptible population $S$ as a variable that can be adjusted at various times to account for new infected individuals spreading throughout a community, resulting in its increase. Thus, our model is able to accommodate surges in the number of susceptible individuals in time, whenever these occur and as evidenced by published data, such as those in \cite{Corona} that we consider here. Our SIR model is given by the same, simple system of three ordinary differential equations (ODEs) as the classic SIR model, which can be easily implemented and used to gain a better understanding of how the COVID-19 virus spreads within communities of variable populations in time, including the possibility of surges in the susceptible populations. Thus, the SIR model here is designed to remove many of the complexities associated with the real-time evolution of the spread of the virus, in a way that is useful both quantitatively and qualitatively. It is a dynamical system that is given by three coupled ODEs that describe the time evolution of the following three populations: \begin{enumerate} \item {{\it Susceptible individuals}, $S(t)$: These are those individuals who are not infected, however, could become infected. A susceptible individual may become infected or remain susceptible. As the virus spreads from its source or new sources occur, more individuals will become infected, thus the susceptible population will increase for a period of time (surge period).} \item{ {\it Infected individuals}, $I(t)$: These are those individuals who have already been infected by the virus and can transmit it to those individuals who are susceptible. An infected individual may remain infected, and can be removed from the infected population to recover or die.} \item{{\it Removed individuals}, $R_m(t)$: These are those individuals who have recovered from the virus and are assumed to be immune, $R_m(t)$, or have died, $D(t)$.} \end{enumerate} Furthermore, it is assumed that the time scale of the SIR model is short enough so that births and deaths (other than deaths caused by the virus) can be neglected and that the number of deaths from the virus is small compared with the living population. Based on these assumptions and concepts, the rates of change of the three populations are governed by the following system of ODEs, which constitutes our SIR model \begin{equation} \begin{aligned} \frac{dS(t)}{dt}&=-aS(t)I(t),\\ \frac{dI(t)}{dt}&=aS(t)I(t) - bI(t),\\ \frac{d{R_m}(t)}{dt}&=bI(t), \label{SIR_model_ODEs} \end{aligned} \end{equation} where $a$ and $b$ are real, positive parameters of the initial exponential growth and final exponential decay of the infected population $I$. It has been observed that in many communities a spike in the number of infected individuals, $I$, may occur, which results in a surge in the susceptible population, $S$, recorded in the COVID-19 datasets \cite{Corona}, which amounts to a secondary wave of infections.
To account for such a possibility, $S$ in the SIR model \eqref{SIR_model_ODEs} can be reset to $S_{surge}$ at any time $t_s$ that a surge occurs; thus, the model can accommodate multiple such surges, if recorded in the published data in \cite{Corona}, which distinguishes it from the classic SIR model. The evolution of the infected population $I$ is governed by the second ODE in system \eqref{SIR_model_ODEs}, where $a$ is the transmission rate constant and $b$ the removal rate constant. We can define the basic effective reproductive rate $R_e=aS(t)/b$, as the fate of the evolution of the disease depends upon it. If $R_e$ is smaller than one, the infected population $I$ will decrease monotonically to zero, and if it is greater than one, it will increase, i.e., $R_e<1\Rightarrow\frac{dI(t)}{dt}<0$ and $R_e>1\Rightarrow\frac{dI(t)}{dt}>0$. Thus, the effective reproductive rate $R_e$ acts as a threshold that determines whether an infectious disease will die out quickly or will lead to an epidemic. At the start of an epidemic, when $R_e>1$ and $S \approx1$, the rate of change of the infected population is described by the approximation $\frac{{dI(t)}}{{dt}}\approx\left({a-b} \right)I(t)$ and thus, the infected population $I$ will initially increase exponentially according to $I(t)=I(0)\,{e^{(a-b)t}}$. The infected population will reach a peak when the rate of change of the infected population is zero, $dI(t)/dt=0$, and this occurs when $R_e=1$. After the peak, the infected population will start to decrease exponentially, following $I(t) \propto{e^{-bt}}$. Thus, eventually (for $t\rightarrow\infty$), the system will approach $S\to0$ and $I\to0$. Interestingly, the existence of a threshold for infection is not obvious from the recorded data; however, it can be discerned from the model. This is crucial in identifying a possible second wave, where a sudden increase in the susceptible population $S$ will result in $R_e>1$ and lead to another exponential growth of the number of infections $I$. \section{Methodology}\label{sec_methodology_and_results} The data in \cite{Corona} for China, South Korea, India, Australia, USA, Italy and the state of Texas (communities) are organised in the form of time-series where the rows are recordings in time (from January to June, 2020), and the three columns are the total cases $I^d_{tot}$ (first column), the number of infected individuals $I^d$ (second column) and deaths $D^d$ (third column). Consequently, the number of removals can be estimated from the data by \begin{equation}\label{removals_data_equation} R^d_m=I^d_{tot}-I^d-D^d. \end{equation} Since we want to adjust the numerical solutions of our proposed SIR model \eqref{SIR_model_ODEs} to the recorded data from \cite{Corona}, for each dataset (community), we consider initial conditions in the interval $[0,1]$ and scale them by a scaling factor $f$ to fit the recorded data by visual inspection. In particular, the initial conditions for the three populations are set such that $S(0)=1$ (i.e., all individuals are considered susceptible initially), $I(0)=R_m(0)=I^d_{max}/f<1$, where $I^d_{max}$ is the maximum number of infected individuals $I^d$. Consequently, the parameters $a$, $b$, $f$ and $I^d_{max}$ are adjusted manually to fit the recorded data as best as possible, based on a trial-and-error approach and visual inspections.
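To make the numerical procedure concrete, a minimal sketch of how system \eqref{SIR_model_ODEs} can be integrated with a surge reset is given below (in Python; the values of $a$, $b$, the surge time and $S_{surge}$ are illustrative assumptions, not fitted values):
\begin{verbatim}
# Minimal sketch of the SIR model with a surge reset, integrated with
# the first-order Euler method. Parameter values (a, b, surge time and
# S_surge) are illustrative assumptions, not fitted to any dataset.
a, b = 0.35, 0.08          # transmission and removal rate constants
h, t_f = 0.04, 200.0       # Euler time step (days) and final time
surges = {100.0: 0.5}      # assumed surge: at day 100 reset S to 0.5

S, I, Rm = 1.0, 1e-4, 0.0  # scaled initial conditions
for step in range(int(t_f / h)):
    t = step * h
    for t_s, S_surge in surges.items():
        if abs(t - t_s) < h / 2:
            S = S_surge    # surge: reset the susceptible population
    dS = -a * S * I
    dI = a * S * I - b * I
    dRm = b * I
    S, I, Rm = S + h * dS, I + h * dI, Rm + h * dRm

print(S, I, Rm)            # scaled populations at t_f
\end{verbatim}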
A preliminary analysis using non-linear fittings to fit the model to the published data \cite{Corona} provided at best inferior results to those obtained in this paper using our trial-and-error approach with visual inspections, in the sense that the model solutions did not follow the published data as closely, which justifies our approach in the paper. A prime reason for this is that the published data (including those in \cite{Corona} we are using here) are data from different countries that follow different methodologies to record them, with not all infected individuals or deaths accounted for. In this context, $S$, $I$ and $R_m\geq0$ at any $t\geq0$. System \eqref{SIR_model_ODEs} can be solved numerically to find how the scaled (by $f$) susceptible $S$, infected $I$ and removed $R_m$ populations (what we call model solutions) evolve with time, in good agreement with the recorded data. In particular, since this system is simple with well-behaved solutions, we used the first-order Euler integration method to solve it numerically, with a time step $h=200/5000=0.04$ that corresponds to a final integration time $t_f$ of 200 days since January, 2020. This amounts to double the time interval in the recorded data in \cite{Corona} and allows for predictions for up to 100 days beyond the recorded data. Obviously, what is important when studying the spread of a virus is the number of deaths $D$ and recoveries $R$ in time. As these numbers are not provided directly by the SIR model \eqref{SIR_model_ODEs}, we estimated them by first plotting the data for deaths $D^d$ vs the removals $R^d_m$, where $R^d_m=D^d+R^d=I^d_{tot}-I^d$, and then fitting the plotted data with the nonlinear function \begin{equation}\label{nonlinear_fitting_function_Dd_Rmd} D={D_0}\,\left({1-{e^{-k{R_m}}}}\right), \end{equation} where $D_0$ and $k$ are constants estimated by the non-linear fitting. The function is expressed in terms of only model values and is fitted to the curve of the data. Thus, having obtained $D$ from the non-linear fitting, the number of recoveries $R$ can be described in time by the simple observation that it is given by the scaled removals, $R_m$ from the SIR model \eqref{SIR_model_ODEs}, less the number of deaths, $D$ from Eq. \eqref{nonlinear_fitting_function_Dd_Rmd}, \begin{equation}\label{recoveries_equation} R=R_m-D. \end{equation} \section{Results}\label{sec_results} The rate of increase in the number of infections depends on the product of the number of infected and susceptible individuals. An understanding of the system of Eqs. \eqref{SIR_model_ODEs} explains the staggering increase in the infection rate around the world. Infected people traveling around the world have led to an increase in infected numbers, and this results in a further increase in the susceptible population \cite{Chinazzi2020}. This gives rise to a positive feedback loop leading to a very rapid rise in the number of active infected cases. Thus, during a surge period, the number of susceptible individuals increases and, as a result, the number of infected individuals increases as well. For example, as of 1 March, 2020, there were 88 590 infected individuals and by 3 April, 2020, this number had grown to a staggering 1 015 877 \cite{Corona}. Understanding the implications of what the system of Eqs.
\eqref{SIR_model_ODEs} tells us, the only conclusion to be drawn using scientific principles is that drastic action needs to be taken as early as possible, while the numbers are still low, before the exponential increase in infections starts kicking in. For example, if we consider the results of the policies introduced in the UK to mitigate the spread of the disease, there were 267 240 total infections and 37 460 deaths by 27 May, and in the USA, 1 755 803 total infections and 102 107 deaths, respectively. Thus, even if one starts with low numbers of infected individuals, the number of infections will at first grow slowly, then increase approximately exponentially, and then taper off until a peak is reached. Comparing these results for the UK and USA with those for South Korea, where steps were taken immediately to reduce the susceptible population, there were 11 344 total infections and 269 deaths by 27 May. The number of infections in China reached a peak around 16 February, 2020. The government took extreme actions with closures, confinement, social distancing, and people wearing masks. This type of action produces a decline in the number of infections and susceptible individuals. If the number of susceptible individuals does not decrease, then the number of infections just keeps increasing rapidly. As no effective vaccine has been developed at this moment, the only way to reduce the number of infections is to reduce the number of individuals that are susceptible to the disease. Consequently, the rate of infection tends to zero only if the susceptible population goes to zero. Here, we have applied the SIR model \eqref{SIR_model_ODEs} considering data from various countries and the state of Texas in the USA provided in \cite{Corona}. Assuming the published data are reliable, the SIR model \eqref{SIR_model_ODEs} can be applied to assess the spread of the COVID-19 disease and predict the number of infected, removed and recovered populations and deaths in the communities, accommodating at the same time possible surges in the number of susceptible individuals. Figures \ref{Fig1}--\ref{Fig9} show the time evolution of the cumulative total infections $I_{tot}$, current infected individuals, $I$, recovered individuals, $R$, dead individuals, $D$, and normalized susceptible populations, $S$, for China, South Korea, India, Australia, USA, Italy and Texas in the USA, respectively. The crosses show the published data \cite{Corona} and the smooth lines show solutions and predictions from the SIR model. The cumulative total infections plots also show a curve for the initial exponential increase in the number of infections, where the number of infections doubles every five days. The figures also show predictions, and a summary of the SIR model parameters in \eqref{SIR_model_ODEs} and the published data in \cite{Corona} for easy comparisons. We start by analysing the data from China and then move on to the study of the data from South Korea, India, Australia, USA, Italy and Texas. \subsection{China} \begin{figure} \centering \includegraphics[width=14cm,height=9.5cm]{Fig1} \caption{China: Model predictions for the period from 22 January to 9 August, 2020 with data from January to June, 2020. The data show a discrete jump in deaths $D$ in mid-April.}\label{Fig1} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig1A} \caption{China: (a) Nonlinear fitting with Eq.
\eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.}\label{Fig1A} \end{figure} The number of infections peaked in China around 16 February, 2020 and since then it has slowly decreased. The decrease only occurs when the susceptible population numbers decrease, and this decrease in susceptible numbers only occurred through the drastic actions taken by the Chinese government. China quarantined confirmed and potential patients, and restricted citizens' movements as well as international travel. Social distancing was widely practiced, and most of the people wore face masks. The actual numbers of infections have decreased at a greater rate than predicted by the SIR model (see Figs. \ref{Fig1} and \ref{Fig1A}). Our results in Figs. \ref{Fig1} and \ref{Fig1A} provide evidence that the Chinese government has done well in limiting the impact of the spread of COVID-19. \subsection{South Korea} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig2} \caption{South Korea: Model predictions for the period from 26 February to 13 September, 2020 with data from February to June, 2020.} \label{Fig2} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig2A} \caption{South Korea: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig2A} \end{figure} From the plots shown in Figs. \ref{Fig2} and \ref{Fig2A}, it is obvious that the South Korean government has done a wonderful job in controlling the spread of the virus. The country has implemented an extensive virus testing program. There has also been a heavy use of surveillance technology: closed-circuit television (CCTV) and tracking of bank cards and mobile phone usage, to identify whom to test in the first place. South Korea has achieved a low fatality rate (currently one percent) without resorting to such authoritarian measures as in China. The most conspicuous part of the South Korean strategy is simple enough: the implementation of repeated cycles of {\it test and contact trace} measures. \subsection{India} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig3} \caption{India: Model predictions for the period from 14 March to 30 September, 2020 with data from March to June, 2020.} \label{Fig3} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig3A} \caption{India: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig3A} \end{figure} To match the recorded data from India with predictions from the SIR model \eqref{SIR_model_ODEs}, it is necessary to include a number of surge periods, as shown in Fig. \ref{Fig3}.
This is because the SIR model cannot accurately predict the peak number of infections if the actual numbers in the infected population have not yet peaked in time. It is most likely that the spread of the virus, as of early June, 2020, is not contained and there will be an increasing number of total infections. However, by adding new surge periods, a higher and delayed peak can be predicted and compared with future data. In Fig. \ref{Fig3}, a consequence of the surge periods is that the peak is delayed and higher than if no surge periods were applied. The model predictions for 30 September, 2020, including the surges, are: 330 000 total infections, 700 active infections and 7 500 deaths, whereas if there were no surge periods, there would be 130 000 total infections, 700 active infections and 6 300 deaths, with the peak of 60 000, which is about 40\% of the current number of active cases, occurring around 20 May, 2020. Thus, the model can still give a rough estimate of future infections and deaths, as well as the time it may take for the number of infections to drop to safer levels, at which time restrictions can be eased, even without an accurate prediction of the peak in active infections (see Figs. \ref{Fig3} and \ref{Fig3A}). \subsection{Australia} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig4} \caption{Australia: Model predictions for the period from 22 January to 9 August, 2020 with data from January to June, 2020.} \label{Fig4} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig4A} \caption{Australia: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig4A} \end{figure} A surge in the susceptible population was applied in early March, 2020 in the country. The surge was caused by 2 700 passengers disembarking from the Ruby Princess cruise ship in Sydney and then returning to their homes around Australia. More than 750 passengers and crew have become infected and 26 died. Two government enquiries have been established to investigate what went wrong. Also, at this time many infected overseas passengers arrived by air from Europe and the USA. The Australian government was too slow in quarantining arrivals from overseas. From mid-March, 2020 until mid-May, 2020, the Australian governments introduced measures of testing, contact tracing, social distancing, a stay-at-home policy, the closure of many businesses, and encouraged people to work from home. From Figs. \ref{Fig4} and \ref{Fig4A}, it can be observed that the actions taken were successful, as the actual number of infections declined in accord with the model predictions. There have been no further surge periods. From the end of May, 2020, these restrictions are being removed in stages. The SIR model can be used when future data become available to see if the number of susceptible individuals starts to increase. If so, the model can accommodate this by introducing surge factors.
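All the (a) panels shown in this section rely on the non-linear fit of Eq. \eqref{nonlinear_fitting_function_Dd_Rmd}. A minimal sketch of how such a fit can be obtained is given below (in Python, with synthetic stand-in data, since the published datasets are not reproduced here; $D_0$, $k$ and the noise level are illustrative assumptions):
\begin{verbatim}
# Sketch of the non-linear fit D = D0*(1 - exp(-k*Rm)), using
# synthetic stand-in data. D0, k and the noise level below are
# illustrative assumptions, not values fitted for any country.
import numpy as np
from scipy.optimize import curve_fit

def death_curve(Rm, D0, k):
    return D0 * (1.0 - np.exp(-k * Rm))

rng = np.random.default_rng(0)
Rm_data = np.linspace(0.0, 8.0e4, 60)         # removals R_m^d
D_data = death_curve(Rm_data, 4.6e3, 5.0e-5)  # "observed" deaths
D_data += rng.normal(0.0, 50.0, Rm_data.size) # measurement noise

(D0_fit, k_fit), _ = curve_fit(death_curve, Rm_data, D_data,
                               p0=(1e3, 1e-5))
print(D0_fit, k_fit)   # recovered parameters; R = Rm - D then follows
\end{verbatim}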
\subsection{USA} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig5} \caption{USA: Model predictions for the period from 22 January to 9 August, 2020 with data from January to June, 2020.} \label{Fig5} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig5A} \caption{USA: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig5A} \end{figure} As of early June, 2020, the peak number of infections has not been reached. When a peak in the data is not reached, it is more difficult to fit the model predictions to the data. In the model, it is necessary to add a few surge periods. This is because new epicentres of the virus arose at different times. The virus started spreading in Washington State, followed by California, New York, Chicago and the southern states of the USA. The need to add surge periods shows clearly that the spread of the virus is not under control. In the USA, by the end of May, 2020, the number of active infected cases had not yet peaked and the cumulative total number of infections keeps getting bigger. This can be accounted for in the SIR model by considering how the susceptible population changes with time in May. During that time, to match the data to the model predictions, surge periods were used where the normalized susceptible population $S$ was reset to $0.2$ every four days. What is currently happening in the USA is that as susceptible individuals become infected, their population decreases, with these infected individuals mixing with the general population, leading to an increase in the susceptible population. This is shown in the model by the variable for the susceptible population, $S$, varying from about $0.06$ to $0.20$, repeatedly during May. Until this vicious cycle is broken, the cumulative total infected population will keep growing at a steady rate and not reach an almost steady-state. The fluctuating normalized susceptible variable provides clear evidence that government authorities do not have the spread of the virus under control (see Figs. \ref{Fig5} and \ref{Fig5A}). \subsection{Texas} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig6} \caption{Texas: Model predictions for the period from 12 March to 28 September, 2020 with data from March to June, 2020.} \label{Fig6} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig6A} \caption{Texas: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig6A} \end{figure} The plots in Figs. \ref{Fig6} and \ref{Fig6A} show that the peak in the total cumulative number of infections has not been reached as of early June; however, the peak is probably not far away. If there are no surges in the susceptible population, then one could expect that by late September, 2020, the number of infections will have fallen to very small numbers and the virus will have been well under control, with the total number of deaths in the order of 2 000.
In mid-May, 2020, some restrictions were lifted in the state of Texas. The SIR model can be used to model some of the possible scenarios if the early relaxation of restrictions leads to an increase in the susceptible population. If there is a relatively small increase in the future number of susceptible individuals, no serious impacts occur. However, if there is a large outbreak of the virus, then the impacts can be dramatic. For example, if at the end of June, 2020, $S$ were reset to $0.8$, a second wave of infections would occur, with the peak number of infections occurring near the end of July and the second-wave peak being higher than the initial peak number of infections. Subsequently, the number of deaths would rise from about 2 000 to nearly 5 000, as shown in Figs. \ref{Fig7} and \ref{Fig7A}. \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig7} \caption{Texas: Model predictions with a surge period occurring at the end of June, 2020.} \label{Fig7} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig7A} \caption{Texas: If a second wave occurs, there could be an increase in the number of deaths, $D$.}\label{Fig7A} \end{figure} If governments start lifting their containment strategies too quickly, then it is probable that there will be a second wave of infections with a larger peak in active cases, resulting in many more deaths. \subsection{Italy} \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig8} \caption{Italy: Model predictions for the period from 26 February to 13 September, 2020 with data from February to June, 2020.} \label{Fig8} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,height=8.5cm]{Fig8A} \caption{Italy: (a) Nonlinear fitting with Eq. \eqref{nonlinear_fitting_function_Dd_Rmd} using a trial-and-error method to estimate the number of deaths, $D$, from the removed population, $R_m$ (see text for the details). (b) Plots of the number of removals, $R_m$, against the cumulative total infections $I_{tot}$ and current active cases $I$.} \label{Fig8A} \end{figure} Figure \ref{Fig8} shows clearly that the peak of the pandemic has been reached in Italy and that, without further surge periods, the spread of the virus is contained and the number of active cases is declining rapidly. The plots in panels (a), (b) in Fig. \ref{Fig8A} are a check on how well the model can predict the time evolution of the virus. These plots also assist in selecting the model's input parameters. \section{Flattening the curve}\label{sec_flattening_the_curve} The term {\it flattening the curve} has rapidly become a rallying cry in the fight against COVID-19, popularised by the media and government officials. Claims have been made that flattening the curve results in: (i) a reduction in the peak number of cases, thereby helping to prevent the health system from being overwhelmed, and (ii) an increase in the duration of the pandemic, with the total burden of cases remaining the same. This implies that social distancing measures and management of cases, with their devastating economic and social impacts, may need to continue for much longer. The picture which has been widely shown in the media is reproduced in Fig. \ref{Fig9}(a). \begin{figure} \centering \includegraphics[width=14cm,height=8.5cm]{Fig9} \caption{Flattening the curve: Panel (a): The {\it flattening of the curve} diagram used widely in the media to represent a means of reducing the impacts of COVID-19.
Panel (b): If the number of susceptible individuals is reduced, then the peak number of infections will be lower and the time for the number of infections to fall to low numbers is reduced.} \label{Fig9} \end{figure} The idea presented in the media, as shown in Fig. \ref{Fig9}(a), is that by flattening the curve the peak number of infections will decrease; however, the total number of infections will be the same and the duration of the pandemic will be longer. Hence, it is concluded that by {\it flattening the curve} the virus will have a lesser impact upon the demands on hospitals. Figure \ref{Fig9}(b) gives the scientific meaning of {\it flattening the curve}. By governments imposing appropriate measures, the number of susceptible individuals can be reduced and, combined with the isolation of infected individuals, this will reduce the peak number of infections. When this is done, it actually shortens the time the virus impacts society. Thus, the second claim has no scientific basis and is incorrect. What is important is reducing the peak in the number of infections; when this is done, it shortens the duration over which drastic measures need to be taken, rather than lengthening the period, as stated in the media and by government officials. Figure \ref{Fig9} shows that reducing the peak number of infections actually reduces the duration of the impact of the virus on a community. \section{Conclusions}\label{sec_conclusions} Mathematical modelling theories are effective tools to deal with the time evolution and patterns of disease outbreaks. They provide us with useful predictions in the context of the impact of intervention in decreasing the number of infected-susceptible incidence rates \cite{Giordano2020,Hou2020,Anas2020}. In this work, we have augmented the classic SIR model with the ability to accommodate surges in the number of susceptible individuals, supplemented by recorded data from China, South Korea, India, Australia, USA, Italy and the state of Texas to provide insights into the spread of COVID-19 in communities. In all cases, the model predictions could be fitted to the published data reasonably well, with some fits better than others. For China, the actual number of infections fell more rapidly than the model prediction, which is an indication of the success of the measures implemented by the Chinese government. There was a jump in the number of deaths reported in mid-April in China, which results in a less robust estimate of the number of deaths predicted by the SIR model. The susceptible population dropped to zero very quickly in South Korea, showing that the government was quick to act in controlling the spread of the virus. As of the beginning of June, 2020, the peak number of infections in India has not yet been reached. Therefore, the model predictions give only minimum estimates of the duration of the pandemic in the country and of the total cumulative number of infections and deaths. The case study of the virus in Australia shows the importance of including a surge where the number of susceptible individuals can be increased. This surge can be linked to the arrival of infected individuals from overseas and infected people from the Ruby Princess cruise ship. The data from the USA provide an interesting example, since there are multiple epicentres of the virus that arise at different times. This makes it more difficult to select appropriate model parameters and surges where the susceptible population is adjusted. The results for Texas show that the model can be applied to communities other than countries.
Italy provides an example where there is excellent agreement between the published data and the model predictions. Thus, our SIR model provides a theoretical framework to investigate the spread of the COVID-19 virus within communities. The model can give insights into the time evolution of the spread of the virus that the data alone do not. In this context, it can be applied to communities, given that reliable data are available. Its power also lies in the fact that, as new data are added to the model, it is easy to adjust its parameters and provide best-fit curves between the data and the predictions from the model. It is in this context, then, that it can provide estimates of the number of likely deaths in the future and of the time scales for the decline in the number of infections in communities. Our results show that the SIR model is suitable to predict the epidemic trend due to the spread of the disease, as it can accommodate surges and be adjusted to the recorded data. By comparing the published data with the predictions, it is possible to assess the success of government interventions. The considered data were taken between January and June, 2020, and contain the datasets from before and during the implementation of strict control measures. Our analysis also confirms the successes and failures of the control measures taken in some countries. Strict, adequate measures have to be implemented to further prevent and control the spread of COVID-19. Countries around the world have taken steps to decrease the number of infected citizens, such as lock-down measures, awareness programs promoted via media, hand sanitization campaigns, etc., to slow down the transmission of the disease. Additional measures, including early detection approaches and the isolation of susceptible individuals to avoid mixing them with asymptomatic and self-quarantined individuals, traffic restrictions, and medical treatment, have shown they can help to prevent the increase in the number of infected individuals. Strong lockdown policies can be implemented, in different areas, if possible. In line with this, necessary public health policies have to be implemented in countries with high rates of COVID-19 cases as early as possible to control its spread. The SIR model used here is only a simple one and thus the predictions that come out might not be accurate enough, something that also depends on the published data and their trustworthiness. However, as the model data show, one thing that is certain is that COVID-19 is not going to go away quickly or easily. \section*{Acknowledgements} AM is thankful for the support provided by the Department of Mathematical Sciences, University of Essex, UK to complete this work.
\section{Introduction} One aim of our model (see e.g. Ref.~\refcite{2007AIPC..910...55R} and references therein) is to derive from first principles both the luminosity in selected energy bands and the time-resolved/integrated spectra of GRBs.\cite{2004IJMPD..13..843R} The luminosity in selected energy bands is evaluated by integrating over the equitemporal surfaces (EQTSs)\cite{2004ApJ...605L...1B,2005ApJ...620L..23B} the energy density released in the interaction of the optically thin fireshell with the CircumBurst Medium (CBM) measured in the co-moving frame, duly boosted in the observer frame. The radiation viewed in the co-moving frame of the accelerated baryonic matter is assumed to have a thermal spectrum and to be produced by the interaction of the CBM with the front of the expanding baryonic shell.\cite{2004IJMPD..13..843R} \section{The instantaneous GRB spectra} In Ref.~\refcite{2005ApJ...634L..29B} it is shown that, although the instantaneous spectrum in the co-moving frame of the optically thin fireshell is thermal, the shape of the final instantaneous spectrum in the laboratory frame is non-thermal. In fact, as explained in Ref.~\refcite{2004IJMPD..13..843R}, the temperature of the fireshell is evolving with the co-moving time and, therefore, each single instantaneous spectrum is the result of an integration of hundreds of thermal spectra with different temperatures over the corresponding EQTS. This calculation produces a non-thermal instantaneous spectrum in the observer frame.\cite{2005ApJ...634L..29B} Another distinguishing feature of the GRB spectra which is also present in these instantaneous spectra is the hard-to-soft transition during the evolution of the event.\cite{1997ApJ...479L..39C,1999PhR...314..575P,2000ApJS..127...59F,2002A&A...393..409G} In fact the peak of the energy distribution $E_p$ drifts monotonically to softer frequencies with time.\cite{2005ApJ...634L..29B} This feature explains the change in the power-law low-energy spectral index\cite{1993ApJ...413..281B} $\alpha$, which at the beginning of the prompt emission of the burst ($t_a^d=2$ s) is $\alpha=0.75$ and progressively decreases at later times.\cite{2005ApJ...634L..29B} In this way the link between $E_p$ and $\alpha$ identified in Ref.~\refcite{1997ApJ...479L..39C} is explicitly shown. \section{The time-integrated GRB spectra - Application to GRB 031203} The time-integrated observed GRB spectra show a clear power-law behavior. Within a different framework (see e.g. Ref.~\refcite{1983ASPRv...2..189P} and references therein) it has been argued that it is possible to obtain such power-law spectra from a convolution of many non-power-law instantaneous spectra monotonically evolving in time. This result was recalled and applied to GRBs\cite{1999ARep...43..739B} assuming for the instantaneous spectra a thermal shape with a temperature changing with time. It was shown that the integration of such energy distributions over the observation time gives a typical power-law shape possibly consistent with GRB spectra. Our specific quantitative model is more complicated than the one considered in Ref.~\refcite{1999ARep...43..739B}: the instantaneous spectrum here is not a black body. Each instantaneous spectrum is obtained by an integration over the corresponding EQTS: \cite{2004ApJ...605L...1B,2005ApJ...620L..23B} it is itself a convolution, weighted by appropriate Lorentz and Doppler factors, of $\sim 10^6$ thermal spectra with variable temperature.
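A toy numerical check of the simpler mechanism of Ref.~\refcite{1999ARep...43..739B} (and not of our full EQTS calculation) is sketched below: time-integrating Planck photon spectra whose temperature drifts monotonically from hard to soft indeed yields an approximately power-law spectrum. The equal-duration epochs, the arbitrary units and the assumed constant bolometric output per epoch (emitting area $\propto T^{-4}$, which gives a photon index close to $-2$ between the extreme temperatures) are assumptions of this sketch only.
\begin{verbatim}
# Toy illustration: time-integrating Planck photon spectra with a
# temperature drifting hard -> soft. Equal-duration epochs and a
# constant bolometric output per epoch (area ~ T^-4) are assumed.
import numpy as np

E = np.logspace(-2, 3, 400)            # photon energies, arbitrary units
temps = np.logspace(1.5, -0.5, 200)    # temperature drifting hard -> soft

def planck_photons(E, T):
    # Planck-like photon number spectrum, area weighted by T^-4
    return T**-4 * E**2 / np.expm1(E / T)

with np.errstate(over="ignore"):       # overflow at E >> T is benign
    N_int = sum(planck_photons(E, T) for T in temps)

# local power-law index d(log N)/d(log E) in the intermediate band
idx = np.gradient(np.log(N_int), np.log(E))
band = (E > 1.0) & (E < 10.0)
print(idx[band].mean())                # close to -2
\end{verbatim}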
Coming back to our specific model, the time-integrated spectra are therefore not plain convolutions of thermal spectra: they are convolutions of convolutions of thermal spectra.\cite{2004IJMPD..13..843R,2005ApJ...634L..29B} \begin{figure} \includegraphics[width=\hsize,clip]{031203_spettro} \caption{Three theoretically predicted time-integrated photon number spectra $N(E)$, computed for GRB 031203,\cite{2005ApJ...634L..29B} are here represented for $0 \le t_a^d \le 5$ s, $5 \le t_a^d \le 10$ s and $10 \le t_a^d \le 20$ s (dashed and dotted curves), where $t_a^d$ is the photon arrival time at the detector.\cite{2001ApJ...555L.107R,2005ApJ...634L..29B} The hard-to-soft behavior is confirmed. Moreover, the theoretically predicted time-integrated photon number spectrum $N(E)$ corresponding to the first $20$ s of the ``prompt emission'' (black bold curve) is compared with the data observed by INTEGRAL\cite{2004Natur.430..646S}. This curve is obtained as a convolution of 108 instantaneous spectra, which are enough to get a good agreement with the observed data. Details in Ref.~\refcite{2005ApJ...634L..29B}.} \label{031203_spettro} \end{figure} In Fig.~\ref{031203_spettro} we present the photon number spectrum $N(E)$ time-integrated over the $20$ s of the whole duration of the prompt event of GRB 031203 observed by INTEGRAL:\cite{2004Natur.430..646S} in this way we obtain a typical non-thermal power-law spectrum which turns out to be in good agreement with the INTEGRAL data\cite{2004Natur.430..646S,2005ApJ...634L..29B} and gives clear evidence of the possibility that the observed GRB spectra originate from thermal emission.\cite{2005ApJ...634L..29B} \input{bianco_chardonnet.bbl} \end{document}
\section{INTRODUCTION} Yang et al. defined autonomy levels for medical robots on a scale from 0 to 5, ranging from no autonomy to full automation which requires no human input~\cite{yang2017medical}. The highest level of fully automated Robotic-Assisted Surgery (RAS) is still far from reality due to technical challenges and regulatory, legal, and ethical concerns. A more achievable short-term goal may be to envision a higher level of assistance offered by surgical robots - Robot-Enhanced Surgery (RES) - where collaborative and adaptive robot partners can leverage surgeon strengths and help overcome possible motor limitations~\cite{battaglia2021rethinking}. Recent research developments are enabling this new class of assistive surgical robots, with contributions to predict surgeon intent~\cite{Qin2020}, perceive surgical states~\cite{qin2020fusion}, measure expertise levels~\cite{wang2018deep}, estimate procedural progress~\cite{VanAmsterdam2020}, monitor the surgeon's stress levels~\cite{zheng2022frame} and provide novel guidance through audio, augmented reality, and haptic channels~\cite{kitagawa2005effect,culjat2008pneumatic,ershad2021adaptive,Zheng2022styles}. The first step for these RES applications is often to perceive the current activities of a surgical task and to predict future surgical activities based on the current ones~\cite{van2021gesture,gao2020,gao2021future,park2021}. Segmentation and recognition can be used to decompose surgical tasks, for example suturing, into a sequence of surgical gestures (e.g., reaching for needle, positioning needle), and to perform surgical skill evaluation based on the executed sequence of gestures~\cite{tao2013surgical}. Padoy and Hager introduced a collaborative control method which could assist the surgeon by recognizing the completion of manual subtasks and automating the remaining ones on a da Vinci surgical robot~\cite{padoy2011human}. Moreover, gesture recognition can also be used to trigger appropriate information displays on either the surgeon console monitor or the trainer monitor. Another important aspect is anticipating or predicting the operator's intent and robot movements. The prediction of a robotic surgical instrument's trajectory can potentially contribute to preventing collisions between instruments or with obstacles, therefore enabling a method to prevent these dangerous adverse events during RAS. The prediction of the surgeon's movements or of the trajectories of the surgeon-side manipulators can help generate reference trajectories for developing haptic feedback methods to improve surgical training outcomes. Surgical activity recognition and prediction are both time-series sequence modeling problems. Recurrent Neural Networks (RNNs) have been widely used for time-series modeling problems, for example, Gated Recurrent Units (GRU)~\cite{cho2014gru} and Long Short-Term Memory (LSTM) networks~\cite{hochreiter1997lstm}. These techniques were initially aimed at solving NLP problems~\cite{cho2014gru,sundermeyer2012lstm}, but they can be easily used for solving other real-world problems such as stock price prediction and human activity recognition~\cite{selvin2017stock,singh2017human}. Bahdanau et al. first introduced attention in machine translation, where the output will focus its attention on a certain part of a sequence~\cite{Bahdanau2015}. Although the attention mechanism has been widely studied, such attention mechanisms are primarily used in conjunction with an RNN~\cite{qin2017dual}.
The Transformer model has gained popularity after being published by Vaswani et al.~\cite{vaswani2017attention}. Unlike attention-based RNNs, the key feature of the Transformer model is its novel attention mechanism, which avoids recurrence and relies only on attention to draw global dependencies between model input and output. It replaces the recurrent layers commonly used in encoder-decoder architectures with multi-head attention. According to Vaswani et al., the Transformer can be trained significantly faster than RNN-based architectures on translation tasks. \textbf{Contributions:} In this paper, we propose the use of the Transformer model for surgical activity recognition and prediction. Our method relies only on kinematic data from the surgical robot and achieves performance comparable to, if not better than, state-of-the-art surgical activity recognition and prediction methods. \section{BACKGROUND} \subsection{Prior Work in Surgical Activity Recognition} Surgical activity recognition from robot kinematic data has been studied over the last decade. With developments in machine learning techniques, especially deep learning, the methods for surgical activity recognition have evolved from Hidden Markov Models (HMMs)~\cite{tao2012sparse} and Conditional Random Fields (CRFs)~\cite{tao2013surgical} to more complex deep learning models such as LSTM neural networks~\cite{VanAmsterdam2020,dipietro2016recognizing,Yasar2020,dipietro2019segmenting}. LSTM is an appropriate tool for time-series sequence modeling due to its inherent structure to ``memorize'' and ``forget'' certain points within a sequence of data. In addition to robot kinematic data, surgical video, which directly embeds surgical activity information, was also introduced for surgical activity recognition, based on the development of Convolutional Neural Networks (CNNs) and computer vision~\cite{lea2016segmental}. Qin et al. recently proposed Fusion-KVE, which uses Temporal Convolution Networks (TCN)~\cite{lea2016tcn} and LSTM to process multiple data sources, such as kinematic data and video, for surgical gesture estimation~\cite{qin2020fusion}. \subsection{Prior Work in Surgical Activity Prediction} Time-series prediction is another popular topic in deep learning, especially for LSTM. For example, LSTM has gained popularity in stock price prediction~\cite{selvin2017stock,mehtab2020stock}, weather forecasting~\cite{salman2018single}, etc. The use of LSTM is relatively limited in the surgical activity prediction literature. Qin et al. introduced daVinciNet, which can simultaneously predict the instrument paths and surgical states in robotic-assisted surgery~\cite{Qin2020}. The daVinciNet method uses kinematics, vision and event data sequences as input, and uses an LSTM encoder-decoder model as well as a dual-stage attention mechanism to extract information from the input sequences and, therefore, make predictions seconds in advance~\cite{qin2017dual}. Gao et al. proposed a ternary prior guided variational autoencoder model for future frame prediction in robotic surgical video sequences~\cite{gao2021future}. \subsection{Transformer Applications} Although the Transformer model was first designed for machine translation problems, it has recently been studied in other time-series modeling problems. Wu et al. employed a Transformer-based approach to forecasting time-series data and used influenza-like illness (ILI) forecasting as a case study.
They showed that the results produced by the approach were favorably comparable to the state-of-the-art~\cite{wu2020deep}. Giuliari et al. used the Transformer Network and the larger Bidirectional Transformer (BERT) to predict the future trajectories of individual people in a scene~\cite{giuliari2021transformer}. In surgical applications, Gao et al. introduced Trans-SVNet for surgical workflow analysis~\cite{gao2021trans}. These studies give us confidence that the Transformer model, currently the state-of-the-art in NLP tasks, could deliver promising performance on time-series modeling. Therefore, inspired by the recent development and validation of the Transformer model, we move a step forward to using the Transformer model in RAS applications, i.e., gesture recognition, gesture prediction, and trajectory prediction. \section{DATASET} The JIGSAWS dataset contains three types of surgical tasks (Knot tying, Needle passing and Suturing) completed by eight subjects in a benchtop setting using a da Vinci Surgical System~\cite{Ahmidi2017,Gao2014}. For each trial, the kinematic data of the two surgeon-side manipulators (MTMs) and two patient-side manipulators (PSMs), as well as synchronized video data, are saved. For each manipulator, the time-series kinematic data includes the end-effector position (3), rotation matrix (9), linear velocity (3), angular velocity (3), and gripper angle (1), resulting in 19 features in total for each end-effector. In addition, a key distinguishing feature of the JIGSAWS dataset is its annotated gestures, which are synchronized with the kinematic data. The dataset specifies a common vocabulary of 15 gestures (Table~\ref{tab:gestures}~\cite{Gao2014}). We additionally labeled the unannotated data as class 0, resulting in 16 gesture classes. We used 39 suturing trials for model evaluation and used all 38 kinematic features of the two MTMs or PSMs. \section{METHOD} We formulate the surgical activity recognition and prediction tasks as supervised machine learning tasks. We take advantage of the Transformer encoder-decoder model - a model widely used in natural language processing - to process historical information all at once, aiming to better satisfy the real-time requirement of Robot-Enhanced Surgery. Though the three tasks of gesture recognition, gesture prediction, and trajectory prediction share a similar Transformer model architecture, there are slight differences in the input and output formats according to their different objectives. We define an observation window of size $T_{obs}$ and a prediction window of size $T_{pred}$. The gesture recognition task takes as input the current kinematic data ($K$) within the $t+1$ to $t+T_{obs}$ window and generates the gesture labels ($G$) for the same window. The gesture prediction task takes as input the current kinematic data within the $t+1$ to $t+T_{obs}$ window, as well as the current gesture labels ($t+1$ to $t+T_{obs}$ window) estimated by the first task, and predicts the future time-series gesture labels within the $t+T_{obs}+1$ to $t+T_{obs}+T_{pred}$ window. For the trajectory prediction task, we use the current kinematic data within the $t+1$ to $t+T_{obs}$ window, together with the current gesture labels within the $t+1$ to $t+T_{obs}$ window (from task 1) and the future gesture labels within the $t+T_{obs}+1$ to $t+T_{obs}+T_{pred}$ window (from task 2), to predict the future time-series end-effector trajectory ($P$) within the $t+T_{obs}+1$ to $t+T_{obs}+T_{pred}$ window.
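The following sketch illustrates, under our notation, how a trial can be sliced into such windows for the recognition task; \texttt{make\_frames} is a hypothetical helper, and the two prediction tasks differ only in how the target window is shifted.
\begin{verbatim}
import numpy as np

def make_frames(K, G, T_obs):
    # K: (T, 38) kinematic features; G: (T,) integer gesture labels.
    # Returns encoder inputs (n, T_obs, 38) and recognition targets
    # (n, T_obs); the prediction tasks instead take the T_pred labels
    # (or positions) that follow each observation window as targets.
    n = len(K) - T_obs + 1                 # sliding window, stride 1 sample
    enc = np.stack([K[i:i + T_obs] for i in range(n)])
    tgt = np.stack([G[i:i + T_obs] for i in range(n)])
    return enc, tgt
\end{verbatim}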
\begin{table}[] \centering \caption{Gesture Descriptions in JIGSAWS} \label{tab:gestures} \begin{tabular}{c|l} \hline Gesture ID & Description \\ \hline G1 & Reaching for needle with right hand \\ \hline G2 & Positioning needle \\ \hline G3 & Pushing needle through tissue \\ \hline G4 & Transferring needle from left to right \\ \hline G5 & Moving to center with needle in grip \\ \hline G6 & Pulling suture with left hand \\ \hline G7 & Pulling suture with right hand \\ \hline G8 & Orienting needle \\ \hline G9 & Using right hand to help tighten suture \\ \hline G10 & Loosening more suture \\ \hline G11 & Dropping suture at end and moving to end points \\ \hline G12 & Reaching for needle with left hand \\ \hline G13 & Making C loop around right hand \\ \hline G14 & Reaching for suture with right hand \\ \hline G15 & Pulling suture with both hands \end{tabular}% \end{table} \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{transformer_architecture.pdf} \caption{Architecture of the proposed Transformer model. In the Encoder, $d_{enc} = 38$ during gesture recognition and prediction, $d_{enc} = 54$ during trajectory prediction. In the Decoder, $T = T_{obs}$ during gesture recognition and $T = T_{pred}$ during gesture and trajectory prediction; $d_{dec} = 16$ during gesture recognition and prediction, $d_{dec} = 22$ during trajectory prediction; $d_{out} = 16$ during gesture recognition and prediction, $d_{out} = 6$ during trajectory prediction.} \label{fig:architecture} \end{figure*} \subsection{Transformer Model} \begin{table*}[] \centering \caption{List of modifications based on the original Transformer model~\cite{vaswani2017attention}.} \label{tab:changes} \begin{tabular}{ll} \hline Modification & Description \\ \hline Removing the Embedding layer & It is designed for NLP tasks in which words are embedded to vectors. \\ \hline Removing the Padding mask & \begin{tabular}[c]{@{}l@{}}It is designed for NLP tasks in which sentence lengths are different. \\ Padding mask ensures that the loss can be calculated efficiently.\end{tabular} \\ \hline Adding Encoder output layer & \begin{tabular}[c]{@{}l@{}}It is a fully connected layer to map the encoder output's dimension to the decoder dimension,\\ in order to calculate the attention between encoder sequence and decoder sequence.\end{tabular} \end{tabular}% \end{table*} Our proposed Transformer model resembles the original Transformer architecture, which consists of an Encoder and a Decoder~\cite{vaswani2017attention}. We made modifications to the original Transformer architecture based on our needs, for example removing the embedding layers, which were designed for machine translation tasks (Fig~\ref{fig:architecture} and Table~\ref{tab:changes}). For each trial, we used a sliding window (window size: $T_{obs} = 1\,second$, stride: $S = 1\,sample$) to organize the kinematic data into frames. \paragraph{Encoder} The Encoder consists of a positional encoding layer, a stack of $N$ identical encoder layers, and an output layer. The encoder dimension $d_{enc}$ is determined by the number of features of the encoder input sequence ($d_{enc} = 38$ for gesture recognition and prediction, $d_{enc} = 54$ for trajectory prediction). To make use of the order of the encoder input sequence, positional encoding injects sequential information by element-wise addition of a positional encoding vector to the encoder input sequence. The resulting sequence is then fed into the encoder layers.
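Before describing the encoder layers, here is a minimal sketch of the standard sinusoidal positional encoding of Vaswani et al., which is added element-wise to the input; variable names are ours.
\begin{verbatim}
import numpy as np

def positional_encoding(T, d):
    # (T, d) matrix added element-wise to a length-T, d-dimensional
    # input sequence; even dimensions use sine, odd dimensions cosine.
    pos = np.arange(T)[:, None]            # time-step index
    i = np.arange(d)[None, :]              # feature index
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    pe = np.zeros((T, d))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe
\end{verbatim}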
Each encoder layer has two sub-layers: a multi-head self-attention mechanism and a fully connected feed-forward network. The encoder layer is repeated $N$ times. Finally, the data is passed through a fully connected output layer mapping it from the encoder dimension $d_{enc}$ to the decoder dimension $d_{dec}$. \paragraph{Decoder} Similarly, the Decoder consists of a positional encoding layer, a stack of $N$ identical decoder layers, and an output layer. The decoder dimension $d_{dec}$ is determined by the number of features of the decoder input sequence ($d_{dec} = 16$ for gesture recognition and prediction, $d_{dec} = 22$ for trajectory prediction). The decoder input will be discussed in the following sections. After positional encoding, the sequence is fed to the decoder layers. The decoder layer has an additional multi-head attention layer between the multi-head self-attention mechanism and the fully connected feed-forward network. The added multi-head attention layer performs attention over the output of the encoder stack. Finally, the sequence passes through a fully connected layer of dimension $d_{out}$ to generate the results. We also employed look-ahead masking on the decoder input sequence to ensure that the prediction of a time-series data point depends only on previous data points. \begin{figure} \centering \subfloat[Gesture Recognition training method.]{ \includegraphics[width=\linewidth]{recog_train.pdf}\label{fig:recog_train}} \\ \subfloat[Gesture Recognition inference method.]{ \includegraphics[width=\linewidth]{recog_infer.pdf}\label{fig:recog_infer}} \caption{Training and inference (testing) methods during Gesture Recognition. $R$ in (b) is a random vector for initializing inference. $\hat{G_i}$ in (b) is the gesture value estimated by the model. \label{fig:recog_architecture}} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{traj.png} \caption{Training and inference methods during Trajectory Prediction.} \label{fig:traj_architecture} \end{figure} \subsection{Training and Testing} The Transformer model has three important hyperparameters that can significantly affect model performance: the number of encoder/decoder layers ($N$ in Figure~\ref{fig:architecture}), the number of encoder heads ($h_{enc}$), and the number of decoder heads ($h_{dec}$). The hyperparameters were tuned using grid search. We shuffled the data frames of all 39 suturing trials in JIGSAWS and split them into training and testing sets with a 70/30 ratio for grid search purposes. \paragraph{Gesture Recognition} Gesture recognition can be treated as ``translating'' from current kinematic data to current gestures. During training, the encoder input sequence consisted of all 38 current kinematic features of either the MTMs or PSMs ($K_i$, $i = t+1, t+2, ..., t+T_{obs}$). The decoder ground-truth (output) sequence consisted of all 16 gesture classes (current gestures) of the input time steps ($G_i$, $i = t+1, t+2, ..., t+T_{obs}$). Following the teacher forcing procedure, the decoder input sequence was the shifted-right ground-truth sequence ($G_i$, $i = t, t+1, ..., t+T_{obs}-1$), as shown in Fig~\ref{fig:recog_train}. During testing and inference, shown in Fig~\ref{fig:recog_infer}, the first instance in the decoder input was a randomized vector $R$. Then, for each inference step, the predicted value $\hat{G_i}$ from the decoder output was appended to the decoder input sequence recurrently for the next inference step.
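The look-ahead mask and the recurrent inference loop just described can be sketched as follows; \texttt{model} is a hypothetical callable standing in for the trained Transformer, so this is an illustration rather than our exact implementation.
\begin{verbatim}
import numpy as np

def look_ahead_mask(T):
    # True entries are masked: position t may attend only to steps <= t.
    return np.triu(np.ones((T, T)), k=1).astype(bool)

def recognize(model, enc_in, T_obs, R):
    # Recurrent inference: start from a random vector R and feed each
    # predicted gesture back into the decoder input (Fig. (b)).
    dec_in = [R]
    for _ in range(T_obs):
        g_hat = model(enc_in, np.stack(dec_in))[-1]  # last-step output
        dec_in.append(g_hat)
    return dec_in[1:]                                # estimated gestures
\end{verbatim}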
\paragraph{Gesture Prediction} Similar to gesture recognition, the encoder input sequence consisted of all 38 current kinematic features of either the MTMs or PSMs ($K_i$, $i = t+1, t+2, ..., t+T_{obs}$). The decoder ground-truth (output) sequence consisted of all 16 gesture classes of the future time steps ($G_i$, $i = t+T_{obs}+1, t+T_{obs}+2, ..., t+T_{obs}+T_{pred}$). The decoder input sequence consisted of the current gesture classes ($G_i$, $i = t+1, t+2, ..., t+T_{obs}$). During testing and inference, since the inputs of the Encoder and Decoder were both known - the current kinematic data $K$ from $t+1$ to $t+T_{obs}$ and the current gesture data $G$ from $t+1$ to $t+T_{obs}$, respectively - the inference did not involve any recurrence; both inputs were fed into the model at the same time. In both gesture recognition and gesture prediction, the cumulative categorical cross-entropy loss was used to measure the discrepancy between the model output and the ground-truth gesture labels. \paragraph{Trajectory Prediction} The encoder input sequence consisted of the concatenation of all 38 current kinematic features and the one-hot vector of the 16 gesture classes of the PSMs ($[K_i, G_i]$, $i = t+1, t+2, ..., t+T_{obs}$). The decoder ground-truth (output) sequence consisted of the 6 position dimensions $x,y,z$ of both the left and right end-effectors during the future time steps ($P_i$, $i = t+T_{obs}+1, t+T_{obs}+2, ..., t+T_{obs}+T_{pred}$). The decoder input sequence consisted of the current position information ($P_i$, $i = t+1, t+2, ..., t+T_{obs}$) and the future gesture information ($G_i$, $i = t+T_{obs}+1, t+T_{obs}+2, ..., t+T_{obs}+T_{pred}$), as shown in Fig~\ref{fig:traj_architecture}. We used the cumulative $L_2$ loss between the predicted end-effector trajectory and the ground-truth trajectory, summed over the $T_{pred}$ prediction steps, as the trajectory loss function. To take advantage of the Transformer's ability to process a time series all at once, no recurrent inference (as in gesture recognition) is used; both inputs are fed into the model at the same time. Similar to the original Transformer implementation, we used the Adam optimizer~\cite{kingma2014adam} with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$ and varied the learning rate ($lr$) over the training steps~\cite{vaswani2017attention}. We used $warmup\_steps = 2000$: \begin{equation} lr = d^{-0.5}_{dec}\cdot\min\left(steps^{-0.5},\; steps\cdot warmup\_steps^{-1.5}\right) \end{equation} \section{EXPERIMENTAL EVALUATIONS} We evaluated our gesture recognition, gesture prediction and trajectory prediction models on the JIGSAWS dataset (see Table~\ref{tab:gestures}). To evaluate gesture recognition and gesture prediction accuracy, for each data frame, we calculated the percentage of accurately recognized or predicted time steps in the frame. Then, the accuracy was averaged across all the frames in the testing dataset. To evaluate the performance of end-effector trajectory prediction, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were used: \begin{equation} RMSE=\sqrt{\frac{\sum_{i=1}^{N}\left(y^{i}-\hat{y}^{i}\right)^{2}}{N}} \end{equation} \begin{equation} MAE=\frac{\sum_{i=1}^{N}\left|y^{i}-\hat{y}^{i}\right|}{N} \end{equation} We calculated both metrics for each dimension of the Cartesian end-effector path in the endoscopic reference frame (x,y,z) and also for the end-effector distance $d=\sqrt{x^2+y^2+z^2}$ from the origin (camera tip). We adopted Leave-One-User-Out (LOUO) cross-validation to train and test the generalizability of our model.
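The warm-up schedule of the equation above can be written directly as a small function; a minimal sketch, with the step clamped at 1 to avoid a division by zero at the first step.
\begin{verbatim}
def transformer_lr(step, d_dec, warmup_steps=2000):
    # Linear warm-up for `warmup_steps` steps, then ~ step^{-0.5} decay,
    # scaled by d_dec^{-0.5} as in the equation above.
    step = max(step, 1)
    return d_dec ** -0.5 * min(step ** -0.5,
                               step * warmup_steps ** -1.5)
\end{verbatim}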
In LOUO, for each iteration, the data of the $i^{th}$ subject was left out as the testing set and the rest of the data was used for training. We then averaged the evaluation metrics across all iterations, so that every subject served once as the testing set, and reported the mean. We used $batch\_size = 64$, and the model was trained for $epoch = 15$ during gesture recognition, $epoch = 40$ during gesture prediction, and $epoch = 50$ during trajectory prediction. For the final evaluation of the trajectory prediction performance, the evaluation metrics were calculated only at time step $T_{pred}$, without accounting for the earlier prediction steps in the prediction window. \section{RESULTS AND DISCUSSIONS} In order to compare with previous studies in the literature, during gesture recognition we kept the data at its original frequency of 30Hz. During gesture and trajectory prediction, we downsampled the data to 10Hz. For each task, the model was trained and evaluated independently. \subsection{Gesture Recognition} We kept the JIGSAWS data at its original frequency of 30Hz. We did not run hyperparameter tuning for gesture recognition as it would take significant computational effort with 30Hz data. Instead, we decided to keep the model for gesture recognition in its simplest form: $N = 1$, $h_{enc} = 1$, and $h_{dec} = 1$. The encoder input sequence was the 38-dimensional kinematic features with a length of the current observation $T_{obs} = 1s\,(30\,samples)$. The decoder output was the 16-class gesture sequence over the same observation window $T_{obs} = 1s\,(30\,samples)$ as the corresponding encoder input sequence. We used the data of the MTMs and PSMs individually to test the model performance. After LOUO cross-validation, the reported accuracy was promising and outperformed the state-of-the-art algorithms (89.3\% with MTMs kinematic data as encoder input; 89.2\% with PSMs kinematic data as encoder input, in Table~\ref{tab:recognition_compare}). An example of gesture recognition using the PSM kinematic data of a random trial as the testing dataset is illustrated in Fig~\ref{fig:recog_ex}. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{Recog_Ex.jpg} \caption{An example of Gesture Recognition using a random suturing trial. The top row shows the gestures estimated by our proposed gesture recognition model. The bottom row shows the ground-truth labels.} \label{fig:recog_ex} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{Pred_Ex.jpg} \caption{An example of Gesture Prediction using a random suturing trial. The top row shows the gestures predicted by our proposed gesture prediction model. The bottom row shows the ground-truth labels. The Decoder uses the real current gestures as input.} \label{fig:pred_ex} \end{figure*} Fusion-KVE is a method which incorporates kinematic data, video, and events data. However, our proposed Transformer model only uses kinematic data and outperforms Fusion-KVE in accuracy. Using fewer types of data could potentially shorten the computational time for gesture recognition and therefore enable near-real-time recognition. \begin{table}[] \centering \caption{Comparison to prior works of Gesture Recognition under LOUO cross-validation. The listed models used 30Hz data.} \label{tab:recognition_compare} \begin{tabular}{lll} \hline & Data Sources & Accuracy \\ \hline Fusion-KVE~\cite{qin2020fusion} & PSMs+Video & 86.3\% \\ Forward LSTM~\cite{dipietro2016recognizing} & PSMs & 80.5\% \\ Bidir.
LSTM~\cite{dipietro2016recognizing} & PSMs & 83.3\% \\ \textbf{Transformer} & \textbf{MTMs} & \textbf{89.3\%} \\ \textbf{Transformer} & \textbf{PSMs} & \textbf{89.2\%} \\ \end{tabular}% \end{table} \subsection{Gesture Prediction} In order to compare our proposed model with the state-of-the-art, we downsampled the JIGSAWS data to 10Hz. The downsampled data shortened the computational time during training; therefore, we applied hyperparameter tuning using grid search to optimize the prediction performance. After the grid search, the resulting hyperparameters were: $N = 4$, $h_{enc} = 1$ and $h_{dec} = 4$. During training, the encoder input sequence was the 38-dimensional kinematic features of the current observation ($K_i$, $i = t-T_{obs}+1, t-T_{obs}+2, ..., t$, $T_{obs} = 1s$). The decoder output sequence was the prediction of the 16-class future gestures ($G_i$, $i = t+1, t+2, ..., t+T_{pred}$). It is worth noting that the decoder input sequence is the current gesture sequence ($G_i$, $i = t-T_{obs}+1, t-T_{obs}+2, ..., t$). Although our proposed gesture recognition model could output a good estimation of the current gesture sequence, and it would be more realistic to use the current gesture estimates to mimic the real-world application, we decided to train and evaluate the gesture prediction model on ``real'' current gestures, assuming a perfect current gesture estimation during gesture recognition. This allowed us to independently evaluate the performance of the gesture prediction model. We summarize the gesture prediction model performance in Table~\ref{tab:ges_prediction_compare}. Both the observation $T_{obs}$ and prediction $T_{pred}$ time lengths were 1 second. Although the reported accuracy (84.6\% with MTMs kinematic data as encoder input; 84.0\% with PSMs kinematic data as encoder input) did not significantly outperform the state of the art, we still believe our proposed gesture prediction model is promising since fewer data sources (only kinematic data) were used in our study. One example of gesture prediction using the kinematic data of the MTMs is shown in Fig~\ref{fig:pred_ex}. \begin{table}[] \centering \caption{Comparison to prior works of Gesture Prediction under LOUO cross-validation. The listed models used 10Hz JIGSAWS data and a prediction length of 1 second.} \label{tab:ges_prediction_compare} \begin{tabular}{lll} \hline & Data Sources & Accuracy \\ \hline daVinciNet~\cite{Qin2020} & PSMs+Video & 84.3\% \\ \textbf{Transformer} & \textbf{MTMs} & \textbf{84.6\%} \\ Transformer & PSMs & 84.0\% \end{tabular}% \end{table} \begin{table*}[] \centering \caption{End-effector trajectory prediction performance measures with a prediction window of one second ahead ($T_{pred}=10$).
The prediction performances are reported for the Cartesian end-effector path in the endoscopic reference frame $(x,y,z)$ and $d=\sqrt{x^2+y^2+z^2}$.} \label{tab:traj_prediction_compare} \begin{tabular}{lllllllllll} \hline & Data Sources & Metric & $x_1$ & $y_1$ & $z_1$ & $d_1$ & $x_2$ & $y_2$ & $z_2$ & $d_2$ \\ \hline daVinciNet & PSMs & \multirow{2}{*}{RMSE} & \textbf{2.81} & \textbf{2.42} & \textbf{3.28} & 4.16 & \textbf{3.8} & 4.26 & 4.75 & 5.92 \\ Transformer & PSMs & & 3.15 & 3.03 & 3.30 & \textbf{2.93} & 3.93 & \textbf{4.21} & \textbf{4.22} & \textbf{4.92} \\ \hline daVinciNet & PSMs & \multirow{2}{*}{MAE} & \textbf{2.19} & \textbf{1.95} & \textbf{2.86} & 3.7 & \textbf{3.42} & 3.91 & 4.31 & 5.34 \\ Transformer & PSMs & & 2.86 & 2.85 & 3.00 & \textbf{2.71} & 3.60 & \textbf{3.88} & \textbf{3.84} & \textbf{4.44} \end{tabular}% \end{table*} \subsection{Trajectory Prediction} For the trajectory prediction task, we downsampled the JIGSAWS data to 10Hz. We used the hyperparameters $N = 1$, $h_{enc} = 6$ and $h_{dec} = 11$. The Encoder took both the kinematic features and the 16-class gesture labels as input. The Decoder took the current $x,y,z$ positions of the two end-effectors and the future gestures as input. Similar to gesture prediction, during training and testing we used the ground-truth future gestures in the decoder input, assuming a perfect gesture prediction, in order to evaluate the model independently. Table~\ref{tab:traj_prediction_compare} summarizes the Transformer performance on the JIGSAWS suturing dataset with a prediction time-step of 1 second ($T_{pred}=10$). Using only the PSM data, the Transformer has better performance than daVinciNet on the right-arm trajectory prediction and slightly worse performance on the left arm. Although the results are mixed, the Transformer uses only kinematic data to obtain results as competitive as those of methods using both complex video and kinematic data. This could greatly reduce the computational complexity of applications that require accurate, real-time motion and gesture monitoring. \section{CONCLUSIONS AND FUTURE WORK} In this paper, we used the Transformer model, a novel deep learning model initially designed for NLP tasks, to recognize and predict surgical activities. We modified the Transformer model architecture from the original paper according to the needs of our tasks: gesture recognition, and gesture and trajectory prediction during RAS. In gesture recognition, the model took current kinematic data as its input sequence and estimated the corresponding surgical gestures (accuracy: 89.3\% using MTMs and 89.2\% using PSMs); in gesture prediction, the model took current kinematic data and current surgical gestures as its input sequences and predicted the future (1 second) surgical gestures (accuracy: 84.6\% using MTMs and 84.0\% using PSMs); in trajectory prediction, we jointly utilized the current kinematic data as well as the future gestures to predict the future (1 second) end-effector trajectory, and reached a distance error as low as 2.71\,mm. Considering that our models are based purely on the kinematic data of the end-effectors (MTMs and PSMs) of the da Vinci Surgical System, without the aid of visual features, the results are highly competitive.
Although some studies have shown that combining kinematic data and video could improve the recognition and prediction performance, our work shows the potential to achieve similar performance with only kinematic data, which is preferred when running surgical activity recognition and prediction in a real-time manner~\cite{tao2013surgical,Zappella2013}, since vision data processing is inherently time-consuming. Though the proposed models have outperformed the state-of-the-art methods from the literature, some limitations remain. Future work would include jointly evaluating the performance of the gesture and trajectory prediction models. Our gesture prediction model took the current gesture sequence as its decoder input, and our trajectory prediction model took the future gesture sequence as part of its decoder input. However, in our current implementation, we used the ground-truth values of the current gestures in gesture prediction and the ground-truth values of the future gestures in trajectory prediction, in order to train and evaluate the models independently. To test the robustness and the near-real-time feasibility of the models in real-world gesture and trajectory prediction tasks, our next step would be to use the estimated values from gesture recognition as the gesture prediction decoder input, and then use the estimated values from gesture prediction as the trajectory prediction decoder input. We also plan to conduct more ablation studies measuring the respective difficulty of trajectory prediction for each of the 16 gesture classes. This would help develop a global sense of when the RAS system should offer more motion guidance and deviation warnings. We believe our proposed methods can contribute to the implementation of Robot-Enhanced Surgery (RES) applications and thereby augment the role of robots in assisting surgeons through modern control strategies. \bibliographystyle{IEEEtran} \IEEEtriggercmd{\enlargethispage{2in}}
\section{Introduction} \label{intro} \begin{figure}[t] \resizebox{1\columnwidth}{!}{% \includegraphics{figure1.eps} } \caption{Equivalent circuit for our model of a charge trap (red) coupled to a quantum dot (black). The trap occupation is 0 or 1 electron and it carries a very small current compared to the dot. The dot is treated within the orthodox model.} \label{schema} \end{figure} \begin{figure*} \resizebox{\textwidth}{!}{% \includegraphics{figure2.eps} } \caption{Simulated Coulomb diamonds (differential drain-source conductance versus drain and gate voltages) for our model depicted in Figure \ref{schema}. $C_\mathrm{gt}= 0.004\,\mathrm{aF} \ll C_\mathrm{gd}= 16\,\mathrm{aF}$, i.e.\ the gate-trap capacitance is very small compared to the gate-dot capacitance. Near trap degeneracy ($V_g=0\,\mathrm{V}$) Coulomb diamonds are replicated because of the oscillating trap occupancy with $V_g$ (see Fig.~\ref{fig2Ter}). At larger $V_g$ the diamonds are shifted and undistorted; the trap is always occupied at zero bias but can still be empty at finite bias. This gives lines of differential conductance (see Fig.~\ref{fig2Bis}) slowly evolving with $V_g$.} \label{fig2} \end{figure*} The diversity of nanostructures available for physicists to perform transport experiments at low temperature has grown considerably in the last decade. Following pioneering works on single-electron transistors (SETs) and quantum dots in the early 1990s, it is now very common to observe and analyze Coulomb blockade in novel structures such as carbon-based electronics or semiconducting nanowires. We have developed in the past \cite{hofheinz06B} a silicon nanowire MOSFET which turns into a remarkably stable and simple SET below approximately 10\,K. As it is widely accepted that background charges in the vicinity of the Coulomb island or the tunnel barriers are responsible for the large 1/f noise observed in many SETs, we developed a model based on a single charge trap capacitively coupled to the SET. This picture had been considered before in metallic single-electron transistors, but without predictions for the Coulomb blockade spectroscopy \cite{Grupp}. A peculiarity of our model is that the charge trap energy is sensitive not only to both the source-drain and gate voltages, but also to the dot occupation number, because of the capacitive back-action of the dot upon the trap. We have shown that solving the rate equations for this model gives the same sawtooth pattern as observed experimentally \cite{hofheinz06}. Sawtooth-like distortions of Coulomb diamonds are widely reported in the literature on quantum dots, for instance in carbon nanotube SETs \cite{Cobden2}, graphene \cite{Ponomarenko}, molecules in gaps \cite{Heersche,Kubatkin} or epitaxial nanowires \cite{Thelander}. As published data on Coulomb blockade transport spectroscopy, from our group and others, have revealed more and more details, we have pushed our model further in order to see if the features observed by experimentalists, usually attributed to other effects, can also arise from our model. The most common feature is certainly lines of differential conductance above the blocked region, parallel to the diamond edges. These lines are often observed in undistorted diamonds; one might therefore conclude that our Background Charge (BC) model cannot apply in this case, as it predicts a clear distortion of the pattern. In this work we show that with the right choice of capacitive couplings our BC model does predict these lines in Coulomb spectra.
It also gives negative differential conductance lines in a natural way, without introducing ad hoc hypotheses. We compare the BC model to the excitation spectrum (ES) model and the density-of-states (DOS) fluctuations model, which are both usually invoked to explain such lines. We explain in detail the origin of the most important features predicted by the BC model, the differences between the three models, and how they can be distinguished. Finally we report data on silicon nanowire transistors where the traps are implanted arsenic dopants. We observe lines of differential conductance which are well reproduced by the BC model. \section{The background charge model} \label{sec:1} In this paper we consider the general case, represented in figure \ref{schema}, of a quantum dot connected to a source, a drain and a control gate allowing its electrostatic energy to be shifted. In addition, a charge trap occupied by 0 or 1 electron is located nearby. The current through the trap is negligible compared to the main current through the dot, but the trap is weakly coupled by tunnel barriers to one contact and to the dot, allowing its electron occupation to toggle between 0 and 1. The energy of the trap is also set by the same gate voltage used to control the dot, but with a different capacitive coupling. This purely electrostatic model is a particular case of coupled quantum dots \cite{vanderwiel}. We chose arbitrarily to locate the trap on the source side. The source is grounded and a transport voltage is applied to the drain ($V_d$). We neglect any energy dependence of the tunnel transparencies. The trap and the dot are neither simply in series, as there is a finite transparency between the source and the dot, nor in parallel, as there is no direct source-drain current via the trap. The electrostatic model is fully characterized by the capacitive couplings (see Fig.~\ref{schema}): $C_\mathrm{mn}$ is the capacitance between m and n, where m,n=d (dot), t (trap), s (source), dr (drain) or g (gate). $C_\mathrm{t}= C_\mathrm{td}+ C_\mathrm{st} +C_\mathrm{gt}$ is the total capacitance of the trap. $C_\mathrm{d}= C_\mathrm{td}+ C_\mathrm{sd} +C_\mathrm{drd} +C_\mathrm{gd}$ is the total capacitance of the dot. We consider sequential tunneling events driving the system into a finite number of states, defined by $(\mathrm{N}_\mathrm{dot},\mathrm{N}_\mathrm{trap})$, the occupation numbers of the dot and trap. We deduce the drain-source current from the stationary occupation probability of the trap and the probability distribution of each charge state in the dot, calculated with the rate equations. It is possible to null the dot-trap or source-trap current without changing the conclusions. This does not change the electrostatic scheme but changes the resulting differential conductance lines (not discussed here). The reference of electrostatic energies is arbitrary and does not affect our model, which involves only energy differences between configurations. Figure~\ref{fig2} shows the result for $C_\mathrm{gt}$, $C_\mathrm{gd}$, $C_\mathrm{td}$, $C_\mathrm{st}$, $C_\mathrm{d}$ $= 0.004$, $16$, $1$, $2$, $117$\,aF respectively, i.e.\ for a very small trap-gate capacitance. The current via the trap is 1\,pA, to be compared with the dot-source and dot-drain conductances of $0.1\,\frac{ e^{2}}{h}$ (0.4\,nA for $V_{d} = 10^{-4}\,\mathrm{V}$). The lever arm parameter for the trap, $ \alpha_{t}= \frac{ C_\mathrm{gt} }{ C_\mathrm{t}} \simeq 0.0013 $, is very small, about 10 times smaller than in ref. \cite{hofheinz06}.
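Numerically, the stationary occupation probabilities follow from a standard master-equation calculation. The sketch below illustrates the procedure in Python for four example $(\mathrm{N}_\mathrm{dot},\mathrm{N}_\mathrm{trap})$ states; the rate values are placeholders, whereas in the actual model they are set by the electrostatic energy differences between configurations and by the tunnel transparencies.
\begin{verbatim}
import numpy as np

states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (N_dot, N_trap) examples
W = np.array([[0. , 0.2, 1.0, 0. ],         # W[i, j]: rate from j to i
              [0.1, 0. , 0. , 1.0],         # (placeholder values)
              [1.0, 0. , 0. , 0.2],
              [0. , 1.0, 0.1, 0. ]])
M = W - np.diag(W.sum(axis=0))              # master-equation matrix
M[-1, :] = 1.0                              # impose sum(p) = 1
b = np.zeros(len(states)); b[-1] = 1.0
p = np.linalg.solve(M, b)                   # stationary probabilities
\end{verbatim}
The drain-source current is then obtained by weighting the dot-lead tunneling rates with these stationary probabilities.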
The number of successive distorted Coulomb diamonds increases as $\alpha_{t}$ decreases. A small $ \alpha_{t}$ means that the trap is much less sensitive to the gate voltage than to the dot occupation and drain voltage. As we will see later, the limit $ \alpha_{t}\rightarrow 0$ is similar to the DOS model. A small $\alpha_{t}$ also implies that the differential conductance lines are parallel to the diamond edges. We have checked that a larger $\alpha_{t}$ gives different slopes. We set the degeneracy point, where the trap mean occupation number is $\frac{1}{2}$, at $V_g = 0$\,V. Near this point we observe distorted Coulomb diamonds: two replicas appear, corresponding to the two charge states of the trap. The shift in gate voltage for the replicated diamond is given by $\Delta V_\mathrm{g} = \frac{e C_\mathrm{td}}{C_\mathrm{gd}C_\mathrm{t}}$. The largest distortion $\Delta V_\mathrm{g} \simeq \frac{e}{2 C_\mathrm{gd}}$ occurs when $C_\mathrm{st}= C_\mathrm{td}$ \cite{hofheinz06}, i.e.\ if the trap has equal couplings to the source and the dot. If the trap is more strongly coupled to either the source or the dot, the apparent shift is reduced. \begin{figure} \resizebox{\columnwidth}{!}{\includegraphics{figure3.eps}} \caption{Same simulation as in Figure~\ref{fig2}, focusing on the degeneracy region of the trap ($V_{g} \simeq 0\,\mathrm{V}$). Top panel: energy of the lowest states at $V_d = 0\,\mathrm{V}$. Broken and solid lines are respectively for an empty and an occupied trap. Bottom panel: corresponding stability diagram. When increasing the gate voltage, the (N-1,1)$\rightarrow$(N,0) transition (left square) is reached before the (N-1,1)$\rightarrow$(N,1) transition (circle). The system then follows adiabatically the lowest energy state and the next transition is (N,0)$\rightarrow$(N,1) (right square). The (N-1,1)$\rightarrow$(N,1) transition is then replaced by (N-1,1)$\rightarrow$(N,0)$\rightarrow$(N,1). The latter does not give a drain current.} \label{fig2Ter} \end{figure} \begin{figure} \resizebox{\columnwidth}{!}{ \includegraphics{figure4.eps}} \caption{Same simulation as in Figure~\ref{fig2}, focusing far from the degeneracy region of the trap ($V_{g} \gg 0\,\mathrm{V}$). Here the trap is always occupied at small bias voltage and the diamond is not distorted. Top panel: energy of the lowest states at $V_d = 0.6\,\mathrm{mV}$. Bottom panel: corresponding stability diagram. Region \textit{a}: transitions (N-1,1)$\leftrightarrow$(N,1) can occur. Both states are sequentially obtained, hence a current flows through the dot. Region \textit{c}: in addition, (N-1,1)$\rightarrow$(N,0) is also possible. When this occurs, no current passes through the dot anymore until the slow (N,0)$\rightarrow$(N,1) transition happens. The 0 state of the trap is a ``dark state'' lowering the current through the dot. As a result a negative differential conductance line separates regions \textit{a} and \textit{c}. Region \textit{b}: as in region \textit{c}, the trap can exchange its electron with the dot; in addition, the (N-1,0)$\leftrightarrow$(N,0) transitions can happen. Therefore, whatever the trap state, current can circulate through the dot, and there is more current through the main dot than in \textit{a}; hence a positive differential conductance line separates regions \textit{a} and \textit{b}.
Region \textit{d}: the dot can even be filled with N+1 electrons when the trap is empty.} \label{fig2Bis} \end{figure} A very important point is that the drain current vanishes at small bias near the degeneracy point, when the diamonds are replicated. Figure \ref{fig2Ter} illustrates the physical origin of the effect: starting from $V_g=0\,\mathrm{V}$ at small bias ($V_d \simeq 0\,\mathrm{V}$), when the gate voltage increases, an electron is transferred from the trap onto the main dot. This corresponds to the (N-1,1)$\rightarrow$(N,0) transition (left square in figure \ref{fig2Ter}), which occurs before the (N-1,1)$\rightarrow$(N,1) transition (circle on figure \ref{fig2Ter}) because of the repulsion by the charged trap. This charge exchange between the trap and the dot does not give any drain-source current. More complex sequences with the same initial and final states and an intermediate state could in principle give a finite current. First, co-tunneling events are not taken into account. One could also expect an extra electron to tunnel first from the source onto the dot, with the trap then releasing its electron to the source. Alternatively, the latter could happen first, and an electron could then tunnel from the source to the dot. However, both sequences involve intermediate states too high in energy ((N,1) and (N-1,0) respectively), and are therefore forbidden. For the same reason, once in the (N,0) state it is impossible for an electron to exit the dot into the drain, because the (N-1,0) state is too high in energy at this gate voltage. Increasing the gate voltage further, the system reaches the (N,0)$\leftrightarrow$(N,1) degeneracy (right square in Figure \ref{fig2Ter}). At this point the source can release an electron to the trap. This second transition, at constant number of electrons in the dot, does not yield any source-drain current either. In summary, the (N-1,1)$\rightarrow$(N,1) transition has been replaced by the (N-1,1)$\rightarrow$(N,0)$\rightarrow$(N,1) sequence, in which the trap occupation oscillates with gate voltage through successive transfers from the source and into the dot. In this gate voltage range the only way to recover a drain current is to apply a bias sufficient to allow N-1 or N electrons on the dot. Although a comparable situation has been described qualitatively in Ref. \cite{Tans}, our calculations do not predict the ``kinks'' they observe, but replicas instead. Far from the trap degeneracy ($V_g \gg 0\,\mathrm{V}$), our model recovers undistorted Coulomb diamonds, as shown in Figure~\ref{fig2}. Drain current is restored at low bias as (N-1,1)$\rightarrow$(N,1) is now the lowest energy transition. More interestingly, our model predicts lines of differential conductance in the non-blockaded regions, both negative and positive, as readily visible in Figure~\ref{fig2} and shown in more detail in Figure~\ref{fig2Bis}. At some bias, the mean occupation of the trap is allowed to vary. The system can therefore be in various charge states, implying different total conductances, hence maxima of differential conductance. We now discuss in more detail the origin of the negative differential conductance line between the regions labeled \textit{a} and \textit{c} in Figure~\ref{fig2Bis}. As mentioned before, in both of these regions (N-1,1)$\leftrightarrow$(N,1) transitions are allowed and responsible for a finite drain current. In addition, the (N-1,1)$\rightarrow$(N,0) transition is possible only in region \textit{c}, allowing the trap to transfer its electron to the dot.
Whenever this event happens, no current passes through the dot anymore until the trap is filled again by an electron from the source. The latter event is much slower than tunneling between the source, dot and drain, so the current is smaller in region \textit{c} than in \textit{a} and therefore the differential conductance is negative. The (N,0) state (trap empty) is a ``dark state'' blocking the current through the dot. We have checked that slowing the (N,0)$\rightarrow$(N,1) transition, by decreasing the source-trap tunneling rate, further reduces the current in region \textit{c}. The main conclusion of this study over a large gate voltage range (Figure~\ref{fig2}) is that both the negative differential conductance lines far from degeneracy and the suppression of current at low bias near degeneracy have the same origin, namely the electrostatic interaction between the trap and the dot. This effect can be exploited to build Coulomb blockade rectifiers such as those suggested in refs. \cite{Stopa02,Likharev01}. \section{Comparison with the ES and DOS models} \label{sec:2} Usual explanations for lines parallel to the edges of Cou\-lomb diamonds are based on the ES or DOS models. Transport excitation spectra have been investigated in much detail \cite{zumbuhl,cobden} and their modifications due to the Zeeman effect \cite{Cobden2}, photons \cite{Oosterkamp} or phonons \cite{Fujisawa} are widely observed in the Coulomb blockade stability diagrams of quantum dots. The ES model is based on the existence of a resolved excitation spectrum arising from the quantized kinetic energy of electrons confined in a small volume. It requires a temperature much lower than the mean energy level spacing $\Delta$. We illustrate how lines appear in this model in Figure \ref{spectre_discret}. The left panel is the orthodox-model simulation of a quantum dot with $\Delta\ll k_BT$. The charging energy is constant and equal to 1.38\,meV ($C_d = 116\,\mathrm{aF}$). The right panel corresponds to a dot with the same charging energy, at the same temperature, but with 4 electrons occupying 4 non-degenerate orbital states with spacings of 0.4, 0.3 and 0.1\,meV respectively. Positive differential conductance lines appear. In this constant charging energy model the one-particle excitation spectrum shifts between successive diamonds as levels get occupied, the first excited state for N electrons becoming the ground state for N+1 electrons. Very often scrambling of the one-particle spectrum after adding electrons is observed, but recently a clear correlation between successive diamonds has been reported \cite{Ralph08}. The absence of scrambling is interpreted as the absence of variation of the dot shape when adding electrons. That requires a steep enough confinement potential, for instance sharp etched edges. In the ES model it is natural to expect only positive differential conductance lines because excited states (at higher energies than ground states by definition) are more strongly tunnel-coupled to the electrodes. If equal tunneling rates are assumed for the ground and excited states, as done in Figure \ref{spectre_discret}, then the differential conductance is always positive. To observe negative differential conductance within the ES model the excited state must be less conducting. This could occur because of a specific fluctuation of the wavefunction envelope, a selection rule \cite{weinmann}, a blocking state \cite{datta} or the Stark effect \cite{Dollfus06}.
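As a worked example under the constant-interaction approximation (our assumption here), the addition energy that sets the size of the $N$-electron diamond in the right panel of Figure \ref{spectre_discret} is
\[
E_{\mathrm{add}}(N) = E_c + \left(\varepsilon_{N+1}-\varepsilon_{N}\right),
\]
so with $E_c = 1.38$\,meV and level spacings of 0.4, 0.3 and 0.1\,meV the successive addition energies are 1.78, 1.68 and 1.48\,meV, while the excited-state lines run parallel to the diamond edges, offset from the ground-state transition by the corresponding level spacing.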
\begin{figure} \resizebox{\columnwidth}{!}{% \includegraphics{figure5.eps} } \caption{Left panel: simulation of the stability diagram (differential conductance versus gate and drain voltage) for a quantum dot whose mean level spacing is negligible compared to the temperature $T = 0.1$\,K (orthodox model). The charging energy is constant and equal to 1.38\,meV ($C_\mathrm{d} = 116$\,aF). Right panel: simulation of a dot with the same charging energy at the same temperature, but with 4 electrons occupying 4 non-degenerate orbital states with spacings of 0.4, 0.3 and 0.1\,meV respectively (constant charging energy model). Positive differential conductance lines appear outside the Coulomb diamonds. The one-particle excitation spectrum shifts between successive diamonds as levels get occupied, the excited state for N electrons becoming the ground state for N+1 electrons.} \label{spectre_discret} \end{figure} The DOS model is appropriate if the very same pattern is observed in successive diamonds \cite{Falko97,Falko01}, in strong contrast with the ES model, because the energy in the electrodes does not depend on the gate voltage. In this model the source-drain current is proportional to the local density of states $\nu_{S,D}(E)$ in the source and drain. If for simplicity we suppose that the conductance is dominated by the source-dot tunnel barrier, then $G(V)\propto {\frac{d\nu_{S}}{dE}}$. In that case lines of differential conductance are expected to be parallel to the negative-slope edge of the Coulomb diamond. This edge corresponds to the alignment of the last unoccupied level in the dot with the Fermi level in the source. In the opposite case of a dominating dot-drain barrier the lines will be parallel to the positive slope and $G(V)\propto{\frac{d\nu_{D}}{dE}}$. In heavily doped semiconducting electrodes, just like in any diffusive reservoir, the local DOS fluctuates with energy because of quantum interference of elastically scattered quasi-particles diffusing coherently within a length scale related to their lifetime at a particular energy \cite{Falko01}. The characteristic energy for these fluctuations is set by the inverse of the quasi-particle relaxation time, which decreases as the energy moves away from the Fermi energy. The local DOS fluctuations increase when the diffusion coefficient $D=\frac{1}{d} v_{F}^{2}\tau$ decreases, with $d$ the dimensionality of the electrodes, $v_{F}$ the Fermi velocity and $\tau$ the elastic mean free time. Indeed a very large $D$ corresponds to a near-perfect electronic reservoir without local fluctuations of the DOS. The local DOS fluctuations are reinforced in small, confined geometries. If the trap considered in our model gets closer to the source, at some point electronic orbitals in the source and trap will overlap and a strong local fluctuation of the DOS appears due to hybridization. Then the energy of the trap depends less and less on the gate voltage. There is therefore a continuous evolution from the DOS model (which is a purely quantum mechanical model) to our electrostatic model when increasing the trap-electrode distance. The DOS model can explain other features which cannot be understood within the ES model \cite{Fasth}. First, the DOS model naturally explains negative differential conductance lines, as the sign of $\frac{d \nu_{S}}{dE}$ changes. On average one expects as many positive as negative differential conductance lines. The DOS model can also explain lines pointing to energies below the first available level in the dot.
Such lines cannot arise from the ES model, as shown in the first diamond in the right panel of Figure~\ref{spectre_discret}. Our model differs from the ES and DOS models in the three following respects. 1) Lines are shifted progressively from one diamond to the next, in contrast to both the ES and DOS models. 2) Lines coexist with sawtooth distortions (replicas) of diamonds at different gate voltages. 3) Negative differential conductance appears at finite bias voltage, together with an anomalously small current at low bias in distorted diamonds. Like the DOS model, it explains why lines can appear at energies extrapolated below the ground state of the artificial atom. Finally our model, unlike both the ES and DOS models, can explain lines not parallel to the diamond edges (if $\alpha_{t}$ is large) \cite{Cobden2,Zhong}. \section{Silicon quantum dot with an arsenic donor as a trap} \label{sec:3} \begin{figure} \resizebox{\columnwidth}{!}{% \includegraphics{figure6.eps} } \caption{(a) Schematic view of our MOS-SET. (b) Color plot of the drain differential conductance versus gate and drain voltages at T=350\,mK, which exhibits the same lines of conductance in successive diamonds with a small shift from diamond to diamond. These lines are explained with a purely electrostatic model involving a background charge. (c) Simulation with our model and the parameters given in the text. We obtain a line at the same position as in the experimental data, which evolves slowly with $V_g$ because the trap is weakly coupled to the gate.} \label{fig6} \end{figure} We originally developed our model for silicon nanowire transistors with implanted arsenic traps in the tunnel barriers \cite{hofheinz06}. Here we report new data recorded in similar samples which clearly show the correlations in successive undistorted Coulomb diamonds predicted by our model. Fig.~\ref{fig6}b shows a typical stability diagram with a pattern of differential conductance lines very weakly dependent on the number of carriers in the dot. This weakly shifted pattern is a characteristic signature of our model for traps much more strongly coupled to the source (or drain or dot) than to the gate. The samples are described in refs. \cite{hofheinz06B,hofheinz06,thesemax} and schematically drawn in Fig.~\ref{fig6}a. They are SOI-MOSFETs adapted in terms of doping to become controlled SETs at low temperature. A 20 to 80\,nm wide wire is etched to form the channel. The source and drain parts of the wire are highly doped to form good metallic reservoirs (As, $\simeq 10^{20}\,\mathrm{cm}^{-3}$). The central part of the wire is covered by a 40\,nm poly-Si gate electrode, isolated by SiO$_\mathrm{2}$, and self-aligned silicon nitride spacers (35\,nm long) are deposited on both sides of the gate (rounded walls in Fig.~\ref{fig6}a). The part of the wire below the gate and spacers (light gray regions in Fig.~\ref{fig6}a) is only lightly doped (As, $5\times 10^{17}\,\mathrm{cm}^{-3}$), so that at low temperature it forms an insulator. However, directly below the gate electrode it can be tuned into a metallic state by applying a positive gate voltage. That way a quantum dot is formed under the gate. The tunnel barriers are the low-doped parts of the wire adjacent to the dot. The arsenic dopants inside these tunnel barriers are weakly capacitively coupled to the gate and are the traps considered in our model. Well-centered donors give replicas of the diamonds \cite{hofheinz06}.
There is a gradient of arsenic concentration (typically one order of magnitude over a 5\,nm lateral distance in our process) at the border between the source/drain and the channel. Many arsenic donors are located close to this border and are therefore well coupled to the source or drain. Such donors have a small lever arm parameter $ \alpha_{t}$ and give undistorted diamonds with lines of differential conductance. Donors which are strongly coupled to the dot (due to electrostatic bending of the impurity band below the gate edges) produce the same effect. We performed our measurements at 350\,mK in a $^\mathrm{3}$He refrigerator and measured the differential conductance with a standard ac lock-in technique. At this temperature we do not expect to resolve the quantum levels in the dot. The mean energy level spacing $\Delta$ between quantum states is largest for small dots at low gate voltages, where only a 2D electron gas is formed at the surface of the channel. In this limit we expect $\Delta_\mathrm{2D} \sim \frac{2 \pi \hbar^2}{d m^* A} \sim 150\,\mu\mathrm{eV}$, with $d=4$ the spin and valley degeneracy, $m^*=0.19\,m_e$ the 2D effective mass, and $A \simeq 4000\,\mathrm{nm}^2$ the total surface area of the gate/wire overlap, including the flanks of the wire. As the dot gets filled, the electron gas eventually fills up the whole volume of the wire below the gate and $\Delta$ falls below $20\,\mu\mathrm{eV}$. Quantum levels can only be resolved when $\Delta$ is larger than the width of the resulting lines of differential conductance. These lines have a full width at half maximum of approximately $3.5\,k_\mathrm{B}T \sim 100\,\mu\mathrm{eV}$, given by the Fermi distribution in the leads. For the large gate voltages shown in Figure~\ref{fig6} we are in the high-density regime where $\Delta$ is too small to play a significant role. Therefore the sharp lines of differential conductance seen in Figure~\ref{fig6} cannot be explained within the ES model. The lines have a typical energy separation of 1\,meV, much larger than the calculated mean spacing of $20\,\mu\mathrm{eV}$. They are also observed at larger energies than expected in the DOS model. In our samples the reservoirs are highly doped silicon wires in which the mean level spacing is very small, and local DOS fluctuations have a correlation in energy set by the inelastic time, $\tau_{in} \simeq 1\,\mathrm{ns}$ (${\frac{h}{\tau_{in}}} \simeq 4\,\mu\mathrm{eV}$) at T=1\,K in heavily doped silicon \cite{heslinga}. In summary, unlike the ES and DOS models, our model explains quantitatively the weakly shifting lines between successive diamonds measured in Figure~\ref{fig6}b, with a trap located on the drain side of the dot. Figure~\ref{fig6}c shows the result of the simulation with $C_\mathrm{gt}$, $C_\mathrm{gd}$, $C_\mathrm{td}$, $C_\mathrm{drt}$, $C_\mathrm{d} = 0.006$, $13.3$, $0.4$, $0.046$, $53.3$\,aF. \section{Conclusions} \label{conclusion} We have extended a simple electrostatic model of a charge trap coupled to a dot to the case of very weak coupling to the gate. In this regime new features are predicted over a large gate voltage range. Near the degeneracy of the trap, the sawtooth pattern calculated in a previous work is recovered and the current suppression at low bias voltage is understood in more detail. We obtained new features far from this degeneracy, where Coulomb diamonds are not distorted.
Lines of differential conductance appear in the diamonds, very similar to the ones usually attributed to excited states, although our model does not involve a discrete spectrum for the dot. These differential conductance lines can be positive or negative, and parallel to either edge of the diamond. They also evolve very weakly with gate voltage, an original feature not predicted by other models. Our model easily accounts for negative differential conductance lines, a feature usually attributed to density-of-states fluctuations in the contacts. That model and ours converge when considering a trap located very close to the electrodes, where hybridization occurs and the coupling to the gate goes to zero. Although the most basic signature of our model is the sawtooth pattern and the associated current suppression, we emphasize that its experimental observation is not required to validate our charge trap scheme. Indeed, lines can very well be observed in undistorted diamonds while the degeneracy region remains outside the energy range which can be probed experimentally. Even though our model involves only a single trap occupied by zero or one electron and a dot treated in the orthodox model, it already gives a complicated pattern of lines and features in the stability diagram. It provides a quantitative but simple way to simulate the Coulomb blockade spectroscopy of quantum dots, and shows the great impact of a single charge on this spectrum. Further extensions, such as several traps, a resolved mean one-particle level spacing in the dot, a non-negligible current through the trap, double occupation of the trap, or the Zeeman effect on the trap energy, can be implemented in the near future. As more and more nanostructures designed for transport experiments exhibit Coulomb blockade, our model could account for many features observed experimentally, as the presence of charge traps is very realistic.
\section{Introduction} Cellular-connected unmanned aerial vehicles (UAVs) will be an integral component of future wireless networks, as evidenced by recent interest from academia, industry, and 3GPP standardization~\cite{3GPP_standards, Qualcomm_UAV, LTEintheSky, SkyNotLimit, coexistence_ground_aerial, christian}. Unlike current wireless UAV connectivity, which relies on short-range communication links (e.g., WiFi and Bluetooth), cellular-connected UAVs allow beyond-line-of-sight control, low latency, real-time communication, robust security, and ubiquitous coverage. Such \emph{cellular-connected UAV-user equipments (UEs)} will thus enable a myriad of applications ranging from real-time video streaming to surveillance. Nevertheless, the ability of UAV-UEs to establish line-of-sight (LoS) connectivity to cellular base stations (BSs) is both a blessing and a curse. On the one hand, it enables high-speed data access for the UAV-UEs. On the other hand, it can lead to substantial mutual inter-cell interference among the UAVs and to the ground users. As such, a wide-scale deployment of UAV-UEs is only possible if interference management challenges are addressed~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial}. While some literature has recently studied the use of UAVs as mobile BSs~\cite{U_globecom, ferryMessage, zhang_trajectory_power, path_planning_WCNC, mohammad_UAV, qingqing_UAV, chen2016caching}, the performance analysis of cellular-connected UAV-UEs (\emph{short-handed hereinafter as UAVs}) remains relatively scarce~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial, reshaping_cellular}. For instance, in~\cite{LTEintheSky}, the authors study the impact of UAVs on the uplink performance of a ground LTE network. Meanwhile, the work in~\cite{SkyNotLimit} uses measurements and ray tracing simulations to study the airborne connectivity requirements and propagation characteristics of UAVs. The authors in~\cite{coexistence_ground_aerial} analyze the coverage probability of the downlink of a cellular network that serves both aerial and ground users. In~\cite{reshaping_cellular}, the authors consider a network consisting of both ground and aerial UEs and derive closed-form expressions for the coverage probability of the ground and drone UEs. Nevertheless, this prior art is limited to studying the impact that cellular-connected UAVs have on the ground network. Indeed, the existing literature~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial, reshaping_cellular} does not provide any concrete solution for optimizing the performance of a cellular network that serves both aerial and ground UEs so as to overcome the interference challenge that arises in this context. UAV trajectory optimization is essential in such scenarios. An online path planning scheme that accounts for wireless metrics is vital and would, in essence, assist in addressing the aforementioned interference challenges, along with enabling new improvements in the design of the network, such as 3D frequency reuse. Such a path planning scheme allows the UAVs to adapt their movement based on the rate requirements of both aerial UAV-UEs and ground UEs, thus improving the overall network performance. The problem of UAV path planning has been studied mainly for non-UAV-UE applications~\cite{ferryMessage, zhang_trajectory_power, path_planning_WCNC, networked_camera}, with~\cite{path_cellular_UAVs} being the only work considering a cellular-connected UAV-UE scenario.
In~\cite{ferryMessage}, the authors propose a distributed path planning algorithm for multiple UAVs to deliver delay-sensitive information to different ad-hoc nodes. The authors in~\cite{zhang_trajectory_power} optimize a UAV's trajectory in an energy-efficient manner. The authors in~\cite{path_planning_WCNC} propose a mobility model that combines area coverage, network connectivity, and UAV energy constraints for path planning. In~\cite{networked_camera}, the authors propose a fog-networking-based system architecture to coordinate a network of UAVs for video services in sports events. However, despite being interesting, the body of work in~\cite{ferryMessage, zhang_trajectory_power, path_planning_WCNC} and~\cite{networked_camera} is restricted to UAVs acting as BSs and does not account for UAV-UEs and their associated interference challenges. Hence, the approaches proposed therein cannot readily be used for cellular-connected UAVs. On the other hand, the authors in~\cite{path_cellular_UAVs} propose a path planning scheme for minimizing the time required by a cellular-connected UAV to reach its destination. Nevertheless, this work is limited to one UAV and does not account for the interference that cellular-connected UAVs cause on the ground network during their mission. Moreover, the work in~\cite{path_cellular_UAVs} relies on offline optimization techniques that cannot adapt to the uncertainty and dynamics of a cellular network. The main contribution of this paper is a novel deep reinforcement learning (RL) framework based on echo state network (ESN) cells for optimizing the trajectories of multiple cellular-connected UAVs in an online manner. This framework allows cellular-connected UAVs to minimize the interference they cause on the ground network as well as their wireless transmission latency. To realize this, we propose a dynamic noncooperative game in which the players are the UAVs and the objective of each UAV is to \emph{autonomously} and \emph{jointly} learn its path, transmit power level, and association vector. In our proposed game, a UAV's cell association vector, trajectory, and transmit power level are closely coupled with one another, and their optimal values vary with the dynamics of the network. Therefore, a major challenge in this game is the need for each UAV to have full knowledge of the ground network topology, the ground UEs' service requirements, and the other UAVs' locations. Consequently, to solve this game, we propose a deep RL ESN-based algorithm, using which the UAVs can predict the dynamics of the network and subsequently determine their optimal paths as well as the allocation of their resources along these paths. Unlike previous studies, which are either centralized or rely on coordination among UAVs, our approach is based on a self-organizing path planning and resource allocation scheme. In essence, two important features of our proposed algorithm are \emph{adaptation} and \emph{generalization}. Indeed, UAVs can make decisions for \emph{unseen} network states, based on the rewards they received in previous states. This is mainly due to the use of ESN cells, which enable the UAVs to retain their previous memory states. We show that the proposed algorithm reaches a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, we derive upper and lower bounds on the UAVs' altitudes that guarantee a maximum interference level on the ground network and a maximum wireless transmission delay for each UAV.
To the best of our knowledge, \emph{this is the first work that exploits the framework of deep ESN for interference-aware path planning of cellular-connected UAVs}. Simulation results show that the proposed approach improves the tradeoff between energy efficiency, wireless latency, and the interference level caused on the ground network. Results also show that each UAV's altitude is a function of the ground network density and the UAV's objective function, and is an important factor in achieving the UAV's target. The rest of this paper is organized as follows. Section~\ref{system_model} presents the system model. Section~\ref{game} describes the proposed noncooperative game model. The deep RL ESN-based algorithm is proposed in Section~\ref{algorithm}. In Section~\ref{simulation}, simulation results are analyzed. Finally, conclusions are drawn in Section~\ref{conclusion}. \vspace{-0.5cm} \section{System Model}\label{system_model} Consider the uplink (UL) of a wireless cellular network composed of a set $\mathcal{S}$ of $S$ ground BSs, a set $\mathcal{Q}$ of $Q$ ground UEs, and a set $\mathcal{J}$ of $J$ cellular-connected UAVs. The UL is defined as the link from UE $q$ or UAV $j$ to BS $s$. Each BS $s \in \mathcal{S}$ serves a set $\mathcal{K}_s\subseteq\mathcal{Q}$ of $K_s$ UEs and a set $\mathcal{N}_s\subseteq\mathcal{J}$ of $N_s$ cellular-connected UAVs. The total system bandwidth, $B$, is divided into a set $\mathcal{C}$ of $C$ resource blocks (RBs). Each UAV $j\in \mathcal{N}_s$ is allocated a set $\mathcal{C}_{j,s}\subseteq\mathcal{C}$ of $C_{j,s}$ RBs and each UE $q\in \mathcal{K}_s$ is allocated a set $\mathcal{C}_{q,s}\subseteq\mathcal{C}$ of $C_{q,s}$ RBs by its serving BS $s$. At each BS $s$, a particular RB $c \in \mathcal{C}$ is allocated to \emph{at most} one UAV $j\in \mathcal{N}_s$ or UE $q\in \mathcal{K}_s$. An airborne Internet of Things (IoT) is considered, in which the UAVs are equipped with various IoT devices, such as cameras, sensors, and GPS receivers, that can be used for applications such as surveillance, monitoring, delivery, and real-time video streaming. The 3D coordinates of each UAV $j \in \mathcal{J}$ and each ground user $q \in \mathcal{Q}$ are, respectively, $(x_j, y_j, h_j)$ and $(x_q, y_q, 0)$. All UAVs are assumed to fly at a fixed altitude $h_j$ above the ground (as done in~\cite{zhang_trajectory_power, path_cellular_UAVs, relaying, optimization}) while the horizontal coordinates $(x_j, y_j)$ of each UAV $j$ vary in time. Each UAV $j$ needs to move from an initial location $o_j$ to a final destination $d_j$ while transmitting \emph{online} its mission-related data, such as sensor recordings, video streams, and location updates. We assume that the initial and final locations of each UAV are pre-determined based on its mission objectives. For ease of exposition, we consider a virtual grid for the mobility of the UAVs. We discretize the space into a set $\mathcal{A}$ of $A$ equally sized unit areas. The UAVs move along the centers of the areas $c_a=(x_a, y_a, z_a)$, which yields a finite set of possible paths $\boldsymbol{p}_j$ for each UAV $j$. The path $\boldsymbol{p}_j$ of each UAV $j$ is defined as a sequence of unit areas $\boldsymbol{p}_j=(a_1, a_2, \cdots, a_l)$ such that $a_1=o_j$ and $a_l=d_j$.
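To make this grid model concrete, the following minimal Python sketch represents the discretized space and a candidate path; the region and unit-area dimensions follow the simulation setup of Section~\ref{simulation}, while the speed and the specific path are placeholder values chosen for illustration only.
\begin{verbatim}
import numpy as np

# 800 m x 800 m region split into 40 m x 40 m unit areas
# (the dimensions used later in the simulations).
REGION, UNIT = 800.0, 40.0
N_SIDE = int(REGION // UNIT)      # unit areas per side
A = N_SIDE * N_SIDE               # total number of unit areas

def center(a):
    """Center c_a = (x_a, y_a) of unit area a (row-major index)."""
    row, col = divmod(a, N_SIDE)
    return np.array([(col + 0.5) * UNIT, (row + 0.5) * UNIT])

# A path p_j is a sequence of adjacent unit areas from o_j to d_j.
o_j, d_j = 0, 2 * N_SIDE + 2
p_j = [0, 1, 2, N_SIDE + 2, 2 * N_SIDE + 2]

# With a constant speed V_j, the travel time between two neighboring
# unit areas is the same everywhere on the grid.
V_j = 10.0                        # m/s (placeholder)
print(len(p_j) - 1, "steps,", (UNIT / V_j) * (len(p_j) - 1), "s")
\end{verbatim}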
The size of the discretized unit areas $(a_1, a_2, \cdots, a_A) \in \mathcal{A}$ is chosen to be sufficiently small such that the UAVs' locations can be assumed to be approximately constant within each area, even at the maximum UAV speed, as commonly done in the literature~\cite{relaying}. We assume a constant speed $0 < V_j \leq \widehat{V}_j$ for each UAV, where $\widehat{V}_j$ is the maximum speed of UAV $j$. Therefore, the time required by each UAV to travel between any two adjacent unit areas is constant. \vspace{-0.1cm} \subsection{Channel Models} \vspace{-0.1cm} We consider the sub-6 GHz band and the free-space path loss model for the UAV-BS data link. The path loss between UAV $j$ at location $a$ and BS $s$, $\xi_{j,s,a}$, is given by~\cite{hourani}: \begin{align} \xi_{j,s,a} (\mathrm{dB})= 20\ \mathrm{log}_{10} (d_{j,s,a}) + 20\ \mathrm{log}_{10} (\hat{f}) - 147.55, \end{align} \noindent where $\hat{f}$ is the system center frequency and $d_{j,s,a}$ is the Euclidean distance between UAV $j$ at location $a$ and BS $s$. We consider a Rician distribution for modeling the small-scale fading between UAV $j$ and ground BS $s$, thus accounting for the LoS component and the multipath scatterers that can be experienced at the BS. In particular, adopting the Rician channel model for the UAV-BS link is justified by the fact that the channel between a given UAV and a ground BS is mainly dominated by a LoS link~\cite{zhang_trajectory_power}. We assume that the Doppler spread due to the mobility of the UAVs is compensated for based on existing techniques, such as frequency synchronization using a phase-locked loop~\cite{mengali}, as done in~\cite{zhang_trajectory_power} and~\cite{relaying}. For the terrestrial UE-BS links, we consider a Rayleigh fading channel. For a carrier frequency, $\hat{f}$, of 2 GHz, the path loss between UE $q$ and BS $s$ is given by~\cite{pathloss_ground}: \begin{align} \zeta_{q,s}(\mathrm{dB}) = 15.3+37.6\ \mathrm{log}_{10}(d_{q,s}), \end{align} \noindent where $d_{q,s}$ is the Euclidean distance between UE $q$ and BS $s$. The average signal-to-interference-plus-noise ratio (SINR), $\Gamma_{j,s,c,a}$, of the UAV-BS link between UAV $j$ at location $a$ $(a \in \mathcal{A})$ and BS $s$ over RB $c$ is: \begin{align}\label{SNIR} \Gamma_{j,s,c,a}=\frac{P_{j,s,c,a} h_{j,s,c,a}}{I_{j,s,c}+B_c N_0}, \end{align} \noindent where $P_{j,s,c,a}=\widehat{P}_{j,s,a}/C_{j,s}$ is the transmit power of UAV $j$ at location $a$ to BS $s$ over RB $c$ and $\widehat{P}_{j,s,a}$ is the total transmit power of UAV $j$ to BS $s$ at location $a$. Here, the total transmit power of UAV $j$ is assumed to be distributed uniformly among all of its associated RBs. $h_{j,s,c,a}=g_{j,s,c,a}10^{-\xi_{j,s,a}/10}$ is the channel gain between UAV $j$ and BS $s$ on RB $c$ at location $a$, where $g_{j,s,c,a}$ is the Rician fading parameter. $N_0$ is the noise power spectral density and $B_{c}$ is the bandwidth of an RB $c$. $I_{j,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power experienced at BS $s$ on the transmission of UAV $j$ over RB $c$, where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) connected to the neighboring BSs $r$ and transmitting on the same RB $c$ as UAV $j$.
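To illustrate this link budget, the following Python sketch evaluates the free-space path loss, a Rician fading realization, and the per-RB SINR of (\ref{SNIR}); all numeric inputs are placeholders, and the aggregate interference is passed in as a single precomputed value rather than summed over neighboring cells as above.
\begin{verbatim}
import numpy as np

def uav_pathloss_db(d_m, f_hz):
    """Free-space path loss xi_{j,s,a} (in dB) for the UAV-BS link."""
    return 20 * np.log10(d_m) + 20 * np.log10(f_hz) - 147.55

def ue_pathloss_db(d_m):
    """Terrestrial path loss zeta_{q,s} (in dB) at a 2 GHz carrier."""
    return 15.3 + 37.6 * np.log10(d_m)

def per_rb_sinr(p_tot_w, n_rbs, g, pl_db, interference_w, b_rb_hz, n0_w_hz):
    """Gamma_{j,s,c,a}: equal power split over RBs, channel gain
    h = g * 10^(-xi/10), noise power B_c * N_0."""
    h = g * 10 ** (-pl_db / 10)
    return (p_tot_w / n_rbs) * h / (interference_w + b_rb_hz * n0_w_hz)

# Rician power gain with factor K (unit-mean LoS plus scattered part).
rng = np.random.default_rng(0)
K = 1.59
g = abs(np.sqrt(K / (K + 1))
        + np.sqrt(1 / (2 * (K + 1)))
        * (rng.standard_normal() + 1j * rng.standard_normal())) ** 2

# Placeholder numbers: 20 dBm over 4 RBs, a UAV 500 m from its serving
# BS, -174 dBm/Hz noise, 180 kHz RBs, 0.1 pW of aggregate interference.
pl = uav_pathloss_db(500.0, 2e9)
gamma = per_rb_sinr(0.1, 4, g, pl, 1e-13, 180e3, 10 ** (-174 / 10) / 1000)
print(f"per-RB SINR = {10 * np.log10(gamma):.1f} dB")
\end{verbatim}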
In the interference term above, $h_{k,s,c}=m_{k,s,c}10^{-\zeta_{k,s}/10}$ is the channel gain between UE $k$ and BS $s$ on RB $c$, where $m_{k,s,c}$ is the Rayleigh fading parameter. Therefore, the achievable data rate of UAV $j$ at location $a$ associated with BS $s$ can be defined as $R_{j,s,a}=\sum_{c=1}^{C_{j,s}} B_{c} \mathrm{log}_2(1+\Gamma_{j,s,c,a})$. Given the achievable data rate of UAV $j$, and modeling each UAV as an M/D/1 queueing system, the corresponding latency over the UAV-BS wireless link is given by~\cite{delay_book}: \begin{align}\label{delay_eqn} \tau_{j,s,a}=\frac{\lambda_{j,s}}{2\mu_{j,s,a}(\mu_{j,s,a}-\lambda_{j,s})}+\frac{1}{\mu_{j,s,a}}, \end{align} \noindent where $\lambda_{j,s}$ is the average packet arrival rate (packets/s) traversing link $(j,s)$ and originating from UAV $j$, and $\mu_{j,s,a}=R_{j,s,a}/\nu$ is the service rate over link $(j,s)$ at location $a$, where $\nu$ is the packet size. On the other hand, the achievable data rate for a ground UE $q$ served by BS $s$ is given by: \vspace{-0.2cm} \begin{align} R_{q,s}=\sum_{c=1}^{C_{q,s}}B_c\mathrm{log}_2\Big(1+\frac{P_{q,s,c}h_{q,s,c}}{I_{q,s,c}+B_cN_0}\Big), \end{align} \noindent where $h_{q,s,c}=m_{q,s,c}10^{-\zeta_{q,s}/10}$ is the channel gain between UE $q$ and BS $s$ on RB $c$ and $m_{q,s,c}$ is the Rayleigh fading parameter. $P_{q,s,c}=\widehat{P}_{q,s}/C_{q,s}$ is the transmit power of UE $q$ to its serving BS $s$ on RB $c$ and $\widehat{P}_{q,s}$ is the total transmit power of UE $q$. Here, we also consider equal power allocation among the allocated RBs for the ground UEs. $I_{q,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power experienced by UE $q$ at BS $s$ on RB $c$, where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) associated with the neighboring BSs $r$ and transmitting on the same RB $c$ as UE $q$. \subsection{Problem Formulation} Our objective is to find the optimal path for each UAV $j$ based on its mission objectives as well as its interference on the ground network. Thus, we seek to minimize: a) the interference level that each UAV causes on the ground UEs and on other UAVs, b) the transmission delay over the wireless link, and c) the time needed to reach the destination. To realize this, we optimize the paths of the UAVs jointly with the cell association vector and power control at each location $a \in \mathcal{A}$ along each UAV's path. We consider a directed graph $G_j=(\mathcal{V}, \mathcal{E}_j)$ for each UAV $j$, where $\mathcal{V}$ is the set of vertices corresponding to the centers of the unit areas $a \in \mathcal{A}$ and $\mathcal{E}_j$ is the set of edges formed along the path of UAV $j$. We let $\boldsymbol{\widehat{P}}$ be the transmission power vector, with each element $\widehat{P}_{j,s,a}\in[0, \overline{P}_{j}]$ being the transmission power level of UAV $j$ to its serving BS $s$ at location $a$, where $\overline{P}_{j}$ is the maximum transmission power of UAV $j$. $\boldsymbol{\alpha}$ is the path formation vector, with each element $\alpha_{j,a,b}\in\{0,1\}$ indicating whether or not a directed link is formed from area $a$ towards area $b$ for UAV $j$, i.e., whether UAV $j$ moves from $a$ to $b$ along its path.
$\boldsymbol{\beta}$ is the UAV-BS association vector with each element $\beta_{j,s,a}\in\{0,1\}$ denoting whether or not UAV $j$ is associated with BS $s$ at location $a$. Next, we present our optimization problem whose goal is to determine the path of each UAV along with its cell association vector and its transmit power level at each location $a$ along its path $\boldsymbol{p}_j$: \vspace{-0.4cm} \begin{multline}\label{obj} \hspace{-0.5cm}\min_{\boldsymbol{\widehat{P}}, \boldsymbol{\alpha}, \boldsymbol{\beta}}\vartheta\sum_{j=1}^{J}\sum_{s=1}^S \sum_{c=1}^{C_{j,s}}\sum_{a=1}^A \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a} h_{j,r,c,a}}{C_{j,s}}+\varpi \sum_{j=1}^J\sum_{a=1}^A\sum_{b=1, b\neq a}^A\alpha_{j,a,b} + \phi\sum_{j=1}^J\sum_{s=1}^S\sum_{a=1}^A\beta_{j,s,a}\tau_{j,s,a}, \end{multline} \vspace{-0.4cm} \begin{align}\label{cons_1} \sum_{b=1, b\neq a}^A\alpha_{j,b,a} \leq 1 \;\;\forall j\in \mathcal{J}, a\in \mathcal{A}, \end{align} \vspace{-0.45cm} \begin{align}\label{cons_2} \sum_{a=1, a\neq o_j}^A\alpha_{j,o_j,a} \textrm{=} 1 \;\;\forall j\in \mathcal{J}, \sum_{a=1, a\neq d_j}^A\alpha_{j,a,d_j} \textrm{=} 1 \;\;\forall j\in \mathcal{J}, \end{align} \vspace{-0.4cm} \begin{align}\label{cons_3} \sum_{a\textrm{=}1, a\neq b}^A\alpha_{j,a,b}-\sum_{f\textrm{=}1, f\neq b}^A\alpha_{j,b,f}\textrm{=} 0 \;\forall j\in \mathcal{J},b\in \mathcal{A} \;(b\neq o_j, b\neq d_j), \end{align} \vspace{-0.5cm} \begin{align}\label{cons_4} \widehat{P}_{j,s,a}\geq\sum_{b=1, b\neq a}^A\alpha_{j,b,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.74cm} \begin{align}\label{cons_44} \widehat{P}_{j,s,a}\geq\beta_{j,s,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.75cm} \begin{align}\label{cons_6} \sum_{s=1}^S \beta_{j,s,a} - \sum_{b=1, b\neq a}^A\alpha_{j,b,a}=0\;\;\;\forall j\in \mathcal{J}, a\in A, \end{align} \vspace{-0.37cm} \begin{align}\label{cons_7} \sum_{c=1}^{C_{j,s}}\Gamma_{j,s,c,a}\geq\beta_{j,s,a}\overline{\Gamma}_j \;\;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.68cm} \begin{align}\label{cons_8} 0\leq \widehat{P}_{j,s,a}\leq \overline{P}_{j} \;\;\forall j\in \mathcal{J}\textrm{,} s\in \mathcal{S}\textrm{,} \; a\in \mathcal{A}, \end{align} \vspace{-1.03cm} \begin{align}\label{cons_9} \alpha_{j\textrm{,}a\textrm{,}b}\in\{0\textrm{,}1\}\textrm{,}\; \beta_{j\textrm{,}s\textrm{,}a}\in\{0\textrm{,}1\} \;\;\forall j\in \mathcal{J}\textrm{,}\; s\in \mathcal{S}\textrm{,} \;\;a,b\in \mathcal{A}\textrm{.} \end{align} The objective function in (\ref{obj}) captures the total interference level that the UAVs cause on neighboring BSs along their paths, the length of the paths of the UAVs, and their wireless transmission delay. $\vartheta$, $\varpi$ and $\phi$ are multi-objective weights used to control the tradeoff between the three considered metrics. These weights can be adjusted to meet the requirements of each UAV's mission. For instance, the time to reach the destination is critical in search and rescue applications while the latency is important for online video streaming applications. (\ref{cons_1}) guarantees that each area $a$ is visited by UAV $j$ at most once along its path $\boldsymbol{p}_j$. (\ref{cons_2}) guarantees that the trajectory of each UAV $j$ starts at its initial location $o_j$ and ends at its final destination $d_j$. (\ref{cons_3}) guarantees that if UAV $j$ visits area $b$, it should also leave from area $b$ $(b\neq o_j, b\neq d_j)$. 
(\ref{cons_4}) and (\ref{cons_44}) guarantee that UAV $j$ transmits to BS $s$ at area $a$ with power $\widehat{P}_{j,s,a}>0$ only if UAV $j$ visits area $a$, i.e., $a\in \boldsymbol{p}_j$, and is associated with BS $s$ at location $a$. (\ref{cons_6}) guarantees that each UAV $j$ is associated with exactly one BS $s$ at each location $a$ along its path $\boldsymbol{p}_j$. (\ref{cons_7}) guarantees a minimum threshold, $\overline{\Gamma}_j$, on the total SINR $\sum_{c=1}^{C_{j,s}}\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ at each location $a$, $a\in \boldsymbol{p}_j$. This, in turn, ensures successful decoding of the transmitted packets at the serving BS. The value of $\overline{\Gamma}_j$ is application and mission specific. Note that checking the SINR at each location $a$ is valid for our problem since we consider small-sized unit areas. (\ref{cons_8}) and (\ref{cons_9}) are the feasibility constraints. The formulated optimization problem is a mixed-integer nonlinear program, which is computationally complex to solve for large networks. To address this challenge, we adopt a distributed approach in which each UAV decides autonomously on its next path location along with its corresponding transmit power and association vector. In fact, a centralized approach requires control signals to be transmitted to the UAVs at all times. This might incur high round-trip latencies that are not desirable for real-time applications such as online video streaming. Further, a centralized approach requires a central entity to have full knowledge of the current state of the network and the ability to communicate with all UAVs at all times. However, this might not be feasible in cases where the UAVs belong to different operators or where the environment changes dynamically. Therefore, we next propose a distributed approach that allows each UAV $j$ to learn its path $\boldsymbol{p}_j$, along with its transmission power level and association vector at each location $a$ along this path, in an autonomous and online manner. \section{Towards a Self-Organizing Network of an Airborne Internet of Things}\label{game} \subsection{Game-Theoretic Formulation} Our objective is to develop a distributed approach that allows each UAV to take actions in an autonomous and online manner. For this purpose, we model the multi-agent path planning problem as a finite dynamic noncooperative game $\mathcal{G}$ with perfect information~\cite{walid_book}. Formally, we define the game as $\mathcal{G}=(\mathcal{J}, \mathcal{T}, \mathcal{Z}_j, \mathcal{V}_j, \Pi_j, u_j)$, with the set $\mathcal{J}$ of UAVs being the agents. $\mathcal{T}$ is a finite set of stages, which correspond to the steps required for all UAVs to reach their sought destinations. $\mathcal{Z}_j$ is the set of actions that can be taken by UAV $j$ at each $t \in \mathcal{T}$, $\mathcal{V}_j$ is the set of all network states observed by UAV $j$ up to stage $T$, $\Pi_j$ is a set of probability distributions defined over all $z_j \in \mathcal{Z}_j$, and $u_j$ is the payoff function of UAV $j$. At each stage $t \in \mathcal{T}$, the UAVs take actions simultaneously. In particular, each UAV $j$ aims at determining the path $\boldsymbol{p}_j$ to its destination along with its optimal transmission power and cell association vector for each location $a \in \mathcal{A}$ along this path.
Therefore, at each $t$, UAV $j$ chooses an action $\boldsymbol{z}_j(t) \in \mathcal{Z}_j$ composed of the tuple $\boldsymbol{z}_j(t)=(\boldsymbol{a}_j(t), \widehat{P}_{j,s,a}(t), \boldsymbol{\beta}_{j,s,a}(t))$, where $\boldsymbol{a}_j(t) \in \{$left, right, forward, backward, no movement$\}$ corresponds to a fixed step size, $\widetilde{a}_j$, in a given direction, $\widehat{P}_{j,s,a}(t) \in \{\widehat{P}_{1}, \widehat{P}_{2}, \cdots, \widehat{P}_{O}\}$ is chosen from $O$ different maximum transmit power levels for each UAV $j$, and $\boldsymbol{\beta}_{j,s,a}(t)$ is the UAV-BS association vector. For each UAV $j$, let $\mathcal{L}_j$ be the set of its $L_j$ nearest BSs. The network state observed by UAV $j$ at stage $t$, $\boldsymbol{v}_j(t) \in \mathcal{V}_j$, is: \begin{align}\label{input} \boldsymbol{v}_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{,} \end{align} \noindent where $\delta_{j,l,a}(t)$ is the Euclidean distance from UAV $j$ at location $a$ to BS $l$ at stage $t$, $\theta_{j,l,a}$ is the orientation angle in the $xy$-plane from UAV $j$ at location $a$ to BS $l$, defined as $\mathrm{tan}^{-1}(\Delta y_{j,l}/\Delta x_{j,l})$~\cite{orientation_angle}, where $\Delta y_{j,l}$ and $\Delta x_{j,l}$ are the differences in the $y$ and $x$ coordinates, respectively, of UAV $j$ and BS $l$, $\theta_{j,d_j,a}$ is the orientation angle in the $xy$-plane from UAV $j$ at location $a$ to its destination $d_j$, defined as $\mathrm{tan}^{-1}(\Delta y_{j,d_j}/\Delta x_{j,d_j})$, and $\{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}}$ are the horizontal coordinates of all UAVs at stage $t$. For our model, we consider different range intervals for mapping the orientation angle and distance values, respectively, into different discrete states.
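The following Python sketch shows one way a UAV could assemble and discretize this observation vector; the interval edges and the choice of $L_j=2$ nearest BSs are illustrative assumptions rather than values mandated by the model.
\begin{verbatim}
import numpy as np

def orientation(dx, dy):
    """Orientation angle in the xy-plane, tan^{-1}(dy/dx), via atan2."""
    return np.arctan2(dy, dx)

def observed_state(uav_xy, bs_xy, dest_xy, all_uav_xy,
                   dist_edges, angle_edges):
    """Assemble v_j(t): quantized (distance, angle) to the L_j nearest
    BSs, quantized angle to the destination, all UAVs' coordinates."""
    d = np.linalg.norm(bs_xy - uav_xy, axis=1)
    nearest = np.argsort(d)[:2]          # L_j = 2 (illustrative choice)
    feats = []
    for l in nearest:
        ang = orientation(*(bs_xy[l] - uav_xy))
        feats += [np.digitize(d[l], dist_edges),     # distance state
                  np.digitize(ang, angle_edges)]     # angle state
    feats.append(np.digitize(orientation(*(dest_xy - uav_xy)), angle_edges))
    return np.concatenate([feats, all_uav_xy.ravel()])

# Placeholder geometry: one UAV, three BSs, destination at (700, 700).
uav = np.array([100.0, 60.0])
bss = np.array([[200.0, 80.0], [500.0, 400.0], [60.0, 700.0]])
v = observed_state(uav, bss, np.array([700.0, 700.0]), uav[None, :],
                   dist_edges=np.array([100.0, 300.0, 600.0]),
                   angle_edges=np.linspace(-np.pi, np.pi, 9)[1:-1])
print(v)
\end{verbatim}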
Moreover, based on the optimization problem defined in (\ref{obj})-(\ref{cons_9}), and by incorporating the SINR constraint~(\ref{cons_7}) into the utility function definition through the Lagrangian penalty method, the resulting utility function for UAV $j$ at stage $t$, $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$, is given by \vspace{-0.6cm} \begin{align}\label{utility_t} u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))\textrm{=} \begin{cases} \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{+} C\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)<\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)=\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{-} C \textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)>\delta_{j,d_j,a'}(t-1)\textrm{,} \end{cases} \end{align} \vspace{-0.3cm} \noindent where $\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is defined as: \vspace{-0.1cm} \begin{multline} \hspace{-0.3cm}\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{=}-\vartheta' \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)} - \phi'\tau_{j,s,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \\- \varsigma (\mathrm{min}(0, \sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))-\overline{\Gamma}_j))^2, \end{multline} \noindent subject to (\ref{cons_1})-(\ref{cons_6}), (\ref{cons_8}) and (\ref{cons_9}). $\varsigma$ is the penalty coefficient for (\ref{cons_7}) and $C$ is a constant parameter. $a$ and $a'$ are the locations of UAV $j$ at stages $t$ and $(t-1)$, respectively, and $\delta_{j,d_j,a}$ is the distance between UAV $j$ and its destination $d_j$. It is worth noting here that the action space of each UAV $j$ and, thus, the complexity of the proposed game $\mathcal{G}$ increase exponentially when the UAVs update their 3D coordinates. Nevertheless, each UAV's altitude must be bounded in order to guarantee an SINR threshold for the UAV and a minimum achievable data rate for the ground UEs. Next, we derive upper and lower bounds for the optimal altitude of any given UAV $j$ based on the proposed utility function in (\ref{utility_t}). In essence, such bounds are valid for all values of the multi-objective weights $\vartheta '$, $\phi '$, and $\varsigma$.
\begin{theorem}\label{theorem_altitude} \emph{For all values of $\vartheta '$, $\phi '$, and $\varsigma$, a given network state $\boldsymbol{v}_j(t)$, and a particular action $\boldsymbol{z}_j(t)$, the upper and lower bounds for the altitude of UAV $j$ are, respectively, given by:} \begin{align} h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))), \end{align} \vspace{-0.9cm} \begin{align} h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))), \end{align} \noindent \emph{where $\chi$ corresponds to the minimum altitude at which a UAV can fly. $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$} \emph{are expressed as:} \vspace{-0.3cm} \begin{multline} \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2}, \end{multline} \emph{and} \begin{align} \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)), \end{align} \emph{where $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is the minimum altitude at which UAV $j$ should operate with respect to a particular neighboring BS $r$, and is expressed as:} \begin{align} \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2}. \end{align} \end{theorem} \begin{proof} See Appendix A. \end{proof} From the above theorem, we can deduce that the optimal altitude of the UAVs is a function of their objective function, the locations of the ground BSs, the network design parameters, and the interference level from other UEs and UAVs in the network. Therefore, at each time step $t$, UAV $j$ would adjust its altitude level based on the values of $h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, thus adapting to the dynamics of the network. In essence, the derived upper and lower bounds for the optimal altitude of the UAVs allow a reduction of the action space of game $\mathcal{G}$, thus simplifying the process needed for the UAVs to find a solution, i.e., an equilibrium, of the game.
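As a rough numerical illustration of Theorem~\ref{theorem_altitude}, the sketch below evaluates the upper altitude bound $\hat{h}_j^{\mathrm{max}}$ for placeholder inputs; here $\hat{c}$ is the speed of light, and the per-RB fading gains and interference values are assumed rather than drawn from the simulations.
\begin{verbatim}
import numpy as np

C_LIGHT = 3e8  # \hat{c}: speed of light (m/s)

def h_max(p_hat, n_rbs, gamma_bar, f_hz, g, interference_w,
          b_rb_hz, n0_w_hz, dx, dy, chi=0.0):
    """Upper altitude bound of Theorem 1. g and interference_w are
    per-RB arrays of fading gains and interference powers (assumed)."""
    scale = p_hat / (n_rbs * gamma_bar * (4 * np.pi * f_hz / C_LIGHT) ** 2)
    rb_sum = np.sum(g / (interference_w + b_rb_hz * n0_w_hz))
    inside = scale * rb_sum - dx ** 2 - dy ** 2
    if inside <= 0:          # the SINR target is infeasible at any altitude
        return chi
    return max(chi, float(np.sqrt(inside)))

# Placeholder scenario: 20 dBm over 4 RBs, -3 dB SINR target, 2 GHz
# carrier, and a UAV offset (200 m, 100 m) from its serving BS.
g = np.ones(4)               # unit-mean fading gains
I = np.full(4, 1e-13)        # per-RB interference (W)
print(h_max(0.1, 4, 10 ** (-3 / 10), 2e9, g, I, 180e3,
            10 ** (-174 / 10) / 1000, dx=200.0, dy=100.0, chi=100.0))
\end{verbatim}
Next, we analyze the equilibrium point of the proposed game $\mathcal{G}$.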
\vspace{-0.3cm} \subsection{Equilibrium Analysis} \vspace{-0.1cm} For our game $\mathcal{G}$, we are interested in studying the subgame perfect Nash equilibrium (SPNE) in behavioral strategies. An SPNE is a profile of strategies which induces a Nash equilibrium (NE) on every subgame of the original game. Moreover, a \emph{behavioral strategy} allows each UAV to assign, independently at each network state, a probability distribution over its set of actions. Here, note that there always exists at least one SPNE for any finite-horizon extensive game with perfect information (Selten's theorem)~\cite{SPNE_existence}. Let $\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))=(\pi_{j,z_1}(\boldsymbol{v}_j(t)), \pi_{j,z_2}(\boldsymbol{v}_j(t)), \cdots, \pi_{j,z_{\mid \mathcal{Z}_j\mid}}(\boldsymbol{v}_j(t))) \in \Pi_j$ be the behavioral strategy of UAV $j$ at state $\boldsymbol{v}_j(t)$ and let $\Delta (\mathcal{Z})$ be the set of all probability distributions over the action space $\mathcal{Z}$. Next, we define the notion of an SPNE. \begin{definition}\emph{A behavioral strategy $(\boldsymbol{\pi}^*_1(\boldsymbol{v}_j(t))\textrm{,} \cdots\textrm{,} \boldsymbol{\pi}_J^*(\boldsymbol{v}_j(t))) = (\boldsymbol{\pi}_j^*(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$ constitutes a} subgame perfect Nash equilibrium \emph{if, $\forall j \in \mathcal{J}$, $\forall t \in \mathcal{T}$ and $\forall \boldsymbol{\pi}_j(\boldsymbol{v}_j(t)) \in \Delta (\mathcal{Z})$, $\overline{u}_j(\boldsymbol{\pi}^*_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))\geq \overline{u}_j(\boldsymbol{\pi}_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$.} \end{definition} Therefore, at each state $\boldsymbol{v}_j(t)$ and stage $t$, the goal of each UAV $j$ is to maximize its expected sum of discounted rewards, which is computed as the summation of the immediate reward for a given state along with the expected discounted utility of the next states: \vspace{-0.4cm} \begin{multline}\label{expected_utility} \overline{u}_j(\boldsymbol{v}_j(t), \boldsymbol{\pi}_j(\boldsymbol{v}_j(t))\textrm{,} \boldsymbol{\pi}_{\textrm{-}j}(\boldsymbol{v}_j(t)))=\mathds{E}_{\boldsymbol{\pi}_j(t)}\left\{\sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l))| \boldsymbol{v}_{j,0}=\boldsymbol{v}_j\right\}\\ \textrm{=}\sum_{\boldsymbol{z}\in\mathcal{Z}} \sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l)) \prod_{j'=1}^J \pi_{j'\textrm{,}z_{j'}}(\boldsymbol{v}_{j'}(t+l))\textrm{,} \end{multline} \vspace{-0.2cm} \noindent where $\gamma \in (0, 1)$ is a discount factor for delayed rewards and $\mathds{E}_{\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))}$ denotes an expectation over trajectories of states and actions, in which actions are selected according to $\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))$. Here, $u_j$ is the short-term reward for being in state $\boldsymbol{v}_j$ and $\overline{u}_j$ is the expected long-term total reward from state $\boldsymbol{v}_j$ onwards. Note that the UAV's cell association vector, trajectory optimization, and transmit power level are closely coupled with each other, and their corresponding optimal values vary based on the UAVs' objectives.
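To make these quantities concrete, a minimal Python sketch of the per-stage utility in (\ref{utility_t}) and of the discounted return in (\ref{expected_utility}), evaluated along a single sampled trajectory with placeholder weights and values, is given below.
\begin{verbatim}
import numpy as np

def phi_j(interference_w, latency_s, sinr_sum, gamma_bar,
          theta=1.0, phi_w=1.0, varsigma=1.0):
    """Phi: weighted interference and latency costs plus the Lagrangian
    penalty that activates when the SINR sum falls below the threshold."""
    penalty = min(0.0, sinr_sum - gamma_bar) ** 2
    return -theta * interference_w - phi_w * latency_s - varsigma * penalty

def utility(phi_val, dist_now, dist_prev, C=10.0):
    """u_j: Phi shaped by +/- C according to progress toward d_j."""
    if dist_now < dist_prev:
        return phi_val + C
    if dist_now > dist_prev:
        return phi_val - C
    return phi_val

def discounted_return(rewards, gamma=0.7):
    """Sum of discounted rewards along one sampled trajectory."""
    return float(np.sum(gamma ** np.arange(len(rewards)) * np.array(rewards)))

# Placeholder trajectory of three stages: the UAV first approaches its
# destination (distance 100 -> 90 -> 80 m) and then backtracks to 85 m.
u = [utility(phi_j(1e-12, 0.006, 0.8, 0.5), d_now, d_prev)
     for d_prev, d_now in [(100.0, 90.0), (90.0, 80.0), (80.0, 85.0)]]
print(discounted_return(u))
\end{verbatim}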
In a multi-UAV network, each UAV must have full knowledge of the future reward functions at each information set, and thus for all future network states, in order to find the SPNE. This, in turn, necessitates knowledge of all possible future actions of all UAVs in the network and becomes challenging as the number of UAVs increases. To address this challenge, we rely on deep recurrent neural networks (RNNs)~\cite{RNN_survey}. In essence, RNNs exhibit dynamic temporal behavior and are characterized by an adaptive memory that enables them to store necessary previous state information to predict future actions. Deep neural networks, on the other hand, are capable of dealing with large datasets. Therefore, we next develop a novel deep RL framework based on ESNs, a special kind of RNN, for learning an SPNE of our game $\mathcal{G}$. \section{Deep Reinforcement Learning for Online Path Planning and Resource Management}\label{algorithm} In this section, we first introduce a deep ESN-based architecture that allows the UAVs to store previous states whenever needed while being able to learn future network states. Then, we propose an RL algorithm based on this architecture to learn an SPNE of the proposed game. \subsection{Deep ESN Architecture}\label{ESN_architecture} ESNs are a type of RNN with feedback connections that belong to the family of reservoir computing (RC)~\cite{RNN_survey}. An ESN is composed of an input weight matrix $\boldsymbol{W}_{\mathrm{in}}$, a recurrent matrix $\boldsymbol{W}$, and an output weight matrix $\boldsymbol{W}_{\mathrm{out}}$. Because only the output weights are trained, ESN training is typically fast and computationally efficient compared to training other RNNs. Moreover, multiple non-linear reservoir layers can be stacked on top of each other, resulting in a \emph{deep ESN architecture}. Deep ESNs exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction while preserving the RC training efficiency. They can learn data representations at different levels of abstraction, hence disentangling the difficulties in modeling complex tasks by representing them hierarchically in terms of simpler ones. Let $N_{j,R}^{(n)}$ be the number of internal units of the reservoir of UAV $j$ at layer $n$, $N_{j,U}$ be the external input dimension of UAV $j$, and $N_{j,L}$ be the number of layers in the stack for UAV $j$. Next, we define the following ESN components: \begin{itemize} \item $\boldsymbol{v}_j(t) \in \mathds{R}^{N_{j,U}}$ as the external input of UAV $j$ at stage $t$, which effectively corresponds to the current network state, \item $\boldsymbol{x}^{(n)}_j(t) \in \mathds{R}^{N_{j,R}^{(n)}}$ as the state of the reservoir of UAV $j$ at layer $n$ at stage $t$, \item $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ as the input-to-reservoir matrix of UAV $j$ at layer $n$, where $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,U}}$ for $n=1$, and $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n-1)}}$ for $n>1$, \item $\boldsymbol{W}_j^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n)}}$ as the recurrent reservoir weight matrix for UAV $j$ at layer $n$, \item $\boldsymbol{W}_{j, \mathrm{out}} \in \mathds{R}^{\mid\mathcal{Z}_j\mid \times (N_{j,U}+\sum_{n}N_{j,R}^{(n)})}$ as the reservoir-to-output matrix of UAV $j$, which acts on the concatenation of the external input and the reservoir states of all layers.
\end{itemize} The objective of the deep ESN architecture is to approximate a function $\boldsymbol{F}_j=(F_j^{(1)}, F_j^{(2)}, \cdots, F_j^{(N_{j,L})})$ for learning an SPNE for each UAV $j$ at each stage $t$. For each $n=1, 2, \cdots, N_{j,L}$, the function $F_j^{(n)}$ describes the evolution of the state of the reservoir at layer $n$, i.e., $\boldsymbol{x}_{j}^{(n)}(t)=F_j^{(n)}(\boldsymbol{v}_j(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n=1$ and $\boldsymbol{x}_{j}^{(n)}(t)=F_j^{(n)}(\boldsymbol{x}_j^{(n-1)}(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n>1$. $\boldsymbol{W}_{j, \mathrm{out}}$ and $\boldsymbol{x}^{(n)}_j(t)$ are initialized to zero, while $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ and $\boldsymbol{W}_j^{(n)}$ are randomly generated. Note that, although the dynamic reservoir is initially generated randomly, it is later combined with the external input, $\boldsymbol{v}_j(t)$, in order to store the network states, and with the trained output matrix, $\boldsymbol{W}_{j, \mathrm{out}}$, so that it can approximate the reward function. Moreover, the spectral radius of $\boldsymbol{W}_j^{(n)}$ (i.e., its largest eigenvalue in absolute value), $\rho_j^{(n)}$, must be strictly smaller than 1 to guarantee the stability of the reservoir~\cite{echo_state_property}. In fact, the value of $\rho_j^{(n)}$ is related to the variable memory length of the reservoir that enables the proposed deep ESN framework to store necessary previous state information, with larger values of $\rho_j^{(n)}$ resulting in a longer memory. We next define the remaining deep ESN components: the input and reward functions. For the deep ESN of each UAV $j$, we distinguish between two types of inputs: the external input, $\boldsymbol{v}_j(t)$, which is fed to the first layer of the deep ESN and corresponds to the current state of the network, and the input that is fed to all other layers $n>1$. For our proposed deep ESN, the input to any layer $n>1$ at stage $t$ corresponds to the state of the previous layer, $\boldsymbol{x}_j^{(n-1)}(t)$. Define $\widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))= u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \prod_{j'=1}^J \pi_{j',z_{j'}}(\boldsymbol{v}_{j'}(t))$ as the expected value of the instantaneous utility function $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$ in (\ref{utility_t}) for UAV $j$ at stage $t$.
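To fix ideas, the following Python sketch implements a two-layer leaky-reservoir deep ESN with a linear readout and a simple temporal-difference update of $\boldsymbol{W}_{j,\mathrm{out}}$, anticipating the update equations of the next subsection; all dimensions and hyperparameters are placeholders.
\begin{verbatim}
import numpy as np

class DeepESN:
    """Minimal two-layer deep ESN: fixed random reservoirs with spectral
    radius < 1, leaky-integrator states, and a trainable linear readout."""

    def __init__(self, n_in, n_res, n_actions, rho=0.9, leak=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.leak = leak
        self.W_in = [rng.uniform(-1, 1, (n_res, n_in)),
                     rng.uniform(-1, 1, (n_res, n_res))]
        self.W = []
        for _ in range(2):
            w = rng.uniform(-0.5, 0.5, (n_res, n_res))
            w *= rho / max(abs(np.linalg.eigvals(w)))  # enforce rho < 1
            self.W.append(w)
        self.x = [np.zeros(n_res), np.zeros(n_res)]
        # Readout maps [v; x^(1); x^(2)] to one value per action.
        self.W_out = np.zeros((n_actions, n_in + 2 * n_res))

    def step(self, v):
        """Leaky-integrator updates; layer 2 is driven by layer 1."""
        inp = v
        for n in range(2):
            pre = self.W_in[n] @ inp + self.W[n] @ self.x[n]
            self.x[n] = (1 - self.leak) * self.x[n] + self.leak * np.tanh(pre)
            inp = self.x[n]
        self.z = np.concatenate([v] + self.x)   # extended system state
        return self.W_out @ self.z              # y_j: one value per action

    def td_update(self, action, target, lr=0.01):
        """Gradient step moving the readout toward the TD target."""
        err = target - self.W_out[action] @ self.z
        self.W_out[action] += lr * err * self.z

# Placeholder usage: a 9-dimensional state and 25 joint actions
# (movement direction x transmit power level, as an illustration).
esn = DeepESN(n_in=9, n_res=12, n_actions=25)
y = esn.step(np.random.default_rng(1).standard_normal(9))
esn.td_update(action=int(np.argmax(y)), target=1.0)
\end{verbatim}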
Given this expected utility, the reward that UAV $j$ obtains from taking action $\boldsymbol{z}_j$ at a given network state $\boldsymbol{v}_j(t)$ is: \vspace{-0.32cm} \begin{multline}\label{reward} r_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \textrm{=} \begin{cases} \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t)) \textrm{,} \; \mathrm{if\; UAV} \;j\; \mathrm{reaches} \; d_j\textrm{,}\\ \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t))\textrm{+}\gamma \mathrm{max}_{\boldsymbol{z}_j \in \mathcal{Z}_j} \boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t \textrm{+} 1)\textrm{,} t \textrm{+}1) \\ \hspace{0.4 cm} [\boldsymbol{v}'_j(t), \boldsymbol{x}'^{(1)}_j(t), \boldsymbol{x}'^{(2)}_j(t), \cdots, \boldsymbol{x}'^{(n)}_j(t)]\textrm{,} \; \mathrm{otherwise}\textrm{.} \end{cases}\raisetag{3\baselineskip} \end{multline} \noindent Here, the primed quantities $\boldsymbol{v}'_j$ and $\boldsymbol{x}'^{(n)}_j$ correspond, respectively, to the next network state and the reservoir state of layer $(n)$ at stage $(t+1)$, upon taking actions $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ at stage $t$. Fig.~\ref{Deep_ESN} shows the proposed reservoir architecture of the deep ESN, consisting of two layers. \begin{figure}[t!] \begin{center} \centering \vspace{-0.1cm} \includegraphics[width=13cm]{figures/Deep_ESN} \vspace{-0.5cm} \caption{Proposed Deep ESN architecture.}\label{Deep_ESN} \vspace{-0.7cm} \end{center} \end{figure} \subsection{Update Rule Based on Deep ESN} We now introduce the deep ESN's update phase that each UAV uses to store and estimate the reward function of each path and resource allocation scheme at a given stage $t$. In particular, we consider leaky integrator reservoir units~\cite{leaky_integrator} for updating the state transition functions $\boldsymbol{x}^{(n)}_j(t)$ at stage $t$. Therefore, the state transition function of the first layer, $\boldsymbol{x}^{(1)}_j(t)$, is: \begin{align}\label{state_1} \boldsymbol{x}^{(1)}_j(t)= (1-\omega_j^{(1)})\boldsymbol{x}_j^{(1)}(t-1)+\omega_j^{(1)}\mathrm{tanh}(\boldsymbol{W}_{j, \mathrm{in}}^{(1)}\boldsymbol{v}_j(t)+\boldsymbol{W}_j^{(1)}\boldsymbol{x}_j^{(1)}(t-1)), \end{align} \noindent where $\omega_j^{(n)} \in [0, 1]$ is the leaking parameter at layer $n$ for UAV $j$, which relates to the speed of the reservoir dynamics in response to the input, with larger values of $\omega_j^{(n)}$ resulting in a faster response of the corresponding $n$-th reservoir to the input. The state transition of UAV $j$, $\boldsymbol{x}^{(n)}_j(t)$, for $n>1$ is given by: \begin{align}\label{state_n} \boldsymbol{x}^{(n)}_j(t)= (1-\omega_j^{(n)})\boldsymbol{x}_j^{(n)}(t-1)+\omega_j^{(n)}\mathrm{tanh}(\boldsymbol{W}_{j,\mathrm{in}}^{(n)}\boldsymbol{x}_j^{(n-1)}(t)+\boldsymbol{W}_j^{(n)}\boldsymbol{x}_j^{(n)}(t-1)). \end{align} The output $y_j(t)$ of the deep ESN at stage $t$ is used to estimate the reward of each UAV $j$ based on the currently adopted actions $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ of UAV $j$ and of the other UAVs $(-j)$, respectively, for the current network state $\boldsymbol{v}_j(t)$, after training $\boldsymbol{W}_{j, \mathrm{out}}$. It can be computed as: \begin{align}\label{output} y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))=\boldsymbol{W}_{j, \mathrm{out}}(\boldsymbol{z}_j(t), t) [\boldsymbol{v}_j(t), \boldsymbol{x}^{(1)}_j(t), \boldsymbol{x}^{(2)}_j(t), \cdots, \boldsymbol{x}^{(n)}_j(t)].
\end{align} We adopt a temporal difference RL approach for training the output matrix $\boldsymbol{W}_{j, \mathrm{out}}$ of the deep ESN architecture. In particular, we employ a linear gradient descent approach using the reward error signal, given by the following update rule~\cite{RL_ESN}: \begin{multline}\label{W_out} \hspace{-0.2cm}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t\textrm{+}1)\textrm{=}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t)\textrm{+}\lambda_j ( r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) -y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))) [\boldsymbol{v}_j(t)\textrm{,} \\ \boldsymbol{x}^{(1)}_j(t)\textrm{,} \boldsymbol{x}^{(2)}_j(t)\textrm{,} \cdots\textrm{,} \boldsymbol{x}^{(n)}_j(t)]^T\textrm{.} \end{multline} Here, note that the objective of each UAV is to minimize the value of the error function $e_j(\boldsymbol{v}_j(t))= \left| r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) - y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))\right|$. \subsection{Proposed Deep RL Algorithm} Based on the proposed deep ESN architecture and update rule, we next introduce a multi-agent deep RL framework that the UAVs can use to learn an SPNE in behavioral strategies for the game $\mathcal{G}$. The algorithm is divided into two phases: \emph{training and testing}. In the former, the UAVs are trained offline, before they become active in the network, using the architecture of Subsection~\ref{ESN_architecture}. The latter corresponds to the actual execution of the algorithm, once the weights $\boldsymbol{W}_{j, \mathrm{out}}, \forall j \in \mathcal{J}$, have been optimized, and is implemented on each UAV for execution at run time. \begin{algorithm}[t!] \scriptsize \caption{Training phase of the proposed deep RL algorithm} \label{training_algorithm} \begin{algorithmic}[t!] \STATE \textbf{Initialization:}\\ $\boldsymbol{\pi}_{j,z_j}(\boldsymbol{v}_j(t))=\frac{1}{\mid \mathcal{Z}_j\mid} \forall t\in \mathcal{T}, z_j \in \mathcal{Z}_j$, $y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_{j}(t))=0$, $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$, $\boldsymbol{W}_{j}^{(n)}$, $\boldsymbol{W}_{j, \mathrm{out}}$. \\ \vspace{0.2cm} \FOR {the number of training iterations} \WHILE{at least one UAV $j$ has not reached its destination $d_j$} \vspace{0.05cm} \FOR{all UAVs $j$ (in a parallel fashion)} \STATE \textbf{Input:} Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\ref{input}). \STATE \textbf{Step 1: Action selection}\\ Each UAV $j$ selects a random action $\boldsymbol{z}_j(t)$ with probability $\epsilon$;\\ otherwise, UAV $j$ selects $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$.
\STATE \textbf{Step 2: Location, cell association and transmit power update}\\ Each UAV $j$ updates its location, cell association, and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\\ \STATE \textbf{Step 3: Reward computation}\\ Each UAV $j$ computes its reward value based on (\ref{reward}).\\ \STATE \textbf{Step 4: Action broadcast}\\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\\ \STATE \textbf{Step 5: Deep ESN update}\\ - Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}).\\ - Each UAV $j$ computes its output $y_j\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$ based on (\ref{output}).\\ - The weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\ref{W_out}).\\ \ENDFOR \ENDWHILE \ENDFOR \end{algorithmic} \end{algorithm} During the training phase, each UAV aims at optimizing its output weight matrix $\boldsymbol{W}_{j\textrm{,} \mathrm{out}}$ such that the value of the error function $e_j(\boldsymbol{v}_j(t))$ at each stage $t$ is minimized. In particular, the training phase is composed of multiple iterations, each consisting of multiple rounds, i.e., the number of steps required for all UAVs to reach their corresponding destinations $d_j$. At each round, the UAVs face a tradeoff between playing the action associated with the highest expected utility and trying out all their actions to improve their estimates of the reward function in (\ref{reward}). This corresponds to the classical exploration-exploitation tradeoff, in which UAVs need to strike a balance between exploring their environment and exploiting the knowledge accumulated through such exploration~\cite{sutton}. Therefore, we adopt the $\epsilon$-greedy policy, in which UAVs choose the action that yields the maximum utility value with probability $1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}$ while randomly exploring each other action with probability $\frac{\epsilon}{\mid \mathcal{Z}_j\mid}$. The strategy over the action space is thus: \vspace{-0.1cm} \begin{align} \pi_{j,z_j}(\boldsymbol{v}_j(t))= \begin{cases} 1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{if}\; z_j = \mathrm{argmax}_{z'_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_j(t), \boldsymbol{z}'_{j}(t) \right),\\ \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{otherwise}. \end{cases} \end{align} Based on the selected action $\boldsymbol{z}_j(t)$, each UAV $j$ updates its location, cell association, and transmission power level, and computes its reward function according to (\ref{reward}). To determine the next network state, each UAV $j$ broadcasts its selected action to all other UAVs in the network. Then, each UAV $j$ updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture according to (\ref{state_1}) and (\ref{state_n}). The output $y_j$ at stage $t$ is then updated based on (\ref{output}). Finally, the weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\ref{W_out}). Note that a UAV stops taking actions once it has reached its destination. A summary of the training phase is given in Algorithm~\ref{training_algorithm}. \begin{algorithm}[t!]
\scriptsize \caption{Testing phase of the proposed deep RL algorithm} \label{testing_algorithm} \begin{algorithmic}[t!] \vspace{0.2cm} \WHILE{at least one UAV $j$ has not reached its destination $d_j$} \vspace{0.05cm} \FOR{all UAVs $j$ (in a parallel fashion)} \STATE \textbf{Input:} Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\ref{input}). \STATE \textbf{Step 1: Action selection}\\ Each UAV $j$ selects an action $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$. \STATE \textbf{Step 2: Location, cell association and transmit power update}\\ Each UAV $j$ updates its location, cell association, and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\\ \STATE \textbf{Step 3: Action broadcast}\\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\\ \STATE \textbf{Step 4: State transition vector update}\\ Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}).\\ \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Meanwhile, the testing phase corresponds to the actual execution of the algorithm. In this phase, each UAV chooses its action greedily for each state $\boldsymbol{v}_j(t)$, i.e., $\mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))$, and updates its location, cell association, and transmission power level accordingly. Each UAV then broadcasts its selected action and updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $n$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}). A summary of the testing phase is given in Algorithm~\ref{testing_algorithm}. It is important to note that analytically guaranteeing the convergence of the proposed deep learning algorithm is challenging, as convergence is highly dependent on the hyperparameters used during the training phase. For instance, using too few neurons in the hidden layers results in underfitting, which can make it hard for the neural network to detect the signals in a complicated data set. On the other hand, using too many neurons in the hidden layers can result either in overfitting or in an increase in the training time that makes training impractical. Overfitting corresponds to the case when the model learns the random fluctuations and noise in the training data set to the extent that this negatively impacts the model's ability to generalize when fed with new data. Therefore, in this work, we limit our analysis of convergence to simulation results (see Section~\ref{simulation}) showing that, under a reasonable choice of the hyperparameters, convergence is observed for our proposed game. In such cases, it is important to study the convergence point and the convergence complexity of our proposed algorithm. Next, we characterize the convergence point of our proposed algorithm. \begin{proposition} \emph{If Algorithm~\ref{training_algorithm} converges, then the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$.} \end{proposition} \begin{proof} An SPNE is a strategy profile that induces a Nash equilibrium on every subgame. Therefore, at the equilibrium state of each subgame, there is no incentive for any UAV to deviate after observing any history of joint actions.
Moreover, given that an ESN framework exhibits an adaptive memory that enables it to store necessary previous state information, the UAVs can essentially retain the other players' actions at each stage $t$ and take actions accordingly. To show that our proposed scheme guarantees convergence to an SPNE, we use the following lemma from~\cite{SPNE_existence}. \begin{lemma} For our proposed game $\mathcal{G}$, the payoff functions in (\ref{reward}) are bounded, and the number of players, the state space, and the action space are finite. Therefore, $\mathcal{G}$ is a finite game and hence an SPNE exists. This follows from Selten's theorem, which states that every finite extensive-form game with perfect recall possesses an SPNE in which the players use behavioral strategies. \end{lemma} Here, it is important to note that, for finite dynamic games of perfect information, any backward induction solution is an SPNE~\cite{walid_book}. Therefore, given that, in our proposed game $\mathcal{G}$, each UAV aims at maximizing its expected sum of \emph{discounted rewards} at each stage $t$, as given in (\ref{reward}), one can guarantee that the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$. This completes the proof. \end{proof} Moreover, it is important to note that the convergence complexity of the proposed deep RL algorithm for reaching an SPNE is $O(J \times A^2)$. Next, we analyze the computational complexity of the proposed deep RL algorithm for practical scenarios in which the number of UAVs is relatively small. \begin{theorem}\label{proposition_complexity} \emph{For practical network scenarios, the computational complexity of the proposed deep RL training algorithm is $O(A^3)$, and it reduces to $O(A^2)$ when considering a fixed altitude for the UAVs, where $A$ is the number of discretized unit areas.} \end{theorem} \begin{proof} Consider the case in which the UAVs can move with a fixed step size in a 3D space. For such scenarios, the state vector $\boldsymbol{v}'_j(t)$ is defined as: \begin{align}\label{input_3D} \boldsymbol{v}'_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\textrm{,} h_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{.} \end{align} For each state $\boldsymbol{v}'_j(t)$, the action of UAV $j$ is a function of the locations, transmission power levels, and cell association vectors of all other UAVs in the network. Nevertheless, the number of possible locations of the other UAVs in the network is much larger than the number of their possible transmission power levels and the size of their cell association vectors. Therefore, one can consider only the number of possible locations of the other UAVs when analyzing the convergence complexity of the proposed training algorithm. Moreover, for practical scenarios, the total number of UAVs in a given area is relatively small compared to the number of discretized unit areas, i.e., $J \ll A$ (in line with the 3GPP admission control policy for cellular-connected UAVs~\cite{3GPP_standards}). Therefore, given that the UAVs take actions in a parallel fashion, the computational complexity of our proposed algorithm is $O(A^3)$ when the UAVs update their $x$, $y$, and $z$ coordinates, and it reduces to $O(A^2)$ when considering fixed altitudes for the UAVs. This completes the proof.
\end{proof} From Theorem \ref{proposition_complexity}, we can conclude that the convergence time of the proposed training algorithm is significantly reduced when considering a fixed altitude for the UAVs. This is essentially due to the reduction of the state space dimension when only the $x$ and $y$ coordinates are updated. It is important to note here that there exists a tradeoff between the computational complexity of the proposed training algorithm and the resulting network performance. In essence, updating the 3D coordinates of the UAVs at each step $t$ allows the UAVs to better explore the space, thus providing more opportunities for maximizing their corresponding utility functions. Therefore, from both Theorems~\ref{proposition_complexity} and~\ref{theorem_altitude}, the UAVs can update only their $x$ and $y$ coordinates during the learning phase while operating within the upper and lower altitude bounds derived in Theorem~\ref{theorem_altitude}. \section{Simulation Results and Analysis}\label{simulation} \begin{table}[t!] \scriptsize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \caption[table]{\scriptsize{\\SYSTEM PARAMETERS}}\label{parameters} \centering \tabcolsep=0.06cm \scalebox{0.99}{ \begin{tabular}{|c|c|c|c|} \hline \textbf{Parameters} & \textbf{Values} & \textbf{Parameters} & \textbf{Values} \\ \hline UAV max transmit power $(\overline{P}_j)$ & 20 dBm & SINR threshold $(\overline{\Gamma}_j)$ & -3 dB \\ \hline UE transmit power $(\widehat{P}_q)$ & 20 dBm & Learning rate $(\lambda_j)$ & 0.01\\ \hline Noise power spectral density $(N_0)$ & -174 dBm/Hz & RB bandwidth $(B_c)$& 180 kHz\\ \hline Total bandwidth $(B)$ & 20 MHz & \# of interferers $(L)$ & 2\\ \hline Packet arrival rate $(\lambda_{j,s})$ & (0,1) & Packet size $(\nu)$ & 2000 bits\\ \hline Carrier frequency $(\hat{f})$ & 2 GHz & Discount factor $(\gamma)$ & 0.7 \\ \hline \# of hidden layers & 2 & Step size $(\widetilde{a}_j)$ & 40 m \\ \hline Leaky parameter/layer $(\omega_j^{(n)})$ & 0.99, 0.99 & $\epsilon$ & 0.3\\ \hline \end{tabular} } \vspace{-0.24cm} \end{table} For our simulations, we consider an 800 m $\times$ 800 m square area divided into 40 m $\times$ 40 m grid areas, in which we deploy 15 BSs uniformly at random. All statistical results are averaged over several independent testing iterations, during which the initial locations and destinations of the UAVs and the locations of the BSs and the ground UEs are randomized. The maximum transmit power of each UAV is discretized into 5 equally spaced levels. We consider an uncorrelated Rician fading channel with parameter $\widehat{K}=1.59$~\cite{rician_fading}. The external input of the deep ESN architecture, $\boldsymbol{v}_j(t)$, is a function of the number of UAVs, and thus the number of hidden nodes per layer, $N_{j,R}^{(n)}$, varies with the number of UAVs. For instance, $N_{j,R}^{(n)}= 12$ and $6$ for $n=1$ and $2$, respectively, for a network of 1 or 2 UAVs, and 20 and 10 for a network of 3, 4, or 5 UAVs. Table~\ref{parameters} summarizes the main simulation parameters.
\begin{figure}[!t]
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=10cm]{figures/maximum_altitude}
\caption{}
\label{maximum_altitude}
\end{subfigure}\\
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=10cm]{figures/minimum_altitude}
\caption{}
\label{minimum_altitude}
\end{subfigure}
\vspace{-0.3cm}
\caption{The (a) upper bound for the optimal altitude of the UAVs as a function of the SINR threshold value $(\bar{\Gamma})$ and for different transmit power levels and ground network densities, and (b) lower bound for the optimal altitude of the UAVs as a function of the interference threshold value $(\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a})$ and for different transmit power levels.}\label{altitude_results}
\vspace{-0.2cm}
\end{figure}
Fig.~\ref{maximum_altitude} shows the upper bound for the optimal altitude of UAV $j$ as a function of the SINR threshold value, $\bar{\Gamma}$, and for different transmit power levels and ground network densities, based on Theorem~\ref{theorem_altitude}. On the other hand, Fig.~\ref{minimum_altitude} shows the lower bound for the optimal altitude of UAV $j$ as a function of the interference threshold value, $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$, and for different transmit power levels, based on Theorem~\ref{theorem_altitude}. From Figs.~\ref{maximum_altitude} and~\ref{minimum_altitude}, we can deduce that the optimal altitude range of a given UAV is a function of the network design parameters, the ground network data requirements, the density of the ground network, and the UAV's state $\boldsymbol{v}_j(t)$ and action $\boldsymbol{z}_j(t)$. For instance, the upper bound on the UAV's optimal altitude decreases as $\bar{\Gamma}$ increases, while its lower bound decreases as $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$ increases. Moreover, the maximum altitude of the UAV decreases as the ground network gets denser, while its lower bound increases as the ground network data requirements increase; in the latter case, a UAV should thus operate at higher altitudes. A UAV should also operate at higher altitudes when its transmit power level increases, due to the increase in both the lower and upper bounds of its optimal altitude.
\vspace{-0.1cm}
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm]{figures/snapshot}
\vspace{-0.4cm}
\caption{Path of a UAV for our approach and shortest path scheme.}\label{snapshot}
\vspace{-0.5cm}
\end{center}
\end{figure}
\begin{table}[t!]\footnotesize
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovedisplayskip}{3pt}
\captionsetup{belowskip=0pt}
\setlength{\abovecaptionskip}{2pt}
\renewcommand{\captionlabelfont}{\small}
\caption[table]{\scriptsize{\\Performance assessment for one UAV}}\label{snapshot_table}
\centering
\tabcolsep=0.1cm
\scalebox{0.99}{
\begin{tabular}{|c|c|c|c|}
\hline
 & \# of steps & delay (ms) & average rate per UE (Mbps) \\ \hline
Proposed approach & 32 & 6.5 & 0.95\\ \hline
Shortest path & 32 & 12.2 & 0.76\\ \hline
\end{tabular}
}
\vspace{-0.24cm}
\end{table}
Fig.~\ref{snapshot} shows a snapshot of the path of a single UAV resulting from our approach and from a shortest path scheme. Unlike our proposed scheme, which accounts for other wireless metrics during path planning, the objective of the UAVs in the shortest path scheme is to reach their destinations with the minimum number of steps. Table~\ref{snapshot_table} presents the performance results for the paths shown in Fig.~\ref{snapshot}.
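The shortest path baseline can be made concrete with the following minimal sketch (our own illustration; the baseline simply minimizes the number of steps on the virtual grid, which a breadth-first search over the four movement directions achieves):
\begin{verbatim}
# Illustrative shortest-path baseline on the n x n virtual grid (BFS).
from collections import deque

def shortest_path(origin, dest, n=20):
    """Return a minimum-step path of grid cells from origin to dest."""
    parent = {origin: None}
    queue = deque([origin])
    while queue:
        cell = queue.popleft()
        if cell == dest:
            break
        r, c = cell
        for nxt in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    path, cell = [], dest
    while cell is not None:   # backtrack from destination to origin
        path.append(cell)
        cell = parent[cell]
    return path[::-1]

print(len(shortest_path((0, 0), (3, 2))) - 1)  # 5 steps = |dr| + |dc|
\end{verbatim}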
From Fig.~\ref{snapshot}, we can see that, for our proposed approach, the UAV selects a path away from the densely deployed area while maintaining proximity to its serving BS, in a way that minimizes the number of steps required to reach its destination. This path minimizes both the interference level that the UAV causes on the ground UEs and its wireless latency (Table~\ref{snapshot_table}). From Table~\ref{snapshot_table}, we can see that our proposed approach achieves a 25\% increase in the average rate per ground UE and a 47\% decrease in the wireless latency compared to the shortest path scheme, while requiring the same number of steps for the UAV to reach its destination.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/scalability}
\vspace{-0.4cm}
\caption{Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE as compared to the shortest path approach, for different numbers of UAVs.}\label{scalability}
\vspace{-0.5cm}
\end{center}
\end{figure}
\begin{table}[t!]\footnotesize
\setlength{\belowcaptionskip}{0pt}
\setlength{\abovedisplayskip}{3pt}
\captionsetup{belowskip=0pt}
\setlength{\abovecaptionskip}{2pt}
\renewcommand{\captionlabelfont}{\small}
\captionsetup{justification=centering}
\caption[table]{\scriptsize{\\The required number of steps for all UAVs to reach their corresponding destinations based on our proposed approach and that of the shortest path scheme for different numbers of UAVs}}\label{steps_table}
\centering
\tabcolsep=0.1cm
\scalebox{0.99}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\# of steps & 1 UAV & 2 UAVs & 3 UAVs & 4 UAVs & 5 UAVs\\ \hline
Proposed approach & 4 & 4 & 6 & 7 & 8\\ \hline
Shortest path & 4 & 4 & 6 & 6 & 7\\ \hline
\end{tabular}
}
\vspace{-0.24cm}
\end{table}
Fig.~\ref{scalability} compares the average values of the (a) wireless latency per UAV and (b) rate per ground UE resulting from our proposed approach and from the baseline shortest path scheme. Moreover, Table~\ref{steps_table} compares the number of steps required by all UAVs to reach their corresponding destinations for the scenarios presented in Fig.~\ref{scalability}. From Fig.~\ref{scalability} and Table~\ref{steps_table}, we can see that, compared to the shortest path scheme, our approach achieves a lower wireless latency per UAV and a higher rate per ground UE for different numbers of UAVs, while requiring a number of steps that is comparable to the baseline. In fact, our scheme provides a better tradeoff between energy efficiency, wireless latency, and ground UE data rate than the shortest path scheme. For instance, for 5 UAVs, our scheme achieves a 37\% increase in the average achievable rate per ground UE, a 62\% decrease in the average wireless latency per UAV, and a 14\% increase in energy efficiency. Indeed, one can adjust the multi-objective weights of our utility function based on several parameters, such as the rate requirements of the ground network, the power limitations of the UAVs, and the maximum tolerable wireless latency of the UAVs. Moreover, Fig.~\ref{scalability} shows that, as the number of UAVs increases, the average delay per UAV increases and the average rate per ground UE decreases for all schemes. This is due to the increase in the interference level on the ground UEs and the other UAVs, which results from the LoS links between the UAVs and the BSs.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/altitude}
\vspace{-0.4cm}
\caption{Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for different altitudes of the UAVs.}\label{altitude}
\vspace{-0.9cm}
\end{center}
\end{figure}
Fig.~\ref{altitude} studies the effect of the UAVs' altitude on the average values of the (a) wireless latency per UAV and (b) rate per ground UE for different utility functions. From Fig.~\ref{altitude}, we can see that, as the altitude of the UAVs increases, the average wireless latency per UAV increases for all studied utility functions. This is mainly due to the increase in the distance between the UAVs and their corresponding serving BSs, which accentuates the path loss effect. Moreover, higher UAV altitudes result in a higher average data rate per ground UE for all studied utility functions, mainly due to the decrease in the interference level that the UAVs cause on neighboring BSs. Hence, there exists a tradeoff between minimizing the average wireless delay per UAV and maximizing the average data rate per ground UE. Therefore, alongside the multi-objective weights, the altitude of the UAVs can be varied such that the ground UE rate requirements are met while the wireless latency of each UAV is minimized based on its mission objective.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/power_densification}
\vspace{-0.4cm}
\caption{Effect of the ground network densification on the average transmit power level of the UAVs along their paths.}\label{power_densification}
\vspace{-0.9cm}
\end{center}
\end{figure}
Fig.~\ref{power_densification} shows the average transmit power level per UAV along its path as a function of the number of BSs, considering two utility functions: one that minimizes the average wireless latency of each UAV and one that minimizes the interference level on the ground UEs. From Fig.~\ref{power_densification}, we can see that network densification has a clear impact on the transmit power level of the UAVs. For instance, when minimizing the wireless latency of each UAV along its path, the average transmit power level per UAV increases from 0.04 W to 0.06 W as the number of ground BSs increases from 10 to 30. In essence, this increase in the transmit power level results from the increase in the interference level from the ground UEs as the ground network becomes denser. As a result, the UAVs transmit with a larger power level so as to minimize their wireless transmission delay. On the other hand, the average transmit power level per UAV decreases from 0.036 W to 0.029 W when minimizing the interference level caused on neighboring BSs. This is due to the fact that, as the number of BSs increases, the interference level caused by each UAV on the ground network increases, thus requiring each UAV to decrease its transmit power level. Note that, when minimizing the wireless latency, the average transmit power per UAV is always larger than in the case of minimizing the interference level, irrespective of the number of ground BSs. Therefore, the transmit power level of the UAVs is a function of their mission objective and of the number of ground BSs.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/densification}
\vspace{-0.4cm}
\caption{Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for a fixed altitude of 120 m.}\label{densification}
\vspace{-0.7cm}
\end{center}
\end{figure}
Fig.~\ref{densification} presents the (a) wireless latency per UAV and (b) rate per ground UE for different utilities as a function of the number of BSs and for a fixed altitude of 120 m. From this figure, we can see that, as the ground network becomes denser, the average wireless latency per UAV increases and the average rate per ground UE decreases for all considered cases. For instance, when the objective is to minimize the interference level along with the energy efficiency, the average wireless latency per UAV increases from 13 ms to 47 ms and the average rate per ground UE decreases from 0.86 Mbps to 0.48 Mbps as the number of BSs increases from 10 to 30. This is due to the fact that a denser network results in higher interference on the UAVs as well as on the other UEs in the network.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/densification_altitude}
\vspace{-0.4cm}
\caption{Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for various altitudes of the UAVs.}\label{densification_altitude}
\vspace{-0.7cm}
\end{center}
\end{figure}
Fig.~\ref{densification_altitude} investigates the (a) wireless latency per UAV and (b) rate per ground UE for different values of the UAVs' altitude and as a function of the number of BSs. From this figure, we can see that, as the UAV altitude increases and/or the ground network becomes denser, the average wireless latency per UAV increases. For instance, the delay increases by 27\% as the altitude of the UAVs increases from 120 m to 240 m for a network of 20 BSs, and increases by 120\% as the number of BSs increases from 10 to 30 for a fixed altitude of 180 m. This essentially follows from Theorem~\ref{theorem_altitude} and the results in Fig.~\ref{maximum_altitude}, which show that the maximum altitude of the UAV decreases as the ground network gets denser; thus, the UAVs should operate at a lower altitude when the number of BSs increases from 10 to 30. Moreover, the average rate per ground UE decreases as the ground network becomes denser, due to the increase in the interference level, and increases with the altitude of the UAVs. Therefore, the resulting network performance depends highly on both the UAVs' altitude and the number of BSs in the network. For instance, in the case of a dense ground network, the UAVs need to fly at a lower altitude for applications in which the wireless transmission latency is critical, and at a higher altitude in scenarios in which a minimum achievable data rate for the ground UEs is required.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/interferers}
\vspace{-0.4cm}
\caption{The average rate per ground UE as a function of the number of interferer BSs in the state definition $(L_j)$.}\label{interferers}
\vspace{-0.9cm}
\end{center}
\end{figure}
Fig.~\ref{interferers} shows the effect of varying the number of nearest BSs $(L_j)$ in the observed network state of UAV $j$, $\boldsymbol{v}_j(t)$, on the average data rate per ground UE for different utility functions. From Fig.~\ref{interferers}, we can see an improvement in the average rate per ground UE as the number of nearest BSs in the state definition increases. For instance, in scenarios in which the UAVs aim at minimizing the interference level they cause on the ground network along their paths, the average rate per ground UE increases by 28\% as the number of BSs in the state definition increases from 1 to 5. This gain results from the fact that, as $L_j$ increases, the UAVs get a better sense of their surrounding environment and can thus better select their next location such that the interference level they cause on the ground network is minimized. It is important to note here that, as $L_j$ increases, the size of the external input $\boldsymbol{v}_j$ increases, thus requiring a larger number of neurons in each layer. This, in turn, increases the number of iterations required for convergence. Therefore, a tradeoff exists between improving the performance of the ground UEs and the running complexity of the proposed algorithm.
\begin{figure}[t!]
\begin{center}
\centering
\includegraphics[width=11cm, scale=1.9]{figures/learning_rate}
\vspace{-0.31cm}
\caption{Effect of the learning rate on the convergence of offline training.}\label{learning_rate}
\vspace{-0.9cm}
\end{center}
\end{figure}
Fig.~\ref{learning_rate} shows the average of the error function $e_j(\boldsymbol{v}_j(t))$ resulting from the offline training phase, plotted every 20 iterations, for different values of the learning rate $\lambda$. The learning rate determines the step size the algorithm takes towards the optimal solution and thus impacts the convergence rate of our proposed framework. From Fig.~\ref{learning_rate}, we can see that small values of the learning rate, e.g., $\lambda=0.0001$, result in a slow speed of convergence. On the other hand, for large values of the learning rate, such as $\lambda=0.1$, the error function decays quickly during the first few iterations but then remains constant. Here, $\lambda=0.1$ does not lead to convergence during the testing phase, while $\lambda=0.0001$ and $\lambda=0.01$ both result in convergence, though requiring different numbers of training iterations. In fact, a large learning rate can cause the algorithm to diverge from the optimal solution: a large initial learning rate decays the loss function quickly early on, but can make the model get stuck in a particular region of the optimization space instead of exploring it further. Clearly, our framework achieves its best performance for $\lambda=0.01$, compared to smaller and larger values of the learning rate. We also note that the error function does not reach zero during the training phase. This is due to the fact that, in our approach, we adopt the early stopping technique to avoid overfitting, which occurs when the training error decreases at the expense of an increase in the test error~\cite{RNN_survey}.
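The early-stopping rule referred to above can be sketched as follows (a generic illustration; \texttt{train\_epoch} and \texttt{validation\_error} are hypothetical placeholders for one output-weight update pass and the held-out error, respectively):
\begin{verbatim}
# Generic early-stopping loop (illustrative; callbacks are placeholders).
def train_with_early_stopping(train_epoch, validation_error,
                              max_epochs=500, patience=10):
    best_err, best_epoch, waited = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_epoch()                # one pass of output-weight updates
        err = validation_error()     # error on held-out network states
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:   # stop once the validation error has
                break                # not improved for `patience` epochs
    return best_epoch, best_err
\end{verbatim}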
\section{Conclusion}\label{conclusion}
\vspace{-0.1cm}
In this paper, we have proposed a novel interference-aware path planning scheme that allows cellular-connected UAVs to minimize the interference they cause on the ground network as well as their wireless transmission latency while transmitting online mission-related data. We have formulated the problem as a noncooperative game in which the UAVs are the players. To solve the game, we have proposed a deep RL algorithm based on ESN cells, which is guaranteed to yield an SPNE if it converges. The proposed algorithm enables each UAV to decide on its next location, transmission power level, and cell association vector in an autonomous manner, thus adapting to the changes in the network. Simulation results have shown that the proposed approach achieves a lower wireless latency per UAV and a higher rate per ground UE while requiring a number of steps that is comparable to the shortest path scheme. The results have also shown that a UAV's altitude plays a vital role in minimizing both the interference level on the ground UEs and the wireless transmission delay of the UAV. In particular, we have shown that the altitude of the UAV is a function of the ground network density, the UAV's objective, and the actions of the other UAVs in the network.
\section*{Appendix}
\subsection{Proof of Theorem \ref{theorem_altitude}}
For a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the upper bound for the altitude of UAV $j$ can be derived when UAV $j$ aims at minimizing its delay function only, i.e., $\vartheta '=0$. For such scenarios, UAV $j$ should guarantee a minimum threshold, $\overline{\Gamma}_j$, for the SINR value $\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ on RB $c$ at location $a$, as given in constraint (\ref{cons_7}). Therefore, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\overline{\Gamma}_j$ and beyond which (\ref{cons_7}) is violated. The expression of $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is derived as follows:
\begin{align}
\sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a} = \overline{\Gamma}_j,
\end{align}
\begin{align}
\sum_{c=1}^{C_{j,s}(t)} \frac{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)}\cdot g_{j,s,c,a}(t)}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2 \cdot (I_{j,s,c}(t)+B_cN_0)}= \overline{\Gamma}_j,
\end{align}
\begin{align}
\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)} \frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} = \overline{\Gamma}_j,
\end{align}
\begin{align}
(d_{j,s,a}^{\mathrm{max}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0},
\end{align}
\noindent where $d_{j,s,a}$ is the Euclidean distance between UAV $j$ and its serving BS $s$ at location $a$.
Assuming that the altitude of BS $s$ is negligible, i.e., $z_s=0$, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ can be expressed as:
\begin{multline}
\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2},
\end{multline}
\noindent where $x_s$ and $y_s$ correspond to the $x$ and $y$ coordinates of the serving BS $s$ and $\hat{c}$ is the speed of light. On the other hand, for a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the lower bound for the altitude of UAV $j$ can be derived when the objective function of UAV $j$ is to minimize the interference level it causes on the ground network only, i.e., $\phi '=0$ and $\varsigma=0$. For such scenarios, the interference level that UAV $j$ causes on a neighboring BS $r$ at location $a$ should not exceed a predefined value given by $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$\footnote{$\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ is a network design parameter that is a function of the ground network density, the number of UAVs in the network, and the data rate requirements of the ground UEs. The value of $\bar{I}_{j,r,c,a}$ is in fact part of the admission control policy, which limits the number of UAVs in the network and their corresponding interference level on the ground network~\cite{3GPP_standards}.}. Therefore, $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ and below which the interference it causes on BS $r$ exceeds this value. The expression of $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is derived as follows:
\begin{align}
\sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)}= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a},
\end{align}
\begin{align}\label{all_interferers}
\sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2 }= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a}.
\end{align}
To find $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, we need to solve (\ref{all_interferers}) for each neighboring BS $r$ separately.
Therefore, for a particular neighboring BS $r$, (\ref{all_interferers}) can be written as:
\begin{align}
\sum_{c=1}^{C_{j,s}(t)} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2}= \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a},
\end{align}
\begin{align}
\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2} = \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a},
\end{align}
\begin{align}
(d_{j,r,a}^{\mathrm{min}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}},
\end{align}
\noindent where $d_{j,r,a}$ is the Euclidean distance between UAV $j$ and the neighboring BS $r$ at location $a$. Assuming that the altitude of BS $r$ is negligible, i.e., $z_r=0$, we have:
\begin{align}
\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2},
\end{align}
\noindent where $x_r$ and $y_r$ correspond to the $x$ and $y$ coordinates of neighboring BS $r$. Therefore, $\hat{h}_{j}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the maximum value of $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ among all neighboring BSs $r$ and is expressed as:
\begin{align}
\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)).
\end{align}
This completes the proof.
\def\baselinestretch{0.92}
\bibliographystyle{IEEEtran}
\section{Introduction}
Cellular-connected unmanned aerial vehicles (UAVs) will be an integral component of future wireless networks, as evidenced by recent interest from academia, industry, and 3GPP standardization efforts~\cite{3GPP_standards, Qualcomm_UAV, LTEintheSky, SkyNotLimit, coexistence_ground_aerial, christian}. Unlike current wireless UAV connectivity, which relies on short-range communication links (e.g., WiFi and Bluetooth), cellular connectivity allows beyond-line-of-sight control, low-latency and real-time communication, robust security, and ubiquitous coverage. Such \emph{cellular-connected UAV-user equipments (UEs)} will thus enable a myriad of applications ranging from real-time video streaming to surveillance. Nevertheless, the ability of UAV-UEs to establish line-of-sight (LoS) connectivity to cellular base stations (BSs) is both a blessing and a curse. On the one hand, it enables high-speed data access for the UAV-UEs. On the other hand, it can lead to substantial inter-cell interference to both the UAVs and the ground users. As such, a wide-scale deployment of UAV-UEs is only possible if interference management challenges are addressed~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial}.
While some literature has recently studied the use of UAVs as mobile BSs~\cite{U_globecom, ferryMessage, zhang_trajectory_power, path_planning_WCNC, mohammad_UAV, qingqing_UAV, chen2016caching}, the performance analysis of cellular-connected UAV-UEs (\emph{referred to hereinafter simply as UAVs}) remains relatively scarce~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial, reshaping_cellular}. For instance, in~\cite{LTEintheSky}, the authors study the impact of UAVs on the uplink performance of a ground LTE network. Meanwhile, the work in~\cite{SkyNotLimit} uses measurements and ray tracing simulations to study the airborne connectivity requirements and propagation characteristics of UAVs. The authors in~\cite{coexistence_ground_aerial} analyze the coverage probability of the downlink of a cellular network that serves both aerial and ground users. In~\cite{reshaping_cellular}, the authors consider a network consisting of both ground and aerial UEs and derive closed-form expressions for the coverage probability of the ground and drone UEs. Nevertheless, this prior art is limited to studying the impact that cellular-connected UAVs have on the ground network. Indeed, the existing literature~\cite{LTEintheSky, SkyNotLimit, coexistence_ground_aerial, reshaping_cellular} does not provide any concrete solution for optimizing the performance of a cellular network that serves both aerial and ground UEs so as to overcome the interference challenge that arises in this context. UAV trajectory optimization is essential in such scenarios. An online path planning scheme that accounts for wireless metrics is vital: it would, in essence, help address the aforementioned interference challenges while enabling new network design improvements, such as 3D frequency reuse. Such a path planning scheme allows the UAVs to adapt their movement based on the rate requirements of both the aerial UAV-UEs and the ground UEs, thus improving the overall network performance. The problem of UAV path planning has been studied mainly for non-UAV-UE applications~\cite{ferryMessage, zhang_trajectory_power, path_planning_WCNC, networked_camera}, with~\cite{path_cellular_UAVs} being the only work considering a cellular-connected UAV-UE scenario. In~\cite{ferryMessage}, the authors propose a distributed path planning algorithm for multiple UAVs to deliver delay-sensitive information to different ad-hoc nodes. The authors in~\cite{zhang_trajectory_power} optimize a UAV's trajectory in an energy-efficient manner. The authors in~\cite{path_planning_WCNC} propose a mobility model that combines area coverage, network connectivity, and UAV energy constraints for path planning. In~\cite{networked_camera}, the authors propose a fog-networking-based system architecture to coordinate a network of UAVs for video services at sports events. However, despite being interesting, the body of work in~\cite{ferryMessage, zhang_trajectory_power, path_planning_WCNC} and~\cite{networked_camera} is restricted to UAVs acting as BSs and does not account for UAV-UEs and their associated interference challenges. Hence, the approaches proposed therein cannot readily be used for cellular-connected UAVs. On the other hand, the authors in~\cite{path_cellular_UAVs} propose a path planning scheme for minimizing the time required by a cellular-connected UAV to reach its destination. Nevertheless, this work is limited to one UAV and does not account for the interference that cellular-connected UAVs cause on the ground network during their mission.
Moreover, the work in~\cite{path_cellular_UAVs} relies on offline optimization techniques that cannot adapt to the uncertainty and dynamics of a cellular network. The main contribution of this paper is a novel deep reinforcement learning (RL) framework based on echo state network (ESN) cells for optimizing the trajectories of multiple cellular-connected UAVs in an online manner. This framework will allow cellular-connected UAVs to minimize the interference they cause on the ground network as well as their wireless transmission latency. To realize this, we propose a dynamic noncooperative game in which the players are the UAVs and the objective of each UAV is to \emph{autonomously} and \emph{jointly} learn its path, transmit power level, and association vector. For our proposed game, the UAV's cell association vector, trajectory optimization, and transmit power level are closely coupled with one another, and their optimal values vary based on the dynamics of the network. Therefore, a major challenge in this game is the need for each UAV to have full knowledge of the ground network topology, the ground UEs' service requirements, and the other UAVs' locations. Consequently, to solve this game, we propose a deep RL ESN-based algorithm with which the UAVs can predict the dynamics of the network and subsequently determine their optimal paths as well as the allocation of their resources along these paths. Unlike previous studies, which are either centralized or rely on coordination among the UAVs, our approach is based on a self-organizing path planning and resource allocation scheme. In essence, two important features of our proposed algorithm are \emph{adaptation} and \emph{generalization}. Indeed, the UAVs can take decisions for \emph{unseen} network states based on the rewards they received in previous states. This is mainly due to the use of ESN cells, which enable the UAVs to retain their previous memory states. We show that the proposed algorithm reaches a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, we derive upper and lower bounds on the UAVs' altitudes that guarantee a maximum interference level on the ground network and a maximum wireless transmission delay for the UAV. To the best of our knowledge, \emph{this is the first work that exploits the framework of deep ESNs for interference-aware path planning of cellular-connected UAVs}. Simulation results show that the proposed approach improves the tradeoff between energy efficiency, wireless latency, and the interference level caused on the ground network. The results also show that each UAV's altitude is a function of the ground network density and the UAV's objective function, and is an important factor in achieving the UAV's target. The rest of this paper is organized as follows. Section~\ref{system_model} presents the system model. Section~\ref{game} describes the proposed noncooperative game model. The deep RL ESN-based algorithm is proposed in Section~\ref{algorithm}. In Section~\ref{simulation}, simulation results are analyzed. Finally, conclusions are drawn in Section~\ref{conclusion}.
\vspace{-0.5cm}
\section{System Model}\label{system_model}
Consider the uplink (UL) of a wireless cellular network composed of a set $\mathcal{S}$ of $S$ ground BSs, a set $\mathcal{Q}$ of $Q$ ground UEs, and a set $\mathcal{J}$ of $J$ cellular-connected UAVs. The UL is defined as the link from UE $q$ or UAV $j$ to BS $s$.
Each BS $s \in \mathcal{S}$ serves a set $\mathcal{K}_s\subseteq\mathcal{Q}$ of $K_s$ UEs and a set $\mathcal{N}_s\subseteq\mathcal{J}$ of $N_s$ cellular-connected UAVs. The total system bandwidth, $B$, is divided into a set $\mathcal{C}$ of $C$ resource blocks (RBs). Each UAV $j\in \mathcal{N}_s$ is allocated a set $\mathcal{C}_{j,s}\subseteq\mathcal{C}$ of $C_{j,s}$ RBs, and each UE $q\in \mathcal{K}_s$ is allocated a set $\mathcal{C}_{q,s}\subseteq\mathcal{C}$ of $C_{q,s}$ RBs, by its serving BS $s$. At each BS $s$, a particular RB $c \in \mathcal{C}$ is allocated to \emph{at most} one UAV $j\in \mathcal{N}_s$ or UE $q\in \mathcal{K}_s$. An airborne Internet of Things (IoT) is considered, in which the UAVs are equipped with various IoT devices, such as cameras, sensors, and GPS modules, that can be used for applications such as surveillance, monitoring, delivery, and real-time video streaming. The 3D coordinates of each UAV $j \in \mathcal{J}$ and each ground user $q \in \mathcal{Q}$ are, respectively, $(x_j, y_j, h_j)$ and $(x_q, y_q, 0)$. All UAVs are assumed to fly at a fixed altitude $h_j$ above the ground (as done in~\cite{zhang_trajectory_power, path_cellular_UAVs, relaying, optimization}), while the horizontal coordinates $(x_j, y_j)$ of each UAV $j$ vary in time. Each UAV $j$ needs to move from an initial location $o_j$ to a final destination $d_j$ while transmitting its mission-related data \emph{online}, such as sensor recordings, video streams, and location updates. We assume that the initial and final locations of each UAV are pre-determined based on its mission objectives. For ease of exposition, we consider a virtual grid for the mobility of the UAVs. We discretize the space into a set $\mathcal{A}$ of $A$ equally sized unit areas. The UAVs move along the centers of these areas, $c_a=(x_a, y_a, z_a)$, which yields a finite set of possible paths $\boldsymbol{p}_j$ for each UAV $j$. The path $\boldsymbol{p}_j$ of each UAV $j$ is defined as a sequence of area units $\boldsymbol{p}_j=(a_1, a_2, \cdots, a_l)$ such that $a_1=o_j$ and $a_l=d_j$. The size of the discretized area units $(a_1, a_2, \cdots, a_A) \in \mathcal{A}$ is chosen to be sufficiently small such that the UAVs' locations can be assumed to be approximately constant within each area, even at the maximum UAV speed, as is commonly done in the literature~\cite{relaying}. We assume a constant speed $0 < V_j \leq \widehat{V}_j$ for each UAV, where $\widehat{V}_j$ is the maximum speed of UAV $j$. Therefore, the time required by each UAV to travel between any two adjacent unit areas is constant.
\vspace{-0.1cm}
\subsection{Channel Models}
\vspace{-0.1cm}
We consider the sub-6 GHz band and the free-space path loss model for the UAV-BS data link. The path loss between UAV $j$ at location $a$ and BS $s$, $\xi_{j,s,a}$, is given by~\cite{hourani}:
\begin{align}
\xi_{j,s,a} (\mathrm{dB})= 20\ \mathrm{log}_{10} (d_{j,s,a}) + 20\ \mathrm{log}_{10} (\hat{f}) - 147.55,
\end{align}
\noindent where $\hat{f}$ is the system center frequency and $d_{j,s,a}$ is the Euclidean distance between UAV $j$ at location $a$ and BS $s$. We consider a Rician distribution for modeling the small-scale fading between UAV $j$ and ground BS $s$, thus accounting for both the LoS component and the multipath scatterers that can be experienced at the BS. In particular, adopting the Rician channel model for the UAV-BS link is justified by the fact that the channel between a given UAV and a ground BS is mainly dominated by a LoS link~\cite{zhang_trajectory_power}.
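As a quick numerical check of this model, the following minimal Python sketch (our own illustration; it assumes the 2 GHz carrier used later in our simulations) evaluates the free-space path loss and the resulting channel gain:
\begin{verbatim}
# Illustrative evaluation of the UAV-BS path loss model above.
import math

def uav_pathloss_db(d_m, f_hz=2e9):
    """Free-space path loss xi_{j,s,a} in dB at distance d_m (meters)."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) - 147.55

def channel_gain(d_m, g_rician, f_hz=2e9):
    """h = g * 10^(-xi/10): Rician fading combined with path loss."""
    return g_rician * 10 ** (-uav_pathloss_db(d_m, f_hz) / 10)

print(round(uav_pathloss_db(100.0), 2))  # ~78.47 dB at 100 m and 2 GHz
\end{verbatim}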
We assume that the Doppler spread due to the mobility of the UAVs is compensated for using existing techniques, such as frequency synchronization with a phase-locked loop~\cite{mengali}, as done in~\cite{zhang_trajectory_power} and~\cite{relaying}. For the terrestrial UE-BS links, we consider a Rayleigh fading channel. For a carrier frequency, $\hat{f}$, of 2 GHz, the path loss between UE $q$ and BS $s$ is given by~\cite{pathloss_ground}:
\begin{align}
\zeta_{q,s}(\mathrm{dB}) = 15.3+37.6\ \mathrm{log}_{10}(d_{q,s}),
\end{align}
\noindent where $d_{q,s}$ is the Euclidean distance between UE $q$ and BS $s$. The average signal-to-interference-plus-noise ratio (SINR), $\Gamma_{j,s,c,a}$, of the UAV-BS link between UAV $j$ at location $a$ $(a \in \mathcal{A})$ and BS $s$ over RB $c$ is given by:
\begin{align}\label{SNIR}
\Gamma_{j,s,c,a}=\frac{P_{j,s,c,a} h_{j,s,c,a}}{I_{j,s,c}+B_c N_0},
\end{align}
\noindent where $P_{j,s,c,a}=\widehat{P}_{j,s,a}/C_{j,s}$ is the transmit power of UAV $j$ at location $a$ to BS $s$ over RB $c$ and $\widehat{P}_{j,s,a}$ is the total transmit power of UAV $j$ to BS $s$ at location $a$. Here, the total transmit power of UAV $j$ is assumed to be distributed uniformly among all of its associated RBs. $h_{j,s,c,a}=g_{j,s,c,a}10^{-\xi_{j,s,a}/10}$ is the channel gain between UAV $j$ and BS $s$ on RB $c$ at location $a$, where $g_{j,s,c,a}$ is the Rician fading parameter. $N_0$ is the noise power spectral density and $B_{c}$ is the bandwidth of an RB $c$. $I_{j,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power experienced at BS $s$ on RB $c$ when serving UAV $j$, where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) connected to the neighboring BSs $r$ and transmitting on the same RB $c$ as UAV $j$. $h_{k,s,c}=m_{k,s,c}10^{-\zeta_{k,s}/10}$ is the channel gain between UE $k$ and BS $s$ on RB $c$, where $m_{k,s,c}$ is the Rayleigh fading parameter. Therefore, the achievable data rate of UAV $j$ at location $a$ associated with BS $s$ is $R_{j,s,a}=\sum_{c=1}^{C_{j,s}} B_{c} \mathrm{log}_2(1+\Gamma_{j,s,c,a})$. Given this achievable data rate and modeling each UAV's transmission buffer as an M/D/1 queueing system, the corresponding latency over the UAV-BS wireless link is given by~\cite{delay_book}:
\begin{align}\label{delay_eqn}
\tau_{j,s,a}=\frac{\lambda_{j,s}}{2\mu_{j,s,a}(\mu_{j,s,a}-\lambda_{j,s})}+\frac{1}{\mu_{j,s,a}},
\end{align}
\noindent where $\lambda_{j,s}$ is the average packet arrival rate (packets/s) traversing link $(j,s)$ and originating from UAV $j$, and $\mu_{j,s,a}=R_{j,s,a}/\nu$ is the service rate over link $(j,s)$ at location $a$, where $\nu$ is the packet size. On the other hand, the achievable data rate for a ground UE $q$ served by BS $s$ is given by:
\vspace{-0.2cm}
\begin{align}
R_{q,s}=\sum_{c=1}^{C_{q,s}}B_c\mathrm{log}_2\Big(1+\frac{P_{q,s,c}h_{q,s,c}}{I_{q,s,c}+B_cN_0}\Big),
\end{align}
\noindent where $h_{q,s,c}=m_{q,s,c}10^{-\zeta_{q,s}/10}$ is the channel gain between UE $q$ and BS $s$ on RB $c$ and $m_{q,s,c}$ is the Rayleigh fading parameter. $P_{q,s,c}=\widehat{P}_{q,s}/C_{q,s}$ is the transmit power of UE $q$ to its serving BS $s$ on RB $c$ and $\widehat{P}_{q,s}$ is the total transmit power of UE $q$.
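To see how these expressions chain together, consider the following minimal sketch (our own illustration; a single-RB case with hypothetical gain and interference values, and the RB bandwidth, noise density, and packet size used later in Table~\ref{parameters}):
\begin{verbatim}
# Illustrative chain: SINR -> achievable rate -> M/D/1 latency.
import math

B_C = 180e3                     # RB bandwidth (Hz)
N0 = 10 ** (-174 / 10) / 1000   # noise PSD: -174 dBm/Hz in W/Hz
NU = 2000                       # packet size (bits)

def sinr(p_tx_w, h, interference_w):
    return p_tx_w * h / (interference_w + B_C * N0)

def rate_bps(p_tx_w, h, interference_w, n_rb=1):
    return n_rb * B_C * math.log2(1 + sinr(p_tx_w, h, interference_w))

def md1_latency_s(rate, lam):
    """Waiting plus service time with service rate mu = R / nu."""
    mu = rate / NU               # service rate (packets/s)
    assert lam < mu, "queue must be stable"
    return lam / (2 * mu * (mu - lam)) + 1 / mu

r = rate_bps(0.1, 1e-8, 1e-13)  # 100 mW, gain 1e-8, weak interference
print(round(r / 1e6, 2), "Mbps;",
      round(1e3 * md1_latency_s(r, 500), 3), "ms at 500 packets/s")
\end{verbatim}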
Here, we also consider equal power allocation among the allocated RBs for the ground UEs. $I_{q,s,c}= \sum_{r=1, r\neq s}^S (\sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c} + \sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'})$ is the total interference power experienced by UE $q$ at BS $s$ on RB $c$ where $\sum_{r=1, r\neq s}^S \sum_{k=1}^{K_r} P_{k,r,c} h_{k,s,c}$ and $\sum_{r=1, r\neq s}^S\sum_{n=1}^{N_r} P_{n,r,c,a'} h_{n,s,c,a'}$ correspond, respectively, to the interference from the $K_r$ UEs and the $N_r$ UAVs (at their respective locations $a'$) associated with the neighboring BSs $r$ and transmitting using the same RB $c$ as UE $q$. \subsection{Problem Formulation} Our objective is to find the optimal path for each UAV $j$ based on its mission objectives as well as its interference on the ground network. Thus, we seek to minimize: a) the interference level that each UAV causes on the ground UEs and other UAVs, b) the transmission delay over the wireless link, and c) the time needed to reach the destination. To realize this, we optimize the paths of the UAVs jointly with the cell association vector and power control at each location $a \in \mathcal{A}$ along each UAV's path. We consider a directed graph $G_j=(\mathcal{V}, \mathcal{E}_j)$ for each UAV $j$ where $\mathcal{V}$ is the set of vertices corresponding to the centers of the unit areas $a \in \mathcal{A}$ and $\mathcal{E}_j$ is the set of edges formed along the path of UAV $j$. We let $\boldsymbol{\widehat{P}}$ be the transmission power vector with each element $\widehat{P}_{j,s,a}\in[0, \overline{P}_{j}]$ being the transmission power level of UAV $j$ to its serving BS $s$ at location $a$ where $\overline{P}_{j}$ is the maximum transmission power of UAV $j$. $\boldsymbol{\alpha}$ is the path formation vector with each element $\alpha_{j,a,b}\in\{0,1\}$ indicating whether or not a directed link is formed from area $a$ towards area $b$ for UAV $j$, i.e., if UAV $j$ moves from $a$ to $b$ along its path. $\boldsymbol{\beta}$ is the UAV-BS association vector with each element $\beta_{j,s,a}\in\{0,1\}$ denoting whether or not UAV $j$ is associated with BS $s$ at location $a$. 
Next, we present our optimization problem whose goal is to determine the path of each UAV along with its cell association vector and its transmit power level at each location $a$ along its path $\boldsymbol{p}_j$: \vspace{-0.4cm} \begin{multline}\label{obj} \hspace{-0.5cm}\min_{\boldsymbol{\widehat{P}}, \boldsymbol{\alpha}, \boldsymbol{\beta}}\vartheta\sum_{j=1}^{J}\sum_{s=1}^S \sum_{c=1}^{C_{j,s}}\sum_{a=1}^A \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a} h_{j,r,c,a}}{C_{j,s}}+\varpi \sum_{j=1}^J\sum_{a=1}^A\sum_{b=1, b\neq a}^A\alpha_{j,a,b} + \phi\sum_{j=1}^J\sum_{s=1}^S\sum_{a=1}^A\beta_{j,s,a}\tau_{j,s,a}, \end{multline} \vspace{-0.4cm} \begin{align}\label{cons_1} \sum_{b=1, b\neq a}^A\alpha_{j,b,a} \leq 1 \;\;\forall j\in \mathcal{J}, a\in \mathcal{A}, \end{align} \vspace{-0.45cm} \begin{align}\label{cons_2} \sum_{a=1, a\neq o_j}^A\alpha_{j,o_j,a} \textrm{=} 1 \;\;\forall j\in \mathcal{J}, \sum_{a=1, a\neq d_j}^A\alpha_{j,a,d_j} \textrm{=} 1 \;\;\forall j\in \mathcal{J}, \end{align} \vspace{-0.4cm} \begin{align}\label{cons_3} \sum_{a\textrm{=}1, a\neq b}^A\alpha_{j,a,b}-\sum_{f\textrm{=}1, f\neq b}^A\alpha_{j,b,f}\textrm{=} 0 \;\forall j\in \mathcal{J},b\in \mathcal{A} \;(b\neq o_j, b\neq d_j), \end{align} \vspace{-0.5cm} \begin{align}\label{cons_4} \widehat{P}_{j,s,a}\geq\sum_{b=1, b\neq a}^A\alpha_{j,b,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.74cm} \begin{align}\label{cons_44} \widehat{P}_{j,s,a}\geq\beta_{j,s,a} \;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.75cm} \begin{align}\label{cons_6} \sum_{s=1}^S \beta_{j,s,a} - \sum_{b=1, b\neq a}^A\alpha_{j,b,a}=0\;\;\;\forall j\in \mathcal{J}, a\in A, \end{align} \vspace{-0.37cm} \begin{align}\label{cons_7} \sum_{c=1}^{C_{j,s}}\Gamma_{j,s,c,a}\geq\beta_{j,s,a}\overline{\Gamma}_j \;\;\;\forall j\in \mathcal{J}, s\in \mathcal{S}, a\in \mathcal{A}, \end{align} \vspace{-0.68cm} \begin{align}\label{cons_8} 0\leq \widehat{P}_{j,s,a}\leq \overline{P}_{j} \;\;\forall j\in \mathcal{J}\textrm{,} s\in \mathcal{S}\textrm{,} \; a\in \mathcal{A}, \end{align} \vspace{-1.03cm} \begin{align}\label{cons_9} \alpha_{j\textrm{,}a\textrm{,}b}\in\{0\textrm{,}1\}\textrm{,}\; \beta_{j\textrm{,}s\textrm{,}a}\in\{0\textrm{,}1\} \;\;\forall j\in \mathcal{J}\textrm{,}\; s\in \mathcal{S}\textrm{,} \;\;a,b\in \mathcal{A}\textrm{.} \end{align} The objective function in (\ref{obj}) captures the total interference level that the UAVs cause on neighboring BSs along their paths, the length of the paths of the UAVs, and their wireless transmission delay. $\vartheta$, $\varpi$ and $\phi$ are multi-objective weights used to control the tradeoff between the three considered metrics. These weights can be adjusted to meet the requirements of each UAV's mission. For instance, the time to reach the destination is critical in search and rescue applications while the latency is important for online video streaming applications. (\ref{cons_1}) guarantees that each area $a$ is visited by UAV $j$ at most once along its path $\boldsymbol{p}_j$. (\ref{cons_2}) guarantees that the trajectory of each UAV $j$ starts at its initial location $o_j$ and ends at its final destination $d_j$. (\ref{cons_3}) guarantees that if UAV $j$ visits area $b$, it should also leave from area $b$ $(b\neq o_j, b\neq d_j)$. 
(\ref{cons_4}) and (\ref{cons_44}) guarantee that UAV $j$ transmits to BS $s$ at area $a$ with power $\widehat{P}_{j,s,a}>0$ only if UAV $j$ visits area $a$, i.e., $a\in \boldsymbol{p}_j$, and is associated with BS $s$ at location $a$. (\ref{cons_6}) guarantees that each UAV $j$ is associated with one BS $s$ at each location $a$ along its path $\boldsymbol{p}_j$. (\ref{cons_7}) guarantees a minimum threshold, $\overline{\Gamma}_j$, for the SINR value $\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ on RB $c$ at each location $a$, $a\in \boldsymbol{p}_j$. This, in turn, ensures successful decoding of the transmitted packets at the serving BS. The value of $\overline{\Gamma}_j$ is application and mission specific. Note that checking the SINR at each location $a$ is valid for our problem since we consider small-sized area units. (\ref{cons_8}) and (\ref{cons_9}) are the feasibility constraints. The formulated optimization problem is a mixed-integer nonlinear program, which is computationally complex to solve for large networks. To address this challenge, we adopt a distributed approach in which each UAV decides autonomously on its next path location along with its corresponding transmit power and association vector. In fact, a centralized approach would require control signals to be transmitted to the UAVs at all times. This may incur high round-trip latencies that are not desirable for real-time applications such as online video streaming. Further, a centralized approach requires a central entity to have full knowledge of the current state of the network and the ability to communicate with all UAVs at all times. However, this might not be feasible if the UAVs belong to different operators or in scenarios in which the environment changes dynamically. Therefore, we next propose a distributed approach that allows each UAV $j$ to learn its path $\boldsymbol{p}_j$, along with its transmission power level and association vector at each location $a$ along this path, in an autonomous and online manner.
\section{Towards a Self-Organizing Network of an Airborne Internet of Things}\label{game}
\subsection{Game-Theoretic Formulation}
Our objective is to develop a distributed approach that allows each UAV to take actions in an autonomous and online manner. For this purpose, we model the multi-agent path planning problem as a finite dynamic noncooperative game $\mathcal{G}$ with perfect information~\cite{walid_book}. Formally, we define the game as $\mathcal{G}=(\mathcal{J}, \mathcal{T}, \mathcal{Z}_j, \mathcal{V}_j, \Pi_j, u_j)$, with the set $\mathcal{J}$ of UAVs being the agents. $\mathcal{T}$ is a finite set of stages, which correspond to the steps required for all UAVs to reach their sought destinations. $\mathcal{Z}_j$ is the set of actions that can be taken by UAV $j$ at each stage $t \in \mathcal{T}$, $\mathcal{V}_j$ is the set of all network states observed by UAV $j$ up to stage $T$, $\Pi_j$ is a set of probability distributions defined over all $z_j \in \mathcal{Z}_j$, and $u_j$ is the payoff function of UAV $j$. At each stage $t \in \mathcal{T}$, the UAVs take their actions simultaneously. In particular, each UAV $j$ aims at determining its path $\boldsymbol{p}_j$ to its destination along with its optimal transmission power and cell association vector for each location $a \in \mathcal{A}$ along its path $\boldsymbol{p}_j$.
Therefore, at each $t$, UAV $j$ chooses an action $\boldsymbol{z}_j(t) \in \mathcal{Z}_j$ composed of the tuple $\boldsymbol{z}_j(t)=(\boldsymbol{a}_j(t), \widehat{P}_{j,s,a}(t), \boldsymbol{\beta}_{j,s,a}(t))$, where $\boldsymbol{a}_j(t) \in \{\textrm{left, right, forward, backward, no movement}\}$ corresponds to a move of fixed step size, $\widetilde{a}_j$, in a given direction. $\widehat{P}_{j,s,a}(t) \in \{\widehat{P}_{1}, \widehat{P}_{2}, \cdots, \widehat{P}_{O}\}$ is one of $O$ possible maximum transmit power levels of UAV $j$, and $\boldsymbol{\beta}_{j,s,a}(t)$ is the UAV-BS association vector. For each UAV $j$, let $\mathcal{L}_j$ be the set of its $L_j$ nearest BSs. The network state observed by UAV $j$ at stage $t$, $\boldsymbol{v}_j(t) \in \mathcal{V}_j$, is:
\begin{align}\label{input}
\boldsymbol{v}_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{,}
\end{align}
\noindent where $\delta_{j,l,a}(t)$ is the Euclidean distance from UAV $j$ at location $a$ to BS $l$ at stage $t$, $\theta_{j,l,a}$ is the orientation angle in the xy-plane from UAV $j$ at location $a$ to BS $l$, defined as $\mathrm{tan}^{-1}(\Delta y_{j,l}/\Delta x_{j,l})$~\cite{orientation_angle}, where $\Delta y_{j,l}$ and $\Delta x_{j,l}$ correspond to the differences in the $x$ and $y$ coordinates of UAV $j$ and BS $l$, $\theta_{j,d_j,a}$ is the orientation angle in the xy-plane from UAV $j$ at location $a$ to its destination $d_j$, defined as $\mathrm{tan}^{-1}(\Delta y_{j,d_j}/\Delta x_{j,d_j})$, and $\{x_j(t)\textrm{,} y_j(t)\}_{j \in \mathcal{J}}$ are the horizontal coordinates of all UAVs at stage $t$. For our model, we quantize the orientation angle and distance values into different range intervals, each of which is mapped to a distinct state.
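The assembly of the observed state in (\ref{input}) can be sketched as follows (our own illustration; the quantization intervals shown are examples, not the ones used in our simulations):
\begin{verbatim}
# Illustrative construction of the observed state v_j(t).
import math

def orientation(dx, dy):
    """Orientation angle in the xy-plane (atan2 form of tan^-1(dy/dx))."""
    return math.atan2(dy, dx)

def quantize(value, bins):
    """Map a continuous value to the index of its range interval."""
    for i, upper in enumerate(bins):
        if value <= upper:
            return i
    return len(bins)

def observed_state(uav_xy, nearest_bs_xy, dest_xy, all_uav_xy):
    state = []
    for bx, by in nearest_bs_xy:      # the L_j nearest BSs
        dx, dy = bx - uav_xy[0], by - uav_xy[1]
        state.append(quantize(math.hypot(dx, dy), [100, 200, 400]))
        state.append(quantize(orientation(dx, dy),
                              [-math.pi / 2, 0, math.pi / 2]))
    state.append(quantize(orientation(dest_xy[0] - uav_xy[0],
                                      dest_xy[1] - uav_xy[1]),
                          [-math.pi / 2, 0, math.pi / 2]))
    for x, y in all_uav_xy:           # horizontal coordinates of all UAVs
        state.extend([x, y])
    return state

print(observed_state((400, 400), [(300, 500), (600, 420)],
                     (760, 40), [(400, 400), (120, 680)]))
\end{verbatim}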
Moreover, based on the optimization problem defined in (\ref{obj})-(\ref{cons_9}) and by incorporating the Lagrangian penalty method into the utility function definition for the SINR constraint~(\ref{cons_7}), the resulting utility function for UAV $j$ at stage $t$, $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$, will be given by \vspace{-0.6cm} \begin{align}\label{utility_t} u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))\textrm{=} \begin{cases} \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{+} C\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)<\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)=\delta_{j,d_j,a'}(t-1)\textrm{,}\\ \Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \textrm{-} C \textrm{,} \; \mathrm{if} \; \delta_{j,d_j,a}(t)>\delta_{j,d_j,a'}(t-1)\textrm{,} \end{cases} \end{align} \vspace{-0.3cm} \noindent where $\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is defined as: \vspace{-0.1cm} \begin{multline} \hspace{-0.3cm}\Phi(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))\textrm{=}-\vartheta' \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^S \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)} - \phi'\tau_{j,s,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)) \\- \varsigma (\mathrm{min}(0, \sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))-\overline{\Gamma}_j))^2, \end{multline} \noindent subject to (\ref{cons_1})-(\ref{cons_6}), (\ref{cons_8}) and (\ref{cons_9}). $\varsigma$ is the penalty coefficient for (\ref{cons_7}) and $C$ is a constant parameter. $a'$ and $a$ are the locations of UAV $j$ at $(t-1)$ and $t$ where $\delta_{j,d_j,a}$ is the distance between UAV $j$ and its destination $d_j$. It is worth noting here that the action space of each UAV $j$ and, thus, the complexity of the proposed game $\mathcal{G}$ increases exponentially when updating the 3D coordinates of the UAVs. Nevertheless, each UAV's altitude must be bounded in order to guarantee an SINR threshold for the UAV and a minimum achievable data rate for the ground UEs. Next, we derive an upper and lower bound for the optimal altitude of any given UAV $j$ based on the proposed utility function in (\ref{utility_t}). In essence, such bounds are valid for all values of the multi-objective weights $\vartheta '$, $\phi '$, and $\varsigma$. 
\begin{theorem}\label{theorem_altitude}
\emph{For all values of $\vartheta '$, $\phi '$, and $\varsigma$, a given network state $\boldsymbol{v}_j(t)$, and a particular action $\boldsymbol{z}_j(t)$, the upper and lower bounds for the altitude of UAV $j$ are, respectively, given by:}
\begin{align}
h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))),
\end{align}
\vspace{-0.9cm}
\begin{align}
h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \mathrm{max} (\chi, \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))),
\end{align}
\noindent \emph{where $\chi$ corresponds to the minimum altitude at which a UAV can fly, and $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ are expressed as:}
\vspace{-0.3cm}
\begin{multline}
\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2},
\end{multline}
\emph{and}
\begin{align}
\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)),
\end{align}
\emph{where $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ is the minimum altitude at which UAV $j$ should operate with respect to a particular neighboring BS $r$ and is expressed as:}
\begin{align}
\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2}.
\end{align}
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
From the above theorem, we can deduce that the optimal altitude of the UAVs is a function of their objective function, the locations of the ground BSs, the network design parameters, and the interference level from the other UEs and UAVs in the network. Therefore, at each time step $t$, UAV $j$ can adjust its altitude based on the values of $h_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ and $h_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, thus adapting to the dynamics of the network. In essence, the derived upper and lower bounds on the optimal altitude of the UAVs allow a reduction of the action space of game $\mathcal{G}$, thus simplifying the process needed for the UAVs to find a solution, i.e., an equilibrium, of the game. Next, we analyze the equilibrium point of the proposed game $\mathcal{G}$.
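Before turning to the equilibrium analysis, we note that these bounds are inexpensive to evaluate at each stage; the following minimal sketch (our own illustration, with hypothetical parameter values and a single RB) makes them concrete:
\begin{verbatim}
# Illustrative evaluation of the altitude bounds of the theorem above.
import math

C_HAT, F_HAT = 3e8, 2e9                  # speed of light, carrier (Hz)
K = (4 * math.pi * F_HAT / C_HAT) ** 2   # common path loss constant

def h_max(p, n_rb, gamma_bar, g_over_in, x_off, y_off):
    """Upper bound: altitude at which the SINR meets the threshold."""
    d2 = (p / n_rb) * g_over_in / (gamma_bar * K)
    return math.sqrt(max(d2 - x_off**2 - y_off**2, 0.0))

def h_min_r(p, n_rb, g_sum, i_bar_sum, x_off, y_off):
    """Lower bound with respect to one neighboring BS r."""
    d2 = p * g_sum / (n_rb * K * i_bar_sum)
    return math.sqrt(max(d2 - x_off**2 - y_off**2, 0.0))

# Example: 100 mW on one RB, -3 dB SINR threshold, serving BS 200 m
# away horizontally, fading-to-interference ratio g/(I + B_c N_0) = 1e11.
print(round(h_max(0.1, 1, 10 ** (-3 / 10), 1e11, 200, 0), 1), "m (upper)")
print(round(h_min_r(0.1, 1, 1.0, 1e-10, 150, 0), 1), "m (lower, one r)")
\end{verbatim}
The overall lower bound is then the maximum of \texttt{h\_min\_r} over all neighboring BSs, as stated in the theorem.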
\vspace{-0.3cm} \subsection{Equilibrium Analysis} \vspace{-0.1cm} For our game $\mathcal{G}$, we are interested in studying the subgame perfect Nash equilibrium (SPNE) in behavioral strategies. An SPNE is a profile of strategies which induces a Nash equilibrium (NE) on every subgame of the original game. Moreover, a \emph{behavioral strategy} allows each UAV to assign a probability distribution over its set of actions at each network state, independently across the different network states. Here, note that there always exists at least one SPNE for any finite-horizon extensive game with perfect information (Selten's theorem)~\cite{SPNE_existence}. Let $\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))=(\pi_{j,z_1}(\boldsymbol{v}_j(t)), \pi_{j,z_2}(\boldsymbol{v}_j(t)), \cdots, \pi_{j,z_{\mid \mathcal{Z}_j\mid}}(\boldsymbol{v}_j(t))) \in \Pi_j$ be the behavioral strategy of UAV $j$ at state $\boldsymbol{v}_j(t)$ and let $\Delta (\mathcal{Z})$ be the set of all probability distributions over the action space $\mathcal{Z}$. Next, we define the notion of an SPNE. \begin{definition}\emph{A behavioral strategy $(\boldsymbol{\pi}^*_1(\boldsymbol{v}_j(t))\textrm{,} \cdots\textrm{,} \boldsymbol{\pi}_J^*(\boldsymbol{v}_j(t))) = (\boldsymbol{\pi}_j^*(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$ constitutes a} subgame perfect Nash equilibrium \emph{if, $\forall j \in \mathcal{J}$, $\forall t \in \mathcal{T}$ and $\forall \boldsymbol{\pi}_j(\boldsymbol{v}_j(t)) \in \Delta (\mathcal{Z})$, $\overline{u}_j(\boldsymbol{\pi}^*_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))\geq \overline{u}_j(\boldsymbol{\pi}_j(\boldsymbol{v}_j(t)), \boldsymbol{\pi}^*_{-j}(\boldsymbol{v}_j(t)))$.} \end{definition} Therefore, at each state $\boldsymbol{v}_j(t)$ and stage $t$, the goal of each UAV $j$ is to maximize its expected sum of discounted rewards, computed as the sum of the immediate reward for the given state and the expected discounted utility of the subsequent states: \vspace{-0.4cm} \begin{multline}\label{expected_utility} \overline{u}_j(\boldsymbol{v}_j(t), \boldsymbol{\pi}_j(\boldsymbol{v}_j(t))\textrm{,} \boldsymbol{\pi}_{\textrm{-}j}(\boldsymbol{v}_j(t)))=\mathds{E}_{\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))}\left\{\sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l))| \boldsymbol{v}_{j,0}=\boldsymbol{v}_j\right\}\\ \textrm{=}\sum_{\boldsymbol{z}\in\mathcal{Z}} \sum_{l=0}^\infty \gamma^{l} u_j(\boldsymbol{v}_j(t+l)\textrm{,} \boldsymbol{z}_j(t+l)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t+l)) \prod_{j'=1}^J \pi_{j'\textrm{,}z_{j'}}(\boldsymbol{v}_{j'}(t+l))\textrm{,} \end{multline} \vspace{-0.2cm} \noindent where $\gamma \in (0, 1)$ is a discount factor for delayed rewards and $\mathds{E}_{\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))}$ denotes an expectation over trajectories of states and actions, in which actions are selected according to $\boldsymbol{\pi}_j(\boldsymbol{v}_j(t))$. Here, $u_j$ is the short-term reward for being in state $\boldsymbol{v}_j$ and $\overline{u}_j$ is the expected long-term total reward from state $\boldsymbol{v}_j$ onwards. Note also that the UAV's cell association vector, trajectory, and transmit power level are closely coupled with each other, and their corresponding optimal values vary based on the UAVs' objectives.
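As a toy illustration of the discounted objective in (\ref{expected_utility}), the snippet below accumulates $\sum_l \gamma^l u_j(t+l)$ along one sampled trajectory; in practice, the expectation would be estimated by averaging such returns over many trajectories drawn from the behavioral strategies. The utility values used here are placeholders.

```python
def discounted_return(stage_utilities, gamma=0.7):
    """Discounted sum of stage utilities along one sampled trajectory,
    i.e., sum_l gamma**l * u_j(t+l) in Eq. (expected_utility)."""
    return sum(gamma**l * u for l, u in enumerate(stage_utilities))

# e.g., three stages of utilities sampled under the UAVs' strategies:
print(discounted_return([1.0, 0.5, -0.2]))  # 1.0 + 0.7*0.5 + 0.49*(-0.2)
```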
In a multi-UAV network, each UAV must have full knowledge of the future reward functions at each information set, and thus for all future network states, in order to find the SPNE. This in turn necessitates knowledge of all possible future actions of all UAVs in the network and becomes challenging as the number of UAVs increases. To address this challenge, we rely on deep recurrent neural networks (RNNs)~\cite{RNN_survey}. In essence, RNNs exhibit dynamic temporal behavior and are characterized by an adaptive memory that enables them to store necessary previous state information to predict future actions. Deep neural networks, in turn, are capable of dealing with large datasets. Therefore, next, we develop a novel deep RL algorithm based on ESNs, a special kind of RNN, for finding the SPNE of our game $\mathcal{G}$. \section{Deep Reinforcement Learning for Online Path Planning and Resource Management}\label{algorithm} In this section, we first introduce a deep ESN-based architecture that allows the UAVs to store previous states whenever needed while being able to learn future network states. Then, we propose an RL algorithm based on this architecture to learn an SPNE of the proposed game. \subsection{Deep ESN Architecture}\label{ESN_architecture} ESNs are a type of RNN with feedback connections that belongs to the family of reservoir computing (RC)~\cite{RNN_survey}. An ESN is composed of an input weight matrix $\boldsymbol{W}_{\mathrm{in}}$, a recurrent matrix $\boldsymbol{W}$, and an output weight matrix $\boldsymbol{W}_{\mathrm{out}}$. Because only the output weights are altered during training, ESN training is typically quick and computationally efficient compared to training other RNNs. Moreover, multiple non-linear reservoir layers can be stacked on top of each other, resulting in a \emph{deep ESN architecture}. Deep ESNs exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction while preserving the RC training efficiency. They can learn data representations at different levels of abstraction, hence disentangling the difficulties in modeling complex tasks by representing them in terms of simpler ones hierarchically. Let $N_{j,R}^{(n)}$ be the number of internal units of the reservoir of UAV $j$ at layer $n$, $N_{j,U}$ be the external input dimension of UAV $j$ and $N_{j,L}$ be the number of layers in the stack for UAV $j$. Next, we define the following ESN components: \begin{itemize} \item $\boldsymbol{v}_j(t) \in \mathds{R}^{N_{j,U}}$ as the external input of UAV $j$ at stage $t$, which effectively corresponds to the current network state, \item $\boldsymbol{x}^{(n)}_j(t) \in \mathds{R}^{N_{j,R}^{(n)}}$ as the state of the reservoir of UAV $j$ at layer $n$ at stage $t$, \item $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ as the input-to-reservoir matrix of UAV $j$ at layer $n$, where $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,U}}$ for $n=1$, and $\boldsymbol{W}_{j, \mathrm{in}}^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n-1)}}$ for $n>1$, \item $\boldsymbol{W}_j^{(n)} \in \mathds{R}^{N_{j,R}^{(n)} \times N_{j,R}^{(n)}}$ as the recurrent reservoir weight matrix for UAV $j$ at layer $n$, \item $\boldsymbol{W}_{j, \mathrm{out}} \in \mathds{R}^{\mid\mathcal{Z}_j\mid \times (N_{j,U}+\sum_{n}N_{j,R}^{(n)})}$ as the reservoir-to-output matrix of UAV $j$, which acts on the concatenation of the external input and the reservoir states of all layers.
\end{itemize} The objective of the deep ESN architecture is to approximate a function $\boldsymbol{F}_j=(F_j^{(1)}, F_j^{(2)}, \cdots, F_j^{(N_{j,L})})$ for learning an SPNE for each UAV $j$ at each stage $t$. For each $n=1, 2, \cdots, N_{j,L}$, the function $F_j^{(n)}$ describes the evolution of the state of the reservoir at layer $n$, i.e., $\boldsymbol{x}_{j}^{(n)}(t)=F_j^{(n)}(\boldsymbol{v}_j(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n=1$ and $\boldsymbol{x}_{j}^{(n)}(t)=F_j^{(n)}(\boldsymbol{x}_j^{(n-1)}(t), \boldsymbol{x}_j^{(n)}(t-1))$ for $n>1$. $\boldsymbol{W}_{j, \mathrm{out}}$ and $\boldsymbol{x}^{(n)}_j(t)$ are initialized to zero while $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$ and $\boldsymbol{W}_j^{(n)}$ are randomly generated. Note that although the dynamic reservoir is initially generated randomly, it is combined later with the external input, $\boldsymbol{v}_j(t)$, in order to store the network states and with the trained output matrix, $\boldsymbol{W}_{j, \mathrm{out}}$, so that it can approximate the reward function. Moreover, the spectral radius of $\boldsymbol{W}_j^{(n)}$ (i.e., the largest eigenvalue in absolute value), $\rho_j^{(n)}$, must be strictly smaller than 1 to guarantee the stability of the reservoir~\cite{echo_state_property}. In fact, the value of $\rho_j^{(n)}$ is related to the variable memory length of the reservoir that enables the proposed deep ESN framework to store necessary previous state information, with larger values of $\rho_j^{(n)}$ resulting in longer memory length. We next define the deep ESN's input and reward functions. For each deep ESN of UAV $j$, we distinguish between two types of inputs: the external input, $\boldsymbol{v}_j(t)$, which is fed to the first layer of the deep ESN and corresponds to the current state of the network, and the input fed to each layer $n>1$, which corresponds to the state of the previous layer, $\boldsymbol{x}_j^{(n-1)}(t)$. Define $\widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))= u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \prod_{j'=1}^J \pi_{j',z_{j'}}(\boldsymbol{v}_{j'}(t))$ as the expected value of the instantaneous utility function $u_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t))$ in (\ref{utility_t}) for UAV $j$ at stage $t$.
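The following compact NumPy sketch is our illustrative rendering of the architecture just defined: a two-layer deep ESN with leaky-integrator units and a linear readout, anticipating the state updates and the temporal-difference readout training formalized in (\ref{state_1})-(\ref{W_out}) below. The layer sizes follow the simulation setup of Section~\ref{simulation}; the class interface, the action-space size, and the random seeding are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res, n_in, rho=0.99):
    """Random input and recurrent weights; the recurrent matrix is
    rescaled so its spectral radius equals rho < 1 (stability)."""
    W_in = rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-1, 1, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

class DeepESN:
    """Two-layer deep ESN with a linear readout, one per UAV."""
    def __init__(self, n_in, n_res=(12, 6), n_actions=7,
                 omega=(0.99, 0.99)):        # n_actions is illustrative
        self.layers, size_in = [], n_in
        for n in n_res:
            self.layers.append(make_reservoir(n, size_in))
            size_in = n
        self.x = [np.zeros(n) for n in n_res]
        self.omega = omega
        # readout acts on [input; all reservoir states], one row per action
        self.W_out = np.zeros((n_actions, n_in + sum(n_res)))

    def step(self, v):
        """Leaky-integrator update of every layer, cf. Eqs. (state_1),
        (state_n); returns the concatenated feature vector."""
        inp = v
        for i, (W_in, W) in enumerate(self.layers):
            pre = np.tanh(W_in @ inp + W @ self.x[i])
            self.x[i] = (1 - self.omega[i]) * self.x[i] + self.omega[i] * pre
            inp = self.x[i]
        return np.concatenate([v] + self.x)

    def outputs(self, state):
        """Estimated reward of every action, cf. Eq. (output)."""
        return self.W_out @ state

    def td_update(self, state, action, reward, lam=0.01):
        """Gradient step on the reward error signal, cf. Eq. (W_out)."""
        err = reward - self.W_out[action] @ state
        self.W_out[action] += lam * err * state
```

In this rendering, a UAV would call \texttt{step} with the observed network state $\boldsymbol{v}_j(t)$, select an action $\epsilon$-greedily from \texttt{outputs}, and train the readout with \texttt{td\_update}.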
The reward that UAV $j$ obtains from action $\boldsymbol{z}_j$ at a given network state $\boldsymbol{v}_j(t)$ is then given by: \vspace{-0.32cm} \begin{multline}\label{reward} r_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{-j}(t)) \textrm{=} \begin{cases} \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t)) \textrm{,} \; \mathrm{if\; UAV} \;j\; \mathrm{reaches} \; d_j\textrm{,}\\ \widetilde{u}_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t), \boldsymbol{z}_{\textrm{-}j}(t))\textrm{+}\gamma \mathrm{max}_{\boldsymbol{z}_j \in \mathcal{Z}_j} \boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t \textrm{+} 1)\textrm{,} t \textrm{+}1) \\ \hspace{0.4 cm} [\boldsymbol{v}'_j(t), \boldsymbol{x}'^{(1)}_j(t), \boldsymbol{x}'^{(2)}_j(t), \cdots, \boldsymbol{x}'^{(n)}_j(t)]\textrm{,} \; \mathrm{otherwise}\textrm{.} \end{cases} \end{multline}\raisetag{3\baselineskip} \noindent Here, $\boldsymbol{v}'_j(t)$ and $\boldsymbol{x}'^{(n)}_j(t)$ correspond, respectively, to the network state and the reservoir state of layer $n$ at stage $(t+1)$ that result from taking actions $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ at stage $t$. Fig.~\ref{Deep_ESN} shows the proposed reservoir architecture of the deep ESN consisting of two layers. \begin{figure}[t!] \begin{center} \centering \vspace{-0.1cm} \includegraphics[width=13cm]{figures/Deep_ESN} \vspace{-0.5cm} \caption{Proposed Deep ESN architecture.}\label{Deep_ESN} \vspace{-0.7cm} \end{center} \end{figure} \subsection{Update Rule Based on Deep ESN} We now introduce the deep ESN's update phase that each UAV uses to store and estimate the reward function of each path and resource allocation scheme at a given stage $t$. In particular, we consider leaky integrator reservoir units~\cite{leaky_integrator} for updating the state transition functions $\boldsymbol{x}^{(n)}_j(t)$ at stage $t$. Therefore, the state transition function of the first layer $\boldsymbol{x}^{(1)}_j(t)$ will be: \begin{align}\label{state_1} \boldsymbol{x}^{(1)}_j(t)= (1-\omega_j^{(1)})\boldsymbol{x}_j^{(1)}(t-1)+\omega_j^{(1)}\mathrm{tanh}(\boldsymbol{W}_{j, \mathrm{in}}^{(1)}\boldsymbol{v}_j(t)+\boldsymbol{W}_j^{(1)}\boldsymbol{x}_j^{(1)}(t-1)), \end{align} \noindent where $\omega_j^{(n)} \in [0, 1]$ is the leaking parameter at layer $n$ for UAV $j$, which relates to the speed of the reservoir dynamics in response to the input, with larger values of $\omega_j^{(n)}$ resulting in a faster response of the corresponding $n$-th reservoir to the input. The state transition of UAV $j$, $\boldsymbol{x}^{(n)}_j(t)$, for $n>1$ is given by: \begin{align}\label{state_n} \boldsymbol{x}^{(n)}_j(t)= (1-\omega_j^{(n)})\boldsymbol{x}_j^{(n)}(t-1)+\omega_j^{(n)}\mathrm{tanh}(\boldsymbol{W}_{j,\mathrm{in}}^{(n)}\boldsymbol{x}_j^{(n-1)}(t)+\boldsymbol{W}_j^{(n)}\boldsymbol{x}_j^{(n)}(t-1)). \end{align} The output $y_j(t)$ of the deep ESN at stage $t$ is used to estimate the reward of each UAV $j$ based on the actions $\boldsymbol{z}_j(t)$ and $\boldsymbol{z}_{-j}(t)$ currently adopted by UAV $j$ and by the other UAVs $-j$, respectively, for the current network state $\boldsymbol{v}_j(t)$ after training $\boldsymbol{W}_{j, \mathrm{out}}$. It can be computed as: \begin{align}\label{output} y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))=\boldsymbol{W}_{j, \mathrm{out}}(\boldsymbol{z}_j(t), t) [\boldsymbol{v}_j(t), \boldsymbol{x}^{(1)}_j(t), \boldsymbol{x}^{(2)}_j(t), \cdots, \boldsymbol{x}^{(n)}_j(t)].
\end{align} We adopt a temporal difference RL approach for training the output matrix $\boldsymbol{W}_{j, \mathrm{out}}$ of the deep ESN architecture. In particular, we employ a linear gradient descent approach using the reward error signal, given by the following update rule~\cite{RL_ESN}: \begin{multline}\label{W_out} \hspace{-0.2cm}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t\textrm{+}1)\textrm{=}\boldsymbol{W}_{j\textrm{,} \mathrm{out}}(\boldsymbol{z}_j(t)\textrm{,} t)\textrm{+}\lambda_j ( r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) -y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))) [\boldsymbol{v}_j(t)\textrm{,} \\ \boldsymbol{x}^{(1)}_j(t)\textrm{,} \boldsymbol{x}^{(2)}_j(t)\textrm{,} \cdots\textrm{,} \boldsymbol{x}^{(n)}_j(t)]^T\textrm{.} \end{multline} Here, note that the objective of each UAV is to minimize the value of the error function $e_j(\boldsymbol{v}_j(t))= \left| r_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{\textrm{-}j}(t)) - y_j(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t))\right|$. \subsection{Proposed Deep RL Algorithm} Based on the proposed deep ESN architecture and update rule, we next introduce a multi-agent deep RL framework that the UAVs can use to learn an SPNE in behavioral strategies for the game $\mathcal{G}$. The algorithm is divided into two phases: \emph{training and testing}. In the former, the UAVs are trained offline, before they become active in the network, using the architecture of Subsection~\ref{ESN_architecture}. The testing phase corresponds to the actual execution of the algorithm, after the weights of $\boldsymbol{W}_{j, \mathrm{out}}, \forall j \in \mathcal{J},$ have been optimized, and is implemented on each UAV for execution at run time. \begin{algorithm}[t!] \scriptsize \caption{Training phase of the proposed deep RL algorithm} \label{training_algorithm} \begin{algorithmic}[t!] \STATE \textbf{Initialization:}\\ $\pi_{j,z_j}(\boldsymbol{v}_j(t))=\frac{1}{\mid \mathcal{Z}_j\mid} \; \forall t\in \mathcal{T}, z_j \in \mathcal{Z}_j$, $y_j(\boldsymbol{v}_j(t), \boldsymbol{z}_{j}(t))=0$, $\boldsymbol{W}_{j, \mathrm{in}}^{(n)}$, $\boldsymbol{W}_{j}^{(n)}$, $\boldsymbol{W}_{j, \mathrm{out}}$. \\ \vspace{0.2cm} \FOR {The number of training iterations} \WHILE{At least one UAV $j$ has not reached its destination $d_j$,} \vspace{0.05cm} \FOR{all UAVs $j$ (in a parallel fashion)} \STATE \textbf{Input:} Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\ref{input}). \STATE \textbf{Step 1: Action selection}\\ Each UAV $j$ selects a random action $\boldsymbol{z}_j(t)$ with probability $\epsilon$;\\ otherwise, UAV $j$ selects $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$.
\STATE \textbf{Step 2: Location, cell association and transmit power update}\\ Each UAV $j$ updates its location, cell association and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\\ \STATE \textbf{Step 3: Reward computation}\\ Each UAV $j$ computes its reward values based on (\ref{reward}).\\ \STATE \textbf{Step 4: Action broadcast}\\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\\ \STATE \textbf{Step 5: Deep ESN update}\\ - Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}).\\ - Each UAV $j$ computes its output $y_j\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$ based on (\ref{output}).\\ - The weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\ref{W_out}).\\ \ENDFOR \ENDWHILE \ENDFOR \end{algorithmic} \end{algorithm} During the training phase, each UAV aims at optimizing its output weight matrix $\boldsymbol{W}_{j\textrm{,} \mathrm{out}}$ such that the value of the error function $e_j(\boldsymbol{v}_j(t))$ at each stage $t$ is minimized. In particular, the training phase is composed of multiple iterations, each consisting of multiple rounds, i.e., the number of steps required for all UAVs to reach their corresponding destinations $d_j$. At each round, UAVs face a tradeoff between playing the action associated with the highest expected utility and trying out all their actions to improve their estimates of the reward function in (\ref{reward}). This in fact corresponds to the exploration and exploitation tradeoff, in which UAVs need to strike a balance between exploring their environment and exploiting the knowledge accumulated through such exploration~\cite{sutton}. Therefore, we adopt the $\epsilon$-greedy policy, in which UAVs choose the action that yields the maximum utility value with a probability of $1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}$ while randomly exploring each other action with a probability of $\frac{\epsilon}{\mid\mathcal{Z}_j \mid}$. The strategy over the action space will be: \vspace{-0.1cm} \begin{align} \pi_{j,z_j}(\boldsymbol{v}_j(t))= \begin{cases} 1- \epsilon + \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{if} \; \boldsymbol{z}_j(t) = \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_j(t), \boldsymbol{z}_{j}(t) \right),\\ \frac{\epsilon}{\mid \mathcal{Z}_j\mid}, \; \mathrm{otherwise}. \end{cases} \end{align} Based on the selected action $\boldsymbol{z}_j(t)$, each UAV $j$ updates its location, cell association, and transmission power level and computes its reward function according to (\ref{reward}). To determine the next network state, each UAV $j$ broadcasts its selected action to all other UAVs in the network. Then, each UAV $j$ updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture according to (\ref{state_1}) and (\ref{state_n}). The output $y_j$ at stage $t$ is then updated based on (\ref{output}). Finally, the weights of the output matrix $\boldsymbol{W}_{j,\mathrm{out}}$ of each UAV $j$ are updated based on the linear gradient descent update rule given in (\ref{W_out}). Note that a UAV stops taking any actions once it has reached its destination. A summary of the training phase is given in Algorithm~\ref{training_algorithm}. \begin{algorithm}[t!]
\scriptsize \caption{Testing phase of the proposed deep RL algorithm} \label{testing_algorithm} \begin{algorithmic}[t!] \vspace{0.2cm} \WHILE{At least one UAV $j$ has not reached its destination $d_j$,} \vspace{0.05cm} \FOR{all UAVs $j$ (in a parallel fashion)} \STATE \textbf{Input:} Each UAV $j$ receives an input $\boldsymbol{v}_j(t)$ based on (\ref{input}). \STATE \textbf{Step 1: Action selection}\\ Each UAV $j$ selects an action $\boldsymbol{z}_j(t)= \mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}\left(\boldsymbol{v}_{j}(t), \boldsymbol{z}_{j}(t)\right)$. \STATE \textbf{Step 2: Location, cell association and transmit power update}\\ Each UAV $j$ updates its location, cell association and transmission power level based on the selected action $\boldsymbol{z}_j(t)$.\\ \STATE \textbf{Step 3: Action broadcast}\\ Each UAV $j$ broadcasts its selected action $\boldsymbol{z}_j(t)$ to all other UAVs.\\ \STATE \textbf{Step 4: State transition vector update}\\ Each UAV $j$ updates the state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $(n)$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}).\\ \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} Meanwhile, the testing phase corresponds to the actual execution of the algorithm. In this phase, each UAV chooses its action greedily for each state $\boldsymbol{v}_j(t)$, i.e., $\mathrm{argmax}_{z_j \in \mathcal{Z}_j} y_{j}(\boldsymbol{v}_j(t), \boldsymbol{z}_j(t))$, and updates its location, cell association, and transmission power level accordingly. Each UAV then broadcasts its selected action and updates its state transition vector $\boldsymbol{x}_j^{(n)}(t)$ for each layer $n$ of the deep ESN architecture based on (\ref{state_1}) and (\ref{state_n}). A summary of the testing phase is given in Algorithm \ref{testing_algorithm}. It is important to note that analytically guaranteeing the convergence of the proposed deep learning algorithm is challenging, as convergence is highly dependent on the hyperparameters used during the training phase. For instance, using too few neurons in the hidden layers results in underfitting, which could make it hard for the neural network to detect the signals in a complicated data set. On the other hand, using too many neurons in the hidden layers can result either in overfitting or in an increase in the training time that could make training the neural network impractical. Overfitting corresponds to the case when the model learns the random fluctuations and noise in the training data set to the extent that it negatively impacts the model's ability to generalize when fed with new data. Therefore, in this work, we limit our analysis of convergence to simulation results (see Section~\ref{simulation}) showing that, under a reasonable choice of the hyperparameters, convergence is observed for our proposed game. In such cases, it is important to study the convergence point and the convergence complexity of our proposed algorithm. Next, we characterize the convergence point of our proposed algorithm. \begin{proposition} \emph{If Algorithm~\ref{training_algorithm} converges, then the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$.} \end{proposition} \begin{proof} An SPNE is a strategy profile that induces a Nash equilibrium on every subgame. Therefore, at the equilibrium state of each subgame, there is no incentive for any UAV to deviate after observing any history of joint actions.
Moreover, given the fact that an ESN framework exhibits adaptive memory that enables it to store necessary previous state information, UAVs can essentially retain other players' actions at each stage $t$ and thus take actions accordingly. To show that our proposed scheme guarantees convergence to an SPNE, we use the following lemma from~\cite{SPNE_existence}. \begin{lemma} For our proposed game $\mathcal{G}$, the payoff functions in (\ref{reward}) are bounded, and the number of players, the state space, and the action space are finite. Therefore, $\mathcal{G}$ is a finite game and hence an SPNE exists. This follows from Selten's theorem, which states that every finite extensive form game with perfect recall possesses an SPNE in which the players use behavioral strategies. \end{lemma} Here, it is important to note that, for finite dynamic games of perfect information, any backward induction solution is an SPNE~\cite{walid_book}. Therefore, given the fact that, for our proposed game $\mathcal{G}$, each UAV aims at maximizing its expected sum of \emph{discounted rewards} at each stage $t$ as given in (\ref{reward}), one can guarantee that the convergence strategy profile corresponds to an SPNE of game $\mathcal{G}$. This completes the proof. \end{proof} Moreover, it is important to note that the convergence complexity of the proposed deep RL algorithm for reaching an SPNE is $O(J \times A^2)$. Next, we analyze the computational complexity of the proposed deep RL algorithm for practical scenarios in which the number of UAVs is relatively small. \begin{theorem}\label{proposition_complexity} \emph{For practical network scenarios, the computational complexity of the proposed training deep RL algorithm is $O(A^3)$ and reduces to $O(A^2)$ when considering a fixed altitude for the UAVs, where $A$ is the number of discretized unit areas.} \end{theorem} \begin{proof} Consider the case in which the UAVs can move with a fixed step size in a 3D space. For such scenarios, the state vector $\boldsymbol{v}'_j(t)$ is defined as: \begin{align}\label{input_3D} \boldsymbol{v}'_j(t)\textrm{=}\Big[\{\delta_{j\textrm{,}l\textrm{,}a}(t)\textrm{,} \theta_{j\textrm{,}l\textrm{,}a}(t)\}_{l=1}^{L_j}\textrm{,} \theta_{j\textrm{,}d_j\textrm{,}a}(t)\textrm{,} \{x_j(t)\textrm{,} y_j(t)\textrm{,} h_j(t)\}_{j \in \mathcal{J}} \Big]\textrm{.} \end{align} For each state $\boldsymbol{v}'_j(t)$, the action of UAV $j$ is a function of the location, transmission power level and cell association vector of all other UAVs in the network. Nevertheless, the number of possible locations of other UAVs in the network is much larger than the possible number of transmission power levels and the size of the cell association vector of those UAVs. Therefore, one can consider the number of possible locations of other UAVs only when analyzing the convergence complexity of the proposed training algorithm. Moreover, for practical scenarios, the total number of UAVs in a given area is considered to be relatively small as compared to the number of discretized unit areas, i.e., $J \ll A$ (3GPP admission control policy for cellular-connected UAVs~\cite{3GPP_standards}). Therefore, given the fact that the UAVs take actions in a parallel fashion, the computational complexity of our proposed algorithm is $O(A^3)$ when the UAVs update their $x$, $y$ and $z$ coordinates and reduces to $O(A^2)$ when considering fixed altitudes for the UAVs. This completes the proof.
\end{proof} From Theorem \ref{proposition_complexity}, we can conclude that the convergence time of the proposed training algorithm is significantly reduced when considering a fixed altitude for the UAVs. This in essence is due to the reduction of the state space dimension when updating the $x$ and $y$ coordinates only. It is important to note here that there exists a tradeoff between the computational complexity of the proposed training algorithm and the resulting network performance. In essence, updating the 3D coordinates of the UAVs at each step $t$ allows the UAVs to better explore the space, thus providing more opportunities for maximizing their corresponding utility functions. Therefore, from both Theorems~\ref{proposition_complexity} and~\ref{theorem_altitude}, the UAVs can update their $x$ and $y$ coordinates only during the learning phase while operating within the upper and lower altitude bounds derived in Theorem~\ref{theorem_altitude}. \section{Simulation Results and Analysis}\label{simulation} \begin{table}[t!] \scriptsize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \caption[table]{\scriptsize{\\SYSTEM PARAMETERS}}\label{parameters} \centering \tabcolsep=0.06cm \scalebox{0.99}{ \begin{tabular}{|c|c|c|c|} \hline \textbf{Parameters} & \textbf{Values} & \textbf{Parameters} & \textbf{Values} \\ \hline UAV max transmit power $(\overline{P}_j)$ & 20 dBm & SINR threshold $(\overline{\Gamma}_j)$ & -3 dB \\ \hline UE transmit power $(\widehat{P}_q)$ & 20 dBm & Learning rate $(\lambda_j)$ & 0.01\\ \hline Noise power spectral density $(N_0)$ & -174 dBm/Hz & RB bandwidth $(B_c)$& 180 kHz\\ \hline Total bandwidth $(B)$ & 20 MHz & \# of interferers $(L)$ & 2\\ \hline Packet arrival rate $(\lambda_{j,s})$ & (0,1) & Packet size $(\nu)$ & 2000 bits\\ \hline Carrier frequency $(\hat{f})$ & 2 GHz & Discount factor $(\gamma)$ & 0.7 \\ \hline \# of hidden layers & 2 & Step size $(\widetilde{a}_j)$ & 40 m \\ \hline Leaky parameter/layer $(\omega_j^{(n)})$ & 0.99, 0.99 & $\epsilon$ & 0.3\\ \hline \end{tabular} } \vspace{-0.24cm} \end{table} For our simulations, we consider an 800 m $\times$ 800 m square area divided into 40 m $\times$ 40 m grid areas, in which we randomly and uniformly deploy 15 BSs. All statistical results are averaged over several independent testing iterations during which the initial locations and destinations of the UAVs and the locations of the BSs and the ground UEs are randomized. The maximum transmit power for each UAV is discretized into 5 equally separated levels. We consider an uncorrelated Rician fading channel with parameter $\widehat{K}=1.59$~\cite{rician_fading}. The external input of the deep ESN architecture, $\boldsymbol{v}_j(t)$, is a function of the number of UAVs and thus the number of hidden nodes per layer, $N_{j,R}^{(n)}$, varies with the number of UAVs. For instance, $N_{j,R}^{(n)}= 12$ and $6$ for $n=1$ and $2$, respectively, for a network size of 1 and 2 UAVs, and 20 and 10 for a network size of 3, 4, and 5 UAVs. Table~\ref{parameters} summarizes the main simulation parameters.
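For readers who wish to reproduce the setup, the following sketch collects the main entries of Table~\ref{parameters} into a Python configuration, with the dBm power levels converted to watts; the dictionary layout is ours and purely illustrative.

```python
def dbm_to_watts(p_dbm):
    """Convert a power level from dBm to watts (20 dBm = 0.1 W)."""
    return 10 ** ((p_dbm - 30) / 10)

params = {
    "uav_max_tx_power_W": dbm_to_watts(20),
    "ue_tx_power_W": dbm_to_watts(20),
    "sinr_threshold_dB": -3,
    "noise_psd_dBm_per_Hz": -174,
    "rb_bandwidth_Hz": 180e3,
    "total_bandwidth_Hz": 20e6,
    "carrier_frequency_Hz": 2e9,
    "packet_size_bits": 2000,
    "discount_factor": 0.7,
    "learning_rate": 0.01,
    "epsilon": 0.3,
    "leaky_parameters": (0.99, 0.99),
    "step_size_m": 40,
    "num_interferers": 2,
}
```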
\begin{figure}[!t] \begin{subfigure}{1.0\textwidth} \centering \includegraphics[width=10cm]{figures/maximum_altitude} \caption{} \label{maximum_altitude} \end{subfigure}\\ \begin{subfigure}{1.0\textwidth} \centering \includegraphics[width=10cm]{figures/minimum_altitude} \caption{} \label{minimum_altitude} \end{subfigure} \vspace{-0.3cm} \caption{The (a) upper bound for the optimal altitude of the UAVs as a function of the SINR threshold value $(\bar{\Gamma})$ and for different transmit power levels and ground network density and (b) lower bound for the optimal altitude of the UAVs as a function of the interference threshold value $(\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a})$ and for different transmit power levels.}\label{altitude_results} \vspace{-0.2cm} \end{figure} Fig.~\ref{maximum_altitude} shows the upper bound for the optimal altitude of UAV $j$ as a function of the SINR threshold value, $\bar{\Gamma}$, and for different transmit power levels and ground network density, based on Theorem~\ref{theorem_altitude}. On the other hand, Fig.~\ref{minimum_altitude} shows the lower bound for the optimal altitude of UAV $j$ as a function of the interference threshold value, $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$, and for different transmit power levels, based on Theorem~\ref{theorem_altitude}. From Figs.~\ref{maximum_altitude} and~\ref{minimum_altitude}, we can deduce that the optimal altitude range of a given UAV is a function of the network design parameters, the ground network data requirements, the density of the ground network, and its state $\boldsymbol{v}_j(t)$. For instance, the upper bound on the UAV's optimal altitude decreases as $\bar{\Gamma}$ increases while its lower bound decreases as $\sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}$ increases. Moreover, the maximum altitude of the UAV decreases as the ground network gets denser, while its lower bound increases as the ground network data requirements increase. Thus, in such scenarios, a UAV should operate at higher altitudes. A UAV should also operate at higher altitudes when its transmit power level increases due to the increase in the lower and upper bounds of its optimal altitude. \vspace{-0.1cm} \begin{figure}[t!] \begin{center} \centering \includegraphics[width=11cm]{figures/snapshot} \vspace{-0.4cm} \caption{Path of a UAV for our approach and shortest path scheme.}\label{snapshot} \vspace{-0.5cm} \end{center} \end{figure} \begin{table}[t!]\footnotesize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \caption[table]{\scriptsize{\\Performance assessment for one UAV}}\label{snapshot_table} \centering \tabcolsep=0.1cm \scalebox{0.99}{ \begin{tabular}{|c|c|c|c|} \hline & \# of steps & delay (ms) & average rate per UE (Mbps) \\ \hline Proposed approach & 32 & 6.5 & 0.95\\ \hline Shortest path & 32 & 12.2 & 0.76\\ \hline \end{tabular} } \vspace{-0.24cm} \end{table} Fig.~\ref{snapshot} shows a snapshot of the path of a single UAV resulting from our approach and from a shortest path scheme. Unlike our proposed scheme, which accounts for other wireless metrics during path planning, the objective of the UAVs in the shortest path scheme is to reach their destinations with the minimum number of steps. Table~\ref{snapshot_table} presents the performance results for the paths shown in Fig.~\ref{snapshot}.
From Fig.~\ref{snapshot}, we can see that, for our proposed approach, the UAV selects a path away from the densely deployed area while maintaining proximity to its serving BS in a way that would minimize the number of steps required to reach its destination. This path will minimize the interference level that the UAV causes on the ground UEs as well as its wireless latency (Table~\ref{snapshot_table}). From Table~\ref{snapshot_table}, we can see that our proposed approach achieves a 25\% increase in the average rate per ground UE and a 47\% decrease in the wireless latency as compared to the shortest path, while requiring the same number of steps for the UAV to reach its destination. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/scalability} \vspace{-0.4cm} \caption{Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE as compared to the shortest path approach, for different numbers of UAVs.}\label{scalability} \vspace{-0.5cm} \end{center} \end{figure} \begin{table}[t!]\footnotesize \setlength{\belowcaptionskip}{0pt} \setlength{\abovedisplayskip}{3pt} \captionsetup{belowskip=0pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \setlength{\abovecaptionskip}{2pt} \renewcommand{\captionlabelfont}{\small} \captionsetup{justification=centering} \caption[table]{\scriptsize{\\The required number of steps for all UAVs to reach their corresponding destinations based on our proposed approach and that of the shortest path scheme for different numbers of UAVs}}\label{steps_table} \centering \tabcolsep=0.1cm \scalebox{0.99}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \# of steps & 1 UAV & 2 UAVs & 3 UAVs & 4 UAVs & 5 UAVs\\ \hline Proposed approach & 4 & 4 & 6 & 7 & 8\\ \hline Shortest path & 4 & 4 & 6 & 6 & 7\\ \hline \end{tabular} } \vspace{-0.24cm} \end{table} Fig.~\ref{scalability} compares the average values of the (a) wireless latency per UAV and (b) rate per ground UE resulting from our proposed approach and the baseline shortest path scheme. Moreover, Table~\ref{steps_table} compares the number of steps required by all UAVs to reach their corresponding destinations for the scenarios presented in Fig.~\ref{scalability}. From Fig.~\ref{scalability} and Table~\ref{steps_table}, we can see that, compared to the shortest path scheme, our approach achieves a lower wireless latency per UAV and a higher rate per ground UE for different numbers of UAVs while requiring a number of steps that is comparable to the baseline. In fact, our scheme provides a better tradeoff between energy efficiency, wireless latency, and ground UE data rate compared to the shortest path scheme. For instance, for 5 UAVs, our scheme achieves a 37\% increase in the average achievable rate per ground UE, a 62\% decrease in the average wireless latency per UAV, and a 14\% increase in energy efficiency. Indeed, one can adjust the multi-objective weights of our utility function based on several parameters such as the rate requirements of the ground network, the power limitation of the UAVs, and the maximum tolerable wireless latency of the UAVs. Moreover, Fig.~\ref{scalability} shows that, as the number of UAVs increases, the average delay per UAV increases and the average rate per ground UE decreases, for all schemes. This is due to the increase in the interference level on the ground UEs and other UAVs as a result of the LoS links between the UAVs and the BSs. \begin{figure}[t!]
\begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/altitude} \vspace{-0.4cm} \caption{Performance assessment of the proposed approach in terms of average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for different altitudes of the UAVs.}\label{altitude} \vspace{-0.9cm} \end{center} \end{figure} Fig.~\ref{altitude} studies the effect of the UAVs' altitude on the average values of the (a) wireless latency per UAV and (b) rate per ground UE for different utility functions. From Fig.~\ref{altitude}, we can see that, as the altitude of the UAVs increases, the average wireless latency per UAV increases for all studied utility functions. This is mainly due to the increase in the distance of the UAVs from their corresponding serving BSs, which accentuates the path loss effect. Moreover, higher UAV altitudes result in a higher average data rate per ground UE for all studied utility functions, mainly due to the decrease in the interference level that the UAVs cause on neighboring BSs. Here, there exists a tradeoff between minimizing the average wireless delay per UAV and maximizing the average data rate per ground UE. Therefore, alongside the multi-objective weights, the altitude of the UAVs can be varied such that the ground UE rate requirements are met while minimizing the wireless latency for each UAV based on its mission objective. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/power_densification} \vspace{-0.4cm} \caption{Effect of the ground network densification on the average transmit power level of the UAVs along their paths.}\label{power_densification} \vspace{-0.9cm} \end{center} \end{figure} Fig.~\ref{power_densification} shows the average transmit power level per UAV along its path as a function of the number of BSs considering two utility functions, one for minimizing the average wireless latency for each UAV and the other for minimizing the interference level on the ground UEs. From Fig.~\ref{power_densification}, we can see that network densification has an impact on the transmission power level of the UAVs. For instance, when minimizing the wireless latency of each UAV along its path, the average transmit power level per UAV increases from 0.04 W to 0.06 W as the number of ground BSs increases from 10 to 30. In essence, the increase in the transmit power level is the result of the increase in the interference level from the ground UEs as the ground network becomes denser. As a result, the UAVs will transmit using a larger transmission power level so as to minimize their wireless transmission delay. On the other hand, the average transmit power level per UAV decreases from 0.036 W to 0.029 W in the case of minimizing the interference level caused on neighboring BSs. This is due to the fact that, as the number of BSs increases, the interference level caused by each UAV on the ground network increases, thus requiring each UAV to decrease its transmit power level. Note that, when minimizing the wireless latency, the average transmit power per UAV is always larger than in the case of minimizing the interference level, irrespective of the number of ground BSs. Therefore, the transmit power level of the UAVs is a function of their mission objective and the number of ground BSs. \begin{figure}[t!]
\begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/densification} \vspace{-0.4cm} \caption{Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for a fixed altitude of 120 m.}\label{densification} \vspace{-0.7cm} \end{center} \end{figure} Fig.~\ref{densification} presents the (a) wireless latency per UAV and (b) rate per ground UE for different utilities as a function of the number of BSs and for a fixed altitude of 120 m. From this figure, we can see that, as the ground network becomes denser, the average wireless latency per UAV increases and the average rate per ground UE decreases for all considered cases. For instance, when the objective is to minimize the interference level along with energy efficiency, the average wireless latency per UAV increases from 13 ms to 47 ms and the average rate per ground UE decreases from 0.86 Mbps to 0.48 Mbps as the number of BSs increases from 10 to 30. This is due to the fact that a denser network results in higher interference on the UAVs as well as on other UEs in the network. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/densification_altitude} \vspace{-0.4cm} \caption{Effect of the ground network densification on the average (a) wireless latency per UAV and (b) rate per ground UE for different utility functions and for various altitudes of the UAVs.}\label{densification_altitude} \vspace{-0.7cm} \end{center} \end{figure} Fig.~\ref{densification_altitude} investigates the (a) wireless latency per UAV and (b) rate per ground UE for different values of the UAVs' altitude and as a function of the number of BSs. From this figure, we can see that, as the UAV altitude increases and/or the ground network becomes denser, the average wireless latency per UAV increases. For instance, the delay increases by 27\% as the altitude of the UAVs increases from 120 to 240 m for a network consisting of 20 BSs, and increases by 120\% as the number of BSs increases from 10 to 30 for a fixed altitude of 180 m. This essentially follows from Theorem~\ref{theorem_altitude} and the results in Fig.~\ref{maximum_altitude}, which show that the maximum altitude of the UAV decreases as the ground network gets denser and, thus, that the UAVs should operate at a lower altitude when the number of BSs increases from 10 to 30. Moreover, the average rate per ground UE decreases as the ground network becomes denser due to the increase in the interference level, and increases as the altitude of the UAVs increases. Therefore, the resulting network performance depends highly on both the UAVs' altitude and the number of BSs in the network. For instance, in the case of a dense ground network, the UAVs need to fly at a lower altitude for applications in which the wireless transmission latency is more critical and at a higher altitude in scenarios in which a minimum achievable data rate for the ground UEs is required. \begin{figure}[t!]
\begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/interferers} \vspace{-0.4cm} \caption{The average rate per ground UE as a function of the number of interferer BSs in the state definition $(L_j)$.}\label{interferers} \vspace{-0.9cm} \end{center} \end{figure} Fig.~\ref{interferers} shows the effect of varying the number of nearest BSs ($L_j$) in the observed network state of UAV $j$, $\boldsymbol{v}_j(t)$, on the average data rate per ground UE for different utility functions. From Fig.~\ref{interferers}, we can see an improvement in the average rate per ground UE as the number of nearest BSs in the state definition increases. For instance, in scenarios in which the UAVs aim at minimizing the interference level they cause on the ground network along their paths, the average rate per ground UE increases by 28\% as the number of BSs in the state definition increases from 1 to 5. This gain results from the fact that, as $L_j$ increases, the UAVs get a better sense of their surrounding environment and thus can better select their next location such that the interference level they cause on the ground network is minimized. It is important to note here that, as $L_j$ increases, the size of the external input ($\boldsymbol{v}_j$) increases, thus requiring a larger number of neurons in each layer. This in turn increases the number of required iterations for convergence. Therefore, a tradeoff exists between improving the performance of the ground UEs and the running complexity of the proposed algorithm. \begin{figure}[t!] \begin{center} \centering \includegraphics[width=11cm, scale=1.9]{figures/learning_rate} \vspace{-0.31cm} \caption{Effect of the learning rate on the convergence of offline training.}\label{learning_rate} \vspace{-0.9cm} \end{center} \end{figure} Fig.~\ref{learning_rate} shows the average of the error function $e_j(\boldsymbol{v}_j(t))$ resulting from the offline training phase as a function of the number of training iterations (plotted every 20 iterations), for different values of the learning rate, $\lambda$. The learning rate determines the step size the algorithm takes to reach the optimal solution and, thus, it impacts the convergence rate of our proposed framework. From Fig.~\ref{learning_rate}, we can see that small values of the learning rate, i.e., $\lambda =0.0001$, result in a slow speed of convergence. On the other hand, for large values of the learning rate, such as $\lambda=0.1$, the error function decays fast for the first few iterations but then remains constant. Here, $\lambda=0.1$ does not lead to convergence during the testing phase, but $\lambda =0.0001$ and $\lambda=0.01$ result in convergence, though requiring a different number of training iterations. In fact, a large learning rate can cause the algorithm to diverge from the optimal solution. This is because large initial learning rates decay the loss function quickly at first but then make the model get stuck at a particular region of the optimization space instead of better exploring it. Clearly, our framework achieves better performance for $\lambda=0.01$, as compared to smaller and larger values of the learning rate. We also note that the error function does not reach the value of zero during the training phase. This is because, for our approach, we adopt the early stopping technique to avoid overfitting, which occurs when the training error decreases at the expense of an increase in the value of the test error~\cite{RNN_survey}.
\section{Conclusion}\label{conclusion} \vspace{-0.1cm} In this paper, we have proposed a novel interference-aware path planning scheme that allows cellular-connected UAVs to minimize the interference they cause on a ground network as well as their wireless transmission latency while transmitting online mission-related data. We have formulated the problem as a noncooperative game in which the UAVs are the players. To solve the game, we have proposed a deep RL algorithm based on ESN cells, which is guaranteed to reach an SPNE, if it converges. The proposed algorithm enables each UAV to decide on its next location, transmission power level, and cell association vector in an autonomous manner, thus adapting to the changes in the network. Simulation results have shown that the proposed approach achieves a better wireless latency per UAV and rate per ground UE while requiring a number of steps that is comparable to the shortest path scheme. The results have also shown that a UAV's altitude plays a vital role in minimizing the interference level on the ground UEs as well as the wireless transmission delay of the UAV. In particular, we have shown that the altitude of the UAV is a function of the ground network density, the UAV's objective, and the actions of other UAVs in the network. \section*{Appendix} \subsection{Proof of Theorem \ref{theorem_altitude}} For a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the upper bound for the altitude of UAV $j$ can be derived when UAV $j$ aims at minimizing its delay function only, i.e., $\vartheta '=0$. For such scenarios, UAV $j$ should guarantee a minimum value, $\overline{\Gamma}_j$, for the SINR $\Gamma_{j,s,c,a}$ of the transmission link from UAV $j$ to BS $s$ on RB $c$ at location $a$, as given in constraint (\ref{cons_7}). Therefore, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\overline{\Gamma}_j$ and beyond which (\ref{cons_7}) is violated. The derivation of the expression of $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ proceeds as follows: \begin{align} \sum_{c=1}^{C_{j,s}(t)}\Gamma_{j,s,c,a} = \overline{\Gamma}_j, \end{align} \begin{align} \sum_{c=1}^{C_{j,s}(t)} \frac{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)}\cdot g_{j,s,c,a}(t)}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2 \cdot (I_{j,s,c}(t)+B_cN_0)}= \overline{\Gamma}_j, \end{align} \begin{align} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\left(\frac{4 \pi \hat{f} d_{j,s,a}^{\mathrm{max}}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)} \frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} = \overline{\Gamma}_j, \end{align} \begin{align} (d_{j,s,a}^{\mathrm{max}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t)} \cdot \frac{1}{\overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0}, \end{align} \noindent where $d_{j,s,a}$ is the Euclidean distance between UAV $j$ and its serving BS $s$ at location $a$.
Assuming that the altitude of BS $s$ is negligible, i.e., $z_s=0$, $\hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ can be expressed as: \begin{multline} \hat{h}_j^{\mathrm{max}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \\ \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t))}{C_{j,s}(t) \cdot \overline{\Gamma}_j \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2} \cdot \sum_{c=1}^{C_{j,s}(t)}\frac{g_{j,s,c,a}(t)}{I_{j,s,c}(t)+B_cN_0} - (x_j - x_s)^2 - (y_j - y_s)^2}, \end{multline} \noindent where $x_s$ and $y_s$ correspond to the $x$ and $y$ coordinates of the serving BS $s$ and $\hat{c}$ is the speed of light. On the other hand, for a given network state $\boldsymbol{v}_j(t)$ and a particular action $\boldsymbol{z}_j(t)$, the lower bound for the altitude of UAV $j$ can be derived when the objective function of UAV $j$ is to minimize the interference level it causes on the ground network only, i.e., $\phi '=0$ and $\varsigma=0$. For such scenarios, the interference level that UAV $j$ causes on neighboring BS $r$ at location $a$ should not exceed a predefined value given by $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$\footnote{$\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ is a network design parameter that is a function of the ground network density, number of UAVs in the network and the data rate requirements of the ground UEs. The value of $\bar{I}_{j,r,c,a}$ is in fact part of the admission control policy which limits the number of UAVs in the network and their corresponding interference level on the ground network~\cite{3GPP_standards}.}. Therefore, $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the altitude at which UAV $j$ achieves $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$ and below which the level of interference it causes on BS $r$ exceeds the value of $\sum_{c=1}^{C_{j,s}(t)}\bar{I}_{j,r,c,a}$. The derivation of the expression of $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ proceeds as follows: \begin{align} \sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) h_{j,r,c,a}(t)}{C_{j,s}(t)}= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a}, \end{align} \begin{align}\label{all_interferers} \sum_{c=1}^{C_{j,s}(t)}\sum_{r=1, r\neq s}^{S} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2 }= \sum_{c=1}^{C_{j,s}(t)} \sum_{r=1, r\neq s}^{S}\bar{I}_{j,r,c,a}. \end{align} To find $\hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$, we need to solve (\ref{all_interferers}) for each neighboring BS $r$ separately.
Therefore, for a particular neighboring BS $r$, (\ref{all_interferers}) can be written as: \begin{align} \sum_{c=1}^{C_{j,s}(t)} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2}= \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}, \end{align} \begin{align} \frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f} d_{j,r,a}^{\mathrm{min}}}{\hat{c}}\right)^2} = \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}, \end{align} \begin{align} (d_{j,r,a}^{\mathrm{min}})^2=\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}}, \end{align} \noindent where $d_{j,r,a}$ is the Euclidean distance between UAV $j$ and its neighboring BS $r$ at location $a$. Assuming that the altitude of BS $r$ is negligible, i.e., $z_r=0$, we have: \begin{align} \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \sqrt{\frac{\widehat{P}_{j,s,a}(\boldsymbol{v}_j(t)) \cdot \sum_{c=1}^{C_{j,s}(t)} g_{j,r,c,a}(t)}{C_{j,s}(t) \cdot \left(\frac{4 \pi \hat{f}}{\hat{c}}\right)^2 \cdot \sum_{c=1}^{C_{j,s}(t)} \bar{I}_{j,r,c,a}} - (x_j - x_r)^2 - (y_j - y_r)^2}, \end{align} \noindent where $x_r$ and $y_r$ correspond to the $x$ and $y$ coordinates of the neighboring BS $r$. Therefore, $\hat{h}_{j}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ corresponds to the maximum value of $\hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))$ among all neighboring BSs $r$ and is expressed as: \begin{align} \hat{h}_j^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t))= \max_r \hat{h}_{j,r}^{\mathrm{min}}(\boldsymbol{v}_j(t)\textrm{,} \boldsymbol{z}_j(t)\textrm{,} \boldsymbol{z}_{-j}(t)). \end{align} This completes the proof. \def\baselinestretch{0.92} \bibliographystyle{IEEEtran}
\section{Holography} \label{sec:holography} A harmonic traveling wave at frequency $\omega$ can be described by a complex-valued wave function, \begin{equation} \label{eq:wave} \psi(\vec{r},t) = u(\vec{r}) \, e^{i \varphi(\vec{r})} e^{-i \omega t}, \end{equation} that is characterized by real-valued amplitude and phase profiles, $u(\vec{r})$ and $\varphi(\vec{r})$, respectively. Eq.~\eqref{eq:wave} can be generalized for vector fields by incorporating separate amplitude and phase profiles for each of the Cartesian coordinates. The field propagates according to the wave equation, \begin{equation} \label{eq:waveequation} \nabla^2 \psi = -k^2 \psi , \end{equation} where the wave number $k$ is the magnitude of the local wave vector, \begin{equation} \label{eq:wavevector} \vec{k}(\vec{r}) = \nabla \varphi(\vec{r}). \end{equation} A hologram is produced by illuminating an object with an incident wave, $\psi_0(\vec{r},t)$, whose amplitude and phase profiles are $u_0(\vec{r})$ and $\varphi_0(\vec{r})$, respectively. The object scatters some of that wave to produce $\psi_s(\vec{r},t)$, which propagates to the imaging plane. In-line holography uses the remainder of the incident field as a reference wave that interferes with the scattered field to produce a superposition \begin{equation} \label{eq:superposition} \psi(\vec{r},t) = \psi_0(\vec{r},t) + \psi_s(\vec{r},t) \end{equation} whose properties are recorded. The wave equation then can be used to numerically reconstruct the three-dimensional field from its value in the plane. In this way, numerical back-propagation can provide information about the object's position relative to the recording plane as well as its size, shape and properties. The nature of the recording determines how much information can be recovered. \subsection{Optical holography: Intensity holograms} \label{sec:optical} Optical cameras record the intensity of the field in the plane, and so discard all of the information about the wave's direction of propagation that is encoded in the phase. Interfering the scattered wave with a reference field yields an intensity distribution, \begin{subequations} \label{eq:intensity} \begin{align} I(\vec{r}) & = \abs{\psi(\vec{r},t)}^2 \\ & = \abs{ u_0(\vec{r}) \, e^{i\varphi_0(\vec{r})} + u_s(\vec{r}) \, e^{i\varphi_s(\vec{r})}}^2, \end{align} \end{subequations} that blends information about both the amplitude and the phase of the scattered wave into a single scalar field. The properties of the scattered field can be interpreted most easily if the incident field can be modeled as a unit-amplitude plane wave, \begin{subequations} \label{eq:scatteringmodel} \begin{equation} \psi_0(\vec{r},t) \approx e^{ikz} e^{-i\omega t}, \end{equation} in which case, \begin{equation} I(\vec{r}) \approx \abs{1 + u_s(\vec{r}) \, e^{i \varphi_s(\vec{r})}}^2. \end{equation} If, furthermore, the scattering process may be modeled with a transfer function, \begin{equation} \psi_s(\vec{r}) = T(\vec{r} - \vec{r}_s) \, \psi_0(\vec{r}_s), \end{equation} \end{subequations} then $I(\vec{r})$ can be used to estimate parameters of $T(\vec{r})$, including the position and properties of the scatterer. This model has proved useful for interpreting in-line holograms of micrometer-scale colloidal particles \cite{lee07a}. Fitting to Eq.~\eqref{eq:scatteringmodel} can locate a colloidal sphere in three dimensions with nanometer precision \cite{fung11,krishnatreya14a}.
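As a toy numerical illustration of Eqs.~\eqref{eq:intensity} and \eqref{eq:scatteringmodel}, the snippet below forms a normalized in-line hologram from a given scattered wave. The scattered amplitude and phase arrays are assumed to come from some scattering model (for colloidal spheres, typically a Lorenz-Mie calculation), which is outside the scope of this sketch.

```python
import numpy as np

def inline_hologram(u_s, phi_s):
    """Normalized in-line hologram of Eq. (scatteringmodel): intensity of
    a unit-amplitude plane-wave reference superposed with the scattered
    wave u_s * exp(i * phi_s) in the recording plane. Expanding the
    modulus gives 1 + 2 Re(psi_s) + |psi_s|**2, which shows how amplitude
    and phase blend into a single scalar field."""
    psi_s = u_s * np.exp(1j * phi_s)
    return np.abs(1.0 + psi_s)**2
```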
The same fit yields estimates for the sphere's diameter and refractive index to within a part per thousand \cite{krishnatreya14a}. Generalizations of this method \cite{wang14using} work comparably well for tracking clusters of particles \cite{fung12,perry12,fung13}. \subsection{Rayleigh-Sommerfeld back propagation} \label{sec:rayleighsommerfeld} The success of fitting methods is based on \emph{a priori} knowledge of the nature of the scatterer, which is encoded in the transfer function, $T(\vec{r})$. In instances where such knowledge is not available, optical holograms can also be used as a basis for reconstructing the scattered field, $\psi_s(\vec{r},t)$, in three dimensions. This reconstruction can serve as a proxy for the structure of the sample. One particularly effective reconstruction method \cite{lee07,cheong10} is based on the Rayleigh-Sommerfeld diffraction integral \cite{goodman05}. The field $\psi(x, y, 0, t)$ in the imaging plane, $z = 0$, propagates to point $\vec{r}$ in plane $z$ as \cite{goodman05} \begin{subequations} \label{eq:rsconvolution} \begin{equation} \psi(\vec{r},t) = \psi(x, y, 0, t) \otimes h_z(x, y), \end{equation} where \begin{equation} h_z(x, y) = \frac{1}{2\pi} \, \frac{d}{dz} \frac{e^{i kr}}{r} \end{equation} \end{subequations} is the Rayleigh-Sommerfeld propagator. The convolution in Eq.~\eqref{eq:rsconvolution} is most easily computed with the Fourier convolution theorem using the Fourier transform of the propagator, \begin{equation} \label{repropagator} H_z(\vec{q}) = e^{-i z \sqrt{k^2 - q^2}}. \end{equation} Equation~\eqref{eq:rsconvolution} can be used to numerically propagate a measured wave back to its source, thereby reconstructing the three-dimensional field responsible for the recorded pattern. \begin{figure}[t!] \includegraphics[width=0.9\columnwidth]{schematic02} \caption{(color online) Schematic representation of the scanning acoustic camera. A polargraph composed of two stepping motors and a timing belt translates a microphone across the field of view in a serpentine pattern. A harmonic sound field is created by an audio speaker driven by a signal generator. The signal detected by the microphone is analyzed by a lock-in amplifier to obtain the amplitude $u(\vec{r})$ and phase $\varphi(\vec{r})$ of the sound's pressure field at each position $\vec{r} = (x, y)$.} \label{fig:schematic} \end{figure} \subsection{Acoustic holography: Complex holograms} \label{sec:acousticholography} Early implementations of acoustic holography resembled optical holography in recording the intensity of the sound field \cite{mueller1971acoustic}. Sound waves of modest intensity, however, are fully characterized by a scalar pressure field, $p(\vec{r},t)$, whose amplitude and phase can be measured directly. In that case, Rayleigh-Sommerfeld back-propagation can be used to reconstruct the complex sound field with an accuracy that is limited by instrumental noise and by the size of the recording area. For appropriate systems, the transfer-function model can be used to obtain information about scatterers in the field. Whereas optical holography benefits from highly developed camera technology, implementations of quantitative acoustic holography must confront a lack of suitable area detectors for sound waves. Commercial acoustic transducer arrays typically include no more than a few dozen elements. Conventional area scanners yield excellent results over small areas \cite{melde2016holograms} but become prohibitively expensive for large-area scans.
We therefore introduce flexible and cost-effective techniques for recording complex-valued acoustic holograms. \section{Scanning acoustic camera} \label{sec:instrument} Figure~\ref{fig:schematic} depicts our implementation of a scanning acoustic camera for holographic imaging. A signal generator (Stanford Research Systems DS345) drives an audio speaker at a desired frequency $\omega$. The resulting sound wave propagates to a microphone whose output is analyzed by a dual-phase lock-in amplifier (Stanford Research Systems SR830) referenced to the signal generator. Our reference implementation operates at \SI{8}{\kilo\hertz}, which corresponds to a wavelength of \SI{42.5}{\mm}. The lock-in amplifier records the amplitude and phase of the pressure field at the microphone's position. We translate the microphone across the $x$-$y$ plane using a flexible low-cost two-dimensional scanner known as a polargraph, which initially was developed for art installations \cite{lehni02}. Correlating the output of the lock-in amplifier with the position of the polargraph yields a map of the complex pressure field in the plane. \subsection{Polargraph for flexible wide-area scanning} \label{sec:polargraph} The polargraph consists of two stepper motors (Nema 17, 200 steps/rev) with toothed pulleys (16 tooth, GT2, \SI{7.5}{\mm} diameter) that control the movement of a flexible GT2 timing belt, as indicated in Fig.~\ref{fig:schematic}. The acoustic camera's microphone (KY0030 high-sensitivity sound module) is mounted on a laser-cut support that hangs under gravity from the middle of the timing belt. The motors' rotations determine the lengths, $s_1(t)$ and $s_2(t)$, of the two chords of the timing belt at time $t$, and therefore the position of the microphone, $\vec{r}(t)$. If the motors are separated by distance $L$ and the microphone initially is located a distance $y_0$ below their midpoint, then \begin{subequations} \label{eq:position} \begin{align} \vec{r}(t) & = x(t) \, \hat{x} + y(t) \, \hat{y}, \quad\text{where} \\ x(t) & = \frac{s_1^2 - s_2^2}{2L} \quad \text{and}\\ y(t) & = \sqrt{\frac{s_1^2+s_2^2}{2}-\frac{L^2}{4}-x^2(t)}-y_0. \end{align} \end{subequations} In our implementation, $L = \SI{1}{\meter}$ and $y_0 = \SI{10}{\cm}$. The stepper motors are controlled with an Arduino microcontroller that is addressed by software running on a conventional computer \footnote{The open-source software for this instrument is available online at \url{http://github.com/davidgrier/acam}}. The smoothest operation is obtained by running the stepper motors at constant rates. This results in the microphone translating through a serpentine pattern, as indicated in Fig.~\ref{fig:schematic}. Horizontal sweeps of \SI{0.6}{\meter} are separated by vertical steps of $\Delta y = \SI{5}{\mm}$, and the motors' step rates are configured to maintain a constant scan speed of \SI{46.5}{\mm\per\second}. The polargraph is deployed by mounting the stepper motors at the upper corners of the area to be scanned. Our implementation has the motors mounted on a frame constructed from extruded 1-inch square aluminum T-slot stock, as indicated in Fig.~\ref{fig:schematic}. The scan area is limited by the length of the timing belt and by the requirement that both chords of the belt remain taut throughout the scan. This can be facilitated by adding weight to the microphone's mounting bracket or by adding a tensioning cable. Mechanical vibrations during the scan are substantially smaller than the effective pixel size of the resulting acoustic holograms.
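The coordinate conversion in Eq.~\eqref{eq:position} is compact enough to state explicitly. The sketch below is our illustration in Python: the function and variable names are invented, and the chord lengths $s_1$ and $s_2$ would in practice be tracked by counting motor steps.
\begin{verbatim}
import math

L_SEP = 1.0   # m, motor separation L quoted in the text
Y0 = 0.10     # m, initial microphone drop y_0 below the midpoint

def microphone_position(s1, s2, L=L_SEP, y0=Y0):
    # Eq. (position): microphone coordinates from the belt-chord lengths
    x = (s1 ** 2 - s2 ** 2) / (2.0 * L)
    y = math.sqrt((s1 ** 2 + s2 ** 2) / 2.0
                  - L ** 2 / 4.0 - x ** 2) - y0
    return x, y

# Equal chords place the microphone on the vertical midline:
s0 = math.sqrt(L_SEP ** 2 / 4.0 + Y0 ** 2)  # starting chord length
print(microphone_position(s0, s0))  # approximately (0.0, 0.0)
\end{verbatim}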
\subsection{Lock-in detection} \label{sec:lockin} While the polargraph scans the microphone across the field of view, the lock-in amplifier reports the amplitude and phase of the signal detected by the microphone. Setting the lock-in's time constant to \SI{30}{\ms} effectively suppresses background noise yet is short enough to enable independent measurements at \SI{0.1}{\second} intervals. Given the translation speed, this corresponds to an effective spatial resolution of \SI{4.65 \pm 0.05}{\mm}. The vertical separation, $\Delta y$, between horizontal sweeps is set accordingly. We report the relative amplitude of the instrument's response with full scale corresponding to roughly \SI{1}{\pascal}. The amplifier's measurements of $u(t)$ and $\varphi(t)$ at time $t$ are associated with the polargraph's position, $\vec{r}(t)$, which is computed with Eq.~\eqref{eq:position} at the same time. Because the lock-in amplifier's readout is not synchronized to the polargraph's motion, this yields an irregularly gridded representation of the complex pressure field. We therefore resample $u(t)$ and $\varphi(t)$ onto a \SI{128 x 128}{pixel} Cartesian grid (\num{16384} effective pixels) with \SI{4.90 \pm 0.08}{\milli\meter} spacing using bilinear interpolation. This measurement grid is finer than the wavelength of sound and covers a substantially larger area than can be achieved with currently available microphone arrays. The parameters selected here yield one complete measurement in about \SI{25}{\minute}. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{subtracted_image1} \caption{(color online) Acoustic holography of objects in a sound field. (a) Amplitude (top) and phase (bottom) of the harmonic field at \SI{8}{\kilo\hertz} projected by a speaker nearly \SI{2}{\meter} from the imaging plane. Scale bar corresponds to \SI{15}{\cm}. (b) The same field of view with scattering objects in the foreground. (c) Background-subtracted estimate for the scattered field in the imaging plane. (d) Refocused acoustic image obtained from (c) by Rayleigh-Sommerfeld back-propagation with Eq.~\eqref{eq:rsconvolution}. (e) Photograph of the scene recorded by the acoustic camera, including the camera's microphone.} \label{fig:reconstruction} \end{figure*} \section{Results} \label{sec:results} \subsection{Holographic imaging} \label{sec:imaging} Figure~\ref{fig:reconstruction}(a) shows the amplitude and phase of the sound field reaching the recording plane when the speaker is located nearly \SI{2}{\meter} away. The recording plane is normal to the direction of sound propagation, and the walls of the \SI{1.5 x 1.5}{\meter} experimental volume are lined with 2-inch wedge acoustic tiles to minimize reflections. Even so, off-axis reflections reach the measurement plane and interfere with the directly propagating sound field. These interference features are particularly evident in the amplitude profile in Fig.~\ref{fig:reconstruction}(a) and appear in all of the results that we present. The phase profile projected by the speaker is smoothly curved, which is expected for the diverging pressure field from a localized source. Taking the speed of sound in air to be $v = \SI{340}{\meter\per\second}$, the center of curvature of the phase profile is located \SI{160(10)}{\cm} away from the observation plane, which is consistent with the speaker's position. Because the lock-in amplifier measures phase delay over a limited range, we present the phase modulo $2\pi$.
This causes abrupt transitions to appear in the rendering. We move these out of the field of view by shifting the lock-in amplifier's phase response by $\Delta \varphi = \SI{0.02}{\degree}$. \begin{figure} \includegraphics[width=0.8\columnwidth]{resonance_jet_zoom_1} \caption{(color online) Detecting and localizing mechanical resonances in insonated objects. (a) Amplitude (left) and phase (right) of the transfer function for the system in Fig.~\ref{fig:reconstruction}. Shaded boxes indicate the region of interest that is presented in (b). (c) Amplitude and phase of the transfer function for an identical system with a similar-sized block of wood in place of the water bottle. Shaded boxes indicate the region of interest that is presented in (d). Scale bars correspond to \SI{15}{\cm}.} \label{fig:normalized} \end{figure} The complex pressure field, $p_0(\vec{r})$, serves as the background field for other holograms recorded by this instrument. Figure~\ref{fig:reconstruction}(b) shows the same field of view partially obstructed by a collection of objects, specifically a plastic bottle filled with water placed atop a styrofoam box. The objects appear as a shadow in the amplitude profile and as a pattern of discontinuities in the phase profile. Because the phase profile is wrapped into the range $0 \leq \varphi(\vec{r}) \leq 2 \pi$, it cannot be used to measure the speed of sound within the objects. It does, however, provide enough information to reconstruct the three-dimensional sound field. The complex pressure field in the imaging plane, $p(\vec{r}) = p_0(\vec{r}) + p_s(\vec{r})$, includes both the source field, $p_0(\vec{r})$, and the field scattered by the objects, $p_s(\vec{r})$. The difference $\Delta p(\vec{r}) = p(\vec{r}) - p_0(\vec{r})$ between the recorded hologram of the object and the previously recorded background pressure field is an estimate of the scattered field due to the object in the measurement plane. This is plotted in Fig.~\ref{fig:reconstruction}(c). The object's shadow appears bright in the amplitude distribution because it differs substantially from the background amplitude. The sign of this difference is encoded in the phase, which confirms that the object has reduced the sound level directly downstream. Figure~\ref{fig:reconstruction}(d) shows the result of numerically reconstructing the field at a distance $z = \SI{15.6}{\cm}$ behind the recording plane. This effectively brings the object into focus without otherwise distorting its image. The result is consistent with a photograph of the scene, which is reproduced in Fig.~\ref{fig:reconstruction}(e). \subsection{Holographic characterization of dynamical properties} \label{sec:dynamics} If the object scattering the incident wave is not substantially larger than the wavelength, the wave it scatters may be modeled as the incident wave in the scattering plane, $z = z_s$, modified by a complex transfer function: \begin{equation} \label{eq:transferfunction} p_s(x,y,z_s) = T(\vec{r}) \, p_0(x,y,z_s). \end{equation} We estimate the transfer function by numerically back-propagating both the recorded field and the incident field to the scattering plane using Eq.~\eqref{eq:rsconvolution}, normalizing the former by the latter, and subtracting the unscattered contribution: \begin{equation} \label{eq:scattering} T(\vec{r}) = \frac{p(x,y,z_s)}{p_0(x,y,z_s)} - 1. \end{equation} In the absence of any object, we expect $T(\vec{r}) = 0$.
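Assuming that the complex fields are sampled on the uniform Cartesian grid described above, Eqs.~\eqref{eq:rsconvolution}, \eqref{repropagator}, and \eqref{eq:scattering} translate into a short numerical procedure. The following Python sketch is our illustration, not the instrument's actual software: the function names are invented, and evanescent components ($q > k$) are simply discarded for numerical stability, a choice not discussed in the text.
\begin{verbatim}
import numpy as np

def rs_propagate(field, z, k, dx):
    # Fourier-space Rayleigh-Sommerfeld propagation with
    # H_z(q) = exp(-i z sqrt(k^2 - q^2)), via the convolution theorem.
    ny, nx = field.shape
    qx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    qy = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    q2 = qx[np.newaxis, :] ** 2 + qy[:, np.newaxis] ** 2
    kz = np.sqrt(np.maximum(k ** 2 - q2, 0.0))
    H = np.exp(-1j * z * kz)
    H[q2 > k ** 2] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def transfer_function(p, p0, z_s, k, dx):
    # Eq. (scattering): back-propagate total and incident fields to the
    # scattering plane, normalize, and remove the unscattered term.
    return rs_propagate(p, z_s, k, dx) / rs_propagate(p0, z_s, k, dx) - 1.0

# Numbers quoted in the text: 8 kHz sound at v = 340 m/s on a grid with
# 4.9 mm spacing, refocused to z_s = 15.6 cm behind the recording plane.
k = 2.0 * np.pi * 8.0e3 / 340.0
# T = transfer_function(p, p0, 0.156, k, 4.9e-3)  # p, p0: 128x128 arrays
\end{verbatim}
In practice, a small regularizer may be needed in the division to avoid amplifying noise where the back-propagated incident field is weak.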
Figure \ref{fig:normalized}(a) shows the amplitude and phase of the transfer function, $T(\vec{r})$, of the sample from Fig.~\ref{fig:reconstruction}. As expected, the computed transfer function has magnitude and phase near zero in the background. The transfer function of a passive object should advance the phase of the incident wave while reducing its amplitude. We therefore expect $\abs{T(\vec{r})} \leq 1$, with $T(\vec{r}) = -1$ corresponding to a perfect absorber. In fact, the water bottle's transfer function presents a well-localized peak of magnitude \num{5} near one corner of the object. This can be seen more clearly in the expanded field of view in Fig.~\ref{fig:normalized}(b). The peak in the transfer function's amplitude is associated with a localized reversal of its phase. Most of the bottle's acoustic transfer function has phase $\arg T(\vec{r}) \approx - \pi$, which suggests that it is removing energy from the field in those regions. The abrupt transition through $\arg T(\vec{r}) = 0$ to $\arg T(\vec{r}) = \pi$ near the bottle's left side is consistent with localized phase reversals in the region of the peak in $\abs{T(\vec{r})}$. These observations suggest that this feature may be identified with a resonant mode of the water bottle that focuses acoustic energy. To test this interpretation, we replace the water bottle with a comparably sized block of wood and repeat the measurement. The resulting transfer function is plotted in Fig.~\ref{fig:normalized}(c), with an expanded field of view in Fig.~\ref{fig:normalized}(d). The magnitude of the block's transfer function is no greater than \num{0.2} across the entire field of view in Fig.~\ref{fig:normalized}(d). The phase profile within the block similarly lacks any prominent features. \section{Conclusions} \label{sec:conclusions} We have presented a scanning acoustic camera that can be deployed flexibly to record the amplitude and phase profiles of harmonic pressure fields over wide areas. We further have demonstrated the use of the Rayleigh-Sommerfeld propagator to back-propagate the measured acoustic hologram to reconstruct the three-dimensional sound field. This is useful for numerically refocusing a recorded image of an object immersed in the sound field. We have shown, furthermore, that numerical back-propagation can be used to estimate an object's complex acoustic transfer function and thus to probe its dynamical properties at the driving frequency of the sound field. We demonstrate this by imaging a resonance in a container of water. We anticipate that the acoustic imaging capabilities we have described will be useful for research groups deploying structured acoustic fields for communication, sensing and manipulation. In such cases, the scanning technique provides a cost-effective means to characterize the projected wave, even when it covers a very large area. The camera also is useful for creating images of scenes in continuous harmonic waves. Our implementation is deployed for imaging in transmission. Reflected and oblique imaging also should be possible. Numerically refocused acoustic imaging is particularly useful for remote imaging of dynamical properties. While our implementation is based on lock-in detection of harmonic waves, generalizations to broad-spectrum sources can be implemented with correlation-based detection. The scanned approach similarly should lend itself to imaging in noise~\cite{potter1994acoustic}, including imaging of dynamical properties.
\begin{acknowledgments} This work was supported by the MRSEC program of the National Science Foundation through award number DMR-1420073. \end{acknowledgments}
\section{Introduction} We order the two elements of a set $\Sigma=\{0,1\}$ such that $0<1$. This extends to a partial ordering on the set $\Sigma^{n}=\{0,1\}^{n}$ by comparing words coordinate-wise. Let $x=x_{1}...x_{n}$ and $y=y_{1}...y_{n}$. Here, $x\succeq y$ means that $x_{i}\geq y_{i}$, for every $i=1,...,n$. A Boolean function $f:\Sigma^{n}\longrightarrow\Sigma$ is \emph{monotone} when $f\left( x\right) \geq f\left( y\right) $ if $x\succeq y$, for every $x,y\in \Sigma^{n}$. Monotone Boolean functions play an important role in proving lower bounds in circuit complexity (see, \emph{e.g.}, Leeuwen \cite{l90}, Chapter 14.4). Any function obtained by composition of monotone Boolean functions is itself monotone. Examples of monotone Boolean functions are the conjunction {\scriptsize AND} and the disjunction {\scriptsize OR}. Indeed, every monotone Boolean function can be realized by {\scriptsize AND} and {\scriptsize OR} operations (but without {\scriptsize NOT}). Boolean functions are important in applications, for example, in the implementation of a class of non-linear digital filters called stack filters \cite{as}. Important methods for obtaining non-trivial bounds on specific monotone Boolean functions have been studied (see, \emph{e.g.}, \cite{al}). The concept of \emph{zero forcing} on graphs is a recent idea that is part of a program studying minimum ranks of matrices with specific combinatorial constraints. Zero forcing has also been called graph infection and graph propagation \cite{bg, s}. Notice that, in the context described here, the term \textquotedblleft zero forcing\textquotedblright\ seems to be unfortunate, because we are forcing ones, not zeros. However, we keep the term given that it is now the most commonly used in the literature. In order to define zero forcing, we first need to define a \emph{color-change rule}: if $G=(V,E)$ is a graph with each vertex colored either white or black, $u$ is a black vertex of $G$, and exactly one neighbor $v$ of $u$ is white, then change the color of $v$ to black. Given a coloring of $G$, the \emph{final coloring} is the result of applying the color-change rule until no more changes are possible. A \emph{zero forcing set} for $G$ is a set $Z\subseteq V\left( G\right) $ such that if the elements of $Z$ are initially colored black and the elements of $V(G)\backslash Z$ are colored white, the final coloring of $G$ is all black. Zero forcing is related to certain minimum rank/maximum nullity problems of matrices associated to graphs (see \cite{min}) and to the controllability of quantum spin systems \cite{bg, bs}. Minimizing the size of zero forcing sets is a difficult combinatorial optimization problem \cite{aa}. The remainder of this paper is organized as follows. In Section 2, we prove that zero forcing on graphs realizes all monotone Boolean functions, and highlight some simple related facts. The connection between zero forcing and circuits is obtained by associating a graph to each logic gate. We will show that the functions {\scriptsize AND} and {\scriptsize OR} are indeed easily realized by two different gadgets with a few vertices. This is not the first work observing that monotone Boolean functions can be realized in a combinatorial setting. For example, Demaine \emph{et al.}~\cite{de} have used the movements of a collection of simple interlocked polygons. In Section 3, we describe the phenomenon of \emph{back forcing} in the circuit.
The phenomenon occurs when the color-change rule acts to modify the color of a vertex which has already been used during the computation. In some cases, back forcing implies that information about the output of a Boolean circuit can be read not just by looking at the color of a \emph{target} vertex corresponding to the final output of the process, but also at the color of the vertices in certain intermediate or initial gadgets. The idea opens a simple but intriguing scenario consisting of many parties that perform computation in a distributed way: each party holds a subset of the gates and is able to read certain information about the input of other parties, since the color of its gates may have been modified by back forcing. Back forcing can be avoided by including an extra gadget acting as a filter. In Section 4, we show that zero forcing becomes \emph{universal}, \emph{i.e.}, it can realize any Boolean function, if we apply a proper encoding. Specifically, the \emph{dual rail encoding}, where two vertices are assigned to each logical bit, is a method to construct the {\scriptsize NOT} gate and therefore to obtain universal computation. Conclusions are in Section~5. \section{Main result} Our main result is easy to prove: \begin{theorem} Zero forcing realizes all monotone Boolean functions. \end{theorem} \begin{proof} It is sufficient to show that zero forcing realizes the functions {\scriptsize AND} and {\scriptsize OR}. \emph{Claim 1.} The gate {\scriptsize AND} is realized by the gadget $G_{\text{{\scriptsize AND}}}$ with vertices $\{1,2,3\}$ and edges $\{\{1,2\},\{1,3\},\{2,3\}\}$, where $1$ and $2$ are the input vertices and $3$ is the output vertex, containing the result and being able to propagate the color. All vertices are initially colored white. An illustration of the gadget $G_{\text{{\scriptsize AND}}}$ is shown below. \begin{figure}[h] \begin{center} \includegraphics[height=0.7185in, width=1.499in]{AND.eps} \caption{The gate for the function {\scriptsize AND}.} \label{and} \end{center} \end{figure} \emph{Proof of Claim 1.} If no action is taken then the final coloring of the gadget is all white. If we color vertex $1$ black then the final coloring is all white except for vertex $1$. The same holds for vertex $2$. However, if we color vertex $1$ and vertex $2$ black then the color-change rule implies that vertex $3$ is black at step $2$. In fact, $\{1,2\}$ is a zero forcing set for $G_{\text{{\scriptsize AND}}}$. \emph{Claim 2.} The gate {\scriptsize OR} is realized by the gadget $G_{\text{{\scriptsize OR}}}$ with vertices $\{1,2,3,4\}$ and edges $\{\{1,3\},\{1,4\},\{2,3\},\{2,4\}\}$, where $1$ and $2$ are the input vertices. The output vertex is vertex $4$. Vertex $3$ is initially colored black. \begin{figure}[h] \begin{center} \includegraphics[height=0.7185in, width=1.4981in]{OR0.eps} \caption{The gate for the function {\scriptsize OR}.} \label{or} \end{center} \end{figure} \emph{Proof of Claim 2.} If no action is taken then the final coloring of the gadget is all white, except for vertex $3$. If we color vertex $1$ black then the color-change rule implies that vertex $4$ is black at step $2$. The same holds for vertex $2$, and for vertex $1$ and vertex $2$ together. In fact, $\{1,3\},\{2,3\},\{1,2,3\}$ are zero forcing sets for $G_{\text{{\scriptsize OR}}}$, able to propagate the color and induce the next step of the computation. It is important to observe that zero forcing does not realize the function {\scriptsize NOT}, since when a vertex is colored black, it cannot change color anymore.
The consequence is that zero forcing does not realize universal computation (any Boolean function can be implemented using {\scriptsize AND}, {\scriptsize OR} and {\scriptsize NOT} gates) but only monotone Boolean functions. This concludes the proof. \end{proof} \bigskip It may be worth observing the following points: \begin{itemize} \item Notice that extra vertices forming \emph{delay lines} may be needed to assemble a circuit such that the output produced by zero forcing in parallel gates is synchronous. However, given our choice of gadgets, exactly $2$ time steps are required for the output of zero forcing in $G_{\text{{\scriptsize AND}}}$ and $G_{\text{{\scriptsize OR}}}$. At time step $3$ the color-change rule acts on the next gate in the circuit. There is then a convenient distinction between internal and external time: \emph{internal time} refers to the zero forcing steps inside the gadgets/gates; \emph{external time} refers to the time steps of the computation. \item The gadgets $G_{\text{{\scriptsize AND}}}$ and $G_{\text{{\scriptsize OR}}}$ have three and four vertices, respectively. By inspection of all possible combinations of white and black vertices for graphs with at most four vertices, we can observe that we have chosen the smallest possible gadgets, in terms of number of vertices and edges, realizing the two functions. One might think that the gate {\scriptsize OR} is also realized by the gadget with three vertices in the figure. \begin{figure}[h] \begin{center} \includegraphics[height=0.7185in, width=1.4981in]{OR3vertices.eps} \caption{A gate for the function {\scriptsize OR}, where the color-change rule does not move the input forward.} \end{center} \end{figure} Although the gadget implements the {\scriptsize OR} correctly, it cannot be used as an initial or intermediate gate of a circuit, since in this gadget the color-change rule does not move the output forward to the next gate, but halts at vertex $3$. \begin{figure}[h] \begin{center} \includegraphics[height=0.7185in, width=1.2152in]{OR3verticesstop.eps} \caption{The figure shows that an {\scriptsize OR} gate in which all vertices are initially white does not move the input forward.} \end{center} \end{figure} \item Let us consider the gadget $G_{\text{{\scriptsize OR}}}$. If we color vertex $1$ black then the color-change rule implies that vertex $4$ is black at step $2$. Suppose that vertex $2$ is colored white at step $1$. At step $2$ the gate has computed the {\scriptsize OR} function in vertex $4$ with input $\{0,1\}$. At step $2$ vertex $2$ is also colored black under the action of the color-change rule, because it is the unique white neighbour of vertex $3$. This is necessary in order for the computation to proceed using the output (black vertex $4$). So, for all inputs with output $1$, the vertices of $G_{\text{{\scriptsize OR}}}$ are black after two steps of the internal time. Such behaviour is discussed in more detail in the next section. \item It is straightforward to realize the operation {\scriptsize COPY}. \begin{figure}[h] \begin{center} \includegraphics[height=0.7202in, width=1.1149in]{Copy.eps} \caption{The gate for the function {\scriptsize COPY}.} \label{figucopy} \end{center} \end{figure} \end{itemize} \section{Back forcing} If each Boolean variable in the input of a circuit is set to $1$, then the vertices of the circuit that are initially colored black form a zero forcing set. However, this is not the only situation in which we have a zero forcing set.
The next figure gives an example. \begin{figure}[h] \begin{center} \includegraphics[height=1.5221in, width=2.009in]{back1.eps} \caption{A circuit computing the Boolean function $(x_{1}$ {\scriptsize AND} $x_{2})$ {\scriptsize OR} $(x_{3}$ {\scriptsize AND} $x_{4})$. The circuit exhibits the phenomenon of back forcing.} \end{center} \end{figure} This is a circuit computing the Boolean function $(x_{1}$ {\scriptsize AND} $x_{2})$ {\scriptsize OR} $(x_{3}$ {\scriptsize AND} $x_{4})$. The numbers in the vertices of the figure specify the internal time step at which the vertex is black; the vertices labeled by $1$ are initially colored black. The output of the circuit is $1$ at step $4$, and at step $6$ of the internal time the vertices encoding the input of the function are all colored black. This can happen if and only if three of the input vertices are colored white at internal time $1$. The phenomenon will be called \emph{back forcing}, because it is induced by the color-change rule acting backwards with respect to the direction from input to output in the whole circuit. The gadget $G_{\text{{\scriptsize AND}}}$ exhibits back forcing conditionally on having input $\{0,1\}$. The type of back forcing in $G_{\text{{\scriptsize AND}}}$ can be called \emph{transmittal back forcing}, because if something back forces its output black then the gate transmits the back force, \emph{i.e.}, it modifies the color of the output vertex in a gate used previously. The figure below clarifies the dynamics. \begin{figure}[h] \begin{center} \includegraphics[height=1.5957in, width=2.8605in]{backAND.eps} \caption{The steps of back forcing.} \end{center} \end{figure} The gadget $G_{\text{{\scriptsize OR}}}$ needs to force an input forward in order to color black one of the output vertices adjacent to its inputs and in another gate. In this sense, $G_{\text{{\scriptsize OR}}}$ does not have transmittal back forcing. In other words, a gate at external time $t$ cannot back force its color into $G_{\text{{\scriptsize OR}}}$ at external time $t+1$. In contrast, the circuit $(x_{1}$ {\scriptsize AND} $x_{2})$ {\scriptsize OR} $(x_{3}$ {\scriptsize AND} $x_{4})$ can initiate back forcing as described above (when it is an intermediate element in the circuit). We can also slow down back forcing by including appropriate \emph{delay lines} -- for example, by adding extra vertices in each gadget or between them. Alternatively, we could consider delay lines directly embedded in the structure of the gadgets implementing the logical gates. Also, back forcing can be avoided completely by including the gadget below. The gadget acts as a \emph{filter}. In some sense, the filter can be understood as an \emph{electronic diode} allowing zero forcing in one direction only. \begin{figure}[h] \begin{center} \includegraphics[height=0.7193in, width=1.5655in]{filter.eps} \caption{A gadget acting as a filter: its role is to avoid back forcing.} \end{center} \end{figure} In relation to the circuit for the function $(x_{1}$ {\scriptsize AND} $x_{2})$ {\scriptsize OR} $(x_{3}$ {\scriptsize AND} $x_{4})$, it may be interesting to consider two parties, each choosing the input of one of the two {\scriptsize AND} gates and having access only to the corresponding vertices. Given the back forcing, the parties can then learn the output of the circuit by looking at the color of their vertices at the end of the computation, except when a party chooses $(0,0)$ (\emph{i.e.}, white, white).
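The gadgets introduced above are small enough for their behaviour, including back forcing, to be checked exhaustively by machine. The following Python sketch is our illustration of the color-change rule; it is not part of the original construction.
\begin{verbatim}
from itertools import product

def zero_forcing(adj, black):
    # Apply the color-change rule until no more changes are possible.
    # adj: {vertex: set of neighbours}; black: initially black vertices.
    black = set(black)
    changed = True
    while changed:
        changed = False
        for u in list(black):
            white = [v for v in adj[u] if v not in black]
            if len(white) == 1:      # exactly one white neighbour...
                black.add(white[0])  # ...is forced black
                changed = True
    return black

# Gadgets from Section 2: a triangle for AND and a 4-cycle for OR
# (vertex 3 of G_OR is initially black).
G_AND = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
G_OR = {1: {3, 4}, 2: {3, 4}, 3: {1, 2}, 4: {1, 2}}

for a, b in product((0, 1), repeat=2):
    seed = {v for v, bit in zip((1, 2), (a, b)) if bit}
    print((a, b),
          "AND:", 3 in zero_forcing(G_AND, seed),
          "OR:", 4 in zero_forcing(G_OR, seed | {3}))
\end{verbatim}
Running the sketch reproduces the truth tables of Claims 1 and 2; for the inputs with output $1$, the final coloring of $G_{\text{{\scriptsize OR}}}$ is all black, illustrating the back forcing on its white input vertex.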
\section{Universality} Despite the fact that the color-change rule induces a non-reversible process (black coloring cannot be undone), a simple modification of the encoding strategy allows us to implement universal, and hence also reversible, computation. The idea is to adopt a \emph{dual rail} strategy, where two vertices are employed to encode a single \emph{logical bit}. Specifically, as shown in Fig.~\ref{figudual}, in this scheme we associate the logical bit 0 with a configuration in which (say) the first vertex is colored black while the second is kept white, and the logical bit 1 with the opposite configuration (i.e. the first vertex being left white and the second one being colored black). With such an encoding we can now design the gate {\scriptsize NOT} by simply drawing a graph in which the nodes are exchanged at the output (see Fig.~\ref{figunot}). A dual rail {\scriptsize AND} gate can also be easily realized. Universal computation is hence achieved by constructing a {\scriptsize NAND} gate via concatenation of {\scriptsize AND} with {\scriptsize NOT}, and by observing that the {\scriptsize COPY} gate for the dual rail encoding is obtained by simply applying the transformation of Fig.~\ref{figucopy} to both of the nodes that form a bit. Once universal computation has been achieved, we can easily turn it into a reversible one, \emph{e.g.}, by building a Toffoli gate~\cite{toffoli}. This shows that even though zero forcing is an irreversible process, it can still be used to induce reversible computational dynamics. \begin{figure}[h] \begin{center} \includegraphics[height=0.8426in, width=1.6968in]{zeroonedual.eps} \caption{Physical bits for $0$ and $1$ in a dual rail encoding.} \label{figudual} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[height=1.0191in, width=1.5646in]{NOT.eps} \caption{In a dual rail encoding the logical {\scriptsize NOT} can be implemented by swapping the physical bits.} \label{figunot} \end{center} \end{figure} \section{Conclusions} We have shown that all monotone Boolean functions can be realized by zero forcing in a graph constructed by \emph{gluing} together copies of two types of subgraphs/gadgets corresponding to the Boolean gates {\scriptsize AND} and {\scriptsize OR}. We have briefly discussed the minimality of such gadgets in terms of vertices and edges. We have highlighted a back forcing action. Back forcing has an effect on the coloring of gates already used, as a function of what has happened in the \textquotedblleft future\textquotedblright, \emph{i.e.}, at a later stage of the computation. Because of the relation between zero forcing and minimum ranks, the model described here is amenable to study with linear-algebraic tools, potentially suggesting a novel direction in the analysis of monotone Boolean functions. Finally, we have shown that universal computation can be obtained by zero forcing by simply adopting a dual rail encoding. An open problem suggested by the paper is to understand the link between zero forcing and the dynamics at the basis of other unconventional models of computation, like, for example, the billiard ball computer -- introduced as a model of reversible computing \cite{ft} -- models involving geometric objects, and dominos \cite{de}. \bigskip \begin{acknowledgments} This work has been done while DB was with the Blackett Laboratory at Imperial College London, supported by EPSRC grant EP/F043678/1. VG acknowledges support by the FIRB-IDEAS project (RBID08B3FM).
SS is a Newton International Fellow. \end{acknowledgments}
\section{Introduction} Accurately revealing the spiral structure of the Milky Way and its evolution has attracted the attention of many astronomers. After the M\,33 galaxy was explored by \cite{hubble1926}, it was speculated that our Milky Way is also possibly a spiral galaxy. Since then, much effort has been dedicated to uncovering the spiral structure of the Galaxy with different kinds of tracers and methods~\citep[e.g.,][]{lindblad1927, oort1958, Becker1963, Becker1964, Becker1970, georgelin1976, Moffat1979, russeil2003, paladini2004, dias2005, xu2006, Moitinho2006, Vazquez2008, reid2009, Moitinho2010, hou2014, hou2015, reid2019, xu2018, xu2021, Poggio2021}. However, because the Sun is deeply embedded in the Galactic disc, multiple structures at different distances along the line of sight are superimposed; this makes it very difficult to accurately decompose these structures and depict the actual spiral structure \citep{xu2018b}. Moreover, another big challenge is reliably tracing the Galaxy's morphology into the past to ultimately understand its evolutionary history. Open clusters (OCs) are one of the good spiral tracers and have some specific advantages for understanding the properties of the spiral structure of the Galaxy. An open cluster is a group of stars that formed from the same giant molecular cloud, and its members are therefore roughly the same age. Unlike the other good tracers for investigating the spiral arms of the Galaxy (e.g., high-mass star formation region (HMSFR) masers, massive O--B type (OB) stars, and HII regions), the ages of OCs cover a wide range, from a few million years (Myr) to billions of years, making them potentially good tracers for investigating the spiral structure outlined by young to old objects, and in particular the evolution of spiral arms. In addition, the large number of cluster members makes it possible to derive more accurate values of cluster parameters (e.g., distances, proper motions, and radial velocities) than those of individual stars. The spiral structure of the Galaxy has been explored in many research works using OCs as tracers. The relationship between OCs and spiral arms was first discussed by \cite{Becker1963, Becker1964}, who studied a sample of 156 OCs with photometric distances available at that time. Becker suggested that the distribution of those OCs with earliest spectral type of O--B2 followed three spiral arm segments in the solar neighbourhood. In comparison, the distribution of older OCs with earliest spectral type from B3 to F did not present arm-like structures and seemed to be random. These results were confirmed by \cite{Becker1970} and \cite{Fenkart1979} with larger samples of OCs. A different explanation was proposed by \citet{janes1982} and \citet{lynga1982}; these authors suggested that the distributions of OCs seemed more like clumpy concentrations or complexes rather than a spiral pattern. \citet{dias2005} used a sample of 212 OCs to show that the young OCs (ages $<1.2\times10^7$ yr) still remain in the three major arm segments in the solar neighbourhood, that is, the Perseus, Local, and Sagittarius-Carina Arms. The OCs $\sim$ 20~Myr in age are leaving the spiral arms and moving to the inter-arm regions, while the distributions of the older OCs do not present clear clump-like or spiral-like structures. \citet{dias2005} also estimated the spiral pattern rotation speed of the Milky Way with OCs and confirmed that a dominant fraction of the OCs are formed in spiral arms.
These results were recently updated by \citet{dias2019} with the second data release of \textit{Gaia} \citep{gaia2018}. \citet{Moitinho2006} analysed the stellar photometry data for many OCs; a sample of 61 OCs was used to trace the disc structure in the third Galactic quadrant. The detailed spiral structure in the third Galactic quadrant was also explored by \citet{Vazquez2008} by taking advantage of the data of young OCs, blue plumes, and molecular clouds. For a review of the previous efforts to use OCs to study the Galactic structure, we recommend \citet{Moitinho2010}. Later, \citet{cama13} studied a sample of young OCs, which are related to the Perseus Arm and even the Outer Arm \citep[also see][]{bobylev2014}. By taking advantage of the second data release of {\it Gaia}, \citet[][]{cantat2018,cantat2019,cantat2020a} have derived the members and mean parameters for more than 1800 Galactic OCs. The projected distribution of these OCs onto the Galactic disc was compared with a spiral arm model of our Galaxy \citep[][]{reid2014}. In brief, it is commonly accepted that young OCs can be used to trace the nearby spiral arms, while older OCs have a more scattered distribution, which may be used to represent the distribution of older stellar components in the Galactic disc. However, the evolutionary properties and the lifetime of the spiral arms of the Galaxy are still puzzles and have not been well constrained by observations. With their particular properties, OCs serve as good tracers for better understanding these issues. Fortunately, the number of OCs and the accuracy of OC parameters (parallaxes, proper motions, and radial velocities) have improved significantly in the past few years, largely as a result of the efforts of {\it Gaia} (see Table~\ref{table:table1}). In this work, we aim to compile a large catalogue of OCs from references and accurately derive their parallax distances, proper motions, and radial velocities from the latest \textit{Gaia} Early Data Release 3 (hereafter {\it Gaia} EDR3) \citep{gaia2016, gaia2020}. This catalogue, the largest of OCs available to date, provides a good opportunity to study the above issues about the spiral structure of the Galaxy: not only to map the nearby spiral arms, but also to understand their evolution, and further, to place observational constraints on the different evolutionary mechanisms. This article is organised as follows. In Sect. \ref{sample}, we describe the sample of OCs compiled in this work. The results and discussions are presented in Sect. \ref{result}, including the spiral structure in the solar neighbourhood traced by OCs, the kinematic properties of OCs, the evolutionary properties of their scale heights, and a regression analysis. Conclusions are given in Sect. \ref{conclusion}. \setlength{\tabcolsep}{0.5mm} \begin{table}[ht] \centering \caption{A list of references for the previously known OCs.} \begin{tabular}{ccc} \hline Reference & Number of OCs & Data used \\ \hline \cite{dias2002} & 2167 & WEBDA$^{(1)}$ et al.
\\ \cite{dias2014} & 1805 & UCAC4$^{(2)}$\\ \cite{kharchenko2013} & 3006 & 2MASS$^{(3)}$ \& PPMXL$^{(4)}$ \\ \cite{schmeja2014} & 139 & 2MASS \\ \cite{scholz2015} & 63 & PPMXL \& UCAC4 \\ \cite{cantat2018} & 1229 & \textit{Gaia} DR2 \\ \cite{cantat2019} & 41 & \textit{Gaia} DR2 \\ \cite{castro2018} & 23 & \textit{Gaia} DR2 \& TGAS$^{(5)}$ \\ \cite{castro2019} & 53 & \textit{Gaia} DR2 \\ \cite{castro2020} & 582 & \textit{Gaia} DR2 \\ \cite{liu2019} & 76 & \textit{Gaia} DR2 \\ \cite{he2021} & 74 & \textit{Gaia} DR2 \\ \cite{ferreria2020} & 25 & \textit{Gaia} DR2 \\ \hline \end{tabular} \tablefoot{(1) WEBDA: \url{http://obswww.unige.ch/webda/}; (2) UCAC = United States Naval Observatory CCD Astrograph Catalog; (3) 2MASS = Two Micron All Sky Survey; (4) PPMXL: \url{https://dc.zah.uni-heidelberg.de/ppmxl/q/cone/info}; (5) TGAS = \textit{Tycho-Gaia} Astrometric Solution. } \label{table:table1} \end{table} \section{Sample} \label{sample} To better understand the spiral structure of the Galaxy with OCs, an updated OC catalogue with accurately measured parameters from {\it Gaia} EDR3 is necessary. The recently published {\it Gaia} EDR3 contains the positions, parallaxes, and proper motions for more than 1.5 billion stars of different types, ages, and evolutionary stages over the whole sky \citep{gaia2020}. For the sources with magnitude $G<$ 15, the parallax uncertainties could be 0.02--0.03~mas, and the proper motion uncertainties could be 0.02--0.03~mas~yr$^{-1}$. {\it Gaia} EDR3 thus provides an excellent database with which to improve the parameter accuracies for a large number of OCs. \begin{figure*} \centering \includegraphics[scale=0.30]{figures/Fig1.png} \caption{Distributions of Galactic OCs. (a) YOCs (cyan dots; ages $<$ 20~Myr) and HMSFR masers (white triangles) projected onto the Galactic disc. The dot size is proportional to the number of cluster members. The solid curved lines trace the centres (and dotted lines the widths enclosing 90\% of the masers) of the best-fit spiral arms given by \cite{reid2019}. The distance uncertainties of the masers are indicated by the inverse size of the symbols. The yellow dashed lines denote the best-fit spiral arm centres from the distribution of YOCs. The Galactic centre (red star) is at (0, 0) kpc and the Sun (red symbol) is at (0, 8.15) kpc. The background is a new concept map of the Milky Way credited to Xing-Wu Zheng \& Mark Reid (BeSSeL/NJU/CFA). The distributions of the older clusters are shown in panels (b), (c), and (d) for the OCs with ages of 20--200~Myr (purple dots), 200--1000~Myr (brown dots), and $>$ 1000~Myr (red dots), respectively. The dot size is also proportional to the number of cluster members. The parallax uncertainties of the OCs shown are all smaller than 10\%.
} \label{fig:fig1} \end{figure*} \setlength{\tabcolsep}{2.5mm} \begin{table*}[ht] \centering \caption{Best-fit parameters of nearby spiral arms using HMSFR masers and OCs of different age groups.} \begin{tabular}{ccccccccc} \hline && Arm & $\varphi$ & $R_{\rm ref}$ & $\beta_{\rm ref}$ & Arm width &&\\ && & (deg) & (kpc) & (deg) & (kpc) &&\\ \hline && Sagittarius-Carina & 17.1 $\pm$ 1.6 & 6.04 $\pm$ 0.09 & 24 & 0.27 $\pm$ 0.04 &&\\ &HMSFR masers& Local & 11.4 $\pm$ 1.9 & 8.26 $\pm$ 0.05 & 9 & 0.31 $\pm$ 0.05 &&\\ && Perseus & 10.3 $\pm$ 1.4 & 8.87 $\pm$ 0.13 & 40 & 0.35 $\pm$ 0.06 &&\\ \hline && Sagittarius-Carina & 16.2 $\pm$ 0.4 & 6.72 $\pm$ 0.07 & 2 & 0.27 $\pm$ 0.02 &&\\ &YOCs ($<$ 20~Myr)& Local & 10.8 $\pm$ 0.6 & 8.18 $\pm$ 0.06 & 6 & 0.24 $\pm$ 0.06 && \\ && Perseus & 9.6 $\pm$ 0.8 & 9.42 $\pm$ 0.04 & 20 & 0.33 $\pm$ 0.03 &&\\ \hline && Sagittarius-Carina & 15.7 $\pm$ 0.5 & 6.80 $\pm$ 0.07 & 2 & 0.30 $\pm$ 0.03 &&\\ &OCs (20--60~Myr)& Local & 10.7 $\pm$ 0.8 & 8.24 $\pm$ 0.07 & 6 & 0.31 $\pm$ 0.03 &&\\ && Perseus & 9.0 $\pm$ 1.0 & 9.45 $\pm$ 0.06 & 20 & 0.36 $\pm$ 0.04&&\\ \hline && Sagittarius-Carina & 15.2 $\pm$ 0.5 & 6.79 $\pm$ 0.07 & 2 & 0.33 $\pm$ 0.04 &&\\ &OCs (60--100~Myr) & Local & 8.9 $\pm$ 0.9 & 8.09 $\pm$ 0.06 & 5 & 0.40 $\pm$ 0.03 &&\\ && Perseus & 9.0 $\pm$ 0.9 & 9.43 $\pm$ 0.07 & 20 & 0.39 $\pm$ 0.05&&\\ \hline \end{tabular} \tablefoot{ For each spiral arm, the best-fit pitch angle ($\varphi$), initial radius ($R_{\rm ref}$), reference Galactocentric azimuth angle ($\beta_{\rm ref}$), and arm width are listed. The OCs are divided into three subsamples: (1) YOCs ($<$ 20~Myr, 633 clusters); (2) OCs with ages from 20 to 60~Myr (334 clusters); and (3) OCs with ages from 60 to 100~Myr (262 clusters). } \label{table:table2} \end{table*} Up to now, thousands of OCs have been identified, but they are scattered across different references. \cite{dias2002} assembled a large catalogue of OCs based on many previous studies, which contains 2167 objects. \cite{dias2014} gave the proper motions and membership probabilities for 1805 optically visible OCs. By taking advantage of near-infrared photometric data of approximately 470 million objects in the Two Micron All Sky Survey~\citep[2MASS,][]{skrutskie2006} and proper motions from the PPMXL catalogue~\citep{roser2010}, \citet{kharchenko2013} completed a survey of all previously known OCs called the Milky Way Star Clusters Catalog (MWSC), which lists 3006 objects, including known OCs and candidates, globular clusters, associations, and asterisms. Subsequently, the MWSC catalogue was complemented by 139 new OCs at high Galactic latitudes \citep{schmeja2014} and 63 new OCs identified by \cite{scholz2015}. After the publication of \textit{Gaia} DR2 \citep{gaia2018}, the membership probabilities of $\sim$ 1200 OCs presented in previous catalogues were recalculated by \cite{cantat2018}. These authors also discovered 60 new OCs. In addition, the radial velocities of 861 OCs in \cite{cantat2018} were obtained by \cite{soubiran2018}. \cite{castro2018} developed a machine-learning approach and found 23 new OCs from {\it Gaia} DR2. Soon after, \citet{castro2019} detected 53 new OCs in the direction of the Galactic anti-centre. \citet{cantat2019} identified 41 new OCs in the direction of the Perseus Arm. Meanwhile, 76 new OCs were reported by \cite{liu2019}. Recently, 582 new OCs were discovered by \cite{castro2020}, 25 new OCs were detected by \cite{ferreria2020}, and 74 new OCs were found by \cite{he2021}.
As the first step of this work, we compiled a large catalogue\footnote{\url{https://cdsarc.u-strasbg.fr/ftp/vizier.submit//hcjgaia2021_v2/}}, which includes 3794 Galactic OCs collected from the references listed in Table~\ref{table:table1}. The procedure is described as follows. Firstly, 2469 OCs were selected from \cite{kharchenko2013}, which provides details of the member stars for each of the OCs. Then, the member stars with probabilities greater than 60\% were picked out to conduct positional cross-matching with \textit{Gaia} EDR3 using a matching radius of 1~arcsec. Making use of the sample-based clustering search method \citep{hao2020}, which is an adaptation of the OC search method presented in \cite{castro2018}, we recovered 2457 OCs. We then cross-matched the OC catalogue of \cite{dias2002} with that of \cite{kharchenko2013} and obtained 315 OCs not listed in \citet{kharchenko2013}. The sample-based clustering search method was applied to these targets, recovering 309 OCs. Subsequently, we cross-matched the catalogue of \cite{cantat2018} with these 2766 OCs and obtained 153 OCs that were not already included. For these 153 OCs, only the member stars with probabilities greater than 60\% were picked out. The same procedure was conducted on the catalogue of \cite{cantat2019}. Then, we obtained detailed parameters of 658 OCs from the catalogues of \cite{castro2018,castro2019,castro2020}, including the member stars. In addition, we collected 76 new OCs from the catalogue of \cite{liu2019}, 25 new OCs from \cite{ferreria2020}, and 74 new OCs from \cite{he2021}. Finally, we obtained a total of 3794 OCs. The parameters of these OCs were determined using {\it Gaia} EDR3. \textit{Gaia} EDR3 provides parallax uncertainties for almost all of its stars. To obtain a precise parallax for each of these OCs, we only extracted the member stars whose parallax uncertainties are smaller than 10\%. Not all of the 3794 OCs have member stars with parallax uncertainties better than 10\%. Hence, a high-accuracy subsample of 3611 OCs was extracted, in which 1742 OCs have radial velocities. For these 1742 OCs, the means, standard errors, and uncertainties of their radial velocities were determined using the method of \cite{soubiran2018}. After that, the member stars brighter than \textit{G} = 17~mag were used to determine the position and proper motions for each of these OCs. For the bright member stars, the uncertainties are better than 0.05~mas for the position and 0.07~mas~yr$^{-1}$ for the proper motions, respectively \citep{gaia2020}. In terms of photometry, for stars of \textit{G} = 17~mag, the associated photometric error is 0.001~mag for \textit{G}, 0.012~mag for \textit{G$_{BP}$}, and 0.006~mag for \textit{G$_{RP}$}~\citep{gaia2020}. The determined parameters of the OCs, together with the reference(s) where each OC was selected, are all given in the on-line catalogue of this work. Some recent works \citep[e.g.,][]{cantat2018, cantat2020b,Monteiro2020} suggest that there are some false positive or non-existing clusters in the previous catalogues. Most of these controversial or erroneous objects are the alleged old and high-altitude clusters \citep[][]{cantat2020b}. We checked the 146 false positive or nonexistent clusters given by \citet{dias2002}, and we found that 81 of these are not included in our catalogue. In this work, our aim is to study the spiral arms in the Galactic disc, and we primarily adopt relatively young OCs (e.g., ages $<$ 100~Myr).
Of the 65 possibly false positive clusters suggested by \citet{dias2002} that are kept in our compiled catalogue, only 16 are $<$ 100~Myr in age. In comparison, the total number of OCs $<$ 100~Myr in age in our catalogue is 1229. Hence, we believe that this problem does not influence the following analysis results about the spiral structure of the Galaxy. Besides, the ages of 2837 OCs in our compiled catalogue were already given in previous works; these are adopted directly. Of the remaining 957 ($\sim$ 25\%) OCs, most ($\sim$ 70\%) are recently identified OCs based on the \textit{Gaia} data. We determined their ages and the corresponding errors following the methods described in \cite{liu2019}, \cite{hao2020}, and \cite{he2021}. The age parameter for each of the 3794 OCs and the corresponding reference(s) are also given in the on-line catalogue. \section{Results and discussions} \label{result} \subsection{Spiral structure in the solar neighbourhood} \label{spiral structure} Figure~\ref{fig:fig1} presents the projected distributions of OCs onto the Galactic disc. Figure~\ref{fig:fig1}(a) shows that the 633 young OCs (YOCs; i.e., ages $<$ 20~Myr) in our catalogue are concentrated in distinct structures, which are almost concordant with the known spiral arms outlined by HMSFR masers \citep{reid2019} and/or bright OB stars~\citep{xu2018,xu2021}. A large number of YOCs are located in the Perseus, Local, Sagittarius--Carina, and Scutum--Centaurus Arms, which seem to be the nurseries for many OCs. These arm features traced by YOCs extend as far as $\sim$3 to 8~kpc along different spiral arms. These properties are in general consistent with previous results \citep[e.g., see][]{Becker1970,dias2005,Moitinho2006,Vazquez2008, dias2019,cantat2018,cantat2020a}. Figure~\ref{fig:fig1}(a) also shows that the more populous OCs (i.e., those with more member stars) seem to be more concentrated on the major spiral arms than the less populous ones. In addition, spur-like features indicated by the concentration of some YOCs are visible in the inter-arm regions, which is consistent with the results shown using HMSFR masers \citep[][]{reid2019} and bright OB stars \citep[][]{xu2018,xu2021}. This all suggests that our Milky Way is not a pure grand-design spiral, but perhaps a multi-armed spiral galaxy showing complex substructures \citep{xu2016}. The distributions of older OCs are shown in Figure~\ref{fig:fig1}(b), (c), and (d). Consistent with previous results~\citep[e.g.,][]{Moitinho2006, Vazquez2008, bobylev2014, Reddy2016, dias2005,cantat2018, cantat2020a}, the OCs are gradually dispersed in the Galactic disc as they age. Structural features are still discernible for the populous OCs with ages of tens of millions of years, but the older OCs (e.g., ages $>$ 200~Myr) present more diffuse distributions. Indeed, when using the OCs older than 1000~Myr as tracers, spiral arm features are almost unrecognisable. These properties indicate that as the YOCs age they gradually migrate away from their birthplaces, probably accompanied by the evolution of spiral arms. \begin{figure*} \centering \includegraphics[scale=0.25]{figures/Fig2.png} \caption{Proper motion vectors of the YOCs. A motion scale of 40~km~s$^{-1}$ is indicated in the bottom left corner of the plot. The ages of OCs are colour coded.
The background is the same as that of \citet[][]{reid2019}.} \label{fig:fig2} \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/Fig3.png} \caption{Average peculiar motions of the OCs in the direction of Galactic rotation as a function of cluster age. The age bin is 5~Myr. The magenta symbols and error bars indicate the average PMs and their associated errors.} \label{fig:fig3} \end{figure} Inspired by the above phenomena, we consider that, as OCs gradually age, the spiral arms traced by OCs of different ages are probably not the same; instead, they can be used to reflect the evolution of the spiral arms of the Galaxy. A subsample of OCs below 100~Myr in age is extracted from our catalogue and used to explore this issue. To ensure that there are plenty of OCs in each age interval, we divide those OCs that are below 100~Myr into three different age groups: $<$ 20~Myr (633 clusters), 20--60~Myr (334 clusters), and 60--100~Myr (262 clusters). From their distributions, the three nearby spiral arms (i.e., the Sagittarius-Carina, Local, and Perseus Arms) are discernible. Since fewer OCs are located in the more distant Scutum--Centaurus Arm, this arm cannot be well traced and is hence omitted from the following analysis. Then, in order to make a comparison with the maser results, we determine the parameters of the spiral arms with the same method that \citet[][]{reid2019} used in analysing their maser parallax data. To fit the pitch angle and arm width for a spiral arm traced by YOCs, a logarithmic spiral-arm model is adopted \citep{reid2014}, \begin{equation} \ln (R / R_{\mathrm{ref}}) = - (\beta - \beta_{\mathrm{ref}}) \tan \varphi, \end{equation} where \textit{R}$_{\rm ref}$ is the reference Galactocentric radius, $\beta_{\rm ref}$ is the reference azimuthal angle, and $\varphi$ is the pitch angle. The zero point of the Galactocentric azimuthal angle, $\beta$, is defined as a line towards the Sun from the Galactic centre, and the azimuthal angle increases towards the direction of Galactic rotation. The quantity \textit{R} is the Galactocentric radius at a Galactocentric azimuth $\beta$. The fitting results are listed in Table~\ref{table:table2}. In comparison to the values derived by using HMSFR masers, the pitch angles of the three nearby spiral arms traced by OCs tend to be smaller; the differences become larger when older OCs are used as tracers, thus implying a tighter winding of the spiral arms traced by older OCs. This is possibly a true feature of the Milky Way; however, the differences in pitch angles are not significant when considering the fitting uncertainties, and higher-quality OC data would be necessary to confirm this. Besides, the arm width is also an important parameter for a spiral arm. We note that the fitted arm widths for the OCs with ages of 60--100~Myr tend to be larger than those of the HMSFR masers. In comparison, the arm widths derived from YOCs are in general consistent with those of the HMSFR masers. \begin{figure*}[t] \centering \includegraphics[width=0.70\textwidth]{figures/Fig4.png} \caption{Evolution of the scale height of OCs with OC age. The 3376 OCs within 4.0~kpc of the Sun are divided into 32 bins according to their ages. The blue line indicates the evolution of the scale height with cluster age. The blue dots correspond to the typical values of log$_{10}$(age) in each bin.
\begin{figure*}[t] \centering \includegraphics[width=0.70\textwidth]{figures/Fig4.png} \caption{Evolution of the scale height of OCs with OC age. The 3376 OCs within 4.0~kpc of the Sun are divided into 32 bins according to their ages. The blue line indicates the evolution of the scale height with cluster age. The blue dots correspond to the typical values of log$_{10}$(age) in each bin. The solid magenta dots show the distances of the OCs to the Galactic plane as a function of cluster age.} \label{fig:fig4} \end{figure*} \subsection{Kinematic properties of OCs} With the distances, proper motions, and radial velocities of the 1742 OCs in the high-accuracy sample, we can derive their three-dimensional (3-D) velocity information following the methods of \cite{xu2013} and \cite{reid2019}. The 3-D velocities were calculated straightforwardly from the radial velocities and the linear speeds projected onto the celestial sphere, the latter obtained from the proper motions and distances. Then, the peculiar motions (PMs; i.e., non-circular motions) of the OCs were estimated by subtracting the effects of Galactic rotation and the solar PMs using the updated Galactic parameters. In this approach, we adopted a Galactic rotation speed of 236 $\pm$ 7~km~s$^{-1}$, with the Sun at a distance of 8.15 $\pm$ 0.15~kpc from the Galactic centre \citep{reid2019}. Solar PMs of \textit{U}$_{\odot}$ = 10.6 $\pm$ 1.2~km~s$^{-1}$, \textit{V}$_{\odot}$ = 10.7 $\pm$ 6.0~km~s$^{-1}$, and \textit{W}$_{\odot}$ = 7.6 $\pm$ 0.7~km~s$^{-1}$ were assumed \citep{reid2019}; these are the velocity components towards the Galactic centre, in the direction of Galactic rotation, and towards the North Galactic Pole, respectively. Sources with uncertainties larger than 10~km~s$^{-1}$ were eliminated from the following analysis. In Figure~\ref{fig:fig2}, we present the PMs of the YOCs in the solar neighbourhood. The directions of the PMs of the YOCs appear largely random. As the gravitational potential of the Galactic bar is weak in these regions, the PMs of the YOCs should be intrinsic. We find that the average peculiar motion is 1.3 $\pm$ 0.2~km~s$^{-1}$ towards the Galactic centre and $-$6.8 $\pm$ 0.2~km~s$^{-1}$ in the direction of Galactic rotation. Hence, on average the YOCs lag behind the Galactic rotation. Similar properties were recently noticed for HMSFR masers \citep[][]{reid2019} and OB stars~\citep{xu2018b}. We also find that the average lagging motion tends to increase as the OCs age (Figure~\ref{fig:fig3}). As discussed in Sect.~\ref{spiral structure}, the YOCs gradually migrate away from their birth arms. With the kinematic data, the mean time for the YOCs to traverse their natal spiral arms can be estimated. The average arm width \textit{w} is taken as 0.31~kpc, which is the mean width of the segments of the Perseus, Local, and Sagittarius--Carina Arms \citep{reid2019}. We select the YOCs that are located in spiral arms according to the best-fit model of \citet{reid2019}, which is consistent with our fitting results with YOCs as shown in Figure~\ref{fig:fig1} and Table~\ref{table:table2}. As shown in Figure~\ref{fig:fig2}, the YOCs migrate approximately towards either the Galactic centre or the Galactic anti-centre. Hence, the travel speed of the YOCs is defined as the average of the absolute values of the derived mean velocities. The travel speed of the YOCs located in spiral arms, $v$, is estimated to be 14.6 $\pm$ 0.2~km~s$^{-1}$. The mean time taken by a YOC to leave its natal spiral arm is then simply $t = w/v = 20.8 \pm 2.2$~Myr.
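The unit conversion behind this estimate is compact enough to state explicitly. The following sketch (variable names are ours; as no uncertainty on $w$ is quoted by \citet{reid2019}, a value of 0.03~kpc is assumed here for illustration) reproduces the number above:
\begin{verbatim}
import numpy as np

KM_PER_KPC = 3.0857e16     # kilometres per kiloparsec
S_PER_MYR  = 3.156e13      # seconds per megayear

w, dw = 0.31, 0.03         # arm width (kpc) and assumed error
v, dv = 14.6, 0.2          # travel speed (km/s) and error

t  = w * KM_PER_KPC / v / S_PER_MYR       # traversal time (Myr)
dt = t * np.hypot(dw / w, dv / v)         # errors added in quadrature
print(f"t = {t:.1f} +/- {dt:.1f} Myr")    # ~ 20.8 +/- 2.0 Myr
\end{verbatim}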
\subsection{Scale height of OCs} \label{sec:scale height} In this section, we focus on the vertical distribution of OCs. The high-accuracy subsample with 3611 OCs (Sect.~\ref{sample}) is adopted. Statistically, about 43.4\% of the 3611 OCs in our sample are located above the International Astronomical Union (IAU) defined Galactic plane ($b=0^\circ$). In previous works, asymmetric vertical distributions of gas or stars above and below the IAU-defined Galactic plane were commonly found \citep[e.g.,][]{ander19,reid2019}; these are generally interpreted as the Sun deviating slightly from the Galactic midplane towards the north Galactic pole, that is, the height of the Sun $z_{\odot}$ > 0~pc. Many efforts have been made to determine the value of $z_{\odot}$ \citep[e.g.,][and references therein]{cantat2020a}. By using the OCs with accurate {\it Gaia} parallaxes, we can refine the Galactic plane. We divide a subsample of 3376 OCs within 4.0~kpc of the Sun into 32 groups according to the OC ages, which are used to determine \textit{z}$_{\odot}$. Then, the \textit{z}$_{\odot}$ values of the 32 groups are averaged to obtain the mean vertical displacement of the Sun with respect to the Galactic plane. Here, the values of \textit{z}$_{\odot}$ and the scale height are determined using the following function \citep{maiz2001}: \begin{equation} N (z) = N \times \exp{\left[-\frac {1} {2}\left(\frac {z - z_{\odot}} { h}\right)^2\right]}, \end{equation} where $N$ is the central space density of OCs at \textit{z} = \textit{z}$_{\odot}$ with respect to the Galactic plane ($b=0^\circ$) and $h$ is the scale height.
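A minimal sketch of this profile fit (histogram binning followed by a nonlinear least-squares fit of the function above; the helper names are our own) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def profile(z, n0, z_sun, h):
    # N(z) = N * exp(-0.5 * ((z - z_sun) / h)**2), as in the text
    return n0 * np.exp(-0.5 * ((z - z_sun) / h) ** 2)

def fit_z_profile(z, nbins=40):
    # z: heights (pc) of the clusters relative to the b = 0 deg plane
    counts, edges = np.histogram(z, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), np.median(z), np.std(z))   # crude initial guess
    (n0, z_sun, h), _ = curve_fit(profile, centres, counts, p0=p0)
    return z_sun, h          # solar offset and scale height (pc)
\end{verbatim}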
By analysing the vertical distribution of the OCs with accurate {\it Gaia} parallaxes, the value of \textit{z}$_{\odot}$ is refined to be 17.5 $\pm$ 3.8~pc above the true Galactic plane. This value is consistent with the results given by \citet{buckner2014} and \citet{karim2017}, but smaller than that of \cite{cantat2020a}. The value of \textit{z}$_{\odot}$~=~17.5~$\pm$~3.8~pc is adopted in this work to adjust the $z$-offsets of the OCs and calculate the scale heights. Figure~\ref{fig:fig4} shows the evolution of the scale height with cluster age for the 3376 OCs within 4.0~kpc of the Sun, which is consistent with the gradual dispersal of the projected OCs over time. The scale height is about 42~pc for the OCs with a mean age of $t_{\rm OC}\sim$ 1.5~Myr and increases to about 130~pc for the OCs with $t_{\rm OC}\sim$ 850~Myr. For the OCs with ages greater than $t_{\rm OC}\sim$ 1~Gyr, the scale heights increase markedly. For the YOCs (ages $<$ 20~Myr), the scale height is estimated to be 70.5 $\pm$ 2.3~pc, while for the OCs with ages of 20--100~Myr the value is 87.4 $\pm$ 3.6~pc. The deviations of the scale heights of the very young OCs from those of HMSFR masers \citep[19 $\pm$ 2 pc;][]{reid2019} and O--B$_{5}$ stars \citep[34 $\pm$ 3 pc;][]{maiz2001} are small, but increase gradually as the OCs age. These results are consistent with the properties discussed in Sect.~\ref{spiral structure}. The OCs with ages of a few million years are still located in or very close to their birthplaces and hence have a small scale height; the spiral structure traced by these OCs resembles that of the HMSFR masers or OB stars. As the OCs age, they gradually migrate farther from their birthplaces and have larger scale heights. \begin{figure*} \centering \includegraphics[scale=1.48]{figures/Fig5.png} \caption{Distributions of the regressed OCs. The OC ages have been grouped into: (a) 20--40~Myr, (b) 40--60~Myr, (c) 60--80~Myr, and (d) 80--100~Myr, and traced back to 20, 40, 60, and 80 million years ago, respectively. The solid curved lines trace the centres (and the dotted lines the widths enclosing 90\% of the masers) of the best-fit spiral arms given by \cite{reid2019}, and have been rigidly rotated back to the past for comparison. The background is a new concept map of the Milky Way (credit: Xing-Wu Zheng \& Mark Reid, BeSSeL/NJU/CFA), which is also rigidly rotated back to the past. The adopted pattern speed is 27~km~s$^{-1}$ kpc$^{-1}$.} \label{fig:fig5} \end{figure*} \begin{table*}[ht!] \centering \caption{Best-fit results of the spiral arm parameters for the present-day OCs and the regressed OCs.} \begin{tabular}{cccccccc} \hline &Age groups & 0--20~Myr & 20--40~Myr & 40--60~Myr & 60--80~Myr & 80--100~Myr &\\ &Regress to: & & 20 MYA & 40 MYA & 60 MYA & 80 MYA& \\ \hline && & & Pitch angle: $\varphi$ (degree) & & \\ \hline &Sgr--Car Arm & 16.2 $\pm$ 0.4 & 16.3 $\pm$ 1.0 & 16.2 $\pm$ 1.1 & 16.2 $\pm$ 1.0 & 16.0 $\pm$ 1.0 &\\ &Local Arm & 10.8 $\pm$ 0.6 & 10.8 $\pm$ 0.9 & 10.5 $\pm$ 1.0 & 10.5 $\pm$ 0.9 & 10.6 $\pm$ 1.0 &\\ &Perseus Arm & 9.6 $\pm$ 0.8 & 9.3 $\pm$ 0.9 & 9.5 $\pm$ 0.9 & 9.6 $\pm$ 0.9 & 9.5 $\pm$ 1.0 &\\ \hline && & & Arm width: $w$ (kpc) & & &\\ \hline &Sgr--Car Arm & 0.27 $\pm$ 0.02 & 0.24 $\pm$ 0.09 & 0.24 $\pm$ 0.09 & 0.25 $\pm$ 0.09 & 0.25 $\pm$ 0.10& \\ &Local Arm & 0.24 $\pm$ 0.06 & 0.28 $\pm$ 0.03 & 0.20 $\pm$ 0.05 & 0.23 $\pm$ 0.10 & 0.22 $\pm$ 0.09 &\\ &Perseus Arm & 0.33 $\pm$ 0.03 & 0.29 $\pm$ 0.09 & 0.29 $\pm$ 0.09 & 0.30 $\pm$ 0.10 & 0.31 $\pm$ 0.10 &\\ \hline \end{tabular} \label{table:table3} \tablefoot{For each of the spiral arms near the Sun, we list its best-fit pitch angle ($\varphi$) and arm width (\textit{w}), including their 1$\sigma$ errors. Sgr--Car: Sagittarius--Carina Arm; MYA: million years ago.} \end{table*} \subsection{Regression analysis} Based on multiwavelength observations, we have some knowledge about the spiral structure of our Milky Way \citep[e.g.,][]{xu2018b}. However, the lifetime of the spiral arms of the Galaxy is still a puzzle; the question is whether they are long-lived or short-lived. Many of the older OCs (ages $>$ 20~Myr) in our sample have accurate parallaxes, proper motions, and radial velocities. With an appropriate model, we may trace their trajectories back tens of millions of years, to when they were very young (i.e., ages $<20$~Myr) and still resided in their natal spiral arms. With this method, the spiral structure in the past can be explored and compared with its current appearance. Such an approach provides a direct way to inspect the evolutionary history of the nearby spiral pattern and helps to address whether the spiral arms have remained stable or evolved with time. Tracing OCs back to their birthplaces has been proposed in previous works~\citep[e.g.,][]{dias2005,dias2019} and used to determine the pattern speed of the Galaxy under the assumption of a stable spiral pattern. In the following, we adopt a regression analysis method to inspect the stability of the nearby spiral structure. Since the YOCs ($<$ 20~Myr) trace the spiral arms well, we perform the regression analysis using 20-Myr intervals. The OCs in the age ranges of 20--40~Myr, 40--60~Myr, 60--80~Myr, and 80--100~Myr are regressed to 20, 40, 60, and 80~million years ago, respectively, by adopting a commonly used model of OC motions in the Milky Way \citep{wu2009}.
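To make the regression step concrete, here is a heavily simplified sketch: a leapfrog integration backwards in time in a toy axisymmetric potential with a flat rotation curve. This is not the potential used in the actual analysis (which is the full bulge--disc--halo model of \citet{allen1991}), and all names below are our own.
\begin{verbatim}
import numpy as np

K  = 1.0227e-3   # one km/s expressed in kpc/Myr
VC = 236.0       # flat rotation curve, km/s (toy stand-in)

def acc(xy):
    # in-plane acceleration a = -(VC^2 / R) r_hat, in (km/s)/Myr
    return -(VC**2 / (xy @ xy)) * xy * K

def regress(xy, v, t_back, dt=0.05):
    # trace the orbit back by t_back Myr: flip the velocity,
    # integrate forwards with leapfrog, then flip it back
    xy, v = xy.astype(float), -v.astype(float)
    for _ in range(int(t_back / dt)):
        v_half = v + 0.5 * dt * acc(xy)
        xy = xy + dt * K * v_half
        v = v_half + 0.5 * dt * acc(xy)
    return xy, -v

# a cluster on a circular orbit stays at R = 8.15 kpc
print(regress(np.array([8.15, 0.0]), np.array([0.0, 236.0]), 80.0))
\end{verbatim}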
The adopted model, which has been widely used to calculate the orbits of different objects \citep[e.g.,][]{odenkirchen1997, allen2006}, employs an axisymmetric Galactic gravitational potential \citep{allen1991} consisting of a spherical central bulge and a disc, plus a massive spherical halo extending to a distance of 100~kpc from the Galactic centre \citep{paczynski1990, flynn1996}. The total mass of the model is 9.0 $\times$ 10$^{11}$~M$_{\odot}$, and the local total mass density in the solar neighbourhood is 0.15 M$_{\odot}$ pc$^{-3}$. The results of the regression are shown in Figure~\ref{fig:fig5}. We also fit spiral arms to the distributions of the regressed OCs with the same method described in Sect.~\ref{spiral structure}. The fitting results are listed in Table~\ref{table:table3}. At present, about 72\% of the YOCs are located in the spiral arms defined by HMSFR masers. In comparison, the percentages are about 39\%$-$59\% for the older clusters in the four different age groups (20$-$40~Myr, 40$-$60~Myr, 60$-$80~Myr, and 80$-$100~Myr). After regression, many OCs are traced back to spiral arms in the past, as shown in Figure~\ref{fig:fig5}. We find that the percentages of regressed OCs in spiral arms increase to 71\%$-$85\%, comparable to the present-day value. The vertical scale height of the present-day OCs in the age groups of 20--100~Myr is about 87~pc. After regression, the scale height decreases to about 69~pc, close to the value for the present-day YOCs ($\sim$ 71~pc). In addition, we also notice that the best-fit pitch angles and arm widths of the Sagittarius--Carina, Local, and Perseus Arms at the different regression times are consistent with each other within the uncertainties, and are all very close to the present-day values traced by YOCs (Table~\ref{table:table3}). Hence, the spiral-arm features depicted by the regressed OCs are roughly concordant with those traced by the present-day YOCs. These results jointly suggest that the spiral arms near the Sun are probably compatible with a long-lived pattern, which has remained stable over the past 80~Myr. The Local Arm, which was thought to be a spur or branch but is now suggested to be a major spiral arm segment, has probably also existed for a long time. However, although the spiral pattern can be roughly recognised over almost 80~Myr, this alone is not enough to establish a long-lived stable pattern; it is also possible that the pattern is in the last stages of disintegration. In future works, we expect to further inspect this issue with more OCs and an improved model of OC motions in the Milky Way. The evolutionary characteristics of the nearby spiral arms revealed by OCs could provide observational constraints on the formation mechanisms of the spiral arms of the Galaxy, which remain hotly debated. There are two major competing scenarios: the dynamic spiral mechanism and the density wave theory. Proponents of the former argue that the spiral arms are short-lived, transient, and recurrent \citep{toomre1972, sellwood1984}. The dynamic spiral mechanism may be able to explain the offsets in some local regions, but it does not predict the systematic positional offsets between spiral arms delineated by different kinds of tracers. Our results are compatible with the long-lived and rigidly rotating nature of the nearby spiral arms, and hence seem to conflict with the predictions of the dynamic spirals, indicating that this mechanism might not be prevalent in our Milky Way.
In comparison, the (quasi-stationary) density wave theory suggests that the spiral arms are long-lived and rigidly rotating; this theory also predicts systematic spatial offsets between the spiral arms traced by HMSFRs and those traced by old stars \citep{lin1964, lin1966, shu2016}. In this work we show that the evolutionary properties of the spiral arms indicated by OCs are consistent with the expectations of the density wave theory. In addition, the results of a study addressing the metallicity gradient step near the Galactic corotation radius with OCs and Cepheids also suggested long-lived spiral arms for the Milky Way \citep{lepine2003}. However, we note that the density wave theory expects the Milky Way to have four major spiral arms, with no room for the Local Arm \citep{yuan1969}. The presence of a long-lived Local Arm ($\gtrsim$ 80~Myr) implied by our analysis strengthens the challenge previously posed on this issue \citep{xu2013, xu2016}. \citet[][]{lepine2017} interpreted the Local Arm as an outcome of the spiral corotation resonance, which traps arm tracers and the Sun inside it \citep[also see][]{baros2021}. Density wave theory predicts that the Galactic spiral pattern should remain unchanged over a long time and rotate at a constant speed \citep{shu2016}. With the regression analysis method, we can constrain the spiral pattern speed by comparing the OC locations after regression with the positions of the spiral arms rigidly rotated back to the corresponding epochs. Based on this, the present-day spiral arms are rotated back to 20, 40, 60, and 80~million years ago, respectively, and then compared with the distributions of the regressed OCs (Figure~\ref{fig:fig5}). Under the assumption of a constant pattern speed, independent of Galactocentric distance, the pattern speed is constrained to 26--28~km~s$^{-1}$~kpc$^{-1}$, consistent with the values given by, for example, \citet{dias2005, dias2019} and \citet{gerhard2011}. These results also suggest that the regression analysis method used in this work is feasible. \section{Conclusions} \label{conclusion} In summary, we have studied the evolution of the spiral arms of the Galaxy using a large catalogue of OCs based on {\it Gaia} EDR3. A regression analysis method applied to the OCs indicates that the spiral arms near the Sun are compatible with a long-lived spiral pattern, which might have been essentially stable for at least the past 80 million years. In particular, the Local Arm is probably also long-lived in nature and is well traced by many YOCs, HMSFR masers, and OB stars, all of which suggest that the Local Arm is probably a major arm segment of the Milky Way. These evolutionary characteristics of the spiral arms are more consistent with the expectations of the density wave theory; the dynamic spiral mechanism might not be prevalent in the Milky Way. These results are expected to be confirmed by further studies with more OCs in the Milky Way. \begin{acknowledgements} We thank the anonymous referee for the instructive comments, which helped us improve the paper. This work was funded by the NSFC, grant numbers 11933011, 11873019, 11673066, 11988101, and by the Key Laboratory for Radio Astronomy. L.G. Hou thanks the Youth Innovation Promotion Association CAS for support. We thank Dr. Mark J. Reid for allowing us to use his program for fitting the pitch angles of the Galactic spiral arms.
We used data from the European Space Agency mission \textit{Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the \textit{Gaia} Data Processing and Analysis Consortium (DPAC; see \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for DPAC has been provided by national institutions, in particular the institutions participating in the \textit{Gaia} Multilateral Agreement. \end{acknowledgements}
\section{Introduction} Recently, there has been great interest in various unconventional states of matter, in particular those that might appear in strongly-interacting 2D systems. These exotic phases may be characterized by unconventional order parameters. Alternatively, the phases may be \emph{topological}. In this case there is no local order parameter, but there may be a \emph{non-local} order parameter, which is related to the topological properties of the state: exotic braiding statistics, quantum number fractionalization, and a ground-state (GS) degeneracy which depends only on the topology of the underlying 2D manifold \cite{Wen90a}. The low-energy effective description of these phases is a topological quantum field theory (TQFT). A variety of topological phases are known to exist in the quantum Hall systems \cite{DasSarma97}. It has been conjectured that such phases also occur in frustrated magnets, where they may be connected to superconductivity \cite{Anderson87,Kivelson87,Read91a,Balents00,Senthil00,Laughlin88a,Chen89}. Topological phases are also an attractive platform for quantum computation, where their insensitivity to local perturbations leads to fault-tolerance \cite{Kitaev97}. Non-Abelian topological phases are particularly interesting in this context because, in many of these phases, the braiding of quasiparticles generates a set of transformations which is sufficient for universal quantum computation \cite{Freedman02a}. In this paper, we shall concentrate on microscopic models which are related to a class of $P$- and $T$-invariant non-Abelian topological phases \cite{Freedman04a}. A set of conditions which places a microscopic model in such a topological phase can be briefly summarized as follows. We suppose that the low-energy Hilbert space can be mapped onto that of a quantum loop gas. This is the case in a large class of models, including dimer models, certain spin models, and some interacting hard-core boson models. In such a model, basis states are associated to collections of non-intersecting loops \cite{Freedman04a,FNS03b,Freedman05b}. We give some examples in section \ref{sec:micro-models}. A Hamiltonian can act on states in this Hilbert space by doing the following: (i) the loops can be continuously deformed -- we will call this an isotopy move; (ii) a small loop can be created or annihilated -- the combined effect of this move and the isotopy move has been dubbed ``$d$-isotopy''\cite{Freedman03,Freedman04a,Freedman05b}; (iii) finally, when exactly $k+1$ strands come together in some local neighborhood, the Hamiltonian can cut them and reconnect the resulting ``loose ends'' pairwise so that the newly-formed loops are still non-intersecting. More specifically, in order for this model to be in a topological phase, the ground state of this Hamiltonian should be a superposition of all such pictures with the additional requirements that (i) if two pictures can be continuously deformed into each other, they enter the GS superposition with the same weight; (ii) the amplitude of a picture with an additional loop is $d$ times that of a picture without such a loop; (iii) this superposition is annihilated by the application of the Jones--Wenzl (JW) projector that acts locally by reconnecting $k+1$ strands (for a detailed description see the following section). The main goal of this paper is to investigate the energy spectrum of a system subject to the first two conditions.
We shall show that a generic local Hamiltonian which enforces $d$-isotopy for its ground state(s) is necessarily gapless provided that $\vert d \vert \leq \sqrt{2}$. (We shall also argue that, surprisingly, even the addition of the JW projector to the Hamiltonian will not open a gap for $\vert d \vert = \sqrt{2}$.) Such a Hamiltonian is a sum of projection operators (enforcing on every plaquette of the lattice both isotopy invariance and the value $d$ for a contractible loop). These projection operators do not commute with each other, but they are compatible with each other in the sense that they all annihilate the ground state. (Such Hamiltonians have also arisen in the context of the quantum Hall effect, where the Haldane pseudopotentials are projection operators which annihilate the Laughlin states \cite{Trugman85,Haldane-QHE}, but do not commute so the excited states are not known exactly, and, under the general name of ``parent Hamiltonians'', in quantum antiferromagnetism \cite{Majumdar69,Klein82,Affleck87}.) Exact knowledge of the ground state enables us to construct a variational \emph{ansatz} for the lowest energy excited state. The strategy is quite similar to the single-mode approximation (SMA), in which ${\rho_q}|0\rangle$ is the trial excited state, where $\rho$ is some conserved charge. In the case of a broken symmetry state, $\rho$ is the charge which generates the symmetry transformation which is spontaneously broken in the ground state. Our method generalizes the SMA in an important way: we do not rely on the existence of any conserved charges and, indeed, the model need not have any. Instead, we have the less restrictive condition that the configuration space of the model should break into two (or more) sectors whose volume is parametrically larger than the boundary between them, e.g. by a factor of the system size $L$. If the Hamiltonian only has matrix elements between nearby points in configuration space, then (\`{a} la Lieb-Schultz-Mattis \cite{Lieb61}) we can construct a wavefunction which is equal to the ground state wavefunction except for a relative sign change between the two sectors. The energy cost of this ``twisted'' excited state would be at most $\sim 1/L$ and is even smaller in the case of the models which we consider here. On the other hand, if the Hamiltonian directly connects the two putative sectors, then it means that in reality these are not two distinct sectors, and a sign change in the wavefunction would have an energetic penalty which does not scale to zero as $L\rightarrow\infty$. In this paper, we prove the general result on gaplessness described in the preceding paragraph. We apply this result to microscopic models implementing $d$-isotopy and find, remarkably, that they are gapless with $\omega\propto k^2$ in spite of the absence of any power-law equal-time correlation functions of local operators. \section{$d$-isotopy and its local subspaces} \label{sec:d-isotopy} In this section we present a more formal definition of $d$-isotopy; it can be skipped on a first reading. Readers interested in the details are also referred to Refs.~\onlinecite{Freedman03,Freedman04a,FNS03b}. Although we will eventually be considering a system on a lattice, it is useful to begin by defining the Hilbert spaces of interest, $\overline{V}_d$ and ${V}_d$, in the smooth, lattice-free setting. Consider a compact surface $Y$ and the set $S$ of all multiloops \footnote{A multiloop is a collection of non-intersecting loops and arcs on a surface.
The end points of an arc are required to lie on the boundary of the surface, see Ref.~\onlinecite{Freedman04a}.} $X\subset Y$. If $\partial Y$ (the boundary of $Y$) is non-empty, we fix once and for all a finite set $P$ of points on $\partial Y$ with $X \cap \partial Y =P$. We assume $Y$ is oriented but $X$ should \emph{not} be. There is a large vector space $\mathbb{C}^S$, of complex valued functions on $S$. We say $X$ and $X'$ are \emph{isotopic} ($X{\sim}X'$) if one may be gradually deformed into the other with, of course, the deformation being the identity on $\partial Y$ (see Fig.~\ref{fig:isotopy}). \begin{figure}[hbt!] \includegraphics[width=2.75in]{isotopy.eps} \caption{Isotopy on an annulus: $X\sim X'$ but $X \nsim X''$.} \label{fig:isotopy} \end{figure} We may view the isotopy relation as a family of linear constraints on $\mathbb{C}^S$, namely $\Psi(X)-\Psi(X') =0$ if $X \sim X'$. The subspace satisfying these linear constraints is now only of countable dimension; it consists of those functions which depend only on the isotopy class $[X]$ of $X$ and may be identified with $\mathbb{C}^{[S]}$, where $[S]$ is the set of isotopy classes of multiloops with fixed boundary conditions. Note that all isotopies can be generated by composing small locally-supported ones, so the relations we just imposed are ``local'' in the sense that they can be implemented with purely local terms in the Hamiltonian. Let us go further and define an additional local relation which, when added to isotopy, constitutes the ``$d$-isotopy'' relation. This relation is: \begin{equation} \label{eq:d-isotopy} d\;\Psi(X)-\Psi(X \cup \bigcirc)=0 \end{equation} It says that if two multiloops are identical except for the presence of a small (or, it follows, any contractible) circle, then their function values differ by a factor of $d$, a fixed positive real number. In cases of interest to us $1\leq d<2$, so our function is either neutral to or ``likes'' small circles. We call the subspace obeying all these constraints the $d$--isotopy space of $Y$ (with fixed boundary conditions) and write it as $\overline{V}_d \subset \mathbb{C}^{[S]} \subset\mathbb{C}^S$. The subspace $\overline{V}_d (T^2)$ is still of countable dimension, or extensively degenerate, on the torus $T^2$. The vector space $\overline{V}_d$ is clearly the ground state manifold (GSM) of a local Hamiltonian acting on $\mathbb{C}^{[S]}$. It is a remarkable fact \cite{Freedman03,Freedman04a} that it is very difficult to add any further local relations to $d-$isotopy without killing the vector space entirely. For the physically interesting cases $d = \alpha + \alpha^{-1} = 2\cos\left(\pi/(k+2)\right)$ with $\alpha = \text{e}^{\pi \text{i}/(k+2)}$, $k=1,2,3, \ldots$, there is such a local relation and a natural positive definite inner product on $\overline{V}_d$ (see refs.~\onlinecite{Freedman03,Freedman04a} for definition).
In these cases the local relations are essentially the Jones-Wenzl idempotents:\\ $k=1$: \begin{equation} \label{eq:JW1} \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.333333333333333,0) (0.333333333333333,0.5)(0.333333333333333,0.5)(0.333333333333333,1.0) \psbezier[linewidth=1.0pt](0.666666666666667,0)(0.666666666666667,0.5) (0.666666666666667,0.5)(0.666666666666667,1.0) \endpspicture \; - \; \frac{1}{d} \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.333333333333333,0) (0.333333333333333,0.333333333333333) (0.666666666666667,0.333333333333333)(0.666666666666667,0) \psbezier[linewidth=1.0pt](0.666666666666667,1.0) (0.666666666666667,0.666666666666667) (0.333333333333333,0.666666666666667)(0.333333333333333,1.0) \endpspicture =0 \end{equation} $k=2$: \begin{multline} \label{eq:JW2} \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.25,0)(0.25,0.5)(0.25,0.5)(0.25,1.0) \psbezier[linewidth=1.0pt](0.5,0)(0.5,0.5)(0.5,0.5)(0.5,1.0) \psbezier[linewidth=1.0pt](0.75,0)(0.75,0.5)(0.75,0.5)(0.75,1.0) \endpspicture + \frac{1}{d^2-1} \left(\: \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.25,0)(0.25,0.25)(0.5,0.25)(0.5,0) \psbezier[linewidth=1.0pt](0.75,0)(0.75,0.5)(0.25,0.5)(0.25,1.0) \psbezier[linewidth=1.0pt](0.75,1.0)(0.75,0.75)(0.5,0.75)(0.5,1.0) \endpspicture + \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.25,0)(0.25,0.5)(0.75,0.5)(0.75,1.0) \psbezier[linewidth=1.0pt](0.5,0)(0.5,0.25)(0.75,0.25)(0.75,0) \psbezier[linewidth=1.0pt](0.5,1.0)(0.5,0.75)(0.25,0.75)(0.25,1.0) \endpspicture \:\right) \\ - \frac{d}{d^2-1} \left(\: \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.25,0)(0.25,0.25)(0.5,0.25)(0.5,0) \psbezier[linewidth=1.0pt](0.75,0)(0.75,0.5)(0.75,0.5)(0.75,1.0) \psbezier[linewidth=1.0pt](0.5,1.0)(0.5,0.75)(0.25,0.75)(0.25,1.0) \endpspicture + \pspicture[0.4](1.0,1.0) \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.25,0)(0.25,0.5)(0.25,0.5)(0.25,1.0) \psbezier[linewidth=1.0pt](0.5,0)(0.5,0.25)(0.75,0.25)(0.75,0) \psbezier[linewidth=1.0pt](0.75,1.0)(0.75,0.75)(0.5,0.75)(0.5,1.0) \endpspicture \:\right) = 0 \end{multline} $k=3$: \begin{multline} \label{eq:JW3} \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.5)(0.4,0.5)(0.4,1.0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.5)(0.6,0.5)(0.6,1.0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.8,0.5)(0.8,1.0) } \endpspicture - \frac{d}{d^2-2} \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.2)(0.6,0.2)(0.6,0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.6,1.0)(0.6,0.8)(0.4,0.8)(0.4,1.0) } \endpspicture - \frac{d^2-1}{d^3-2d} \left(\: \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.2)(0.4,0.2)(0.4,0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.5)(0.6,0.5)(0.6,1.0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.4,1.0)(0.4,0.8)(0.2,0.8)(0.2,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) 
\psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.5)(0.4,0.5)(0.4,1.0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.2)(0.8,0.2)(0.8,0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.8)(0.6,0.8)(0.6,1.0) } \endpspicture \:\right) \\ + \frac{1}{d^2-2} \left(\: \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.2)(0.4,0.2)(0.4,0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.6,1.0)(0.6,0.8)(0.4,0.8)(0.4,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.6,0.5)(0.6,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.2)(0.6,0.2)(0.6,0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.4,1.0)(0.4,0.8)(0.2,0.8)(0.2,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.2)(0.6,0.2)(0.6,0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.4,0.5)(0.4,1.0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.8)(0.6,0.8)(0.6,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.2)(0.8,0.2)(0.8,0) \psbezier[linewidth=1.0pt](0.6,1.0)(0.6,0.8)(0.4,0.8)(0.4,1.0) } \endpspicture \:\right) \\ - \frac{1}{d^3-2d} \left(\: \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.2)(0.4,0.2)(0.4,0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.5)(0.2,0.5)(0.2,1.0) \psbezier[linewidth=1.0pt](0.8,0)(0.8,0.5)(0.4,0.5)(0.4,1.0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.8)(0.6,0.8)(0.6,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.5)(0.6,0.5)(0.6,1.0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.5)(0.8,0.5)(0.8,1.0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.2)(0.8,0.2)(0.8,0) \psbezier[linewidth=1.0pt](0.4,1.0)(0.4,0.8)(0.2,0.8)(0.2,1.0) } \endpspicture \:\right) + \frac{d^2}{d^4-3d^2+2} \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.2)(0.4,0.2)(0.4,0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.2)(0.8,0.2)(0.8,0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.8)(0.6,0.8)(0.6,1.0) \psbezier[linewidth=1.0pt](0.4,1.0)(0.4,0.8)(0.2,0.8)(0.2,1.0) } \endpspicture \\ - \frac{d}{d^4-3d^2+2} \left(\: \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.2)(0.4,0.2)(0.4,0) \psbezier[linewidth=1.0pt](0.6,0)(0.6,0.2)(0.8,0.2)(0.8,0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.4)(0.2,0.4)(0.2,1.0) \psbezier[linewidth=1.0pt](0.6,1.0)(0.6,0.8)(0.4,0.8)(0.4,1.0) } \endpspicture + \pspicture[0.4](0.9,0.9) \scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.6)(0.8,0.6)(0.8,0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.2)(0.6,0.2)(0.6,0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.8)(0.6,0.8)(0.6,1.0) \psbezier[linewidth=1.0pt](0.4,1.0)(0.4,0.8)(0.2,0.8)(0.2,1.0) } \endpspicture \:\right) + \frac{1}{d^4-3d^2+2} \pspicture[0.4](0.9,0.9) 
\scalebox{0.9}{ \psframe[linewidth=0.5pt](0,0)(1.0,1.0) \psbezier[linewidth=1.0pt](0.2,0)(0.2,0.6)(0.8,0.6)(0.8,0) \psbezier[linewidth=1.0pt](0.4,0)(0.4,0.2)(0.6,0.2)(0.6,0) \psbezier[linewidth=1.0pt](0.8,1.0)(0.8,0.4)(0.2,0.4)(0.2,1.0) \psbezier[linewidth=1.0pt](0.6,1.0)(0.6,0.8)(0.4,0.8)(0.4,1.0) } \endpspicture \\ =0, \end{multline} see Ref.~\onlinecite{Kauffman94} for a recursive formula. These relations define a finite dimensional Hilbert space $V_d (Y) \subset \overline{V}_d (Y) \subset \mathbb{C}^{[S]} \subset \mathbb{C}^S$. In Refs.~\onlinecite{Freedman03,Freedman04a} it is explained that $V_d(Y)$ is the Hilbert space for doubled $SU(2)_k$ Chern-Simons theory on $Y$. It has been argued \cite{Freedman05b} that a Hamiltonian with a ground state manifold corresponding to $\overline{V}_d$ is perched at a phase transition. When perturbed (infinitesimally for $k=1$ or $2$, and under a larger deformation for $k\geq 3$), it will go into a topological phase with low-energy Hilbert space $V_d$, i.e. into a phase described by doubled $SU(2)_k$ Chern-Simons theory. Very briefly, the Hilbert space of a TQFT such as doubled $SU(2)_k$ Chern-Simons theory can always be defined as the joint null space of commuting local projectors\footnote{See Ref.~\onlinecite{Freedman01} where private communication with A.~Kitaev and G.~Kuperberg is referenced. The required family of commuting projectors is easily derived in the Turaev-Viro approach \cite{Turaev} by writing surface$\times$interval, $\Sigma \times I =$ handle-body union 2-handles. The disjoint attaching curves of the 2-handles yield the commuting projectors.}, implying the existence of a local Hamiltonian with a spectral gap in the thermodynamic limit. Once a Hamiltonian $H_d$ has imposed $d-$isotopy, i.e. $GSM(H_d )= \overline{V}_d$, an extensive degeneracy has been created; the only \emph{local} way of lifting this extensive degeneracy (to a finite degeneracy) without creating frustration \footnote{Here we use the term ``frustration'' in reference to a Hamiltonian which can be written as a sum of projectors yet has no zero-modes. In this sense, new terms breaking the extensive degeneracy down to a crystalline structure will introduce frustration.} is to add the Jones-Wenzl projector to such a Hamiltonian. \section{Microscopic Lattice Models} \label{sec:micro-models} We now briefly review some examples of microscopic Hamiltonians whose ground state(s) are described by $d$-isotopy. These Hamiltonians may not be particularly realistic (e.g. the two spin Hamiltonians presented here do not conserve the total spin), but they have the advantage of being relatively simple local Hamiltonians with the desired ground states.
These ground states have the nice property that their squared norms are in one-to-one correspondence with the partition functions of known statistical mechanical models, a so-called \emph{plasma analogy} \cite{Laughlin83}\footnote{We stress that this is a term-by-term correspondence in the sense that, given the appropriate basis, the squared amplitudes of the basis states comprising the ground state are identical to the Gibbs weights of the corresponding statistical mechanical states (up to an overall constant).}. This approach has proved useful in studying many frustrated quantum models and, in particular, those with topological and quasi-topological orders \cite{Rokhsar88,Moessner01a,Misguich02,Henley97, Henley04,Castelnovo05a, Castelnovo05b}. The first example was presented in Ref.~\onlinecite{Freedman05b} and was inspired by Kitaev's model \cite{Kitaev97}. The model is defined on a honeycomb lattice, with the elementary degrees of freedom being $s=1/2$ spins situated on its links (alternatively, one can think of these spins as occupying the sites of a kagom\'{e} lattice, but the former description lends itself more readily to a loop representation). The Hamiltonian is given by \begin{multline} \label{eq:d-isotopy-Ham} H_{d}^{(1)} = {\sum_v} \biggl(1+{\prod_{i\in{\cal N}(v)}}{\sigma^z_i}\biggr)\\ + {\sum_p} \biggl( \frac{1}{d^2}{\left({F^0_p}\right)^\dagger}{F^0_p} + {F^0_p}{\left({F^0_p}\right)^\dagger} - \frac{1}{d}{F^0_p} - \frac{1}{d}{\left({F^0_p}\right)^\dagger}\\ + {\left({F^1_p}\right)^\dagger}{F^1_p} + {F^1_p}{\left({F^1_p}\right)^\dagger} - {F^1_p} - {\left({F^1_p}\right)^\dagger}\\ + {\left({F^2_p}\right)^\dagger}{F^2_p} + {F^2_p}{\left({F^2_p}\right)^\dagger} - {F^2_p} - {\left({F^2_p}\right)^\dagger}\\ +{\left({F^3_p}\right)^\dagger}{F^3_p} + {F^3_p}{\left({F^3_p}\right)^\dagger} - {F^3_p} - {\left({F^3_p}\right)^\dagger} \biggr) \end{multline} where ${\cal N}(v)$ is the set of 3 links neighboring vertex $v$, and \begin{eqnarray} {F^0_p} &=& {\sigma^-_1}{\sigma^-_2}{\sigma^-_3}{\sigma^-_4}{\sigma^-_5}{\sigma^-_6}\cr {F^1_p} &=& {\sigma^+_1}{\sigma^-_2}{\sigma^-_3}{\sigma^-_4}{\sigma^-_5}{\sigma^-_6} + \mbox{cyclic perm.}\cr {F^2_p} &=& {\sigma^+_1}{\sigma^+_2}{\sigma^-_3}{\sigma^-_4}{\sigma^-_5}{\sigma^-_6} + \mbox{cyclic perm.}\cr {F^3_p} &=& {\sigma^+_1}{\sigma^+_2}{\sigma^+_3}{\sigma^-_4}{\sigma^-_5}{\sigma^-_6} + \mbox{cyclic perm.} \end{eqnarray} Here, $1,2,\ldots,6$ label the six edges of plaquette $p$. This Hamiltonian is a sum of projection operators with positive coefficients and, therefore, is positive-definite. The eigenvalues of the first term in (\ref{eq:d-isotopy-Ham}) are $2, 0$, corresponding, respectively, to whether there is an even or odd number of ${\sigma^z}=-1$ spins neighboring this vertex. When eigenvalue zero is obtained at every vertex, the ${\sigma^z}=1$ links form loops. Hence, the zero-energy subspace of the first term is spanned by all configurations of multiloops (on the honeycomb lattice, they cannot cross). The term on the second line of (\ref{eq:d-isotopy-Ham}) is a projection operator which annihilates a state $|\Psi\rangle$ if the amplitude for all of the spins on a given plaquette to be up is a factor of $d$ times the amplitude for them to all be down, i.e. if the amplitude for a configuration with a small loop encircling a single plaquette is a factor of $d$ times the amplitude for an otherwise identical configuration without the small loop.
The terms on the other three lines of the Hamiltonian annihilate a state $|\Psi\rangle$ if it accords the same amplitude to configurations related by deforming a loop to enclose an additional plaquette; in other words, these terms enforce the usual isotopy relations. These terms are graphically represented in Fig.~\ref{fig:honeycomb-d-iso}. \begin{figure}[hbt] \includegraphics[width=2.5in]{honeycomb-d-iso.eps} \caption{The action of the terms in the last four lines of the Hamiltonian~(\ref{eq:d-isotopy-Ham}) represented graphically. The solid bonds correspond to the up-spins. Notice that the application of the Hamiltonian to any of the above plaquette configurations results in a superposition of this configuration and its counterpart with the appropriate amplitudes. The relative amplitudes of these configurations in the ground (0 eigenvalue) state correspond to the ``d''-isotopy rules described in the Introduction.} \label{fig:honeycomb-d-iso} \end{figure} The second example was presented in Ref.~\onlinecite{Freedman03}, motivated by the connection to a classical statistical mechanical model, namely the self-dual Potts model (this connection will be explored in the next section). The model is defined on a square lattice; the degrees of freedom are once again $s=1/2$ spins situated on its links. The Hamiltonian is given by \begin{multline} \label{eq:Mike-d-isotopy} H_d^{(2)} = \sum_{\pmb{\square}} \left(|3\rangle - \frac{1}{d} |4\rangle \right) \left(\langle 3| - \frac{1}{d} \langle 4| \right)\\ + \sum_{\pmb{+}} \left(|\widehat{1}\rangle - \frac{1}{d} |\widehat{0}\rangle \right)\left( \langle \widehat{1}| - \frac{1}{d} \langle \widehat{0}|\right) \end{multline} in the notation of Ref.~\onlinecite{Freedman03}. The first sum in Eq.~(\ref{eq:Mike-d-isotopy}) is taken over all elementary plaquettes of the lattice, $|3\rangle$ is a state with (any) three up-spins around a given plaquette, and $|4\rangle$ has all four spins up. The second sum is taken over all vertices; $|\widehat{1}\rangle$ and $|\widehat{0}\rangle$ correspond to a single spin or no spins up around a given vertex. Notice that this Hamiltonian is self-dual under flipping all spins and going to the dual lattice. The loops are now defined on the \emph{surrounding} (or midpoint) lattice, i.e. the lattice obtained by connecting the mid-points of adjacent edges. One should think of placing a double-sided mirror along a bond whose spin is up and placing a mirror along a dual bond if the spin is down. Loops are formed by propagating light in this labyrinth of mirrors. The action of the first term of Eq.~(\ref{eq:Mike-d-isotopy}) is demonstrated in Fig.~\ref{fig:Mikes-d-iso}. The action of the second (vertex) term is completely analogous due to the aforementioned duality. \begin{figure}[hbt] \includegraphics[width=0.5\columnwidth]{Mikes_loops.epsi} \caption{The action of the first (plaquette) term of Hamiltonian~(\ref{eq:Mike-d-isotopy}) on a shaded plaquette. A small loop is created or annihilated (with the appropriate amplitude) by being merged with another loop.} \label{fig:Mikes-d-iso} \end{figure} Finally, a reader interested in a more realistic Hamiltonian is referred to [\onlinecite{FNS03b,Freedman05a}], where the possibility of finding $d$-isotopy ground state(s) in an extended Hubbard model is discussed. We shall refrain from reviewing that construction here due to its complexity, which is unnecessary for the purposes of this paper. Let us turn to the common features of the above Hamiltonians.
The terms on the final four lines of (\ref{eq:d-isotopy-Ham}) do not commute with each other, and neither do the plaquette and vertex terms of (\ref{eq:Mike-d-isotopy}). However, they are compatible in the sense that each of these terms annihilates the state \begin{equation} \label{eq:ground-state} |{\Psi_0}\rangle = {\sum_{\{\alpha\}}} d^{\ell(\alpha)}\,|\alpha\rangle, \end{equation} which is a superposition of all configurations $\alpha$ of multiloops, weighted by a factor of $d$ raised to the number of loops $\ell(\alpha)$. Notice one important difference between these examples: by construction, in the second case the loops are fully packed, i.e. every edge of the surrounding lattice is traversed by a loop. Because of this constraint, the notion of simple isotopy is meaningless here; the multiloops, however, can fluctuate by ``ejecting'' and ``absorbing'' small loops. On an $L\times L$ torus, the ground state degeneracy is $\sim L^2$ because the Hamiltonian does not mix states $|\alpha\rangle$ with different winding numbers \footnote{In a fully-packed case, there is an additional geometric constraint limiting these winding numbers to either only even or only odd, depending on the size of the torus.}. The different ground states are given by (\ref{eq:ground-state}) but with the sum over $\alpha$ restricted to a single topological class. Notice that at least in the first example, the Hamiltonian dynamics is ergodic within a given topological sector; this is particularly trivial to see for the zero winding case since every loop configuration can be reduced to a configuration with no loops by applying the moves depicted in Fig.~\ref{fig:honeycomb-d-iso} to first shrink the loops to a single plaquette and then to annihilate them. In the second model, ergodicity is less obvious since the loop model is fully-packed and thus reducing any given configuration to a state with no loops is impossible. Moreover, it has been observed in Ref.~\onlinecite{Freedman03} that for non-zero winding numbers on certain finite tori there are configurations that are not connected by the ``moves'' of the Hamiltonian (\ref{eq:Mike-d-isotopy}). For the zero winding number, however, it can be shown that all configurations are connected to the state with all spins down, which in turn translates into a maximum possible number of loops, one per plaquette of the dual lattice. Hence the Hamiltonian is ergodic in the zero winding sector, and this is the sector we will be concerned with for the remainder of the paper. (Ergodicity is important because the twisted states we propose should have nothing to do with degeneracies associated with possible non-ergodicity.) More generally, the ground state on any genus $g\geq 1$ surface is infinitely degenerate in the thermodynamic limit. As we saw in the previous section, if $d=\pm1$ or $d=\pm\sqrt{2}$, there is a Jones-Wenzl projector which also annihilates the ground state (\ref{eq:ground-state}) on a topologically-trivial manifold but mixes different winding number sectors on higher-genus surfaces. Hence, in either model, at $d=\pm 1, \pm \sqrt{2}$, there are two Hamiltonians which have the same ground state on the sphere (or, equivalently, in the zero winding sector). The first ones, given by either Eq.~(\ref{eq:d-isotopy-Ham}) or Eq.~(\ref{eq:Mike-d-isotopy}), have extensively degenerate ground states on the torus while the other kind (with the Jones-Wenzl projector added) have finite degeneracy \cite{Kitaev97}.
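As a quick consistency check (a standard Temperley--Lieb computation, included here for the reader's convenience), one can close the four endpoints of the $k=1$ relation (\ref{eq:JW1}) in the two possible planar ways and evaluate each picture in the state (\ref{eq:ground-state}) by counting loops. Joining the two upper endpoints to each other, and likewise the two lower ones, turns the first picture into one loop and the second into two, giving $d - \frac{1}{d}\,d^2 = 0$, which holds identically. Joining each upper endpoint to the lower endpoint beneath it gives instead
\begin{equation*}
d^2 - \frac{1}{d}\, d \;=\; d^2 - 1,
\end{equation*}
which vanishes only for $d = \pm 1$. Similarly, the full trace closure of the $k=2$ relation (\ref{eq:JW2}) evaluates to $d^3 + \frac{2d}{d^2-1} - \frac{2d^3}{d^2-1} = d^3 - 2d$, which vanishes (for $d \neq 0$) precisely at $d = \pm\sqrt{2}$, in agreement with the values quoted above.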
The latter kind (with the projector added) leads to a topological phase with an energy gap for $d=1$ \cite{Kitaev97}, where the resulting model is exactly soluble, and, we believe, for $d=-1$ (the situation at $d=\pm\sqrt{2}$ is more subtle and will be addressed later). The spectrum of the first kind is the main subject of this paper. While the ground state can be obtained exactly, excited states cannot because the different operators in both (\ref{eq:d-isotopy-Ham}) and (\ref{eq:Mike-d-isotopy}) do not commute with each other. We will use a variational \emph{ansatz} to show that in the absence of other terms, such as Jones-Wenzl projectors, the spectra of these Hamiltonians are gapless in the zero winding sector. But before we can proceed, we need to address the inherent structure of these ground states in more detail. \section{Mapping of the Ground-State to a Statistical Mechanics Problem} Many properties of the ground-state wavefunction can be obtained by observing that the squared norm of the ground state is equal to the partition function of a classical loop model: \begin{eqnarray} \label{eq:sum-over-configs} \langle {\Psi_0}|{\Psi_0}\rangle = \sum_{\{\alpha\}} {d^{2{\ell(\alpha)}}}, \end{eqnarray} where $\alpha$ denotes a particular configuration of loops on a lattice (a ``snapshot'' of multiloops). The specific details of possible configurations depend on a particular choice of a Hamiltonian; in what follows we shall consider the cases relevant to each of the proposed Hamiltonians. \subsection{Potts model: random clusters and loops} \label{sec:Potts} The $q$-state Potts model, originally introduced as a generalization of the Ising model, is defined by the following classical Hamiltonian: \begin{equation} - \beta \mathcal{H} = J\sum_{<i,j>}{\delta_{\sigma_i, \sigma_j}} \,, \label{eq:Potts_Hamiltonian} \end{equation} where the sum is taken over all pairs of nearest neighbors and $\sigma_i$ is a discrete ``spin'' variable that can take on $q$ different values, e.g.\ $\sigma_i = 1, \ldots, q$. We now review the Fortuin--Kasteleyn (also known as random cluster) representation for the $q$-state Potts model and underline its basic properties. Given the Hamiltonian (\ref{eq:Potts_Hamiltonian}), the partition function can be written as \begin{multline} \label{eq:Potts_partition} Z_{\text{Potts}} = \sum_{\{ \sigma\}}\mathrm{e}^{ - \beta \mathcal{H}} =\sum_{\{ \sigma\}} \prod_{<i,j>}\mathrm{e}^{J \delta_{\sigma_i, \sigma_j}} \\ = \sum_{\{ \sigma\}}\prod_{<i,j>}\left[1+\left(\mathrm{e}^{J}-1\right) \delta_{\sigma_i, \sigma_j}\right] \\ = \sum_{\{ \sigma\}}\prod_{<i,j>}\left[1+ v\, \delta_{\sigma_i, \sigma_j}\right]\,, \end{multline} with $v\equiv \mathrm{e}^{J}-1$. The next step is to expand the product in Eq.~(\ref{eq:Potts_partition}). Every pair of nearest neighbors contributes a factor of either 1 or $v$ to each of the resulting terms, with the latter possibility available \emph{only} if the neighboring spins agree ($\sigma_i = \sigma_j$). Therefore every such term has a simple graphical representation: a bond is occupied if the pair of spins it connects contributes a factor of $v$ to a given term, and it is left vacant otherwise. Since a bond can be placed between the sites \emph{only} if their spins ``agree'', all spins belonging to the same bond cluster must have the same value.
The partition function thus becomes \begin{equation} \label{eq:Potts_partition_expanded} Z_{\text{Potts}} = \sum_{\{ \sigma\}}\prod_{<i,j>}\left[1+ v\, \delta_{\sigma_i, \sigma_j}\right] = \sum_{\{ \sigma\}}\sum_{\{ \omega\}} v^{b(\omega)} \Delta(\sigma,\omega)\,, \end{equation} where $b(\omega)$ is the total number of occupied bonds in a bond configuration $\omega$, and $\Delta(\sigma,\omega)$ is the appropriate collection of Kronecker delta-symbols that enforces the ``agreement'' between the spin and the bond configurations. The next step is to change the order of summation in Eq.~(\ref{eq:Potts_partition_expanded}) (which is fine as long as the lattice is finite) and then sum over all spin configurations $\sigma$. The only constraint on the spin variables (for a fixed bond configuration $\omega$) is the one that has been mentioned earlier: all connected spins must take on the same value (one of the $q$ possible). Therefore every connected cluster, as well as each isolated site, contributes a factor of $q$ to the resulting bond weight, and the partition function becomes \begin{equation} \label{eq:FK_rep1} Z_{\text{Potts}} =\sum_{\{ \omega\}}v^{b(\omega)}q^{c(\omega)} , \end{equation} where $c(\omega)$ is the total number of connected components (including isolated sites). The partition function (\ref{eq:FK_rep1}) defines the FK (random cluster) representation \cite{FK-72}\footnote{This derivation can be made slightly more formal if one defines a joint measure on both spin and bond configurations: $\mu(\sigma, \omega) = v^{b(\omega)} \Delta(\sigma,\omega)$. This is known as the Edwards--Sokal measure \cite{ES-88}. As follows from Eq.~(\ref{eq:Potts_partition_expanded}), when traced over both spins and bond occupations, it gives the desired partition function. Its marginal with respect to bond configurations is the Gibbs weight for spins, while its marginal with respect to spin configurations defines the weight for random clusters.}. While all previous considerations applied to any number of spatial dimensions, in 2D the partition function (\ref{eq:FK_rep1}) can also be written in a way which makes its self-dual property apparent. In order to do this, notice that if we think of an unoccupied bond of the actual lattice as an occupied bond of the dual lattice (therefore $b^*=B-b$, where $B$ is the total number of bonds), then every circuit (face) of the actual lattice contains a dual connected component: $f=c^*$ (here we define $b^*\equiv b(\omega^*)$, $c^*\equiv c(\omega^*)$ with $\omega^*$ denoting the configuration of dual bonds). Therefore, using the Euler relation (for a planar graph) \begin{equation} \label{eq:Euler} f(\omega) = b(\omega) + c(\omega) - N \end{equation} with $f(\omega)$ being the number of \emph{circuits} (defined as a minimum number of bonds that one has to cut in order to make a graph consist only of \emph{trees} \footnote{It is also known as a \emph{cyclomatic number} and for the case of a planar graph is equal to the number of \emph{finite faces} \cite{Essam-Fisher-70:graphs}.}) and $N$ being the total number of sites, we can rewrite Eq.~(\ref{eq:FK_rep1}) as \begin{equation} \label{eq:FK_rep3} Z_{\text{Potts}} =\sum_{\{ \omega\}}v^{f-c+N}q^{c} = v^N \sum_{\{ \omega\}} v^f \left(\frac{q}{v}\right)^c = v^N \sum_{\{ \omega\}} v^{c^*} \left(\frac{q}{v}\right)^c. \end{equation} If we ignore the uninteresting analytic prefactor in Eq.~(\ref{eq:FK_rep3}), we immediately see that the system is self-dual when $v=q/v$, i.e. $v=\sqrt{q}$.
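The equality of (\ref{eq:Potts_partition_expanded}) and (\ref{eq:FK_rep1}) is easily verified by brute force on a small graph; the following minimal sketch (our own notation) enumerates both sides.
\begin{verbatim}
from itertools import product

def Z_spin(edges, n, q, v):
    # sum over spins of prod_{<ij>} (1 + v * delta(s_i, s_j))
    total = 0.0
    for s in product(range(q), repeat=n):
        w = 1.0
        for i, j in edges:
            if s[i] == s[j]:
                w *= 1.0 + v
        total += w
    return total

def Z_fk(edges, n, q, v):
    # sum over bond subsets of v^b(w) * q^c(w); components via union-find
    total = 0.0
    for occ in product([0, 1], repeat=len(edges)):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for bond, (i, j) in zip(occ, edges):
            if bond:
                parent[find(i)] = find(j)
        c = len({find(x) for x in range(n)})
        total += v ** sum(occ) * q ** c
    return total

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a single square
q, v = 3, 3 ** 0.5                         # the self-dual point v = sqrt(q)
print(Z_spin(edges, 4, q, v), Z_fk(edges, 4, q, v))   # the two sums agree
\end{verbatim}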
\begin{figure}[hbt!] \includegraphics[width=6.0cm]{Potts_loops.eps} \caption{A typical cluster configuration for the Potts model is shown by dashed lines. Spins belonging to the same cluster take the same value, which must be summed over the $q$ possible values, as described in the text. Clusters can be represented by loops on the surrounding lattice, shown by solid lines. } \label{fig:Potts-loops} \end{figure} A so-called polygon decomposition \cite{BKW-76} lets us relate random clusters to a loop gas on the surrounding lattice. We think of an occupied bond as a double-sided mirror placed at the site of the surrounding lattice. If a bond is not occupied, then its dual bond is considered a mirror. Thus every site of the surrounding lattice gets one of the two possible mirrors. We then use these mirrors to construct paths as shown in figure~\ref{fig:Potts-loops}. Since these paths have no sources or sinks, they always form loops that either surround the clusters or are contained inside clusters (in the latter case, the loops surround \emph{dual} clusters). The number of loops $\ell$ is then given by $\ell=c+c^*$. If $v=\sqrt{q}$, i.e. if the Potts model is at its self-dual point, then \begin{eqnarray} \label{eq:Potts-loops} Z_{\text{Self-Dual}} = \sum_{\{ \alpha \}} {\left(\sqrt{q}\right)^{\ell(\alpha)}}, \end{eqnarray} where the sum is taken over all fully-packed loop configurations on the surrounding lattice. Clearly, with the choice of $q=d^4$, the partition function (\ref{eq:Potts-loops}) becomes the squared norm of the ground state $\langle {\Psi_0}|{\Psi_0}\rangle$ of the quantum Hamiltonian $H_{d}^{(2)}$. \subsection{Random clusters on a torus} \label{sec:mikes_thm} In this section we will take the reader through some mathematical details in order to establish certain properties of the critical FK model. These properties will be relied upon later, in the course of demonstrating the central result of this paper. For the sake of concreteness, let us focus on the FK representation of the critical $q-$state Potts model ($1 \leq q \leq 4$) on a square lattice with periodic boundary conditions in both directions, i.e. the torus. In the case of $q=1$, the statistical mechanical model reduces to critical bond percolation. We focus on the FK clusters introduced in Sec.~\ref{sec:Potts}. Let us begin by proving: \begin{proposition} \label{prop_fkcluster} Fixing $1 \leq q \leq 4$ and $\lambda > 0$, there is an $\epsilon > 0$ so that for all $L$ sufficiently large there is a probability greater than $\epsilon$ that the largest $\text{FK}_q$ cluster in the $L \times L$ torus has Euclidean diameter $< \lambda L$. \end{proposition} \begin{proof} The technology for this type of result was discovered and developed by the 1990's in the context of critical percolation \cite{FKG-71,ACCN-88,Grimmett-book,CPS-99} but it can be extended to other critical systems provided analogs of the Russo--Seymour--Welsh (RSW) inequality on crossing probabilities for rectangles and of the Fortuin--Kasteleyn--Ginibre (FKG) inequality (``monotone events are positively correlated'') hold. The proof strategy is to build some {\em wiggly} approximation $G_\text{w}$, lying entirely in the complement of FK clusters, to a rectilinear grid of spacing (roughly) $\frac{\lambda L}{2}$. $G_\text{w}$ must not be {\em too} wiggly as we need the boxes in its complement to have diameter~$< \lambda L$ (see Fig.~\ref{fig:wiggly-approx}). Since each FK cluster lies in some such box, the clusters also have diameter~$< \lambda L$.
\begin{figure}[hbt] \includegraphics[width=0.8\columnwidth]{grid3.eps} \caption{Construction of a wiggly approximation $G_\text{w}$ to a rectangular grid as described in the text. $G_\text{w}$ is built as a subset of the \emph{dual} clusters; the goal of the construction is to ``trap'' the actual FK clusters inside $G_\text{w}$, as shown here.} \label{fig:wiggly-approx} \end{figure} Building the grid $G_\text{w}$ is a random construction and must succeed with probability $> \epsilon$. The naive idea for building the grid is to first condition on no clusters meeting the (roughly) $\left(\frac{L}{2\lambda}\right)^2$ sites (i.e.\ $4$-coordinated vertices) of the desired grid $G_S$. Then, use RSW to further condition on cluster-disjoint arcs lying in small rectangles near the (roughly) $2 \left(\frac{L}{2\lambda}\right)^2$ bonds of $G_S$. What goes wrong is that the individual sites are simply too small; in the analogous electromagnetic problem the capacitance vanishes in the thermodynamic limit. The resolution is to consider small annuli enclosing each site and substitute for the site condition the event that an essential circle percolates around the annulus (also an RSW result). Now, elementary planar topology allows bits and pieces of the annular and rectangular percolation paths to be hooked together to build the desired $G_\text{w}$ as shown in Fig.~\ref{fig:wiggly-approx}. Throughout this construction, it is essential that these percolation events are at least non-negatively correlated (we remind the reader that the properties of dual clusters are identical to those of the actual FK clusters at criticality). This is precisely what the FKG inequality provides \cite{FKG-71,ACCN-88}. \end{proof} An immediate consequence of proposition \ref{prop_fkcluster} is that there is a non-zero probability that an arbitrary multi-loop is in the zero-winding-number sector. Thus, we may confine our discussion to that sector and use the measure on that sector induced by the measure $d^{2\ell(\alpha)}$ on all multi-loops. Let us pause for some definitions. Given a loop $a$ in the torus $T$, let $||a||_x$, or simply $||a||$ for short, be the ``breadth in the $x$-direction of $\widetilde{a}$,'' where $\widetilde{a}$ represents a (any) lift of $a$ up to the universal cover $\mathbb{R}^2 \rightarrow T^2$. Recall that the universal cover of the torus $T^2$ is constructed by unwrapping completely in both the $x$ and $y$ directions. Very concretely, we may build the cover $\mathbb{R}^2$ from our original $L \times L$ square by taking a copy for each Gaussian integer and gluing these together (corresponding to Gaussian integers differing by $1$ or $i$) to form a tiling of the plane. (Technically, it is the discrete plane $\mathbb{Z}^2$ that we produce.) A loop $a$ in $T^2$ may be regarded as a map from the circle $S^1$ into $T^2$ and a lift $\widetilde{a}$ is any map to $\mathbb{R}^2$ making the following diagram commute: \[\begindc{0}[30] \obj(1,1)[A]{$S^1$} \obj(3,1)[B]{$T^2$} \obj(3,3)[C]{$\mathbb{R}^2$} \mor{A}{B}{$a$} \mor{C}{B}{} \mor{A}{C}{$\widetilde{a}$}[\atleft, \dasharrow] \enddc\] Concretely, the image of $\widetilde{a}$ is obtained by taking an arc in the $L \times L$ square and following its continuation in the tiling of $\mathbb{R}^2$ until it closes back on itself. The assumption that the winding sector is trivial means any such $\widetilde{a}$ is finite and in fact contains exactly as many bonds as $a$ does in $T^2$, i.e.\ $< 2L^2$.
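To make the notion of a lift concrete, here is a small illustrative sketch of ours (not part of the argument): a loop is specified by its sequence of unit steps on the $L \times L$ torus, its lift is obtained by accumulating those steps in $\mathbb{Z}^2$ without reducing modulo $L$, and both the winding numbers and the $x$-extent of the lift (the breadth $||a||$ defined next) can be read off directly:
\begin{verbatim}
# Illustrative sketch (not part of the proof): lift a loop, given as a
# list of unit steps on the L x L torus, to the universal cover Z^2 by
# accumulating the steps *without* reducing modulo L, then read off its
# winding numbers and its extent in the x-direction.
L = 8
steps = [(1, 0), (0, 1), (1, 0), (0, 1),
         (-1, 0), (0, -1), (-1, 0), (0, -1)]   # a small closed loop

x, y = 0, 0
lift = [(x, y)]
for dx, dy in steps:
    x, y = x + dx, y + dy      # no "mod L": this is the lift, not the torus
    lift.append((x, y))

# On the torus the walk closes, so the net displacement is a multiple of L;
# dividing by L gives the winding numbers ((0, 0) in the trivial sector).
winding = (lift[-1][0] // L, lift[-1][1] // L)
xs = [p[0] for p in lift]
print(winding, max(xs) - min(xs))   # x-breadth of the lift
\end{verbatim}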
To return to the definition of $||a||$, we say $||a|| = \max|x_i - x_j|$, where the maximum is taken over all pairs of sites $(x_i, y_i)$ and $(x_j, y_j)$ on a lift $\widetilde{a}$. Note that any two lifts $\widetilde{a}$ and $\widetilde{\widetilde{a}}$ are congruent by a translation of $\mathbb{R}^2$, so $||a||$ is well defined. Finally, for a multi-loop $\alpha \subset T^2$ we define $||\alpha|| = \max_{a \subset \alpha} ||a||$. We are now ready to state a proposition about the diameter of these unwrapped components of random multi-loops from the trivial sector. \begin{proposition} \label{prop_normalpha} In the scaling limit, $||\alpha||$ is almost surely a bounded function on the trivial winding sector. More precisely, the probability that $||\alpha||>rL$ has an upper bound, decaying polynomially in $r$, which is independent of $L$. \end{proposition} \begin{proof} We need a geometric lemma. \begin{lemma} \label{lemma_torus} Let $T$ be a Euclidean torus of area $1$. Let $\gamma : [0,1] \rightarrow T$ be an arc so that the (Euclidean) distance between lifted endpoints in the universal cover, $d(\widetilde{\gamma}(0), \widetilde{\gamma}(1)) = \delta$. Then there is a nontrivial deck translation (additive action of a Gaussian integer) $\widetilde{\gamma}'$ of $\widetilde{\gamma}$ so that $dist(\widetilde{\gamma}, \widetilde{\gamma}') < \frac{1}{\delta}$. \end{lemma} \begin{proof} Let $X$ be the radius-$r$ neighborhood of $\widetilde{\gamma}$ in the cover $\mathbb{R}^2$. Integrating the lengths of slices of $X$ perpendicular to the straight line segment $[\widetilde{\gamma}(0), \widetilde{\gamma}(1)]$ we see (by Fubini's theorem) that $\text{area}(X) > 2r\delta$ (where area is counted {\em without} multiplicity). If $X$ is disjoint from its deck transformations then it descends one-to-one into the torus, implying $\text{area}(X) < 1$. Thus, $2r\delta < 1$ and so $r < \frac{1}{2\delta}$. Taking $r = \frac{1}{2\delta}$, the neighborhood $X$ must therefore meet one of its deck translates, which places some translate $\widetilde{\gamma}'$ within distance $2r = \frac{1}{\delta}$ of $\widetilde{\gamma}$. \end{proof} Now apply the lemma to arcs $\gamma$ within an FK cluster $K$ (in the trivial sector) on $T$. Since we are in the trivial sector, we may define $||K||$ exactly as we defined the norm on loops. By the lemma, if $||K|| = \delta$, $K$ will come within distance ${1}/{\delta}$ of completing a nontrivial wrapping of the torus. An application of the Russo-Seymour-Welsh (RSW) inequality shows that it is unlikely (vanishing algebraically in ${1}/{\delta}$) to come close to wrapping $T^2$ but fail to wrap completely. RSW amounts to integrating the effect of bringing bonds, which may join large clusters, in and out of our FK snapshot. Such fluctuations {\em cannot} be created by any local $H_d$ for $q \neq 1$ since the weight of the update is nonlocal. Nevertheless, these fluctuations do preserve the measure $d^{2\ell(\alpha)}$. So, the fraction of critical, topologically trivial, $\text{FK}_q$ snapshots ($1 \leq q \leq 4$) which contain a cluster $K$ with $||K|| > \delta$ is bounded by $\frac{1}{\text{poly}(\delta)}$. \end{proof} \subsection{$\text{O}(n)$ loop model} \label{sec:On} Another model describing the statistics of loops on a lattice is the so-called $\text{O}(n)$ loop model \cite{Domany-81}. Let us begin by defining an $\text{O}(n)$ spin model on some (finite) lattice via the following \emph{partition function}: \begin{equation} \label{eq:O(n)_part} Z_{\text{O}(n)}(x) = \int {\prod_i} {\frac{d \hat{S}_i}{\Omega_n}}\,\prod_{\langle i,j\rangle} (1+x\,{\mathbf{S}_i}\cdot {\mathbf{S}_j}) \end{equation} with $\mathbf{S}_i \in {\mathbb R}^n$, $\left|{\mathbf{S}_i}\right| = 1$, and $\Omega_n$ the $n$-dimensional solid angle.
In order to obtain the loop model, we write $\mathbf{S}_i \cdot \mathbf{S}_j = S_i^{(1)} S_j^{(1)} + \ldots + S_i^{(n)} S_j^{(n)}$ and define $n$ different colors (each of which will be associated with a specific component of the $O(n)$-spins). Multiplying out all terms in Eq.~(\ref{eq:O(n)_part}), we have $n$ choices for each bond plus the possibility of a vacant bond. Thus, the various terms are represented by an $n$-colored bond configuration: $\mathcal{G} = (\mathcal{G}_1, \ldots, \mathcal{G}_n)$ with $\mathcal{G}_\ell$ denoting those bonds where the term $S_i^{(\ell)} S_j^{(\ell)}$ has been selected. Clearly, the various $\mathcal{G}_\ell$'s are pairwise (bond) disjoint. Thus, for each $\mathcal{G}$ we obtain the weight \begin{equation}   W_\mathcal{G} = \mathrm{Tr} \prod_{\langle     i, j \rangle \in \mathcal{G}_1} x \, S_i^{(1)} S_j^{(1)} \ldots   \prod_{\langle i, j \rangle \in \mathcal{G}_n} x \, S_i^{(n)}   S_j^{(n)}\,.   \label{bond_weight} \end{equation} On the basis of elementary symmetry considerations it is clear that $W_\mathcal{G} \neq 0$ if and only if each vertex houses an even number (which could be 0) of bonds of each color. Once this constraint is satisfied, we get an overall factor of $x^{b(\mathcal{G})}$ -- with $b(\mathcal{G})$ being the total number of bonds -- times the product of the \emph{vertex factors} obtained by performing the appropriate $O(n)$ integrals. Obviously, these vertex factors depend only on how many different colors enter each vertex and on the number of bonds of each such color (i.e.\ not on the particular colors involved nor on the directions of approach to the vertex). A particularly easy case is that of the honeycomb lattice where, due to the low coordination number, a maximum of two bonds of a single color can visit a vertex. The corresponding vertex factor is then given by \begin{equation} \label{eq:O(n)_vert} \int {\frac{d \hat{S}_i}{\Omega_n}}\, \left(S_i^{(j)}\right)^2 = \frac{1}{n} \end{equation} leading (after summing over all $n$ colors) to the following expression for the partition function: \begin{equation} \label{eq:O(n)-def} Z_{\text{O}(n)}(x) ={\sum_{\{\alpha\}}} \left(\frac{x}{n}\right)^{b(\alpha)}\,n^{\ell(\alpha)} \end{equation} where $b(\alpha)$ is the total number of occupied bonds (the total perimeter of all loops) while ${\ell(\alpha)}$ is the total number of loops. The last factor appears because each loop can be of any one of the $n$ colors. The expression in (\ref{eq:O(n)-def}) is well-defined for arbitrary $n$ and $x$, so it can be taken as the \emph{definition} of the $O(n)$ loop model \cite{Nienhuis87}. We note that the partition function (\ref{eq:O(n)-def}) is \emph{exactly} the norm of the ground state $\langle {\Psi_0}|{\Psi_0}\rangle$ of the quantum Hamiltonian $H_{d}^{(1)}$ given by Eq.~(\ref{eq:d-isotopy-Ham}) provided that $x=n$ and $d^2 = n$. \subsection{Correlations} \label{sec:correlations} Both of these models have a Coulomb gas representation \cite{Nienhuis87} so that their correlation functions can be obtained from exponential operators in a Gaussian field theory with a background charge. Consider, for instance, the O$(n)$ model. Precisely the same loop expansion derived in section \ref{sec:On} can be obtained from an SOS model on the kagom\'{e} lattice. After integrating out the triangular faces, the resulting SOS model for the heights on the hexagonal faces has a low-temperature expansion which is a sum over domain wall configurations.
The weights of the SOS model (or, equivalently, 6-vertex model) are such that when a domain wall turns left, it acquires a factor $({x}/{n})e^{i\chi}$; when it turns right, a factor $({x}/{n})e^{-i\chi}$. Since a loop will only close if the difference between the number of right turns and the number of left turns is $\pm 6$, every closed loop receives a factor ${\left({x}/{n}\right)^b} e^{\pm 6 i\chi}$ where $b$ is the length of the loop. Summing over both orientations of the loop, we obtain $2{\left({x}/{n}\right)^b} \cos{ 6\chi}$. Hence, this is equivalent to the O$(n)$ loop model for $n<2$ if we take $n=2\cos{ 6\chi}$. When $x>{x_c}=n/\sqrt{2+\sqrt{2-n}}$, this model is in its low-temperature phase, which is critical. The SOS model is a Coulomb gas with coupling $g=1-{6\chi}/{\pi}$ and background charge $-2(1-g)$ (which ensures the correct phase factor for each turn of a loop). Consider the $\langle {\bf S_i}\cdot {\bf S_j}\rangle$ correlation function in the O$(n)$ model. Every configuration which gives a non-zero contribution must have one curve which does not close into a loop but has endpoints at sites $i$ and $j$. In Coulomb gas language, these endpoints carry magnetic charges $\pm 1/2$. In addition, they must each also have electric charge $1-g$. Together with their magnetic charge, this will ensure that a factor $e^{\pm 6i\chi}$ arises whenever the curve winds around either point (as required by the SOS vertex rules). This electric charge also cancels the background charge. Thus, the Coulomb gas operators corresponding to O$(n)$ spin operators have electric and magnetic charges $(1-g,\pm 1/2)$. The exponent associated with a correlation function between fields with electric and magnetic charges $({e_1},{m_1})$ and $({e_2},{m_2})$ is $x_{{e_1},{m_1};{e_2},{m_2}}=-{{e_1}{e_2}}/{2g} -{g{m_1}{m_2}}/{2}$. Hence, the O$(n)$ spin-spin correlation function has the power-law decay \cite{Nienhuis87}, \begin{equation} \label{O(n)-spin-spin} \left\langle \mathbf{S}(r) \cdot \mathbf{S}(0)\right\rangle \sim \frac{1}{r^{x_M}}, \end{equation} where ${x_M}=\frac{1}{4}\,g - \frac{1}{g}(1-g)^2$. The loops which surround random clusters, as described in Sec.~\ref{sec:Potts}, in the self-dual critical $q$-state Potts model can also be mapped onto a 6-vertex model (and, therefore, a Coulomb gas) in a similar fashion. Precisely the same exponents are obtained. Although they are crucial to our proof of gaplessness, these correlation functions are not the ones of direct physical interest. Physical correlation functions have a rather different behavior. Consider equal-time correlations between \emph{quantum} spins such as $\langle{\sigma^{z}_i}{\sigma^{z}_j}\rangle$ in the original quantum models (\ref{eq:d-isotopy-Ham},\ref{eq:Mike-d-isotopy}). As we now argue, they are \emph{short-ranged in space}. In the first model, the equal-time $\langle{\sigma^{z}_i}{\sigma^{z}_j}\rangle$ correlation function is related to the probability that a loop passes through $i$ and a loop which may or may not be distinct passes through $j$. Such correlation functions vanish in the $O(n)$ loop models. Analogously, in the second case, this correlation function is related to the probability that the two corresponding bonds are parts of clusters, though not necessarily the same cluster, in the related $q$-state Potts model. Such a correlation function once again vanishes.
In Coulomb gas language for the associated statistical mechanical models, the reason that such correlation functions vanish is that they are correlation functions of electrically neutral operators, such as gradients of the height (to which the local loop density corresponds). Such correlation functions vanish since they do not cancel the background charge. (The only exception is a height model, with central charge $c=1$, for which there is no background charge. Gradients of the height have power-law correlations.) As we have seen, algebraic decay is possible for correlation functions of operators which are charged in the Coulomb gas picture, but these are non-local in terms of the spins ${\sigma^{z}_i}$ since they measure, for instance, the probability that two spins ${\sigma^{z}_i}$ and ${\sigma^{z}_j}$ lie on the \emph{same loop}. (At $d=1,\sqrt{2}$, this can also be seen from the fact that the ground state on the sphere is the same -- and, therefore, has the same equal-time correlation functions -- as that of a gapped Hamiltonian \cite{Kitaev97,Turaev92} which is a sum of local commuting operators and, therefore, has correlation length zero.) Thus, the ground state wavefunction of (\ref{eq:d-isotopy-Ham}) has an underlying power-law long-ranged structure which is apparent in its loop representation, but it is not manifested in the correlation functions of local operators ${\sigma^{z}_i}$. As we will see momentarily, this long-range structure leads to gapless excitations for the Hamiltonian (\ref{eq:d-isotopy-Ham}) and, therefore, long-ranged correlations in time in spite of the lack of long-ranged correlations in space. We call such a state of matter a \emph{quasi-topological} critical point. \section{Low-Energy Excitations} In spite of the short-ranged nature of equal-time spin-spin correlation functions and the absence of any conservation laws for either of the Hamiltonians (\ref{eq:d-isotopy-Ham},\ref{eq:Mike-d-isotopy}), we can construct a variational argument that Hamiltonians of this general type are gapless, using the criticality of non-local correlation functions, specifically the scale-invariant nature of loops. The general idea of our proof is to produce a ``twisted'' state which is both orthogonal to the ground state and has a vanishingly small expectation value of the Hamiltonian, in the general spirit of the Lieb-Schultz-Mattis theorem for quantum antiferromagnets \cite{Lieb61,Hastings04a,Hastings05a}. For the purposes of this theorem we do not need to choose a particular Hamiltonian, but only require that it satisfy the following necessary conditions: \begin{itemize} \item This is a $d$-isotopy Hamiltonian whose ground state is described by Eq.~(\ref{eq:ground-state}) with a scale-invariant distribution of loops in the thermodynamic limit. We make the mathematically nontrivial assumption, widely accepted in physics, that the scaling limit exists. Furthermore, certain probability functions defined from the limit will be formally differentiated. These derivatives can easily be replaced by finite difference quotients, so continuity is, in fact, an adequate assumption. (We shall make this condition more precise later.) \item A second important feature of the ground state which we use is that when viewed as a statistical mechanical ensemble with probabilities $\text{Prob}(\psi_i) = |\psi_i|^2$, it is, in the scaling limit, a critical system with the ``no large cluster property,'' i.e.
on an $L \times L$ square with periodic boundary conditions (``torus'') $\forall \lambda > 0, \exists \epsilon > 0$ so that the event ``all loops within the random multi-loop $\psi_i$ have breadth (as defined in Section~\ref{sec:mikes_thm}) $< \lambda L$'' occurs with probability greater than $\epsilon$. (We have seen that the FK models for $1 \leq q \leq 4$ have this property. We also expect this property to hold for the O($n$) loop models, $1\leq n \leq 2$; proving this would be quite interesting.) \item The Hamiltonian is local: all allowed $d$-isotopy moves have finite range (generalizing our theorem to the case of a quasi-local Hamiltonian whose terms decay exponentially with increasing range is straightforward). Without loss of generality, we might as well assume that the range of all terms is limited to a single lattice plaquette. \item The terms responsible for the loop dynamics are bounded uniformly in the size of the system, i.e.\ for any two multiloops $\alpha$ and $\beta$, $\left|\langle \alpha| H_d | \beta \rangle \right| < V$. (We assume that the basis vectors $|\alpha\rangle$ of the Hilbert space of $H_d$ are orthonormal, i.e.\ $\langle \alpha| \beta \rangle = 0$ unless $\alpha$ and $\beta$ are identical multi-loops, and $\langle \alpha| \alpha \rangle = 1$). Notice that we make no assumptions about the other terms that might be there to enforce the multiloop constraint -- these can potentially include hard-core interactions. \end{itemize} There is little doubt that both of the presented Hamiltonians satisfy the above conditions. However, our argument is more complete in the second case (FK$_q$, $1 \leq q \leq 4$). We will now construct our low energy excitation $|{\psi_1}\rangle$ above the ground state $|{\psi_0}\rangle$. We normalize so that $\langle \psi_0 | \psi_0 \rangle = 1$, $\langle \psi_0 | H | \psi_0 \rangle = 0$, and $\langle \psi_1 | \psi_1 \rangle = 1$. We will construct $|{\psi_1}\rangle$ and then a family of ``harmonics'' $|{\psi_k}\rangle$, all of norm one, and estimate the energy expectation values $\langle \psi_k | H | \psi_k \rangle$, $k \geq 1$. In constructing $|{\psi_k}\rangle$, we will use the language of the scaling limit for conceptual simplicity. It is quite routine, and we leave this to the reader, to back away from the scaling limit and write discrete formulae, replacing derivatives with difference quotients. We use ${\cal C}$ to denote a configuration (i.e.\ a multi-loop) near the scaling limit, i.e.\ $L \to \infty$, on $T$, the $L \times L$ torus. By proposition \ref{prop_normalpha} we know there is a function $r_{\cal C}$ on configurations which remains almost surely defined in the scaling limit: $r_{\cal C} \equiv {||{\cal C}||}/{L}$. Our assumption is that, in the scaling limit, the probability $p$ that a configuration $\cal C$ satisfies $r_{\cal C} \leq r$ is a continuous function $p(r)$. The probability is computed with respect to the $\text{FK}_q$ measure, i.e.\ with respect to $|{\psi_0}\rangle$. As explained earlier, we will formally treat $p$ as differentiable, writing ${dp}/{dr}$, but this may be treated as a difference quotient. The ground state wave function (on the trivial sector) $|{\psi_0}\rangle$ is simply: \begin{equation} |{\psi_0}\rangle = Z^{-1/2}{\sum_{\alpha}}d^{\ell(\alpha)}\,|\alpha\rangle \end{equation} where the sum is over all multi-loops $\alpha$ in the zero-winding-number sector and the normalization $Z={\sum_{\{\alpha\}}} d^{2\ell(\alpha)}$ is the partition function of the associated statistical mechanical model (Potts or O(n)).
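As a toy illustration of this normalization (ours; the configuration list below is a fictitious stand-in, since all that enters are the loop counts $\ell(\alpha)$), one can check $\langle \psi_0|\psi_0\rangle = 1$ directly:
\begin{verbatim}
# Toy illustration (ours): all the ground state "knows" about a multi-loop
# alpha is its loop count l(alpha).  The list below is a fictitious
# stand-in for the zero-winding-sector configurations of a small system.
import math

d = math.sqrt(2)                      # d-isotopy parameter, d^2 = n
loop_counts = [0, 1, 1, 2, 2, 2, 3]   # l(alpha) for each configuration

Z = sum(d ** (2 * l) for l in loop_counts)       # partition function
psi0 = [d ** l / math.sqrt(Z) for l in loop_counts]

print(sum(a * a for a in psi0))       # <psi0|psi0> = 1 by construction
\end{verbatim}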
We write the variational ansatz \begin{equation} |{\psi_k}\rangle = Z^{-1/2}{\sum_{\alpha}} e^{2\pi ik\,p({r_\alpha})} \:d^{\ell(\alpha)}\,|\alpha\rangle \end{equation} where $k$ is an integer. These states are orthonormal because \begin{equation} \langle {\psi_k} | {\psi_l} \rangle = Z^{-1}{\sum_{\alpha}} d^{2\ell(\alpha)}\,e^{2\pi i p({r_\alpha}) (k-l)} = {\int_0^1} dp\, e^{2\pi i p(k-l)} = \delta_{kl} \end{equation} Here we have used the fact that, in the scaling limit, $p(r_\alpha)$ is uniformly distributed on $[0,1]$: $p$ is precisely the distribution function of $r$, so the weighted sum over $\alpha$ becomes the integral over $p$. In particular, $|{\psi_k}\rangle$ with $k\neq 0$ is orthogonal to the ground state. Hence, the expectation value of the Hamiltonian in this state is an upper bound on the energy gap between the ground state and the first excited state. \begin{eqnarray} \label{eqn:excited-state-energy1} \Delta &\leq& \langle \psi_k | H_d | \psi_k \rangle - \langle \psi_0 | H_d | \psi_0 \rangle \cr &=& \frac{1}{Z} {\sum_{\alpha,\beta}} \left(e^{2\pi i k (p({r_\alpha})-p({r_\beta}))} - 1 \right)\,d^{2\ell(\alpha)} \langle \alpha | H_d | \beta \rangle \end{eqnarray} In the second line, we have used the fact that $\langle \alpha | H_d | \beta \rangle$ is non-zero only if $\ell(\alpha)=\ell(\beta)$. Exchanging $\alpha$ with $\beta$ exchanges each term in the sum in (\ref{eqn:excited-state-energy1}) with its complex conjugate. Hence, we can write: \begin{equation*} \label{eqn:excited-state-energy2} \Delta \leq 2\,\text{Re}\left\{\frac{1}{Z} {\sum_{\alpha,\beta; {r_\beta}>{r_\alpha}}}\!\!\! \left(e^{2\pi i k (p({r_\alpha})-p({r_\beta}))} - 1 \right)\,d^{2\ell(\alpha)} \langle \alpha | H_d | \beta \rangle\right\} \end{equation*} Note that we have dropped the case ${r_\beta}={r_\alpha}$ since this expression vanishes for these configurations. The $d$-isotopy Hamiltonian can change the breadth of a loop by at most one lattice spacing, so $ \langle \alpha | H_d | \beta \rangle\neq 0$ only if ${r_\beta}={r_\alpha}+\frac{a}{L}$ where $a$ is the lattice spacing. (Recall that $r_\alpha$ has been defined in units of $L$.) Then, writing $p({r_\alpha})-p({r_\beta})\approx p'({r_\alpha})\,\frac{a}{L}$, we have: \begin{equation*} \label{eqn:excited-state-energy3a} \Delta \leq 2\,\text{Re}\left\{\frac{1}{Z} {\sum_{\alpha,\beta; {r_\beta}>{r_\alpha}}}\!\!\! \left(e^{2\pi i k p'({r_\alpha}){a}/{L}} - 1 \right)\,d^{2\ell(\alpha)} \langle \alpha | H_d | \beta \rangle\right\} \end{equation*} Inserting ${\int_0^1} dr\, \delta(r-{r_\alpha})=1$ into this expression, we have: \begin{eqnarray} \label{eqn:excited-state-energy3b} \Delta &\leq& 2\,\text{Re}\biggl\{ {\int_0^1} dr \,\delta(r-{r_\alpha})\frac{1}{Z} {\sum_{\alpha,\beta; {r_\beta}>{r_\alpha}}}\!\!\! \left(e^{2\pi i k p'({r_\alpha}){a}/{L}} - 1 \right)\,\times \cr & & {\hskip 4.5 cm} d^{2\ell(\alpha)} \langle \alpha | H_d | \beta \rangle\biggr\}\cr &=& 2\,\text{Re}\biggl\{ {\int_0^1} dr \,\left(1-e^{2\pi i k p'(r){a}/{L}}\right)\,{\overline \rho}(r)\biggr\} \end{eqnarray} where \begin{eqnarray} \label{eqn:f-def} {\overline \rho}(r) &\equiv& -\frac{1}{Z} {\sum_{\alpha,\beta; {r_\beta}>{r_\alpha}}}\!\!\! \delta(r-{r_\alpha})\, d^{2\ell(\alpha)}\, \langle \alpha | H_d | \beta \rangle\cr &\equiv& \frac{1}{Z} {\sum_{\alpha}} \delta(r-{r_\alpha})\,d^{2\ell(\alpha)}\,{\rho_\alpha} \end{eqnarray} In the first line of (\ref{eqn:f-def}), we define ${\overline \rho}(r)$, which is the expectation value of $\rho_\alpha$, defined in the second line.
If we can argue that ${\overline \rho}(r)\leq A$ for some constant $A$ independent of $r$, then \begin{eqnarray} \label{eqn:excited-state-energy4} \Delta &\leq& 2A\,\text{Re}\biggl\{ {\int_0^1} dr \,\left(1-e^{2\pi i k p'(r){a}/{L}}\right)\biggr\}\cr &=& 2A \int dr \,\left(1-\cos\left({2\pi k p'(r){a}/{L}}\right)\right)\cr &\leq& 2A \left(1-\cos\left({2\pi k M{a}/{L}}\right)\right) \end{eqnarray} Here, we have taken $p'(r)\leq M$, which follows from our mild continuity assumptions. Consequently, in the large $L$ limit, $\Delta \sim {k^2}/{L^2}$. Note that the $L^{-2}$ reproduces the classical scaling for the lowest eigenvalue of a string of length $L$, while the $k^2$ dependence signals a quadratic dispersion relation associated to a soft mode. We are nearly finished. All that remains is to argue that ${\overline \rho}(r)$ defined by (\ref{eqn:f-def}) satisfies ${\overline \rho}(r)\leq A$ for some constant $A$ independent of $r$. $\rho_\alpha$ is the number of distinct sites (i.e.\ distinct terms in $H_d$) where $H_d$ can produce a fluctuation stretching out (in the $x$-direction) the widest loop in the multi-loop $\alpha$ from breadth $r_\alpha$ to breadth ${r_\alpha} + \frac{a}{L}$, i.e.\ by one lattice step. $\overline{\rho}(r)$ is the expectation value of $\rho_\alpha$, averaged over all configurations $\alpha$ whose widest loop has breadth $r$. Note that in the scaling limit $\overline{\rho}(r)$ can have no $r$ dependence. Prior to reaching the scaling limit, we use the argument $r$ in $\overline{\rho}(r)$ to indicate maximum cluster breadth in units of $L$. We now define $\rho_{\alpha,n}$ to be $1$ if the widest loop $K\in\alpha$ meets the right-hand wall of the unique smallest rectilinear box $B$ containing $K$, $K \subset B$, in $n$ distinct ``fingers,'' and zero otherwise. We define $\overline{\rho_n}(r)$ to be its expectation value, \begin{equation} \overline{\rho_n}(r) \equiv \frac{1}{Z} {\sum_{\alpha}} \delta(r-{r_\alpha})\,d^{2\ell(\alpha)}\,\rho_{\alpha,n} \end{equation} Then \begin{equation} \overline{\rho}(r) = {\sum_{n=1}^\infty} n\, \overline{\rho_n}(r) \end{equation} The Hamiltonian $H_d$ can produce $n$ distinct states contributing to ${\overline{\rho}_1}({r +\frac{a}{L}})$ for each state contributing to ${\overline{\rho}_n}(r)$: one may fluctuate any one of the $n$ right-most fingers one step further to the right. Hence, \begin{equation} \label{eqn:rho-bound} {\overline{\rho}_1}\!\left(r +\scriptstyle{\frac{a}{L}}\right) \geq \sum_{n=1}^\infty n\, \overline{\rho_n}(r) \end{equation} But the right-hand-side is simply $\overline{\rho}(r)$. Since $\overline{\rho}^{}_{1}(r)$ is the ensemble average of $\rho^{}_{\alpha,1}=0,1$, it must satisfy $0~\leq~\overline{\rho}^{}_{1}(r)~\leq~1$. Hence, if there is a well-defined scaling limit, $\overline{\rho}^{}_{1}(r)\rightarrow\rho^{}_{1}$, and $\overline{\rho}(r)$ and $\overline{\rho}({r +\frac{a}{L}})$ should converge. In this limit we can re-write (\ref{eqn:rho-bound}) as \begin{equation} \overline{\rho} \leq \overline{\rho}^{}_{1} \leq 1\,, \end{equation} which supplies the desired constant $A$ in the upper bound (\ref{eqn:excited-state-energy4}) on the excited state energy. Naturally, this is only an upper bound, and in fact we have not proved that the states $|\psi_k\rangle$ are not just other degenerate ground states. However, it seems unlikely that we have discovered ground state degeneracy (on the sphere) since there is no continuous symmetry of the Hamiltonian which could be broken, nor any other source of non-ergodicity.
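A quick numerical check (illustrative only; $A$, $M$, $a$, and $k$ are placeholder values) confirms the advertised $k^2/L^2$ behavior of the bound in (\ref{eqn:excited-state-energy4}):
\begin{verbatim}
# Numerical sanity check (illustrative) of the large-L behaviour of the
# variational bound  Delta <= 2A [1 - cos(2 pi k M a / L)].
# A, M, a and k are arbitrary placeholder values.
import math

A, M, a, k = 1.0, 2.0, 1.0, 1
for L in [10, 100, 1000, 10000]:
    bound = 2 * A * (1 - math.cos(2 * math.pi * k * M * a / L))
    # bound * L^2 approaches the constant A (2 pi k M a)^2,
    # confirming Delta ~ k^2 / L^2
    print(L, bound, bound * L ** 2)
\end{verbatim}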
Furthermore, we expect $\omega\propto k^2$ to be the correct behavior of the low-lying excitations; our intuition is based on observations made for other quasi-topological models \cite{Henley97,Henley04,Ardonne04}, closely related to our $d=1$ case. Due to this characteristic quadratic spectrum, such lines of critical points have been dubbed ``quantum Lifshitz points'' \cite{Ardonne04}. Let us pause here and contemplate the physical reasons for having such gapless modes. These excitations are \emph{not} Goldstone bosons, since neither of the Hamiltonians (\ref{eq:d-isotopy-Ham},\ref{eq:Mike-d-isotopy}) possesses any continuous symmetry (which could be broken). Rather, these gapless modes appear as a result of ``bottle-neck'' quantum dynamics which only allows loops to fluctuate by either slowly growing or slowly shrinking, one lattice plaquette at a time. As a result we can think of configuration space as a very elongated, essentially one-dimensional object parameterized by the diameter of the biggest loop. While the quantum dynamics is in principle ergodic, in order to reach a state with a long, order $L$ loop from a state with only short loops, the entire ``length'' of this ``worm-like'' graph has to be traversed. From this analogy, we see that the eigenvalue problem for our Hamiltonian is very similar to the eigenvalue problem for the Laplacian operator on a string. (In contrast, geometries such as a complete graph or hypercube do have spectral gaps; their bonds tie them together more efficiently than links in a linear chain.) Now, what about the spectrum when the Jones-Wenzl projectors are implemented \cite{Freedman04a}? With the help of such additional terms, going from short to long loops can be achieved by merging existing loops together, which can be done substantially faster than by ``growing'' them. These projectors would directly connect various points of the ``worm-like'' configuration space of the system. In the case $d=1$, adding the two-strand projector essentially reproduces Kitaev's ``toric code'' \cite{Kitaev97} and a gap is opened. (We expect this to be the case for $d=-1$ as well.) In the case of $n=d^2=2$, adding the three-strand projector is not sufficient because the probability of three long loops coming together within several lattice constants of each other -- a necessary condition for a JW projector to act efficiently -- vanishes as $L^{-\alpha}$ for some $\alpha > 0$ \cite{Schramm-private}. Hence the effect of such terms on the spectrum is too weak to open a gap (although it might lead to the ``stiffening'' of gapless excitations by reducing the dynamical exponent $z$). \section{Conclusions} \label{sec:concl} In this paper, we have analyzed the spectrum of low-lying excitations for a general class of local Hamiltonians whose ground state(s) are characterized by $d$-isotopy. Using the statistical properties of their non-local degrees of freedom, we established the analog of a Lieb-Schultz-Mattis theorem for quasi-topological systems. The excitations are gapless, with the variational ansatz strongly suggesting a quadratic ($\omega \propto k^2$) dispersion. What may strike one as interesting and counter-intuitive is the fact that both the Hamiltonian and the correlations of local operators are perfectly short-ranged; the quantum dynamics, however, is constrained to operate on non-local, quasi-long-ranged objects -- loops -- which in turn leads to a gapless spectrum.
Finally, it is also interesting to remark on the potential implications of such behavior from the perspective of understanding quantum glasses. The idea of using similar quasi-topological models to describe glassy behavior is not entirely new \cite{Yin01,Yin02,Das01}, but until very recently \cite{Chamon05}, all proposed models had a serious drawback, namely quasi-long range correlations between \emph{local} degrees of freedom. This is at odds with experiment, where slow glassy dynamics is observed without any accompanying divergent correlations. In this paper we have explicitly demonstrated that such behavior is entirely possible in the context of quasi-topological quantum critical points. \begin{acknowledgments} The authors would like to thank Oded Schramm for illuminating discussions on the statistical properties of critical configurations. We are also thankful to Matthew Hastings for pointing out a gap in the proof presented in the earlier version of this manuscript. In addition, we would like to acknowledge the hospitality of KITP and the Aspen Center for Physics. C.~N.\ and K.~S.\ have been supported by the ARO under Grant No.~W911NF-04-1-0236. C.~N.\ has also been supported by the NSF under Grant No.~DMR-0411800. \end{acknowledgments}
\section{Encoded NOT and Phase} \label{normalizer} Before I advance into the full theory of fault-tolerant operations, I will discuss how to perform encoded NOT and Phase gates on any stabilizer code. The behavior of these gates under more general transformations will tell us what those transformations actually do to the encoded states. The stabilizer $S$ of a code is an abelian subgroup of the group $\cal{G}$ generated by the operations \begin{equation} I = \pmatrix{ 1 & \ 0 \cr 0 & \ 1 }, \ \ X = \pmatrix{ 0 & \ 1 \cr 1 & \ 0 }, \ \ Z = \pmatrix{ 1 & \ 0 \cr 0 & -1 }, \ {\rm and}\ Y = X \cdot Z = \pmatrix{ 0 & -1 \cr 1 & \ 0 } \end{equation} acting on each of the $n$ qubits of the code. I will sometimes write ${\cal G}_n$ to explicitly indicate the number of qubits acted on. The codewords of the code are the states $|\psi \rangle$ which are left fixed by every operator in $S$. Operators in $\cal{G}$ which anticommute with some operator in $S$ will take codewords from the coding space into some orthogonal space. By making a measurement to distinguish the various orthogonal spaces, we can then determine what error has occurred and correct it. A quantum code encoding $k$ qubits in $n$ qubits will have a stabilizer with $n-k$ generators. However, there are, in general, a number of operators in $\cal{G}$ that commute with all of the operators in $S$. The set of such operators is the {\em normalizer} $N(S)$ of $S$ in $\cal{G}$.\footnote{Strictly speaking, this is the centralizer of $S$, but in this case it is equal to the normalizer, since $G^{-1} M G = \pm G^{-1} G M = \pm M$, and not both $M$ and $-M$ are in $S$.} $S$ is itself contained in the normalizer, but in general the normalizer is larger than just $S$. If $S$ contains $2^a$ operators (so it has $a$ generators), the normalizer will be generated by $2n-a = n+k$ operators. ($\cal{G}$ has a total of $2^{2n}$ elements up to phase, half of which will commute with any other fixed element of $\cal{G}$.) The elements of the normalizer will change one codeword to another, and therefore have a natural interpretation in terms of encoded operations on the code words. Suppose we extend the stabilizer generators to a maximal set of $n$ independent commuting operators. Then consider those codewords which, besides being $+1$-eigenvectors of the stabilizer generators, are also eigenvectors of the additional $k$ operators from $N(S)$. Let these codewords be the basis codewords for our code, giving the encoded $|0\ldots 00 \rangle,\ |0\ldots 01 \rangle,\ \ldots |1\ldots 11 \rangle$. The state which has eigenvalue $+1$ for the $k$ new operators will be the encoded $|0 \ldots 0 \rangle$, the state which has eigenvalue $-1$ for all $k$ of them will be the encoded $|1 \ldots 1 \rangle$, and so on. Then the $k$ new operators have the interpretation of being the encoded $Z$ operators on the $k$ encoded qubits. We will write these encoded $Z$ operators as $\overline{Z_i}$ for the $i$th encoded qubit, or $\overline{Z}$ when there is just one encoded qubit. Now, the remaining elements of $N(S)$ will not commute with all of the encoded $Z$ operators. We can choose a set of generators for $N(S)$ such that each of the last $k$ operators commutes with everything except a single one of the $\overline{Z}$ operators. These generators are then the encoded bit flip operators $\overline{X_i}$ (or $\overline{X}$ when there is just one). An arbitrary element of $N(S)$ is some other encoded operation.
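A convenient bookkeeping device for such calculations (standard, and only sketched here as an illustration; the text itself works with the operators directly) is to record an element of $\cal{G}$, up to phase, as a pair of binary vectors and to test commutation with the symplectic form:
\begin{verbatim}
# Standard bookkeeping sketch (illustrative): an n-qubit element of the
# Pauli group G, up to phase, is a binary pair (x|z), with x marking X
# factors and z marking Z factors (both set for Y).  Two elements commute
# iff the symplectic form  x1.z2 + x2.z1  vanishes mod 2.

def commute(p1, p2):
    (x1, z1), (x2, z2) = p1, p2
    s = sum(a * b for a, b in zip(x1, z2)) \
        + sum(a * b for a, b in zip(x2, z1))
    return s % 2 == 0

# Two-qubit example: the stabilizer S = <XX, ZZ> and two test operators.
XX = ([1, 1], [0, 0])
ZZ = ([0, 0], [1, 1])
XI = ([1, 0], [0, 0])
YY = ([1, 1], [1, 1])

for N in (XI, YY):
    print(all(commute(N, M) for M in (XX, ZZ)))
# XI anticommutes with ZZ (False); YY commutes with both (True),
# as it must: with Y = XZ, YY = (XX)(ZZ) lies in the stabilizer itself.
\end{verbatim}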
If two elements of $N(S)$ differ by an element of the stabilizer, they act the same way on any code word (since the stabilizer element just fixes the codeword). Therefore the actual set of encoded operations represented in $N(S)$ is $N(S)/S$. DiVincenzo and Shor showed how to perform syndrome measurement and error correction fault-tolerantly on any stabilizer code \cite{divincenzo}. Using the same methods, we can measure the eigenvalue of any operator in $\cal{G}$, even if it is not in $S$. This also enables us to prepare the encoded zero state of any stabilizer code by performing error correction and measuring the eigenvalues of the $\overline{Z}$ operators. \section{More General Operations} So far, we have only considered applying products of $X$, $Y$, and $Z$ to the codewords. However, this is not the most general thing we could do. Suppose we have some totally arbitrary unitary transformation $U$ we wish to apply to our codewords. How does this affect other operators, such as the elements of $S$ and $N(S)$? \begin{equation} U M |\psi \rangle = U M U^{\dagger} U |\psi \rangle, \end{equation} so $|\psi \rangle$ is an eigenvector of $M$ if and only if $U |\psi \rangle$ is an eigenvector of $U M U^{\dagger}$. Furthermore, they have the same eigenvalue. Thus, by applying $U$ to $|\psi \rangle$, we effectively transform any operator $M$ of interest into $U M U^\dagger$. In order for the state $|\psi \rangle$ to remain a codeword, the state $U|\psi \rangle$ must still be in the coding space, so $U M U^{\dagger}$ must also fix all the codewords $|\psi \rangle$ for every $M \in S$. Let us consider a restricted set of possible $U$s, those for which $U M U^\dagger$ is actually in $\cal{G}$ (so $U$ is in the normalizer $N(\cal{G})$ of $\cal{G}$ in $U(2^n)$). We will see that $N(\cal{G})$ is generated by Hadamard rotations, $\pi/2$ phase rotations, and controlled NOT operations~\cite{calderbank2,bennett}. Then, by the definition of the stabilizer and the coding space, we need $U M U^\dagger$ to actually be in $S$ for all $M \in S$. Therefore, $U$ is actually in the normalizer of $S$ in $U(2^n)$. The same criterion was found previously by Knill~\cite{knill4}. Note that the normalizer of $S$ in $U(2^n)$ is not necessarily a subset of $N(\cal{G})$. When we restrict our attention to operations that are in both the normalizer of $\cal{G}$ in $U(2^n)$ and the normalizer of $S$ in $U(2^n)$, it becomes straightforward to determine the operation actually performed on the encoded states. First, note that the $\overline{X}$ and $\overline{Z}$ operators transform into operators that also commute with everything in $S$. Thus, we can rewrite them as products of the original $\overline{X}$s, $\overline{Z}$s, and elements of $S$. The elements of $S$ just give us the equivalence between elements of $N(S)$ discussed in section~\ref{normalizer}, so we have deduced a transformation of the encoded $X$ and $Z$ operators. Furthermore, we know this encoded transformation also lies in the normalizer of ${\cal G}_k$. Typically, we want to consider transversal operations $U$, which are equal to the tensor product of single-qubit operations (or operations that only affect one qubit per block). For the moment, we will only consider operations of this form and see what collections of them will do to the stabilizer. Before launching into an analysis of which gates can be used on which codes, I will present an overview of the gates that are amenable to this sort of analysis.
For instance, one of the simplest and most common fault-tolerant operations is the Hadamard rotation \begin{equation} R = \frac{1}{\sqrt{2}} \pmatrix{ 1 & \ 1 \cr 1 & -1}. \end{equation} Let us see what this does to $X$, $Y$, and $Z$. \begin{eqnarray} R X R^\dagger = \frac{1}{2} \pmatrix{ 1 & \ 1 \cr 1 & -1} \pmatrix{ 1 & -1 \cr 1 & \ 1} = \pmatrix{ 1 & \ 0 \cr 0 & -1} = & Z \\ R Z R^\dagger = \frac{1}{2} \pmatrix{ 1 & \ 1 \cr 1 & -1} \pmatrix{ \ 1 & 1 \cr -1 & 1} = \pmatrix{ 0 & 1 \cr 1 & 0} = & X \\ R Y R^\dagger = \frac{1}{2} \pmatrix{ 1 & \ 1 \cr 1 & -1} \pmatrix{ -1 & 1 \cr \ 1 & 1} = \pmatrix{ \ 0 & 1 \cr -1 & 0} = & -Y. \end{eqnarray} Therefore, applying $R$ bitwise will switch all the $X$s and all the $Z$s, and give a factor of $-1$ for each $Y$. If we do this to the elements of the stabilizer and get other elements of the stabilizer, this is a valid fault-tolerant operation. The seven-qubit code is an example of a code for which this is true. Another common bitwise operation is the $i$ phase \begin{equation} P = \pmatrix{ 1 & 0 \cr 0 & i}. \end{equation} On the basic operations $X$, $Y$, and $Z$ it acts as follows: \begin{eqnarray} P X P^\dagger = \pmatrix{ 1 & \ 0 \cr 0 & i} \pmatrix{ 0 & -i \cr 1 & \ 0} = \pmatrix{ 0 & -i \cr i & \ 0} = & i Y \\ P Y P^\dagger = \pmatrix{ 1 & \ 0 \cr 0 & i} \pmatrix{ 0 & i \cr 1 & 0} = \pmatrix{ \ 0 & i \cr i & \ 0} = & i X \\ P Z P^\dagger = \pmatrix{ 1 & \ 0 \cr 0 & i} \pmatrix{ 1 & 0 \cr 0 & i} = \pmatrix{ 1 & \ 0 \cr 0 & -1} = & Z. \end{eqnarray} This switches $X$ and $Y$, but with extra factors of $i$, so there must be a multiple of 4 $X$s and $Y$s for this to be a valid operation. Again, the seven-qubit code is an example of one where it is. Note that a factor of $i$ appears generically in any operation that switches $Y$ with $X$ or $Z$, because $Y^2 = -1$, while $X^2 = Z^2 = +1$. The operations in $N(\cal{G})$ actually permute $\sigma_X = X$, $\sigma_Z = Z$, and $\sigma_Y = iY$, but for consistency with earlier publications I have retained the notation of $X$, $Y$, and $Z$. The most general single qubit operation in $N(\cal{G})$ can be viewed as a rotation of the Bloch sphere permuting the three coordinate axes. We can also consider two-qubit operations, such as the controlled NOT. Now we must consider transformations of the two involved blocks combined. The stabilizer group of the two blocks is $S \times S$, and we must see how the basic operations $X \otimes I$, $Z \otimes I$, $I \otimes X$, and $I \otimes Z$ transform under the proposed operation. In fact, we will also need to know the transformation of $X \otimes Y$ and other such operators, but the transformation induced on $\cal{G} \times \cal{G}$ is a group homomorphism, so we can determine the images of everything from the images of the four elements listed above. It is straightforward to show that the controlled NOT induces the following transformation: \begin{eqnarray} X \otimes I & \rightarrow & X \otimes X \nonumber \\ Z \otimes I & \rightarrow & Z \otimes I \\ I \otimes X & \rightarrow & I \otimes X \nonumber \\ I \otimes Z & \rightarrow & Z \otimes Z. \nonumber \end{eqnarray} It is easy to see here how amplitudes are copied forwards and phases are copied backwards. The transformation laws for $R$, $P$, and CNOT are also given in \cite{calderbank2}. There are a number of basic gates in $N(\cal{G})$ beyond the ones given above. 
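All of the conjugation rules quoted above can be checked numerically with explicit matrices; the following short sketch (illustrative, using the conventions of this paper, including $Y = XZ$) does so:
\begin{verbatim}
# Numerical verification (illustrative) of the conjugation rules above,
# using explicit matrices in the conventions of this paper (Y = XZ).
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = X @ Z
R = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = np.array([[1, 0], [0, 1j]], dtype=complex)

assert np.allclose(R @ X @ R.conj().T, Z)        # R: X -> Z
assert np.allclose(R @ Z @ R.conj().T, X)        # R: Z -> X
assert np.allclose(R @ Y @ R.conj().T, -Y)       # R: Y -> -Y
assert np.allclose(P @ X @ P.conj().T, 1j * Y)   # P: X -> iY
assert np.allclose(P @ Y @ P.conj().T, 1j * X)   # P: Y -> iX
assert np.allclose(P @ Z @ P.conj().T, Z)        # P: Z -> Z

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
assert np.allclose(CNOT @ np.kron(X, I) @ CNOT, np.kron(X, X))
assert np.allclose(CNOT @ np.kron(Z, I) @ CNOT, np.kron(Z, I))
assert np.allclose(CNOT @ np.kron(I, X) @ CNOT, np.kron(I, X))
assert np.allclose(CNOT @ np.kron(I, Z) @ CNOT, np.kron(Z, Z))
print("all conjugation rules verified")
\end{verbatim}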
As with the examples above, any gate can be characterized by its transformation of the generators of $\cal{G}$ (or $\cal{G} \times \cal{G}$ for two-qubit operations, and so on). The primary constraint that must be met is to preserve the algebraic properties of the operators. In fact, there is a complete equivalence between the possible gates and the automorphisms of $D_4$ (the group of products of $I$, $X$, $Y$, and $Z$) or direct products of copies of $D_4$ (for multiple-qubit gates) \cite{knillpc}. Given any such automorphism, we first substitute $iY$ for $Y$ to get the actual transformation. Then we note that $|0\rangle$ is the ``encoded zero'' for the ``code'' with stabilizer $\{I, Z\}$. We know how $Z$ transforms under $U$, so $|0\rangle$ transforms to the state fixed by $U Z U^\dagger$. In addition, $|1\rangle = X |0\rangle$, so $U |1\rangle = U X U^\dagger U |0\rangle$. For instance, consider the cyclic transformation \begin{equation} T = X \rightarrow iY \rightarrow Z \rightarrow X. \label{cyclic} \end{equation} Since $Z \rightarrow X$, \begin{equation} |0\rangle \rightarrow 1/\sqrt{2}\ (|0\rangle + |1\rangle). \end{equation} Also, $X \rightarrow iY$, so \begin{equation} |1\rangle \rightarrow i/\sqrt{2}\ Y(|0\rangle + |1\rangle) = -i/\sqrt{2}\ (|0\rangle - |1\rangle). \end{equation} Thus, the matrix for $T$ is \begin{equation} T = \frac{1}{\sqrt{2}} \pmatrix{ 1 & -i \cr 1 & \ i}. \end{equation} We can perform a similar procedure to determine the matrix corresponding to a multiple-qubit transformation. The next question of interest is how much we have restricted our computational power by restricting our attention to the normalizer of $\cal{G}$. Again, the normalizer of $\cal{G}$ is exactly the group generated by the Hadamard transform $R$, the phase $P$, and the controlled-NOT. I will prove this in section~\ref{measurements}. Unfortunately, this group alone is of only limited interest. Knill~\cite{knillpc} has shown that a quantum computer using only operations from this group can be simulated efficiently on a classical computer.\footnote{The argument goes as follows: we start with an $n$-qubit state $|0\rangle$ which is the single state for the stabilizer code $\langle Z_1, \ldots, Z_n \rangle$. Each operation transforms the state and the stabilizer as above. We can follow each transformation on a classical computer in $O(n^2)$ steps. A measurement picks at random one of the basis kets in the codeword, which can also be chosen classically \cite{gottesman,cleve}. This still leaves the question of partial measurement of the full state, but the results of section~\ref{measurements} show that this can also be classically simulated.} However, the addition of just the Toffoli gate to this group is sufficient to make the group universal \cite{shor2}. \section{Measurements} \label{measurements} Now I will discuss what happens if we perform a measurement on a stabilizer code. Measuring individual qubits of an actual code is not of great interest, but the results of this section will be quite helpful in determining what can be done by combining measurements and specific fault-tolerant operations. Now, using the method of DiVincenzo and Shor~\cite{divincenzo}, we can measure any operator $A$ in ${\cal G}$. There are three possible relationships between $A$ and $S$. First of all, $A$ could actually be in $S$. Then measuring $A$ tells us nothing about the state of the system and does not change it at all. The result of this measurement will always be $+1$ for a valid codeword.
The second possibility is for $A$ to commute with everything in $S$ but not to actually be in $S$. Then $A$ is equivalent to a nontrivial element of $N(S)/S$ and measuring it will give us information about the state of the system. This is usually inadvisable. The third possibility, that $A$ anticommutes with something in $S$, is the most interesting. In this case, we can choose the generators of $S$ so that $A$ anticommutes with the first generator $M_1$ and commutes with the remaining generators $M_2, \ldots, M_{n-k}$ (we can do this since if generator $M_j$ anticommutes with $A$, we can replace it with $M_1 M_j$, which commutes). Then measuring $A$ does not disturb the eigenvectors of $M_2$ through $M_{n-k}$, so those still fix the new state, and are in the new stabilizer. The eigenvectors of $M_1$ are disturbed, however, and $M_1$ no longer fixes the states. Measuring $A$ applies one of the projection operators $P_+$ or $P_-$, where \begin{equation} P_{\pm} = \frac{1}{2} (I \pm A). \end{equation} Then $M_1^\dagger P_- M_1 = M_1^\dagger M_1 P_+ = P_+$, so if $|\psi\rangle$ is some codeword, \begin{equation} M_1^\dagger P_- |\psi\rangle = M_1^\dagger P_- M_1 |\psi\rangle = P_+ |\psi\rangle. \end{equation} If the measurement result is $+1$, we do nothing else, and have thus applied $P_+$. If the measurement result is $-1$, apply $M_1^\dagger = M_1$, resulting in the overall application of $P_+$. Either way, $A$ fixes the new state. This puts the system into the space with stabilizer generated by $A, M_2, \ldots, M_{n-k}$. From now on, I will often say ``measure'' when I mean ``measure and correct for a result of $-1$.'' Note that this construction works outside the framework of stabilizer codes. All we really need is a state $|\psi\rangle$, with $M|\psi\rangle =|\psi\rangle$ for some unitary $M$. Then, as above, we can perform the projection $P_+$ for any operator $A$ satisfying $A^2=1$ and $\{M, A\} = 0$. Note that if $A$ is some local measurement, either $|\psi\rangle$ is not entangled between the region affected by $A$ and the region unaffected by it, or $M$ is a nonlocal operator. We will want to know just where in the space a given state goes. To do this, look at the elements of $N(S)/S$. If the state starts in an eigenvector of $N$, it will also be an eigenvector of $N' = MN$ for all $M \in S$. After measuring $A$, the state will no longer be an eigenvector of $N$ if $N$ anticommutes with $A$, but it {\em will} still be an eigenvector of $M_1 N$, which commutes with $A$. Furthermore, the eigenvalue of $M_1 N$ stays the same. Therefore, by measuring $A$ (and correcting the state if the result is $-1$), we effectively transform the operator $N$ into $M_1 N$. We could equally well say it is transformed to $M M_1 N$ instead, where $M \in S$ commutes with $A$, but this will produce the same transformation of the cosets of $N(S)/S$ to $N(S')/S'$ (where $S'$ is the stabilizer after the measurement). Of course, if $N$ commutes with $A$, measuring $A$ leaves $N$ unchanged. Let us see how all this works with a simple, but very useful, example. Suppose we have two qubits, one in an arbitrary state $|\psi \rangle$, the other initialized to $|0\rangle$. The space of possible states then has stabilizer $I \otimes Z$. Suppose we perform a controlled-NOT from the first qubit to the second. This transforms the stabilizer to $Z \otimes Z$. Now let us measure the operator $I \otimes iY$ (we use the factor of $i$ to ensure that the result is $\pm 1$). 
This anticommutes with $Z \otimes Z$, so if we get $+1$, we leave the result alone, and if we get $-1$, we apply $Z \otimes Z$ to the state. The new state is in a $+1$-eigenstate of $I \otimes iY$, that is, $|\phi \rangle (|0\rangle + i |1\rangle)$. How is $|\psi \rangle$ related to $|\phi \rangle$? For the original ``code,'' $\overline{X} = X \otimes I$ and $\overline{Z} = Z \otimes I$. After the CNOT, $\overline{X} = X \otimes X$ and $\overline{Z} = Z \otimes I$. $X \otimes X$ does not commute with $I \otimes iY$, but the equivalent operator $Y \otimes Y = (X \otimes X) (Z \otimes Z)$ does. $Z \otimes I$ does commute with $I \otimes iY$, so that stays the same. Since the second qubit is guaranteed to be in the $+1$ eigenstate of $iY$, we might as well ignore it. The effective $\overline{X}$ and $\overline{Z}$ operators for the first qubit are thus $-iY$ and $Z$ respectively. This means we have transformed $\overline{X} \rightarrow -i \overline{X} \overline{Z}$ and $\overline{Z} \rightarrow \overline{Z}$. This is the operation $P^\dagger$. This example is simple enough that it is easy to check: \begin{eqnarray} |00\rangle & \rightarrow & |00\rangle = |0\rangle \frac{1}{2} \left[ (|0\rangle + i |1\rangle) + (|0\rangle - i |1\rangle)\right] \\ & \rightarrow & |0\rangle (|0\rangle \pm i |1\rangle) \\ & \rightarrow & |0\rangle (|0\rangle + i |1\rangle) \\ |10\rangle & \rightarrow & |11\rangle = |1\rangle \frac{i}{2} \left[ - (|0\rangle + i |1\rangle) + (|0\rangle - i |1\rangle) \right] \\ & \rightarrow & i |1\rangle (\mp |0\rangle - i|1\rangle) \\ & \rightarrow & \pm i |1\rangle (\mp |0\rangle \mp i|1\rangle) = -i |1\rangle (|0\rangle + i |1\rangle). \end{eqnarray} Thus, ignoring the second qubit gives $|0\rangle \rightarrow |0\rangle$ and $|1\rangle \rightarrow -i|1\rangle$, which is $P^\dagger$. This result is already quite interesting when coupled with the observation that $P$ and CNOT suffice to produce $R$ as long as we can prepare and measure states in the basis $|0\rangle \pm |1\rangle$ \cite{knill3}. To do this we start out with the state $|\psi \rangle$ plus an ancilla $|0\rangle + |1\rangle$. Thus, the initial stabilizer is $I \otimes X$, $\overline{X} = X \otimes I$, and $\overline{Z} = Z \otimes I$. Apply a CNOT from the second qubit to the first. Now the stabilizer is $X \otimes X$, $\overline{X} = X \otimes I$, and $\overline{Z} = Z \otimes Z$. Apply $P$ to the second qubit, so the stabilizer is $X \otimes iY$, $\overline{X} = X \otimes I$, and $\overline{Z} = Z \otimes Z$. Measure $I \otimes X$, performing $X \otimes iY$ if the result is $-1$. This produces $\overline{X} = X \otimes I$ and $\overline{Z} = iY \otimes X$, so dropping the second qubit results in the transformation $Q$: $X \rightarrow X$, $Z \rightarrow iY$. But $R = P Q^\dagger P$: \begin{eqnarray} X \rightarrow & iY \rightarrow\ Z\ \rightarrow & Z \\ Z \rightarrow &\ Z \rightarrow -iY \rightarrow & X \end{eqnarray} Coupled with the previous result, which derives $P$ from CNOT, this allows us to get any single qubit transformation in the normalizer of $\cal{G}$ provided we can perform a CNOT operation. Another interesting application is to gain a new viewpoint on quantum teleportation. Suppose we have three qubits which start in the state $|\psi\rangle (|00\rangle + |11\rangle)$. The initial stabilizer is $I \otimes X \otimes X$ and $I \otimes Z \otimes Z$, $\overline{X} = X \otimes I \otimes I$, and $\overline{Z} = Z \otimes I \otimes I$. 
We assume the third qubit is far away, so we can do no operations interacting it directly with the other two qubits. We can, however, perform operations on it conditioned on the result of measuring the other qubits. We begin by performing a CNOT from qubit one to two. The stabilizer is now $I \otimes X \otimes X$ and $Z \otimes Z \otimes Z$, $\overline{X} = X \otimes X \otimes I$, and $\overline{Z} = Z \otimes I \otimes I$. Measure $X$ for qubit one and discard qubit one. If the measurement result was $+1$, we leave the state alone; if it was $-1$, we perform $Z$ on qubits two and three. The stabilizer is now $X \otimes X$, $\overline{X} = X \otimes I$ and $\overline{Z} = Z \otimes Z$. Now measure $Z$ for the new first qubit. If the result is $+1$, we leave the final qubit alone; if it is $-1$, we apply $X$ to the last qubit. This results in $\overline{X} = X$ and $\overline{Z} = Z$, both acting on the last qubit. We have successfully teleported the state $|\psi\rangle$. The operations conditioned on measurement results needed for teleportation arise here naturally as the corrections to the stabilizer for alternate measurement results. The formalism would have told us just as easily what operations were necessary if we had begun with a different Bell state or a more complicated entangled state (as long as it can still be described by a stabilizer). I claimed before that products of $R$, $P$, and CNOT actually gave us all of the elements of $N(\cal{G})$, and I am now ready to prove that. The one-qubit operations correspond to the six automorphisms of $D_4$ given by $R$, $P$, $Q$, $T$, $T^2$, and of course the identity. We have already seen that $Q = P R P$. Also, $T = P Q^\dagger$, so all one-qubit operations are covered. We can also perform all two-qubit operations. Every automorphism of $D_4 \times D_4$ can be produced by a composition of controlled NOT and single-qubit operations. For instance, take \begin{eqnarray} Z \otimes I & \rightarrow & X \otimes X \nonumber \\ I \otimes Z & \rightarrow & Z \otimes Z \\ X \otimes I & \rightarrow & iY \otimes X \nonumber\\ I \otimes X & \rightarrow & iZ \otimes Y. \nonumber \end{eqnarray} This permutation can be produced by performing the cyclic permutation $X \rightarrow iY \rightarrow Z \rightarrow X$ on the first qubit and a phase rotation $X \rightarrow iY$ on the second qubit, and then performing a standard controlled NOT from the first qubit to the second qubit. It is straightforward to consider the other possibilities and show that they too can be written using a CNOT and one-qubit gates. I will show that the larger gates can be made this way by induction on the number of qubits. Suppose we know this to be true for all $n$-qubit gates, and we have an $(n+1)$-qubit gate $U$. On an arbitrary input state $|0\rangle |\psi \rangle + |1\rangle |\phi \rangle$ (where $|\psi \rangle$ and $|\phi \rangle$ are $n$-qubit states), the output state will be \begin{equation} (|0\rangle |\psi_1 \rangle + |1\rangle |\psi_2 \rangle) + (|0\rangle |\phi_1 \rangle + |1\rangle |\phi_2 \rangle). \end{equation} Suppose that under the applied transformation, $M = U (Z \otimes I \otimes \cdots \otimes I) U^\dagger$ anticommutes with $Z \otimes I \otimes \cdots \otimes I$. (If it does not, we can rearrange qubits so that it does.) We can then apply a one-qubit transformation so that $M = X \otimes M'$, where $M'$ is an $n$-qubit operation. Suppose we apply $U$ to $|0\rangle |\psi\rangle$.
If we were then to measure $Z$ for the first qubit, we would get either $0$, in which case the other qubits are in state $|\psi_1 \rangle$, or $1$, in which case the remaining qubits are in state $|\psi_2 \rangle$. The above analysis of measurements shows that $|\psi_1 \rangle$ and $|\psi_2 \rangle$ are therefore related by the application of $M'$. Define $U'$ by $U' |\psi \rangle = |\psi_1 \rangle$. Then \begin{equation} U (|0\rangle |\psi \rangle) = (I + M) (|0\rangle \otimes U' |\psi \rangle). \end{equation} Let $N = U (X \otimes I \otimes \cdots \otimes I) U^\dagger$. Again, we can apply a one-qubit operation so that either $N = Z \otimes N'$ or $N = I \otimes N'$. We can always put $M$ and $N$ in this form simultaneously. Then \begin{eqnarray} U (|1\rangle |\phi \rangle) & = & N U (|0\rangle |\phi \rangle) \\ & = & N (I + M) (|0\rangle \otimes U' |\phi \rangle) \\ & = & (I - M) N (|0\rangle \otimes U' |\phi \rangle) \\ & = & (I - M) (|0\rangle \otimes N' U' |\phi \rangle), \end{eqnarray} using the above form of $N$ and the fact that $\{M, N\} = 0$. Now, $U'$ is an $n$-qubit operation, which one can check is again in the normalizer of ${\cal G}_n$, so by the inductive hypothesis we can build it out of $R$, $P$, and CNOT. To apply $U$, first apply $U'$ to the last $n$ qubits. Now apply $N'$ to the last $n$ qubits conditioned on the first qubit being $1$. We can do this with just a series of CNOTs and one-qubit operations. Now apply a Hadamard transform to the first qubit. This puts the system in the state \begin{equation} (|0\rangle + |1\rangle) \otimes U' |\psi \rangle + (|0\rangle - |1\rangle) \otimes N' U' |\phi \rangle. \end{equation} Now, apply $M'$ to the last $n$ qubits conditioned on the first qubit being $1$. Again, we can do this with just CNOTs and one-qubit operations. This leaves the system in the state \begin{eqnarray} & & |0\rangle \otimes U' |\psi \rangle + |1\rangle \otimes M' U' |\psi \rangle + |0\rangle \otimes N' U' |\phi \rangle - |1\rangle \otimes M' N' U' |\phi \rangle \\ & = & |0\rangle \otimes U' |\psi \rangle + M (|0\rangle \otimes U' |\psi \rangle) + |0\rangle \otimes N' U' |\phi \rangle - M (|0\rangle \otimes N' U' |\phi \rangle) \\ & = & (I + M) (|0\rangle \otimes U' |\psi \rangle) + (I - M) (|0\rangle \otimes N' U' |\phi \rangle), \end{eqnarray} which we can recognize as the desired end state after applying $U$. \section{Operations on CSS Codes} In this section, I will finally begin to look at the problem of which gates can be applied to specific codes. One of the best classes of codes for fault-tolerant computation is the class of Calderbank-Shor-Steane (CSS) codes~\cite{calderbank1,steane1}, which are constructed from certain classical codes. These codes have a stabilizer which can be written as the direct product of two sectors, one of which is formed purely from $X$s and one formed just from $Z$s. These two sectors correspond to the two dual classical codes that go into the construction of the code. Shor \cite{shor2} showed that a punctured doubly-even self-dual CSS code could be used for universal computation. An example of such a code is the seven-qubit code, whose stabilizer is given in Table~\ref{qubit7}.
\begin{table} \begin{tabular}{|l|ccccccc|} $M_1$ & $X$ & $X$ & $X$ & $X$ & $I$ & $I$ & $I$ \\ $M_2$ & $X$ & $X$ & $I$ & $I$ & $X$ & $X$ & $I$ \\ $M_3$ & $X$ & $I$ & $X$ & $I$ & $X$ & $I$ & $X$ \\ $M_4$ & $Z$ & $Z$ & $Z$ & $Z$ & $I$ & $I$ & $I$ \\ $M_5$ & $Z$ & $Z$ & $I$ & $I$ & $Z$ & $Z$ & $I$ \\ $M_6$ & $Z$ & $I$ & $Z$ & $I$ & $Z$ & $I$ & $Z$ \\ \hline $\overline{X}$ & $I$ & $I$ & $I$ & $I$ & $X$ & $X$ & $X$ \\ $\overline{Z}$ & $I$ & $I$ & $I$ & $I$ & $Z$ & $Z$ & $Z$ \\ \end{tabular} \caption{The stabilizer and encoded $X$ and $Z$ for the seven-qubit code.} \label{qubit7} \end{table} From the stabilizer, we can now understand why such codes allow the fault-tolerant implementation of the Hadamard rotation, the $\pi / 2$ rotation, and the controlled NOT. The Hadamard rotation switches $X$ and $Z$. For a CSS code, this is a symmetry of the stabilizer if and only if the $X$ sector of the stabilizer is the same as the $Z$ sector. Therefore the two classical codes must be identical, and the quantum code must be derived from a classical code that contains its own dual. As we can see, this works for the seven-qubit code. In order to understand what the Hadamard rotation does to the encoded states, we must look at what it does to the encoded $X$ and $Z$ operations. For a punctured self-dual CSS code, the $\overline{X}$ and $\overline{Z}$ operations can again be taken to be the same, so the Hadamard rotation will just switch them. It is therefore an operation which switches encoded $X$ with encoded $Z$, and is thus an encoded Hadamard rotation. Similarly, for a self-dual code, the $\pi / 2$ rotation will convert the $X$ generators into the product of all $Y$s. This just converts the $X$ generators into their product with the corresponding $Z$ generator, so this is a valid fault-tolerant operation, provided the overall phase is correctly taken care of. There is a factor of $i$ for each $X$, so there must be a multiple of 4 $X$s in each element of the stabilizer for that to work out in general. This will only be true of a {\em doubly-even} CSS code, which gives us the other requirement for Shor's methods. Again, we can see that the seven-qubit code meets this requirement. Such a code will have $3\ {\rm mod}\ 4$ $X$s in the $\overline{X}$ operation, so the bitwise $\pi /2$ rotation converts $\overline{X}$ to $-i \overline{Y}$. This is thus an encoded $- \pi / 2$ rotation. Finally, we get to the controlled NOT. This can be performed bitwise on {\em any} CSS code. We must look at its operation on $M \otimes I$ and $I \otimes M$. In the first case, if $M$ is an $X$ generator, it becomes $M \otimes M$. Since both the first and second blocks have the same stabilizer, this is an element of $S \times S$. If $M$ is a $Z$ generator, $M \otimes I$ becomes $M \otimes I$ again. Similarly, if $M$ is an $X$ generator, $I \otimes M$ becomes $I \otimes M$, and if $M$ is a $Z$ generator, $I \otimes M$ becomes $M \otimes M$, which is again in $S \times S$. For an arbitrary CSS code, the $\overline{X_i}$ operators are formed purely from $X$s and the $\overline{Z_i}$ operators purely from $Z$s. Therefore, \begin{eqnarray} \overline{X_i} \otimes I & \rightarrow & \overline{X_i} \otimes \overline{X_i} \nonumber \\ \overline{Z_i} \otimes I & \rightarrow & \overline{Z_i} \otimes I \\ I \otimes \overline{X_i} & \rightarrow & I \otimes \overline{X_i} \nonumber \\ I \otimes \overline{Z_i} & \rightarrow & \overline{Z_i} \otimes \overline{Z_i}. 
\nonumber \end{eqnarray} Thus, the bitwise CNOT produces an encoded CNOT for every encoded qubit in the block. In fact, we can now easily prove that codes of the general CSS form are the only codes for which bitwise CNOT is a valid fault-tolerant operation. Let us take a generic element of the stabilizer and write it as $MN$, where $M$ is the product of $X$s and $N$ is the product of $Z$s. Then under bitwise CNOT, $MN \otimes I \rightarrow MN \otimes M$, which implies $M$ itself is an element of the stabilizer. The stabilizer is a group, so $N$ is also an element of the stabilizer. Therefore, the stabilizer breaks up into a sector made solely from $X$s and one made solely from $Z$s, which means the code is of the CSS type. \section{The Five-Qubit Code} \label{five-qubit} One code of particular interest is the five-qubit code \cite{bennett,laflamme}, which is the smallest possible code to correct a single error. Until now, there were no known fault-tolerant operations that could be performed on this code except the simple encoded $X$ and encoded $Z$. One presentation \cite{calderbank2} of the five-qubit code is given in Table~\ref{qubit5}. \begin{table} \begin{tabular}{|l|ccccc|} $M_1$ & $X$ & $Z$ & $Z$ & $X$ & $I$ \\ $M_2$ & $I$ & $X$ & $Z$ & $Z$ & $X$ \\ $M_3$ & $X$ & $I$ & $X$ & $Z$ & $Z$ \\ $M_4$ & $Z$ & $X$ & $I$ & $X$ & $Z$ \\ \hline $\overline{X}$ & $X$ & $X$ & $X$ & $X$ & $X$ \\ $\overline{Z}$ & $Z$ & $Z$ & $Z$ & $Z$ & $Z$ \\ \end{tabular} \caption{The stabilizer and encoded $X$ and $Z$ for the five-qubit code.} \label{qubit5} \end{table} This presentation has the advantage of being cyclic, which simplifies somewhat the analysis below. This stabilizer is invariant under the transformation $T : X \rightarrow iY \rightarrow Z \rightarrow X$ bitwise. For instance, \begin{equation} M_1 = X \otimes Z \otimes Z \otimes X \otimes I \rightarrow - Y \otimes X \otimes X \otimes Y \otimes I = M_3 M_4. \end{equation} By the cyclic property of the code, $M_2$ through $M_4$ also get transformed into elements of the stabilizer, and this is a valid fault-tolerant operation. It transforms \begin{equation} \overline{X} \rightarrow i \overline{Y} \rightarrow \overline{Z}. \end{equation} Therefore, this operation performed bitwise performs an encoded version of itself. Operations which have this property are particularly useful because they are easy to apply to concatenated codes~\cite{knill3,aharonov,knill2}. There is no nontrivial two-qubit operation in the normalizer of $\cal{G}$ that can be performed transversally on this code. However, there is a three-qubit transformation $T_3$ that leaves $S \times S \times S$ invariant: \begin{eqnarray} X \otimes I \otimes I & \rightarrow & iX \otimes Y \otimes Z \nonumber \\ Z \otimes I \otimes I & \rightarrow & iZ \otimes X \otimes Y \nonumber \\ I \otimes X \otimes I & \rightarrow & iY \otimes X \otimes Z \\ I \otimes Z \otimes I & \rightarrow & iX \otimes Z \otimes Y \nonumber \\ I \otimes I \otimes X & \rightarrow & X \otimes X \otimes X \nonumber \\ I \otimes I \otimes Z & \rightarrow & Z \otimes Z \otimes Z. \nonumber \end{eqnarray} On operators of the form $M \otimes I \otimes I$ or $I \otimes M \otimes I$, this transformation applies cyclic transformations as above to the other two slots. Operators $I \otimes I \otimes M$ just become $M \otimes M \otimes M$, which is clearly in $S \times S \times S$. 
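Both invariance statements of this section are mechanical to verify. Since, phases aside, $T_3$ sends $M \otimes I \otimes I$ to $M \otimes T(M) \otimes T^2(M)$ (and $I \otimes I \otimes M$ to $M \otimes M \otimes M$), the validity of both $T$ and $T_3$ on this code reduces to checking that the bitwise cyclic map sends each generator of Table~\ref{qubit5} into the stabilizer group. The following Python sketch of that check, with all phases dropped, is an illustration added here rather than part of the original analysis:

\begin{verbatim}
# Check (mod phases) that the bitwise map X -> Y -> Z -> X sends each
# generator of the five-qubit stabilizer into the group <M1,...,M4>.
from itertools import combinations

GENS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]
CYCLE = {"I": "I", "X": "Y", "Y": "Z", "Z": "X"}
TABLE = {("X", "Z"): "Y", ("Z", "X"): "Y", ("X", "Y"): "Z",
         ("Y", "X"): "Z", ("Y", "Z"): "X", ("Z", "Y"): "X"}

def mult(p, q):
    # product of two Pauli strings, overall phase ignored
    out = []
    for a, b in zip(p, q):
        if a == "I": out.append(b)
        elif b == "I": out.append(a)
        elif a == b: out.append("I")
        else: out.append(TABLE[(a, b)])
    return "".join(out)

# build all 16 elements of the stabilizer group, up to phase
group = set()
for r in range(5):
    for subset in combinations(GENS, r):
        g = "IIIII"
        for m in subset:
            g = mult(g, m)
        group.add(g)

for m in GENS:
    image = "".join(CYCLE[c] for c in m)
    assert image in group, m
print("bitwise X -> Y -> Z -> X preserves the stabilizer (mod phases)")
\end{verbatim}

Tracking Pauli strings without signs suffices here because membership in the stabilizer is only needed modulo phases; a complete verification would carry the signs along as well.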
The matrix of $T_3$ is (up to normalization) \begin{equation} T_3 = \pmatrix{ \ 1 & \ 0 & \ i & \ 0 & \ i & \ 0 & \ 1 & \ 0 \cr \ 0 & -1 & \ 0 & \ i & \ 0 & \ i & \ 0 & -1 \cr \ 0 & \ i & \ 0 & \ 1 & \ 0 & -1 & \ 0 & -i \cr \ i & \ 0 & -1 & \ 0 & \ 1 & \ 0 & -i & \ 0 \cr \ 0 & \ i & \ 0 & -1 & \ 0 & \ 1 & \ 0 & -i \cr \ i & \ 0 & \ 1 & \ 0 & -1 & \ 0 & -i & \ 0 \cr -1 & \ 0 & \ i & \ 0 & \ i & \ 0 & -1 & \ 0 \cr \ 0 & \ 1 & \ 0 & \ i & \ 0 & \ i & \ 0 & \ 1 }. \end{equation} As with $T$, this operation performs itself on the encoded states. A possible network to produce this operation (based on the construction in section~\ref{measurements}) is given in figure~\ref{buildT3}. \begin{figure} \begin{picture}(220,80) \put(0,20){\line(1,0){154}} \put(166,20){\line(1,0){28}} \put(206,20){\line(1,0){14}} \put(0,40){\line(1,0){94}} \put(106,40){\line(1,0){8}} \put(126,40){\line(1,0){48}} \put(186,40){\line(1,0){34}} \put(0,60){\line(1,0){114}} \put(126,60){\line(1,0){48}} \put(186,60){\line(1,0){34}} \put(20,60){\circle{8}} \put(20,56){\line(0,1){8}} \put(20,40){\circle{8}} \put(20,36){\line(0,1){8}} \put(40,60){\line(0,-1){24}} \put(40,60){\circle*{4}} \put(40,40){\circle{8}} \put(60,40){\line(0,1){24}} \put(60,40){\circle*{4}} \put(60,60){\circle{8}} \put(80,60){\line(0,-1){24}} \put(80,60){\circle*{4}} \put(80,40){\circle{8}} \put(100,60){\line(0,-1){14}} \put(94,34){\framebox(12,12){$Z$}} \put(100,60){\circle*{4}} \put(114,54){\framebox(12,12){$Q$}} \put(114,34){\framebox(12,12){$Q$}} \put(140,20){\line(0,1){44}} \put(140,20){\circle*{4}} \put(140,40){\circle{8}} \put(140,60){\circle{8}} \put(154,14){\framebox(12,12){$R$}} \put(180,20){\line(0,1){14}} \put(180,46){\line(0,1){8}} \put(180,20){\circle*{4}} \put(174,34){\framebox(12,12){$Z$}} \put(174,54){\framebox(12,12){$Z$}} \put(194,14){\framebox(12,12){$R$}} \end{picture} \caption{Network to perform the $T_3$ gate.} \label{buildT3} \end{figure} If we add in the possibility of measurements, this three-qubit operation along with $T$ will allow us to perform any operation in the normalizer of $\cal{G}$. I will describe how to do this on unencoded qubits, and since $T$ and $T_3$ bitwise just perform themselves, this will tell us how to do the same operations on the encoded qubits. To perform $P$, first prepare two ancilla qubits in the state $|00\rangle$ and use the data qubit as the third qubit. The original stabilizer is $Z \otimes I \otimes I$ and $I \otimes Z \otimes I$, $\overline{X} = I \otimes I \otimes X$, and $\overline{Z} = I \otimes I \otimes Z$. Now apply $T_3$, so that the stabilizer is $iZ \otimes X \otimes Y$ and $iX \otimes Z \otimes Y$, $\overline{X} = X \otimes X \otimes X$, and $\overline{Z} = Z \otimes Z \otimes Z$. Measure $Z$ for the second and third qubits. The resulting $\overline{X} = iY \otimes I \otimes Z$ and $\overline{Z} = Z \otimes Z \otimes Z$. Dropping the last two qubits, we have $X \rightarrow iY$ and $Z \rightarrow Z$, which is $P$. Again, $Q = T^\dagger P$ and $R = P Q^\dagger P$, so we can perform any single qubit operation. To get a two-qubit operation, prepare a third qubit in the state $|0\rangle$ and apply $T_3$. This results in the stabilizer $Z \otimes Z \otimes Z$, $\overline{X_1} = i X \otimes Y \otimes Z$, $\overline{X_2} = iY \otimes X \otimes Z$, $\overline{Z_1} = i Z \otimes X \otimes Y$, and $\overline{Z_2} = i X \otimes Z \otimes Y$. Measure $X$ for the second qubit and throw it out. 
This leaves the transformation \begin{eqnarray} X \otimes I & \rightarrow & i Y \otimes I \nonumber \\ I \otimes X & \rightarrow & i Y \otimes Z \\ Z \otimes I & \rightarrow & i Z \otimes Y \nonumber\\ I \otimes Z & \rightarrow & i Y \otimes X. \nonumber \end{eqnarray} This operation can be produced by applying $Q$ to the second qubit (switching $Z$ and $iY$), then a CNOT from the second qubit to the first one, then $P$ to the first qubit and $T^2$ to the second qubit. Therefore, we can also get a CNOT by performing this operation with the appropriate one-qubit operations. This allows us to perform any operation we desire in the normalizer of $\cal{G}$. Note that section~\ref{anycode} provides us with another way to get these operations. Having two methods available broadens the choices for picking the most efficient implementations. In order to perform universal computation on the five-qubit code, we must know how to perform a Toffoli gate. Shor \cite{shor2} gave a method for producing a Toffoli gate that relied on the ability to perform the gate \begin{equation} |a\rangle |b\rangle |c\rangle \rightarrow (-1)^{a(b c)} |a\rangle |b\rangle |c\rangle, \label{toffoli} \end{equation} where $|a\rangle$ is either $|0 \ldots 0 \rangle$ or $|1 \ldots 1 \rangle$ and $|b\rangle$ and $|c\rangle$ are encoded $0$s or $1$s. For the codes Shor considered, this gate could be performed by applying it bitwise, because the conditional sign could be applied bitwise. All of the qubits in the first block are either $0$ or $1$, so a controlled conditional sign from the first block will produce a conditional sign on the second two blocks whenever the first block is $1$. For the five-qubit code, this gate is not quite as straightforward, but is still not difficult. To perform the two-qubit conditional sign gate on the five-qubit code, we need to perform a series of one- and three-qubit gates and measurements. However, if we perform each of these gates and measurements conditional on the value of $a$, we have performed the conditional sign gate on $|b\rangle |c\rangle$ if and only if the first block is $1$. From this, the rest of Shor's construction of the Toffoli gate carries over straightforwardly. It involves a number of measurements and operations from the normalizer of $\cal{G}$. We have already discussed how to do all of those. The one remaining operation that is necessary is \begin{equation} |a\rangle |d\rangle \rightarrow (-1)^{a d} |a\rangle |d\rangle, \end{equation} where $|d\rangle$ is an encoded state and $|a\rangle$ is again all $0$s or all $1$s. However, this is just $\overline{Z}$ applied to $|d\rangle$ conditioned on the value of $a$, which can be implemented with a single two-qubit gate on each qubit in the block. Therefore, we can perform universal fault-tolerant computation on the five-qubit code. Note that there was nothing particularly unique about the five-qubit code that made the construction of the Toffoli gate possible. The only property we needed was the ability to perform a conditional sign gate. 
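Incidentally, the one-qubit identities invoked in this and the preceding sections ($Q = T^\dagger P$, $R = P Q^\dagger P$, $T = P Q^\dagger$, $Q = P^\dagger R P^\dagger$) are also easy to confirm mechanically: once phases are dropped, each one-qubit operation acts as a permutation of $\{X, Y, Z\}$, and composing operations composes the permutations. A short Python sketch of this bookkeeping (an added illustration, not part of the original text):

\begin{verbatim}
# One-qubit operations as permutations of {X, Y, Z}, phases dropped.
R = {"X": "Z", "Z": "X", "Y": "Y"}   # Hadamard: swaps X and Z
P = {"X": "Y", "Y": "X", "Z": "Z"}   # pi/2 rotation: X <-> Y mod phase
T = {"X": "Y", "Y": "Z", "Z": "X"}   # cyclic X -> Y -> Z -> X
Q = {"X": "X", "Y": "Z", "Z": "Y"}   # Y <-> Z mod phase

def compose(g, h):
    # conjugation by the product GH applies h's permutation first
    return {a: g[h[a]] for a in "XYZ"}

def adjoint(g):
    return {v: k for k, v in g.items()}

assert compose(adjoint(T), P) == Q                        # Q = T'P
assert compose(compose(P, adjoint(Q)), P) == R            # R = PQ'P
assert compose(P, adjoint(Q)) == T                        # T = PQ'
assert compose(compose(adjoint(P), R), adjoint(P)) == Q   # Q = P'RP'

# closure of <P, T>: exactly the six signless one-qubit operations
group = {tuple(sorted(g.items())): g for g in (P, T)}
done = False
while not done:
    done = True
    for a in list(group.values()):
        for b in list(group.values()):
            c = compose(a, b)
            if tuple(sorted(c.items())) not in group:
                group[tuple(sorted(c.items()))] = c
                done = False
print(len(group))   # 6
\end{verbatim}

The closure computation recovers the count of six one-qubit operations (modulo phases) quoted earlier.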
\section{Gates for any Stabilizer Code} \label{anycode} Consider the following transformation: \begin{eqnarray} X\otimes I\otimes I\otimes I &\rightarrow & X\otimes X\otimes X\otimes I \nonumber \\ I\otimes X\otimes I\otimes I &\rightarrow & I\otimes X\otimes X\otimes X \nonumber \\ I\otimes I\otimes X\otimes I &\rightarrow & X\otimes I\otimes X\otimes X \nonumber \\ I\otimes I\otimes I\otimes X &\rightarrow & X\otimes X\otimes I\otimes X \\ Z\otimes I\otimes I\otimes I &\rightarrow & Z\otimes Z\otimes Z\otimes I \nonumber \\ I\otimes Z\otimes I\otimes I &\rightarrow & I\otimes Z\otimes Z\otimes Z \nonumber \\ I\otimes I\otimes Z\otimes I &\rightarrow & Z\otimes I\otimes Z\otimes Z \nonumber \\ I\otimes I\otimes I\otimes Z &\rightarrow & Z\otimes Z\otimes I\otimes Z. \nonumber \end{eqnarray} A possible gate array to perform this operation is given in figure~\ref{buildbig}. \begin{figure} \begin{picture}(180,100) \put(0,20){\line(1,0){180}} \put(0,40){\line(1,0){134}} \put(146,40){\line(1,0){34}} \put(0,60){\line(1,0){134}} \put(146,60){\line(1,0){34}} \put(0,80){\line(1,0){114}} \put(126,80){\line(1,0){28}} \put(166,80){\line(1,0){14}} \put(20,60){\line(0,-1){24}} \put(20,60){\circle*{4}} \put(20,40){\circle{8}} \put(40,40){\line(0,-1){24}} \put(40,40){\circle*{4}} \put(40,20){\circle{8}} \put(60,20){\line(0,1){44}} \put(60,20){\circle*{4}} \put(60,60){\circle{8}} \put(80,40){\line(0,1){24}} \put(80,40){\circle*{4}} \put(80,60){\circle{8}} \put(100,80){\line(0,-1){44}} \put(100,80){\circle*{4}} \put(100,60){\circle{8}} \put(100,40){\circle{8}} \put(114,74){\framebox(12,12){$R$}} \put(140,80){\line(0,-1){14}} \put(140,54){\line(0,-1){8}} \put(140,80){\circle*{4}} \put(134,54){\framebox(12,12){$Z$}} \put(134,34){\framebox(12,12){$Z$}} \put(154,74){\framebox(12,12){$R$}} \end{picture} \caption{Network to perform the four-qubit gate.} \label{buildbig} \end{figure} This operation takes $M \otimes I \otimes I \otimes I$ to $M \otimes M \otimes M \otimes I$, and cyclic permutations of this, so if $M \in S$, the image of these operations is certainly in $S \times S \times S \times S$. This therefore is a valid transversal operation on {\em any} stabilizer code. The encoded operation it performs is just itself. There is a family of related operations for any even number of qubits (the two-qubit case is trivial), but we only need to concern ourselves with the four-qubit operation. Suppose we have two data qubits. Prepare the third and fourth qubits in the state $|00\rangle$, apply the above transformation, and then measure $X$ for the third and fourth qubits. The resulting transformation on the first two qubits is then: \begin{eqnarray} X \otimes I & \rightarrow & X \otimes X \nonumber \\ I \otimes X & \rightarrow & I \otimes X \\ Z \otimes I & \rightarrow & Z \otimes I \nonumber \\ I \otimes Z & \rightarrow & Z \otimes Z. \nonumber \end{eqnarray} This is precisely the controlled NOT. Since I showed in section~\ref{measurements} that the CNOT, together with measurements, was sufficient to get any operation in $N(\cal{G})$, we can get any such operation for any stabilizer code! In fact, using the Toffoli gate construction from section~\ref{five-qubit}, we can perform universal computation. Actually, this only gives universal computation for codes encoding a single qubit in a block, since if a block encodes multiple qubits, this operation performs the CNOT between corresponding encoded qubits in different blocks. To actually get universal computation, we will want to perform operations between qubits encoded in the same block. 
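The claim that this four-qubit map is a legitimate element of $N(\cal{G})$ can also be double-checked in the binary symplectic picture: with phases dropped, a Pauli string becomes a pair of bit vectors, two Paulis commute exactly when their symplectic inner product vanishes, and a valid operation must preserve all such products. The following Python sketch (an illustration added here, not part of the original argument) runs this check on the generator images listed above:

\begin{verbatim}
import numpy as np

def pauli_vec(s):
    # Pauli string over I, X, Y, Z -> binary (x|z) vector, phases dropped
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return np.array(x + z)

def symp(u, v):
    # symplectic inner product: 0 = commute, 1 = anticommute
    n = len(u) // 2
    return (u[:n] @ v[n:] + u[n:] @ v[:n]) % 2

src = ["XIII", "IXII", "IIXI", "IIIX", "ZIII", "IZII", "IIZI", "IIIZ"]
img = ["XXXI", "IXXX", "XIXX", "XXIX", "ZZZI", "IZZZ", "ZIZZ", "ZZIZ"]

S = [pauli_vec(s) for s in src]
Im = [pauli_vec(s) for s in img]
assert all(symp(S[i], S[j]) == symp(Im[i], Im[j])
           for i in range(8) for j in range(8))
print("commutation relations preserved: a valid Clifford action")
\end{verbatim}

The same routine applies verbatim to $T_3$ of section~\ref{five-qubit}, or to any candidate transformation specified by its action on the $X_i$ and $Z_i$.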
To perform operations within a single block, we need a few more tools, which will be presented in the next section. I will also consider a few more examples where we have tools beyond the ones available for any code. \section{Distance 2 Codes} There is a large class of distance 2 codes with a very simple form. The stabilizer for these codes has just two generators, one a product of all $X$s and one a product of all $Z$s. The total number of qubits $n$ must be even. These codes encode $n-2$ qubits, and therefore serve as a good model for block codes encoding multiple qubits. While these distance 2 codes cannot actually correct a general error, they may be useful in their own right nonetheless. A distance 2 code can be used for error detection \cite{vaidman}. If we encode our computer using distance 2 codes, we will not be able to fix any errors that occur, but we will know if an error has invalidated our calculation. A better potential use of distance 2 codes is to fix located errors \cite{grassl}. Suppose the dominant error source in our hardware comes from qubits leaving the normal computational space. In principle, without any coding, we can detect not only that this has happened, but in which qubit it has occurred. We can then use this information in conjunction with a distance 2 code to correct the state, as with a usual quantum error-correcting code. A final possible use of distance 2 codes is to concatenate them to produce codes that can correct multiple errors. Since the limiting factor in the computational threshold for concatenated codes is the time to do error correction, this potentially offers a great advantage. However, there is a significant difficulty in this program, since the codes given here encode more than one qubit, which complicates the concatenation procedure. Because of the simple structure of these distance 2 codes, we can immediately see a number of possible fault-tolerant operations. The bitwise Hadamard rotation and the bitwise CNOT are both permissible. If the total number of qubits is a multiple of 4, the $P$ gate and the other single-qubit operations are allowed, as well. What is less clear is how these various operations affect the encoded data. The $\overline{X_i}$ operators for these codes are $X_1 X_{i+1}$, where $i$ runs from 1 to $n-2$. The $\overline{Z_i}$ operators are $Z_{i+1} Z_n$. Therefore, swapping the $(i+1)$th qubit with the $(j+1)$th qubit will swap the $i$th encoded qubit with the $j$th encoded qubit. Swapping two qubits in a block is not a transversal operation, but if performed carefully, it can still be done fault-tolerantly. One advantage of the swap operation is that any errors in one qubit will not propagate to the other, since they are swapped as well. However, applying the swap directly to the two qubits allows the possibility of an error in the swap gate itself producing errors in both qubits. We can circumvent this by introducing a third ancilla qubit. Suppose we wish to swap A and B, which are in spots 1 and 2, using ancilla C, in spot 3. First swap the qubits in spots 1 and 3, then 1 and 2, and finally 2 and 3. Then A ends up in spot 2, B ends up in spot 1, and C ends up in spot 3, but A and B have never interacted directly. We would need two swap gates to go wrong in order to introduce errors to both A and B. Note that while the state of C does not matter for the swap, C should not hold anything important, since it is exposed to error from all three swap gates. 
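The fault-isolation property of this three-swap arrangement is simple enough to verify by exhaustion: track which of A, B, and C occupies the two wires touched by each swap gate, and record what a single faulty gate could damage. A small Python sketch (an added illustration, not from the original text):

\begin{verbatim}
# A and B start in spots 1 and 2, the ancilla C in spot 3; the network
# is swap(1,3), swap(1,2), swap(2,3). A faulty swap may corrupt both of
# the qubits currently on its two wires.
def run(faulty_gate):
    pos = {"A": 1, "B": 2, "C": 3}
    damaged = set()
    for step, (i, j) in enumerate([(1, 3), (1, 2), (2, 3)]):
        if step == faulty_gate:
            damaged |= {q for q, p in pos.items() if p in (i, j)}
        for q in pos:   # the swap itself (errors travel with the qubits)
            if pos[q] == i: pos[q] = j
            elif pos[q] == j: pos[q] = i
    return pos, damaged

for fault in range(3):
    pos, damaged = run(fault)
    assert pos == {"A": 2, "B": 1, "C": 3}   # A and B exchanged
    assert not {"A", "B"} <= damaged         # never both data qubits
    print("fault in gate", fault, "-> damaged:", sorted(damaged))
\end{verbatim}

Each single fault hits C together with at most one of A and B, confirming that two faulty swaps are needed to corrupt both data qubits.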
Also note that we should perform error correction before interacting this block with another block, since errors could then spread between corresponding qubits, and the swaps have changed which qubits correspond. The action of the CNOT is simple. As for other CSS codes, it just produces a CNOT between each encoded qubit in the first block and the corresponding encoded qubit in the second block. The Hadamard rotation converts $\overline{X_i}$ to $Z_1 Z_{i+1}$, which is equivalent (via multiplication by $M_2$) to $Z_2 \ldots Z_i Z_{i+2} \ldots Z_n$. This is equal to $\overline{Z_1} \ldots \overline{Z_{i-1}} \overline{Z_{i+1}} \ldots \overline{Z_{n-2}}$. Similarly, $\overline{Z_i}$ becomes $\overline{X_1} \ldots \overline{X_{i-1}} \overline{X_{i+1}} \ldots \overline{X_{n-2}}$. For instance, for the smallest case, $n=4$, \begin{eqnarray} \overline{X_1} & \rightarrow & \overline{Z_2} \nonumber \\ \overline{Z_1} & \rightarrow & \overline{X_2} \\ \overline{X_2} & \rightarrow & \overline{Z_1} \nonumber \\ \overline{Z_2} & \rightarrow & \overline{X_1}. \nonumber \end{eqnarray} The Hadamard rotation for $n=4$ performs a Hadamard rotation on each encoded qubit and simultaneously switches them. For larger $n$, it performs the Hadamard rotation on each encoded qubit, and performs a variation of the class of operations discussed in section~\ref{anycode}. For $n=4$, the $P$ gate acts as follows: \begin{eqnarray} \overline{X_1} & \rightarrow & -Y_1 Y_2 = -\overline{X_1} \overline{Z_2} \nonumber \\ \overline{X_2} & \rightarrow & -Y_1 Y_3 = -\overline{X_2} \overline{Z_1} \\ \overline{Z_1} & \rightarrow & \overline{Z_1} \nonumber \\ \overline{Z_2} & \rightarrow & \overline{Z_2}. \nonumber \end{eqnarray} A consideration of two-qubit gates allows us to identify this as a variant of the conditional sign gate. Specifically, this gate gives a sign of $-1$ unless both qubits are $|0\rangle$. When we allow measurement, a trick becomes available that is useful for any multiple-qubit block code. Given one data qubit, prepare a second ancilla qubit in the state $|0\rangle + |1\rangle$, then apply a CNOT from the second qubit to the first qubit and measure $Z$ for the first qubit. The initial stabilizer is $I \otimes X$; after the CNOT it is $X \otimes X$. Therefore the full operation takes $X \otimes I$ to $I \otimes X$ and $Z \otimes I$ to $Z \otimes Z$. We can discard the first qubit, and the second qubit is left in the initial data state (after an $X$ correction if the measurement gave $-1$). However, if we prepare the ancilla in the state $|0\rangle$, then apply a CNOT, the original state is unaffected. Therefore, by preparing a block with all but the $j$th encoded qubit in the state $|0\rangle$, and with the $j$th encoded qubit in the state $|0\rangle + |1\rangle$, then applying a CNOT from the new block to a data block and measuring the $j$th encoded qubit in the data block, we can switch the $j$th encoded qubit out of the data block and into the new, otherwise empty block. This trick enables us to perform arbitrary operations on qubits from the same block for the distance 2 codes. We switch the qubits of interest into blocks of their own, use swap operations to move them into corresponding spots, then perform whole block operations to interact them. Then we can swap them back and switch them back into place in their original blocks. The step that is missing for arbitrary stabilizer codes is the ability to move individual encoded qubits to different places within a block. Since the gate in section~\ref{anycode} gives us a block CNOT, we can perform the switching operation into an empty block. 
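The one-qubit version of this switching trick is easy to confirm with a direct statevector computation; the sketch below (an added illustration, not part of the original text) also makes explicit the $X$ correction needed when the $Z$ measurement returns $-1$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)                    # data state on qubit 1

plus = np.array([1.0, 1.0]) / np.sqrt(2)      # ancilla |0> + |1> on qubit 2
state = np.kron(psi, plus)                    # basis order |q1 q2>

cnot21 = np.zeros((4, 4))                     # CNOT: control q2, target q1
for a in range(2):
    for b in range(2):
        cnot21[((a ^ b) << 1) | b, (a << 1) | b] = 1.0
state = cnot21 @ state

for sign in (+1, -1):                         # the two Z outcomes on qubit 1
    proj = np.kron(np.diag([(1 + sign) / 2, (1 - sign) / 2]), np.eye(2))
    branch = proj @ state
    branch /= np.linalg.norm(branch)
    out = branch.reshape(2, 2)[0 if sign == 1 else 1]  # state of qubit 2
    if sign == -1:
        out = out[::-1]                       # X correction
    print(sign, abs(np.vdot(psi, out)))       # overlap 1.0 in both branches
\end{verbatim}

The same bookkeeping, applied to $\overline{X_j}$ and $\overline{Z_j}$ instead of the bare $X$ and $Z$, is what moves an encoded qubit between blocks.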
By using switching and whole block operations, we can perform an arbitrary one-qubit operation on any single encoded qubit within a block. The only remaining operation necessary is the ability to swap an encoded qubit from the $i$th place to the $j$th place. We can do this using quantum teleportation. All that is required is an otherwise empty block with the $i$th and $j$th encoded qubits in the entangled state $|00\rangle + |11\rangle$. Then we need only perform single-qubit operations and a CNOT between the qubits in the $i$th places, both of which we can do. To prepare the entangled state, we simply start with the $+1$-eigenstate of $\overline{Z_i}$ and $\overline{Z_j}$, then measure the eigenvalue of $\overline{X_i} \overline{X_j}$ (and correct if the result is $-1$). This is just an operator in ${\cal G}$, so we know how to do this. The state stays in an eigenvector of $\overline{Z_i} \overline{Z_j}$, which commutes with $\overline{X_i} \overline{X_j}$, so the result will be the desired encoded Bell state. We can then teleport the $i$th encoded qubit of one otherwise empty block to the $j$th position of the block originally containing the entangled state. This was all we needed to allow universal computation on any stabilizer code. \section{The Eight-Qubit Code} There is a code encoding three qubits in eight qubits that corrects one error \cite{gottesman,calderbank2,steane2}. The stabilizer is given in Table~\ref{qubit8}. \begin{table} \begin{tabular}{|l|cccccccc|} $M_1$ & $X$ & $X$ & $X$ & $X$ & $X$ & $X$ & $X$ & $X$ \\ $M_2$ & $Z$ & $Z$ & $Z$ & $Z$ & $Z$ & $Z$ & $Z$ & $Z$ \\ $M_3$ & $X$ & $I$ & $X$ & $I$ & $Z$ & $Y$ & $Z$ & $Y$ \\ $M_4$ & $X$ & $I$ & $Y$ & $Z$ & $X$ & $I$ & $Y$ & $Z$ \\ $M_5$ & $X$ & $Z$ & $I$ & $Y$ & $I$ & $Y$ & $X$ & $Z$ \\ \hline $\overline{X_1}$ & $X$ & $X$ & $I$ & $I$ & $I$ & $Z$ & $I$ & $Z$ \\ $\overline{X_2}$ & $X$ & $I$ & $X$ & $Z$ & $I$ & $I$ & $Z$ & $I$ \\ $\overline{X_3}$ & $X$ & $I$ & $I$ & $Z$ & $X$ & $Z$ & $I$ & $I$ \\ $\overline{Z_1}$ & $I$ & $Z$ & $I$ & $Z$ & $I$ & $Z$ & $I$ & $Z$ \\ $\overline{Z_2}$ & $I$ & $I$ & $Z$ & $Z$ & $I$ & $I$ & $Z$ & $Z$ \\ $\overline{Z_3}$ & $I$ & $I$ & $I$ & $I$ & $Z$ & $Z$ & $Z$ & $Z$ \\ \end{tabular} \caption{The stabilizer and encoded $X$s and $Z$s for the eight-qubit code.} \label{qubit8} \end{table} There are no transversal operations that leave this stabilizer fixed. However, when we allow swaps between the constituent qubits, a number of possibilities become available. One possible operation is to swap the first four qubits with the second four qubits. This leaves $M_1$, $M_2$, and $M_4$ unchanged. $M_3$ becomes instead $M_1 M_2 M_3$, and $M_5$ becomes $M_1 M_5$. On the encoded qubits, this induces the transformation \begin{eqnarray} X \otimes I \otimes I & \rightarrow & X \otimes I \otimes Z \nonumber \\ I \otimes X \otimes I & \rightarrow & I \otimes X \otimes I \nonumber \\ I \otimes I \otimes X & \rightarrow & Z \otimes I \otimes X \\ Z \otimes I \otimes I & \rightarrow & Z \otimes I \otimes I \nonumber \\ I \otimes Z \otimes I & \rightarrow & I \otimes Z \otimes I \nonumber \\ I \otimes I \otimes Z & \rightarrow & I \otimes I \otimes Z. \nonumber \end{eqnarray} This is just a conditional sign on the first and third qubits, with the second encoded qubit unaffected. Through single-qubit transformations, we can convert this to a controlled NOT, and use it to perform a swap between the first and third encoded positions. Another operation is to swap qubits one and two with qubits three and four, and qubits five and six with qubits seven and eight. 
This leaves $M_1$, $M_2$, and $M_3$ unchanged, and converts $M_4$ to $M_2 M_4$ and $M_5$ to $M_1 M_5$. On the encoded qubits, it induces the transformation \begin{eqnarray} X \otimes I \otimes I & \rightarrow & X \otimes Z \otimes Z \nonumber \\ I \otimes X \otimes I & \rightarrow & Z \otimes X \otimes Z \nonumber \\ I \otimes I \otimes X & \rightarrow & Z \otimes Z \otimes X \\ Z \otimes I \otimes I & \rightarrow & Z \otimes I \otimes I \nonumber \\ I \otimes Z \otimes I & \rightarrow & I \otimes Z \otimes I \nonumber \\ I \otimes I \otimes Z & \rightarrow & I \otimes I \otimes Z. \nonumber \end{eqnarray} We could also switch the odd-numbered qubits with the even-numbered qubits. That leaves $M_1$ and $M_2$ unchanged, while turning $M_3$ into $M_1 M_3$, $M_4$ into $M_1 M_4$, and $M_5$ into $M_1 M_2 M_5$. On the encoded qubits it induces \begin{eqnarray} X \otimes I \otimes I & \rightarrow & X \otimes I \otimes Z \nonumber \\ I \otimes X \otimes I & \rightarrow & I \otimes X \otimes Z \nonumber \\ I \otimes I \otimes X & \rightarrow & Z \otimes Z \otimes X \\ Z \otimes I \otimes I & \rightarrow & Z \otimes I \otimes I \nonumber \\ I \otimes Z \otimes I & \rightarrow & I \otimes Z \otimes I \nonumber \\ I \otimes I \otimes Z & \rightarrow & I \otimes I \otimes Z. \nonumber \end{eqnarray} This is just a conditional sign between the first and third places followed by a conditional sign between the second and third places. Combined with the first operation, it gives us a conditional sign between the second and third places, which we can again convert to a swap between the second and third encoded positions. This allows us to swap any two encoded qubits in the block, which is sufficient to give us universal computation. In this case, the symmetries of the code naturally became allowed transformations of the stabilizer. This is likely to hold true in many other cases as well. As with the five-qubit code, we also had available a universal protocol for swapping encoded qubits, but having multiple methods again allows us more freedom in choosing an efficient implementation. \section{Summary and Discussion} I have presented a general theory for understanding when it is possible to apply a given operation transversally to a given quantum error-correcting code, and for understanding the results of making a measurement on a stabilizer code. These results clarify the advantages of the doubly-even self-dual CSS codes used by Shor~\cite{shor2}. They also provide protocols for performing universal computation on any stabilizer code. In many cases, the protocols described here call for a number of steps to perform even simple operations, so more efficient protocols for specific codes are desirable, and I expect the methods described in this paper will be quite helpful when searching for these protocols. Efficient use of space is also important. Existing methods of fault-tolerant computation use space very inefficiently, and being able to use more efficient codes (such as those encoding multiple qubits in a block) could be very helpful in reducing the space requirements. This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG03-92-ER40701 and by DARPA under Grant No. DAAH04-96-1-0386 administered by the Army Research Office. I would like to thank John Preskill, Manny Knill, Richard Cleve, and David DiVincenzo for helpful discussions.
\section{Introduction} It is well known that, given $[X,Y]:=XY-YX$, the standard Jacobi identity (JI) $[[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y]=0$ is automatically satisfied if the product is associative (which will be assumed throughout). For a Lie algebra ${\cal G}$, expressed by the Lie commutators $[X_i,X_j]=C_{ij}^k X_k$ in a certain basis $\{X_i\}\ i=1,\ldots,r={\rm dim}{\cal G}$, the JI implies the Jacobi condition (JC) \begin{equation} {1\over 2}\epsilon^{j_1j_2j_3}_{i_1i_2i_3} C^\rho_{j_1 j_2}C^\sigma_{\rho j_3}=0\quad, \label{jacid} \end{equation} on the structure constants. Let ${\cal G}$ be simple. Using the Killing metric $k_{ij}=k(X_i,X_j)$ to lower and raise indices, the fully antisymmetric tensor $C_{ijk}=C^s_{ij}k_{sk}=k([X_i,X_j],X_k)$ defines a non-trivial Lie algebra three-cocycle. Since it is obtained from $k$, this three-cocycle is always present ($H^3_0({\cal G},\R)\ne0$ for any ${\cal G}$ simple). In fact, it is known since the classical work of Cartan, Pontrjagin, Hopf and others (see in particular \cite{CAR,PON,HOPF,HOD,CE,SAMEL,BORCHE,BOREL,BOTT}) that, from a topological point of view, the group manifolds of all simple compact groups are essentially equivalent to the products of odd spheres \footnote{More precisely, if $G$ is a compact connected Lie group, $G$ has the ({\it real}) cohomology or homology of a product of odd dimensional spheres.}, that $S^3$ is always present in these products and that the simple Lie algebra ${\cal G}$-cocycles are in one-to-one correspondence with bi-invariant de Rham cocycles on the associated compact group manifolds $G$. The appearance of specific spheres $S^{2p+1}\ (p\ge 1)$ other than $S^3$ depends on the simple group considered. This is due to the intimate relation between the order of the $l\!=$rank$\,{\cal G}$ primitive symmetric polynomials which can be defined on a simple Lie algebra, their $l$ associated generalised Casimir-Racah invariants \cite{RACAH,GEL,LCB,AK,GR,PP,NR,OKPA,SOK} \footnote{In the non-simple case the situation is more involved (see \cite{ABELLANAS}).} and the topology of the associated simple groups, a fact which was found in the eighties to be a key in the understanding of non-abelian anomalies in gauge theories (see \cite{TJZW} for an account of the subject and {\it e.g.}, \cite{AJ,KEP,OKPAII}). By looking at the invariant symmetric polynomials on ${\cal G}$ we may obtain the higher-order cocycles of the Lie algebra cohomology. These cocycles will turn out to define ${\cal G}$-valued skew-symmetric brackets of even order $s$ satisfying a generalised Jacobi condition replacing \eq{jacid}. Higher-order generalisations of Lie algebras, in the form of the {\it strongly homotopy Lie algebras} (SH) of Stasheff \cite{LAST,LAMA}, have recently appeared in physics. This is the case of the (SH) algebra of products of closed string fields (see \cite{WZ,ZIE} and references therein), which involves multilinear, graded-commutative products of $n$ string fields satisfying certain `main identities' which also generalise the standard Jacobi identity. The higher-order Lie algebras to be discussed in this paper satisfy, however, a generalised Jacobi condition which is a consequence of the assumed associativity of the product of algebra elements and which has also appeared in another, but related context \cite{APPB}. As a result the definition of the skew-symmetric multibracket to be given in sec. 
2 permits, for each even $s$, the introduction of a coderivation $\partial_s$ of the exterior algebra $\wedge({\cal G})$ constructed on the Lie algebra ${\cal G}$. In contrast, the `main identities' of the SH Lie algebras \cite{LAST,LAMA} (some detailed expressions can be found in \cite{EJ}) involve a further extension of our generalised Jacobi identities which in effect describes how the products fail to satisfy them and how the various $\partial_s$ involved in the main identities fail separately to be coderivations. Our extended higher-order algebras are thus a particular case of the SH Lie algebras in which only one of the $\partial_s$ is non-zero \footnote{ Note added: This is also the case of the `$k$-algebras' \cite{HW}. We thank P. Hanlon for sending us this reference.}. We shall now show how to introduce them (sec. 2) and present their Cartan--like classification in the simple case (secs. 3,4). In sec. 5 we shall describe our results by introducing the complete BRST operator associated with a simple Lie algebra; some comments concerning applications and extensions will be made in sec. 6. \section{Multibrackets and higher-order Lie algebras} Higher-order Lie algebras may be defined by introducing a suitable generalisation of the Lie bracket by means of \medskip \noindent {\bf Definition 2.1}\quad ({\it $s$-bracket}) Let $s$ be even. An $s$-{\it bracket} or skew-symmetric Lie multibracket is a Lie algebra valued $s$-linear skew-symmetric mapping ${\cal G}\times\mathop{\cdots}\limits^s\times{\cal G}\to {\cal G}$, \begin{equation} (X_{i_1},X_{i_2},\ldots,X_{i_s})\mapsto [X_{i_1},X_{i_2},\ldots,X_{i_s}]= {\omega_{\ind is}}^\sigma_\cdot X_\sigma\quad, \label{defpars} \end{equation} where the constants ${\omega_{\ind is}}^\sigma_\cdot$ satisfy the condition \begin{equation} \epsilon^\ind j{2s-1}_\ind i{2s-1} {\omega_{\ind j s}}^\rho_\cdot {\omega_{\rho \ix j{s+1}{2s-1} }}^\sigma_\cdot=0\quad. \label{jaccond} \end{equation} The ${\omega_{\ind is}}^\sigma_\cdot$ will be called {\it higher-order structure constants}, and condition \eq{jaccond} will be referred to as the {\it generalised Jacobi condition} (GJC); for $s=2$ it gives the ordinary JC \eq{jacid}. \medskip \noindent {\it Remark.}\quad Although we shall only consider here the case of Lie algebras, this definition (as others below) is more general. \medskip The GJC \eq{jaccond} is clearly a consistency condition for \eq{defpars}. From now on $X_i$ will denote both the algebra basis elements and their representatives in a faithful representation of ${\cal G}$. Let now $[X_{i_1},X_{i_2},\allowbreak\ldots,X_{i_n}]$, $n$ arbitrary, be defined by \begin{equation} [X_{i_1},X_{i_2},\ldots,X_{i_n}]= \sum_{\sigma\in S_n}(-1)^{\pi(\sigma)} X_{i_{\sigma(1)}}X_{i_{\sigma(2)}}\ldots X_{i_{\sigma(n)}} \quad, \label{defmulti} \end{equation} where $\pi(\sigma)$ is the parity of the permutation $\sigma$ and the (associative) products on the $r.h.s.$ are well defined as products of matrices or as elements of ${\cal U}({\cal G})$. Then, the following Lemma holds: \medskip \noindent {\bf Lemma 2.1}\quad Let $[X_1,\ldots, X_n]$ be as in \eq{defmulti} above. Then, for $n$ even, \begin{equation} {1\over (n-1)!}{1\over n!}\sum_{\sigma\in S_{2n-1}} (-1)^{\pi(\sigma)} [[X_{\sigma(1)},\ldots,X_{\sigma(n)}],X_{\sigma(n+1)},\ldots,X_{\sigma(2n-1)}] =0 \label{genjacid} \end{equation} is an identity, the {\it generalised Jacobi identity} (GJI), which for \eq{defpars} implies the GJC \eq{jaccond}; for $n$ odd, the $l.h.s.$ is proportional to $[X_1,\ldots,X_{2n-1}]$. 
\medskip \noindent {\it Proof}: Let $Q_p$ be the antisymmetriser for the symmetric group $S_p$ ({\it i.e.}, the pri\-mi\-ti\-ve `idempotent' $[Q_p^2=p!Q_p]$ in the Frobenius algebra of $S_p$, associated with the fully antisymmetric Young tableau). The sum in \eq{genjacid} contains $C^{n-1}_{2n-1}$ different terms ($(2n-1)!/(n-1)!n!=C^{n-1}_{2n-1}$). Consider the first of these, $[[X_1,\ldots,X_n],X_{n+1},\ldots,\allowbreak X_{2n-1}]$. Its full expansion contains $(n!)^2$ terms, which may be written as the sum of $n$ terms \begin{equation} \eqalign{ [[X_1,\ldots,X_n]& ,X_{n+1},\ldots,X_{2n-1}]= Q_n(X_1X_2\ldots X_n)Q_{n-1}(X_{n+1}X_{n+2}\ldots X_{2n-1}) \cr & -Q_{n-1}(X_{n+1} Q_n(X_1X_2\ldots X_n) X_{n+2}\ldots X_{2n-1}) \cr & +Q_{n-1}(X_{n+1} X_{n+2} Q_n(X_1X_2\ldots X_n)X_{n+3}\ldots X_{2n-1}) \cr & +\ldots+(-1)^{n-2}Q_{n-1}(X_{n+1} X_{n+2} \ldots X_{2n-2} Q_n(X_1X_2\ldots X_n) X_{2n-1}) \cr & + (-1)^{n-1} Q_{n-1}(X_{n+1} X_{n+2}\ldots X_{2n-1} ) Q_n(X_1X_2\ldots X_n)\quad, \cr} \label{proofjacobi} \end{equation} where the antisymmetriser $Q_n$ $[Q_{n-1}]$ acts on the $n$ $[n-1]$ indices $(1,\ldots,n)$ $[(n+1,\ldots,2n-1)]$ {\it only}. This sum may be rewritten as \begin{equation} \eqalign{ & Q_nQ_{n-1}\{e+(-1)^n(1,n+1)+(1,n+1)(2,n+2)+ (-1)^n(1,n+1)(2,n+2)\cdot \cr &\cdot (3,n+3)+\ldots+(-1)^n(1,n+1)(2,n+2)\ldots(n-2,2n-2)+ \cr & \phantom{\times}(1,n+1)\ldots(n-1,2n-1)\}X_1\ldots X_{2n-1} \quad, \cr} \label{moreproofjacobi} \end{equation} where $(i,j)$ indicates the transposition in $S_{2n-1}$ which interchanges the indices $i,j$; thus, all the signs in \eq{moreproofjacobi} are positive for $n$ even, and they alternate for $n$ odd according to the parity of the accompanying permutation. Numerical factors apart, the $l.h.s.$ of \eq{genjacid} is the result of the action of the $S_{2n-1}$ antisymmetriser in $(2n-1)$ indices, $Q_{2n-1}$, on \eq{proofjacobi} or \eq{moreproofjacobi}. Since $\sigma Q_{2n-1}=(-1)^{\pi(\sigma)}Q_{2n-1}\, \forall\sigma\in S_{2n-1}\,,$ it turns out that $Q_{2n-1}(Q_n Q_{n-1})\propto Q_{2n-1}$. Thus, only the action of $Q_{2n-1}$ on the curly bracket in \eq{moreproofjacobi} has to be considered. Since its permutations are half even and half odd, it becomes identically zero for $n$ even and proportional to $Q_{2n-1}$ for $n$ odd, {\it q.e.d.} \medskip Lemma 2.1 shows that the higher-order bracket may be defined, like the Lie bracket, by the skew-symmetric product of an (even) number of generators. By analogy with the standard Lie algebra ($s=2$) case, we may now give the following \medskip \noindent {\bf Definition 2.2}\quad ({\it Higher-order Lie algebra}) Let ${\cal G}$ be a Lie algebra. A higher-order Lie algebra on ${\cal G}$ is the algebra defined by the $s$-bracket \eq{defpars}, where the higher-order structure constants satisfy the generalised Jacobi condition \eq{jaccond}. \medskip Multibrackets appear naturally if we use for the basis $X_i$ of ${\cal G}$ a set of left-invariant vector fields (LIVF) on the group manifold\footnote{ On $G$, a vector field $X_i$ is expressed as $X_i^j(g)\partial/\partial g^j\,,\,j=1,\ldots,r\;,$ where $g^i$ are local coordinates of $G$ at the unity.} of the Lie group $G$ associated with ${\cal G}$. Then, the exterior algebra $\wedge(G)$ may be identified with the exterior algebra of the LI contravariant, skew-symmetric tensor fields on $G$ obtained by taking the exterior products of LIVF's with constant coefficients; this is analogous to the exterior algebra of LI covariant tensor fields (LI forms) on $G$. 
Then, in analogy with the exterior derivative of a LI $q$-form $\omega\in\wedge_q(G)$, an exterior {\it coderivation} $\partial:\wedge^q(G)\to\wedge^{q-1}(G)\,,\,\partial^2=0,$ may be introduced by taking \begin{equation} \partial(X_1\wedge\ldots\wedge X_q)= \sum^q_{\scriptstyle l,k=1 \atop {\scriptstyle l<k}}(-1)^{l+k+1}[X_l,X_k]\wedge X_1\wedge\ldots\widehat X_l\ldots\widehat X_k\ldots\wedge X_q\quad. \label{coder} \end{equation} For instance, on $X_{i_1}\wedge X_{i_2}\wedge X_{i_3}\in \wedge^3(G)$, the statement $\partial^2(X_{i_1}\wedge X_{i_2}\wedge X_{i_3})=0$ is nothing but the standard Jacobi identity. If we now define $\partial_2(X_{i_1}\wedge X_{i_2})= \epsilon^{j_1 j_2}_{i_1 i_2} X_{j_1}X_{j_2}=[X_{i_1},X_{i_2}],$ the coderivation $\partial$ above corresponds to $\partial_2:\wedge^q(G)\to\wedge^{q-1}(G)$. This may now be extended to a general {\it even} coderivation $\partial_s$, $\partial_s:\wedge^q(G)\to\wedge^{q-(s-1)}(G)\,,\, \partial_s^2=0\,$: \medskip \noindent {\bf Definition 2.3}\quad ({\it coderivation $\partial_s$}) Let $s$ be even. The mapping $\partial_s:\wedge^s(G)\to\wedge^1(G)\sim{\cal G}$ given by $\partial_s:X_1\wedge\ldots\wedge X_s\mapsto [X_1,\ldots,X_s]$, where the $s$-bracket is given by Def. 2.1, may be extended to a higher-order coderivation $\partial_s:\wedge^n(G)\to\wedge^{n-s+1}(G)$ by \begin{equation} \partial_s(X_1\wedge\ldots\wedge X_n)= {1\over s!(n-s)!}\epsilon^{\ind i n}_{1\, \ldots\, n} \partial_s(X_{i_1}\wedge\ldots\wedge X_{i_s})\wedge X_{i_{s+1}}\wedge\ldots \wedge X_{i_n}\;, \label{extended} \end{equation} with $\partial_s(\wedge^n(G))=0$ for $s>n$. It follows from \eq{jaccond} that $\partial_s^2=0$. \medskip For $s=2$, eq. \eq{extended} reduces to \eq{coder}. On $X_{i_1}\wedge\ldots\wedge X_{i_7}\in\wedge^7(G)$, for instance, $\partial_4^2=0$ leads to the GJI which must be satisfied by a fourth-order Lie algebra. As mentioned, these higher-order algebras are particular cases of the strongly homotopy algebras \cite{LAST,LAMA} of recent relevance in string field theory (see \cite{ZIE}). We shall now give explicit examples of higher-order algebras and, as a result, provide the classification of all higher-order simple Lie algebras. \section{Higher-order simple Lie algebras. The case of $su(n)$} Let ${\cal G}$ be now a simple Lie algebra. In what follows, we shall also assume ${\cal G}$ to be compact (although compactness is not essential for many of the arguments below) so that the non-degenerate Killing matrix $k_{ij}$ may be taken to be the unit matrix $\delta_{ij}$ after suitable normalization of the generators. As mentioned, there are $l$ primitive invariant polynomials for each simple algebra of rank $l$ which are in turn related to the Casimir-Racah operators of the algebra \cite{WEYL,RACAH,LCB,AK,GR,NR,OKPA,SOK,EK,LJB}, to the Lie algebra cohomology for the trivial action and to the topology and de Rham cohomology of the associated simple compact Lie group \cite{CAR,HOPF,HOD,CE,SAMEL,BORCHE,BOREL}. We now use this fact to provide a classification of the possible higher-order simple Lie algebras. 
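Before proceeding, it is worth noting that the generalised Jacobi identity \eq{genjacid} is a statement about associative products only, so it can be tested numerically on generic matrices. The following Python sketch (a check added for illustration, not part of the original text) verifies for $n=4$ that the antisymmetrised nesting over $S_7$ vanishes; the prefactor $1/(n-1)!n!$ is irrelevant for a zero test, and the run takes a few seconds:

\begin{verbatim}
import itertools
import numpy as np

def sign(p):
    # parity of a permutation, computed via its cycle decomposition
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if seen[i]:
            continue
        j, cyc = i, 0
        while not seen[j]:
            seen[j] = True
            j = p[j]
            cyc += 1
        if cyc % 2 == 0:
            s = -s
    return s

def bracket(mats):
    # skew-symmetric multibracket, eq. (defmulti)
    out = np.zeros_like(mats[0])
    for p in itertools.permutations(range(len(mats))):
        term = mats[p[0]]
        for k in p[1:]:
            term = term @ mats[k]
        out = out + sign(p) * term
    return out

rng = np.random.default_rng(0)
X = [rng.standard_normal((3, 3)) for _ in range(7)]

total = np.zeros((3, 3))
for p in itertools.permutations(range(7)):
    inner = bracket([X[p[0]], X[p[1]], X[p[2]], X[p[3]]])
    total = total + sign(p) * bracket([inner, X[p[4]], X[p[5]], X[p[6]]])

print(np.max(np.abs(total)))   # zero up to float rounding
\end{verbatim}

For odd $n$, the same sum is instead proportional to the full $(2n-1)$-bracket, in accordance with Lemma 2.1.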
Given a simple Lie algebra ${\cal G}$, the orders $m_i$ of the $l$ invariant polynomials (or of the generalised Casimir invariants) and of the $l$ cocycles (or bi-invariant forms on the corresponding compact group $G$) are given by the following table \begin{center} \medskip \noindent \vbox{\tabskip=0pt\offinterlineskip \def\noalign{\hrule}{\noalign{\hrule}} \halign to 15.9cm{\strut#&\vrule#\tabskip=0.2em plus 0.2em& \hss $#$ \hss & \vrule # & \hss $#$ \hss & \vrule # & \hss $#$ \hss & \vrule # & \hss $#$ \hss & \vrule #\tabskip=0pt\cr\noalign{\hrule} &&{\cal G} && \hbox{algebra\ dimension}&& \hbox{order\ of\ invariants} && \hbox{order of ${\cal G}$-cocycles} & \cr && && r=\hbox{dim}{\cal G} && m_1,\ldots,m_l && (2m_1-1),\ldots,(2m_l-1) &\cr\noalign{\hrule} && A_l && (l+1)^2-1\ [l\ge 1]&& 2,3,\ldots,l+1 && 3,5,\ldots, 2l+1 &\cr\noalign{\hrule} && B_l && l(2l+1)\ [l\ge 2] && 2,4,\ldots,2l && 3,7,\ldots, 4l-1 &\cr\noalign{\hrule} && C_l && l(2l+1)\ [l\ge 3] && 2,4,\ldots,2l && 3,7,\ldots, 4l-1 &\cr\noalign{\hrule} && D_l && l(2l-1)\ [l\ge 4] && 2,4,\ldots,2l-2,\,l && 3,7,\ldots,4l-5,\,2l-1 &\cr\noalign{\hrule} && G_2 && 14 && 2,6 && 3,11 &\cr\noalign{\hrule} && F_4 && 52 && 2,6,8,12 && 3,11,15,23 &\cr\noalign{\hrule} && E_6 && 78 && 2,5,6,8,9,12 && 3,9,11,15,17,23 &\cr\noalign{\hrule} && E_7 && 133 && 2,6,8,10,12,14,18 && 3,11,15,19,23,27,35 &\cr\noalign{\hrule} && E_8 && 248 && 2,8,12,14,18,20,24,30 && 3,15,23,27,35,39,47,59 &\cr\noalign{\hrule} }} \end{center} \noindent {\it Orders of the Casimir-Racah invariants and Lie algebra cocycles for ${\cal G}$ simple}. \medskip \noindent We see that $\sum_{i=1}^l(2m_i-1)=r$. \medskip \noindent {\bf Definition 3.1}\quad ({\it Higher-order simple Lie algebras}) A higher-order simple Lie algebra associated with a simple Lie algebra ${\cal G}$ is the higher-order algebra defined by a primitive ${\cal G}$-cocycle (of order $>3$) on ${\cal G}$. \medskip Thus, to find the higher-order simple Lie algebras one has to look for the invariant polynomials on them. For the compact forms of these groups, the cocycle orders are also the dimensions of the primitive de Rham cycles (odd spheres) to which the group manifolds are essentially equivalent. We shall now find explicit realizations of these algebras. Consider first the case of $su(n)\,,\,n\ge 3$ with $k_{ij}\sim\delta_{ij}$ (there are no higher-order simple Lie algebras on $su(2)$). In terms of its structure constants (for hermitian generators $T_i$) $[T_i,T_j]=iC_{ijk}T_k$, the anticommutator of two $n\times n$ $su(n)$ matrices may be expressed as $\{T_i,T_j\}=c\delta_{ij} + d_{ijk}T_k$ (with $c=1/n$, Tr$(T_iT_j)={1\over 2}\delta_{ij}$). The $d_{ijk}\propto {\rm Tr}(T_i\{T_j,T_k\})$ term (absent for $su(2)$) is the first example of a symmetric invariant polynomial (of third order\footnote{ For the properties of the $d$-tensors see \cite{SUDBERY}.}) beyond the Killing tensor $k_{ij}$ (see Table). Invariant, symmetric polynomials are given by the symmetric traces (sTr) of products of $su(n)$ generators (cf. the theory of characteristic classes). Let us then consider the next case, $m_3=4$. The coordinates of this fourth-order polynomial $k_{i_1 i_2 i_3 i_4}$ are given by sTr$(T_{i_1} T_{i_2} T_{i_3} T_{i_4})$ or (ignoring numerical factors) by Tr($s(T_{i_1} T_{i_2} T_{i_3}) T_{i_4})\propto$Tr($s(\{\{T_{i_1},T_{i_2}\},T_{i_3}\})T_{i_4})\propto (d_{(i_1 i_2 l}d_{l i_3) i_4}+ 2c\delta_{(i_1 i_2}\delta_{i_3)i_4})$ where $s$ symmetrises the $i_1,i_2,i_3$ indices. 
Thus, we may take \begin{equation} \eqalign{ k_{i_1 i_2 i_3 i_4}= & d_{i_1 i_2 l}d_{l i_3 i_4}+d_{i_1 i_3 l}d_{l i_2 i_4}+ d_{i_1 i_4 l}d_{l i_2 i_3} \cr & +2c(\delta_{i_1 i_2}\delta_{i_3 i_4}+\delta_{i_1 i_3}\delta_{i_2 i_4}+ \delta_{i_1 i_4}\delta_{i_2 i_3})\quad. \cr} \label{fourthorder} \end{equation} Clearly, the last term will not generate a primitive fourth-order Casimir operator\footnote{Notice that $l\ge 3$ for $k_{i_1 i_2 i_3 i_4}$ to be primitive. For $su(3)$ the identity $d_{(i_1 i_2 l} d_{i_3 ) i_4 l} = {1\over 3} \delta_{(i_1 i_2} \delta_{i_3) i_4}$, where the brackets mean symmetrisation, precludes eq. (\ref{fourthorder}) from producing a primitive fourth-order invariant. Similar type relations hold for higher ranks \cite{SUDBERY} (see also \cite{BAIS}, where higher-order Casimir operators were used to introduce the so-called Casimir ${\cal W}$-algebras).}, since it is proportional to the square of the second order one, $(I_2)^2$. Eq. \eq{fourthorder} reflects the well-known ambiguity in the selection of the higher-order Casimirs for the simple Lie algebras (see, {\it e.g.}, \cite{LCB,GR,SOK,BW}). The first part, which generalises easily up to $k_\ind in$, leads to the form of the Casimir-Racah operator $I_n$ given in \cite{AK}. We are now in a position to introduce all $A_l$ higher-order simple Lie algebras \medskip \noindent {\bf Theorem 3.1}\quad ({\it Higher-order $A_l$ Lie algebras}) Let $X_i$ be a basis of $A_l$, $i=1,\ldots, (l+1)^2-1$. Then, the even multibracket \begin{equation} [X_{i_1},\ldots,X_{i_{2m-2}}]:= \epsilon^\ind j{2m-2}_\ind i{2m-2}X_{j_1}\ldots X_{j_{2m-2}} \label{evenmultibracket} \end{equation} is ${\cal G}$-valued and defines a higher-order simple Lie algebra \begin{equation} [X_{i_1},\ldots,X_{i_{2m-2}}]={\omega_\ind i{2m-2}}^\sigma_\cdot X_\sigma \quad, \label{cocycle} \end{equation} where the higher-order structure constants ${\omega_\ind i{2m-2}}^\sigma_\cdot$ associated with the invariant polynomial $k_\ind i m$ are given by the skew-symmetric tensor \begin{equation} \omega_{\ind i{2m-2}\sigma}=\epsilon^\ix j2{2m-2}_\ix i2{2m-2} C^{l_1}_{i_1 j_2}\ldots C^{l_{m-1}}_{j_{2m-3}j_{2m-2}}k_{\ind l{m-1}\sigma} \quad, \label{defcocycle} \end{equation} which defines a non-trivial $(2m-1)$-cocycle for $su(l+1)\,,\,3\le m\le l+1$ ($m=2$ is the standard Lie algebra). Before presenting a general proof, let us illustrate the theorem in the two simplest cases. For $m=2$ eq. \eq{defcocycle} reads \begin{equation} \omega_{i_1 i_2 \sigma}=\delta^{j_2}_{i_2}C^{l_1}_{i_1j_2}k_{l_1\sigma}= k([X_{i_1},X_{i_2}],X_\sigma) \quad, \label{secondorder} \end{equation} and the $\omega_{i_1 i_2 \sigma}$ are the standard structure constants of ${\cal G}$. Thus, the $m=2$ (lowest) polynomial corresponds to the ordinary ($su(n)$, in this case) Lie algebra commutators. Let $m=3$. If $d$ denotes the symmetric polynomial, eq. \eq{defcocycle} gives \begin{equation} \eqalign{ \omega_{i_1 i_2 i_3 i_4\sigma}= & \epsilon^{j_2 j_3 j_4}_{i_2 i_3 i_4} C^{l_1}_{i_1 j_2}C^{l_2}_{j_3 j_4} d_{l_1 l_2 \sigma}= \cr =& \epsilon^{j_2 j_3 j_4}_{i_2 i_3 i_4} d([X_{i_1},X_{j_2}],[X_{j_3},X_{j_4}],X_\sigma) \quad,\cr} \label{examplei} \end{equation} which is the expression of the fully antisymmetric five-cocycle. 
On the other hand, \begin{equation} \eqalign{ [X_{i_1},X_{i_2},X_{i_3},X_{i_4}] & = \epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4}X_{j_1}\ldots X_{j_4}= {1\over 2^2}\epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4} [X_{j_1},X_{j_2}][X_{j_3},X_{j_4}] \cr & ={1\over 2^2}\epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4} C^{k}_{j_1 j_2}C^{l}_{j_3 j_4}X_k X_l \quad.\cr} \label{exampleii} \end{equation} Taking into account that $\epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4} C^{k}_{j_1 j_2}C^{l}_{j_3 j_4}$ is symmetric in $k,l$ this is equal to \begin{equation} {1\over 2^3} \epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4} C^{l_1}_{j_1 j_2}C^{l_2}_{j_3 j_4} (d_{l_1l_2\sigma} X_\sigma + c\delta_{l_1l_2})\quad. \label{exampleiii} \end{equation} The term in $c$ may be dropped since, for each $j_4$, it is proportional to the antisymmetrised sum $C_{j_1 j_2 l}C^l_{j_3 j_4}$ in $j_1,j_2,j_3$ which is zero by the Jacobi identity. Using now that \begin{equation} \epsilon^{j_1 j_2 j_3 j_4}_{i_1 i_2 i_3 i_4}=\sum^4_{s=1}(-1)^{s+1} \delta_{i_1}^{j_{s}}\epsilon^{j_1 \ldots\widehat j_s\ldots j_4}_{i_2 i_3 i_4} \quad, \label{epsilonprop} \end{equation} it is easy to see that all the terms in \eq{epsilonprop} give the same contribution for the remaining $d$ term in \eq{exampleiii}. Hence, the fourth-commutator \begin{equation} [X_{i_1},X_{i_2},X_{i_3},X_{i_4}]= {1\over 2} \epsilon^{j_2 j_3 j_4}_{i_2 i_3 i_4} C^{l_1}_{i_1 j_2}C^{l_2}_{j_3 j_4} d_{l_1 l_2 \sigma}X_\sigma \label{exampleiv} \end{equation} is indeed of the form \eq{examplei}, and it may be checked explicitly that it is in $su(3)$. The proof of Theorem 3.1 requires now the following simple \medskip \noindent {\bf Lemma 3.1}\quad If $k_\ind lm$ is an ad-invariant, symmetric polynomial on a simple Lie algebra ${\cal G}$, \begin{equation} \epsilon^\ind j{2m}_\ind i{2m} C^{l_1}_{j_1 j_2}\ldots C^{l_m}_{j_{2m-1} j_{2m}} k_\ind lm=0 \quad. \label{simplelemma} \end{equation} \medskip \noindent {\it Proof}: First, we note that the ad-invariance condition of the $m$-tensor $k$ may be expressed in coordinates by \begin{equation} \sum_{s=1}^m C^{k}_{j_{2m-1} l_s} k_{l_1\ldots l_{s-1} k l_{s+1}\ldots l_m}=0\quad. \label{invariance} \end{equation} Hence, replacing $C^{l_m}_{j_{2m-1} j_{2m}}k_\ind lm$ in the $l.h.s.$ of \eq{simplelemma}\ by the other terms in \eq{invariance}\ we get \begin{equation} \epsilon^\ind j{2m}_\ind i{2m} C^{l_1}_{j_1 j_2}\ldots C^{l_{m-1}}_{j_{2m-3} j_{2m-2}} (\sum_{s=1}^{m-1}C^{k}_{j_{2m-1} l_s} k_{l_1\ldots l_{s-1} k l_{s+1}\ldots l_{m-1}\,j_{2m}}) \quad, \end{equation} which vanishes since all terms in the sum include products of the form $C^s_{jj'}C^k_{sj''}$ antisymmetrised in $j,j',j''$, which are zero due to the standard JC \eq{jacid}, {\it q.e.d.} \medskip To prove now Theorem 3.1, we write the $(2m-2)$ bracket as \begin{equation} \eqalign{ [X_{i_1},\ldots, X_{i_{2m-2}}]= & {1\over 2^{m-1}}\epsilon^\ind j{2m-2}_\ind i{2m-2} [X_{j_1},X_{j_2}]\ldots[X_{j_{2m-3}},X_{j_{2m-2}}]= \cr & {1\over 2^{m-1}}\epsilon^\ind j{2m-2}_\ind i{2m-2} C^{l_1}_{j_1 j_2}\ldots C^{l_{m-1}}_{j_{2m-3} j_{2m-2}} {1\over (m-1)!}s(X_{l_1}\ldots X_{l_{m-1}}) \cr} \label{prooftheoremi} \end{equation} where we have used (cf. the $m=3$ case) that $\epsilon^\ind j{2m-2}_\ind i{2m-2} C^{l_1}_{j_1 j_2}\ldots C^{l_{m-1}}_{j_{2m-3} j_{2m-2}}$ is symmetric in $l_1,\ldots,l_{m-1}$ to introduce the symmetrised product of generators, which in turn may be replaced, adding the appropriate factors, by $s(\{\{\ldots\{X_{l_1},X_{l_2}\},X_{l_3}\},\allowbreak\ldots,X_{l_{m-1}}\})$. 
Using that $\{X_i,X_j\}=c\delta_{ij}+d_{ijk}X_k$ in the expression of the nested anticommutators, we then conclude that it has the form \begin{equation}{\rm (factors)}s(X_{l_1}\ldots X_{l_{m-1}})= {\tilde k}_{l_1\ldots l_{m-1}\cdot}^{\phantom{l_1\ldots l_{m-1}}\sigma} X_\sigma+\hat k_{l_1\ldots l_{m-1}}1\quad. \label{symproduct} \end{equation} By Lemma 3.1, the second term does not contribute to \eq{prooftheoremi}\ because $\hat k$ is an invariant polynomial of order $(m-1)$. On the other hand, since Tr$(s(X_{l_1}\ldots X_{l_{m-1}})X_\sigma)\propto\, $sTr$(X_{l_1}\ldots X_{l_{m-1}}X_\sigma)$, we conclude that $\tilde k_{l_1\ldots l_{m-1}\sigma}$ is an invariant symmetric $m$-th order polynomial. Absorbing all numerical factors in $\tilde k$ and renaming it as $k$, we find that the $(2m-2)$-commutator in \eq{prooftheoremi}\ is given by \begin{equation} {1\over (2m-2)}\epsilon^\ind j{2m-2}_\ind i{2m-2} C^{l_1}_{j_1 j_2}\ldots C^{l_{m-1}}_{j_{2m-3} j_{2m-2}} {k_{l_1\ldots l_{m-1}}}^\sigma_\cdot X_\sigma= {\omega_\ind i{2m-2}}^\sigma_\cdot X_\sigma \end{equation} {\it i.e.}, by the $(2m-1)$-cocycle \eq{defcocycle}. Since \eq{evenmultibracket} is given by the product of associative operators, the GJC \eq{jaccond} follows from the GJI \eq{genjacid}, {\it q.e.d.} Equivalently, one may show that the cocycle condition for $\omega_{\ind i{2m-2}\sigma}$ guarantees that the GJC is satisfied (see the Remark after Th. 5.1 below). This establishes the connection between Lie algebra cohomology cocycles and higher-order Lie algebras. \section{Higher-order orthogonal and symplectic algebras} We now extend to the $B_l\ (l\ge 2),\ C_l\ (l\ge 3),\ D_l\ (l\ge 4)$ series the considerations in sec. 3 for $A_l$. First we notice that for all of them the third-order symmetric polynomial is absent and that only for the even orthogonal algebra $D_l$ (and odd $l$) may we have an odd-order invariant polynomial. We shall ignore this case for the moment, and look first for the even-order symmetric polynomials. Let us realise the generators of the above algebras in terms of the $n\times n$ matrices of the defining representation, where $n=(2l+1,2l,2l)$ for $(B_l,C_l,D_l)$ respectively. These matrices $T$ all share the metric-preserving defining property $Tg=-gT^t$, where $g$ is the $n\times n$ unit matrix for the orthogonal algebras and the symplectic metric for $C_l$. If we define the symmetric third-order anticommutator by \begin{equation} \{T_1,T_2,T_3\}=\sum_{\sigma\in S_3}T_{\sigma(1)}T_{\sigma(2)}T_{\sigma(3)} \equiv s(T_1T_2T_3)\quad, \label{threeanticom} \end{equation} it is trivial to check that $\{T_1,T_2,T_3\}g=-g\{T_1,T_2,T_3\}^t$ so that $\{T_1,T_2,T_3\}\in{\cal G}\ ({\cal G}=so(2l+1)\,,\,sp(2l)\,{\rm or}\,so(2l))$. Notice that such a relation cannot be satisfied by the ordinary anticommutator and that, in general, preserving the minus sign in the $r.h.s.$ requires {\it odd}-order anticommutators. Note also the absence of the identity matrix from the $r.h.s.$ of the odd-order anticommutator, in contrast with the $A_l$ case. Let then $\{T_{i_1},T_{i_2},T_{i_3}\}={k_{i_1i_2i_3}}^\sigma_\cdot T_\sigma$. Extending this result to the arbitrary odd case, we find \medskip \noindent {\bf Lemma 4.1} The symmetrised product of an odd number of $n\times n$ matrix generators of $so(2l+1)\,,\,sp(2l)$ or $so(2l)$ is also an element of these algebras which is determined by the associated invariant symmetric polynomial. 
\medskip \noindent {\it Proof}: \begin{equation} \eqalign{ s(T_{i_1}T_{i_2}T_{i_3}T_{i_4}\ldots T_{i_{2p-1}})= & {1\over 6^{p-1}}s(\{\ldots\{\{T_{i_1},T_{i_2},T_{i_3}\}T_{i_4},T_{i_5}\}, \ldots,T_{i_{2p-2}},T_{i_{2p-1}}\}) \cr = &{1\over 6^{p-1}} s({k_{i_1i_2i_3}}^{\alpha_1}_\cdot{k_{\alpha_1 i_4i_5}}^{\alpha_2}_\cdot\ldots {k_{\alpha_{p-2} i_{2p-2}i_{2p-1}}}^\beta_\cdot T_\beta)\quad. \cr} \label{orthogonal} \end{equation} Since $s$ symmetrises all $i_1,i_2,\ldots,i_{2p-1}$ indices we may write this as \begin{equation} \{T_{i_1},\ldots,T_{i_{2p-1}}\}={k_{i_1\ldots i_{2p-1}}}^\sigma_\cdot T_\sigma \quad, \label{invpolynomial} \end{equation} and identify $k$ with the invariant symmetric polynomial of even order $2p$ (see Table) since Tr($\{T_{i_1},\ldots\allowbreak,T_{i_{2p-1}}\}T_\sigma)$ is equal to \begin{equation} {\rm sTr}(T_{i_1}\ldots T_{i_{2p-1}}T_\sigma)= k_{i_1\ldots i_{2p-1}\sigma} \quad, \label{invpol} \end{equation} {\it q.e.d.} This now leads to the following \medskip \noindent {\bf Theorem 4.1} Let ${\cal G}$ be a simple orthogonal or symplectic algebra. Let $k_\ind i{2p}$ be as in \eq{invpol} for $2\le p\le l$ ($B_l,C_l$) and $2\le p\le l-1$ ($D_l$). Then, the even $(4p-2)$ bracket defined as in \eq{evenmultibracket} defines a higher-order orthogonal or symplectic algebra, the structure constants of which are given by the Lie algebra $(4p-1)$-cocycles associated with the symmetric invariant polynomials on ${\cal G}$. \medskip \noindent {\it Proof}: It suffices to use Lemma 4.1 and to insert \eq{invpolynomial} in expression \eq{prooftheoremi}. As a result, the $(4p-1)$-cocycle is again given by \eq{defcocycle}, where $k_{i_1\ldots i_{2p-1}\sigma}$ is now given by \eq{invpol}, {\it q.e.d.} \medskip Let us consider now the order $l$ invariant for $so(2l)$. The reasoning before Lemma 4.1 shows that, for $l$ odd, the order $l$ invariant polynomial cannot be obtained from the symmetric trace of $(l-1)$ $2l\times 2l$ $T$'s, since the symmetrised bracket of an even number of $T$'s cannot be expressed as a linear combination of the $2l\times 2l$ matrix generators of $so(2l)$. It is well known, however, that for $so(2l)$ there is an order $l$ (even or odd) invariant polynomial (which gives the Euler class of a real oriented vector bundle with even-dimensional fibre) which comes from the Pfaffian, since $Pf(ATA^{t})=Pf(T)$ for $A\in SO(2l)$. Using pairs of indices to relabel the generators $T_i$, $i=1,\ldots,({2l \atop 2})$, as $T_{\mu \nu}=-T_{\nu\mu}$, $\mu,\nu=1,\ldots,2l$, the order $l$ invariant (corresponding to the last one in the Table for $D_l$) is given by \begin{equation} Pf(T)={(-1)^l\over 2^l l!} \epsilon^{\mu_1\nu_1\ldots \mu_l\nu_l}_{1\;\ldots\;2l} T_{\mu_1\nu_1} T_{\mu_2\nu_2}\ldots T_{\mu_l\nu_l}\quad. \label{pfaff} \end{equation} The antisymmetric tensor $\epsilon_{\mu_1\nu_1\mu_2\nu_2\ldots \mu_l\nu_l}$ defining the invariant is symmetric under the exchange of {\it pairs} of indices $(\mu_i\nu_i)$, $i=1,\ldots,l$. Although it cannot be obtained as the symmetric trace of a product of $2l\times 2l$ generators, it may be obtained again in the standard way if we use an appropriate spinorial representation for $so(2l)$. This means that the previous arguments may also be carried through for the $l$-th order invariant of the $D_l$ algebra. To see this explicitly, consider the $2^l$-dimensional Clifford algebra $\{\Gamma_\mu,\Gamma_\nu\}=2\delta_{\mu\nu}\ (\mu,\nu=1,\ldots,2l)$.
The $({2l\atop 2})$ $Spin(2l)$ generators are given by $\Sigma_{\mu \nu}={i\over 2}[\Gamma_\mu,\Gamma_\nu]$, and the $\Gamma_{2l+1}$ matrix by $\Gamma_{2l+1}={i^l\over (2l)!}\epsilon_{\ind \mu{2l}}\Gamma_{\mu_1}\ldots \Gamma_{\mu_{2l}}$; ${\Gamma_\mu}^{\dag}=\Gamma_\mu\,,\,{\Gamma_{2l+1}}^{\dag}=\Gamma_{2l+1}$. Thus, with all indices different, $\mu_1\ne\nu_1\ne\ldots\ne\mu_{l-1}\ne\nu_{l-1}\ne\alpha\ne\beta$, we may write \begin{equation} \Gamma_{\mu_1}\Gamma_{\nu_1}\ldots\Gamma_{\mu_{l-1}}\Gamma_{\nu_{l-1}} \propto \epsilon_{\mu_1\nu_1\ldots\mu_{l-1}\nu_{l-1}\alpha\beta}\Gamma_{2l+1} \Gamma_\alpha\Gamma_\beta \quad. \label{pfaffi} \end{equation} Antisymmetrising in the $(l-1)$ pairs of gammas, this leads to \begin{equation} \Sigma_{\mu_1\nu_1}\Sigma_{\mu_2\nu_2}\ldots\Sigma_{\mu_{l-1}\nu_{l-1}} \propto \epsilon_{\mu_1\nu_1\ldots\mu_{l-1}\nu_{l-1}\alpha\beta}\Gamma_{2l+1} \Sigma_{\alpha\beta} \quad, \label{pfaffii} \end{equation} an expression which is symmetric in the $(\mu\nu)$ pairs, which are all different. To check that the definition \eq{defmulti} for the $(2l-2)$ bracket is indeed $so(2l)$-valued, we notice that the $so(2l)$ commutators $[\Sigma_{\mu\nu},\Sigma_{\rho\sigma}]\equiv iC^{\lambda\kappa}_{(\mu\nu)(\rho\sigma)}\Sigma_{\lambda\kappa}$ are non-zero only if the pairs $(\mu\nu)\,,\,(\rho\sigma)$ have one (and only one) index in common. Thus, the only non-zero $(2l-2)$-brackets have the form $[\Sigma_{i_1k},\Sigma_{i_2k},\ldots,\Sigma_{i_{2l-2}k}]$ where all indices $i$ are different. Since the ordinary product of such $\Sigma$'s sharing an index is already antisymmetric, we find that (cf. \eq{prooftheoremi}) \begin{equation} \eqalign{ [\Sigma_{i_1k},\Sigma_{i_2k},\ldots,\Sigma_{i_{2l-2}k}]& \propto C^{j_1j_2}_{(i_1k)(i_2k)}\ldots C^{j_{2l-3}j_{2l-2}}_{(i_{2l-3}k)(i_{2l-2}k)} \{\Sigma_{j_1j_2},\ldots,\Sigma_{j_{2l-3}j_{2l-2}}\} \cr & \propto {\omega_{i_1k,\ldots,i_{2l-2}k}}^{\alpha\beta}_{\ \cdot} \Gamma_{2l+1}\Sigma_{\alpha\beta} \quad,\cr} \label{reducible} \end{equation} and we may now use the chiral projectors ${1\over 2}(1\pm\Gamma_{2l+1})$ to extract from the reducible $2^l\times 2^l$ representation $\Sigma_{\mu\nu}$ its two irreducible $2^{l-1}\times 2^{l-1}$ components. \section{Higher-order simple Lie algebras and their complete BRST operator} The case of the exceptional algebras requires more care, and we shall not discuss their realisation here. We may nevertheless state the following \medskip \noindent {\bf Theorem 5.1}\quad ({\it Classification theorem for higher-order simple algebras}) Given a simple Lie algebra ${\cal G}$ of rank $l$, there are $(l-1)$ $(2m_i-2)$-higher-order simple algebras associated with ${\cal G}$. They are given by the $(l-1)$ Lie algebra cocycles of order $(2m_i-1)>3$ which may be obtained from the $(l-1)$ symmetric invariant polynomials on ${\cal G}$ of order $m_i>m_1=2$. The $m_1=2$ case (Killing metric) reproduces the original simple Lie algebra ${\cal G}$; for the other $(l-1)$ cases, the skew-symmetric $(2m_i-2)$-commutators define an element of ${\cal G}$ by means of the $(2m_i-1)$-cocycles. These higher-order structure constants (like the ordinary structure constants when all their indices are written down) are fully antisymmetric and satisfy, by virtue of being Lie algebra cocycles, the generalised Jacobi condition \eq{jaccond}.
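\medskip \noindent As a quick numerical illustration of the theorem (our own addition, with an arbitrary choice of basis and of indices), one may check for ${\cal G}=su(3)$ that the antisymmetrised four-bracket of sec. 3 is indeed $su(3)$-valued, with no component along the identity:
\begin{verbatim}
# The antisymmetrised four-bracket of Gell-Mann matrices is Hermitian,
# traceless (no identity component) and reproduced by its components in
# the trace-orthogonal basis, i.e. it is su(3)-valued.  Illustration only.
import itertools
import numpy as np

L = [np.array(m, dtype=complex) for m in [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
]]
L.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3))

def perm_sign(p):
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def multibracket(mats):
    """Fully antisymmetrised bracket: sum over permutations with signs."""
    out = np.zeros_like(mats[0])
    for p in itertools.permutations(range(len(mats))):
        out = out + perm_sign(p) * np.linalg.multi_dot([mats[i] for i in p])
    return out

M = multibracket([L[0], L[1], L[3], L[6]])
coeffs = [np.trace(M @ A) / 2 for A in L]        # tr(L_a L_b) = 2 delta_ab
print(np.allclose(M, M.conj().T), abs(np.trace(M)) < 1e-12)
print(np.allclose(M, sum(c * A for c, A in zip(coeffs, L))))   # all True
\end{verbatim}
\medskip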
\medskip \noindent {\it Remark.}\quad It may be checked explicitly that the coordinate definition of the cocycles ${\omega_{\ind i{2m-2}}}^\sigma_\cdot$ and the invariance condition \eq{invariance}\ for their associated invariant polynomials entail the GJI. Indeed, the $l.h.s.$ of \eq{jaccond} (for $s=2m-2$) is, using \eq{defcocycle}, equal to \begin{equation} \eqalign{& \epsilon^{j_1\ldots j_{4m-5}}_{i_1\ldots i_{4m-5}} {\omega_{j_{1}\ldots j_{2m-2}\cdot}}^\rho \epsilon^{l_1\ldots l_{2m-3}}_{j_{2m-1}\ldots j_{4m-5}} C^{p_1}_{\rho l_1}\ldots C^{p_{m-1}}_{l_{2m-4}l_{2m-3}} {k_{p_1\ldots p_{m-1}}}_\cdot^\sigma \cr =& (2m-3)! \epsilon^{j_1\ldots j_{2m-2} l_1 \ldots l_{2m-3}}_{i_1\; \ldots \;i_{4m-5}} {\omega_{j_{1}\ldots j_{2m-2}\cdot}}^\rho C^{p_1}_{\rho l_1}\ldots C^{p_{m-1}}_{l_{2m-4}l_{2m-3}} {k_{p_1\ldots p_{m-1}}}_\cdot^\sigma =0\quad, \cr} \label{nuevath} \end{equation} which is zero since, if ${\omega_{\ind j{2m-2}}}^\rho_\cdot$ is a $(2m-1)$-cocycle [\eq{defcocycle}], \begin{equation} \epsilon^{j_1\ldots j_{2m-1}}_{i_1\ldots i_{2m-1}} C^{\nu}_{j_1 \rho}{\omega_{j_2\ldots j_{2m-1}}}^\rho_\cdot=0\quad, \label{VIxii} \end{equation} which in turn follows from Lemma 3.1. There is a simple way of expressing the above results by making use of the Chevalley-Eilenberg \cite{CE} formulation of the Lie algebra cohomology. For the standard case, we may introduce the BRST operator \begin{equation} s=-{1\over 2}c^ic^j{C_{ij}}^k_\cdot{\partial\over\partial c^k}\quad, \quad s^2=0\quad, \label{BRST} \end{equation} with $c^ic^j=-c^jc^i$ (in a graded algebra case, the $c$'s would have a grading opposite to that of the associated generators). Then, $sc^k=-{1\over 2}{C_{ij}}^k_\cdot c^ic^j$ (the Maurer-Cartan equations), and the nilpotency of $s$ is equivalent to the JC \eq{jacid}. In the present case, we may describe all the previous results by introducing the following generalisation: \medskip \noindent {\bf Theorem 5.2}\quad ({\it Complete BRST operator for a simple Lie algebra}) Let ${\cal G}$ be a simple Lie algebra. Then, there exists a nilpotent associated operator given by the odd vector field \begin{equation} \eqalign{ s= & -{1\over 2}c^{j_1}c^{j_2}{\omega_{j_1j_2}}^\sigma_\cdot {\partial\over\partial c^\sigma} -\ldots- {1\over (2m_i-2)!}c^{j_1}\ldots c^{j_{2m_i-2}} {\omega_{j_1\ldots j_{2m_i-2}}}^\sigma_\cdot {\partial\over\partial c^\sigma} -\ldots \cr & - {1\over (2m_l-2)!}c^{j_1}\ldots c^{j_{2m_l-2}} {\omega_{j_1\ldots j_{2m_l-2}}}^\sigma_\cdot {\partial\over\partial c^\sigma} \equiv s_2+\ldots+s_{2m_i-2}+\ldots +s_{2m_l-2} \;, \cr} \label{HOBRST} \end{equation} where $i=1,\ldots,l\,,\, {\omega_{j_1j_2}}^\sigma_\cdot\equiv{C_{j_1j_2}}^\sigma_\cdot$ and ${\omega_{j_1\ldots j_{2m_i-2}}}^\sigma_\cdot$ are the corresponding $l$ ($c$-number) higher-order cocycles. The operator $s$ will be called the complete BRST operator associated with ${\cal G}$. \medskip \noindent {\it Proof}: The nilpotency of $s$ encompasses, in fact, the JC and the $(l-1)$ GJC's which have to be satisfied, respectively, by the $\omega$'s which determine the standard BRST operator \eq{BRST} and the $(l-1)$ higher-order BRST operators; all the cohomological information on ${\cal G}$ is contained in the complete BRST operator. The GJC's come from the squares of the individual terms $s_{p}^2$, the crossed products $s_{p} s_{q}$ not contributing since the terms $s_{2m_i-2}$ are given by Lie algebra $(2m_i-1)$-cocycles. To see this, we first notice that there are no $\omega$'s with an even number of indices ($s$ is an odd operator).
Consider now a mixed product $s_p s_q$ ($p$ and $q$ even). This is given by \begin{equation} \eqalign{ s_ps_q &\propto {\omega_{i_1\ldots i_{p}}}^\rho_\cdot c^{i_1}\ldots c^{i_{p}} {\omega_{j_1\ldots j_{q}}}^\sigma_\cdot [\sum_{l=1}^q (-1)^{l+1}\delta_\rho^{j_l}c^{j_1}\ldots \hat c^{j_l}\ldots c^{j_{q}}]{\partial\over\partial c^\sigma} \cr &= q{\omega_{i_1\ldots i_{p}}}^\rho_\cdot {\omega_{j_1\ldots j_{q}}}^\sigma_\cdot \delta_\rho^{j_1} c^{i_1}\ldots c^{i_{p}} c^{j_2}\ldots c^{j_q}{\partial\over\partial c^\sigma} \cr &= q{\omega_{i_1\ldots i_{p}}}^\rho_\cdot {\omega_{\rho j_2\ldots j_{q}}}^\sigma_\cdot c^{i_1}\ldots c^{i_{p}} c^{j_2}\ldots c^{j_q}{\partial\over\partial c^\sigma}\quad, \cr} \label{mixed} \end{equation} where the term in ${\partial\over\partial c^\rho}{\partial\over\partial c^\sigma}$ has been omitted since ($p$ and $q$ being even) it cancels with the one coming from $s_q s_p$. Recalling now expression \eq{defcocycle}, it is found that \eq{mixed} is zero because of \eq{VIxii}, which in the present language reads $s_{p}s_2=0$. Thus, $s^2 =s^2_2+\ldots+ s^2_{2m_i-2}+\ldots+s^2_{2m_l-2}=0$, each of the $l$ terms being zero separately as a result of the GJC \eq{jaccond}, {\it q.e.d.} \section{Concluding remarks} Many questions now arise that require further study. From a physical point of view, it would be interesting to find applications of these higher-order Lie algebras, in order to know whether the cohomological restrictions which determine and condition their existence have a physical significance. Lie algebra cohomology arguments have already been very useful in various physical problems, as {\it e.g.} in the description of anomalies \cite{TJZW} or in the construction of the Wess-Zumino terms required in the action of extended supersymmetric objects \cite{AZTO}. In the form \eq{HOBRST}, the above formulation of the higher algebras bears a resemblance to the closed string BRST cohomology and the SH algebras \cite{LAST,LAMA} relevant in the theory of graded string field products \cite{WZ,ZIE} (see also \cite{LZ}). Note, however, that because of the cocycle form of the $\omega$'s, the GJI's are not modified, as already mentioned in the introduction. In the SH algebras such a modification is the result of having, for instance, terms lower than quadratic in \eq{HOBRST} (with the appropriate change in ghost grading). Due to their underlying BRST symmetry, similar structures appear in the determination of the different gauge structure tensors through the antibrackets and the master equation in the Batalin-Vilkovisky formalism (for a review, see \cite{MH,GPS}), where violations of the JI are also present (the Batalin-Vilkovisky antibracket is a two-bracket, but higher-order ones may also be considered \cite{BDA}). Other questions may be posed from a purely mathematical point of view. As the discussion in sec. 4 shows, a representation of a simple Lie algebra may fail to be a representation of the associated higher-order Lie algebras. Thus, the representation theory of higher-order algebras requires a separate analysis. Other problems may be more interesting from a structural point of view, such as, for instance, the contraction theory of higher-order Lie algebras (which will take us outside the domain of the simple ones), as well as the study of the non-simple higher-order algebras themselves and their cohomology. These, and the generalisation of these ideas to superalgebras (for which there exist simple finite-dimensional ones with zero Killing form), are problems for further research.
\section*{Acknowledgements} The authors wish to thank J. Stasheff for helpful correspondence and his comments on the manuscript, and T. Lada for a copy of \cite{LAMA}. This research has been partially sponsored by the Spanish CICYT and DGICYT (AEN 96-1669, PR 95-439). Both authors also wish to acknowledge the kind hospitality extended to them at DAMTP. The support of St. John's College (J.A.) and of an FPI grant from the Spanish Ministry of Education and Science and the CSIC (J.C.P.B.) is also gratefully acknowledged.
\section{Introduction} In this paper we obtain error bounds for a recent form of asymptotic expansions for linear differential equations having a simple turning point. The differential equations we study are of the form \begin{equation} d^{2}w/dz^{2}=\left\{u^{2}f(z) +g(z)\right\} w, \label{eq1} \end{equation} where $u$ is a large parameter, real or complex, and $z$ lies in a complex domain which may be unbounded. Many special functions satisfy equations of this form. The functions $f(z)$ and $g(z)$ are meromorphic in a certain domain $Z$ (precisely defined below), and are independent of $u$ (although the latter restriction can often be relaxed without undue difficulty). We further assume that $f(z)$ has no zeros in $Z$ except for a simple zero at $z=z_{0}$, which is the turning point of the equation. From standard Liouville transformations we have two new variables, namely $\xi$ (for Liouville-Green expansions involving elementary functions) and $\zeta$ (for turning point expansions involving Airy functions). These are given by \begin{equation} \xi=\tfrac{2}{3}\zeta ^{3/2}= \int_{z_{0}}^{z} f^{1/2}(t) dt. \label{eq2} \end{equation} We choose the branch here so that $\xi$ is positive when $\zeta$ approaches $0$ through positive values, and by continuity elsewhere. Note $\zeta$ is an analytic function of $z$ at $z=z_{0}$ ($\zeta=0$) whereas $\xi$ has a branch point at the turning point. With $\zeta$ defined as above and \begin{equation} W(u,\zeta) =\zeta ^{-1/4}f^{1/4}(z) w(u,z), \label{eq2a} \end{equation} the differential equation (\ref{eq1}) is transformed to \begin{equation} d^{2}W/d\zeta ^{2}=\left\{ u^{2}\zeta +\psi(\zeta) \right\} W, \label{eq3} \end{equation} where \begin{equation} \psi (\zeta) =\tfrac{5}{16}\zeta ^{-2}+\zeta \Phi(z), \label{eq4} \end{equation} in which \begin{equation} \Phi(z) =\frac{4f(z) {f}^{\prime \prime }(z) -5{f}^{\prime 2}(z) }{16f^{3}(z) }+\frac{g(z) }{f(z) }. \label{eq5} \end{equation} The turning point $z=z_{0}$ of (\ref{eq1}) is mapped to the turning point $\zeta =0$ of (\ref{eq3}). If near $z=z_{0}$ the functions $f(z)$ and $g(z)$ have the following Taylor expansions \begin{equation} f(z) =\sum\limits_{n=1}^{\infty }{f}_{n}\left( z-z_{0}\right) ^{n},\ g(z) =\sum\limits_{n=0}^{\infty }{g}_{n}\left( z-z_{0}\right) ^{n} \label{fgtaylor}, \end{equation} where $f_{1}\neq 0$, then from (\ref{eq2}), (\ref{eq4}) and (\ref{eq5}) we find that \begin{equation} \lim_{\zeta \rightarrow 0}\psi (\zeta) =\left( f_{1}\right) ^{-8/3}\left\{ \tfrac{3}{7}f_{1}f_{3}-\tfrac{9}{35}\left( f_{2}\right) ^{2}+\left( f_{1}\right) ^{2}g_{0}\right\}. \label{psi(0)} \end{equation} We define $\psi (0) $ to take this value, hence rendering $\psi(\zeta)$ analytic at the turning point (which otherwise would be a removable singularity). Following \cite[Chap. 11, Sect. 8.1]{Olver:1997:ASF} we define three sectors \begin{equation} \mathrm{\mathbf{S}}_{j}=\left\{ \zeta :\left\vert {\arg \left(\zeta e^{-2\pi ij/3}\right) }\right\vert \leq {\tfrac{1}{3}}\pi \right\} \ \left( {j=0,\pm 1}\right). \label{eq6a} \end{equation} We also define $\mathrm{\mathbf{T}}_{j}$ to be $\mathrm{\mathbf{S}}_{j}$ rotated negatively by the angle $\tfrac{2}{3}\arg(u)$, so that \begin{equation} \mathrm{\mathbf{T}}_{j}=\left\{ \zeta :\left\vert {\arg \left( {u^{2/3}\zeta e^{-2\pi ij/3}}\right) }\right\vert \leq {\tfrac{1}{3}}\pi \right\} \ \left( {j=0,\pm 1}\right). \label{eq6} \end{equation} Neglecting $\psi (\zeta)$ in (\ref{eq3}) we obtain the so-called comparison equation $d^{2}W/d\zeta ^{2}={u^{2}\zeta }W$.
This has numerically satisfactory solutions in terms of the Airy function, namely $\mathrm{Ai}_{j}\left( {u^{2/3}\zeta }\right) :=\mathrm{Ai}\left( {u^{2/3}\zeta e}^{-2\pi ij/3}\right)$ ($j=0,\pm 1$). For large $|u|$ these are characterized as being recessive for $\zeta \in \mathrm{\mathbf{T}}_{j}$ and dominant elsewhere. In \cite{Olver:1964:EBF} and \cite[Chap. 11, Theorem 9.1]{Olver:1997:ASF} Olver obtained three asymptotic solutions to (\ref{eq1}) in the complex plane, of the form \begin{multline} w_{2n+1,j}(u,z) =\left\{ \dfrac{\zeta }{f(z) }\right\} ^{1/4}\left\{ \mathrm{Ai}_{j}\left( {u^{2/3}\zeta }\right) \sum\limits_{s=0}^{n}\dfrac{{A_{s}(\zeta) }}{u^{2s}}\right. \\ \left. +\dfrac{\mathrm{Ai}_{j}^{\prime }\left( {u^{2/3}\zeta }\right) }{u^{4/3} }\sum\limits_{s=0}^{n-1}\dfrac{{B_{s}(\zeta) }}{u^{2s}} +\varepsilon _{2n+1,j}(u,\zeta) \right\}, \label{Olvertp} \end{multline} and explicit bounds on the error terms $\varepsilon _{2n+1,j}(u,\zeta) $ were given. However these bounds are quite complicated since they involve the coefficients $A_{s}(\zeta)$ and $B_{s}(\zeta)$ which themselves are hard to compute (due to iterated integration). An added complication is that the bounds involve so-called auxiliary functions for Airy functions (see \cite[Chap. 11, Sect. 8.3]{Olver:1997:ASF}). In \cite{Dunster:2017:COA} new asymptotic expansions were derived for solutions of (\ref{eq1}) that involved coefficients which are much simpler to evaluate. In this paper we obtain error bounds for these expansions, and these too are much easier to compute than Olver's. Our new bounds only involve explicitly defined coefficients, along with elementary functions, and in particular do not require complicated auxiliary functions or nested integration. Let us present the main results from \cite{Dunster:2017:COA}. In the following the use of a circumflex (\textasciicircum) is in accord with the notation of this paper, and is used to distinguish certain functions and paths that are defined in terms of $z$ rather than $\xi$. Firstly we define the set of coefficients \begin{equation} \hat{F}_{1}(z) ={\tfrac{1}{2}}\Phi (z) ,\ \hat{F}_{2}(z) =-{\tfrac{1}{4}}f^{-1/2}(z) {\Phi }^{\prime }(z), \label{eq8} \end{equation} and \begin{equation} \hat{F}_{s+1}(z) =-\tfrac{1}{2}f^{-1/2}(z) \hat{{F} }_{s}^{\prime }(z) -\tfrac{1}{2}\sum\limits_{j=1}^{s-1}{\hat{F} _{j}(z) \hat{F}_{s-j}(z) }\ \left( {s=2,3,4\cdots } \right). \label{eq9} \end{equation} The odd coefficients appearing in the asymptotic expansions are then given by \begin{equation} \hat{E}_{2s+1}(z) =\int {\hat{F}_{2s+1}(z) f^{1/2}(z) dz}\ \left( {s=0,1,2,\cdots }\right), \label{eq7} \end{equation} where the integration constants must be chosen so that each $\left( z-z_{0}\right) ^{1/2}\hat{E}_{2s+1}(z) $ is meromorphic (non-logarithmic) at the turning point. As shown in \cite{Dunster:2017:COA}, the even ones can be determined without any integration, via the formal expansion \begin{equation} \sum\limits_{s=1}^{\infty }\dfrac{\hat{E}_{2s}(z) }{u^{2s}} \sim -\frac{1}{2}\ln \left\{ 1+\sum\limits_{s=0}^{\infty }\dfrac{\hat{F}_{2s+1}(z) }{u^{2s+2}}\right\} +\sum\limits_{s=1}^{\infty }\dfrac{{\alpha }_{2s}}{u^{2s}}, \label{even} \end{equation} where each ${\alpha }_{2s}$ is arbitrarily chosen. These too are meromorphic at the turning point. 
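As a concrete illustration of (\ref{eq8}), (\ref{eq9}) and the quadrature (\ref{eq7}), the following short symbolic computation (a sketch of ours; the test data $f(z)=z$, $g(z)=0$, $z_{0}=0$ and the zero integration constant are chosen purely for illustration, the latter being consistent with the meromorphicity requirement) generates the first few coefficients:
\begin{verbatim}
# Sketch: coefficients F_1, F_2, ... from (eq8)-(eq9), and the odd
# coefficient E_1 via the quadrature (eq7), for the sample data
# f(z) = z, g(z) = 0 (turning point z0 = 0; integration constant 0).
import sympy as sp

z = sp.symbols('z', positive=True)
f, g = z, sp.Integer(0)

Phi = (4*f*sp.diff(f, z, 2) - 5*sp.diff(f, z)**2)/(16*f**3) + g/f
F = [Phi/2]                              # F[0] holds F_1, cf. (eq8)
for s in range(1, 6):                    # F_{s+1} from the recursion (eq9)
    nxt = -sp.diff(F[s-1], z)/(2*sp.sqrt(f)) \
          - sum(F[j-1]*F[s-j-1] for j in range(1, s))/2
    F.append(sp.simplify(nxt))

E1 = sp.integrate(F[0]*sp.sqrt(f), z)    # hat{E}_1 of (eq7)
print(F[0], F[1], sp.simplify(E1))
# -> -5/(32*z**3), -15/(64*z**(9/2)), 5/(48*z**(3/2)).  Here E_1 equals
# a_1/xi with xi = (2/3) z^(3/2) and a_1 = 5/72, so the modified
# coefficient defined in (eq40) below vanishes identically, as it must
# for the pure Airy case.
\end{verbatim}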
We remark that the coefficients $\hat{F}_{s}(z) $ can be obtained explicitly, along with the even terms $\hat{E}_{2s}(z) $, with each of the odd terms $\hat{E}_{2s+1}(z) $ requiring just one integration of an explicitly determined function, performed either in closed form with the aid of symbolic software, or by quadrature. We next define two sequences $\left\{ a_{s}\right\} _{s=1}^{\infty }$ and $\left\{ \tilde{a}_{s}\right\} _{s=1}^{\infty }$ by $a_{1}=a_{2}={\frac{5}{72}}$, $\tilde{a}_{1}=\tilde{a}_{2}=-{\frac{7}{72}}$, with subsequent terms $a_{s}$ and $\tilde{a}_{s}$ ($s=2,3,\cdots $) satisfying the same recursion formula \begin{equation} b_{s+1}=\tfrac{1}{2}\left( {s+1}\right) b_{s}+\tfrac{1}{2} \sum\limits_{j=1}^{s-1}{b_{j}b_{s-j}}. \label{arec} \end{equation} Then let \begin{equation} \mathcal{E}_{s}(z) =\hat{E}_{s}(z) + (-1)^{s}a_{s}s^{-1}\xi ^{-s}, \label{eq40} \end{equation} and \begin{equation} \tilde{\mathcal{E}}_{s}(z) =\hat{E}_{s}(z) +(-1) ^{s}\tilde{a}_{s}s^{-1}\xi^{-s}, \label{eq38} \end{equation} where $\xi$ is given by (\ref{eq2}). In \cite{Dunster:2017:COA} it was then shown that there exist solutions of the form \begin{equation} w_{j}(u,z) =f^{-1/4}(z) \zeta ^{1/4}\left\{ \mathrm{Ai}_{j}\left(u^{2/3}\zeta \right) A(u,z) +\mathrm{Ai}_{j}^{\prime }\left( u^{2/3}\zeta \right) B(u,z) \right\}, \label{wjs} \end{equation} where \begin{equation} A(u,z) \sim {\exp \left\{ \sum\limits_{s=1}^{\infty }\dfrac{ \tilde{\mathcal{E}}_{2s}(z) }{u^{2s}}\right\} \cosh \left\{ \sum\limits_{s=0}^{\infty }\dfrac{\tilde{\mathcal{E}}_{2s+1}(z) }{u^{2s+1}}\right\} }, \label{Aexp} \end{equation} and \begin{equation} B(u,z) \sim \frac{1}{u^{1/3}\zeta ^{1/2}}\exp \left\{ \sum\limits_{s=1}^{\infty }{\dfrac{\mathcal{E}_{2s}(z) }{u^{2s}}}\right\} \sinh \left\{ \sum\limits_{s=0}^{\infty }\dfrac{\mathcal{E} _{2s+1}(z) }{u^{2s+1}}\right\} , \label{Bexp} \end{equation} in certain complex domains, which we describe in detail in \cref{sec2}. In this paper we truncate the expansions appearing in (\ref{Aexp}) and (\ref{Bexp}) after a finite number of terms, and obtain bounds for the resulting error terms. So rather than the one error term of (\ref{Olvertp}) and its associated complicated bound, we derive separate error bounds for both the $A(u,z)$ and $B(u,z)$ approximations, and this obviates the need for Airy auxiliary functions, since the coefficient functions $A(u,z)$ and $B(u,z)$ are slowly varying throughout the asymptotic region of validity. We remark that error bounds without auxiliary functions were obtained by Boyd in \cite{Boyd:1987:AEF}, but like Olver's expansions (\ref{Olvertp}) his bounds involve the complicated coefficients $A_{s}(\zeta)$ and $B_{s}(\zeta)$, and require successive approximations. They are consequently more complicated and not easy to compute beyond one term in an expansion. In \cite{Dunster:2001:CEF} convergent expansions were derived for the $A(u,z) $ and $B(u,z) $ coefficient functions, but again these are difficult to compute because they also involve coefficients that are hard to evaluate due to iterated integration. In \cite{Dunster:2014:OEB} asymptotic solutions of (\ref{eq3}) were derived which involved just the Airy function alone (and not its derivative), and where an asymptotic expansion appeared in the argument of this approximant. Error bounds were given, but as in \cite{Boyd:1987:AEF} and \cite[Chap. 11, Theorem 9.1]{Olver:1997:ASF} these are hard to compute. The importance of explicit error bounds for asymptotic approximations was demonstrated in an expository paper by Olver in \cite{Olver:1980:AAA}.
Olver noted how explicit error bounds can provide useful analytical insight into the nature and reliability of the approximations, enable somewhat unsatisfactory concepts such as multiple asymptotic expansions and generalized asymptotic expansions to be avoided, and lead to significant extensions of asymptotic results. On the other hand, from a computational point of view, turning point uniform asymptotic expansions are important tools which have been considered for the efficient computation of a good number of special functions. Examples are the algorithms for Bessel functions of real argument and complex variable of \cite{Amos:1986:A6A} (based on expansions from \cite{Olver:1997:ASF}), the methods for modified Bessel functions of imaginary order of \cite{Gil:2004:CSO} (with expansions from \cite{Dunster:1990:BFO}) and the algorithm for parabolic cylinder functions of \cite{Gil:2006:CTR} (see also \cite{Temme:2000:NAA}). In the algorithms \cite{Amos:1986:A6A,Gil:2004:CSO,Gil:2006:CTR}, no error bounds are used for establishing the accuracy of the uniform expansions; instead this is certified by checking consistency with other methods of evaluation. The reason for this lies in the difficulty of computing error bounds for these expansions. The use of error bounds for asymptotic expansions in numerical algorithms is in fact very rare, and we only find examples for expansions of Poincar\'e type (see for instance \cite{Fabijonas:2004:COC}). In this paper, we develop computable error bounds for turning point expansions, thus opening the possibility of using strict error bounds for numerical computations with turning point asymptotics. A related effort in this direction is that of \cite{Wei:2010:EBF}. The paper is organized as follows. In \Cref{sec2} we use the new results given in \cite{Dunster:2020:LGE} which provide explicit and simple error bounds for Liouville-Green (LG) expansions of exponential form. These rarely-used expansions were employed in \cite{Dunster:2017:COA} to obtain (\ref{Aexp}) and (\ref{Bexp}). We apply Dunster's new results to obtain three fundamental LG asymptotic solutions of (\ref{eq3}) complete with error bounds (which are easy to compute). Also in this section we derive an important connection relation between the three solutions. In addition, we obtain similar expansions, with error bounds, for the Airy functions of complex argument that appear in (\ref{wjs}). Both these new connection relations and Airy expansions are used in the subsequent sections, but it is worth remarking that they are interesting and useful in their own right. The results of \Cref{sec2} are then applied in \Cref{sec3} to obtain the desired error bounds for the expansions (\ref{Aexp}) and (\ref{Bexp}) for $z$ not too close to the turning point. These in turn are used in \Cref{sec4} to obtain error bounds for $z$ lying in a bounded domain which includes the turning point. As in \cite{Dunster:2017:COA}, the method is to express the asymptotic solutions as a Cauchy integral around a simple positively orientated loop surrounding the turning point, and to bound the error along the loop. In \Cref{sec5} we illustrate the new results of \Cref{sec3} with an application to Bessel functions of large order. We show how the new simplified expansions and accompanying error bounds can be constructed, how these can then be matched to the exact solutions, and include some numerical examples of the performance of the bounds.
\section{Liouville-Green expansions and connection coefficients} \label{sec2} Here we present Liouville-Green expansions of exponential form for three numerically satisfactory solutions of (\ref{eq3}), complete with error bounds. To do so we shall employ the new results given in \cite{Dunster:2020:LGE}. We then use these expansions to obtain a connection relation between the three solutions, which will be used in our subsequent error analysis for the expansions (\ref{wjs}) - (\ref{Bexp}). We begin by defining certain domains. Firstly, we partition each of the sectors in (\ref{eq6}) by $\mathrm{\mathbf{T}}_{j}=\mathrm{\mathbf{T}} _{j,k}\cup \mathrm{\mathbf{T}}_{j,l}$ ($j,k,l\in \left\{0,1,-1\right\}$, $j\neq k\neq l\neq j$), where $\mathrm{\mathbf{T}}_{j,k}$ is the closed subsector of angle $\pi /3$ and adjacent to $\mathrm{\mathbf{T}}_{k}$; for example $\mathrm{\mathbf{T}}_{0,1}=\left\{ \zeta {:0\leq \arg \left( {u^{2/3} }\zeta \right) \leq {\tfrac{1}{3}}\pi }\right\} $. We denote by $T_{j}$ (respectively $T_{j,k}$) the region in the $z$ plane corresponding to the sector $\mathrm{\mathbf{T}}_{j}$ (respectively $\mathrm{\mathbf{T}}_{j,k}$) in the $\zeta $ plane. See \Cref{fig:fig1} for some typical regions in the right half plane for the case $z_{0}$ and $u$ positive. \begin{figure}[htbp] \centering \includegraphics{Fig1.eps} \caption{Regions $T_{j,k}$ in $z$ plane for $u$ positive.} \label{fig:fig1} \end{figure} Next, let $Z$ be the $z$ domain containing $z=z_{0}$ in which $f(z)$ has no other zeros, and in which $f(z)$ and $g(z)$ are meromorphic, with poles (if any) at finite points, at $z=w_{j}$ ($j=1,2,3,\cdots $), say, such that at $z=w_{j}$ (see \cite[Chap. 10, Thm. 4.1]{Olver:1997:ASF}): (i) $f(z)$ has a pole of order $m>2$, and $g(z)$ is analytic or has a pole of order less than $\frac{1}{2}m+1$, or (ii) $f(z)$ and $g(z)$ have a double pole, and $\left( z-w_{j}\right) ^{2}{g(z) \rightarrow -}\frac{1}{4}$ as $z\rightarrow w_{j}$. We shall call these \textit{admissible poles}. In some applications the parameter $u$ in (\ref{eq1}), and hence $g(z)$, can be redefined by a translation to make a pole admissible (which it otherwise would not be). For $j=0,\pm 1$ choose an arbitrary $z^{(j) }\in T_{j}\cap Z$. These can be chosen at an ordinary point, at an admissible pole, or at infinity if $f(z)$ and $g(z)$ can be expanded in convergent series in a neighborhood of $z=\infty $ of the form \begin{equation} f(z)=z^{m}\sum\limits_{s=0}^{\infty }f_{s}z^{-s},\ g(z)=z^{p}\sum\limits_{s=0}^{\infty }g_{s}{z}^{-s}, \label{fginfinity} \end{equation} where $f_{0}\neq 0$, $g_{0}\neq 0$, and either $m$ and $p$ are integers such that $m>-2$ and $p<\frac{1}{2}m-1$, or $m=p=-2$ and $g_{0}=-\frac{1}{4}$. For details and generalizations of (\ref{fginfinity}) see \cite[Chap. 10, Sects. 4 and 5]{Olver:1997:ASF}. In this paper we assume that each $z^{(j)}$ is chosen at infinity satisfying the above conditions, or at an admissible pole. For each $j=0,\pm 1$ the following LG region of validity $Z_{j}(u,z^{(j)})$ (abbreviated $Z_{j}$) then comprises the $z$ point set for which there is a path $\hat{\mathcal{L}}_{j}$ linking $z$ with $z^{(j) }$ in $Z$ and having the properties (i) $\hat{\mathcal{L}}_{j} $ consists of a finite chain of $R_{2}$ arcs (as defined in \cite[Chap. 5, sec.
3.3]{Olver:1997:ASF}), and (ii) as $v$ passes along $\hat{\mathcal{L}}_{j} $ from $z^{(j) }$ to $z$, the real part of $(-1)^{j}u\xi(v)$ is nonincreasing, where $\xi (v)$ is given by (\ref{eq2}) with $z=v$, and with the chosen sign fixed throughout. Following Olver \cite[Chap. 6, sec. 11]{Olver:1997:ASF} these are called \textit{progressive paths}. Typically one would choose each $z^{(j) }$ to maximize the size of $Z_{j}(u,z^{(j)}) $; for example, if $\theta=\arg(u)$ and the positive sign is chosen in (\ref{eq2}), one might choose $ z^{(j) }$ corresponding to $\xi =\xi ^{(j) }:=\infty \exp \left\{ -i\theta +ij\pi \right\} $; in this case $z^{(j) }$ would either also be at infinity (provided (\ref{fginfinity})\ holds), or be an admissible pole. We now apply \cite{Dunster:2020:LGE} to (\ref{eq3}), and this leads to the following. \begin{theorem} \label{thm:2.1} Let three solutions of (\ref{eq3}) be given by \begin{equation} W_{0}(u,\zeta) =\frac{1}{\zeta ^{1/4}}\exp \left\{ -u\xi +\sum\limits_{s=1}^{n-1}{(-1) ^{s}\frac{\hat{E}_{s}\left( z\right) -\hat{E}_{s}\left( {z^{(0) }}\right) }{u^{s}}} \right\} \left\{ 1+\eta _{n,0}(u,z) \right\}, \label{eq10} \end{equation} and \begin{equation} W_{\pm 1}(u,\zeta) =\frac{1}{\zeta ^{1/4}}\exp \left\{ u\xi +\sum\limits_{s=1}^{n-1}\frac{\hat{E}_{s}(z) -\hat{E} _{s}\left( {z^{(\pm 1) }}\right) }{u^{s}}\right\} \left\{ { 1+\eta _{n,\pm 1}(u,z) }\right\}, \label{eq11} \end{equation} where the root in (\ref{eq10}) is such that $\mathrm{Re}(u \xi)>0$ in $T_{0}$ and $\mathrm{Re}(u \xi)<0$ in $T_{-1}\cup T_{1}$; the branch in (\ref{eq11}) for $W_{j}(u,\zeta) $ ($j=\pm 1$) is such that $\mathrm{Re}(u \xi)<0$ in $T_{j}$ and $\mathrm{Re}(u \xi)>0$ in $T_{0}\cup T_{-j}$. Then each solution is independent of $n$, and for $z\in Z_{j}\left( {u,}z^{(j)}\right) $ ($j=0,\pm 1$) \begin{equation} \left\vert \eta _{n,j}(u,z) \right\vert \leq |u| ^{-n}\omega _{n,j}(u,z) \exp \left\{ {\left\vert u\right\vert ^{-1}\varpi _{n,j}(u,z) +|u| ^{-n}\omega _{n,j}(u,z) }\right\}, \label{eq12} \end{equation} where \begin{multline} \omega _{n,j}(u,z) =2\int_{z^{(j)}}^{z}{\left\vert { \hat{F}_{n}(t) f^{1/2}(t) dt}\right\vert } \\ +\sum\limits_{s=1}^{n-1}\dfrac{1}{{|u| ^{s}}}{ \int_{z^{(j)}}^{z}{\left\vert {\sum\limits_{k=s}^{n-1}{\hat{F} _{k}(t) \hat{F}_{s+n-k-1}(t) }f^{1/2}\left( t\right) dt}\right\vert }}, \label{eq13} \end{multline} and \begin{equation} \varpi _{n,j}(u,z) =4\sum\limits_{s=0}^{n-2}\frac{1}{{ |u| ^{s}}}{\int_{z^{(j)}}^{z}{\left\vert {\hat{ {F}}_{s+1}(t) f^{1/2}(t) dt}\right\vert }}. \label{eq14} \end{equation} Here the paths of integration are taken along $\hat{\mathcal{L}}_{j}$. \end{theorem} \begin{proof} From the definition (\ref{eq2}) of $\xi$ and letting $Y(u,\xi) =\zeta ^{1/4}W(u,\zeta) $ we transform (\ref{eq3}) to \begin{equation} d^{2}Y/d{\xi }^{2}=\left\{ {u^{2}+}\Phi (z) \right\} Y, \label{LGeqn} \end{equation} where $\Phi (z) $ is given by (\ref{eq5}). Then we apply \cite[Theorem 1.1]{Dunster:2020:LGE}, in particular (1.17) yields $W_{0}\left( { u,\zeta }\right) $, and (1.16) yields $W_{\pm 1}(u,\zeta) $ (with different branches of $\xi$ in the $z$ plane, as described above). The constants $\hat{E}_{s}\left( {z^{(0) }}\right) $ in (\ref{eq10}) were chosen so that \begin{equation} \lim_{z\rightarrow {z^{(0) }}}\zeta ^{1/4}e^{u{\xi }}W_{0}(u,\zeta) =1, \label{eq14a} \end{equation} since from (\ref{eq12}) - (\ref{eq14}) $\lim_{z\rightarrow {z^{(0) }}}{\eta _{n,0}(u,z) =0}$, and hence $W_{0}(u,\zeta) $ is independent of $n$.
Similarly for the constants $\hat{{E }}_{s}\left( {z^{(\pm 1) }}\right) $ in (\ref{eq11}) and the resulting independence of $n$ for $W_{\pm 1}(u,\zeta) $. \end{proof} \begin{remark}Note that all three solutions are analytic and hence single-valued near $\zeta =0$ even though $\xi$ and the coefficients $\hat{E}_{s}(z) $ are not. \end{remark} \subsection{Connection coefficients} We now obtain a connection formula relating the three solutions $W_{j}(u,\zeta) $ ($j=0,\pm 1$). For this, and also throughout this paper, we assume the following. \begin{hypothesis} Let each $z^{(j) }\in T_{j}\cap Z_{j}$ ($j=0,\pm 1$) either be at infinity with (\ref{fginfinity})\ holding, or at an admissible pole. Furthermore, assume $z^{(0) }\in Z_{1}\cap Z_{-1}$ and $z^{( \pm 1) }\in Z_{0}\cap Z_{\mp 1}$, i.e.\ for $j,k=0,\pm 1$ there is a path consisting of a finite chain of $R_{2}$ arcs, linking $z^{(j)}$ with $z^{(k)}$ in $Z$, such that, as $z$ passes along the path from $z^{(j) }$ to $z^{(k) }$, the real part of $u\xi $ is monotonic. \label{hyp} \end{hypothesis} \begin{lemma} Under \Cref{hyp} \begin{equation} \lambda _{-1}W_{-1}(u,\zeta) =iW_{0}(u,\zeta) +\lambda _{1}W_{1}(u,\zeta), \label{eq15} \end{equation} where (with $\lambda _{0}:=1$) \begin{equation} \lambda _{j}\exp \left\{ -\sum\limits_{s=1}^{n-1}\frac{\hat{E}_{s}\left( {z^{(j) }}\right) }{u^{s}}\right\} =\mu _{n}(u) \left\{ {1+\delta }_{n,j}(u) \right\} \label{lambda} \: (j=0,\pm 1), \end{equation} in which \begin{equation} \mu _{n}(u) =\exp \left\{ {-\sum\limits_{s=1}^{n-1}{ (-1)^{s}\frac{\hat{E}_{s}\left( {z^{(0) }}\right) }{u^{s}}} }\right\}, \label{eq46} \end{equation} \begin{equation} \delta_{n,\pm 1}(u) =\dfrac{\eta _{n,0} \left( u,z^{( \mp 1) }\right) -\eta _{n,\pm 1}\left({u,z^{( \mp 1) }}\right) }{1+\eta _{n,\pm 1}\left( {u,z^{( \mp 1) }}\right) }, \label{delta} \end{equation} and $\delta_{n,0}(u)=0$. \end{lemma} \begin{remark} From (\ref{eq12}) and (\ref{delta}) we note that ${\delta}_{n,j}(u) =\mathcal{O}\left(u^{-n}\right) $, and hence from (\ref{lambda}) and (\ref{eq46}) \begin{equation} \lambda _{j}\exp \left\{ -\sum\limits_{s=1}^{n-1}\frac{\hat{E}_{s}\left( z^{( j) }\right) }{u^{s}}\right\} =\lambda _{k}\exp \left\{ -\sum\limits_{s=1}^{n-1}\frac{\hat{E}_{s}\left( {z^{(k)}} \right) }{u^{s}}\right\} \left\{ 1+\mathcal{O}\left( \frac{1}{{u^{n}}} \right) \right\}, \label{eq22} \end{equation} for $j,k\in \left\{ 0,1,-1\right\} $. \end{remark} \begin{proof} The result is trivial for $j=0$ since by definition $\lambda _{0}=1$ and ${\delta }_{n,0}(u) =0$. For $j=-1$ let $z\rightarrow z^{(-1)}$ in (\ref{eq15}) (correspondingly $\xi \rightarrow \xi ^{(-1) }$ and $\zeta \rightarrow \zeta ^{(-1) }$). For $W_{0}(u,\zeta) $ and $W_{-1}(u,\zeta) $ we can use (\ref{eq10}) and (\ref{eq11}), and the latter function vanishes exponentially in the limit. For $W_{1}(u,\zeta) $ we cross a branch cut as $z\rightarrow z^{(-1) }$, and as such in (\ref{eq11}) we have $\xi \rightarrow -\xi ^{(-1) }$, so that $\mathrm{Re}(u \xi) \rightarrow +\infty $. Thus $W_{1}(u,\zeta) $, like $W_{0}(u,\zeta) $, is exponentially large in this limit. As remarked earlier, $\hat{E}_{2s}(z) $ and $\left(z-z_{0}\right) ^{1/2}\hat{E}_{2s+1}(z) $ are meromorphic in $Z$, and hence single-valued, since they are analytic in that domain except for a pole at the turning point $z=z_{0}$.
Thus we have for the coefficients in (\ref{eq11}) for $W_{1}(u,\zeta) $ that $\hat{E}_{2s}(z) \rightarrow \hat{E}_{2s}\left( z^{(-1) }\right) $ and $\hat{E}_{2s+1}(z) \rightarrow -\hat{E}_{2s+1}\left( z^{(-1) }\right) $ as $z\rightarrow z^{(-1) }$, and in addition $\zeta ^{-1/4}\rightarrow -i\left\{ \zeta ^{(-1)}\right\}^{-1/4}$. We then have from (\ref{eq15}) \begin{equation} \underset{z\rightarrow z^{(-1) }}{\lim }\left\{ \lambda_{1}W_{1}(u,\zeta) +iW_{0}(u,\zeta) \right\} =0, \label{eq18} \end{equation} and hence \begin{multline} \lambda _{1}\exp \left\{ -\sum\limits_{s=1}^{n-1}\dfrac{\hat{E} _{s}\left( {z^{(1) }}\right) }{u^{s}}\right\} \left\{ {1+\eta_{n,1}\left( {u,z^{(-1) }}\right) }\right\} \\ -\exp \left\{ -\sum\limits_{s=1}^{n-1}(-1) ^{s}\dfrac{\hat{E}_{s}\left( {z^{(0) }}\right) }{u^{s}}\right\} \left\{ {1+\eta _{n,0}\left( {u,z^{(-1) }}\right) }\right\} =0. \label{eq19} \end{multline} Similarly, letting $z\rightarrow z^{(1) }$ in (\ref{eq15}) yields \begin{multline} \lambda _{-1}\exp \left\{ -\sum\limits_{s=1}^{n-1}\dfrac{\hat{E} _{s}\left( {z^{(-1) }}\right) }{u^{s}}\right\} \left\{ { 1+\eta _{n,-1}\left( {u,z^{(1)}}\right) }\right\} \\ -\exp \left\{ -\sum\limits_{s=1}^{n-1}{(-1) ^{s}\dfrac{\hat{{E }}_{s}\left( {z^{(0) }}\right) }{u^{s}}}\right\} \left\{ { 1+\eta _{n,0}\left( {u,z^{(1) }}\right) }\right\} =0. \label{eq21} \end{multline} Then (\ref{lambda}) follows from (\ref{eq46}), (\ref{delta}), (\ref{eq19}) and (\ref{eq21}). \end{proof} \begin{remark} The change in integration constants does not affect the error bounds. Thus for $\eta _{n,j}(u,\xi) $ we can still use (\ref{eq12}). \end{remark} \subsection{Airy functions} We complete this section by presenting similar LG expansions, complete with error bounds, for the Airy functions appearing in (\ref{wjs}). The proof is given in \cref{secA}. We note that the regions of validity of the following asymptotic expansions are not maximal, but they suffice for our purposes. \begin{theorem}\label{thm:10} Let $\left\vert {\arg \left( {u^{2/3}\zeta }\right) }\right\vert \leq {\frac{2}{3}}\pi $ (or equivalently, from (\ref{eq2}), $\left\vert \arg(u \xi) \right\vert \leq \pi $).
Then \begin{equation} \mathrm{Ai}\left( {u^{2/3}\zeta }\right) =\frac{1}{2\pi ^{1/2}u^{1/6}\zeta ^{1/4}}\exp \left\{ -u\xi +\sum\limits_{s=1}^{n-1}{(-1) ^{s} \frac{a_{s}}{su^{s}\xi ^{s}}}\right\} \left\{ {1+\eta _{n}^{(0) }(u,\xi) }\right\}, \label{eq24} \end{equation} and \begin{equation} \mathrm{Ai}^{\prime }\left( {u^{2/3}\zeta }\right) =-\frac{u^{1/6}\zeta ^{1/4} }{2\pi ^{1/2}}\exp \left\{-u\xi +\sum\limits_{s=1}^{n-1}{(-1)^{s}\frac{\tilde{a}_{s}}{su^{s}\xi ^{s}}}\right\} \left\{ {1+ \tilde{{\eta }}_{n}^{(0) }(u,\xi) }\right\}, \label{eq25} \end{equation} where \begin{equation} \left\vert {\eta _{n}^{(0) }(u,\xi) }\right\vert \leq |u| ^{-n}\gamma _{n}(u,\xi) \exp \left\{ {|u| ^{-1}\beta _{n}(u,\xi) +|u| ^{-n}\gamma _{n}(u,\xi) }\right\}, \label{eq26} \end{equation} and \begin{equation} \left\vert {\tilde{{\eta }}_{n}^{(0) }(u,\xi) } \right\vert \leq |u| ^{-n}\tilde{{\gamma }}_{n}(u,\xi) \exp \left\{ |u| ^{-1}\tilde{{\beta }} _{n}(u,\xi) +|u| ^{-n}\tilde{{\gamma }} _{n}(u,\xi) \right\}, \label{eq27} \end{equation} in which \begin{equation} \gamma _{n}(u,\xi) =\frac{2a_{n}\Lambda _{n+1}}{|\xi| ^{n}}+\frac{1}{|u| \left\vert \xi \right\vert ^{n+1}}\sum\limits_{s=0}^{n-2}\frac{\Lambda _{n+s+2}}{|u \xi| ^{s}}\sum\limits_{k=s+1}^{n-1}{a_{k}a_{s+n-k}}, \label{eq28} \end{equation} \begin{equation} {\beta }_{n}(u,\xi) =\frac{4}{|\xi| } \sum\limits_{s=0}^{n-2}{\frac{a_{s+1}\Lambda _{s+2}}{\left\vert {u\xi } \right\vert ^{s}}}, \label{eq29} \end{equation} \begin{equation} \tilde{\gamma}_{n}(u,\xi) =\frac{2\left\vert \tilde{a} _{n}\right\vert \Lambda _{n+1}}{|\xi| ^{n}}+\frac{1}{ |u| |\xi| ^{n+1}} \sum\limits_{s=0}^{n-2}{\frac{\Lambda _{n+s+2}}{ |u \xi| ^{s}}\sum\limits_{k=s+1}^{n-1}\tilde{a}{_{k}\tilde{a}_{s+n-k}}}, \label{eq28a} \end{equation} \begin{equation} {\tilde{\beta}}_{n}(u,\xi) =\frac{4}{\left\vert \xi \right\vert }\sum\limits_{s=0}^{n-2}{\frac{\left\vert \tilde{a} _{s+1}\right\vert \Lambda _{s+2}}{|u \xi| ^{s}}}, \label{eq29a} \end{equation} and \begin{equation} \Lambda _{n}=\dfrac{\pi ^{1/2}\Gamma \left( \frac{1}{2}n-\frac{1}{2}\right) }{2\Gamma \left( \frac{1}{2}n\right) }. \label{eq93} \end{equation} \end{theorem} On replacing $\zeta$ by $\zeta e^{\mp 2\pi i/3}$ we have the following, assuming the same branches for $\xi(z)$ as in \Cref{thm:2.1}. \begin{corollary}\label{cor:A} For $\left\vert {\arg \left( {u^{2/3}\zeta e^{\mp 2\pi i/3}} \right) }\right\vert \leq {\frac{2}{3}}\pi $ \begin{equation} \mathrm{Ai}_{\pm 1}\left( {u^{2/3}\zeta }\right) =\frac{e^{\pm \pi i/6}}{2\pi ^{1/2}u^{1/6}\zeta ^{1/4}}\exp \left\{ u\xi +\sum\limits_{s=1}^{n-1}\frac{ a_{s}}{su^{s}\xi ^{s}}\right\} \left\{ {1+\eta _{n}^{(\pm 1) }(u,\xi) }\right\}, \label{eq30} \end{equation} and \begin{equation} \mathrm{Ai}_{\pm 1}^{\prime }\left( {u^{2/3}\zeta }\right) =\frac{e^{\pm \pi i/6}u^{1/6}\zeta ^{1/4}}{2\pi ^{1/2}}\exp \left\{ u\xi +\sum\limits_{s=1}^{n-1}\frac{\tilde{a}_{s}}{su^{s}\xi ^{s}}\right\} \left\{ {1+\tilde{\eta }_{n}^{(\pm 1) }(u,\xi) }\right\}, \label{eq31} \end{equation} where the error terms are given by $\eta _{n}^{(\pm 1) }(u,\xi) =\eta _{n}^{(0) }\left( {u,\xi e^{\mp \pi i}} \right) $ and $\tilde{{\eta }}_{n}^{(\pm 1) }(u,\xi) =\tilde{{\eta }}_{n}^{(0) }\left( u,\xi e^{\mp \pi i} \right) $, and satisfy the bounds (\ref{eq26}) and (\ref{eq27}), respectively. \end{corollary} \section{Error bounds away from the turning point} \label{sec3} The main result is given by \Cref{thm:main1} below. In leading to this, we present some preliminary results.
We begin, following \cite{Dunster:2017:COA}, by defining $A(u,z) $ and $B(u,z) $ by \begin{equation} \frac{1}{2\pi ^{1/2}u^{1/6}}W_{0}(u,\zeta) =\mathrm{Ai} _{0}\left( {u^{2/3}\zeta }\right) A(u,z) +\mathrm{Ai} _{0}^{\prime }\left( {u^{2/3}\zeta }\right) B(u,z), \label{eq34} \end{equation} and \begin{equation} \frac{e^{\pi i/6}\lambda _{1}}{2\pi ^{1/2}u^{1/6}}W_{1} (u,\zeta) =\mathrm{Ai}_{1}\left( {u^{2/3}\zeta }\right) A(u,z) + \mathrm{Ai}_{1}^{\prime }\left( {u^{2/3}\zeta }\right) B(u,z). \label{eq35} \end{equation} This leads to the following identity. \begin{proposition} Under \Cref{hyp} \begin{equation} \frac{e^{-\pi i/6}\lambda _{-1}}{2\pi ^{1/2}u^{1/6}}W_{-1} (u,\zeta) =\mathrm{Ai}_{-1}\left( {u^{2/3}\zeta }\right) A(u,z) + \mathrm{Ai}_{-1}^{\prime }\left( {u^{2/3}\zeta }\right) B(u,z). \label{eq36} \end{equation} \end{proposition} \begin{proof} This follows from (\ref{eq15}), (\ref{eq34}), (\ref{eq35}) and the Airy function connection formula (\cite[Chap. 11, Eq. (8.03)] {Olver:1997:ASF}) \begin{equation} i\mathrm{Ai}\left( {u^{2/3}\zeta }\right) +e^{-\pi i/6}\mathrm{Ai}_{1}\left( {u^{2/3}\zeta }\right) =e^{\pi i/6}\mathrm{Ai}_{-1}\left( {u^{2/3}\zeta } \right). \label{eq36a} \end{equation} \end{proof} \begin{corollary} Let $z\in Z_{j}\cap Z_{k}$ ($j\neq k$). With $\lambda _{0}=1$ \begin{multline} 2A(u,z) =\lambda _{j}\exp \left\{ \sum\limits_{s=1}^{n-1}{ \dfrac{\tilde{\mathcal{E}}_{s}(z) -\hat{E}_{s}\left( {z^{(j) }}\right) }{u^{s}}}\right\} \left\{ 1+\eta _{n,j} (u,z) \right\} \left\{ 1+\tilde{{\eta }}_{n}^{(k) } (u,\xi)\right\} \\ +\lambda _{k}\exp \left\{ \sum\limits_{s=1}^{n-1}{(-1) ^{s} \dfrac{\tilde{\mathcal{E}}_{s}(z) -\hat{E}_{s}\left( {z^{(k)}}\right) }{u^{s}}}\right\} \left\{ {1+\eta _{n,k} (u,z) }\right\} \left\{ {1+\tilde{{\eta }}_{n}^{(j) } (u,\xi) }\right\}, \label{eq37} \end{multline} where $j=\pm 1$, $k=0$ for $z\in T_{0,\pm 1}\cup T_{\pm 1,0}$, and $j=\pm 1$ , $k=\mp 1$ for $z\in T_{\pm 1,\mp 1}$. Under the same conditions \begin{multline} 2u^{1/3}\zeta ^{1/2}B(u,z) \\ =\lambda _{j}\exp \left\{ \sum\limits_{s=1}^{n-1}\dfrac{\mathcal{E}_{s}(z) -\hat{E} _{s}\left( {z^{(j) }}\right) }{u^{s}}\right\} \left\{ {1+\eta _{n,j}(u,z) }\right\} \left\{ {1+\eta _{n}^{(k) }(u,\xi) }\right\} \\ -\lambda _{k}\exp \left\{ \sum\limits_{s=1}^{n-1}{(-1) ^{s} \dfrac{\mathcal{E}_{s}(z) -\hat{E}_{s}\left( {z^{(k) }}\right) }{u^{s}}}\right\} \left\{ {1+\eta _{n,k}(u,z) } \right\} \left\{ {1+\eta _{n}^{(j) }(u,\xi) } \right\}. \label{eq39} \end{multline} \end{corollary} \begin{proof} Let $z\in T_{0,-1}\cup T_{-1,0}$. Solving (\ref{eq34}) and (\ref{eq36}) \begin{multline} A(u,z) =\pi ^{1/2}u^{-1/6}\left\{ {e^{\pi i/6}W_{0}(u,\zeta) \mathrm{Ai}_{-1}^{\prime }\left( {u^{2/3}\zeta }\right) } \right. \\ \left. {-\lambda _{-1}W_{-1}(u,\zeta) \mathrm{Ai}^{\prime }\left( {u^{2/3}\zeta }\right) }\right\}, \label{eq42} \end{multline} and \begin{multline} B(u,z) =\pi ^{1/2}u^{-1/6}\left\{ {\lambda _{-1}W_{-1}(u,\zeta) \mathrm{Ai}\left( {u^{2/3}\zeta }\right) } \right. \\ \left. {-e^{\pi i/6}W_{0}(u,\zeta) \mathrm{Ai}_{-1}\left( { u^{2/3}\zeta }\right) }\right\}. \label{eq43} \end{multline} Then use (\ref{eq10}), (\ref{eq11}), (\ref{eq24}), (\ref{eq25}), (\ref{eq30}) and (\ref{eq31}). For the other sectors we repeat this procedure, starting by solving an appropriate pair of (\ref{eq34}) - (\ref{eq36}) for $A(u,z) $ and $B(u,z) $. \end{proof} We define explicit error terms associated with the expansions in our main theorem below.
To do so, first let \begin{equation} \eta _{n,j}^{(k) }(u,z) =\eta_{n,j}(u,z) +\eta _{n}^{(k)}(u,\xi) +\eta_{n,j}(u,z) \eta_{n}^{(k) }(u,\xi), \label{eq44} \end{equation} \begin{equation} \tilde{{\eta }}_{n,j}^{(k) }(u,z) =\eta_{n,j}(u,z) +\tilde{{\eta }}_{n}^{(k) }(u,\xi) +\eta_{n,j}(u,z) \tilde{{\eta }}_{n}^{(k) }(u,\xi), \label{eq45} \end{equation} \begin{equation} \mathcal{A}_{2m+2}(u,z) =\left[ \mu _{2m+2}(u) \right] ^{-1}f^{-1/4}(z) \zeta ^{1/4}A(u,z), \label{eq47} \end{equation} and \begin{equation} \mathcal{B}_{2m+2}(u,z) =\left[ \mu _{2m+2}(u) \right] ^{-1}f^{-1/4}(z) \zeta ^{1/4}B(u,z), \label{eq48} \end{equation} where $\mu _{n}(u)$ is given by (\ref{eq46}). Then using (\ref{lambda}), (\ref{eq37}), (\ref{eq45}) with $n=2m+2$, and (\ref{eq47}) we have \begin{multline} 2\left\{ \dfrac{f(z) }{\zeta }\right\} ^{1/4}\mathcal{A} _{2m+2}(u,z) =\exp \left\{ \sum\limits_{s=1}^{2m+1}{\dfrac{ \tilde{\mathcal{E}}_{s}(z) }{u^{s}}}\right\} \\ +\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\dfrac{\tilde{ \mathcal{E}}_{s}(z) }{u^{s}}}\right\} +\tilde{\varepsilon} _{2m+2}(u,z), \label{eq51a} \end{multline} where \begin{multline} \tilde{\varepsilon}_{2m+2}(u,z) =\exp \left\{ \sum\limits_{s=1}^{2m+1}\dfrac{\tilde{\mathcal{E}}_{s}(z) }{ u^{s}}\right\} \tilde{e}_{2m+2,j}^{(k) }(u,z) \\ +\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\dfrac{\tilde{ \mathcal{E}}_{s}(z) }{u^{s}}}\right\} \tilde{e} _{2m+2,k}^{(j) }(u,z), \label{eq51b} \end{multline} in which \begin{equation} \tilde{e}_{n,j}^{(k) }(u,z) =\tilde{\eta} _{n,j}^{(k) }(u,z) +{\delta }_{n,j}(u) +\tilde{\eta}_{n,j}^{(k) }(u,z) \delta_{n,j} (u) \label{eq51c} \end{equation} and $\tilde{\varepsilon}_{2m+2}(u,z) ={\mathcal{O}}\left( u^{-2m-2}\right) $ uniformly for $z\in Z_{j}\cap Z_{k}$. Similarly from (\ref{lambda}), (\ref{eq39}), (\ref{eq44}) with $n=2m+2$, and (\ref{eq48}) \begin{multline} 2u^{1/3}f^{1/4}(z) \zeta ^{1/4}\mathcal{B}_{2m+2}(u,z) =\exp \left\{ \sum\limits_{s=1}^{2m+1}\dfrac{\mathcal{E}_{s}\left( z\right) }{u^{s}}\right\} \\ -\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\dfrac{ \mathcal{E}_{s}(z) }{u^{s}}}\right\} +\varepsilon _{2m+2} (u,z), \label{eq56b} \end{multline} where \begin{multline} \varepsilon _{2m+2}(u,z) =\exp \left\{ \sum\limits_{s=1}^{2m+1}{ \dfrac{\mathcal{E}_{s}(z) }{u^{s}}}\right\} e_{2m+2,j}^{\left( k\right) }(u,z) \\ -\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\dfrac{ \mathcal{E}_{s}(z) }{u^{s}}}\right\} e_{2m+2,k}^{\left( j\right) }(u,z), \label{eq56d} \end{multline} with \begin{equation} e_{n,j}^{(k) }(u,z) =\eta _{n,j}^{(k) }(u,z) +{\delta }_{n,j}(u) +\eta _{n,j}^{\left( k\right) }(u,z) \delta _{n,j}(u) \label{eq56e} \end{equation} and $\varepsilon _{2m+2}(u,z) ={\mathcal{O}}\left( u^{-2m-2}\right) $ uniformly for $z\in Z_{j}\cap Z_{k}$. In order to simplify our error bounds we shall make use of the following elementary result. \begin{lemma} Let $a$, $b$, $c$, and $d$ be real and non-negative. Then if \begin{equation} a\leq c+d+cd, \label{L1} \end{equation} it follows that \begin{equation} a+b+ab\leq \left( b+c+d\right) \left\{ 1+\tfrac{1}{2}\left( b+c+d\right) \right\} ^{2}. \label{L2} \end{equation} \end{lemma} \begin{proof} We have from (\ref{L1}) \begin{multline} \left( b+c+d\right) \left\{ 1+\tfrac{1}{2}\left( b+c+d\right) \right\} ^{2}-\left( a+b+ab\right) \\ \geq \left( b+c+d\right) \left\{ 1+\tfrac{1}{2}\left( b+c+d\right) \right\} ^{2}-\left\{ b+c+d+cd+b\left( c+d+cd\right) \right\}. 
\label{L3} \end{multline} On expanding the RHS it is easy to verify that all the negative terms cancel out, and the result follows. \end{proof} \begin{remark}If the constants are small and of the same order of magnitude the bound (\ref{L2}) is quite sharp; more precisely, if each constant is $\mathcal{O} \left( \varepsilon \right)$ where $\varepsilon \rightarrow 0$ then \begin{equation} a+b+ab=\left( b+c+d\right) \left\{ 1+\tfrac{1}{2}\left( b+c+d\right) \right\} ^{2}+{\mathcal{O}\left( \varepsilon ^{2}\right) }. \end{equation} This is easily verifiable by examining the negative terms appearing in the expansion of the RHS of (\ref{L3}). \end{remark} Now from (\ref{eq44}) \begin{equation} \left\vert \eta _{n,j}^{(k) }(u,z) \right\vert \leq \left\vert \eta _{n,j}(u,z) \right\vert +\left\vert \eta _{n}^{(k) }(u,\xi) \right\vert +\left\vert \eta _{n,j}(u,z) \right\vert \left\vert \eta _{n}^{(k) }(u,\xi) \right\vert. \label{L4} \end{equation} Then on identifying the corresponding terms of (\ref{L4}) with those of (\ref{L1}) we deduce from (\ref{L2}) that for $z\in Z_{j}$ ($j=0,\pm 1$) \begin{multline} \left\vert \eta _{n,j}^{(k) }(u,z) \right\vert +\left\vert {\delta }_{n,j}(u) \right\vert +\left\vert \eta _{n,j}^{(k) }(u,z) \right\vert \left\vert {\delta } _{n,j}(u) \right\vert \\ \leq \left( \left\vert {\delta }_{n,j}(u) \right\vert +\left\vert \eta _{n,j}(u,z) \right\vert +\left\vert \eta _{n}^{(k) }(u,\xi) \right\vert \right) \\ \times \left\{ 1+\tfrac{1}{2}\left( \left\vert {\delta }_{n,j}\left( u\right) \right\vert +\left\vert \eta _{n,j}(u,z) \right\vert +\left\vert \eta _{n}^{(k) }(u,\xi) \right\vert \right) \right\} ^{2}, \label{eq101} \end{multline} and hence from (\ref{eq12}) and (\ref{eq56e}) \begin{equation} \left\vert e_{n,j}^{(k) }(u,z) \right\vert \leq {|u| ^{-n}}e_{n,j}(u,z) \left\{ 1+\tfrac{1}{2}{|u| ^{-n}}e_{n,j}(u,z) \right\} ^{2}, \label{eq102} \end{equation} where $e_{n,j}(u,z) $ is given by (\ref{eq100}) below. A similar bound can be established for $\tilde{e}_{n,j}^{(k) }(u,z)$. Collecting together (\ref{wjs}), (\ref{eq47}) - (\ref{eq56d}), and (\ref{eq102}) we have arrived at the main result of this section: \begin{theorem} \label{thm:main1} Assume \Cref{hyp} and let $z\in Z_{j}\cap Z_{k}$ ($j,k\in \left\{0,1,-1\right\} $, $j\neq k$). 
Then for each positive integer $m$ there exist three solutions of (\ref{eq1}) of the form \begin{equation} w_{m,l}(u,z) =\mathrm{Ai}_{l}\left( u^{2/3}\zeta \right) \mathcal{A}_{2m+2}(u,z) +\mathrm{Ai}_{l}^{\prime }\left(u^{2/3}\zeta\right) \mathcal{B}_{2m+2}(u,z) \ (l=0,\pm 1), \label{solutions} \end{equation} where \begin{multline} \mathcal{A}_{2m+2}(u,z) =\left\{ \dfrac{\zeta }{f(z) }\right\} ^{1/4}\exp \left\{ \sum\limits_{s=1}^{m}\dfrac{ \mathcal{\tilde{E}}_{2s}(z) }{u^{2s}}\right\} \cosh \left\{ \sum\limits_{s=0}^{m}\dfrac{\mathcal{\tilde{E}}_{2s+1}(z) }{u^{2s+1}}\right\} \\ +\dfrac{1}{2}\left\{ \dfrac{\zeta }{f(z) }\right\} ^{1/4}\tilde{\varepsilon}_{2m+2}(u,z), \label{Ascript} \end{multline} and \begin{multline} \mathcal{B}_{2m+2}(u,z) =\dfrac{1}{u^{1/3}\left\{ \zeta f(z) \right\} ^{1/4}}\exp \left\{ \sum\limits_{s=1}^{m}\dfrac{ \mathcal{E}_{2s}(z) }{u^{2s}}\right\} \sinh \left\{ \sum\limits_{s=0}^{m}\dfrac{\mathcal{E}_{2s+1}(z) }{u^{2s+1}}\right\} \\ +\dfrac{\varepsilon _{2m+2}(u,z) }{2u^{1/3}\left\{ \zeta f(z) \right\} ^{1/4}}, \label{Bscript} \end{multline} such that \begin{multline} \left\vert \tilde{\varepsilon}_{2m+2}(u,z) \right\vert \leq \dfrac{1}{{|u| ^{2m+2}}}\exp \left\{\sum\limits_{s=1}^{2m+1}\mathrm{Re}{\dfrac{\mathcal{\tilde{E}}_{s}(z) }{u^{s}}}\right\} \tilde{e}_{2m+2,j}(u,z) \left\{ 1+\dfrac{\tilde{e}_{2m+2,j}(u,z) }{2{|u| ^{2m+2}}}\right\} ^{2} \\ +\dfrac{1}{{|u| ^{2m+2}}}\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\mathrm{Re}\dfrac{\mathcal{ \tilde{E}}_{s}(z) }{u^{s}}}\right\} \tilde{e}_{2m+2,k}\left( { u,z}\right) \left\{ 1+\dfrac{\tilde{e}_{2m+2,k}(u,z) }{2{ |u| ^{2m+2}}}\right\} ^{2}, \label{Aerror} \end{multline} in which \begin{multline} \tilde{e}_{n,j}(u,z) ={|u| ^{n}}\left\vert {\delta }_{n,j}(u) \right\vert +\omega _{n,j}(u,z) \exp \left\{ {|u| ^{-1}\varpi _{n,j}(u,z) +|u| ^{-n}\omega _{n,j}(u,z) }\right\} \\ +\tilde{\gamma}_{n}(u,\xi) \exp \left\{ {\left\vert u\right\vert ^{-1}\tilde{\beta}_{n}(u,\xi) +\left\vert u\right\vert ^{-n}\tilde{\gamma}_{n}(u,\xi) }\right\}, \label{eq103} \end{multline} and \begin{multline} \left\vert \varepsilon _{2m+2}(u,z) \right\vert \leq \dfrac{1}{|u| ^{2m+2}}\exp \left\{ \sum\limits_{s=1}^{2m+1} \mathrm{Re}{\dfrac{\mathcal{E}_{s}(z) }{u^{s}}}\right\} e_{2m+2,j}(u,z) \left\{ 1+\dfrac{e_{2m+2,j}(u,z) }{2{|u| ^{2m+2}}}\right\} ^{2} \\ +\dfrac{1}{{|u| ^{2m+2}}}\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1) ^{s}\mathrm{Re}\dfrac{\mathcal{E}_{s}(z) }{u^{s}}}\right\} e_{2m+2,k}(u,z) \left\{1+\dfrac{e_{2m+2,k}(u,z)}{2{|u| ^{2m+2}}} \right\} ^{2}, \label{Berror} \end{multline} where \begin{multline} e_{n,j}(u,z) ={|u| ^{n}}\left\vert {\delta }_{n,j}(u) \right\vert +\omega _{n,j}(u,z) \exp \left\{ {|u| ^{-1}\varpi _{n,j}(u,z) +|u| ^{-n}\omega _{n,j}(u,z) }\right\} \\ +\gamma _{n}(u,\xi) \exp \left\{ {|u| ^{-1}\beta _{n}(u,\xi) +|u| ^{-n}\gamma _{n}(u,\xi) }\right\}. \label{eq100} \end{multline} In (\ref{Aerror}) and (\ref{Berror}) $j=\pm 1$, $k=0$ for $z\in T_{0,\pm 1}\cup T_{\pm 1,0}$, and $j=\pm 1$, $k=\mp 1$ for $z\in T_{\pm 1,\mp 1}$. 
\end{theorem} \begin{remark} Here $\mathcal{E}_{s}(z) $ and $\tilde{\mathcal{E}} _{s}(z) $ are given by (\ref{eq40}) and (\ref{eq38}), $\omega_{n,j}(u,z) $ and $\varpi _{n,j}(u,z)$ are given by (\ref{eq13}) and (\ref{eq14}), $\gamma _{n}(u,\xi)$, $\beta _{n}(u,\xi)$, $\tilde{\gamma}_{n}(u,\xi)$ and $\tilde{\beta}_{n}(u,\xi)$ are given by (\ref{eq28}), (\ref{eq29}), (\ref{eq28a}) and (\ref{eq29a}), ${\delta }_{n,0}(u) =0$, and ${\delta}_{n,\pm 1}(u) $ are bounded using (\ref{delta}); in the common situation where the connection coefficients $\lambda _{\pm 1}$ of (\ref{eq15}) are known we instead use the exact expressions \begin{equation} {\delta }_{n,\pm 1}(u) =\lambda _{\pm 1}\exp \left\{ \sum\limits_{s=1}^{n-1}{\frac{{{(-1) ^{s}}}\hat{E}_{s}\left( {z^{(0) }}\right) -\hat{E}_{s}\left( {z^{( \pm 1) }} \right) }{u^{s}}}\right\} -1. \label{exactdelta} \end{equation} Since $\delta_{n,j}(u)={\mathcal{O}}\left( u^{-n}\right)$ we observe that $\tilde{e}_{n,j}(u,z), e_{n,j}(u,z) ={\mathcal{O}}(1) $ as $u\rightarrow \infty $ uniformly for $z\in Z_{j}$, and hence the bounds for $\tilde{\varepsilon}_{2m+2}(u,z)$ and $\varepsilon _{2m+2}(u,z)$ are both ${\mathcal{O}}\left( u^{-2m-2}\right)$ uniformly for $z\in Z_{j}\cap Z_{k}$. \end{remark} \begin{remark} If the series on the RHS of (\ref{eq56b}) are expanded and combined as a series in inverse powers of $u$, then only odd powers remain. Hence one would expect that $\varepsilon _{2m+2}(u,z)={\mathcal{O}}\left( u^{-2m-3}\right)$, and consequently our error bound for the $\mathcal{B}_{2m+2}(u,z)$ expansion overestimates the true error by a factor ${\mathcal{O}}(u)$. With a more delicate analysis it is possible to sharpen the above bounds to reflect this (and also for the corresponding bounds in \cref{sec4} below). This will be pursued in a subsequent paper. \label{remark6} \end{remark} \section{Error bounds in a vicinity of the turning point} \label{sec4} We now consider the case where $z$ is close to $z_{0}$, so that the bounds of the preceding section can no longer be directly applied. As shown in \cite{Dunster:2017:COA} the coefficient functions of (\ref{solutions}) can be computed to high accuracy by Cauchy integrals in the present case. Here we use the same idea to bound the error terms in (\ref{Ascript}) and (\ref{Bscript}). The idea is quite simple: we express the error terms as Cauchy integrals around a simple positively orientated loop $\Gamma $ (say) which encloses the turning point $z_{0}$ and the point $z$ in question (but is not too close to these points), and which lies in the intersection of $Z_{0}$, $Z_{1}$, and $Z_{-1}$. We then bound the integrand of each integral along its contour using the results of the previous section, from which a bound for the error terms follows. The main result is given by \Cref{thm:main2} below. Our choice of $\Gamma $ is the circle $\left\{ z:\left\vert z-z_{0}\right\vert =r_{0}\right\}$, where $r_{0}>0$ is arbitrarily chosen but not too small, and such that the loop lies in the intersection of $Z_{0}$, $Z_{1}$, and $Z_{-1}$. The following result will be used.
\begin{lemma} For $\left\vert z-z_{0}\right\vert <r_{0}$ \begin{equation} \oint_{\left\vert t-z_{0}\right\vert =r_{0}}\left\vert {\dfrac{dt}{t-z}} \right\vert =l_{0}(z) :=\frac{{4r_{0}K}(k) }{ \left\vert z-z_{0}\right\vert +r_{0}}, \label{eq50} \end{equation} where \begin{equation} k=\frac{{2}\sqrt{r_{0}\left\vert z-z_{0}\right\vert }}{\left\vert z-z_{0}\right\vert +r_{0}}, \label{defk} \end{equation} and $K(k)$ is the complete elliptic integral of the first kind defined by (\cite[\S 19.2(ii)]{NIST:DLMF}) \begin{equation} {K}(k) =\int\limits_{0}^{\pi /2}{\dfrac{d\tau }{\sqrt{1-k^{2}\sin ^{2}\left( \tau \right) }}}=\int\limits_{0}^{1}{\dfrac{dt}{\sqrt{ \left( 1-t^{2}\right) \left( 1-k^{2}t^{2}\right) }}\ }\left( 0\leq k<1\right). \label{Kelliptic} \end{equation} \end{lemma} \begin{remark}$K(k) \sim -\frac{1}{2}\ln \left( 1-k\right) $ as $ k\rightarrow 1-$ (\cite[Eq. 19.12.1]{NIST:DLMF}), and hence from (\ref{eq50}) and (\ref{defk}) we find that $l_{0}(z) \sim -2\ln \left( r_{0}-\left\vert z-z_{0}\right\vert \right) $ as $\left\vert z-z_{0}\right\vert \rightarrow r_{0}-$; i.e. $l_{0}(z)$ becomes unbounded (logarithmically) as $z$ approaches $\Gamma $ from its interior. This means that $z$ should not be too close to $\Gamma$ in our subsequent error bounds. \end{remark} \begin{proof} Let $z=z_{0}+ae^{i\theta }$ where $a=\left\vert z-z_{0}\right\vert$, and then with the change of variable $t=z_{0}+r_{0}e^{i\varphi }$ we find that \begin{equation} \oint_{\left\vert t-z_{0}\right\vert =r_{0}}\left\vert {\dfrac{dt}{t-z}} \right\vert =\int\limits_{0}^{2\pi }{\dfrac{ r_{0}d\varphi }{\left\vert r_{0}e^{i\varphi }-ae^{i\theta }\right\vert }=} \int\limits_{0}^{2\pi }{\dfrac{r_{0}d\varphi }{\left\vert r_{0}e^{i\left( \varphi -\theta \right) }-a\right\vert }}. \label{eq51} \end{equation} Now let $\varphi \rightarrow \varphi +\theta $, and using $2\pi $ periodicity of the integrand, we get \begin{equation} \int\limits_{0}^{2\pi }{\dfrac{r_{0}d\varphi }{\left\vert r_{0}e^{i\left( \varphi -\theta \right) }-a\right\vert }=}\int\limits_{-\theta }^{2\pi -\theta }{\dfrac{r_{0}d\varphi }{\left\vert r_{0}e^{i\varphi }-a\right\vert }=}\int\limits_{0}^{2\pi } \dfrac{r_{0}d\varphi }{\sqrt{r_{0}^{2}-2ar_{0}\cos \left( \varphi \right) +a^{2}}}. \label{eq52} \end{equation} Then from symmetry of the integrand about $\varphi =\pi $, followed by using the identity $\cos(\varphi) = 1-2 \sin^{2}(\tau) $ where $\tau =\varphi /2,$ we obtain \begin{multline} \int\limits_{0}^{2\pi }{\dfrac{r_{0}d\varphi }{\sqrt{r_{0}^{2}-2ar_{0}\cos \left( \varphi \right) +a^{2}}}=2}\int\limits_{0}^{\pi } \dfrac{r_{0}d\varphi }{\sqrt{r_{0}^{2}+2ar_{0}\cos \left( \varphi \right) +a^{2}}} \\ =2\int\limits_{0}^{\pi}\dfrac{r_{0}d\varphi }{\sqrt{\left(a+r_{0}\right) ^{2}-4ar_{0}\sin ^{2}\left( \frac{1}{2}\varphi \right) }}=4 \int\limits_{0}^{\pi /2}{\dfrac{r_{0}d\tau }{\sqrt{\left( a+r_{0}\right) ^{2}-4ar_{0}\sin ^{2}(\tau) }}}. \label{eq53} \end{multline} The result then follows from (\ref{Kelliptic}) - (\ref{eq53}) and recalling that $a=\left\vert z-z_{0}\right\vert $. \end{proof} We now bound terms appearing in \Cref{thm:main1} on $\Gamma $ and on certain paths containing parts of this loop. Firstly, let $\gamma _{j,l}$ be the union of part of the loop $\Gamma $ that lies in $T_{j,l}$ ($j,l\in \left\{ 0,1,-1\right\} ,j\neq l$) with an arbitrarily chosen progressive path in $T_{j}$ connecting $\Gamma $ to $z^{(j)}$ (if possible a straight line). There are six of these paths to consider. 
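As a quick numerical check of the preceding lemma, note that with $r_{0}=1$ and $\left\vert z-z_{0}\right\vert =\tfrac{1}{2}$, (\ref{defk}) gives $k=2\sqrt{2}/3\approx 0.9428$, for which $K(k) \approx 2.5286$; hence from (\ref{eq50}) $l_{0}(z) =\tfrac{8}{3}K(k) \approx 6.74$, comfortably finite, whereas taking $\left\vert z-z_{0}\right\vert$ closer to $r_{0}$ exhibits the logarithmic growth noted in the remark above.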
Examples of the paths $\gamma _{j,l}$ are shown in \Cref{fig:fig2,fig:fig3}, for the case $\mathrm{Re}\, z \geq 0$, $u >0$, $z_{0}>0$, $z^{(0)}$ an admissible pole at the origin, and $z^{(1)}$ at infinity. \begin{figure} [htbp] \centering \includegraphics{Fig2.eps} \caption{Path $\gamma _{0,-1}$ in the $z$ plane.} \label{fig:fig2} \end{figure} \begin{figure} [htbp] \centering \includegraphics{Fig3.eps} \caption{Path $\gamma _{1,-1}$ in the $z$ plane.} \label{fig:fig3} \end{figure} We then define \begin{multline} \omega _{n}(u) =2\max_{j,l} \left\{ \int_{\gamma _{j,l}}{\left\vert{\hat{F}_{n}(t) f^{1/2}(t) dt}\right\vert }\right\} \\ +\sum\limits_{s=1}^{n-1}\dfrac{1}{{|u| ^{s}}}{ \sum\limits_{k=s}^{n-1}\max_{j,l} }\left\{ {\int_{\gamma _{j,l}}{\left\vert {{\hat{F}_{k}(t) \hat{F}_{s+n-k-1}(t) }f^{1/2}(t) dt}\right\vert }}\right\}, \label{eq68} \end{multline} and likewise \begin{equation} \varpi _{n}(u) =4\sum\limits_{s=0}^{n-2}\frac{1}{{\left\vert u\right\vert ^{s}}}{\max_{j,l} }\left\{ {\int_{\gamma_{j,l}}{\left\vert {\hat{F} _{s+1}(t) f^{1/2}(t) dt}\right\vert }}\right\}, \label{eq69} \end{equation} where the maxima are taken over all six paths $\gamma _{j,l}$. We next define \begin{equation} {\delta }_{n}(u) =\max_{j=\pm 1}\left\vert {\delta }_{n,j}\left(u\right) \right\vert \label{deltan}, \end{equation} \begin{equation} \Upsilon =\underset{z\in\Gamma}{{\inf }}\left\vert \zeta f(z) \right\vert ^{1/4},\ \tilde{\Upsilon}=\underset{z\in\Gamma}{\sup }\left\vert \zeta /f(z) \right\vert ^{1/4}, \label{zeds} \end{equation} and \begin{equation} \rho =\underset{z\in\Gamma}{\inf }|\xi|. \label{rho} \end{equation} Let $\theta =\arg(u)$; we further define \begin{equation} M_{s}=\underset{z\in\Gamma}{{\sup }}\,\mathrm{Re}\left\{ e^{-is\theta }\mathcal{E} _{s}(z) \right\} ,\ N_{s}=\underset{z\in\Gamma}{{\sup }}\,\mathrm{Re} \left\{ {(-1) ^{s}}e^{-is\theta }\mathcal{E}_{s}(z) \right\}, \label{eq96} \end{equation} and likewise $\tilde{M}_{s}$ and $\tilde{N}_{s}$, where $\mathcal{E}_{s}$ is replaced by $\tilde{\mathcal{E}}_{s}$. From these definitions we note that on the contour $\Gamma $ \begin{equation} \omega _{n,j}(u,z) \leq \omega _{n}(u) ,\ \varpi _{n,j}(u,z) \leq \varpi _{n}(u) \ \left( j=0,\pm 1\right), \label{eq96a} \end{equation} \begin{equation} \left\vert {\exp \left\{ \sum\limits_{s=1}^{n-1}{\dfrac{\mathcal{E} _{s}(z) }{u^{s}}}\right\} }\right\vert \leq \exp \left\{ \sum\limits_{s=1}^{n-1}{\dfrac{M_{s}}{|u| ^{s}}} \right\}, \label{eq97} \end{equation} and \begin{equation} \left\vert {\exp \left\{ \sum\limits_{s=1}^{n-1}{(-1) ^{s} \dfrac{\mathcal{E}_{s}(z) }{u^{s}}}\right\} }\right\vert \leq \exp \left\{ \sum\limits_{s=1}^{n-1}{\dfrac{N_{s}}{|u| ^{s}}}\right\}. \label{eq98} \end{equation} Next we define \begin{multline} d_{2m+2}(u) \\ =\left[ \exp \left\{ \sum\limits_{s=1}^{2m+1}{ \dfrac{M_{s}}{|u| ^{s}}}\right\} +\exp \left\{\sum\limits_{s=1}^{2m+1}{\dfrac{N_{s}}{|u| ^{s}}} \right\} \right] e_{2m+2}(u) \left\{ 1+\dfrac{e_{2m+2}(u) }{2{|u| ^{2m+2}}}\right\} ^{2}, \label{d2m+1} \end{multline} where $e_{n}(u) ={\mathcal{O}}(1) $ as $ u\rightarrow \infty $ and is given by \begin{multline} e_{n}(u) ={|u| ^{n}}{\delta }_{n}\left( u\right) +\omega _{n}(u) \exp \left\{ {|u| ^{-1}\varpi _{n}(u) +|u| ^{-n}\omega _{n}(u) }\right\} \\ +\gamma _{n}(u,\rho) \exp \left\{ {|u| ^{-1}\beta _{n}(u,\rho) +|u| ^{-n}\gamma _{n}(u,\rho) }\right\} . \label{en} \end{multline} Recall $\gamma _{n}(u,\xi) $ is given by (\ref{eq28}), and $\beta _{n}(u,\xi)$ is given by (\ref{eq29}).
Similarly we define \begin{multline} \tilde{d}_{2m+2}(u) \\ =\left[ \exp \left\{ \sum \limits_{s=1}^{2m+1}\dfrac{\tilde{M}_{s}}{|u| ^{s}} \right\} +\exp \left\{\sum\limits_{s=1}^{2m+1}\dfrac{\tilde{N}_{s}}{|u| ^{s}}\right\} \right] \tilde{e}_{2m+2}(u) \left\{ 1+\dfrac{\tilde{e} _{2m+2}(u) }{2{|u| ^{2m+2}}}\right\} ^{2} , \label{d2m+2} \end{multline} where $\tilde{e}_{n}(u) ={\mathcal{O}}(1) $ as $ u\rightarrow \infty $ and is given by \begin{multline} \tilde{e}_{n}(u) ={|u| ^{n}} {\ \delta } _{n}(u) +\omega _{n}(u) \exp \left\{ {\left\vert u\right\vert ^{-1}\varpi _{n}(u) +|u| ^{-n}\omega _{n}(u) }\right\} \\ +\tilde{\gamma}_{n}(u,\rho) \exp \left\{ {\left\vert u\right\vert ^{-1}\tilde{\beta}_{n}(u,\rho) +\left\vert u\right\vert ^{-n}\tilde{\gamma}_{n}(u,\rho) }\right\}, \label{entilda} \end{multline} in which ${\tilde{\gamma}}_{n}(u,\xi) $ and $\tilde{\beta}_{n}(u,\xi)$ are given by (\ref{eq28a}) and (\ref{eq29a}), respectively. We now present the main result of this section. \begin{theorem} \label{thm:main2} Assume \Cref{hyp} and let $\Gamma $ be the circle as described at the beginning of this section, with $z$ lying in its interior. Three solutions of (\ref{eq1}) are then given by (\ref{solutions}) where \begin{multline} \mathcal{A}_{2m+2}(u,z) =\dfrac{1}{2\pi i} \oint_{\left\vert t-z_{0}\right\vert =r_{0}} \exp \left\{ \sum\limits_{s=1}^{m}\dfrac{\tilde{ \mathcal{E}}_{2s}(t) }{u^{2s}}\right\} \\ \times \cosh \left\{ \sum\limits_{s=0}^{m} \dfrac{\tilde{\mathcal{E}}_{2s+1}(t) }{u^{2s+1}}\right\} \left\{ \dfrac{\zeta (t) }{f(t) }\right\} ^{1/4}\dfrac{dt}{t-z}+\dfrac{1}{2}\tilde{\kappa}_{2m+2}(u,z), \label{m02} \end{multline} and \begin{multline} \mathcal{B}_{2m+2}(u,z) =\dfrac{1}{2\pi iu^{1/3}} \oint_{\left\vert t-z_{0}\right\vert =r_{0}} \exp \left\{ \sum\limits_{s=1}^{m}\dfrac{\mathcal{E}_{2s}(t) }{u^{2s}} \right\} \\ \times \sinh \left\{\sum\limits_{s=0}^{m} \dfrac{\mathcal{E}_{2s+1}(t) }{u^{2s+1}}\right\} \dfrac{dt}{\left\{ \zeta (t) f(t) \right\}^{1/4} \left( t-z\right)}+\dfrac{\kappa _{2m+2}(u,z) }{2u^{1/3}}, \label{m01} \end{multline} such that \begin{equation} \left\vert \tilde{\kappa}_{2m+2}(u,z) \right\vert \leq \dfrac{ \tilde{\Upsilon}\tilde{d}_{2m+2}(u) l_{0}(z) }{2\pi |u| ^{2m+2}}, \label{AkappaBound} \end{equation} and \begin{equation} \left\vert \kappa _{2m+2}(u,z) \right\vert \leq \dfrac{ d_{2m+2}(u) l_{0}(z) }{2\pi \Upsilon \left\vert u\right\vert ^{2m+2}}. \label{BkappaBound} \end{equation} \end{theorem} \begin{proof} Consider (\ref{m01}). Since $\mathcal{B}_{2m+2}(u,z)$ is analytic on and inside $\Gamma$ we have by Cauchy's integral formula \begin{equation} \mathcal{B}_{2m+2}(u,z) =\dfrac{1}{2\pi i}\oint_{\left\vert t-z_{0}\right\vert =r_{0}}{\dfrac{\mathcal{B}_{2m+2}\left( {u,t}\right) dt}{ \ t-z }}. \label{BCauchy} \end{equation} On substituting (\ref{eq56b}) into the integrand of (\ref{BCauchy}) and then comparing with (\ref{m01}) we deduce that \begin{equation} \kappa _{2m+2}(u,z) =\dfrac{1}{2\pi i}\oint_{\left\vert t-z_{0}\right\vert =r_{0}}{\dfrac{\varepsilon _{2m+2}(u,t) dt}{ \left\{ \zeta (t) f(t) \right\} ^{1/4}\left(t-z\right) }}, \label{Bkappa} \end{equation} (even though $\varepsilon _{2m+2}(u,z)$ is not analytic at the turning point). 
Therefore, from the definition of $\Gamma$ and from (\ref{eq50}), (\ref{zeds}) and (\ref{Bkappa}), we have \begin{multline} \left\vert \kappa _{2m+2}(u,z) \right\vert \leq \dfrac{ \sup_{t\in \Gamma}\left\vert \varepsilon _{2m+2}(u,t) \right\vert }{2\pi \inf_{t\in \Gamma}\left\vert \zeta (t) f(t) \right\vert ^{1/4}}\oint_{\left\vert t-z_{0}\right\vert =r_{0}}\left\vert { \dfrac{dt}{t-z}}\right\vert \\ =\dfrac{\sup_{t\in \Gamma}\left\vert \varepsilon _{2m+2}(u,t) \right\vert l_{0}(z) }{2\pi \Upsilon }. \label{eq94} \end{multline} Now for $t\in\Gamma $ we have from (\ref{rho}) that $|\xi| \geq \rho $ and hence from (\ref{eq28}) and (\ref{eq29}) \begin{equation} {\beta _{n}(u,\xi) \leq \beta _{n}(u,\rho) ,\ } \gamma _{n}(u,\xi) \leq \gamma _{n}(u,\rho). \label{eq95} \end{equation} Thus from (\ref{eq100}), (\ref{deltan}), (\ref{eq96a}) and (\ref{en}) we have $e_{n,j}(u,t) \leq e_{n}(u) $ for $t\in\Gamma$ and $j=0,\pm 1$. Hence (\ref{BkappaBound}) follows from (\ref{Berror}), (\ref{eq97}), (\ref{eq98}), (\ref{d2m+1}) and (\ref{eq94}). The bound (\ref{AkappaBound}) is similarly proved. \end{proof} \section{Bessel functions of large order} \label{sec5} We illustrate our new error bounds in an application to Airy expansions for Bessel functions, thereby providing error bounds for the uniform asymptotic expansions obtained in \cite{Dunster:2017:COA}. For a classical monograph on Bessel functions see \cite{Watson:1995:ATO}. See also \cite{NIST:DLMF}, Chapters 9 and 10, for a compendium of important properties of these functions. Similar ideas can be used for bounding the errors in other cases, as for example for Laguerre polynomials and Kummer functions \cite{Dunster:2018:USE}. The first step for Bessel functions is to apply the Liouville transformations described in \S 1 to Bessel's equation. To this end, we first note that the functions $w=z^{1/2}J_{\nu }(\nu z) $, $w=z^{1/2}H_{\nu }^{(1) }(\nu z) $ and $w=z^{1/2}H_{\nu }^{\left( 2\right) }(\nu z) $ satisfy \begin{equation} \frac{d^{2}w}{dz^{2}}=\left\{ {\nu ^{2}\frac{1-z^{2}}{z^{2}}-\frac{1}{4z^{2}} }\right\} w. \end{equation} Here $z$ is real or complex, and $\nu $ plays the role of our parameter $u$, which we assume is real and positive. On comparing with (\ref{eq1}) we have \begin{equation} f(z)=\frac{1-z^{2}}{z^{2}},\,g(z)=-\frac{1}{4z^{2}}. \label{fandg} \end{equation} For brevity we only consider the case of \S 3, i.e., $z$ bounded away from the turning point $z_{0}=1$. In a subsequent paper we shall show how our error bounds can be sharpened, including those of \S 4 near the turning point. The Liouville transformation is \begin{equation} \xi =\frac{2}{3}\zeta ^{3/2}=\ln \left\{ {\frac{1+\left( {1-z^{2}}\right)^{1/2}}{z}}\right\} -\left( {1-z^{2}}\right) ^{1/2}, \label{xiBessel} \end{equation} and \begin{equation} W=\left( \frac{1-z^{2}}{\zeta z^2}\right) ^{1/4}w. \end{equation} The transformed variable $\zeta $ is real for real $z\in (0,1)$ ($\zeta \in (0,+\infty )$), and $\zeta (z)$ can be defined by analytic continuation in the whole complex plane cut along the negative real axis. $\xi $ is positive for $z\in (0,1)$ and defined continuously elsewhere. We then obtain (\ref{eq3}) where \begin{equation} \psi (\zeta) =\frac{5}{16\zeta ^{2}}+\frac{\zeta z^{2}\left( { z^{2}+4}\right) }{4\left( {z^{2}-1}\right) ^{3}}.
\label{schwarz} \end{equation} We find from (\ref{eq5}), (\ref{eq8}) - (\ref{eq7}), and (\ref{fandg}) that the coefficients are given by \begin{equation} \hat{E}_{s}(z) =\int_{z}^{\infty }t^{-1}\left( {1-t^{2}} \right) ^{1/2}\hat{F}_{s}(t)\, {dt}\quad \left( {s=1,2,3,\cdots } \right) . \end{equation} Here \begin{equation} \hat{F}_{1}(z)=\frac{{z^{2}(z^{2}+4)}}{{8(z^{2}-1)^{3}}},\,\hat{F} _{2}(z)=\frac{{z}}{{2}\left( 1-z^{2}\right) ^{1/2}}\hat{F}_{1}^{\prime }(z), \end{equation} and \begin{equation} \hat{F}_{s+1}(z)=\frac{{z}}{{2}\left( 1-z^{2}\right) ^{1/2}}\hat{F} _{s}^{\prime }(z)-\frac{1}{2}\sum_{j=1}^{s-1}\hat{F}_{j}(z)\hat{F}_{s-j}(z)\quad \left( {s=2,3,\cdots }\right). \label{recuFs} \end{equation} As shown in \cite{Dunster:2017:COA} these coefficients can be explicitly computed, and in particular they have the form \begin{equation} \hat{E}_{s}(z)=\frac{{P_{s}(z^{2})}}{{(1-z^{2})^{3s/2}}}, \label{coebe} \end{equation} where $P_{s}(z)$ are polynomials of degree $s$ in $z$. We note for the odd terms that \begin{equation} \hat{E}_{2j+1}(z)=\frac{1}{\left( 1-z\right) ^{1/2}}\left[ \frac{{ P_{2j+1}(z^{2})}}{{(1-z^{2})^{3j+1}}\left( 1+z\right) ^{1/2}}\right] \ \left( j=0,1,2\cdots \right) , \end{equation} where the term in the square brackets is meromorphic at $z=1$, as desired. The polynomials $P_{s}$ in (\ref{coebe}) have the properties \begin{equation} P_{2s}(0)=0,\,P_{2s+1}(0)=C_{2s+1}, \end{equation} where $C_{2s+1}$ are the coefficients in the Stirling asymptotic series \begin{equation} \Gamma ({\nu })\sim \left( 2\pi \right) ^{1/2}e^{-{\nu }}{\nu }^{{\nu -(1/2)} }\exp \left\{ \sum_{j=0}^{\infty }\frac{{C_{2j+1}}}{{\nu ^{2j+1}}}\right\} \ ({\nu }\rightarrow \infty). \label{stirling} \end{equation} Defining $C_{2j}=0$ ($j=1,2,3,\cdots $) we then have \begin{equation} \hat{E}_{s}\left( {z^{(0) }}\right) =\hat{E}_{s}\left( {0} \right) ={C_{s}}, \label{ez0} \end{equation} and from (\ref{coebe}) \begin{equation} \hat{E}_{s}\left( {z^{(\pm 1) }}\right) =\hat{E}_{s}\left( {\mp i\infty }\right) =0. \label{ez1} \end{equation} Next, from (\ref{eq10}) and (\ref{eq11}), the following asymptotic solutions are obtained \begin{equation} W_{0}(\nu,\zeta) =\frac{1}{\zeta ^{1/4}}\exp \left\{ -\nu \xi +\sum\limits_{s=1}^{n-1}{(-1) ^{s}\frac{\hat{E} _{s}(z) -C_{s}}{\nu ^{s}}}\right\} \left\{ 1+\eta _{n,0}(\nu,z) \right\} , \end{equation} and \begin{equation} W_{\pm 1}(\nu,\zeta) =\frac{1}{\zeta ^{1/4}}\exp \left\{ \nu \xi +\sum\limits_{s=1}^{n-1}{\frac{\hat{E}_{s}(z) }{\nu ^{s}}}\right\} \left\{ 1+\eta _{n,\pm 1}(\nu,z) \right\} . \end{equation} Let us now match these with the corresponding Bessel functions having the same recessive behavior at the singularities. Firstly, for the one recessive at $z=0$, we note as $z\rightarrow 0$ that \begin{equation} J_{\nu }(\nu z) \sim \frac{1}{\Gamma(\nu +1) } \left( \frac{\nu z}{2}\right) ^{\nu } , \end{equation} and hence using \begin{equation} \xi = \ln \left( 2/z\right) -1 +\mathcal{O}(z), \end{equation} we deduce that \begin{equation} J_{\nu }(\nu z) =\frac{{\nu }^{\nu }}{e^{\nu }\Gamma (\nu +1) }\left(\frac{\zeta }{1-z^{2}}\right) ^{1/4}W_{0}(\nu,\zeta).
\label{JW0} \end{equation} Next, for the solution that vanishes as $z\rightarrow i\infty $, we use \begin{equation} H_{\nu }^{(1) }(\nu z) \sim \left( \frac{2}{\pi { \nu z}}\right) ^{1/2}\exp \left\{ i{\nu z-}\frac{1}{2}\nu \pi i-\frac{1}{4}{\pi i}\right\} , \end{equation} along with \begin{equation} \xi =iz-\tfrac{1}{2}\pi i+\mathcal{O}\left( z^{-1}\right) , \label{xi0} \end{equation} and we arrive at the identification \begin{equation} H_{\nu }^{(1) }(\nu z) =-i\left( \frac{2}{\pi {\nu }}\right) ^{1/2}\left( {\frac{\zeta }{1-z^{2}}}\right) ^{1/4}W_{-1}(\nu,\zeta) . \label{HWm1} \end{equation} We similarly find that \begin{equation} H_{\nu }^{(2)}(\nu z) =i\left( \frac{2}{\pi {\nu } }\right) ^{1/2}\left( {\frac{\zeta }{1-z^{2}}}\right) ^{1/4}W_{1}(\nu ,\zeta) . \end{equation} We now plug these into the general connection formula (\ref{eq15}), and this yields \begin{equation} \lambda _{-1}H_{\nu }^{(1) }(\nu z) =\left( \frac{2 }{\pi {\nu }}\right) ^{1/2}\frac{e^{{\nu }}\Gamma (\nu +1) }{{\nu }^{\nu }}J_{\nu }(\nu z) -\lambda _{1}H_{\nu }^{(2) }(\nu z) . \end{equation} On comparing this with the well-known connection formula for Bessel functions \begin{equation} J_{\nu }(\nu z) =\tfrac{1}{2}\left\{ H_{\nu }^{(1) }(\nu z) +H_{\nu }^{(2) }(\nu z) \right\} , \end{equation} we deduce that \begin{equation} \lambda _{1}=\lambda _{-1}=\left( \frac{1}{2\pi {\nu }}\right) ^{1/2}\frac{ e^{{\nu }}\Gamma (\nu +1) }{{\nu }^{{\nu }}}. \label{lambdaBessel} \end{equation} We note from (\ref{stirling}) that \begin{equation} \lambda _{\pm 1}\sim \exp \left\{ \sum\limits_{j=0}^{\infty }\frac{{ C_{2j+1}}}{{\nu }^{2j+1}}\right\} \\ \left( {\nu }\rightarrow \infty \right) , \end{equation} in accord with (\ref{eq22}), (\ref{ez0}) and (\ref{ez1}). For $z\in T_{0,-1}\cup T_{-1,0}$ (see \Cref{fig:fig1}) we use (\ref{eq42}), (\ref{eq43}), (\ref{JW0}), (\ref{HWm1}) and (\ref{lambdaBessel}) to obtain the exact expressions \begin{multline} A(\nu,z) =\frac{\pi ^{1/2} e^{{\nu }}\Gamma (\nu +1) }{{\nu }^{{\nu+(1/6) }}}{\left( {\dfrac{1-z^{2}}{\zeta }}\right) ^{1/4}} \\ \times \left\{ e^{\pi i/6}\mathrm{Ai}_{-1}^{\prime }\left( {\nu ^{2/3}\zeta } \right) J_{\nu }(\nu z) -\tfrac{1}{2}i\mathrm{Ai}^{\prime }\left( {\nu ^{2/3}\zeta }\right) H_{\nu }^{(1) } (\nu z) \right\} , \end{multline} and \begin{multline} B(\nu,z) = \frac{\pi ^{1/2} e^{{\nu }}\Gamma (\nu +1) }{{\nu }^{{\nu+(1/6) }}}{\left( {\dfrac{1-z^{2}}{\zeta }}\right) ^{1/4}} \\ \times \left\{ \tfrac{1}{2}i{\mathrm{Ai}\left( {\nu ^{2/3}\zeta }\right) } H_{\nu }^{(1) }(\nu z) {-e^{\pi i/6}\mathrm{Ai} _{-1}\left( {\nu ^{2/3}\zeta }\right) }J_{\nu }(\nu z) \right\} , \end{multline} Now from (\ref{eq46}), (\ref{eq47}), (\ref{eq48}), (\ref{fandg}) and (\ref{ez0}) we have \begin{equation} \mathcal{A}_{2m+2}(\nu,z) =\exp \left\{ -\sum \limits_{j=0}^{m}\frac{{C_{2j+1}}}{{\nu }^{2j+1}}\right\} \left( \frac{ z^{2}\zeta }{1-z^{2}}\right) ^{1/4}A(\nu,z) , \end{equation} and \begin{equation} \mathcal{B}_{2m+2}(\nu,z) =\exp \left\{ -\sum \limits_{j=0}^{m}\frac{C_{2j+1}}{{\nu }^{2j+1}}\right\} \left( \frac{z^{2}\zeta }{1-z^{2}}\right) ^{1/4}B(\nu,z) , \end{equation} and hence \begin{multline} \mathcal{A}_{2m+2}(\nu,z) =\pi ^{1/2}{e^{{\nu }}{\nu }^{-{\nu +(5/6)}}\Gamma (\nu) }\exp \left\{ -\sum\limits_{j=0}^{m}{ \dfrac{{C_{2j+1}}}{{\nu }^{2j+1}}}\right\} \\ \times z^{1/2}\left\{ e^{\pi i/6}\mathrm{Ai}_{-1}^{\prime }\left( {\nu^{2/3}\zeta }\right) J_{\nu }(\nu z) -\tfrac{1}{2}i\mathrm{Ai}^{\prime }\left( {\nu ^{2/3}\zeta }\right) H_{\nu }^{(1) }(\nu z) \right\} , \label{eq5.32} 
\end{multline} and \begin{multline} \mathcal{B}_{2m+2}(\nu,z) =\pi ^{1/2}e^{{\nu }}{{\nu }^{-{\nu +(5/6)}}}\Gamma (\nu) \exp \left\{ -\sum\limits_{j=0}^{m}{ \dfrac{{C_{2j+1}}}{{\nu }^{2j+1}}}\right\} \\ \times z^{1/2}\left\{ \tfrac{1}{2}i{\mathrm{Ai}\left( {\nu ^{2/3}\zeta } \right) }H_{\nu }^{(1) }(\nu z) {-e^{\pi i/6}\mathrm{ Ai}_{-1}\left( {\nu ^{2/3}\zeta }\right) }J_{\nu }(\nu z) \right\} . \label{eq5.33} \end{multline} These are exact expressions, and can be used to compare numerically the coefficient functions with their approximations, and in particular the exact errors with our bounds (see below). Next, we have from an application of \Cref{thm:main1} \begin{multline} \mathcal{A}_{2m+2}(\nu,z) \\ =\left( {\dfrac{z^{2}\zeta }{1-z^{2} }}\right) ^{1/4} \left[ {\exp \left\{ \sum\limits_{s=1}^{m}{\dfrac{\mathcal{\tilde{E}} _{2s}(z) }{{\nu }^{2s}}}\right\} \cosh \left\{ \sum\limits_{s=0}^{m}{\dfrac{\mathcal{\tilde{E}}_{2s+1}(z) }{{ \nu }^{2s+1}}}\right\} }+\frac{1}{2}\tilde{\varepsilon}_{2m+2}(\nu,z) \right] , \label{eq5.34} \end{multline} and \begin{multline} \mathcal{B}_{2m+2}(\nu,z) =\dfrac{1}{{\nu }^{1/3}}\left\{ {\dfrac{z^{2}}{\zeta \left( 1-z^{2}\right) }}\right\} ^{1/4} \\ \times \left[ \exp \left\{\sum\limits_{s=1}^{m}{\dfrac{\mathcal{E} _{2s}(z) }{{\nu }^{2s}}}\right\} \sinh \left\{ \sum\limits_{s=0}^{m}\dfrac{\mathcal{E}_{2s+1}(z) }{\nu ^{2s+1}}\right\} +\frac{1}{2}\varepsilon _{2m+2}(\nu,z) \right] , \label{eq5.35} \end{multline} where $\mathcal{E}_{s}(z) $ and $\tilde{\mathcal{E}} _{s}(z) $ are given by (\ref{eq40}) and (\ref{eq38}), and for $\nu >0$ and $z\in T_{0,-1}\cup T_{-1,0}$ \begin{multline} \left\vert \tilde{\varepsilon}_{2m+2}(\nu,z) \right\vert \\ \leq \dfrac{1}{{\nu ^{2m+2}}}\exp \left\{ \sum\limits_{s=1}^{2m+1}{\dfrac{\mathrm{Re} \, \mathcal{\tilde{E}}_{s}(z) }{{ \nu }^{s}}}\right\} \tilde{e}_{2m+2,-1}(\nu,z) \left\{ 1+ \dfrac{\tilde{e}_{2m+2,-1}(\nu,z) }{2{{\nu }^{2m+2}}}\right\} ^{2} \\ +\dfrac{1}{{{\nu }^{2m+2}}}\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1)^{s}\dfrac{\mathrm{Re} \, \mathcal{\tilde{E}}_{s}(z) }{{\nu }^{s}}} \right\} \tilde{e}_{2m+2,0}(\nu,z) \left\{ 1+\dfrac{\tilde{e} _{2m+2,0}(\nu,z) }{2{{\nu }^{2m+2}}}\right\} ^{2}, \label{eq5.36} \end{multline} in which (for $j=0,-1$) \begin{multline} \tilde{e}_{2m+2,j}(\nu,z) ={{\nu }^{2m+2}\delta } _{2m+2,j}(\nu) \\ +\omega _{2m+2,j}(\nu,z) \exp \left\{ {\nu }^{-1}\varpi _{2m+2,j}(\nu,z) +{\nu }^{-2m-2}\omega _{2m+2,j}(\nu ,z) \right\} \\ +\tilde{\gamma}_{2m+2}(\nu,\xi) \exp \left\{ {{\nu }^{-1} \tilde{\beta}_{2m+2}(\nu,\xi) +{\nu }^{-2m-2}\tilde{\gamma} _{2m+2}(\nu,\xi) }\right\} . 
\end{multline} Here ${\delta }_{2m+2,0}(\nu) =0$, and from (\ref{exactdelta}), (\ref{ez0}) and (\ref{lambdaBessel}) \begin{equation} {\delta }_{2m+2,\pm 1}(\nu) =\left( \frac{1}{2\pi }\right) ^{1/2}\frac{e^{{\nu }}\Gamma (\nu) }{{\nu }^{{\nu -(1/2)}}} \exp \left\{ -\sum\limits_{j=0}^{m}\frac{C_{2j+1}}{{\nu }^{2j+1}} \right\} -1; \end{equation} in addition \begin{multline} \omega _{2m+2,0}(\nu,z) =2\int_{0}^{z}{\left\vert \dfrac{{ \hat{F}_{2m+2}(t) }\left( 1-t^{2}\right) ^{1/2}}{t}{dt} \right\vert } \\ +\sum\limits_{s=1}^{2m+1}\dfrac{1}{{\nu ^{s}}}\int_{0}^{z}{{\left\vert { \sum\limits_{k=s}^{2m+1}}\dfrac{{{\hat{F}_{k}(t) \hat{F} _{s+2m-k+1}(t) }}\left( 1-t^{2}\right) ^{1/2}}{t}{dt} \right\vert }}, \label{eq5.39} \end{multline} \begin{equation} \varpi _{2m+2,0}(\nu,z) =4\sum\limits_{s=0}^{2m}\frac{1}{{\nu ^{s}}}\int_{0}^{z}{{\left\vert \frac{{\hat{F}_{s+1}(t) }\left( 1-t^{2}\right) ^{1/2}}{t}{dt}\right\vert }}, \label{eq5.40} \end{equation} and $\omega _{2m+2,-1}(\nu,z) $ and $\varpi _{2m+2,-1} (\nu,z) $ are defined in the same way, except that the paths of integration run from $z$ to infinity in the upper half plane; in both cases the paths can be taken to be straight lines, for example vertical ones. Similarly \begin{multline} \left\vert \varepsilon _{2m+2}(\nu,z) \right\vert \\ \leq \dfrac{ 1}{{{\nu }^{2m+2}}}\exp \left\{ \sum\limits_{s=1}^{2m+1}\dfrac{\mathrm{Re} \,\mathcal{E} _{s}(z) }{{\nu }^{s}}\right\} e_{2m+2,-1} (\nu,z) \left\{ 1+\dfrac{e_{2m+2,-1}(\nu,z) }{2{{\nu }^{2m+2}} }\right\} ^{2} \\ +\dfrac{1}{{\nu }^{2m+2}}\exp \left\{ \sum\limits_{s=1}^{2m+1}{(-1)^{s}\dfrac{\mathrm{Re} \,\mathcal{E}_{s}(z) }{{\nu }^{s}}}\right\} e_{2m+2,0}(\nu,z) \left\{ 1+\dfrac{e_{2m+2,0} (\nu,z) }{2{{\nu }^{2m+2}}}\right\} ^{2}, \label{eq5.41} \end{multline} where \begin{multline} e_{2m+2,j}(\nu,z) ={{\nu }^{2m+2}\delta }_{2m+2,j}(\nu) \\ +\omega _{2m+2,j}(\nu,z) \exp \left\{ {\nu }^{-1}\varpi _{2m+2,j}(\nu,z) +{\nu }^{-2m-2}\omega _{2m+2,j}(\nu,z) \right\} \\ +\gamma _{2m+2}(\nu,\xi) \exp \left\{ {\nu }^{-1}\beta _{2m+2}(\nu,\xi) +{\nu }^{-2m-2}\gamma _{2m+2} (\nu,\xi) \right\} . \end{multline} Before proceeding with numerical computations, let us illustrate how the above asymptotic solutions can be matched with the exact solutions. To do so, we consider the solution recessive at $z=0$; the other solutions are handled similarly. Now, by uniqueness of such solutions we immediately deduce from the $l=0$ solution of \Cref{thm:main1} that \begin{equation} J_{\nu }(\nu z) =c_{m,0}(\nu) z^{-1/2}\left\{ \mathrm{Ai}\left( {\nu ^{2/3}\zeta }\right) \mathcal{A}_{2m+2}\left( {\nu ,z} \right) +\mathrm{Ai}^{\prime }\left( {\nu ^{2/3}\zeta }\right) \mathcal{B} _{2m+2}(\nu,z) \right\} , \label{5.43} \end{equation} for some constant $c_{m,0}(\nu) $. Letting $z\rightarrow 0$ in (\ref{eq5.34}) and (\ref{eq5.35}) and referring to (\ref{xiBessel}) and (\ref{xi0}) we have \begin{equation} \mathcal{A}_{2m+2}(\nu,z) \sim z^{1/2}\zeta ^{1/4}\left[ \cosh \left\{ \sum\limits_{j=0}^{m}\dfrac{C_{2j+1}}{{\nu }^{2j+1}} \right\} +\frac{1}{2}\tilde{\varepsilon}_{2m+2}(\nu,0) \right] , \label{5.44} \end{equation} and \begin{equation} \mathcal{B}_{2m+2}(\nu,z) \sim \frac{z^{1/2}}{\nu^{1/3} \zeta^{1/4}} \left[ \sinh \left\{ \sum\limits_{j=0}^{m}\dfrac{ C_{2j+1}}{{\nu }^{2j+1}}\right\} +\frac{1}{2}\varepsilon _{2m+2}(\nu,0) \right] . \label{5.45} \end{equation} Although we do not know $\tilde{\varepsilon}_{2m+2}(\nu,0) $ and $\varepsilon _{2m+2}(\nu,0) $ explicitly, we can bound these values.
Specifically, from the above bounds we see that \begin{equation} \left\vert \tilde{\varepsilon}_{2m+2}(\nu,0) \right\vert \leq \dfrac{1}{{{\nu }^{2m+2}}}\exp \left\{ \sum\limits_{j=0}^{m}\frac{C _{2j+1}}{{\nu }^{2j+1}}\right\} \tilde{e}_{2m+2,-1}(\nu ,0) \left\{ 1+\dfrac{\tilde{e}_{2m+2,-1}(\nu,0) }{2{{\nu }^{2m+2}} }\right\} ^{2}, \end{equation} since $\tilde{e}_{2m+2,0}(\nu,0) =0$, where in this bound \begin{multline} \tilde{e}_{2m+2,-1}(\nu,0) ={{\nu }^{2m+2}\delta } _{2m+2,-1}(\nu) \\ +\omega _{2m+2,-1}(\nu,0) \exp \left\{ {{\nu }^{-1}\varpi _{2m+2,-1}(\nu,0) +{\nu }^{-2m-2}\omega _{2m+2,-1}\left( {\nu ,0}\right) }\right\} . \end{multline} Similarly $\varepsilon _{2m+2}(\nu,0)$ satisfies the same bound, since $e_{2m+2,0}(\nu,0) =0$ and the analogously defined $e_{2m+2,-1}(\nu,0)$ is the same as $\tilde{e}_{2m+2,-1}(\nu,0)$. On using (\ref{5.43}) - (\ref{5.45}), (\ref{Aiinfinity}) and (\ref{eq78a}) we arrive at \begin{multline} \dfrac{\left( \frac{1}{2}{\nu }\right) ^{{\nu }}}{\Gamma \left( {\nu +1} \right) }=\dfrac{c_{m,0}(\nu) \exp \left( -{\nu }\ln \left( 2\right) +{\nu }\right) }{2\pi ^{1/2}{\nu }^{1/6}} \\ \times \left[ \exp \left\{ -{\sum\limits_{j=0}^{m}}\dfrac{C_{2j+1}}{{\nu } ^{2j+1}}\right\} +\frac{1}{2}\tilde{\varepsilon}_{2m+2} (\nu,0) -\frac{1}{2}\varepsilon_{2m+2}(\nu,0) \right] , \end{multline} and therefore the desired value of the proportionality constant is given by \begin{multline} c_{m,0}(\nu) =\dfrac{2\pi ^{1/2}{\nu }^{{\nu -(5/6) }}}{e^{\nu }\Gamma (\nu) } \\ \times \left[ {\exp \left\{ -{\sum\limits_{j=0}^{m}}\dfrac{C_{2j+1}}{{\nu } ^{2j+1}}\right\} }+\frac{1}{2}\tilde{\varepsilon}_{2m+2}(\nu,0) -\frac{1}{2}\varepsilon _{2m+2}(\nu,0) \right] ^{-1}. \end{multline} The identification of the Hankel functions can be done similarly. We omit details. \subsection{Numerical examples} Examples of the performance of the error bounds given in (\ref{eq5.36}) and (\ref{eq5.41}) are shown in Figures \ref{fig:fig4} and \ref{fig:fig5}. \begin{figure}[h!] \centering \includegraphics[scale=0.35]{Fig4.eps} \caption{Comparison of the bounds given in (\ref{eq5.36}) and (\ref{eq5.41}) with the true numerical accuracy obtained when using (\ref{eq5.34}) and (\ref{eq5.35}) to approximate (\ref{eq5.32}) and (\ref{eq5.33}), respectively, for a fixed value of $m$ ($m=5$) and $\nu$ ($\nu=100$).} \label{fig:fig4} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.35]{Fig5.eps} \caption{Comparison of the bounds given in (\ref{eq5.36}) and (\ref{eq5.41}) with the true numerical accuracy obtained when using (\ref{eq5.34}) and (\ref{eq5.35}) to approximate (\ref{eq5.32}) and (\ref{eq5.33}), respectively, for a fixed value of $m$ ($m=5$) and $\nu$ ($\nu=10$).} \label{fig:fig5} \end{figure} In the figures, these bounds are compared with the true numerical accuracy obtained when using (\ref{eq5.34}) and (\ref{eq5.35}) to approximate (\ref{eq5.32}) and (\ref{eq5.33}), respectively, for a fixed value of $m$ ($m=5$) and two different values of $\nu$ ($\nu=10,\,100$). The computation of (\ref{eq5.32}) and (\ref{eq5.33}) is performed using Maple with a large number of digits. For the bounds, two different numerical quadrature methods have been considered to evaluate the integrals: (i) a Gauss-Legendre quadrature with 30 nodes for the integrals in (\ref{eq5.39}) and (\ref{eq5.40}); (ii) an adaptive quadrature method over a truncated interval for the integrals defining $\omega _{2m+2,-1}(\nu,z) $ and $\varpi _{2m+2,-1}(\nu,z) $.
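To make method (i) concrete, the following minimal sketch (ours, purely illustrative; it treats only the first, $s=0$, term of (\ref{eq5.40}) for real $0<z<1$, and the function and variable names are our own) shows how such a coefficient integral can be evaluated with a 30-node Gauss--Legendre rule: \begin{verbatim}
import numpy as np

def F1hat(t):
    # explicit first coefficient: t^2 (t^2 + 4) / (8 (t^2 - 1)^3)
    return t**2 * (t**2 + 4.0) / (8.0 * (t**2 - 1.0)**3)

def varpi_first_term(z, nodes=30):
    # s = 0 term of the varpi integral:
    #   4 * int_0^z | F1hat(t) sqrt(1 - t^2) / t | dt,
    # computed by Gauss-Legendre quadrature mapped from [-1, 1] to [0, z].
    x, w = np.polynomial.legendre.leggauss(nodes)
    t = 0.5 * z * (x + 1.0)            # affine map of the nodes to [0, z]
    integrand = np.abs(F1hat(t) * np.sqrt(1.0 - t**2) / t)
    return 4.0 * (0.5 * z) * np.dot(w, integrand)
\end{verbatim} Higher coefficients $\hat{F}_{s}$ would be generated from the recursion (\ref{recuFs}) and handled in the same way; note that the apparently singular factor $1/t$ is harmless here, since the integrand is $\mathcal{O}(t)$ as $t\rightarrow 0$.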
As can be seen in the figures, the bounds (\ref{eq5.36}) and (\ref{eq5.41}) track the exact errors quite well, even for moderate values of $\nu$. Moreover, the bound (\ref{eq5.36}) is sharper than (\ref{eq5.41}), as expected (see \Cref{remark6}). \newpage
\section{Introduction} \paragraph*{Motivation.} The possibility of dynamically building code instructions as the result of text manipulation is a key aspect of dynamic programming languages. With reflection, programs can turn text, which can be built at run-time, into executable code \cite{RichardsHBV11}. These features are often used in code protection and tamper resistant applications, employing camouflage to escape attack or detection \cite{DMavrogiannopoulosKP11}, in malware, in mobile code, in web servers, in code compression, and in code optimisation, e.g., in Just-in-Time (JIT) compilers employing optimised run-time code generation. While the use of dynamic code generation may simplify considerably the {\em art and performance of programming\/}, this practice is also highly dangerous, making the code prone to unexpected behaviour and malicious exploits of its dynamic vulnerabilities, such as code/object-injection attacks for privilege escalation, data-base corruption, and malware propagation. It is clear that more advanced and secure functionalities based on reflection could be permitted if we better mastered how to safely generate, analyse, debug, and deploy programs that dynamically generate and manipulate code. There are many good reasons to analyse when a program builds strings that can be later executed as code. Consider the code in Fig.~\ref{rasom}. This is a template of a ransomware that calls a method (``{\tt open}'') by manipulating an obfuscated string which is built in the code. Analysing the flow of the strings corresponds here to approximating the set of strings that may be turned into code at run-time. This possibility would provide important benefits in the analysis of dynamic languages, without ignoring reflection, in automated deobfuscation of dynamic obfuscators, and in the analysis of code injection and XSS attacks. \begin{figure} \begin{center} \includegraphics[scale=.7]{ransomware.pdf} \end{center} \caption{A template of an obfuscated ransomware attack}\label{rasom} \end{figure} \paragraph*{The problem.} A major problem in dynamic code generation is that static analysis becomes extremely hard, if not impossible. This is because the program's essential data structures, such as the control-flow graph and the system of recursive equations associated with the program to analyse, are themselves dynamically mutating objects. In a sentence: {\em ``You can't check code you don't see''\/} \cite{BesseyBCCFHHKME10}. The standard way of treating dynamic code generation in programming is to prevent or even banish it, therefore restricting the expressivity of our development tools. Other approaches instead try to combine static and dynamic analysis to predict the code structures dynamically generated \cite{Crispo2015,BoddenSSOM11}. Because of this difficulty, most analyses of dynamic languages do not consider reflection \cite{AnCFH11}, thus being inherently unsound for these languages, or implement ad-hoc or pattern-driven transformations in order to remove reflection \cite{JensenJM12}. The design and implementation of a sound static analyser for self-mutating programs is still an open challenge for static program analysis. \paragraph*{Contribution.} In this paper we solve this problem by treating the code as any other dynamic structure that can be statically analysed by abstract interpretation \cite{CC77}.
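To fix intuitions, the following fragment (ours, purely illustrative and unrelated to any actual malware; we use Python's \texttt{eval} in place of the reflection primitive of Fig.~\ref{rasom}) shows the kind of string-to-code pattern our analysis is meant to capture: \begin{verbatim}
# Illustration only: a method name is rebuilt by string
# manipulation and only then turned into code.
key = "nepo"                          # obfuscated name, reversed
call = key[::-1] + "('log.txt','w')"  # builds "open('log.txt','w')"
eval(call)                            # the string is executed as code
\end{verbatim} A static analysis that tracks the possible values of \texttt{call} as a regular language of strings can recognise that this string may evaluate to executable code, which is precisely the task addressed below.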
We introduce SEA, a proof of concept for a fully automatic sound-by-construction abstract interpreter for string executability analysis of dynamic languages employing finitely nested (bounded) reflection and dynamic code generation. SEA carries out a generic standard numerical analysis (in our case a well-known interval analysis) together with a new string executability analysis. Strings are approximated in an abstract domain of finite state automata (FA), with basic operations implemented as symbolic transducers and widening for enforcing termination. The idea in SEA is to re-factor reflection into a program whose semantics is a sound over-approximation of the semantics of the dynamically generated code. This allows us to continue with the standard analysis when the reflection is called on an argument that evaluates to code. In order to recognise whether approximated strings correspond to executable instructions, we approximate a parser as a symbolic transducer and, in case of success, we synthesise a program from the FA approximation of the computed string. The static analysis of reflection triggers a call to the same abstract interpreter on the code synthesised from the result of the static string executability analysis at that program point. The choice of regular languages for approximating code structures provides efficient implementations of both the analysis and the code generation at analysis time. The synthesised program reflects most of the structures of the code dynamically generated, yet loses some aspects, such as the number of iterations of loops. Soundness here means that, if the approximated code extracted by the abstract interpreter is accepted by the parser, then the program may dynamically generate executable code at run-time. Moreover, because of the approximation of dynamically generated code structures in a regular language of instructions, any sound and terminating abstract interpretation for safety (i.e., prefix-closed) properties of the synthesised code over-approximates the concrete semantics of the dynamically generated code. This means that a sound over-approximation of the concrete semantics of programs dynamically generating and executing code by reflection is possible for safety properties, by combining string analysis and code synthesis in abstract interpretation. Even if nested reflection is not a common practice, the case of potentially unbounded nested reflection, which may lead to non-termination of the analysis, can be handled as in \cite{JensenJM12} by fixing a maximal degree of nesting allowed. In this case, for programs exceeding the maximal degree of nested reflections, we may lose soundness. We briefly discuss how in SEA we may achieve an always sound analysis also for unbounded reflection, based on a widening with threshold over the recursive applications of the abstract interpreter. \section{Related Works} The analysis of strings is nowadays a relatively common practice in program analysis due to the widespread use of dynamic scripting languages. Examples of analyses for string manipulation are in \cite{DohKS09,ChristensenMS03,YuAB11,Thiemann05,Minamide05,imDS13}.
The use of symbolic (grammar-based) objects in abstract domains is also not new (see \cite{CC95a,HeintzeJ94,Venet99}), and some works explicitly use transducers for string analysis in script sanitisation (see for instance \cite{HooimeijerLMSV11} and \cite{YuAB11}), all recognising that specifying the analysis in terms of abstract interpretation makes it suitable for being combined with other analyses, with a better potential in terms of tuning accuracy and cost. None of these works use string analysis for analysing executability of dynamically generated code. In \cite{JensenJM12}, the authors introduce an automatic code rewriting technique removing \texttt{eval} constructs in JavaScript applications. This work has been inspired by the work of Richards et al. \cite{RichardsHBV11}, showing that \texttt{eval} is widely used; nevertheless, in many cases its use can simply be replaced by JavaScript code without \texttt{eval}. In \cite{JensenJM12} the authors integrate a refactoring of the calls to \texttt{eval} into the TAJS data-flow analyzer. TAJS performs inter-procedural data-flow analysis on an abstract domain of objects capturing whether expressions evaluate to constant values. In this case \texttt{eval} calls can be replaced with an alternative code that does not use \texttt{eval}. It is clear that code refactoring is possible only when the abstract analysis recognises that the arguments of the \texttt{eval} call are constants. Moreover, they handle the presence of nested \texttt{eval} by fixing a maximal degree of nesting, but they set this degree to $1$, since, as they claim, deeper nesting is not often encountered in practice. The solution we propose allows us to go beyond constant values and refactor code also when the arguments of \texttt{eval} are elements of a regular language of strings. While this can be safely used for analysing safety properties of dynamically generated code, the use of our method for code refactoring has to take into account non-terminating code introduced by widening and regular language approximation. A more detailed comparison with TAJS is discussed in Sect.~\ref{evaluation}. Static analysis for a static subset of PHP (i.e., ignoring {\tt eval}-like primitives) has been developed in \cite{BG09}. Static {\em taint analysis\/} keeping track of values derived from user inputs has been developed for self-modifying code by partial derivation of the Control-Flow-Graph \cite{WangJZL08}. The approach is limited to taint analysis, e.g., for limiting code-injection attacks. Staged information flow for JavaScript in \cite{ChughMJL09} with {\em holes\/} provides a conditional (\`a la abduction analysis in \cite{Giaco98}) static analysis of dynamically evaluated code. Symbolic execution-based static analyses have been developed for scripting languages, e.g., PHP, including primitives for code reflection, still at the price of introducing false negatives \cite{XieA06}. We are not aware of effective general purpose sound static analyses handling self-modifying code for high-level scripting languages. On the contrary, a huge effort was devoted to bringing static type inference to object-oriented dynamic languages (e.g., see \cite{AnCFH11} for an account in Ruby) but with a different perspective: {\em Bring into dynamic languages the benefits of static ones -- well-typed programs don't go wrong\/}. Our approach is different: {\em Bring into static analysis the possibility of handling dynamically mutating code}.
A similar approach is in \cite{AnckaertMB06} and \cite{PredaGD15} for binary executables. The idea is that of extracting, by a dynamic analysis, a code representation which is descriptive enough to include most code mutations, and then to perform the analysis on a linearization of this code. On the semantics side, since the pioneering work on certifying self-modifying code in \cite{CSV07}, the approach to self-modifying code consists in treating machine instructions as regular mutable data structures, and in incorporating a logic dealing with code mutation within Hoare-style logics for program verification. TamiFlex \cite{BoddenSSOM11} also synthesises a program at every \texttt{eval} call by considering the code that has been executed during some (dynamically) observed execution traces. The static analysis can then proceed with the code so obtained, without \texttt{eval}. It is sound only with respect to the considered execution traces, producing a warning otherwise. \section{Preliminaries}\label{sft-sfa} \paragraph*{Mathematical Notation.} $S^*$ is the set of all finite sequences of elements in $S$. We often use bold letters to denote them. If $\mathbf{s} = s_1 \dots s_n \in S^\ast$, $s_i \in S$ is the $i$-th element and $|\mathbf{s}| \in \mathbb{N}$ its length. If $\mathbf{s}_1, \mathbf{s}_2 \in S^\ast$, $\mathbf{s}_1 \cdot \mathbf{s}_2\in S^\ast$ is their concatenation. A set $L$ with a partial ordering relation $\leq$ is a poset and it is denoted as $\tuple{L,\leq}$. Lattices $L$ with ordering $\leq$, least upper bound (lub) $\vee$, greatest lower bound (glb) $\wedge$, greatest element (top) $\top$, and least element (bottom) $\bot$ are denoted $\tuple{L,\leq,\vee,\wedge,\top,\bot}$. Given $f : S \rarr{} T$ and $g : T \rarr{} Q$ we denote with $g \circ f : S \rarr{} Q$ their composition, \emph{i.e.,\ } $g \circ f = \lambda x.g(f(x))$. For $f,g : L \rarr{} D$ on complete lattices $f\sqcup g$ denotes the point-wise least upper bound, \emph{i.e.,\ } $f\sqcup g = \lambda x.f(x)\vee g(x)$. $f$ is \emph{additive (co-additive)} if for any $Y \subseteq L, f(\vee_L Y ) = \vee_D f(Y)$ ($f(\wedge_L Y ) = \wedge_D f(Y ))$. The additive lift of a function $f:L\rarr{} D$ is the function $\lambda X\subseteq L.\; \sset{f(x)}{x\in X} \in\wp(L)\rarr{} \wp(D)$. We will often identify a function and its additive lift. Continuity holds when $f$ preserves {\it lubs\/} of chains. For a continuous function $f$: ${\sf lfp\/}(f) = \bigwedge\sset{x}{x=f(x)}=\bigvee_{n\in\mathbb{N}}f^n(\bot)$ where $f^0(\bot)=\bot$ and $f^{n+1}(\bot)=f(f^n(\bot))$. \paragraph*{Abstract Interpretation.} Abstract interpretation establishes a correspondence between a concrete semantics and an approximated one called abstract semantics \cite{CC77,CC79}. In a Galois Connection (GC) framework, if $C$ and $A$ are complete lattices, a pair of monotone functions $\alpha: C \rarr{} A$ and $\gamma: A \rarr{} C$ forms a GC between $C$ and $A$ if for every $x \in C$ and $y \in A$ we have $\ok{\alpha(x) \leq_A y \Leftrightarrow x \leq_C \gamma(y)}$. $\alpha$ (resp.\ $\gamma$) is the \emph{abstraction} (resp.\ \emph{concretisation}) and it is additive (resp.\ co-additive). Weaker forms of correspondence are possible, e.g., when $A$ is not a complete lattice or when only $\gamma$ exists. In all cases, relative precision in $A$ is given by comparing the meaning of abstract objects in $C$, i.e., $x_1\leq_A x_2$ if $\gamma(x_1)\leq_C \gamma(x_2)$.
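As a toy illustration of these definitions (ours, not part of SEA), consider the classical interval abstraction of finite sets of integers, where the GC property can be checked mechanically: \begin{verbatim}
# Toy Galois connection between finite sets of integers and intervals.
def alpha(S):                     # abstraction: least interval covering S
    return (min(S), max(S))

def leq_A(i, j):                  # abstract order: interval containment
    return j[0] <= i[0] and i[1] <= j[1]

def subset_gamma(S, j):           # S <=_C gamma(j), without building gamma(j)
    return all(j[0] <= x <= j[1] for x in S)

S, Y = {1, 3, 7}, (0, 10)
# GC property: alpha(S) <=_A Y  iff  S <=_C gamma(Y)
assert leq_A(alpha(S), Y) == subset_gamma(S, Y)
\end{verbatim} This interval domain is precisely the kind of abstraction used for the numerical component of SEA.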
If $\ok{f:C\rarr{}C}$ is a continuous function and $A$ is an abstraction of $C$ by means of the GC $\tuple{\alpha,\gamma}$, then $f$ always has a {\em best correct approximation\/} in $A$, $\ok{f^A:A\longrightarrow A}$, defined as $\ok{f^{A} \triangleq \alpha \circ f \circ \gamma}$. Any approximation $\ok{f^\sharp:A\rarr{}A}$ of $f$ in $A$ is {\em sound\/} if $\ok{f^A\sqsubseteq f^\sharp}$. In this case we have the fix-point soundness $\ok{\alpha({\sf lfp\/} f)\leq {\sf lfp\/}(f^A)\leq {\sf lfp\/}(f^\sharp)}$ (cf.\ \cite{CC77}). $A$ satisfies the ascending chain condition (ACC) if every strictly ascending chain is finite. When $A$ is not ACC or when it lacks the limits of chains, convergence to the limit of the fix-point iterations can be ensured through widening operators. A \emph{widening operator} $\triangledown: A \times A \rightarrow A$ approximates the lub, i.e., $\forall x,y \in A. x,y \leq_A (x \triangledown y)$, and it is such that for any increasing chain $x_1 \leq x_2 \leq \dots \leq x_n \leq \dots$ the chain defined by $w^0 = \bot$ and $w^{i+1} = w^i \triangledown x_i$ is eventually stationary. \paragraph*{Finite State Automata (FA).} An FA $A$ is a tuple $(Q,\delta,q_0,F, \Sigma)$, where $Q$ is the set of states, $\delta\subseteq Q\times \Sigma \times Q$ is the transition relation, $q_0 \in Q$ is the initial state, $F \subseteq Q$ is the set of final states and $\Sigma$ is the finite alphabet of symbols. An element $(q,\sigma,q')\in\delta$ is called a transition and is denoted $q' \in \delta(q,\sigma)$. Let $\omega \in \Sigma^\ast$; $\hat{\delta}:Q \times \Sigma^\ast \rightarrow \wp(Q)$ is the transitive closure of $\delta$: $\hat{\delta}(q,\epsilon) = \{q\}$ and $\ok{\hat{\delta}(q,\omega \sigma) = \bigcup_{q'\in\hat{\delta}(q,\omega)}\delta(q',\sigma)}$. $\omega \in \Sigma^\ast$ is accepted by $A$ if $\hat{\delta}(q_0,\omega) \cap F \neq \varnothing$. The set of all these strings defines the language $\cL(A)$ accepted by $A$. Given an FA $A$ and a partition $\pi$ over its states, we denote as $A/\pi = (Q',\delta',q_0',F', \Sigma)$ the \emph{quotient automaton} \cite{bookComp}. \paragraph*{Symbolic Finite Transducers (SFT).} We follow \cite{VeanesHLMB12} in the definition of SFTs and of their background structure. Consider a background universe $\cU_\tau$ of elements of type $\tau$; we write $\bB$ for the boolean type. Terms and formulas are defined by induction over the background language and are well-typed. Terms of type $\bB$ are treated as formulas. $t:\tau$ denotes a term $t$ of type $\tau$, and $\mathit{FV}(t)$ denotes the set of its free variables. A term $t:\tau$ is \emph{closed} when $\mathit{FV}(t) = \emptyset$. A closed term $t$ has semantics $\grass{t}$. As usual $t[x/v]$ denotes the substitution of a variable $x:\tau$ with a term $v:\tau$. A $\lambda$-\emph{term} $f$ is an expression of the form $\lambda x . t$ where $x:\tau'$ is a variable and $t:\tau''$ is a term such that $\mathit{FV}(t) \subseteq \{x\}$. The $\lambda$-term $f$ has type $\tau' \rightarrow \tau''$ and its semantics is a function $\grass{f}: \cU_{\tau'} \rightarrow \cU_{\tau''}$ that maps $a \in \cU_{\tau'}$ to $\grass{t[x/a]} \in \cU_{\tau''}$. Let $f$ and $g$ range over $\lambda$-terms. A $\lambda$-term of type $\tau \rightarrow \bB$ is called a $\tau$-predicate. Given a $\tau$-predicate $\varphi$, we write $a \in \grass{\varphi}$ for $\grass{\varphi}(a) = \mathit{true}$. Moreover, $\grass{\varphi}$ can be seen as the subset of $\cU_{\tau}$ that satisfies $\varphi$.
$\varphi$ is \emph{unsatisfiable} when $\grass{\varphi} = \emptyset$ and \emph{satisfiable} otherwise. A label theory \cite{VeanesHLMB12} for $\tau' \rightarrow \tau''$ is associated with an effectively enumerable set of $\lambda$-terms of type $\tau' \rightarrow \tau''$ and an effectively enumerable set of $\tau'$-predicates that is effectively closed under Boolean operations and relative difference, i.e., $\grass{\varphi \wedge \psi} = \grass{\varphi} \cap \grass{\psi}$, and $\grass{\neg\varphi} = \cU_{\tau'} \smallsetminus \grass{\varphi}$. Let $\tau^\ast$ be the type of sequences of elements of type $\tau$. A Symbolic Finite Transducer \cite{VeanesHLMB12} (SFT) $T$ over $\tau' \rightarrow \tau''$ is a tuple $T = \tuple{Q, q^0, F,R}$, where $Q$ is a finite set of states, $q^0 \in Q$ is the initial state, $F \subseteq Q$ is the set of final states and $R$ is a set of rules $(p,\varphi,\mathbf{f},q)$ where $p,q \in Q$, $\varphi$ is a $\tau'$-predicate and $\mathbf{f}$ is a sequence of $\lambda$-terms over a given label theory for $\tau' \rightarrow \tau''$. A rule $(p,\varphi,\mathbf{f},q)$ of an SFT $T$ is denoted as $p \stackrel{\varphi/\mathbf{f}}{\longrightarrow} q$. The sequence of $\lambda$-terms $\mathbf{f}:(\tau' \rightarrow \tau'')^\ast$ can be treated as a function $\lambda x. [\mathbf{f}_0(x), \dots, \mathbf{f}_k(x)]$ where $k = |\mathbf{f}| - 1$. Concrete transitions are induced by rules as follows. Let $p,q \in Q$, $a \in \cU_{\tau'}$ and $\mathbf{b} \in \cU_{\tau''}^\ast$; then: $\ok{ p \stackrel{a/\mathbf{b}}{\longrightarrow_T} q \; \Leftrightarrow \; \exists\, p \stackrel{\varphi/\mathbf{f}}{\longrightarrow} q \in R: a \in \grass{\varphi} \, \wedge \, \mathbf{b} = \grass{\mathbf{f}}(a) }$ Given two sequences $\mathbf{a} \in \cU_{\tau'}^\ast$ and $\mathbf{b} \in \cU_{\tau''}^\ast$, we write $q \stackrel{\mathbf{a}/\mathbf{b}}{\twoheadrightarrow} p$ when there exists a path of transitions from $q$ to $p$ in $T$ with input sequence $\mathbf{a}=\mathbf{a}_0\mathbf{a}_1\cdots\mathbf{a}_n$ and output sequence $\mathbf{b} = \mathbf{b}^0 \mathbf{b}^1 \cdots \mathbf{b}^n$, $n = |\mathbf{a}| -1$ and $\mathbf{b}^i$ denoting a subsequence of $\mathbf{b}$, such that: $ \ok{ q = p_0 \stackrel{\mathbf{a}_0/\mathbf{b}^0}{\longrightarrow} p_1 \stackrel{\mathbf{a}_1/\mathbf{b}^1}{\longrightarrow} p_2 \dots p_n \stackrel{\mathbf{a}_n/\mathbf{b}^n}{\longrightarrow} p_{n+1} = p} $. SFTs can have $\varepsilon$-transitions and they can be eliminated following a standard procedure. We assume $p\stackrel{\varepsilon/\varepsilon}{\longrightarrow}p$ for all $p \in Q$. The transduction of an SFT $T$ \cite{VeanesHLMB12} over $\tau' \rightarrow \tau''$ is a function $\fT_T:\cU_{\tau'}^\ast \rightarrow \wp(\cU_{\tau''}^\ast)$ where: $\ok{ \fT_T(\mathbf{a}) \triangleq \sset{\mathbf{b} \in \cU_{\tau''}^\ast}{\exists q \in F: q^0 \stackrel{\mathbf{a}/\mathbf{b}}{\twoheadrightarrow} q} } $ \paragraph*{SFT as FA Transformers.} In the following, we will consider SFTs producing exactly one output symbol for each input symbol read. Namely, we consider SFTs with rules $(q,\varphi,f,p)$ where $f$ is a single $\lambda$-term of type $\tau' \rightarrow \tau''$. Moreover, we consider SFTs and FA over finite alphabets, where the symbolic representation of SFTs is useful for having more compact language transformers. In this section we show how, under these assumptions, SFTs can be seen as FA transformers.
In particular, given an FA $A$ such that $\cL(A) \in \wp(\cU^\ast_{\tau'})$ and an SFT $T$ over $\tau' \rightarrow \tau''$, we want to build the FA recognizing the language of strings in $\cU_{\tau''}^\ast$ obtained by modifying the strings in $\cL(A)$ according to the SFT $T$. To this end, we define the input language $\mbox{\cL$_{\cI}$}(T)$ of an SFT $T$ as the set of strings producing an output when processed by $T$, and the output language $\mbox{\cL$_{\cO}$}(T)$ as the set of strings generated by $T$. Formally: $\mbox{\cL$_{\cI}$}(T) \triangleq \sset{\mathbf{a} \in \cU_{\tau'}^\ast}{\fT_T(\mathbf{a}) \neq \emptyset}$ and $\mbox{\cL$_{\cO}$}(T) \triangleq \{\mathbf{b} \in \cU_{\tau''}^\ast ~|~ \mathbf{b} \in \fT_T(\mathbf{a}), \mathbf{a} \in \mbox{\cL$_{\cI}$}(T)\}$. Consider $T = \tuple{Q, q^0, F,R}$ over $\tau' \rightarrow \tau''$, with $\cU_{\tau'}$ and $\cU_{\tau''}$ finite alphabets, and rules $(q,\varphi,f,p) \in R$ where each $f$ is a $\lambda$-term of type $\tau' \rightarrow \tau''$. According to \cite{DMG-sas16} it is possible to build an FA $\mbox{\sc fa$_{\cO}$}(T)$ recognising the output language of $T$, i.e., $\cL(\mbox{\sc fa$_{\cO}$}(T)) = \mbox{\cL$_{\cO}$}(T)$. In particular, $\mbox{\sc fa$_{\cO}$}(T) \triangleq (Q,\delta,q_0,F,\cU_{\tau''})$ where $\delta = \{(q,b,p)~|~(q,\varphi,f,p) \in R, b \in \grass{f(\varphi)}\}$. Observe that $\grass{f(\varphi)}$ is finite since $\varphi$ is a predicate over a finite alphabet. We can associate an SFT $\cT(A)$ to an FA $A$, where the input and output languages of $\cT(A)$ are the ones recognized by the FA $A$. Formally, given an FA $A=(Q,\delta,q_0,F,\cU_\tau)$, we define the output SFT over $\tau \rightarrow \tau$ as $\cT(A) \triangleq \tuple{Q,q_0,F,R^\texttt{id}}$ where $R^\texttt{id} \triangleq \sset{(p,\sigma,\ensuremath{\mathit{id}},q)}{(p,\sigma,q)\in \delta}$\footnote{We denote by $\sigma$ the predicate requiring the symbol to be equal to $\sigma$.} and the transduction is: \[ \fT_{\cT(A)}(\mathbf{a}) = \left\{ \begin{array}{ll} \mathbf{a} & \mbox{if } \mathbf{a} \in \cL(A)\\ \emptyset & \mbox{otherwise} \end{array} \right. \] These definitions allow us to associate FAs with SFTs and vice-versa. According to \cite{VeanesHLMB12}, we define the composition of two transductions $\fT_1$ and $\fT_2$ as: $$ \fT_1 \diamond \fT_2 \triangleq \lambda \mathbf{b} . \bigcup _{\mathbf{a} \in \fT_1(\mathbf{b})}\fT_2(\mathbf{a}) $$ Observe that the composition $\diamond$ applies first $\fT_1$ and then $\fT_2$. It has been proved that if $T_1$ and $T_2$ are SFTs over composable label theories, then there exists an SFT $T_1 \diamond T_2$ that is obtained effectively from $T_1$ and $T_2$ such that $\fT_{T_1\diamond T_2} = \fT_{T_1} \diamond \fT_{T_2}$ (see \cite{VeanesHLMB12,TR-bek} for details). At this point, given an FA $A$ with $\cL(A) \in \wp(\cU_{\tau'}^\ast)$ and an SFT $T$ over $\tau' \rightarrow \tau''$, we can model the application of $T$ to $\cL(A)$ as the composition $\cT(A) \diamond T$ where the language recognized by the FA $A$ becomes the input language of the SFT $T$. Here $\fT_{\cT(A) \diamond T}(\mathbf{b})=\fT_T(\mathbf{b})$ if $\mathbf{b} \in \cL(A)$, and $\fT_{\cT(A) \diamond T}(\mathbf{b})=\emptyset$ otherwise. Observe that the FA recognizing the output language of $\cT(A) \diamond T$ is the FA obtained by transforming $A$ with $T$. Indeed, $\cL(\mbox{\sc fa$_{\cO}$}(\cT(A) \diamond T)) = \{\mathbf{b} \in \Gamma^\ast ~|~ \mathbf{b} \in \fT_T(\mathbf{a}), \mathbf{a} \in \cL(A)\}$.
Thus, we can say that an SFT $T$ transforms an FA $A$ into the FA $\mbox{\sc fa$_{\cO}$}(\cT(A) \diamond T)$. \section{A Core Dynamic Programming Language}\label{sect:dimp} \subsection{The dynamic language} We introduce a core imperative deterministic dynamic language $\CommS$, in the style of {\sc Imp} for its imperative fragment and of dynamic languages, such as PHP or JavaScript, as far as string manipulation is concerned, with basic types: integers in ${\mathbb{Z}}$, booleans, and strings of symbols over a finite alphabet $\Sigma$. Programs $\tt P$ are labeled commands in $\CommS$ built as in Figure~\ref{synta}, on a set of variables $\ensuremath{\textsf{Var}}$ and line-of-code labels $\pc{\tt P}$ with typical elements $l\in\pc{\tt P}$. \begin{figure} {\footnotesize \begin{align*} \ExpS\ni {\tt e} &::= \; {\tt a}\mid{\tt b}\mid {\tt s}\\ \AexpS \ni {\tt a} &::= \; x \mid n \mid \mbox{\bf rand}() \mid \textbf{len}({\tt s})\mid \textbf{num}({\tt s}) \mid\\ & {\tt a} + {\tt a} \mid {\tt a} -{\tt a} \mid {\tt a}*{\tt a}\qquad(\mbox{where}\ n\in\mathbb{Z})\\ \BexpS \ni {\tt b} &::= \; x \mid\true\mid\false \mid {\tt e}={\tt e} \mid {\tt e} > {\tt e} \mid {\tt e} < {\tt e} \mid {\tt b}\wedge{\tt b} \mid \neg {\tt b} \\ \SexpS\ni {\tt s} &::= \; x \mid \:'\;' \,\mid \mstr{\sigma} \mid {\tt s}\centerdot\sigma\mid \subst{{\tt s}}{{\tt a}}{{\tt a}}\\ & (\mbox{where}\ \sigma\in\Sigma)\\ \Ccomms\ni{\tt c} &::= \;\mbox{\bf skip};\mid x:={\tt e};\mid {\tt c}\Comm\mid \textbf{if} ~{\tt b}~\{{\tt c}\}; \mid\\ & \textbf{while} ~{\tt b}~ \{{\tt c}\};\mid \textbf{reflect}({\tt s}); \mid x := \textbf{reflect}({\tt s}); \\ \CommS\ni \tt P &::={\tt c}\mbox{\tt \$}\\ \mathsf{Id}\ni x &\quad \mbox{Identifiers (strings not containing punctuation symbols)} \end{align*}} \caption{Syntax of $\CommS$}\label{synta} \end{figure} We assume that all terminal and non-terminal symbols of $\CommS$ are in $\Sigma_{\CommS}\subseteq\Sigma^*$. Thus, the language recognized by the context-free grammar (CFG) of $\CommS$ is an element of $\wp((\Sigma_{\CommS})^\ast)$, i.e., $\CommS \subseteq (\Sigma_{\CommS})^\ast$. Given $\tt P \in \CommS$ we associate with each statement a program line $l\in \pc{\tt P}$. In order to simplify the presentation of the semantics, we suppose that every program ends with a termination symbol $\mbox{\tt \$}$, labeled with the last program line denoted $l_e$. When a statement ${\tt c}$ belongs to a program $\tt P$ we write ${\tt c}\in \tt P$; we then define the auxiliary function $\stm{\tt P}: \pc{\tt P}\rightarrow \CommS$ to be such that $\stm{\tt P}(l)={\tt c}$ if ${\tt c}$ is the statement in $\tt P$ at program line $l$ (in the following denoted $\pp{l}{\tt c}$) and $\pcf{\tt P}=\stm{\tt P}^{-1}:\CommS\rightarrow \pc{\tt P}$ with the simple extension to blocks of $\tt P$ instructions $\pcf{\tt P}({\tt c}_1{\tt c}_2)=\pcf{\tt P}({\tt c}_1)$. In general, we denote by $\pcf{\tt P}$ the set of all the program lines in $\tt P$\footnote{Note that, by definition, a statement, or a block, ${\tt c}$ always ends with $;$.}. Let $M\triangleq\ensuremath{\textsf{Var}} \rarr{} \mathbb{Z} \cup \{\true,\false\} \cup \Sigma^*$ be the set of memory maps, ranged over by $m$, that assign values (integers, booleans or strings) to variables. $\grasseb{{\tt s}}:M \rarr{} \Sigma^*$ denotes the semantics of string expressions.
For strings ${\tt s}_1,{\tt s}_2\in\Sigma^*$, symbol $\delta\in\Sigma$ and values $n_1,n_2\in{\mathbb{Z}}$, we have that $\grasseb{{\tt s}_1\centerdot\delta}m$ returns the concatenation of the string $\grasseb{{\tt s}_1}m$ with the symbol $\delta\in\Sigma$, i.e., $\grasseb{{\tt s}_1}m\cdot\delta$. We abuse notation and use ${\tt s}_1\centerdot{\tt s}_2$ for string concatenation. The semantics $\grasseb{\subst{{\tt s}}{n_1}{n_2}}m$ returns the sub-string of the string $\grasseb{{\tt s}}m$ given by the $n_2$ consecutive symbols starting from the $n_1$-th one (we suppose $n_1\geq 0$)\footnote{The choice of considering only concatenation and substring derives from the fact that most of the operations on strings can be obtained by combining these two operations.}. We denote with $\grasseb{{\tt a}}: M \rarr{} \mathbb{Z}$ the semantics of arithmetic expressions, where $\grasseb{\textbf{len}({\tt s})}m$ returns the length of the string $\grasseb{{\tt s}}m$, and $\grasseb{\textbf{num}({\tt s})}m$ returns the integer denoted by the string $\grasseb{{\tt s}}m$ (we assume it yields no value if $\grasseb{{\tt s}}m$ does not denote a number). The semantics of the other arithmetic expressions is defined as usual. Analogously, $\grasseb{{\tt b}}: M \rightarrow \{\true,\false \}$ denotes the semantics of Boolean expressions where, given ${\tt s}_1,{\tt s}_2\in\Sigma^*$, ${\tt s}_2 <{\tt s}_1$ is true iff ${\tt s}_2\preceq{\tt s}_1$ (prefix order). The semantics of the other Boolean expressions is defined as usual. The update of memory $m$, for a variable $x$ with value $v$, is denoted $m[x/v]$.
The semantics of $\textbf{reflect}({\tt s})$ evaluates the string ${\tt s}$: if it is a program in $\CommS$ it executes it, otherwise the execution proceeds with the next command. Observe that ${\tt s} \in \Sigma^\ast$ while $\CommS \in \wp((\Sigma_{\CommS})^\ast)$; for this reason we define $\overline{\CommS} \triangleq \{\mathbf{a} \in \Sigma^\ast ~|~ \mathbf{a} = \mathbf{a}_1\cdot\mathbf{a}_2 \cdot \ldots \cdot \mathbf{a}_n, \mathbf{a}_1\mathbf{a}_2 \ldots \mathbf{a}_n \in \CommS\}$ as the set of sequences in $\Sigma^\ast$ that can be obtained by concatenating the sequences $\mathbf{a}_i$ that act like symbols in a program in $\CommS$. We denote with $\overline{{\tt c}}$ the sequence of $\Sigma^\ast$ that corresponds to the sequence ${\tt c} \in (\Sigma_{\CommS})^\ast$. At this point, before computing the semantics of $\overline{{\tt c}}$, we need to recognize which statements it denotes, building the corresponding string ${\tt c} \in \CommS$, and then to label these statements by using the function $\labb{\cdot}$, assigning an integer label to each statement in ${\tt c}\mbox{\tt \$}$ from $1$ to the final program point $l_e$. In the following, we say that ${\tt s}$ evaluates to ${\tt c}$ when it assumes a value $\overline{{\tt c}} \in \overline{\CommS}$ corresponding to the concatenation of the sequences that are symbols of ${\tt c}$. The semantics of $x:=\textbf{reflect}({\tt s})$ evaluates the expression ${\tt s}$ and, if it evaluates to some ${\tt c}$ in $\CommS$, it proceeds by assigning $\:'\;'$ to $x$ and by executing ${\tt c}$; otherwise it behaves as a standard assignment.
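Concretely, on a 0-based reading of the indices (an assumption of this sketch, not fixed by the grammar), the two string primitives behave as follows in Python:
\begin{verbatim}
def concat(s, sigma):
    """Semantics of s . sigma: append the symbol sigma to s."""
    return s + sigma

def substring(s, n1, n2):
    """Semantics of substr(s, n1, n2): the n2 consecutive symbols
    of s starting from the n1-th one; indices are taken 0-based
    and the result is undefined (None) when the requested window
    exceeds the string."""
    if n1 < 0 or n2 < 0 or n1 + n2 > len(s):
        return None
    return s[n1:n1 + n2]

# e.g. substring('reflect', 0, 3) yields 'ref'
\end{verbatim}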
Formally, let $\ensuremath{\mathsf{Int}}:\CommS\times M\rarr{} M$ denote the semantics of programs, and $\grasseb{\cdot}m$ the evaluation of an expression in the memory $m$; then:
\vspace{-.2cm}
{\footnotesize
\begin{align*}
\ensuremath{\mathsf{Int}}(\pp{l}\textbf{reflect}({\tt s});\pp{l'}\tt Q, \textit{m})&= \left \{
\begin{array}{ll}
\ensuremath{\mathsf{Int}}(\pp{l'}\tt Q,\textit{m'})\\
\qquad\mbox{if}\ \grasseb{{\tt s}}m = \overline{{\tt c}} \in \overline{\CommS}\ \wedge\\
\qquad \textit{m'}=\ensuremath{\mathsf{Int}}(\labb{{\tt c}},\textit{m})\\
\ensuremath{\mathsf{Int}}(\pp{l'}\tt Q,\textit{m})\quad \mbox{otherwise}
\end{array}
\right .
\end{align*}
\vspace{-.2cm}
}
We can observe that the way in which we treat the commands $\pp{l}\textbf{reflect}({\tt s})$ and $\pp{l}x:=\textbf{reflect}({\tt s})$ mimics the classical semantic model and implementation of reflection and reification introduced by Smith \cite{Smith84}; see \cite{WandF88,DanvyM88} for details. In particular, when the string ${\tt s}$ evaluates to a program ${\tt c}$, the program control starts the execution of ${\tt c}$ before returning to the original code. The problem is that ${\tt c}$ may contain other $\textbf{reflect}$ statements, leading to the execution of new portions of code. Hence, each nested $\textbf{reflect}$ is an invocation of the interpreter, which can be seen as a new layer in the {\em tower} of interpreters: when a layer terminates its execution, control returns to the previous layer with the current state. Hence, the computation of $\ensuremath{\mathsf{Int}}(\pp{l}\textbf{reflect}({\tt s});\pp{l'}\tt Q,\textit{m})$, when the string ${\tt s}$ evaluates to a program ${\tt c}$, starts a new computation of $\labb{{\tt c}}$ from $m$. Once the execution of the tower derived from ${\tt c}$ terminates, the execution comes back to the continuation $\tt Q$, in the memory resulting from the execution of ${\tt c}$. It is known that in general the construction of the tower of interpreters may be infinite, leading to a divergent semantics.
\begin{example}\label{infTower}
Consider the following program fragment $\tt P$:
\begin{equation*}
\pp{1}x:=\mstr{\textbf{reflect}(x);\mbox{\tt \$}};\:\pp{2}\textbf{reflect}(x);\:\pp{3}\mbox{\tt \$}
\end{equation*}
Suppose the initial memory is $m_\bot$ (associating the undefined value to each variable, in this case $x$). After the execution of the first assignment we have the memory $m_1=[x/\mstr{\textbf{reflect}(x);\mbox{\tt \$}}]$, on which we execute the $\textbf{reflect}(x)$ statement. From now on, $\textbf{reflect}(x)$ is executed starting from $m_1$: each $\textbf{reflect}(x)$ activates a new tower layer executing the statement stored in $x$, which is again $\textbf{reflect}(x)$, starting from the same memory. Hence the tower has infinite height.
\end{example}
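The divergence in Example~\ref{infTower} can be reproduced with a naive meta-circular interpreter. The following Python sketch (with a hypothetical statement representation and the parsing function \texttt{parse} left abstract) shows why each $\textbf{reflect}$ opens a new layer of the tower:
\begin{verbatim}
def interp(prog, mem, parse):
    """Naive interpreter sketch: prog is a list of statements,
    parse maps a string to a CommS program, or returns None."""
    for stmt in prog:
        if stmt[0] == 'assign':           # ('assign', x, expr)
            _, x, expr = stmt
            mem[x] = expr(mem)
        elif stmt[0] == 'reflect':        # ('reflect', str_expr)
            code = parse(stmt[1](mem))
            if code is not None:
                interp(code, mem, parse)  # a new tower layer
    return mem

# On Example infTower every layer parses "reflect(x);$" again, so
# the recursion never returns: Python raises RecursionError where
# the concrete semantics simply diverges.
\end{verbatim}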
\subsection{Flow-sensitive Collecting Semantics}
Collecting semantics models program execution by computing, for each program point, the set of all the values that each variable may have. In order to deal with reflection we need to define an interpreter collecting values for each program point which, at each step of computation, keeps track not only of the collection of values after the last executed program point $p$, but also of the values collected at all the other program points, both already executed and not executed yet. In other words, we define a flow-sensitive semantics which, at the end of the computation, observes the trace of collections of values holding {\em at each program point}. In order to model this semantics, we model the concrete state not simply as the current memory, but as the tuple of memories holding at each program point. It is clear that, at each step of computation, only the memory at the last executed program point will be modified.
First, we define a collecting memory $\mathbb{mem}$, associating with each variable a set of values instead of a single value. We define the set $\ok{\mathbb{M}\triangleq\ensuremath{\textsf{Var}} \rarr{} \wp(\mathbb{Z}) \cup \mathsf{Bool} \cup \wp(\Sigma^*)}$ with meta-variable $\mathbb{mem}$, where $\mathsf{Bool} = \wp(\{\false,\true\})$. We define two particular memories: $\mathbb{mem}_\varnothing$, associating $\varnothing$ with every variable, and $\mathbb{mem}_\top$, associating the set of all possible values with each variable. The update of memory $\mathbb{mem}$ for a variable $x$ with set of values $v$ is denoted $\mathbb{mem}[x/v]$. Finally, lub and glb of memories are defined pointwise: $(\mathbb{mem}_1\sqcup \mathbb{mem}_2)(x)=\mathbb{mem}_1(x)\cup\mathbb{mem}_2(x)$ and $(\mathbb{mem}_1\sqcap\mathbb{mem}_2)(x)=\mathbb{mem}_1(x)\cap\mathbb{mem}_2(x)$.\\
Then, in order to make the semantics flow-sensitive, we introduce a new notion of {\em flow-sensitive} store (in the following simply called store) $\Store \triangleq \pc{\tt P}\rarr{}\mathbb{M}$, associating with each program line a memory. We represent a store $\mathfrak{s} \in \Store$ at a given line $l\in\pc{\tt P}$ as a tuple $\tuple{x_1/v_{x_1},\ldots, x_n/v_{x_{n}}}$, where $v_{x_i}$ is the set of possible values of variable $x_i$.
We use $\mathfrak{s}_l$ to denote $\mathfrak{s}(l)$, namely the memory at line $l$. Given a store $\mathfrak{s}$, the update of memory $\mathfrak{s}_l$ with a new collecting memory $\mathbb{mem}$ is denoted $\mathfrak{s}[\mathfrak{s}_l \leftarrow \mathbb{mem}]$ and yields a new store $\mathfrak{s}'$ such that $\mathfrak{s}'_l=\mathfrak{s}_l\sqcup\mathbb{mem}$, while $\forall l'\neq l$ we have $\mathfrak{s}'_{l'}=\mathfrak{s}_{l'}$. We abuse notation by denoting with $\grasseb{\cdot}$ not only the concrete but also the collecting semantics of expressions, defined as the additive lift of the concrete one. In particular, we denote by $\grasseb{{\tt b}}^{\true}$ the maximal collecting memory making ${\tt b}$ true, i.e., $\bigsqcup\sset{m\in M}{\grasseb{{\tt b}}m=\true}\in\mathbb{M}$ (analogously for $\grasseb{{\tt b}}^{\false}$). Hence, by $\mathbb{mem}\sqcap\grasseb{{\tt b}}^{\true}$ we denote the memory $\mathbb{mem}'\triangleq\mathbb{mem}[x\in\ensuremath{\textsf{vars}}({\tt b})/\mathbb{mem}(x)\cap\grasseb{{\tt b}}^{\true}(x)]$, where $\ensuremath{\textsf{vars}}({\tt b})$ is the set of variables of ${\tt b}$. For instance, if $\mathbb{mem}=[x/\{1,2,3\},y/\{1,2\}]$ and ${\tt b}=(x<3)$, then $\grasseb{{\tt b}}^{\true}=[x/\{1,2\},y/\top]$, hence $\mathbb{mem}\sqcap\grasseb{{\tt b}}^{\true}=[x/\{1,2\},y/\{1,2\}]$. Finally, let $V\subseteq \ensuremath{\textsf{Var}}$; by $\mathfrak{s}_V$ we denote the store where, for each program point $l$, the memory $\mathfrak{s}_l$ is restricted to the variables in $V$, and by ${\sf lfp\/}_V f(\mathfrak{s})$ we denote the computation of the fixpoint only on the variables in $V$, i.e., we compute $\mathfrak{s}$ such that $\mathfrak{s}_V=f(\mathfrak{s})_V$.
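In code, collecting memories are simply maps from variables to value sets; a minimal Python sketch of the pointwise lub and of the guard filtering (hypothetical representation, mirroring the example above):
\begin{verbatim}
def lub(m1, m2):
    """Pointwise lub of collecting memories: union of value sets."""
    return {x: m1.get(x, set()) | m2.get(x, set())
            for x in m1.keys() | m2.keys()}

def filter_guard(mem, b_vars, b_true):
    """The meet of mem with [[b]]^true: intersect the collected
    values of each variable of b with the values making b true."""
    out = dict(mem)
    for x in b_vars:
        out[x] = mem[x] & b_true[x]
    return out

# Reproducing the example: mem = {'x': {1,2,3}, 'y': {1,2}} and
# b = (x < 3) gives
# filter_guard(mem, {'x'}, {'x': {1,2}}) == {'x': {1,2}, 'y': {1,2}}
\end{verbatim}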
We follow \cite{C00tcs} in the usual definition of the \emph{concrete collecting trace semantics} of a transition system $\tuple{\Conf,\leadsto}$ associated with programs in $\CommS$, where $\Conf=\CommS\times\Store$ is the set of states in the transition system, with typical elements $\Statec\in\Conf$, and $\leadsto\subseteq\Conf\times\Conf$ is a transition relation. The state space in the transition system is the set of all pairs $\tuple{{\tt c},\mathfrak{s}}$ with ${\tt c}\in\CommS$ and $\mathfrak{s}\in\Store$, representing the store computed by having executed the first statement in ${\tt c}$, whose continuation is still to be executed. The transition relation generated by a program ${\tt c} \in\CommS$ is given in the Appendix. The axiom $\tuple{\pp{l_e}\mbox{\tt \$},\mathfrak{s}}$ identifies the final blocking states $\cB$. When the next command to execute is $\pp{l}\textbf{reflect}({\tt s})$, we need to verify whether the evaluation of the string ${\tt s}$ at program line $l$ returns a set of sequences of symbols of $\Sigma$ that contains sequences representing programs in $\CommS$. If this is the case, we proceed by executing the programs corresponding to $\grasseb{\tt s}\mathfrak{s}_l$ with initial memory (at the first program line of ${\tt c}$) the memory holding at program line $l$, while the memories for all the other program points in ${\tt c}$ are initialized to $\mathbb{mem}_\varnothing$.
When the next command to execute is an assignment of the form $\pp{l} x:= \textbf{reflect}({\tt s})$, we need to verify whether the string ${\tt s}$ evaluates to a simple set of strings or to strings corresponding to programs in $\CommS$. If $\grasseb{\tt s}\mathfrak{s}_l$ does not contain programs, we proceed as for standard assignments. If $\grasseb{\tt s}\mathfrak{s}_l$ contains programs in $\CommS$, then the assignment becomes an assignment of $\:'\;'$ to the variable $x$ followed by the execution of the programs corresponding to $\grasseb{\tt s}\mathfrak{s}_l$. Observe that, in order to verify whether the possible values assumed by a string ${\tt s}$ at a program point $l$ are programs in $\CommS$, we check whether the intersection $\grasseb{{\tt s}}\mathfrak{s}_l \cap \overline{\CommS}$ is not empty. Unfortunately, this step is in general undecidable; for this reason, in Section~\ref{approx2}, we provide a constructive methodology for deciding the executability of $\grasseb{{\tt s}}\mathfrak{s}_l$ and for synthesizing a program that can be executed in order to proceed with the analysis and obtain a sound result. The other rules model standard transitions.
Given a program $\tt P\in\CommS$ and a set of initial stores $I$, we denote by $\cI\triangleq\sset{\Statec}{\Statec = \tuple{\tt P,\mathfrak{s}}, \mathfrak{s} \in I}$ the set of initial states. For the sake of simplicity, we consider a partial collecting trace semantics observing only the finite prefixes of finite and infinite execution traces:
{\small
\[
\cF(\tt P,\cI)=\ssetf{\Statec_0\Statec_1\ldots\Statec_n}{\Statec_0 \in \cI,\ \forall i<n.\; \Statec_i\leadsto\Statec_{i+1}}
\]
}\vspace{-.2cm}
It is known that $\cF(\tt P,\cI)$ expresses precisely invariant properties of program executions, and that it can be obtained as the fixpoint of the following trace-set transformer $\mathtt{F}:\wp(\Conf^*)\rarr{}\wp(\Conf^*)$, starting from the set $\cI$ of initial configurations, such that $\cF(\tt P,\cI) = {\sf lfp\/}(\mathtt{F}_{\tt P,\cI})$.
{\small
\[
\begin{array}{ll}
\mathtt{F}_{\tt P,\cI}\triangleq& \lambda X.\: \cI\:\cup \ssetf{\Statec_0\Statec_1\ldots\Statec_i\Statec_{i+1}}{\Statec_0\Statec_1\ldots\Statec_i\in X,\ \Statec_i\leadsto\Statec_{i+1}\!\!}
\end{array}
\]}
Finally, we can define the store projection of the partial collecting trace semantics (in the following simply called trace semantics) of a program $\tt P$ from an initial store $\mathfrak{s}\in I$ as
{\small
\[
\grasstr{\tt P}\mathfrak{s}\triangleq\ssetf{\mathfrak{s}\,\mathfrak{s}^1\ldots\mathfrak{s}^n}{\exists \Statec_0\Statec_1\ldots\Statec_n\in \cF(\tt P,\{\mathfrak{s}\}).\:\Statec_0=\tuple{\tt P,\mathfrak{s}}\\ \wedge\ \forall i\in[1,n].\:\exists \tt P_i\in\CommS.\:\Statec_i=\tuple{\tt P_i,\mathfrak{s}^i}}
\]}
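Reading $\mathtt{F}_{\tt P,\cI}$ operationally, the fixpoint keeps extending finite prefixes with successor states until nothing new is added. A minimal Python sketch (a length bound is added to keep the iteration finite, which the concrete least fixpoint does not have):
\begin{verbatim}
def partial_traces(initial, step, max_len):
    """Iterate the trace-set transformer F: initial is the set of
    initial states, step(c) returns the successors of state c."""
    traces = {(c,) for c in initial}
    changed = True
    while changed:
        changed = False
        for t in list(traces):
            if len(t) < max_len:         # bound added for the sketch
                for succ in step(t[-1]):
                    ext = t + (succ,)
                    if ext not in traces:
                        traces.add(ext)
                        changed = True
    return traces
\end{verbatim}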
\begin{example}\label{es1}
Consider the following $\CommS$ program $\tt P$ implementing an iterative count by dynamic code modification.
\begin{center}
{\small
\begin{tabular}{l}
$\pp{1}x:=1; \pp{2}\mathit{str}:= \mstr{\mbox{\tt \$}};$\\
$\pp{3}\mbox{\bf while} \; x < 3 \; \{\pp{4}\mathit{str} := \mstr{x:=x+1;} \centerdot\mathit{str};\:\pp{5}\textbf{reflect} (\mathit{str});\};\:\pp{6}\mbox{\tt \$}$
\end{tabular}}
\end{center}
At each step of computation, let us denote by $\tt P$ the continuation of the program. A portion of the iterative computation of the collecting semantics, starting from the store $\mathfrak{s}^0$ such that, for each $l\in[1,6]$, $\mathfrak{s}^0_l=\mathbb{mem}_\varnothing$, is reported in Fig.~\ref{fig:es}.
Note that $\mathfrak{s}_1=\mathbb{mem}_\varnothing$ at each step of computation, while $\mathfrak{s}_2=[x/\{1\},\mathit{str}/\varnothing]$ after the execution of the first statement. Moreover, for the sake of brevity, we denote by $\mstr{s}$ the string $\mstr{x:=x+1}$.
\begin{figure*}[ht]
\scalebox{0.7}[1]{%
\vbox{%
\hspace{-.5cm}
{\tiny
\begin{tabular}{|l|l|l|l|l|}
\hline
P & $\mathfrak{s}_3$& $\mathfrak{s}_4$& $\mathfrak{s}_5$& $\mathfrak{s}_6$\\
\hline\hline
$\pp{1}x:=1;\pp{2}\tt P$ &$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{2}\mathit{str}:=\:'\;'; \pp{3}\tt P$ &$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{3}\mbox{\bf while}\ x<3\ \{\pp{4}{\tt c}\};\pp{6}\mbox{\tt \$}$ &$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{4}str:= \mstr{x:=x+1;} \centerdot \mathit{str};\pp{5}{\tt c}_1; \pp{3}\tt P$ &$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_\varnothing$\\
\hline
{\color{red}$\pp{5}\textbf{reflect}(\mathit{str});$}$ \pp{3}\tt P$ &$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$[x/\{1\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{3}\mbox{\bf while}\ x<3\ \{\pp{4}{\tt c}\};\pp{6}\mbox{\tt \$}$ &{\color{red}$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$}&$[x/\{1\},\mathit{str}/\{\:'\;'\}]$&$[x/\{1\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{4}str:= \mstr{x:=x+1;} \centerdot \mathit{str};\pp{5}{\tt c}; \pp{3}\tt P$ &$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$\mathbb{mem}_\varnothing$\\
\hline
{\color{red}$\pp{5}\textbf{reflect}(\mathit{str});$}$ \pp{3}\tt P$ &$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{3}\mbox{\bf while}\ x<3\ \{\pp{4}{\tt c}\};\pp{6}\mbox{\tt \$}$ &{\color{red} $[x/\{1,2,3,4\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$}&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&$\mathbb{mem}_\varnothing$\\
\hline
$\pp{6}\mbox{\tt \$}$ &$[x/\{1,2,3,4\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&$[x/\{3,4\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$\\
\hline
\end{tabular}
}
}
}
\caption{Iterative computation of the collecting semantics of program $\tt P$ in Example~\ref{es1}, with $s\triangleq x:=x+1$.}\label{fig:es}
\end{figure*}
\noindent With a different color, we highlight the execution of reflect activating a new analysis computation, and the memory $\mathfrak{s}_3$ computed by the statements executed by the reflect. In particular, the first execution of $\mathbf{reflect}(\mathit{str})$ is such that $\grasseb{\mathit{str}}\mathfrak{s}_5\cap\overline{\CommS}=\{x:=x+1;\mbox{\tt \$}\}$. Moreover, the initial store $\mathfrak{s}^\iota$ for the execution of $\textbf{reflect}$ is such that $\mathfrak{s}^\iota_1=\mathfrak{s}_5$ and, $\forall l.\ 1<l\leq l_e$, $\mathfrak{s}^\iota_{l}=\mathbb{mem}_\varnothing$. Here $\labb{x:=x+1;\mbox{\tt \$}}=\pp{1}x:=x+1;\pp{2}\mbox{\tt \$}$, with $l_e=2$.
Now, the computation of $\labb{x:=x+1;\mbox{\tt \$}}$ is given in Fig.~\ref{fig:es2} on the left. In this case $\mathfrak{s}^e_{l_e}=[x/\{2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$, hence the new $\mathfrak{s}_3$ is the least upper bound between this $\mathfrak{s}^e_{l_e}$ and the previous $\mathfrak{s}_3$, which is $[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$. The second time {\textbf{reflect}} is executed, we have $\grasseb{\mathit{str}}\mathfrak{s}_5\cap\overline{\CommS}=\{x:=x+1;\mbox{\tt \$},\ x:=x+1;x:=x+1;\mbox{\tt \$}\}$. The calling memory is $\mathfrak{s}^{\iota}_1=\mathfrak{s}_5=[x/\{1,2\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$. Similarly to the previous case, the execution of $\pp{1}x:=x+1;\pp{2}\mbox{\tt \$}$ returns the least upper bound between $\mathfrak{s}_3$ and $[x/\{2,3\}, \mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$, which is $[x/\{1,2,3\}, \mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$. Finally, the execution of $\pp{1}x:=x+1;\pp{2}x:=x+1;\pp{3}\mbox{\tt \$}$ is given in Fig.~\ref{fig:es2} (on the right). In this case, the resulting memory is the least upper bound between $\mathfrak{s}_3$ and $[x/\{3,4\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$, which is $[x/\{1,2,3,4\}, \mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$; this is also the least upper bound of all the resulting memories, i.e., the new $\mathfrak{s}_3$.
\begin{figure*}[ht]
\scalebox{0.7}[1]{%
\vbox{%
\begin{center}
{\tiny
\begin{tabular}{|l|l|l||l|l|l|}
\hline
$\mathbf{reflect}(\mstr{s})$ & $\mathfrak{s}_1$& $\mathfrak{s}_2$&$\mathbf{reflect}(\mstr{s;s})$ & $\mathfrak{s}_2$&$\mathfrak{s}_3$\\
\hline\hline
$\pp{1}x:=x+1;\pp{2}\mbox{\tt \$}$ &$[x/\{1\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&$\mathbb{mem}_\varnothing$& $\pp{1}x:=x+1;\pp{2}\tt P$&$\mathbb{mem}_\varnothing$&$\mathbb{mem}_{\varnothing}$\\
\hline
$\pp{2}\mbox{\tt \$}$ &$[x/\{1\},\mathit{str}/\{\:'\;',\mstr{s}\}]$&{\color{red}$[x/\{2\},\mathit{str}/\{\:'\;',\mstr{s}\}]$}&$\pp{2}x:=x+1;\pp{3}\mbox{\tt \$}$&$[x/\{2,3\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&$\mathbb{mem}_{\varnothing}$\\
\hline
&&&$\pp{3}\mbox{\tt \$}$&$[x/\{2,3\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$&{\color{red}$[x/\{3,4\},\mathit{str}/\{\:'\;',\mstr{s},\mstr{s;s}\}]$}\\
\hline
\end{tabular}}
\end{center}
}
}
\caption{Some computations of the reflect executions in Example~\ref{es1}, with $s\triangleq x:=x+1$.}\label{fig:es2}
\end{figure*}
\end{example}
\section{Abstract Interpretation of Strings}\label{sect:asem}
\subsection{The abstract domain}
Let $\Conf^\sharp = \CommS \times\Store^\sharp$ be the domain of abstract states, where $\ok{\Store^\sharp: \pc{\tt P} \rarr{} \mathbb{M}^\sharp}$ denotes abstract stores, ranged over by $\ok{\astore}$, and $\ok{\mathbb{M}^\sharp:\ensuremath{\textsf{Var}} \rarr{} \ensuremath{\textsf{AbstVal}}}$ denotes the set of abstract memory maps, ranged over by $\ok{\amem}$. The domain of abstract values for expressions is $\ok{\ensuremath{\textsf{AbstVal}}\triangleq\{\top,\mathsf{Interval},\mathsf{Bool},\mathsf{FA}_{/\equiv},\bot\}}$\footnote{Note that we do not consider here implicit type conversion statements, namely each variable, during execution, can have values of only one type; nevertheless, we consider the reduced product of the possible abstract values in order to define a single abstract domain.}.
It is composed of $\mathsf{Interval}$, the standard GC-based abstract domain encoding the interval abstraction of $\wp(\mathbb{Z})$, of $\mathsf{Bool}$, the powerset domain of Boolean values, and of $\mathsf{FA}_{/\equiv}$, denoting the domain of FAs up to language equivalence. Given two FAs $A_1$ and $A_2$, we have that $A_1 \equiv A_2$ iff $\cL(A_1) = \cL(A_2)$. Hence, the elements of the domain $\mathsf{FA}_{/\equiv}$ are the equivalence classes of FAs recognizing the same language, ordered w.r.t.\ language inclusion: $\mathsf{FA}_{/\equiv} = \tuple{[A]_\equiv,\leq_\mathit{FA}}$, where $[A_1]_\equiv \leq_{\mathit{FA}} [A_2]_\equiv$ iff $\cL(A_1) \subseteq \cL(A_2)$. Here the concretization map is the recognized language $\cL$. By the Myhill-Nerode theorem \cite{bookComp} the domain is well defined and we can use the minimal automaton to represent each equivalence class; moreover, the ordering relation is well defined since it does not depend on the choice of the FA used to represent the equivalence class. In particular, we consider the domain $\mathsf{FA}_{/\equiv}$ defined over the finite alphabet $\Sigma$; thus, given $A \in \mathsf{FA}_{/\equiv}$, we have that $\cL(A) \in \wp(\Sigma^\ast)$. FAs are closed under finite language intersection and union, but they do not form a Galois connection with $\wp(\Sigma^\ast)$.
The finite lub $\sqcup_\ensuremath{\textsf{AbstVal}}$ and glb $\sqcap_\ensuremath{\textsf{AbstVal}}$ among elements of $\ensuremath{\textsf{AbstVal}}$ are defined as expected: the lub of two abstract values of the same type is given by the lub of the corresponding domain, while the lub of abstract values of different types is $\top$. This means that, for example, the lub of two intervals is the standard lub over intervals, while the lub of an interval and an FA is $\top$. Analogously for the glb, which returns $\bot$ when applied to different types.
Since $\mathsf{Interval}$ and $\mathsf{FA}_{/\equiv}$ do not satisfy the ascending chain condition (ACC), and, in particular, $\mathsf{FA}_{/\equiv}$ is not closed under lubs of infinite chains, $\ensuremath{\textsf{AbstVal}}$ is not ACC and not closed either. Therefore, we need to define a widening operator $\triangledown$ on $\ensuremath{\textsf{AbstVal}}$. The widening operator among elements of different types returns $\top$; the widening operator of Boolean elements is the standard lub, $\mathsf{Bool}$ being ACC; the widening operator between elements of the interval domain is the standard widening operator on $\mathsf{Interval}$ \cite{CC79}. Finally, the widening operator on $\mathsf{FA}_{/\equiv}$ is defined in terms of the widening operator $\triangledown_R$ over finite automata introduced in \cite{silva-thesis}. Let us consider two FAs $A_1= (Q^1,\delta^1,q_0^1,F^1,\Sigma^1)$ and $A_2=(Q^2,\delta^2,q_0^2,F^2,\Sigma^2)$ such that $\cL(A_1) \subseteq \cL(A_2)$: the widening between $A_1$ and $A_2$ is formalized in terms of a relation $R \subseteq Q^1\times Q^2$ between the sets of states of the two automata. The relation $R$ is used to define an equivalence relation $\equiv_R \subseteq Q^2 \times Q^2$ over the states of $A_2$, such that $\equiv_R = R \circ R^{-1}$. The widening between $A_1$ and $A_2$ is then given by the quotient automaton of $A_2$ w.r.t.\ the partition induced by $\equiv_R$: $A_1 \triangledown_R A_2 = A_2/\!\equiv_R$. Thus, the widening operator merges the states of $A_2$ that are in the equivalence relation $\equiv_R$. By changing the relation $R$, we obtain different widening operators \cite{silva-thesis}.
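As an illustration, the following Python sketch implements a simplified variant of this widening (hypothetical representation; it directly merges the states of the joined automaton that accept the same strings of length at most $n$, and assumes the quotient stays deterministic, which the general construction does not require):
\begin{verbatim}
def bounded_lang(delta, finals, q, n, alphabet):
    """Strings of length <= n accepted starting from state q
    (delta is a dict mapping (state, symbol) -> state)."""
    out = {''} if q in finals else set()
    if n == 0:
        return out
    for a in alphabet:
        p = delta.get((q, a))
        if p is not None:
            out |= {a + w for w in
                    bounded_lang(delta, finals, p, n - 1, alphabet)}
    return out

def widen_n(fa, n, alphabet):
    """Quotient the automaton by 'same language up to length n'."""
    states, delta, q0, finals = fa
    cls = {q: frozenset(bounded_lang(delta, finals, q, n, alphabet))
           for q in states}                # state -> its class
    d2 = {(cls[q], a): cls[p] for (q, a), p in delta.items()}
    return ({cls[q] for q in states}, d2,
            cls[q0], {cls[q] for q in finals})
\end{verbatim}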
It has been proved that convergence is guaranteed when using the relation $R_n \subseteq Q^1 \times Q^2$ such that $(q_1,q_2) \in R_n$ if $q_1$ and $q_2$ recognize the same language of strings of length at most $n$~\cite{silva-thesis}. Thus, the parameter $n$ tunes the length of the strings determining the equivalence of states, and therefore used for merging them in the widening. It is worth noting that the smaller $n$ is, the more information is lost by widening automata. In the following, given two FAs $A_1$ and $A_2$, with no constraints on the languages they recognize, we define the widening operator parametric on $n$ on $\mathsf{FA}_{/\equiv}$ as follows: $A_1 \triangledown_n A_2 \triangleq A_1 \triangledown_{R_n} (A_1 \sqcup A_2)$.
\subsection{Abstract semantics of expressions}
In this section, we model string operations, and in particular we observe that they can be expressed as SFTs, namely as symbolic transformers of a language of strings over $\Sigma$. The SFTs corresponding to symbol concatenation ${\tt s}\centerdot\sigma$ and to substring extraction $\subst{{\tt s}}{{\tt a}_1}{{\tt a}_2}$ are given in Fig.~\ref{fig:string-sft}, where $\sigma$ ranges over the alphabet $\Sigma$.
\begin{figure}
\begin{center}
\includegraphics[scale=.35]{sftstr.jpg}
\end{center}
\caption{SFTs modeling string transformations, with $\sigma\in\Sigma$.}\label{fig:string-sft}
\end{figure}
In particular, for symbol concatenation we have an SFT $T^C_{\delta}$ for each symbol $\delta\in\Sigma$. Each SFT $T^C_\delta$ adds the considered symbol $\delta$ at the end of any string (note that if we non-deterministically follow the $\varepsilon$ edge in the middle of a string, then we cannot terminate in a final state anymore, meaning that the input string is not recognized and therefore no output is produced). As far as the sub-string operation is concerned, we have an SFT $T^S_{n,m}$ for each pair of non-negative values $n$ and $m$, which reads $n-1$ symbols of the input string without producing output, then reads $m$ symbols starting from the $n$-th, copying each of them to the output, and finally reads all the remaining symbols without producing output. It is clear that if the string ends before reaching the starting point $n$, or before reading $m$ symbols, then the string is not accepted and no output is produced. Namely, if ${\tt s}$ is the input string, the transformation works correctly only if $n+m\leq\textbf{len}({\tt s})$.
We can now define the abstract semantics of expressions $\grasseb{\ExpS}^\sharp : \mathbb{M}^\sharp \rarr{} \ensuremath{\textsf{AbstVal}}$ as the best correct approximation of the collecting concrete semantics. For instance, in Fig.~\ref{fig-aexpr} we specify the abstract semantics for string expressions. When we perform operations between expressions of incompatible types we return $\top$, for example when adding an interval to an FA.
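A functional reading of the two transducers of Fig.~\ref{fig:string-sft} on a single input string is the following sketch (with 1-based $n$ and the acceptance condition stated above):
\begin{verbatim}
def concat_sft(s, delta):
    """Run of T^C_delta: copy the input, appending the symbol delta."""
    return s + delta

def substr_sft(s, n, m):
    """Run of T^S_{n,m}: skip the first n-1 symbols, copy m symbols
    to the output, skip the rest; no output when the input string
    is too short (condition n + m <= len(s), as stated above)."""
    if n < 1 or m < 0 or n + m > len(s):
        return None                      # input not accepted
    return s[n - 1:n - 1 + m]
\end{verbatim}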
\subsection{Abstract Program Semantics}
We can now define the abstract transition relation $\leadsto^\sharp \subseteq \Conf^\sharp \times \Conf^\sharp$ among abstract states. The rules defining the abstract transition relation can be obtained from the rules of the concrete transition relation given in Fig.~\ref{fig:rules} by replacing the collecting semantics of expressions $\grasseb{\cdot}$ with the abstract one, and by modifying the assignment rules and the executability test. In particular, in the abstract transition, the memory update of the assignment rules uses the widening operator over $\ensuremath{\textsf{AbstVal}}$ instead of the least upper bound. The executability test in the reflection rules now checks $\cL(\grasseb{{\tt s}}^\sharp\mathfrak{s}_l^\sharp) \cap \overline{\CommS}$. This allows us to compute the \emph{partial abstract collecting trace semantics} $\cF^\sharp(P,\cI^\sharp)$ of the abstract transition system $\tuple{\Conf^\sharp,\leadsto^\sharp}$. Given a set of abstract initial states $\cI^\sharp \subseteq \Conf^\sharp$, we define the abstract fixpoint function $\mathtt{F}^\sharp:\wp((\Conf^\sharp)^\ast) \rarr{}\wp((\Conf^\sharp)^\ast)$, starting from $\cI^\sharp$, such that $\cF^\sharp(\tt P,\cI^\sharp) = {\sf lfp\/}(\mbox{$\mathtt{F}^\sharp_{\tt P,\cI^\sharp}$})$:
{\small
\[
\begin{array}{ll}
\mathtt{F}^\sharp_{\tt P,\cI^\sharp}\triangleq\lambda X.\: \cI^\sharp\:\cup \ssetf{\Statec_0^\sharp\ldots\Statec_i^\sharp\Statec_{i+1}^\sharp}{\Statec_0^\sharp\ldots\Statec_i^\sharp\in X,\ \Statec_i^\sharp\leadsto^\sharp\Statec_{i+1}^\sharp\!\!}
\end{array}
\]}
\begin{theorem}
$\cF^\sharp(\tt P,\cI^\sharp)$ is a sound approximation of $\cF(\tt P,\cI)$.
\end{theorem}
\begin{example}\label{rexe1}
Consider the following program fragment $P$
\begin{center}
{\footnotesize
\begin{tabular}{l}
$\pp{1}\mbox{\bf while}\ x<3$\\
$\qquad \{os:=os\centerdot '\!xA:=Bx+1B;y:=1A0;x:=Bx+1A;A\mbox{\tt \$}';\};$\\
$\pp{2}ds:=\textsl{deobf}(os);$\\
$\pp{3}\mbox{\bf if}\ x>10$\\
$\qquad \{os:='\!whiAleBx\!<\!5AA\{x:A=x+1;y:=x;\};B\mbox{\tt \$}';\};$\\
$\pp{4}ds:=\textsl{deobf}(os);$\\
$\pp{5}\mbox{\bf if}\ x=5\ \{os:='\!hello';\};$\\
$\pp{6}\mbox{\bf if}\ x=8\ \{os:='\!wBhilAeBx;';\};$\\
$\pp{7}ds:=\textsl{deobf}(os);$\\
$\pp{8}\textbf{reflect}(ds);$\\
$\pp{9}\mbox{\tt \$}$
\end{tabular}}
\end{center}
where $ds:=\textsl{deobf}(os)$ is syntactic sugar for the string transformer of Fig.~\ref{frexe1}. In Fig.~\ref{frexe1} we show the FA, namely the abstract value, of $ds$ at program line $8$, computed by the proposed static analysis w.r.t.\ $\triangledown_3$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.25]{frexe1-tec.jpg}
\end{center}
\caption{FA $A^8_{ds}$ abstracting the value of $ds$ at program line $8$ of Ex.~\ref{rexe1}.}\label{frexe1}
\end{figure}
\end{example}
It is worth noting that, even in the approximate computation, we have the problem of the decidability of the executability of $\grasseb{{\tt s}}^\sharp\mathfrak{s}_l^\sharp$. Indeed, it is still in general undecidable to compute the intersection $\cL(\grasseb{{\tt s}}^\sharp\mathfrak{s}_l^\sharp)\cap\overline{\CommS}$ between a possibly infinite language, modeling the possible values of a string expression ${\tt s}$ at a certain program point, and a context-free grammar (CFG) modeling the language. This means that our implementation of the analysis needs to approximate the set of executable strings collected during the abstract computation for the arguments of reflection instructions.
\section{The SEA Analyzer}\label{approx2}
The SEA analyser implements a new string analysis domain and performs an executability analysis in the presence of reflection statements. SEA is indeed a prototype implementation with the ambition of providing a general, language-independent, sound-by-construction architecture for the static analysis of self-modifying code, where only some components are language-dependent, in our case the abstract interpreter for $\CommS$.
The first feature of SEA is the implementation of the interpreter based on the flow-sensitive collecting semantics proposed in Sect.~\ref{sect:asem}. The main original contribution is in the way the reflection analysis is handled. In particular, we provide an algorithmic approach for approximating in a decidable way the executability test $\cL(\grasseb{{\tt s}}^\sharp\mathfrak{s}^\sharp_l) \cap \overline{\CommS}$ and for building a program in $\CommS$ that soundly approximates the executable programs, i.e., whose semantics soundly approximates the semantics of the code that may be executed in a reflection statement. Our idea is first to filter the automaton resulting from the string analysis in order to keep only an over-approximation of the executable strings, and then to synthesise a code fragment whose possible executions over-approximate the possible concrete executions. In Fig.~\ref{fig:archSEA} we show how SEA works, and we explain the architecture on a running example.
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{arcSEA-10.pdf}
\end{center}
\caption{Architecture and call execution structure of SEA.}\label{fig:archSEA}
\end{figure}
In Ex.~\ref{rexe1} we showed the execution of $\ensuremath{\mathsf{Int}}^{\#} (P,\mathfrak{s})$, where $\mathfrak{s}$ starts with {\em any value} for $x$, up to program line $8$. Now we can explain how the analysis works. In particular, following the execution structure in Fig.~\ref{fig:archSEA}, at line $8$ we call the execution of $\ensuremath{\mathsf{Exe}}^{\#}$ on $A^8_{ds}$, given in Fig.~\ref{frexe1} and in the following simply denoted $\overline{A}$. The first step consists in reducing the number of states of the automaton, by over-approximating every string recognized as a statement, or partial statement, in $\CommS$.
\paragraph*{StmSyn.}
The idea is to consider the automaton $\mathtt{A}$ computed by the collecting semantics, and to collapse all the consecutive edges up to any punctuation symbol in $\{\mbox{\small $;,\{,\},\mbox{\tt \$}$}\}$. In particular, any executable statement ends with $\mbox{\small $;$}$, while $\mbox{\small $\{$}$ and $\mbox{\small $\}$}$ allow us to split strings where the body of a $\mbox{\bf while}$ or of an $\mbox{\bf if}$ begins or ends; finally, $\mbox{\tt \$}$ marks the end of a program. Hence, we design the procedure $\textsc{Build}$ (Alg.~\ref{algoIq}), recursively called by Alg.~\ref{algo}, which returns an automaton over a finite subset of the alphabet $\Sigma_{\tt\tiny Syn}=\{\:\mbox{\small $\},\mbox{\tt \$}$}\}\cup\sset{x;}{x\in\Sigma^*}\cup\sset{x\mbox{\small $\{$}}{x\in\Sigma^*}$. In particular, given the parsing tree $\mathtt{T_A}$ of the automaton $\mathtt{A}$, obtained by performing a depth-first visit of $\mathtt{A}$, we define
\[\mathsf{Str}\triangleq\ssetf{x\in(\Sigma\smallsetminus\{\mbox{\small $;,\{,\},\mbox{\tt \$}$}\})^*}{\exists\ \mbox{path}\ \pi\ \mbox{in}\ \mathtt{T_A}\ \mbox{such that}\\ x\ \mbox{is a maximal substring of}\ \pi}.
\]
Hence, the finite alphabet of the resulting automaton is $\Sigma^\mathtt{A}_{\tt\tiny Syn}=\{\:\mbox{\small $\},\mbox{\tt \$}$}\}\cup\sset{x;}{x\in\mathsf{Str}}\cup\sset{x\mbox{\small $\{$}}{x\in\mathsf{Str}}$.
\begin{algorithm}
{\footnotesize\caption{Building the FA.}\label{algo}
\begin{algorithmic}[1]
\Require An FA $A=(Q,\delta,q_0,F,\Sigma)$
\Ensure An FA $A'=(Q',\delta',q_0',F',\Sigma^\mathtt{A}_{\tt\tiny Syn})$
\Procedure{\tt StmSyn}{$A$}
\State $q_0'=\delta(q_0,')$ $//\mbox{The first apex \mbox{$'$} is erased}$
\State $Q' \gets \{q_0'\};\ F'\gets F\cap\{q_0'\};\ \delta'\gets\varnothing$,\ Visited$\:\gets\{q_0'\}$;
\State \textsc{stmsyntr}$(q_0')$;
\EndProcedure
\Procedure{stmsyntr}{q}
\State $B\gets$\textsc{Build}$(A,q)$;
\State Visited $\gets$ Visited $\cup\{q\}$;\ $Q'\gets Q'\cup\sset{p}{(\mathbf{a},p)\in B}$;
\State $F'\gets Q'\cap F$;\ $\delta'\gets \delta'\cup\sset{(q,\mathbf{a},p)}{(\mathbf{a},p)\in B}$;
\State $W\gets\sset{p}{(\mathbf{a},p)\in B}\smallsetminus$Visited;
\While{$W\neq\varnothing$}
\State select $p$ in $W$ ($W\gets W\smallsetminus\{p\}$);
\State \textsc{stmsyntr}$(p)$;
\EndWhile
\EndProcedure
\end{algorithmic}}
\end{algorithm}
\begin{algorithm}
{\footnotesize\caption{Statements recognized from a state $q$.}\label{algoIq}
\begin{algorithmic}[1]
\Require An FA $A=(Q,\delta,q_0,F,\Sigma)$
\Ensure $I_q$, the set of all pairs (statement, reached state)
\Procedure{Build}{$A,q$}
\State $I_q \gets \varnothing$
\State $\textsc{buildtr}$(q,$\varepsilon$,$\varnothing$)
\EndProcedure
\Procedure{buildtr}{q,word,Mark}
\State $\Delta_q\gets\sset{(\sigma,p)}{\delta(q,\sigma)=p}$
\While{$\Delta_q\neq\varnothing$}
\State select $(\sigma,p)$ in $\Delta_q$ ($\Delta_q\gets\Delta_q\smallsetminus\{(\sigma,p)\}$)
\If{$(q,p)\notin$ Mark}
\If{$\sigma\notin\{\mbox{\small $;,\{,\},\mbox{\tt \$}$}\}\ \wedge\ p\notin F$}
\State $\textsc{buildtr}$(p,word.$\sigma$,Mark$\cup\{$(q,p)$\}$)
\EndIf
\If{$\sigma\in\{\mbox{\small $;,\{,\},\mbox{\tt \$}$}\}$} $I_q \gets I_q\cup\{($word$.\sigma,p)\}$
\EndIf
\If{$\sigma=\:'\ \wedge\ p\in F$} $I_q \gets I_q\cup\{($word$,p)\}$
\EndIf
\EndIf
\EndWhile
\EndProcedure
\end{algorithmic}}
\end{algorithm}
The idea of the algorithm is first to reach $q_0'$ from $q_0$ reading the symbol $'$, and then to perform, starting from $q_0'$, a visit of the states recursively identified by Algorithm~\ref{algoIq}, recursively replacing the sequences of edges that recognize a symbol in $\Sigma^\mathtt{A}_{\tt\tiny Syn}$ with a single edge labeled by the corresponding string. In particular, from $q_0'$ we reach the states computed by $\textsc{Build}(q_0')$, together with the corresponding read words. Recursively, we apply $\textsc{Build}$ to these states, following only those edges that have not already been visited. It is clear that, in this phase, all the non-executable strings not ending with a symbol in $\{\mbox{\small $;,\{,\},\mbox{\tt \$}$}\}$ are erased from the automaton, hence the set of non-executable strings is reduced. For instance, in Fig.~\ref{frexe5} we have the computation of $\texttt{StmSyn}(\overline{A})$, denoted $\overline{A}_{\mbox{\tt\tiny d}}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.35]{frexe3-10.jpg}
\end{center}
\caption{Automaton $\overline{A}_{\mbox{\tt\tiny d}}=\texttt{StmSyn}(\overline{A})$.}\label{frexe5}
\end{figure}
From the computational point of view, we can observe that the procedure $\textsc{Build}(A,q)$ executes a number of recursive-call sequences equal to the number of maximal acyclic paths starting from $q$ in $A$. The number of these paths can be computed as $\sum_{q \in Q} (\mathit{outDegree}(q) - 1) + 1$, where $\mathit{outDegree}(q)$ is the number of outgoing edges from $q$. The worst case depth of a recursive-call sequence is $|Q|$.
Thus, the worst case complexity of $\textsc{Build}$ (when $\mathit{outDegree}(q)=|Q| \times |\Sigma|$ for all $q \in Q$) is $O(|Q|^3)$. As far as $\texttt{StmSyn}$ is concerned, we can observe that in the worst case we keep in $\texttt{StmSyn(A)}$ all the $|Q|$ states of $A$; hence, in this case, we launch the procedure $\textsc{Build}$ $|Q|$ times, and therefore the worst case complexity of $\texttt{StmSyn}$ is $O(|Q|^4)$.\\
The next step consists in verifying whether the label of each edge in $\texttt{StmSyn}(\mathtt{A})$ is potentially executable, or a portion of an executable statement.
\paragraph*{Lex-Parser.}
In order to proceed with the analysis, we need to synthesize a program from $\mbox{\tt A}_{\mbox{\tt\tiny d+}}$ approximating the set of executable string values assumed by the string ${\tt s}$ at the program line $l$ where reflection is executed. This would allow us to replace the argument of the reflect with the synthesized program and use the same analyser (abstract interpreter) for the analysis of the generated code. Hence, we have to check whether each label in $\mbox{\tt A}_{\mbox{\tt\tiny d}}=\texttt{StmSyn}(\mathtt{A})$ is in the alphabet $\CommS^- \subseteq \Sigma^\mathtt{A}_{\tt\tiny Syn}$ of (partial) statements of $\CommS$:
\[
\CommS^-\triangleq \set{ \mbox{\bf skip};, x:={\tt e};, \textbf{if} ~{\tt b}~\{, \textbf{while} ~{\tt b}~ \{,\\ \textbf{reflect}({\tt s});, x := \textbf{reflect}({\tt s});,\},\mbox{\tt \$}}
\]
where ${\tt s},{\tt e},{\tt b}$ are expressions in the language $\CommS$. Hence, we need a parser for the language $\CommS^-$. This parser can be modelled as the composition of two SFTs. The first one, $\mathtt{Lex}$, has to recognise the lexemes of the language by identifying the language tokens. We consider the following set of tokens for $\CommS^-$; these tokens correspond to the terminals of $\CommS^-$, except for the punctuation symbols $\mathfrak{P} \triangleq \{\mbox{\small $;,\{,\},),\mbox{\tt \$}$}\}$, which are directly handled by the parser.
\vspace{-.2cm}
\[
\mathtt{Tokens}\triangleq\set{\texttt{id},\texttt{const}_{{\tt s}},\texttt{const}_{{\tt a}},\texttt{const}_{{\tt b}},\texttt{aop},\texttt{bop},\texttt{uop},\\\texttt{num},\texttt{len},\texttt{conc}_\delta,\texttt{substr},\texttt{relop},\texttt{if},\texttt{while},\\\texttt{assign},\texttt{skip},\texttt{reflect},\texttt{rand}}
\]
For each token $\texttt{T} \in \mathtt{Tokens}$, it is possible to define an SFT that recognises its possible lexemes and outputs the lexemes followed by the token name. Let us denote by $T_\texttt{T}$ the SFT that recognises the lexemes of the token $\texttt{T} \in\mathtt{Tokens}$; so, for example, $T_{\texttt{id}}$ is the SFT corresponding to the token $\texttt{id}$. The transduction $\fT_{\mathtt{Lex}}:\Sigma^\ast\longrightarrow (\Sigma \cup \mathtt{Tokens})^\ast$ is defined as:
\vspace{-.2cm}
\begin{multline*}
\fT_{\mathtt{Lex}}(\mathbf{a}) \triangleq \left \{
\begin{array}{ll}
\mathbf{a}^0 \texttt{T}^0 \texttt{p}^0 \mathbf{a}^1 \texttt{T}^1 \texttt{p}^1 \dots \mathbf{a}^n\texttt{T}^n \texttt{p}^n& \mbox{if }\mathbf{a} = \mathbf{a}^0 \cdot \texttt{p}^0 \cdot \mathbf{a}^1 \cdot \texttt{p}^1 \cdots \mathbf{a}^n \cdot \texttt{p}^n \in \Sigma^\ast\\
& \forall i \in [0,n]: \, \mathbf{a}^i \in \cL(T_{\texttt{T}^i}),\\
& \texttt{T}^i \in \mathtt{Tokens}, \texttt{p}^i \in \mathfrak{P}^\ast\\
\emptyset & \mbox{otherwise}\\
\end{array}
\right.
\end{multline*}
In order to build the $\mathtt{Parser}$, we also design the SFTs recognising the correct sequences of lexemes and tokens that build, respectively, arithmetic, Boolean and string expressions, and which correctly combine them in order to obtain objects in the language $\CommS^-$. Hence, $\mathtt{Parser}$ should implement a transduction function $\fT_{\mathtt{Parser}}: (\Sigma \cup \mathtt{Tokens})^\ast \rightarrow \Sigma^*$ such that: $\mathbf{a} \in \CommS^- \; \Rightarrow \; \fT_{\mathtt{Parser}}(\fT_{\mathtt{Lex}}(\mathbf{a})) = \mathbf{a}$. This means that the composition $\mathtt{Lex} \diamond \mathtt{Parser}$ accepts the sequences over $\Sigma^\mathtt{A}_{\tt\tiny Syn}$ which are in $\CommS^-$. The other implication does not hold, since $\mathtt{Parser}$ also allows sequences of commands of $\CommS^-$ that contain syntactic errors due to an erroneous number of punctuation symbols in $\mathfrak{P}$. This means that, for example, the sequence $\mstr{x\texttt{id}:= \texttt{assign}x\texttt{id}+\texttt{aop}1\texttt{const$_{\mbox{\tt a}}$};;;\mathit{skip}\texttt{skip}}$ is allowed by $\mathtt{Parser}$ and given in output as it is.
In Fig.~\ref{frexe3} we can find the automaton $\overline{A}_{\mbox{\tt\tiny d+}}$, which is $\overline{A}_{\mbox{\tt\tiny d}}$ where all the sequences which are not in $\CommS^-$ have been erased. This module is implemented in SEA using JavaCC \cite{jcclink}: given in input a BNF-style definition of a grammar $G$, it returns as output the parser for $G$.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.4]{frexe5.jpg}
\end{center}
\caption{Executable automaton $\overline{A}_{\mbox{\tt\tiny d+}}=\mathtt{Lex} \diamond\mathtt{Parser}(\overline{A}_{\mbox{\tt\tiny d}})$.}\label{frexe3}
\end{figure}
\paragraph*{Regex.}
The automaton obtained so far can be used to synthesize a program, by extracting the regular expression corresponding to the language it recognizes \cite{oz64}. Let $\textsl{RE}$ be the domain of regular expressions over $\CommS^-$, and $\mathtt{Regex}:\mbox{FA}\rightarrow\textsl{RE}$ be such an extractor. For instance, in the running example, $\overline{R}_{\mbox{\tt\tiny exp}}=\mathtt{Regex}(\overline{A}_{\mbox{\tt\tiny d+}})$ is the following regular expression (with standard operators in boldface):
\begin{center}
{\footnotesize
\begin{tabular}{l}
$\overline{R}_{\mbox{\tt\tiny exp}}=\ x:=x+1;\mbox{\tt \$}\:${\bf\large +}$\:\mbox{\bf while}\:x>5\:\{x:=x+1;y:=x;\};\mbox{\tt \$}$\\
{\bf +}$\:x:=x+1;y:=10;${\bf\large (}$x:=x+1;y:=10;${\bf\large )}$^*x:=x+1;\mbox{\tt \$}\:$
\end{tabular}}
\end{center}
SEA implements the Brzozowski algebraic method~\cite{oz64} to convert an automaton into an equivalent regular expression.
\paragraph*{ProgSyn.}
Finally, we define $\mathtt{ProgSyn}$, implementing the function $\trad{\cdot}_{\tt P}: \textsl{RE} \rightarrow \CommS$ that, given a regular expression ${\tt r} \in \textsl{RE}$, translates it into a program in $\CommS$.
This is defined in terms of a translation function $\trad{\cdot}:\textsl{RE}\rightarrow\Ccomms$ (erasing $\mbox{\tt \$}$), inductively defined on the structure of the regular expression ${\tt r}$. Let us denote by ${\tt d}_;$ the symbol ${\tt d}$ without the final $;$ (e.g., $(x:=x+1;)_;=x:=x+1$):
{\footnotesize
\[
\begin{array}{rl}
\trad{{\tt d}} = & {\tt d}_; \mbox{ if } {\tt d} \in \CommS^-\\
\trad{{\tt r}\mbox{\tt \$}} = &\trad{{\tt r}};\\
\vspace{.1cm}
\trad{{\tt r}_1{\tt r}_2} = & \trad{{\tt r}_1\!};\trad{{\tt r}_2}; \\
\vspace{.1cm}
\trad{\textbf{\large (}{\tt r}\textbf{\large )}^*} = & \left [
\begin{array}{l}
g:= \mbox{\bf rand}();\\
\mbox{\bf while}\ g = 1\ \{\trad{{\tt r}};g:= \mbox{\bf rand}();\};
\end{array}
\right.\\
\vspace{.0cm}
\trad{{\tt r}_1\textbf{\large +}{\tt r}_2} = & \left [
\begin{array}{l}
g:= \mbox{\bf rand}();\\
\textbf{if } g = 1 \, \{\trad{{\tt r}_1};\}; \textbf{if } g = 2 \, \{\trad{{\tt r}_2};\};
\end{array}
\right.
\end{array}
\]}
and $\trad{{\tt r}}_{\tt P}=\labb{\trad{{\tt r}}\mbox{\tt \$}}$. Hence, in our running example, the synthesis from the regular expression $\overline{R}_{\mbox{\tt\tiny exp}}$, i.e., $\overline{P}_{\mbox{\tt\tiny syn}}=\mathtt{ProgSyn}(\overline{R}_{\mbox{\tt\tiny exp}})$, is the program\\
\\
{\footnotesize
\begin{tabular}{l}
$\pp{1}g1:= \mbox{\bf rand}();$\\
$\pp{2}\mbox{\bf if}\ g1=1\ \{\pp{3}x:=x+1;\};$\\
$\pp{4}\mbox{\bf if}\ g1=2\ \{$\\
\qquad$\pp{5}g2:=\mbox{\bf rand}();$\\
\qquad$\pp{6}\mbox{\bf if}\ g2=1\ \{\pp{7}\mbox{\bf while}\ x>5\ \{\pp{8}x:=x+1;\pp{9}y:=x;\};\};$\\
\qquad$\pp{10}\mbox{\bf if}\ g2=2\ \{$\\
\qquad\qquad$\pp{11}x:=x+1;\pp{12}y:=10;\pp{13}g3:=\mbox{\bf rand}();$\\
\qquad\qquad$\pp{14}\mbox{\bf while}\ g3 = 1\ \{\pp{15}x:=x+1;\pp{16}y:=10;$\\
\qquad\qquad\qquad\qquad\qquad\ \ \ \ \ \: $\pp{17}g3:=\mbox{\bf rand}();\};$\\
\qquad\qquad$\pp{18}x:=x+1;\};$\\
\qquad$\};\pp{19}\mbox{\tt \$}$
\end{tabular}}
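A compact Python sketch of $\trad{\cdot}$ over a small regex AST (hypothetical constructors \texttt{lit}, \texttt{cat}, \texttt{star}, \texttt{alt}; fresh guard variables emulate the nondeterministic choices via $\mbox{\bf rand}$):
\begin{verbatim}
counter = 0
def fresh():
    """A fresh guard variable g1, g2, ... for each choice point."""
    global counter
    counter += 1
    return 'g%d' % counter

def synth(r):
    """Translate a regular expression over CommS^- into CommS code."""
    kind = r[0]
    if kind == 'lit':                     # ('lit', 'x:=x+1;')
        return r[1].rstrip(';')
    if kind == 'cat':                     # ('cat', r1, r2)
        return synth(r[1]) + '; ' + synth(r[2]) + ';'
    if kind == 'star':                    # ('star', r1)
        g = fresh()
        return ('%s := rand(); while %s = 1 { %s; %s := rand(); };'
                % (g, g, synth(r[1]), g))
    if kind == 'alt':                     # ('alt', r1, r2)
        g = fresh()
        return ('%s := rand(); if %s = 1 { %s; }; if %s = 2 { %s; };'
                % (g, g, synth(r[1]), g, synth(r[2])))
\end{verbatim}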
\paragraph*{Soundness.}
The next theorem proves the soundness of the approximate program synthesis: safety (i.e., prefix-closed) properties of dynamically generated code are soundly approximated by the synthesized program output by our analysis.
\begin{theorem}\label{th}
Let $\tt P\in\CommS$ be a program containing $\pp{l}\textbf{reflect}({\tt s})$, and let $\mathfrak{s}\in\Store$ be the store on which $\tt P$ is executed. Then, for any $\mathfrak{s}'\in\Store$ such that $\mathfrak{s}'_1=\mathfrak{s}_l$, the partial semantics of any statement in the evaluation of ${\tt s}$ executed from $\mathfrak{s}'$ is contained in the partial semantics of the synthesized program; formally,
{\small
\[
\begin{array}{l}
\forall{\tt c}\in \grasseb{{\tt s}} \mathfrak{s}_l\cap\overline{\CommS}.\:\\
\grasstr{{\tt c}}\mathfrak{s}'\subseteq \grasstr{\mathtt{ProgSyn}(\mathtt{Regex}(\mathtt{Lex}\diamond\mathtt{Parser}(\mathtt{StmSyn}(\mathtt{A}_{{\tt s}}^l))))}\mathfrak{s}'
\end{array}
\]}
where $\mathtt{A}_{{\tt s}}^l$ denotes the FA abstracting the value of ${\tt s}$ at program line $l$.
\end{theorem}
\paragraph*{Termination.}
As observed in Example~\ref{infTower}, the use of reflection suffers from the potential divergence of unbounded nested reflection, which goes beyond the control of widening. In this case, the divergence comes directly from the meaning of reflection and cannot be controlled by the semantics once we execute the reflect statement; hence our analysis would also diverge in this situation. SEA ensures soundness up to a maximal degree of nested calls to $\textbf{reflect}$.
In order to keep soundness beyond a maximal degree of nested reflections, we can introduce a widening with threshold, i.e., the widening acts after a given number of calls to the abstract interpreter. This corresponds to fixing a maximal allowed height of towers, fixing the degree of precision in observing the nesting of reflect statements. We fix a \emph{tower height threshold} $\tilde{\tau}$: any tower higher than $\tilde{\tau}$ is approximated by assuming any possible value for the program variables whose names are substrings of the string evaluated at level $\tilde{\tau}$, thereby guaranteeing soundness. In order to check the height of towers, we need to enrich the store with a new numerical variable $\tau$ counting the nesting level of reflection. Let $\Store^{\tilde{\tau}}:\pc{\tt P}\rightarrow \mathbb{M}^{\tilde{\tau}}$ be this enriched domain, where $\mathbb{M}^{\tilde{\tau}}:\ensuremath{\textsf{Var}}\cup\{\tau\}\rightarrow\wp(\mathbb{Z})\cup\mathsf{Bool}\cup\wp(\Sigma^*)\cup\mathbb{Z}$. Hence, we can define a new partial trace semantics $\grasstr{\cdot}^{\tilde{\tau}}$ on the transition system $\tuple{\Conf^{\tilde{\tau}},\leadsto^{\tilde{\tau}}}$ associated with programs in $\CommS$, where $\Conf^{\tilde{\tau}}=\CommS\times\Store^{\tilde{\tau}}$ is the set of states in the transition system and $\leadsto^{\tilde{\tau}}\subseteq\Conf^{\tilde{\tau}}\times\Conf^{\tilde{\tau}}$ is the transition relation. Note that the semantics of all statements is unchanged (assuming that $\mathbb{mem}_\varnothing\in\mathbb{M}^{\tilde{\tau}}$ maps $\tau$ to $0$), except for $\textbf{reflect}$, whose new rule counts the number of recursive calls to the abstract interpreter $\ensuremath{\mathsf{Int}}^{\#}$.
\section{Evaluation}\label{evaluation}
\begin{figure*}
\scalebox{0.62}{%
\vbox{%
\hspace{1.5cm}
\begin{tabular}{|l|l|l|}
\hline
P & TAJS analysis of $y$ & TAJS reflection of $y$ \\
\hline\hline
\begin{tabular}{l}
$y:=\mstr{x=x+1;};\textbf{reflect}(y);$
\end{tabular}
&$\mstr{x=x+1;}$&$x:=x+1;$\\
\hline
\begin{tabular}{l}
$\mbox{\bf if}\ x>0 \{y:=\mstr{a:=a+1;}\}; $\\
$\mbox{\bf if}\ x<0 \{y:=\mstr{b:=b+1;}\}; \textbf{reflect}(y);$\\
\\
\end{tabular}
&$\mathtt{String}$&$\mathtt{AnalysisLimitationException}$\\
\hline
\begin{tabular}{l}
$x:=1; \mathit{y}:= \:'\;';$\\
$\mbox{\bf while} \; x < 3 \; \{\mathit{y} := \mathit{y} \centerdot \mstr{x:=x+1;}; x:=x+1;\};$\\
$\mathit{y}:=\mathit{y}\centerdot\mstr{\$};\textbf{reflect} (\mathit{y});$\\
\\
\end{tabular}&$\mathtt{String}$&$\mathtt{AnalysisLimitationException}$\\
\hline
\end{tabular}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
P & SEA analysis of $y$ & SEA reflection of $y$\\
\hline\hline
\begin{tabular}{l}
$y:=\mstr{x=x+1;};\textbf{reflect}(y);$
\end{tabular}
& \includegraphics[scale=.3]{eval1.jpg}&\begin{tabular}{l}
$x:=x+1;\mbox{\tt \$}$\\
\\
\end{tabular}\\
\hline
\begin{tabular}{l}
$\mbox{\bf if}\ x>0 \{y:=\mstr{a:=a+1;}\}; $\\
$\mbox{\bf if}\ x<0 \{y:=\mstr{b:=b+1;}\}; \textbf{reflect}(y);$\\
\\
\end{tabular}
&\includegraphics[scale=.3]{eval2.jpg}&
\begin{tabular}{l}
$g1 := \mbox{\bf rand} ();$\\
$\mbox{\bf if}\ g1 = 1 \{a := a +1;\};$\\
$\mbox{\bf if}\ g1 = 2 \{b := b +1;\};\mbox{\tt \$}$\\
\\
\end{tabular}\\
\hline
\begin{tabular}{l}
$x:=1; \mathit{y}:= \:'\;';$\\
$\mbox{\bf while} \; x < 3 \; \{\mathit{y} := \mathit{y} \centerdot \mstr{x:=x+1;}; x:=x+1;\};$\\
$\mathit{y}:=\mathit{y}\centerdot\mstr{\$};\textbf{reflect} (\mathit{y});$\\
\\
\end{tabular}&\includegraphics[scale=.3]{eval3.jpg} &
\begin{tabular}{l}
$x:=x+1;g1:=\mbox{\bf rand}();$\\
1;g1:=\mbox{\bf rand}();$\\ $\mbox{\bf while}\ g1 = 1\; \{$\\ $\quad x:=x + 1;\ g1:=\mbox{\bf rand}(); \};\mbox{\tt \$}$\\ \\ \end{tabular}\\ \hline \end{tabular} \end{center} } } \caption{SEA \textit{vs} TAJS}\label{confronto} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[scale=.6]{no-reflection-graph.pdf}\hspace{2cm} \end{center} \begin{center} \includegraphics[scale=.6]{reflection-graph.pdf} \end{center} \caption{Execution times in secs.\ without reflection (top) and with reflection (bottom). We ran the tests on an Intel i5-4210u 2.20 GHz processor.}\label{fig:no-ref}\label{fig:ref} \end{figure*} SEA is a proof of concept, showing that it is possible to design and implement an efficient sound-by-construction static analyser, based on abstract interpretation, for self-modifying code written in high-level script-like languages. SEA is not intended to be optimal and directly applicable to existing dynamic scripting languages, such as PHP or JavaScript. We implemented SEA in Java 1.8 and tested it on some significant code examples in order to highlight the strengths and the weaknesses of the analyser. In particular, we evaluate the precision of our string abstract domain as compared to TAJS~\cite{tajs09,tajs-talk,tajs-tool} (version 0.9-8), which is one of the best static analysers available for JavaScript based on abstract interpretation. To the best of our knowledge, TAJS is the only tool statically analysing string-to-code primitives such as \textbf{eval}. Its approach basically consists of a sound transformation of a JavaScript program $P_{\mbox{\tt\tiny eval}}$, containing \textbf{eval}, into another JavaScript program $P_{\mbox{\tt\tiny uneval}}$ where the \textbf{eval} statement is substituted with its argument, converted into executable code, when this is possible, namely when the code to execute can be statically extracted as a constant from $P_{\mbox{\tt\tiny eval}}$. All examples in the next sections have been compiled from $\CommS$ into a semantically equivalent JavaScript program, in order to perform the comparison with TAJS. \subsection{Precision}\label{precision} We performed more than 100 tests on programs of variable length; 70 of them are used to test SEA's performance and will be addressed in Sect.~\ref{perf}. We observed that the results can be classified into three different classes, depending on some features of the analysed program. We report three significant examples in Fig.~\ref{confronto}, where we also compare SEA with TAJS. The first class of tests consists of all the programs where the string variables collect only one value during execution, i.e., they are constant string variables. A toy example in this class of programs is provided in the first row of Fig.~\ref{confronto}, where the string value contained in $y$ is hard-coded and constant. In this case, both SEA and TAJS are precise and no loss of information occurs during the analyses. By using the value of $y$ in SEA as input of $\textbf{reflect}$, we obtain exactly the statement $x:=x+1;$ since $\mathtt{Exe}^\sharp$ behaves as the identity function. TAJS performs the uneval transformation and executes the same statement. The second class of tests consists of all the programs where there are no constant string variables, namely programs containing variables whose value before the reflection is not precisely known, being a set of potential string values.
As a toy example of this class of programs, consider the snippet of code in the second row of Fig.~\ref{confronto}, a simplification of Example \ref{rexe1}. In this case, since we have no information about $x$, we must consider both branches, which means that before the reflection we only know that $y$ holds one of the two values $\mstr{a:=a+1;}$ and $\mstr{b:=b+1;}$. If we analyse this program in TAJS, we observe that, after the if statement, the value of $y$ is identified simply as a string: since TAJS does not compute a collecting semantics, when it loses the constant information it loses the whole value. Unfortunately, this loss of precision in the analysis of $y$ makes the TAJS analysis stuck, producing an exception when the reflection statement is called on the non-constant variable. On the other hand, SEA computes the collecting semantics and therefore keeps the least upper bound of the stores computed in each branch, obtaining the abstract value for $y$ modelled by the automaton $A_y$ in the second row, equivalent to the regular expression $\mstr{a:=a+1;} + \mstr{b:=b+1;}$. Afterwards, the SEA analyser returns and analyses the sound approximation of the program passed to the reflection statement, reported in the second row, which is the result of $\ensuremath{\mathsf{Exe}}^{\#}(A_y)$. In the last class of examples, the string that will be executed is not constant and is dynamically built during execution. In the simple example provided in Fig.~\ref{confronto}, the dynamically generated statement is $\mstr{x:=x+1; x:=x+1;}$. In this case, as before, TAJS loses the value of $y$ (which is a set of potential strings) and can only identify $y$ as a string. This means that, again, the reflection statement makes the analysis stuck, throwing an exception. On the other hand, SEA performs a sound over-approximation of the set of values computed in $y$. In particular, in order to guarantee termination and therefore decidability, the analysis computes widening instead of least upper bound between automata inside the loop. This clearly introduces imprecision, since it makes us lose control of the number of iterations. In particular, instead of computing the precise automaton containing only and all the possible string values for $y$ (as in the previous case), we compute an automaton strictly containing the concrete set of possible string values. The computed automaton is reported in the third row and is equivalent to the regular expression $\mbox{$\:'x:=x+1;(x:=x+1;)^*\;'$}$. The presence of possibly infinite sequences of $\mstr{x:=x+1;}$ is due to the over-approximation induced by the widening operator $\triangledown_3$ on automata. Nevertheless, note that the widening parameter can be chosen by the user in order to tune the desired precision degree of the analysis: the higher the parameter, the more precise and costly the analysis. It should be clear that the introduced loss of precision propagates to any further analysis that uses the synthesised code. The synthesis of the program from the abstract value of $y$ returns the code reported in the third row: due to widening, as observed above, the repetition of the command $\mstr{x:=x+1;}$ may diverge. A final observation on precision concerns the analysis of programs with unknown inputs. SEA considers an unknown input as a variable that may assume any possible string value. It is clear that in this kind of situation, TAJS necessarily gets stuck whenever something depending on this unknown input is executed. Instead, SEA can keep some information, since the abstract value consisting of {\em any possible value} is modelled by the automaton recognising $\Sigma^*$. In this way SEA can trace the string manipulations (substring and concatenation) performed on the unknown input during the execution.
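The different outcomes of TAJS and SEA on these programs boil down to the join operations of the underlying abstract domains. The following Python fragment is only an abstract illustration of this contrast (a constant lattice versus a collecting, set-based abstraction); the names are ours and do not reflect either tool's actual implementation.

{\footnotesize\begin{verbatim}
TOP = 'AnyString'                      # top element: "some unknown string"

def join_const(x, y):
    # constant-propagation style join: any disagreement loses the value
    return x if x == y else TOP

def join_collect(x, y):
    # collecting style join: keep the set of possible values
    return x | y

a, b = 'a:=a+1;', 'b:=b+1;'
print(join_const(a, b))                # AnyString: nothing left to reflect on
print(join_collect({a}, {b}))          # {'a:=a+1;', 'b:=b+1;'}: both kept
\end{verbatim}}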
\subsection{Performance}\label{perf} We have tested the performance of SEA on a benchmark of 70 increasingly complex programs. Each program manipulates a string value (an automaton, in the analysis) and finally reflects it. The benchmark can be clustered into four families, depending on the kind of string operations considered in the programs, which determine the kind of automata manipulations performed by the analysis: \textit{\textbf{add}} (programs where the manipulation of strings always adds new whole statements; in our analysis, this corresponds to adding completely new paths to the automaton), \textit{\textbf{concat}} (programs where the manipulation of strings concatenates new paths to those in the automaton), \textit{\textbf{mixed}} (programs performing both manipulations), \textit{\textbf{code}} (programs in the \textit{\textbf{add}} family, to which we have added statements not manipulating strings, i.e., standard code not affecting strings). Fig.~\ref{fig:no-ref}-top shows the results obtained from the benchmark concerning the string analysis without reflection: as the lines of code and the number of automaton states increase, the execution time grows with an almost linear trend. In Fig.~\ref{fig:ref}-bottom, we show the results of the string executability analysis. The total execution time increases more quickly than both the code length and the automata dimension. However, it is worth noting that most of the time increase is due to the execution of the code generated by $\ensuremath{\mathsf{Exe}}^{\#}$ (the top black portion of the bars in Fig.~\ref{fig:no-ref}-bottom), while the time cost of the analysis still increases with an almost linear trend. This outcome tells us that the string and executability analysis scales quite well on the original source code, but degrades on the code generated by $\ensuremath{\mathsf{Exe}}^{\#}$. We believe that this is due to the implementation of the synthesis, which does not optimise the code generated by $\ensuremath{\mathsf{Exe}}^{\#}$. The optimisation of the generated code is future work that deserves further investigation. \subsection{Analysis limitations and conclusions} SEA attacks an extremely hard problem in static program analysis, providing the very first proof of concept of sound static analysis for self-modifying code based on bounded reflection in a high-level script-like language. SEA has two main limitations: the simplicity of the analysed programming language, which misses some important language features such as procedure calls and objects, and the fact that $\CommS$ is not a real scripting language. The choice of keeping the language as simple as possible is due to the aim of designing a core analyser focusing mainly on string executability analysis. We are aware that, in order to implement a real-world static analyser, we will have to integrate several more sophisticated language features into our language, but this is beyond this proof of concept. For instance, an extension would be the possibility of allowing the implicit type conversion statements provided by many modern languages, such as PHP, JavaScript or Python.
As far as the second limitation is concerned, we already observed that this choice is due to the ambition of providing the most general possible architecture for string executability analysis. We believe that SEA is a step towards an implementation of a sound-by-construction analyser for reflection in real dynamic languages, since its design is fully language independent. In particular, the SEA architecture is independent of the choice of the string abstraction (in our case, FA), of the other state abstractions (in our case, intervals), and of the language features, provided that their formal semantics is given. \bibliographystyle{acm}
\section{Introduction} Runtime monitoring is one of the central tasks to provide \emph{operational decision support} to running business processes \cite{Aal16}. While traditional process mining techniques analyze event data of already completed process instances, operational support lifts process mining to running, live process executions, providing online feedback that can be used to influence the future continuations of such executions. In this setting, the goal of monitoring is to check on-the-fly whether a running process instance complies with business constraints and rules of interest, promptly detecting deviations \cite{LMM13}. Such indicators can, in turn, be used to compute different monitoring metrics, obtaining a succinct summary of the degree of compliance of a running process instance. In order to provide provably correct runtime monitoring techniques with a well-defined semantics and a solid formal background, monitoring is typically rooted in the field of \emph{formal verification}, the branch of formal methods aimed at checking whether a system meets some property of interest. Since the system is dynamic, properties are typically expressed by making use of temporal logics, that is, modal logics whose modal operators predicate over the evolution of the system along time. Among all the temporal logics used in verification, Linear-time Temporal Logic ({\sc ltl}\xspace) is particularly suited for monitoring, as an actual system execution is indeed a linear sequence of events. Since the {\sc ltl}\xspace semantics is given in terms of infinite traces, {\sc ltl}\xspace monitors analyze the trace of interest by considering it as the prefix of an infinite trace that will continue forever \cite{Bauer2010:LTL}. However, this hypothesis falls short in several contexts, where the usual assumption is that each trace produced by the system is in fact finite. This is often the case in Business Process Management (BPM), where each process instance is expected to eventually reach one of the foreseen ending states of the process \cite{PesV06}. In this setting, a monitored trace has to be considered as the prefix of an unknown, but still finitely long, trace. To handle this type of setting, finite-trace variants of {\sc ltl}\xspace have been introduced. In this work, we consider in particular the logic {\sc ltl}$_f$\xspace ({\sc ltl}\xspace on finite traces), investigated in detail in \cite{DegVa13}, and at the basis of one of the main declarative process modeling approaches: {\sc declare}\xspace \cite{PesV06,MPVC10,MMW11}. Following \cite{MMW11}, monitoring in {\sc ltl}$_f$\xspace amounts to checking whether the current execution belongs to the set of admissible \emph{prefixes} for the traces of a given {\sc ltl}$_f$\xspace formula $\varphi$. To achieve such a task, $\varphi$ is usually first translated into a corresponding finite-state automaton that exactly recognizes all and only those \emph{finite} traces that satisfy $\varphi$. Despite the presence of previous operational decision support techniques for monitoring {\sc ltl}$_f$\xspace constraints over finite traces \cite{MMW11,MWM12}, two main challenges have not yet been tackled in a systematic way. First of all, several alternative semantics have been proposed to make {\sc ltl}\xspace suitable for runtime verification, such as the de-facto standard RV monitor conditions \cite{Bauer2010:LTL}, which interpret {\sc ltl}\xspace formulae using four distinct truth values that account at once for the current trace and its possible future continuations.
Specifically, in the RV-{\sc ltl}\xspace framework, a formula is associated with a corresponding RV state, which may witness: \begin{inparaenum}[\it (i)] \item permanent violation (the formula is currently violated, and the violation cannot be repaired anymore); \item temporary violation (the formula is currently violated, but it is possible to continue the execution in a way that makes the formula satisfied); \item permanent satisfaction (the formula is currently satisfied and will stay satisfied no matter how the execution continues); \item temporary satisfaction (the formula is currently satisfied but may become violated in the future). \end{inparaenum} The main issue is that no comprehensive, formal framework based on finite-state automata is available to handle such RV states. On the one hand, this is because runtime verification for temporal logics typically focuses on the infinite-trace setting \cite{Bauer2010:LTL}, with the consequence that the corresponding automata-theoretic techniques detour to B\"uchi automata for building and using the monitors. On the other hand, the incorporation of such an RV semantics in a finite-trace setting has only been tackled so far with ad-hoc techniques. This is in particular the case of \cite{MMW11}, which operationally proposes to ``color'' automata to support the different RV states, but does not come with an underlying formal counterpart justifying the correctness of the approach. A second, fundamental challenge is the incorporation of advanced forms of monitoring, going beyond what can be expressed with {\sc ltl}$_f$\xspace. In particular, contemporary monitoring approaches do not systematically account for \emph{metaconstraints} that predicate on the RV state of other constraints. This is especially important in a monitoring setting, where it is often of interest to consider certain constraints only when specific circumstances arise, such as when other constraints become violated. For example, metaconstraints provide the basis for monitoring \emph{compensation constraints}, which can be considered as the temporal version of so-called \emph{contrary-to-duty obligations} \cite{PrS96} in normative reasoning, that is, obligations that are put in place only when other obligations have not been fulfilled. While this feature is considered to be a fundamental compliance monitoring functionality \cite{LMM13}, it is still an open challenge, without any systematic approach able to support it at the level of the constraint specification language. In this article, we attack these two challenges by proposing a formal and operational framework for the monitoring of properties expressed in {\sc ltl}$_f$\xspace and in its extension {\sc ldl}$_f$\xspace \cite{DegVa13}. {\sc ldl}$_f$\xspace is a powerful logic that completely captures Monadic Second-Order Logic on finite traces, which is, in turn, expressively equivalent to the language of regular expressions. {\sc ldl}$_f$\xspace does so by combining regular expressions with {\sc ltl}$_f$\xspace, adopting the syntax of propositional dynamic logic ({\sc pdl}\xspace). Interestingly, in spite of its greater expressivity, {\sc ldl}$_f$\xspace has exactly the same computational complexity as {\sc ltl}$_f$\xspace. At the same time, it provides a balanced integration between the expressiveness of regular expressions and the declarativeness of {\sc ltl}$_f$\xspace.
Our first, technical contribution is the formal development, accompanied by a proof-of-concept implementation, of an automata-theoretic framework for monitoring {\sc ltl}$_f$\xspace and {\sc ldl}$_f$\xspace constraints using the four truth values of the RV approach. We do this in two steps. In the first step, we devise a direct translation of {\sc ldl}$_f$\xspace (and hence of {\sc ltl}$_f$\xspace) formulae into nondeterministic automata, which avoids the usual detour to B\"uchi automata. The technique is grounded on alternating automata ({\sc afw}\xspace), but it actually avoids their introduction altogether: in fact, the technique directly produces a standard nondeterministic finite-state automaton ({\sc nfa}\xspace), which can then be manipulated using conventional automata techniques (such as determinization and minimization). In the second step, we show that {\sc ldl}$_f$\xspace is able to express, in the logic itself, special formulae that capture all RV monitoring conditions. More specifically, given an arbitrary {\sc ldl}$_f$\xspace formula $\varphi$, we show how to construct, for each RV monitor condition, another {\sc ldl}$_f$\xspace formula that characterizes all and only the traces culminating in a time point where $\varphi$ is associated with that RV state. By studying the four {\sc ldl}$_f$\xspace special formulae so obtained, we then describe how to construct a single automaton that, given a trace, outputs the RV state associated with $\varphi$ by that trace. This, in turn, provides for the first time a proof of correctness of the ``colored automata'' approach proposed in \cite{MMW11}. We exploit this meta-level ability of {\sc ldl}$_f$\xspace in our second, major contribution, which shows how to use the logic to capture \emph{metaconstraints}, and how to monitor them by relying on usual logical services instead of ad-hoc algorithms. Metaconstraints provide a well-founded, declarative basis to specify and monitor constraints depending on the monitoring state of other constraints. To concretely show the flexibility and sophistication of our approach, we introduce and study three interesting classes of metaconstraints. The first class is about \emph{contextualizing} a constraint, expressing that it has to be enforced only in those time points where another constraint is in a given RV state. The second class deals with two forms of the aforementioned \emph{compensation constraints}, which capture that a compensating constraint has to be monitored when another constraint becomes permanently violated. The third and last class targets the interesting case of \emph{conflicting constraints}, that is, constraints that, depending on the circumstances, may contradict each other. In particular, we show how to express a \emph{preference} on which constraint should be satisfied when a contradiction arises. In the final part of the paper, we report on how our monitoring framework has been concretely implemented and exposed as an operational decision support plug-in within \textsc{ProM}, one of the most widely adopted infrastructures for process mining.\footnote{\url{http://www.promtools.org/}} This article is a largely extended version of the conference paper \cite{DDGM14}. In relation to \cite{DDGM14}, we expand all technical parts, including here full proofs of the obtained results and a completely novel part on the construction of ``colored automata'' for monitoring.
In addition, we provide here a much more detailed account of metaconstraints, introducing three metaconstraint classes that have not yet been investigated in prior work. We also report here on the complete implementation of our monitoring framework. The rest of the article is structured as follows. In Section~\ref{sec:LTLf-LDLf}, we introduce the syntax and semantics of {\sc ldl}$_f$\xspace and {\sc ltl}$_f$\xspace. In Section~\ref{sec:automaton}, we then show how an {\sc ldl}$_f$\xspace/{\sc ltl}$_f$\xspace formula can be translated into a corresponding {\sc nfa}\xspace that accepts all and only the traces that satisfy the formula. In Section~\ref{sec:rtm}, we show how {\sc ldl}$_f$\xspace is able to capture the RV states in the logic itself, and employ the automata-theoretic approach developed in Section~\ref{sec:automaton} to construct RV monitors for {\sc ldl}$_f$\xspace/{\sc ltl}$_f$\xspace formulae. In Section~\ref{sec:monitoringDeclare}, we discuss how the resulting framework can be applied in the context of the {\sc declare}\xspace constraint-based process modeling approach. In Section~\ref{sec:monitoring-metaconstraints}, we turn to metaconstraints, introducing the three interesting metaconstraint classes of contextualization, compensation, and preference in case of conflict. The implementation of our monitoring framework in Java and \textsc{ProM} is reported in Section~\ref{sec:implementation}. Conclusions follow. \section{Linear Temporal Logics on Finite Traces} \label{sec:LTLf-LDLf} In this work, we adopt the standard {\sc ltl}\xspace and its extension {\sc ldl}\xspace, interpreted on finite traces. {\sc ltl}\xspace on finite traces, called {\sc ltl}$_f$\xspace \cite{DegVa13}, has exactly the same syntax as {\sc ltl}\xspace on infinite traces \cite{Pnueli77}. Namely, given a set $\P$ of propositional symbols, {\sc ltl}$_f$\xspace formulae are obtained through the following grammar: \[\varphi ::= \phi \mid \lnot \varphi \mid \varphi_1\land \varphi_2 \mid \varphi_1\lor \varphi_2 \mid \raisebox{0.4ex}{\tiny$\bigcirc$}\varphi \mid \raisebox{-0.27ex}{\LARGE$\bullet$}\varphi \mid \varphi_1\Until\varphi_2 \mid \varphi_1\mathop{\R}\varphi_2\] where $\phi$ is a propositional formula over $\P$, $\raisebox{0.4ex}{\tiny$\bigcirc$}$ is the \emph{next} operator, $\raisebox{-0.27ex}{\LARGE$\bullet$}$ is the \emph{weak next} operator, for which we have the equivalence $\raisebox{-0.27ex}{\LARGE$\bullet$}\varphi \equiv \lnot\raisebox{0.4ex}{\tiny$\bigcirc$}\lnot \varphi$ (notice that in the finite-trace case $\lnot\raisebox{0.4ex}{\tiny$\bigcirc$}\lnot \varphi \neq \raisebox{0.4ex}{\tiny$\bigcirc$} \varphi$), $\Until$ is the \emph{until} operator, and $\mathop{\R}$ is the \emph{release} operator, for which we have the equivalence $\varphi_1 \mathop{\R} \varphi_2\equiv \lnot (\lnot\varphi_1 \Until \lnot\varphi_2)$. In addition, we use common abbreviations. For example, \emph{eventually} $\Diamond\varphi$ abbreviates $\mathit{true} \Until \varphi$; and \emph{always} $\Box\varphi$ abbreviates $\mathit{false} \mathop{\R} \varphi$ or, equivalently, $\lnot\Diamond\lnot\varphi$. Notice that, for convenience and without loss of generality, we allow negation only in front of propositional formulae, i.e., we essentially assume the temporal formulae to be in negation normal form (NNF). An arbitrary temporal formula can be put in NNF in linear time.
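For illustration, negation normal form can be computed by the following Python sketch, where formulae are encoded as nested tuples (an encoding of ours); it relies only on the dualities recalled above, namely double negation, De Morgan, the next/weak-next duality, and the until/release duality.

{\footnotesize\begin{verbatim}
# formulae: ('atom', p) | ('not', f) | ('and', f, g) | ('or', f, g)
#         | ('X', f) | ('WX', f) | ('U', f, g) | ('R', f, g)
def nnf(f):
    """Push negations inward until they occur only in front of atoms."""
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'atom': return f                            # stop at atoms
        if g[0] == 'not':  return nnf(g[1])                    # double negation
        if g[0] == 'and':  return ('or',  nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':   return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'X':    return ('WX', nnf(('not', g[1])))   # not X f = WX not f
        if g[0] == 'WX':   return ('X',  nnf(('not', g[1])))
        if g[0] == 'U':    return ('R', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'R':    return ('U', nnf(('not', g[1])), nnf(('not', g[2])))
    if f[0] in ('and', 'or', 'U', 'R'):
        return (f[0], nnf(f[1]), nnf(f[2]))
    if f[0] in ('X', 'WX'):
        return (f[0], nnf(f[1]))
    return f                                                   # atom
\end{verbatim}}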
The semantics of {\sc ltl}$_f$\xspace is given in terms of \emph{finite traces} denoting finite, \emph{possibly empty}, sequences $\pi=\pi_0,\ldots,\pi_n$ of elements from the alphabet $2^\P$, containing all possible propositional interpretations of the propositional symbols in $\P$. We denote the length of the trace $\pi$ as $\mathit{length}(\pi)\doteq n+1$. We denote as $\pi(i)\doteq \pi_i$ the $i$-th step in the trace. If the trace is shorter and does not include an $i$-th step, $\pi(i)$ is undefined. We denote by $\pi(i,j)\doteq \pi_i,\pi_{i+1},\ldots,\pi_{j-1}$ the segment of the trace $\pi$ starting at the $i$-th step and ending at the $j$-th step (excluded). If $j > \mathit{length}(\pi)$ then $\pi(i,j) = \pi(i,\mathit{length}(\pi))$. For every $j \le i$, we have $\pi(i,j) = \epsilon$, i.e., the empty trace. Notice that here, differently from \cite{DegVa13}, we allow the empty trace $\epsilon$, as in \cite{BrDP18}. This is convenient for composing monitors, as will become clear later on in the article. Given a finite trace $\pi$, we inductively define when an {\sc ltl}$_f$\xspace formula $\varphi$ \emph{is true} at a step $i$, written $\pi,i\models\varphi$, as follows (we include abbreviations for convenience): \begin{itemize} \item $\pi,i\models \phi$ ~iff~ $0\leq i < \mathit{length}(\pi)$ and $\pi(i)\models\phi$ \quad\text{($\phi$ propositional)}; \item $\pi,i\models \lnot \varphi$ ~iff~ $\pi,i\not\models \varphi$; \item $\pi,i\models \varphi_1\land\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ and $\pi,i\models\varphi_2$; \item $\pi,i\models \varphi_1\lor\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ or $\pi,i\models\varphi_2$; \item $\pi,i\models \raisebox{0.4ex}{\tiny$\bigcirc$}\varphi$ ~iff~ $0\leq i<\mathit{length}(\pi)-1$ and $\pi,i{+}1\models\varphi$; \item $\pi,i\models \raisebox{-0.27ex}{\LARGE$\bullet$}\varphi$ ~iff~ $0\leq i<\mathit{length}(\pi)-1$ implies $\pi,i{+}1\models\varphi$; \item $\pi,i\models \Diamond\varphi$ ~iff~ for some $j$ s.t.\ $0\leq i\leq j <\mathit{length}(\pi)$, we have $\pi,j\models\varphi$; \item $\pi,i\models \Box\varphi$ ~iff~ for all $j$ s.t.\ $0\leq i\leq j < \mathit{length}(\pi)$, we have $\pi,j\models\varphi$; \item $\pi,i\models \varphi_1\Until\varphi_2$ ~iff~ for some $j$ s.t.\ $0\leq i\leq j < \mathit{length}(\pi)$, we have $\pi,j\models\varphi_2$, and for all $k$, $i\leq k<j$, we have $\pi,k\models\varphi_1$; \item $\pi,i\models \varphi_1\mathop{\R}\varphi_2$ ~iff~ for all $j$ s.t.\ $0\leq i\leq j < \mathit{length}(\pi)$, either we have $\pi,j\models\varphi_2$ or for some $k$, $i\leq k<j$, we have $\pi,k\models\varphi_1$. \end{itemize} Observe that for $i\ge\mathit{length}(\pi)$, hence, e.g., for $\pi=\epsilon$, we get: \begin{itemize} \item $\pi,i\not\models \phi$ \quad\text{($\phi$ propositional)}; \item $\pi,i\models \lnot\varphi$ ~iff~ $\pi,i\not\models\varphi$; \item $\pi,i\models \varphi_1\land\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ and $\pi,i\models\varphi_2$; \item $\pi,i\models \varphi_1\lor\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ or $\pi,i\models\varphi_2$; \item $\pi,i\not\models \raisebox{0.4ex}{\tiny$\bigcirc$}\varphi$; \item $\pi,i\models \raisebox{-0.27ex}{\LARGE$\bullet$}\varphi$; \item $\pi,i\not\models \Diamond\varphi$; \item $\pi,i\models \Box\varphi$; \item $\pi,i\not\models \varphi_1\Until\varphi_2$; \item $\pi,i\models \varphi_1\mathop{\R}\varphi_2$. \end{itemize}
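The clauses above translate directly into a naive evaluator, which we sketch in Python for illustration (it is exponential in general, so only suitable for small traces; formulae use the tuple encoding of the $\mathit{nnf}$ sketch above, traces are lists of sets of true propositions, and we assume $i\ge 0$):

{\footnotesize\begin{verbatim}
def holds(trace, i, f):
    """pi, i |= f for LTLf, mirroring the semantic clauses above."""
    n, tag = len(trace), f[0]
    if tag == 'atom': return i < n and f[1] in trace[i]
    if tag == 'not':  return not holds(trace, i, f[1])
    if tag == 'and':  return holds(trace, i, f[1]) and holds(trace, i, f[2])
    if tag == 'or':   return holds(trace, i, f[1]) or holds(trace, i, f[2])
    if tag == 'X':    return i < n - 1 and holds(trace, i + 1, f[1])
    if tag == 'WX':   return i >= n - 1 or holds(trace, i + 1, f[1])
    if tag == 'U':    return any(holds(trace, j, f[2]) and
                                 all(holds(trace, k, f[1]) for k in range(i, j))
                                 for j in range(i, n))
    if tag == 'R':    return all(holds(trace, j, f[2]) or
                                 any(holds(trace, k, f[1]) for k in range(i, j))
                                 for j in range(i, n))
    raise ValueError(tag)

# e.g., holds([{'a'}, {'b'}], 0, ('U', ('atom', 'a'), ('atom', 'b')))  ->  True
\end{verbatim}}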
It is known that {\sc ltl}$_f$\xspace is as expressive as First-Order Logic over finite traces, hence strictly less expressive than regular expressions, which, in turn, are as expressive as Monadic Second-Order logic over finite traces. On the other hand, regular expressions are too low-level a formalism for expressing temporal specifications, since, for example, they miss a direct construct for negation and for conjunction~\cite{DegVa13}. To be as expressive as regular expressions and, at the same time, convenient as a temporal logic, \emph{Linear Dynamic Logic of Finite Traces}, or {\sc ldl}$_f$\xspace, has been proposed in \cite{DegVa13}. This logic is as natural as {\sc ltl}$_f$\xspace, but with the full expressive power of Monadic Second-Order logic over finite traces. {\sc ldl}$_f$\xspace is obtained by merging {\sc ltl}$_f$\xspace with regular expressions through the syntax of the well-known logic of programs {\sc pdl}\xspace, \emph{Propositional Dynamic Logic} \cite{FiLa79,HaKT00}, but adopting a semantics based on finite traces. {\sc ldl}$_f$\xspace is an adaptation of {\sc ldl}\xspace, introduced in \cite{Var11}, which, like {\sc ltl}\xspace, is interpreted over infinite traces. Formally, {\sc ldl}$_f$\xspace formulae are built as follows: \[\begin{array}{lcl} \varphi &::=& \mathtt{true} \mid \mathtt{false}\mid \lnot\varphi \mid \varphi_1 \land \varphi_2 \mid \varphi_1 \lor \varphi_2 \mid \DIAM{\rho}\varphi \mid \BOX{\rho}\varphi \\ \rho &::=&\phi \mid \varphi? \mid \rho_1 + \rho_2 \mid \rho_1; \rho_2 \mid \rho^* \end{array} \] where $\mathtt{true}$ and $\mathtt{false}$ denote respectively the true and the false {\sc ldl}$_f$\xspace formula (not to be confused with the propositional formulae $\mathit{true}$ and $\mathit{false}$); $\phi$ denotes propositional formulae over $\P$; $\rho$ denotes path expressions, which are regular expressions over propositional formulae $\phi$ over $\P$, with the addition of the test construct $\varphi?$ typical of {\sc pdl}\xspace, used to insert into the execution path checks for the satisfaction of additional {\sc ldl}$_f$\xspace formulae; and $\varphi$ stands for {\sc ldl}$_f$\xspace formulae built by applying boolean connectives and the modal operators $\DIAM{\rho}\varphi$ and $\BOX{\rho}\varphi$. These two operators are linked by the equivalence $\BOX{\rho}\varphi\equiv\lnot\DIAM{\rho}{\lnot\varphi}$. Intuitively, $\DIAM{\rho}\varphi$ states that, from the current step in the trace, there exists an execution satisfying the regular expression $\rho$ such that its last step satisfies $\varphi$. Instead, $\BOX{\rho}\varphi$ states that, from the current step, all executions satisfying the regular expression $\rho$ are such that their last step satisfies $\varphi$. Notice that {\sc ldl}$_f$\xspace, as defined above, does not include propositional formulae $\phi$ as {\sc ldl}$_f$\xspace formulae, but only as path expressions. However, they can be immediately introduced as abbreviations: $\phi \doteq \DIAM{\phi}\mathtt{true}$. For example, to say that eventually proposition $a$ holds, instead of writing $\DIAM{\mathit{true}^*}a$, we can write $\DIAM{\mathit{true}^*;a}\mathtt{true}$. This is analogous to what happens in (extensions with regular expressions of) XPath, a well-known formalism developed for navigating XML documents and graph databases \cite{ClDe99b,Marx04b,CDLV09}.
We may keep $\phi$ as an {\sc ldl}$_f$\xspace formula for convenience; however, we have to be careful about the difference we get if we apply negation to the propositional formula $\phi$ or to $\DIAM{\phi}\mathtt{true}$. In the first case, we get $\lnot\phi$, which is equivalent to $\DIAM{\lnot \phi}\mathtt{true}$. In the second case, we get $\BOX{\phi}\mathtt{false}$, which is equivalent to $\BOX{\mathit{true}}\mathtt{false} \lor \DIAM{\lnot \phi}\mathtt{true}$, and which says that either the (remaining) trace is empty or $\lnot\phi$ holds in the current state. We drop the use of $\phi$ to avoid this ambiguity. It is also convenient to introduce the following abbreviations, specific to dealing with the finiteness of the traces: $\mathit{end}=\BOX{\mathit{true}}\mathtt{false}$, which denotes that the trace has been completed (the current instant is out of the range of the trace, or the remaining fragment of the trace is empty); and $\mathit{last}= \DIAM{\mathit{true}}\mathit{end}$, which denotes the last step of the trace. As for {\sc ltl}$_f$\xspace, the semantics of {\sc ldl}$_f$\xspace is given in terms of \emph{finite traces}, denoting a finite, \emph{possibly empty}, sequence of consecutive steps in the trace, i.e., finite words $\pi$ over the alphabet $2^\P$, containing all possible propositional interpretations of the propositional symbols in $\P$. The semantics of {\sc ldl}$_f$\xspace is given in the following. An {\sc ldl}$_f$\xspace formula $\varphi$ \emph{is true} at a step $i$, in symbols $\pi,i\models\varphi$, if: \begin{itemize} \vspace{-0.2cm} \item $\pi,i\models \mathtt{true}$; \item $\pi,i\not\models \mathtt{false}$; \item $\pi,i\models \lnot \varphi$ ~iff~ $\pi,i\not \models\varphi$; \item $\pi,i\models \varphi_1\land\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ and $\pi,i\models\varphi_2$; \item $\pi,i\models \varphi_1\lor\varphi_2$ ~iff~ $\pi,i\models\varphi_1$ or $\pi,i\models\varphi_2$; \item $\pi,i\models \DIAM{\rho}\varphi$ ~iff~ for some $j$ s.t.\ $i\leq j$, we have $\pi(i,j)\in \L(\rho)$ and $\pi,j\models\varphi$; \item $\pi,i\models \BOX{\rho}\varphi$ ~iff~ for all $j$ s.t.\ $i\leq j$ and $\pi(i,j)\in \L(\rho)$, we have $\pi,j\models\varphi$. \end{itemize} The relation $\pi(i,j)\in\L(\rho)$ is defined inductively as follows: \begin{itemize} \item $\pi(i,j)\in\L(\phi)$ if $j=i+1~\text{and}\; 0\leq i < \mathit{length}(\pi) \;\text{and}\; \pi(i)\models \phi \quad \mbox{($\phi$ propositional)}$; \item $\pi(i,j)\in\L(\varphi?)$ if $j=i\;\text{and}\; \pi, i\models \varphi$; \item $\pi(i,j)\in\L(\rho_1+ \rho_2)$ if $\pi(i,j)\in\L(\rho_1)\;\text{or}\; \pi(i,j)\in\L(\rho_2)$; \item $\pi(i,j)\in\L(\rho_1; \rho_2)$ if $\mbox{ exists } k \mbox{ s.t.\ } \pi(i,k)\in\L(\rho_1) \mbox{ and } \pi(k,j)\in\L(\rho_2)$; \item $\pi(i,j)\in\L(\rho^*)$ if $j=i\; \text{or} \mbox{ exists } k \mbox{ s.t.\ } \pi(i,k)\in\L(\rho) \mbox{ and } \pi(k,j)\in\L(\rho^*)$. \end{itemize} Note that if $i\ge\mathit{length}(\pi)$, hence, e.g., for $\pi=\epsilon$, the above definitions still apply; though, e.g., $\DIAM{\phi}\varphi$ ($\phi$ propositional) becomes trivially false, while $\BOX{\phi}\varphi$ becomes trivially true. As usual, we write $\pi \models \varphi$ as a shortcut for $\pi,0 \models \varphi$.
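Also the {\sc ldl}$_f$\xspace clauses and the path relation $\L(\rho)$ can be rendered as two mutually recursive functions; the following Python sketch is merely an executable reading of the definitions above, on a tuple encoding of ours (propositional formulae in paths are restricted to single atoms for brevity). The only liberty taken is in the $\rho^*$ case, where we require strict progress ($k>i$) to ensure termination; this does not change the recognized segments, since iterations matching an empty segment are idempotent.

{\footnotesize\begin{verbatim}
# formulae: ('tt',) | ('ff',) | ('not', f) | ('and', f, g) | ('or', f, g)
#         | ('diam', rho, f) | ('box', rho, f)
# paths: ('prop', p) | ('test', f) | ('alt', r, r') | ('seq', r, r') | ('star', r)
def sat(tr, i, f):
    tag = f[0]
    if tag == 'tt':  return True
    if tag == 'ff':  return False
    if tag == 'not': return not sat(tr, i, f[1])
    if tag == 'and': return sat(tr, i, f[1]) and sat(tr, i, f[2])
    if tag == 'or':  return sat(tr, i, f[1]) or sat(tr, i, f[2])
    js = range(i, len(tr) + 1)                 # candidate segment endpoints
    if tag == 'diam':
        return any(path(tr, i, j, f[1]) and sat(tr, j, f[2]) for j in js)
    if tag == 'box':
        return all(sat(tr, j, f[2]) for j in js if path(tr, i, j, f[1]))
    raise ValueError(tag)

def path(tr, i, j, r):
    """pi(i, j) in L(r)?  Mirrors the inductive definition above."""
    tag = r[0]
    if tag == 'prop': return j == i + 1 and i < len(tr) and r[1] in tr[i]
    if tag == 'test': return j == i and sat(tr, i, r[1])
    if tag == 'alt':  return path(tr, i, j, r[1]) or path(tr, i, j, r[2])
    if tag == 'seq':  return any(path(tr, i, k, r[1]) and path(tr, k, j, r[2])
                                 for k in range(i, j + 1))
    if tag == 'star': return j == i or any(path(tr, i, k, r[1]) and
                                           path(tr, k, j, r)
                                           for k in range(i + 1, j + 1))
    raise ValueError(tag)

# sat([{'a'}, {'b'}], 0, ('diam', ('seq', ('prop','a'), ('prop','b')), ('tt',)))
# ->  True
\end{verbatim}}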
It is easy to encode {\sc ltl}$_f$\xspace into {\sc ldl}$_f$\xspace: we can define a translation function $\mathit{tr}$, by induction on the structure of the {\sc ltl}$_f$\xspace formula, as follows: \[\begin{array}{rcl} \mathit{tr}(\phi) &=& \DIAM{\phi}\mathtt{true} \quad\text{($\phi$ propositional)}\\ \mathit{tr}(\lnot \varphi) &=& \lnot \mathit{tr}(\varphi) \\ \mathit{tr}(\varphi_1\land \varphi_2) &=& \mathit{tr}(\varphi_1)\land \mathit{tr}(\varphi_2)\\ \mathit{tr}(\varphi_1\lor \varphi_2) &=& \mathit{tr}(\varphi_1)\lor \mathit{tr}(\varphi_2)\\ \mathit{tr}(\raisebox{0.4ex}{\tiny$\bigcirc$}\varphi) &=& \DIAM{\mathit{true}}(\mathit{tr}(\varphi) \land \lnot \mathit{end}) \\ \mathit{tr}(\raisebox{-0.27ex}{\LARGE$\bullet$}\varphi) &=& \mathit{tr}( \lnot(\raisebox{0.4ex}{\tiny$\bigcirc$} (\lnot \varphi)))\\ \mathit{tr}(\Diamond \varphi) &=& \DIAM{\mathit{true}^*} (\mathit{tr}(\varphi) \land \lnot \mathit{end}) \\ \mathit{tr}(\Box \varphi) &=& \mathit{tr} (\lnot(\Diamond(\lnot \varphi)) )\\ \mathit{tr}(\varphi_1\Until \varphi_2) &=& \DIAM{(\mathit{tr}(\varphi_1)?;\mathit{true})^*}(\mathit{tr}(\varphi_2) \land \lnot \mathit{end})\\ \mathit{tr}(\varphi_1\mathop{\R} \varphi _2) &=& \mathit{tr}( \lnot (\lnot \varphi_1 \Until \lnot \varphi_2)) \end{array} \] Here, $\mathit{nnf}(\psi)$ denotes the function that transforms $\psi$ by pushing negation inside, until it occurs only in front of atomic propositions; it will also be used in the automaton construction of the next section. It is also easy to encode regular expressions, used as a specification formalism for traces, into {\sc ldl}$_f$\xspace: $\rho$ translates to $\DIAM{\rho} \mathit{end}$. Recall that a trace satisfies an {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace formula $\varphi$, written $\pi\models \varphi$, if $\pi,0\models \varphi$; note that if $\pi$ is the empty trace, and hence $0$ is out of range, the notion of $\pi,0\models \varphi$ is still well defined. Sometimes we denote by $\L(\varphi)$ the set of traces that satisfy $\varphi$, i.e., $\L(\varphi) = \{\pi\mid \pi \models \varphi\}$.
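For concreteness, the translation $\mathit{tr}$ can be sketched in Python on the same tuple encodings used above (again, encodings of ours); here $\mathit{end}$ is rendered as $\BOX{\mathit{true}}\mathtt{false}$, with the propositional formula $\mathit{true}$ encoded as the path constructor \texttt{('ptrue',)}:

{\footnotesize\begin{verbatim}
PTRUE = ('ptrue',)                     # the propositional formula "true" as a path
END   = ('box', PTRUE, ('ff',))        # end = [true]ff
TT    = ('tt',)

def tr(f):
    """LTLf -> LDLf, mirroring the translation table above."""
    tag = f[0]
    if tag == 'atom': return ('diam', ('prop', f[1]), TT)
    if tag == 'not':  return ('not', tr(f[1]))
    if tag == 'and':  return ('and', tr(f[1]), tr(f[2]))
    if tag == 'or':   return ('or', tr(f[1]), tr(f[2]))
    if tag == 'X':    return ('diam', PTRUE, ('and', tr(f[1]), ('not', END)))
    if tag == 'WX':   return tr(('not', ('X', ('not', f[1]))))
    if tag == 'F':    return ('diam', ('star', PTRUE),
                              ('and', tr(f[1]), ('not', END)))
    if tag == 'G':    return tr(('not', ('F', ('not', f[1]))))
    if tag == 'U':    return ('diam', ('star', ('seq', ('test', tr(f[1])), PTRUE)),
                              ('and', tr(f[2]), ('not', END)))
    if tag == 'R':    return tr(('not', ('U', ('not', f[1]), ('not', f[2]))))
    raise ValueError(tag)
\end{verbatim}}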
\section{\MakeLowercase{{\sc ldl}$_f$\xspace} Automaton} \label{sec:automaton} We can associate with each {\sc ldl}$_f$\xspace formula $\varphi$ an (exponential) {\sc nfa}\xspace $A(\varphi)$ that accepts exactly the traces that make $\varphi$ true. Here, we provide a simple direct algorithm for computing the {\sc nfa}\xspace corresponding to an {\sc ldl}$_f$\xspace formula. The correctness of the algorithm is based on the fact that (\emph{i})\xspace we can associate with each {\sc ldl}$_f$\xspace formula $\varphi$ a polynomial \emph{alternating automaton on words} ({\sc afw}\xspace) that accepts exactly those traces that make $\varphi$ true \cite{DegVa13}, and (\emph{ii})\xspace every {\sc afw}\xspace can be transformed into an {\sc nfa}\xspace, see, e.g., \cite{DegVa13}. However, to formulate the algorithm, we do not need these notions, as we can work directly on the {\sc ldl}$_f$\xspace formula. Specifically, we define an auxiliary function $\delta$ as in Figure~\ref{fig:delta}, which takes an {\sc ldl}$_f$\xspace formula $\psi$ (in negation normal form) and a propositional interpretation $\Pi$ for $\P$, or the special symbol $\epsilon$, and returns a positive boolean formula whose atoms are (quoted) sub-formulae of $\psi$. \begin{figure}[t!] \begin{align*} \delta(\mathtt{true},\Pi) & = \mathit{true}\\ \delta(\mathtt{false},\Pi) & = \mathit{false}\\ \delta(\phi,\Pi) &= \delta(\DIAM{\phi}\mathtt{true},\Pi) \quad \mbox{($\phi$ prop.)}\\ \delta(\varphi_1\land\varphi_2,\Pi) & = \delta(\varphi_1,\Pi) \land \delta(\varphi_2,\Pi)\\ \delta(\varphi_1\lor\varphi_2,\Pi) & = \delta(\varphi_1,\Pi) \lor \delta(\varphi_2,\Pi)\\%[5pt] \delta(\DIAM{\phi}\varphi,\Pi) & = \left\{\hspace{-1ex}\begin{array}{l} \textbf{\textit{\texttt{E}}}(\varphi) \mbox{ if } \Pi \models \phi \quad \mbox{($\phi$ prop.)}\\ \mathit{false} \mbox{ if } \Pi\not\models \phi \end{array}\right.\\[5pt] \delta(\DIAM{\psi?}{\varphi},\Pi) & = \delta(\psi,\Pi) \land \delta(\varphi,\Pi)\\ \delta(\DIAM{\rho_1+\rho_2}{\varphi},\Pi) & = \delta(\DIAM{\rho_1}\varphi,\Pi)\lor\delta(\DIAM{\rho_2}\varphi,\Pi)\\ \delta(\DIAM{\rho_1;\rho_2}{\varphi},\Pi) & = \delta(\DIAM{\rho_1}\DIAM{\rho_2}\varphi,\Pi)\\ \delta(\DIAM{\rho^*}\varphi,\Pi) & = \delta(\varphi,\Pi) \lor \delta(\DIAM{\rho}\textbf{\textit{\texttt{F}}}_{\DIAM{\rho^*}\varphi},\,\Pi)\\%[5pt] \delta(\BOX{\phi}\varphi,\Pi) & = \left\{\hspace{-1ex}\begin{array}{l} \textbf{\textit{\texttt{E}}}(\varphi) \mbox{ if } \Pi \models \phi \quad \mbox{($\phi$ prop.)}\\ \mathit{true} \mbox{ if } \Pi\not\models \phi \end{array}\right.\\[5pt] \delta(\BOX{\psi?}{\varphi},\Pi) & = \delta(\mathit{nnf}(\lnot\psi),\Pi) \lor \delta(\varphi,\Pi)\\ \delta(\BOX{\rho_1+\rho_2}{\varphi},\Pi) & = \delta(\BOX{\rho_1}\varphi,\Pi)\land\delta(\BOX{\rho_2}\varphi,\Pi)\\ \delta(\BOX{\rho_1;\rho_2}{\varphi},\Pi) & = \delta(\BOX{\rho_1}\BOX{\rho_2}\varphi,\Pi)\\%[5pt] \delta(\BOX{\rho^*}\varphi,\Pi) & = \delta(\varphi,\Pi) \land \delta(\BOX{\rho}\textbf{\textit{\texttt{T}}}_{\BOX{\rho^*}\varphi},\,\Pi)\\%[5pt] \delta(\textbf{\textit{\texttt{F}}}_{\psi},\Pi) & = \mathit{false}\\ \delta(\textbf{\textit{\texttt{T}}}_{\psi},\Pi) & = \mathit{true} \end{align*} \caption{Definition of $\delta$, where $\textbf{\textit{\texttt{E}}}(\varphi)$ recursively replaces in $\varphi$ all occurrences of atoms of the form $\textbf{\textit{\texttt{T}}}_\psi$ and $\textbf{\textit{\texttt{F}}}_\psi$ by $\psi$.}\label{fig:delta} \end{figure} Note that, for defining $\delta$, we make use of extra symbols of the form $\textbf{\textit{\texttt{F}}}_{\DIAM{\rho^*}\varphi}$ and $\textbf{\textit{\texttt{T}}}_{\BOX{\rho^*}\varphi}$, for handling formulae $\DIAM{\rho^*}\varphi$ and $\BOX{\rho^*}\varphi$. Such extra symbols act in $\delta$ as if they were additional states, except that during the recursive computation of $\delta$ they disappear, either because they are evaluated to $\mathit{true}$ or $\mathit{false}$, or because they are syntactically replaced by their subscript formulae $\DIAM{\rho^*}\varphi$ and $\BOX{\rho^*}\varphi$, respectively, when a new state is returned. For the latter, we use the auxiliary function $\textbf{\textit{\texttt{E}}}(\varphi)$, which takes as input a formula $\varphi$ with these extra symbols $\textbf{\textit{\texttt{T}}}_\psi$ and $\textbf{\textit{\texttt{F}}}_\psi$ used as additional atomic propositions, and recursively substitutes all their occurrences with the formula $\psi$ itself. Notice also that, for $\phi$ propositional, $\delta(\phi,\Pi) = \delta(\DIAM{\phi}\mathtt{true},\Pi)$, as a consequence of the equivalence $\phi\equiv\DIAM{\phi}\mathtt{true}$.
The auxiliary function $\delta(\varphi,\epsilon)$, i.e., for the case in which the (remaining fragment of the) trace is empty, is defined exactly as in Figure~\ref{fig:delta}, except for the following base cases: \begin{align*} \delta(\DIAM{\phi}\varphi,\epsilon) &=\mathit{false} \quad \mbox{($\phi$ propositional)}\\ \delta(\BOX{\phi}\varphi,\epsilon) &= \mathit{true} \quad \;\mbox{($\phi$ propositional)} \end{align*} Note that $\delta(\varphi,\epsilon)$ always evaluates to either $\mathit{true}$ or $\mathit{false}$. \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \algrenewcommand\algorithmicindent{1em} \begin{figure}[!t] \begin{algorithmic}[1] \State\textbf{algorithm} \textsc{{{\sc ldl}$_f$\xspace}2{\sc nfa}\xspace} \\ \textbf{input} {\sc ldl}$_f$\xspace formula $\varphi$ \\ \textbf{output} {\sc nfa}\xspace $A(\varphi) = (2^\P,\S,s_0,\varrho,S_f)$ \State $s_0 \gets \{\varphi\}$ \Comment{set the initial state} \State $S_f \gets \{\emptyset\}$ \Comment{set final states} \If{($\delta(\varphi,\epsilon) = \mathit{true}$)} \Comment{check if initial state is also final} \State $S_f \gets S_f \cup \{s_0\}$ \EndIf \State $\S \gets \{s_0,\emptyset\}$, $\varrho \gets \emptyset$ \While{($\S$ or $\varrho$ change)} \For{($s\in \S$, $\Pi\in 2^\P$, and minimal $s'$ s.t.\ $s'\models \bigwedge_{\psi\in s} \delta(\psi,\Pi)$)} \State $\S \gets \S \cup \{s'\}$ \Comment{add new state and transition} \State $\varrho \gets \varrho \cup \{ (s,\Pi,s')\}$ \If{($\bigwedge_{\psi\in s'} \delta(\psi,\epsilon) = \mathit{true}$)} \Comment{check if new state is also final} \State $S_f \gets S_f\cup\{s'\}$ \EndIf \EndFor \EndWhile \end{algorithmic} \vspace{-.3cm} \caption{{\sc nfa}\xspace construction.}\label{fig:algo} \end{figure} Using the auxiliary function $\delta$, we can build the {\sc nfa}\xspace $A(\varphi)$ of an {\sc ldl}$_f$\xspace formula $\varphi$ in a forward fashion, as described in Figure~\ref{fig:algo}, where: states of $A(\varphi)$ are sets of atoms (recall that each atom is a quoted sub-formula of $\varphi$), to be interpreted as a conjunction; the empty conjunction $\emptyset$ stands for $\mathit{true}$; $\Pi$ is a propositional interpretation; and $s'$ is a set of (quoted) sub-formulae of $\varphi$ denoting a minimal interpretation such that $s'\models \bigwedge_{\psi\in s} \delta(\psi,\Pi)$. Note that we do not need to consider all $s'$ such that $s'\models \bigwedge_{\psi\in s} \delta(\psi,\Pi)$, but only the minimal ones. In addition, we trivially have $(\emptyset,\Pi,\emptyset)\in\varrho$ for every $\Pi\in 2^\P$. The algorithm \textsc{{{\sc ldl}$_f$\xspace}2{\sc nfa}\xspace}\ terminates in at most an exponential number of steps, and generates a set of states $\S$ whose size is at most exponential in the size of $\varphi$.
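Operationally, the forward construction is a standard worklist algorithm. The following Python sketch assumes two helpers that we do not spell out here, since they amount to implementing Figure~\ref{fig:delta}: \texttt{delta\_states(s, a)}, returning the minimal sets $s'$ with $s'\models\bigwedge_{\psi\in s}\delta(\psi,a)$, and \texttt{eps\_true(s)}, deciding whether $\bigwedge_{\psi\in s}\delta(\psi,\epsilon)=\mathit{true}$; states are frozensets of (quoted) sub-formulae.

{\footnotesize\begin{verbatim}
def ldlf2nfa(phi, alphabet, delta_states, eps_true):
    """Forward NFA construction driven by delta (sketch)."""
    init, sink = frozenset({phi}), frozenset()   # empty conjunction = true
    states, finals, trans = {init, sink}, {sink}, set()
    if eps_true(init):
        finals.add(init)
    todo = [init]
    while todo:
        s = todo.pop()
        for a in alphabet:                       # a ranges over 2^P
            # delta_states(sink, a) is expected to return {sink},
            # yielding the trivial loops (empty, a, empty).
            for s2 in delta_states(s, a):        # minimal successors only
                trans.add((s, a, s2))
                if s2 not in states:
                    states.add(s2)
                    if eps_true(s2):
                        finals.add(s2)
                    todo.append(s2)
    return states, init, trans, finals           # (S, s0, rho, S_f)
\end{verbatim}}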
We observe that the algorithm \textsc{{{\sc ldl}$_f$\xspace}2{\sc nfa}\xspace}\ implicitly constructs the {\sc afw}\xspace for $\varphi$, and transforms it into a corresponding {\sc nfa}\xspace. In particular, given an {\sc ldl}$_f$\xspace formula $\varphi$, its sub-formulae are the states of the {\sc afw}\xspace, with the formula itself as initial state, and no final states. The auxiliary function $\delta$, grounded on the sub-formulae of $\varphi$, becomes the transition function of such an {\sc afw}\xspace. This directly leads to the following result. \begin{theorem}[\cite{DegVa13}] \label{thm:automataSoundness} Let $\varphi$ be an {\sc ldl}$_f$\xspace formula and $A(\varphi)$ the {\sc nfa}\xspace obtained by applying the algorithm \textsc{{{\sc ldl}$_f$\xspace}2{\sc nfa}\xspace}\ to $\varphi$. Then $\pi\models\varphi \mbox{ iff } \pi\in \L(A(\varphi))$ for every finite trace $\pi$. \end{theorem} We can check the satisfiability of an {\sc ldl}$_f$\xspace formula $\varphi$ by checking whether its corresponding {\sc nfa}\xspace $A(\varphi)$ is nonempty. The same applies to validity and logical implication, which are linearly reducible to satisfiability. It is easy to see that $A(\varphi)$ can be built on-the-fly, and hence we can check non-emptiness in PSPACE in the size of $\varphi$. Considering that satisfiability in {\sc ldl}$_f$\xspace is known to be PSPACE-hard, we can conclude that the proposed construction is optimal with respect to the computational complexity of satisfiability (see~\cite{DegVa13} for details). \subsection{Monitoring using Colored Automata} \label{sec:colored-proof} We now show that we can merge the four automata for monitoring the four RV truth values into a single automaton with ``colored'' states. The idea is grounded on the intuition that the four automata of the previous section ``have the same shape'', and only differ in which states are final. It is hence possible to build one automaton only, and then mark its states with four different colors, each corresponding to the final states of a specific formula in Theorem~\ref{thm:rv-ltl}, hence each representing one among the four RV truth values. The idea of using a single automaton for runtime verification is not novel~\cite{MMW11}, but here, for the first time, we provide a formal justification of its correctness. As a first step, we formally define the notion of \emph{shape equivalence} to capture the intuition that two automata have the same ``shape'', i.e., they have corresponding states and transitions, but possibly differ in their final states. Formally, let $A_1 = (2^\P,\S^1,s^1_{0}, \varrho^1, S^1_{f})$ and $A_2 = (2^\P,\S^2,s^2_{0},\varrho^2,S^2_{f})$ be two {\sc nfa}\xspace{s} defined over a set $\P$ of propositional symbols. We say that $A_1$ and $A_2$ are \emph{shape equivalent}, written $A_1 \sim A_2$, if there exists a bijection $h:\S^1 \rightarrow \S^2$ such that: \begin{compactenum} \item $h(s^1_{0})=s^2_{0}$; \item for each $(s^1_1, \Pi, s^1_2) \in \varrho^1$, $(h(s^1_1), \Pi, h(s^1_2)) \in \varrho^2$; and \item for each $(s^2_1, \Pi, s^2_2) \in \varrho^2$, $(h^{-1}(s^2_1), \Pi, h^{-1}(s^2_2)) \in \varrho^1$. \end{compactenum} We write $A_1 \bijectf{h} A_2$ to explicitly indicate the bijection $h$ from $A_1$ to $A_2$ that induces their shape equivalence. It is easy to see that the bijection $h$ preserves the initial state (condition (1)) and the transitions (conditions (2) and (3)), but does not require a correspondence between final states. \begin{lemma} Shape equivalence $\sim$ is indeed an equivalence relation. \end{lemma} \begin{proof} Reflexivity: the identity function trivially satisfies conditions (1)--(3) above. Symmetry: let $A_1 \bijectf{h} A_2$; given that $h$ is a bijection, we have $A_2 \bijectf{h^{-1}} A_1$. Transitivity: let $A_1 \bijectf{h} A_2$ and $A_2 \bijectf{g} A_3$; then $A_1 \bijectf{g\,\circ\,h} A_3$, where $g \circ h$ is the composition of $h$ and $g$ (first $h$, then $g$). \end{proof} Hence, $\sim$ induces (equivalence) classes of automata with the same shape. Automata for the basic formulae in Theorem~\ref{thm:rv-ltl} belong to the same class.
\begin{lemma} For each {\sc ldl}$_f$\xspace formula $\varphi$, the automata $A(\varphi)$, $A(\neg \varphi)$, $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$ and $A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$ are in the same equivalence class of $\sim$. \end{lemma} \begin{proof} From automata theory, $A(\neg \varphi)$ can be obtained from $A(\varphi)$ by switching the final states with the non-final ones. Hence, the identity $i: \S^{\varphi} \rightarrow \S^{\varphi}$ is such that $A(\varphi) \bijectf{i} A(\neg \varphi)$. Moreover, $A(\varphi) \sim A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}) \sim A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$, as $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$, resp.\ $A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$, can be obtained from $A(\varphi)$, resp.\ $A(\neg \varphi)$, by setting as final states all states from which there exists a non-zero length path to a final state of $A(\varphi)$, resp.\ $A(\neg \varphi)$, as explained in the proof of Lemma~\ref{lemma:prefRE}. Hence, again, the identity relation $i$ is such that $A(\varphi) \bijectf{i} A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}) \bijectf{i} A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$. \end{proof} As the last step for proving that the automata for the four formulae in Theorem~\ref{thm:rv-ltl} are in the same class, we show that formula conjunction does not alter shape equivalence, in the following precise sense. \begin{theorem} Let $\varphi_1$, $\varphi_2$, $\psi_1$ and $\psi_2$ be {\sc ldl}$_f$\xspace formulae such that $A(\varphi_1) \sim A(\psi_1)$ and $A(\varphi_2) \sim A(\psi_2)$. Then $A(\varphi_1 \land \varphi_2) \sim A(\psi_1 \land \psi_2)$. \end{theorem} \begin{proof} From the semantics of {\sc ldl}$_f$\xspace and Theorem~\ref{thm:automataSoundness}, it follows that $A(\varphi_1 \land \varphi_2) \equiv A(\varphi_1) \cap A(\varphi_2)$. Recall that the states of $A(\varphi_1) \cap A(\varphi_2)$ are ordered pairs $(s^{\varphi_1}, s^{\varphi_2}) \in \S^{\varphi_1} \times \S^{\varphi_2}$. Let $h_1$ and $h_2$ be bijections such that $A(\varphi_1) \bijectf{h_1} A(\psi_1)$ and $A(\varphi_2) \bijectf{h_2} A(\psi_2)$. We use $h_1$ and $h_2$ to construct a new bijection $h: \S^{\varphi_1} \times \S^{\varphi_2} \rightarrow \S^{\psi_1} \times \S^{\psi_2}$ such that $h(s^{\varphi_1}, s^{\varphi_2})=(h_1(s^{\varphi_1}), h_2(s^{\varphi_2}))$. We show that $h$ satisfies criteria (1)-(3) of shape equivalence, hence inducing $A(\varphi_1 \land \varphi_2) \bijectf{h} A(\psi_1 \land \psi_2)$. The initial state $s_0^{\varphi_1 \land \varphi_2}$ of $A(\varphi_1 \land \varphi_2)$ is $(s_0^{\varphi_1}, s_0^{\varphi_2})$ by definition of $A(\varphi_1) \cap A(\varphi_2)$; at the same time, $s_0^{\psi_1 \land \psi_2}=(s_0^{\psi_1}, s_0^{\psi_2})=(h_1(s_0^{\varphi_1}), h_2(s_0^{\varphi_2}))=h(s_0^{\varphi_1}, s_0^{\varphi_2})$ by definition of $h_1$, $h_2$ and $h$, which proves (1). Now, consider a transition $((s^{\varphi_1}_1, s^{\varphi_2}_1), \Pi, (s^{\varphi_1}_2, s^{\varphi_2}_2))$ in $\varrho^{\varphi_1 \land \varphi_2}$. By construction, this means that there exist transitions $(s^{\varphi_1}_1, \Pi, s^{\varphi_1}_2) \in \varrho^{\varphi_1}$ and $(s^{\varphi_2}_1, \Pi, s^{\varphi_2}_2) \in \varrho^{\varphi_2}$. Since $A(\varphi_1) \bijectf{h_1} A(\psi_1)$ and $A(\varphi_2) \bijectf{h_2} A(\psi_2)$, we have that $(h_1(s^{\varphi_1}_1), \Pi, h_1(s^{\varphi_1}_2)) \in \varrho^{\psi_1}$ and $(h_2(s^{\varphi_2}_1), \Pi, h_2(s^{\varphi_2}_2)) \in \varrho^{\psi_2}$.
It follows that $((h_1(s^{\varphi_1}_1), h_2(s^{\varphi_2}_1)), \Pi, (h_1(s^{\varphi_1}_2), h_2(s^{\varphi_2}_2))) \in \varrho^{\psi_1 \land \psi_2}$, which proves (2). Condition (3) is proved analogously, using $h^{-1}$. \end{proof} \begin{corollary} Given an {\sc ldl}$_f$\xspace formula $\varphi$, the automata $A(\varphi \land \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end})$, $A(\lnot\varphi \land \DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$, $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end})$ and $A(\DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$ are in the same equivalence class of $\sim$. \end{corollary} \input{images/monitoringAutomataColored} This result tells us that the automata of the formulae used to capture the RV states of an {\sc ldl}$_f$\xspace formula of interest, as captured by Theorem~\ref{thm:rv-ltl}, are identical modulo final states. In addition, by definition of the four {\sc ldl}$_f$\xspace formulae, we directly get that each state is marked as final by one and only one of such automata. This, in turn, allows us to merge all four automata into a single automaton, provided that we record, for each state in the automaton, which of the four formulae marks it as final (i.e., to which of the four RV states it corresponds). In practice, we can simply build the automaton $A(\varphi)$ for $\varphi$, and ``color'' each state of the automaton according to its corresponding RV state. This can be realized with the following direct procedure. We first build $A(\varphi) = (2^\P,\S,s_{0}, \varrho, S_{f})$, with $S_{f}$ the set of its final states, and, for each $s \in \S$, we compute the set $Reach(s)$ of states reachable from $s$. Then: \begin{compactitem}[$\bullet$] \item if \begin{inparaenum}[\it (i)] \item $s \in S_{f}$, \item $Reach(s) \not \subseteq S_{f}$, and \item $Reach(s) \cap S_{f} \not = \emptyset$, \end{inparaenum} then we mark $s$ as $\mathit{temp\_true}$; \item if \begin{inparaenum}[\it (i)] \item $s \not \in S_{f}$, \item $Reach(s) \not \subseteq (\S \setminus S_{f})$, and \item $Reach(s) \cap S_{f} \not = \emptyset$, \end{inparaenum} then we mark $s$ as $\mathit{temp\_false}$; \item if \begin{inparaenum}[\it (i)] \item $s \in S_{f}$ and \item $Reach(s) \subseteq S_{f}$, \end{inparaenum} then we mark $s$ as $\mathit{perm\_true}$; \item if \begin{inparaenum}[\it (i)] \item $s \not \in S_{f}$ and \item $Reach(s) \subseteq (\S \setminus S_{f})$, \end{inparaenum} then we mark $s$ as $\mathit{perm\_false}$. \end{compactitem} It is easy to see that the four bullets above match the four cases of Theorem~\ref{thm:rv-ltl}. The soundness of the marking immediately follows from the definitions and results in the previous section. \begin{example} Figure~\ref{fig:monitoringAutColored} depicts the colored automaton for the formula in Example~\ref{ex:monitoringAutomata} and Figure~\ref{subfig:general}.
States $s_0$ and $s_1$, which were final in the automaton for $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$, are indeed marked as $\mathit{temp\_false}$ (orange, dashed line); state $s_2$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$, is marked as $\mathit{temp\_true}$ (blue, solid thin line); state $s_3$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$, is marked as $\mathit{perm\_true}$ (green, solid thick line); and, lastly, state $s_4$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$, is marked as $\mathit{perm\_false}$ (red, dotted line). \end{example} Upon building the colored automaton, we can determinize it and keep it \emph{untrimmed}, that is, including trap states from which no final state can be reached. In this way, every symbol from the alphabet $2^\P$ is accepted by the automaton in each of its states. This then becomes our \emph{monitor}. We conclude by noticing that the presented solution is very flexible, as the reachability analysis can be performed on-the-fly: indeed, this is the procedure we actually implemented in our runtime verification tool, as explained in Section~\ref{sec:implementation}.
Transitivity: let $A_1 \bijectf{h} A_2$ and $A_2 \bijectf{g} A_3$. Then $A_1 \bijectf{g\,\circ\,h} A_3$, where $g \circ h$ is the composition of $h$ followed by $g$. \end{proof} Hence, $\sim$ induces equivalence classes of automata with the same shape. Automata for the basic formulae in Theorem~\ref{thm:rv-ltl} belong to the same class. \begin{lemma} For each {\sc ldl}$_f$\xspace formula $\varphi$, $A(\varphi)$, $A(\neg \varphi)$, $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$ and $A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$ are in the same equivalence class by $\sim$. \end{lemma} \begin{proof} From automata theory, $A(\neg \varphi)$ can be obtained from $A(\varphi)$ by switching the final states with the non-final ones. Hence, the identity $i: \S^{\varphi} \rightarrow \S^{\varphi}$ is such that $A(\varphi) \bijectf{i} A(\neg \varphi)$. Moreover, $A(\varphi) \sim A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}) \sim A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$, as $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$, respectively $A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$, can be obtained from $A(\varphi)$, respectively $A(\neg \varphi)$, by setting as final states all states from which a final state of $A(\varphi)$, respectively $A(\neg \varphi)$, is reachable, as explained in the proof of Lemma~\ref{lemma:prefRE}. Hence, again, the identity relation $i$ is such that $A(\varphi) \bijectf{i} A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}) \bijectf{i} A(\DIAM{\mathsf{pref}_{\neg \varphi}}\mathit{end})$. \end{proof} As the last step for proving that the automata for the four formulae in Theorem~\ref{thm:rv-ltl} are in the same class, we show that formula conjunction preserves shape equivalence, in the following precise sense. \begin{theorem} Let $\varphi_1$, $\varphi_2$, $\psi_1$ and $\psi_2$ be {\sc ldl}$_f$\xspace formulae such that $A(\varphi_1) \sim A(\psi_1)$ and $A(\varphi_2) \sim A(\psi_2)$. Then $A(\varphi_1 \land \varphi_2) \sim A(\psi_1 \land \psi_2)$. \end{theorem} \begin{proof} From the semantics of {\sc ldl}$_f$\xspace and Theorem~\ref{thm:automataSoundness}, it follows that $A(\varphi_1 \land \varphi_2) \equiv A(\varphi_1) \cap A(\varphi_2)$. Recall that states of $A(\varphi_1) \cap A(\varphi_2)$ are ordered pairs $(s^{\varphi_1}, s^{\varphi_2}) \in \S^{\varphi_1} \times \S^{\varphi_2}$. Let $h_1$ and $h_2$ be bijections such that $A(\varphi_1) \bijectf{h_1} A(\psi_1)$ and $A(\varphi_2) \bijectf{h_2} A(\psi_2)$. We use $h_1$ and $h_2$ to construct a new bijection $h: \S^{\varphi_1} \times \S^{\varphi_2} \rightarrow \S^{\psi_1} \times \S^{\psi_2}$ such that $h(s^{\varphi_1}, s^{\varphi_2})=(h_1(s^{\varphi_1}), h_2(s^{\varphi_2}))$. We show that $h$ satisfies criteria (1)-(3) of shape equivalence, hence inducing $A(\varphi_1 \land \varphi_2) \bijectf{h} A(\psi_1 \land \psi_2)$. The starting state $s_0^{\varphi_1 \land \varphi_2}$ of $A(\varphi_1 \land \varphi_2)$ corresponds to $(s_0^{\varphi_1}, s_0^{\varphi_2})$ by definition of $A(\varphi_1) \cap A(\varphi_2)$. At the same time, $s_0^{\psi_1 \land \psi_2}=(s_0^{\psi_1}, s_0^{\psi_2})=(h_1(s_0^{\varphi_1}), h_2(s_0^{\varphi_2}))=h(s_0^{\varphi_1}, s_0^{\varphi_2})$, which proves (1). Now, consider a transition $((s^{\varphi_1}_1, s^{\varphi_2}_1), \Pi, (s^{\varphi_1}_2, s^{\varphi_2}_2))$ in $\varrho^{\varphi_1 \land \varphi_2}$.
By construction, this means that there exist transitions $(s^{\varphi_1}_1, \Pi, s^{\varphi_1}_2) \in \varrho^{\varphi_1}$ and $(s^{\varphi_2}_1, \Pi, s^{\varphi_2}_2) \in \varrho^{\varphi_2}$. Since $A(\varphi_1) \bijectf{h_1} A(\psi_1)$ and $A(\varphi_2) \bijectf{h_2} A(\psi_2)$, we have that $(h_1(s^{\varphi_1}_1), \Pi, h_1(s^{\varphi_1}_2)) \in \varrho^{\psi_1}$ and $(h_2(s^{\varphi_2}_1), \Pi, h_2(s^{\varphi_2}_2)) \in \varrho^{\psi_2}$. It follows that $((h_1(s^{\varphi_1}_1), h_2(s^{\varphi_2}_1)), \Pi, (h_1(s^{\varphi_1}_2), h_2(s^{\varphi_2}_2))) \in \varrho^{\psi_1 \land \psi_2}$, which proves (2). Condition (3) is proved analogously with $h^{-1}$. \end{proof} \begin{corollary} Given an {\sc ldl}$_f$\xspace formula $\varphi$, automata $A(\varphi \land \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end})$, $A(\lnot\varphi \land \DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$, $A(\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end})$ and $A(\DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$ are in the same equivalence class by $\sim$. \end{corollary} \input{monitoringAutomataColored} This result tells us that the automata of the formulae used to capture the RV states of an {\sc ldl}$_f$\xspace formula of interest, as captured by Theorem~\ref{thm:rv-ltl}, are identical modulo final states. In addition, by definition of the four {\sc ldl}$_f$\xspace formulae, we directly get that each state is marked as final by one and only one of such automata. This, in turn, allows us to merge all the four automata together into a single automaton, provided that we recall, for each state in the automaton, which of the four formulae marks it as final (which corresponds to declaring to which of the four RV states it corresponds). In practice, we can simply build the automaton $A(\varphi)$ for $\varphi$, and ``color'' each state in the automaton according to its corresponding RV state. This can be realized with the following, direct procedure. We first build $A(\varphi) = (2^\P,\S,s_{0}, \varrho, S_{f})$, with $S_{f}$ the set of its final states, and, for each $s \in \S$, we compute the set $Reach(s)$ of states reachable from $s$. Then: \begin{compactitem}[$\bullet$] \item if \begin{inparaenum}[\it (i)] \item $s \in S_{f}$, \item $Reach(s) \not \subseteq S_{f}$, and \item $Reach(s) \cap (\S \setminus S_{f}) \not = \emptyset$, \end{inparaenum} then we mark $s$ as $\mathit{temp\_true}$; \item if \begin{inparaenum}[\it (i)] \item $s \not \in S_{f}$, \item $Reach(s) \not \subseteq (\S \setminus S_{f})$, and \item $Reach(s) \cap S_{f} \not = \emptyset$, \end{inparaenum} then we mark $s$ as $\mathit{temp\_false}$; \item if \begin{inparaenum}[\it (i)] \item $s \in S_{f}$ and \item $Reach(s) \subseteq S_{f}$, \end{inparaenum} then we mark $s$ as $\mathit{perm\_true}$; \item if \begin{inparaenum}[\it (i)] \item $s \not \in S_{f}$ and \item $Reach(s) \subseteq (\S \setminus S_{f})$, \end{inparaenum} then we mark $s$ as $\mathit{perm\_false}$. \end{compactitem} It is easy to see that the four bullets above match the four ones of Theorem~\ref{thm:rv-ltl}. The soundness of the marking immediately follows from the definitions and results of Section~\ref{sec:monitor}. \begin{example} Figure~\ref{fig:monitoringAutColored} depicts the colored automaton for the formula in Example~\ref{ex:monitoringAutomata} and Figure~\ref{subfig:general}.
States $s_0$ and $s_1$, which were final in the automaton for $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$, are indeed marked as $\mathit{temp\_false}$ (orange, dashed line); state $s_2$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$, is marked as $\mathit{temp\_true}$ (blue, solid thin line); state $s_3$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$, is marked with $\mathit{perm\_true}$ (green, solid thick line); and, lastly, state $s_4$, which was final in the automaton for $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$, is marked with $\mathit{perm\_false}$ (red, dotted line). \end{example} Upon building the colored automaton, we can determinize it and keep it \emph{untrimmed}, that is, including trap states from which no final state can be reached. In this way, every symbol from the entire supporting set $\P$ is accepted by the automaton in each of its states. This then becomes our \emph{monitor}. We conclude by noticing that the presented solution is very flexible, as the reachability analysis can be performed on-the-fly: indeed, this is the procedure we actually implemented in our runtime verification tool, as explained in Section~\ref{sec:implementation}. \subsection{Monitoring {\sc ldl}$_f$\xspace Formulae} \label{sec:monitor} As pointed out in Section~\ref{sec:rtm}, the core issue in monitoring is prefix recognition. {\sc ltl}$_f$\xspace is not expressive enough to talk about prefixes of its own formulae. Roughly speaking, given an {\sc ltl}$_f$\xspace formula, the language of its possibly good prefixes cannot, in general, be described by an {\sc ltl}$_f$\xspace formula. For such a reason, building a monitor usually requires direct manipulation of the automaton for the formula. {\sc ldl}$_f$\xspace, instead, can capture any nondeterministic automaton as a formula, and it can express properties on prefixes. We can exploit such extra expressivity to capture the monitoring condition in a direct and elegant way. We start by showing how to construct formulae representing (the language of) prefixes of other formulae, and then we show how to use them in the context of monitoring. Technically, given an {\sc ldl}$_f$\xspace formula $\varphi$, it is possible to express the language $\L_{\mathit{poss\_good}}(\varphi)$ with an {\sc ldl}$_f$\xspace formula $\varphi'$. Such a formula is obtained in two steps. \begin{lemma} \label{lemma:prefRE} Given an {\sc ldl}$_f$\xspace formula $\varphi$, there exists a regular expression $\mathsf{pref}_{\varphi}$ such that $\L(\mathsf{pref}_{\varphi}) = \L_{\mathit{poss\_good}}(\varphi)$. \end{lemma} \begin{proof} The proof is constructive. We build the {\sc nfa}\xspace $A(\varphi)$ for $\varphi$. We then build a new {\sc nfa}\xspace $A_{\mathit{poss\_good}}(\varphi)$ by taking $A(\varphi)$ and setting as final states all states from which we can reach a final state of $A(\varphi)$. The so-obtained {\sc nfa}\xspace $A_{\mathit{poss\_good}}(\varphi)$ is such that $\L(A_{\mathit{poss\_good}}(\varphi))=\L_{\mathit{poss\_good}}(\varphi)$. Since {\sc nfa}\xspace{s} are exactly as expressive as regular expressions, we can translate $A_{\mathit{poss\_good}}(\varphi)$ to the corresponding regular expression $\mathsf{pref}_{\varphi}$. \end{proof} Since {\sc ldl}$_f$\xspace is as expressive as regular expressions (cf.~\cite{DegVa13}), we can translate $\mathsf{pref}_{\varphi}$ into an equivalent {\sc ldl}$_f$\xspace formula.
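The reachability-based constructions used so far (the prefix automaton of Lemma~\ref{lemma:prefRE} and the state coloring of Section~\ref{sec:colored-proof}) are simple enough to be summarized in a short sketch. The following Python fragment is illustrative only: the automaton encoding (a set of states, a successor map \texttt{delta}, a set \texttt{finals}) and all identifiers are our own assumptions, not the implementation of Section~\ref{sec:implementation}.
\begin{verbatim}
# A sketch, assuming an automaton given by: a set of states, a map
# delta from each state to its set of successors (transition labels
# are irrelevant for reachability), and a set of final states.

def reach(states, delta):
    """Reach(s): states reachable from s in zero or more steps."""
    result = {}
    for s in states:
        seen, stack = {s}, [s]
        while stack:
            for nxt in delta.get(stack.pop(), set()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        result[s] = seen
    return result

def poss_good_finals(states, delta, finals):
    """Final states of A_poss_good (Lemma prefRE): states from
    which some final state of A(phi) is reachable."""
    r = reach(states, delta)
    return {s for s in states if r[s] & finals}

def color(states, delta, finals):
    """Mark every state with its RV color (colored automaton)."""
    r, marking = reach(states, delta), {}
    for s in states:
        if s in finals:
            marking[s] = "perm_true" if r[s] <= finals else "temp_true"
        else:
            marking[s] = "temp_false" if r[s] & finals else "perm_false"
    return marking
\end{verbatim}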
\begin{theorem} \label{th:prefFormula} Given an {\sc ldl}$_f$\xspace formula $\varphi$, \[\begin{array}{l} \pi\in \L_{\mathit{poss\_good}}(\varphi)\; \text{iff}\; \pi\models \DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\\ \pi\in \L_{\mathit{nec\_good}}(\varphi)\; \text{iff}\; \pi\models \DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}\\ \end{array} \] \end{theorem} \begin{proof} Any regular expression $\rho$, and hence any regular language, can be captured in {\sc ldl}$_f$\xspace as $\DIAM{\rho}\mathit{end}$. Specifically, the language $\L_{\mathit{poss\_good}}(\varphi)$ is captured by $\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}$, and the language $\L_{\mathit{nec\_good}}(\varphi)$, which is equivalent to $\L_{\mathit{poss\_good}}(\varphi) \setminus \L_{\mathit{poss\_good}}(\neg \varphi)$, is captured by $\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}$. \end{proof} In other words, given an {\sc ldl}$_f$\xspace formula $\varphi$, formula $\varphi'=\DIAM{\mathsf{pref}_{\varphi}}\mathit{end}$ is an {\sc ldl}$_f$\xspace formula such that $\L(\varphi') = \L_{\mathit{poss\_good}}(\varphi)$. Similarly for $\L_{\mathit{nec\_good}}(\varphi)$. Exploiting this result, and the results in Proposition~\ref{thm:RVLDL}, we reduce the evaluation of RV states to the standard evaluation of {\sc ldl}$_f$\xspace formulae over a (partial) trace. Formally: \begin{theorem} \label{thm:rv-ltl} Let $\pi$ be a trace. The following equivalences hold: \begin{compactitem}[$\bullet$] \item $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$\; iff\; $\pi \models \varphi \land \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}$; \item $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$\; iff\; $\pi \models \lnot\varphi \land \DIAM{\mathsf{pref}_{\varphi}}\mathit{end}$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$\; iff\; $\pi \models \DIAM{\mathsf{pref}_{\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$\; iff\; $\pi \models \DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\varphi}}\mathit{end}$. \end{compactitem} \end{theorem} \begin{proof} The theorem follows directly from Proposition~\ref{thm:RVLDL} and Theorem~\ref{th:prefFormula}. \end{proof} \input{monitoringAutomata} This result provides an actual procedure to return the RV state of an {\sc ldl}$_f$\xspace formula $\varphi$: we build four automata, one for each of the four formulae above, and then follow the evolution of the trace $\pi$ simultaneously on each one of them. Since the languages of the four automata partition the set of all traces $(2^\P)^*$ (cf.\ Proposition~\ref{prop:lang-partitions}), we are guaranteed that, at each step, one and only one automaton is in a final state, namely, one and only one truth value is returned as output of the monitoring procedure. \begin{example} \label{ex:monitoringAutomata} Figure~\ref{subfig:general} shows the graphical representation of the automaton for formula $\Phi :=\raisebox{0.4ex}{\tiny$\bigcirc$}(a \rightarrow (\raisebox{-0.27ex}{\LARGE$\bullet$} b))$, where $s_0$ is the initial state and final states are double-circled.
Moreover, for the sake of readability, labels on edges are logical formulae, each being a shortcut for every interpretation satisfying it, e.g., the edge labeled with $\neg a$ from state $s_1$ to $s_2$ is a shortcut for $(s_1, \set{\neg a, b}, s_2), (s_1, \set{\neg a, \neg b}, s_2) \in \varrho$. Formula $\Phi$ intuitively requires that, in the next step, if $a$ is performed, then either the trace ends (due to the semantics of the weak next operator $\raisebox{-0.27ex}{\LARGE$\bullet$}$) or, if it continues, it is forced to continue by performing $b$. Figures~\ref{subfig:temptrue}~--~\ref{subfig:false} represent the four automata for monitoring the different RV truth values. More specifically: \begin{compactitem}[$\bullet$] \item The automaton in Figure~\ref{subfig:temptrue} is used to check whether $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$. Indeed, its final state is $s_2$, which corresponds to the subset of the final states in the original automaton from which some non-final state (in this case $s_4$) can still be reached. \item The automaton in Figure~\ref{subfig:tempfalse} is used to check whether $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$. Indeed, its final states are $s_0$ and $s_1$, which correspond to the subset of the non-final states in the original automaton from which some final state (in this case $s_3$) can still be reached. \item The automaton in Figure~\ref{subfig:true} is used to check whether $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$. Indeed, its final state is $s_3$, which corresponds to the subset of the final states in the original automaton from which no non-final state can ever be reached. \item The automaton in Figure~\ref{subfig:false} is used to check whether $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$. Indeed, its final state is $s_4$, which corresponds to the subset of the non-final states in the original automaton from which no final state can ever be reached. \end{compactitem} \end{example} As proved in Section~\ref{sec:colored-proof}, using four automata is indeed redundant: monitoring can be performed by making use of just a single automaton that retains, at once, all the necessary monitoring information. \section{Runtime Monitoring} \label{sec:rtm} From a high-level perspective, the monitoring problem amounts to observing an evolving system execution and reporting the violation or satisfaction of properties of interest at the earliest possible time. As the system progresses, its execution trace grows, and, at each step, the monitor checks whether the trace seen so far conforms to the properties, by considering that the execution can still continue. This evolving aspect has a significant impact on the monitoring output: at each step, indeed, the outcome may have a degree of uncertainty due to the fact that future executions are still unknown. Several variants of monitoring semantics have been proposed (see \cite{Bauer2010:LTL} for a survey). In this work, we adopt the semantics in \cite{MMW11}, which is essentially the finite-trace variant of the RV semantics in \cite{Bauer2010:LTL}. Interestingly, in our finite-trace setting the RV semantics can be elegantly defined, since both trace prefixes and their continuations are finite.
Given an {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace formula $\varphi$, and a current trace $\pi$, the monitor returns one among the following four \emph{RV states}: \begin{compactitem} \item $\mathit{temp\_true}$, meaning that $\pi$ \emph{temporarily satisfies} $\varphi$, i.e., it satisfies $\varphi$, but there is at least one possible continuation of $\pi$ that violates $\varphi$; \item $\mathit{temp\_false}$, meaning that $\pi$ \emph{temporarily violates} $\varphi$, i.e., $\varphi$ is not satisfied by $\pi$, but there is at least one possible continuation of $\pi$ that does so; \item $\mathit{perm\_true}$, meaning that $\pi$ \emph{permanently satisfies} $\varphi$, i.e., $\varphi$ is satisfied by $\pi$ and it will always be, no matter how $\pi$ is extended; \item $\mathit{perm\_false}$, meaning that $\pi$ \emph{permanently violates} $\varphi$, i.e., $\varphi$ is not satisfied by $\pi$ and it will never be, no matter how $\pi$ is extended. \end{compactitem} Formally, let $\varphi$ be an {\sc ldl}$_f$\xspace/{\sc ltl}$_f$\xspace formula, and let $\pi$ be a trace. Then, we define whether $\varphi$ is in RV state $s \in \set{\mathit{temp\_true},\mathit{temp\_false},\mathit{perm\_true},\mathit{perm\_false}}$ (written $\rvass{\varphi}{s}$) on trace $\pi$ as follows: \begin{compactitem} \item $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$ if $\pi \models \varphi$ and there exists a trace $\pi'$ such that $\pi \pi' \not\models \varphi$, where $\pi \pi'$ denotes the trace obtained by concatenating $\pi$ with $\pi'$; \item $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$ if $\pi \not\models \varphi$ and there exists a trace $\pi'$ such that $\pi \pi' \models \varphi$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$ if $\pi \models \varphi$ and for every trace $\pi'$, we have $\pi \pi' \models \varphi$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$ if $\pi\not\models \varphi$ and for every trace $\pi'$, we have $\pi \pi' \not\models \varphi$. \end{compactitem} By inspecting the definition of RV states, it is straightforward to see that a formula $\varphi$ is in one and only one RV state on a trace $\pi$. The RV states $\mathit{temp\_true}$ and $\mathit{temp\_false}$ are not definitive: they may change into any other RV state as the system progresses. This reflects the general unpredictability of how a system execution unfolds. Conversely, the RV states $\mathit{perm\_true}$ and $\mathit{perm\_false}$ are stable since, once outputted, they will not change anymore. Observe that a stable RV state can be reached in two different situations: \begin{inparaenum}[\it (i)] \item when the system execution terminates; \item when the formula that is being monitored can be fully evaluated by observing a partial trace only. \end{inparaenum} The first case is indeed trivial: when the execution ends, there are no possible future evolutions, and hence it is enough to evaluate the finite (and now complete) trace seen so far according to the {\sc ldl}$_f$\xspace semantics. In the second case, instead, it is irrelevant whether the system continues its execution or not, since some {\sc ldl}$_f$\xspace properties, such as eventualities or safety properties, can be fully evaluated as soon as something happens, e.g., when the eventuality is verified or the safety requirement is violated. Notice also that, when a stable state is returned by the monitor, the monitoring analysis can be stopped.
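Operationally, these definitions translate into a simple monitoring loop over the colored, determinized automaton of Section~\ref{sec:colored-proof}: the monitor consumes one event at a time, outputs the color of the reached state, and can stop as soon as a stable state is reached. The following Python sketch illustrates this loop; the encoding of the transition function \texttt{delta} as a dictionary over (state, symbol) pairs and the \texttt{marking} produced by the coloring procedure are illustrative assumptions.
\begin{verbatim}
def monitor(s0, delta, marking, events):
    """Feed events to a deterministic colored automaton, yielding
    the RV state after each event; stop early once it is stable."""
    state = s0
    yield marking[state]           # RV state of the empty trace
    for e in events:
        # delta is total, since the automaton is kept untrimmed
        state = delta[(state, e)]
        rv = marking[state]
        yield rv
        if rv in ("perm_true", "perm_false"):
            break                  # stable: no continuation changes it
\end{verbatim}
For instance, running such a loop for the global monitor of Figure~\ref{fig:monitoring} would yield $\mathit{temp\_true}$, $\mathit{temp\_false}$, $\mathit{temp\_false}$, $\mathit{perm\_false}$, at which point the analysis can stop.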
From a more theoretical viewpoint, given an {\sc ldl}$_f$\xspace property $\varphi$, the monitor looks at the trace seen so far, regards it as a \emph{prefix} of a full trace that has not yet been completed, and categorizes it according to its potential for satisfying or violating $\varphi$ in the future. We call a prefix \emph{possibly good} for an {\sc ldl}$_f$\xspace formula $\varphi$ if there exists an extension of it that satisfies $\varphi$. More precisely, given an {\sc ldl}$_f$\xspace formula $\varphi$, we define the set of \emph{possibly good prefixes for $\L(\varphi)$} as the set: \begin{equation}\label{def:possGood} \L_{\mathit{poss\_good}}(\varphi) = \{\pi \mid \text{ there exists } \pi' \text{ such that } \pi \pi' \in \L(\varphi)\}. \end{equation} Prefixes for which every possible extension satisfies $\varphi$ are instead called \emph{necessarily good}. More precisely, given an {\sc ldl}$_f$\xspace formula $\varphi$, we define the set of \emph{necessarily good prefixes for $\L(\varphi)$} as the set: \begin{equation}\label{def:necGood} \L_{\mathit{nec\_good}}(\varphi) = \{\pi\mid \text{ for every } \pi', \text{ we have } \pi \pi' \in \L(\varphi)\}. \end{equation} The set of \emph{necessarily bad prefixes} $\L_\mathit{nec\_bad}(\varphi)$ can be defined analogously as: \begin{equation}\label{def:necBad} \L_{\mathit{nec\_bad}}(\varphi) = \{\pi\mid \text{ for every } \pi', \text{ we have } \pi \pi' \not\in \L(\varphi)\}. \end{equation} Observe that the necessarily bad prefixes for $\varphi$ are the necessarily good prefixes for $\neg \varphi$, i.e., $ \L_{\mathit{nec\_bad}}(\varphi)= \L_{\mathit{nec\_good}}(\lnot\varphi)$. Such language-theoretic notions allow us to capture all the RV states defined before. More precisely, it is immediate to show the following. \begin{proposition} \label{thm:RVLDL} Let $\varphi$ be an {\sc ldl}$_f$\xspace formula and $\pi$ a trace. Then: \begin{compactitem}[$\bullet$] \item $\pi \models \rvass{\varphi}{\mathit{temp\_true}}$ iff $\pi \in \L(\varphi) \setminus \L_\mathit{nec\_good}(\varphi)$; \item $\pi \models \rvass{\varphi}{\mathit{temp\_false}}$ iff $\pi \in \L(\neg \varphi) \setminus \L_\mathit{nec\_bad}(\varphi)$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_true}}$ iff $\pi \in \L_{\mathit{nec\_good}}(\varphi)$; \item $\pi \models \rvass{\varphi}{\mathit{perm\_false}}$ iff $\pi \in \L_{\mathit{nec\_bad}}(\varphi)$. \end{compactitem} \end{proposition} We close this section by exploiting the above language-theoretic notions to better understand the relationships that hold over the various kinds of prefixes. We start by observing that the set of all finite words over the alphabet $2^{\P}$ is the union of the language of $\varphi$ and that of its complement: $\L(\varphi) \cup \L(\neg \varphi)=(2^{\P})^*$. Also, any language and its complement are disjoint: $\L(\varphi) \cap \L(\neg \varphi) = \emptyset$. Since from the definition of possibly good prefixes we have $\L(\varphi) \subseteq \L_{\mathit{poss\_good}}(\varphi)$ and $\L(\neg \varphi) \subseteq \L_{\mathit{poss\_good}}(\neg \varphi)$, we also have that $\L_{\mathit{poss\_good}}(\varphi) \cup \L_{\mathit{poss\_good}}(\neg \varphi)=(2^{\P})^*$.
Also, from this definition, it is easy to see that $\L_{\mathit{poss\_good}}(\varphi) \cap \L_{\mathit{poss\_good}}(\neg \varphi)$ corresponds to: \[ \{ \pi \mid \text{ there exists } \pi' \text{ such that }\pi\pi' \in \L(\varphi) \text{ and there exists } \pi''\text{ such that }\pi\pi'' \in \L(\neg \varphi)\} \] meaning that the set of possibly good prefixes for $\varphi$ and the set of possibly good prefixes for $\neg \varphi$ do intersect, and their intersection contains exactly those traces that can be extended so as to satisfy $\varphi$, but can also be extended so as to satisfy $\neg \varphi$. It is also easy to see that $\L(\varphi) = \L_{\mathit{poss\_good}}(\varphi) \setminus \L(\neg \varphi).$ Turning to necessarily good prefixes and necessarily bad prefixes, it is easy to see that $\L_{\mathit{nec\_good}}(\varphi) = \L_{\mathit{poss\_good}}(\varphi) \setminus \L_{\mathit{poss\_good}}(\neg \varphi)$, that $\L_{\mathit{nec\_bad}}(\varphi) = \L_{\mathit{poss\_good}}(\neg \varphi) \setminus \L_{\mathit{poss\_good}}(\varphi)$, and also that $\L_{\mathit{nec\_good}}(\varphi) \subseteq \L(\varphi)$ and, consequently, $\L_{\mathit{nec\_good}}(\varphi) \cap \L(\neg \varphi) = \emptyset$. Interestingly, the necessarily good prefixes, the necessarily bad prefixes, and the prefixes that are possibly good for both $\varphi$ and $\neg\varphi$ partition all finite traces. In fact, by directly applying the definitions of necessarily good, necessarily bad, and possibly good prefixes of $\L(\varphi)$ and $\L(\lnot\varphi)$, we obtain the following. \begin{proposition}\label{prop:lang-partitions} The set of all traces $(2^\P)^*$ can be partitioned into the three pairwise disjoint sets \[ \L_{\mathit{nec\_good}}(\varphi) \qquad\quad \L_{\mathit{poss\_good}}(\varphi) \cap \L_{\mathit{poss\_good}}(\neg \varphi) \qquad\quad \L_{\mathit{nec\_bad}}(\varphi) \] whose union satisfies \[ \L_{\mathit{nec\_good}}(\varphi) \cup (\L_{\mathit{poss\_good}}(\varphi) \cap \L_{\mathit{poss\_good}}(\neg \varphi)) \cup \L_{\mathit{nec\_bad}}(\varphi) = (2^\P)^*. \] \end{proposition} \section{Monitoring Declare Constraints} \label{sec:declare} \input{coloredAutomataDeclare} \label{sec:monitoringDeclare} We now ground our monitoring approach in the case of {\sc declare}\xspace monitoring. {\sc declare}\xspace\footnote{\url{http://www.win.tue.nl/declare/}} is a language and framework for the declarative, constraint-based modeling of processes and services. A thorough treatment of constraint-based processes can be found in \cite{Pes08,Mon10}. As a modeling language, {\sc declare}\xspace takes a complementary approach to that of classical, imperative process modeling. In imperative process modeling, all allowed control flows among tasks must be explicitly represented, and execution traces not falling within this set are implicitly considered as forbidden. Instead of this procedural and ``closed'' approach, {\sc declare}\xspace has a declarative, ``open'' flavor: the agents responsible for the process execution can freely choose in which order to perform the involved tasks, provided that the resulting execution trace satisfies the business constraints of interest.
This is the reason why, alongside traditional control-flow constraints such as sequence (called, in {\sc declare}\xspace, \emph{chain succession}), {\sc declare}\xspace supports a variety of more refined constraints that impose loose temporal orderings, and/or that explicitly account for negative information, i.e., the explicit prohibition of task execution. \input{declare-table} Given a set $\P$ of tasks, a {\sc declare}\xspace model $\mathcal{M}$ is a set $\mathcal{C}$ of {\sc ltl}$_f$\xspace (and hence {\sc ldl}$_f$\xspace) constraints over $\P$. A finite trace over $\P$ \emph{complies with} $\mathcal{M}$ if it satisfies all constraints in $\mathcal{C}$. Among all possible {\sc ltl}$_f$\xspace constraints, some specific \emph{patterns} have been singled out as particularly meaningful for expressing {\sc declare}\xspace processes, taking inspiration from \cite{DwAC99}. Such patterns are grouped into four families: \begin{compactitem} \item \emph{existence} (unary) constraints, stating that the target task must/cannot be executed (for an indicated number of times); \item \emph{choice} (binary) constraints, accounting for alternative tasks; \item \emph{relation} (binary) constraints, connecting a source task to a target task and expressing that, whenever the source task is executed, then the target task must also be executed (possibly with additional temporal conditions); \item \emph{negation} (binary) constraints, capturing that, whenever the source task is executed, the target task is prohibited (possibly with additional temporal conditions). \end{compactitem} Table~\ref{tab:constraints}, at the end of this document, summarizes some of these patterns. See \cite{MPVC10} for the full list of patterns. \begin{example} \label{ex:declare} Consider a fragment of a ticket booking process, whose {\sc declare}\xspace representation is shown in Figure~\ref{fig:declare}. The process fragment consists of four tasks and four constraints but, in spite of its simplicity, it clearly illustrates the main features of declarative, constraint-based process modeling. Specifically, each process instance is focused on the management of a specific registration to a booking event made by an interested customer. For simplicity, we assume that the type of registration is selected upon instantiating the process, and is therefore not explicitly captured as a set of tasks within the process itself. The process fragment then consists of four tasks: \begin{compactitem}[$\bullet$] \item $\activity{accept regulation}$ is the task used to accept the regulation of the booking company for the specific type of registration the customer is interested in; \item $\activity{pay registration}$ is the task used to pay for the registration; \item $\activity{get ticket}$ is the task used to physically withdraw the ticket containing the registration details; \item $\activity{cancel registration}$ is the task used to abort the instance of the registration process. \end{compactitem} The execution of the aforementioned tasks is subject to the following behavioral constraints. First of all, within an instance of the booking process, a customer may pay for the registration at most once. This is captured in {\sc declare}\xspace by constraining the $\activity{pay registration}$ task with an \constraint{absence 2} constraint. After executing the payment, the customer must eventually get the corresponding ticket.
On the other hand, the ticket can be obtained only after having performed the payment. This is captured in {\sc declare}\xspace by constraining $\activity{pay registration}$ and $\activity{get ticket}$ with a \constraint{response} constraint going from the first task to the second, and with a \constraint{precedence} constraint going from the second task to the first. This specific combination is called \constraint{succession}; it is graphically depicted by combining the graphical notation of the two constraints, and logically corresponds to their conjunction. When a payment is executed, the customer must accept the regulation of the registration. There is no particular temporal order required for accepting the regulation: upon the payment, if the regulation has already been accepted, then no further steps are required; otherwise, the customer is expected to accept the regulation afterwards. This is captured in {\sc declare}\xspace by connecting $\activity{pay registration}$ to $\activity{accept regulation}$ by means of a \constraint{responded existence} constraint. Finally, a customer may always decide to cancel the registration, with the only constraint that the cancelation is incompatible with the possibility of getting the registration ticket. This means that, when the ticket is withdrawn, no cancelation is accepted anymore, and, on the other hand, once the registration is canceled, no ticket will be issued anymore. This is captured in {\sc declare}\xspace by relating $\activity{get ticket}$ to $\activity{cancel registration}$ through a \constraint{not coexistence} constraint. \end{example} \newcommand{\dist}{2.5cm} \begin{figure}[t] \centering \resizebox{\textwidth}{!} { \begin{tikzpicture}[node distance=\dist] \node[task] (r) { \begin{tabular}{@{}c@{}} \begin{tabular}{@{}c@{}} \activity{\textbf{acc}ept}\\ \activity{regulation}\\ \end{tabular} \end{tabular} }; \node[task, right=of r] (pr) { \begin{tabular}{@{}c@{}} \activity{\textbf{pay}}\\ \activity{registration}\\ \end{tabular} }; \node[task, right=1.6*\dist of pr] (wt) { \begin{tabular}{@{}c@{}} \activity{\textbf{get}}\\ \activity{ticket}\\ \end{tabular} }; \node[task, right=of wt] (cr) { \begin{tabular}{@{}c@{}} \activity{\textbf{cancel}}\\ \activity{registration}\\ \end{tabular} }; \node[above=0mm of pr,anchor=south,taskfg] (absence2) {\activity{0..1}}; \draw[respondedexistence] (pr) -- (r); \draw[response] ($(pr.east)+(0,2mm)$) -- ($(wt.west)+(0,2mm)$); \draw[precedence] ($(wt.west)-(0,2mm)$) -- ($(pr.east)-(0,2mm)$); \draw[notcoexistence] (wt) -- (cr); \node[ right=.5*\dist of r, anchor=center, yshift=.5*\dist, draw, dotted, rounded corners=5pt, thick, minimum height=6mm] (rex) { \ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}} }; \path[thick, dotted] (rex) edge ($(r.east)+(.5*\dist,0)$); \node[ above=12mm of absence2, anchor=center, draw, dotted, rounded corners=5pt, thick, minimum height=6mm] (abs) { \ensuremath{\neg\Diamond (\activity{pay} \land \raisebox{0.4ex}{\tiny$\bigcirc$}\Diamond \activity{pay})} }; \path[thick,dotted] (abs) edge (absence2); \node[ right=.3*\dist of pr, anchor=center, yshift=.5*\dist, draw, dotted, rounded corners=5pt, thick, minimum height=6mm] (resp) { \ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})} }; \path[thick,dotted] (resp) edge ($(pr.east)+(.3*\dist,+2mm)$); \node[ right=1.2*\dist of pr, anchor=center, yshift=.8*\dist, draw, dotted, rounded corners=5pt, thick, minimum height=6mm] (pre) { \ensuremath{(\neg
\activity{get} \Until \activity{pay})\lor \neg \Diamond \activity{pay}} }; \path[thick,dotted] (pre) edge ($(pr.east)+(1.2*\dist,-2mm)$); \node[ right=.5*\dist of wt, anchor=center, yshift=.5*\dist, draw, dotted, rounded corners=5pt, thick, minimum height=6mm] (nco) { \ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})} }; \path[thick,dotted] (nco) edge ($(wt.east)+(.5*\dist,3mm)$); \end{tikzpicture} } \caption{Fragment of a booking process in {\sc declare}\xspace, showing also the {\sc ltl}$_f$\xspace formalization of the constraints used therein. \label{fig:declare}} \end{figure} Several logic-based techniques have been proposed to support end-users in defining, checking, and enacting {\sc declare}\xspace models \cite{PeSV07,Pes08,Mon09,Mon10,MPVC10}. More recently, the {\sc ltl}$_f$\xspace characterization of {\sc declare}\xspace, together with its operational automata-theoretic counterpart, has been exploited to provide advanced monitoring and runtime verification facilities \cite{MMW11,MWM12}. In particular, monitoring {\sc declare}\xspace models amounts to: \begin{compactitem}[$\bullet$] \item Tracking the evolution of a single {\sc declare}\xspace constraint against an evolving trace, providing fine-grained feedback on how the truth value of the constraint evolves when tasks are performed. This is done by adopting the RV semantics for {\sc ltl}$_f$\xspace. Specifically, in \cite{MMW11}, the evolution of {\sc declare}\xspace constraints through the different RV states is tackled using the ad-hoc ``colored automaton'' construction technique that we have formally justified in Section~\ref{sec:colored-proof}. \item Tracking the compliance of an evolving trace to the entire {\sc declare}\xspace model, by considering all its constraints together. This is done by constructing the colored automaton for the conjunction of all constraints in the model. Monitoring the evolving trace against such a ``global'' automaton is crucial for inferring complex violations that cannot be ascribed to the interaction of the current trace with a single constraint in the model, but arise due to the interplay between the trace and multiple constraints at once. Such violations emerge due to \emph{conflicting constraints}, i.e., constraints that, in the current circumstances, contradict each other and consequently cannot all be satisfied anymore \cite{MWM12}. By considering all constraints together, the presence of this kind of conflict can be detected immediately, without waiting for the later moment when an explicit violation of one of the single constraints involved in the conflict eventually arises. This important feature has been classified as \emph{early detection of violations} in a reference monitoring survey \cite{LMM13}. \end{compactitem} \smallskip \noindent{\bf Monitoring Declare Constraints with {\sc ldl}$_f$\xspace.} Since {\sc ldl}$_f$\xspace includes {\sc ltl}$_f$\xspace, {\sc declare}\xspace constraints can be directly encoded in {\sc ldl}$_f$\xspace using their standard formalization \cite{PesV06,MPVC10}. Thanks to the translation into {{\sc nfa}\xspace}s discussed in Section~\ref{sec:automaton} (and, if needed, their determinization into corresponding {{\sc dfa}\xspace}s), the automaton obtained from the {\sc ltl}$_f$\xspace encoding of a constraint can then be used to check whether a (partial) finite trace satisfies that constraint or not. This is not very effective, though, as the approach does not support the detection of fine-grained truth values, such as the four RV ones.
By exploiting Theorem~\ref{thm:rv-ltl}, however, we can reuse the same technique, this time supporting all RV truth values. In fact, by formalizing the good prefixes of each {\sc declare}\xspace pattern, we can immediately construct the four {\sc ldl}$_f$\xspace formulae that embed the different RV truth values, and check the current trace over each of the corresponding automata. Table~\ref{tab:constraints} reports the good prefix characterization of some of the {\sc declare}\xspace patterns; it can be seamlessly extended to all other patterns as well. Pragmatically, we can even go a step further, and employ colored automata by following the technique discussed in Section~\ref{sec:colored-proof}. More specifically, given a {\sc declare}\xspace model $\mathcal{M}$, we proceed as follows: \begin{compactitem}[$\bullet$] \item For every constraint $\constraint{c} \in \mathcal{M}$, we derive its {\sc ltl}$_f$\xspace formula $\varphi_\constraint{c}$, and construct its corresponding deterministic colored automaton $A(\constraint{c})$. This colored automaton acts as a \emph{local monitor} for its constraint $\constraint{c}$, and can be used to track the RV state of $\constraint{c}$ as tasks are executed. \item We build the {\sc ltl}$_f$\xspace formula $\Phi_{\mathcal{M}}$ standing for the conjunction of the {\sc ltl}$_f$\xspace formulae encoding all constraints in $\mathcal{M}$, and construct its corresponding deterministic colored automaton $A(\mathcal{M})$. This colored automaton acts as a \emph{global monitor} for the entire {\sc declare}\xspace model $\mathcal{M}$, and can be used to track the overall RV state of $\mathcal{M}$ as tasks are executed, and to detect, early on, violations arising from conflicting constraints. \item When the monitoring of a process execution starts, the initial state of each local monitor, as well as that of the global monitor, is outputted. \item Whenever an event witnessing the execution of a task is tracked, it is delivered to each local monitor and to the global monitor. The new state of each monitor is then computed and outputted, based on its current state and on the received task name. \item When the process execution is completed (i.e., no further events are expected to occur), the final state of each monitor is outputted, depending on whether its colored automaton is in an accepting state or not. In particular, if upon completion the colored state of the monitor is $\mathit{perm\_true}$ or $\mathit{temp\_true}$, then the trace is judged as compliant; if, instead, upon completion the colored state of the monitor is $\mathit{perm\_false}$ or $\mathit{temp\_false}$, then the trace is judged as non-compliant. \item The global monitor can be inquired to obtain additional information about how the monitored trace interacts with the constraints. For example, when the current state of the global monitor is $\mathit{temp\_false}$ or $\mathit{temp\_true}$, retrieving the names of the tasks whose execution leads to a $\mathit{perm\_false}$ state makes it possible to report which tasks are currently forbidden by the model. This information is irrelevant when the monitor is in a $\mathit{perm\_true}$ or $\mathit{perm\_false}$ state: by definition, in the first case no task is forbidden, whereas in the latter all tasks are.
\end{compactitem} \renewcommand{\dist}{1.2cm} \tikzstyle{eventline}=[thick,densely dashed,-] \tikzstyle{monstate}=[ultra thick] \begin{figure}[t!] \centering \resizebox{\textwidth}{!}{ \begin{tikzpicture}[node distance=\dist,y=1cm,x=2cm] \node[anchor=west,xshift=-1.5mm] at (0,.5) {\textsc{local monitors}}; \node[anchor=north west] (pay) at (0,0) {\ensuremath{\neg\Diamond (\activity{pay} \land \raisebox{0.4ex}{\tiny$\bigcirc$}\Diamond \activity{pay})}}; \node[anchor=north west] (pay) at (0,-1) {\ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}}; \node[anchor=north west] (pay) at (0,-2) {\ensuremath{(\neg \activity{get} \Until \activity{pay})\lor \neg \Diamond \activity{pay}} }; \node[anchor=north west] (pay) at (0,-3) {\ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})} }; \node[anchor=north west] (get) at (0,-4) {\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}}; \node[anchor=west,xshift=-1.6mm] at (0,-5.8) {\textsc{global monitor}}; \node[dot] (start) at (2,1) {}; \node at (2,1.5) {\activity{begin}}; \node[dot] (e1) at (3,1) {}; \node at (3,1.5) {do \activity{pay}}; \node[dot] (e2) at (4,1) {}; \node at (4,1.5) {do \activity{acc}}; \node[dot] (e3) at (5,1) {}; \node at (5,1.5) {do \activity{cancel}}; \node[dot] (end) at (6,1) {}; \node at (6,1.5) {\activity{end}}; \path[-stealth',ultra thick] (start) edge (e1) (e1) edge (e2) (e2) edge (e3) (e3) edge (end); \draw[eventline,solid] (start) edge (2,-7); \draw[eventline] (e1) edge (3,-7); \draw[eventline] (e2) edge (4,-7); \draw[eventline] (e3) edge (5,-7); \draw[eventline,solid] (end) edge (6,-7); \node[temptruemonstate,fit={(2,0) ($(6,-.6)$)}] {}; \node[truemonstate,fit={(6,0) ($(7,-.6)$)}] {}; \node[temptruemonstate,fit={(2,-1) ($(3,-1-.6)$)}] {}; \node[tempfalsemonstate,fit={(3,-1) ($(4,-1-.6)$)}] {}; \node[truemonstate,fit={(4,-1) ($(7,-1-.6)$)}] {}; \node[temptruemonstate,fit={(2,-2) ($(3,-2-.6)$)}] {}; \node[truemonstate,fit={(3,-2) ($(7,-2-.6)$)}] {}; \node[temptruemonstate,fit={(2,-3) ($(3,-3-.6)$)}] {}; \node[tempfalsemonstate,fit={(3,-3) ($(6,-3-.6)$)}] {}; \node[falsemonstate,fit={(6,-3) ($(7,-3-.6)$)}] {}; \node[temptruemonstate,fit={(2,-4) ($(6,-4-.6)$)}] {}; \node[truemonstate,fit={(6,-4) ($(7,-4-.6)$)}] {}; \node[temptruemonstate,fit={(2,-5.5) ($(3,-5.5-.6)$)}] {}; \node[tempfalsemonstate,fit={(3,-5.5) ($(5,-5.5-.6)$)}] {}; \node[falsemonstate,fit={(5,-5.5) ($(7,-5.5-.6)$)}] {}; \node[anchor=west,xshift=-1.5mm] at (0,-6.8) {\textsc{forbidden tasks}}; \node at (2.5,-6.8) {\activity{get}}; \node at (3.5,-6.8) {\activity{pay}}; \node at (4.5,-6.8) {\activity{pay}}; \node at (5.5,-6.8) {\activity{--}}; \end{tikzpicture} } \caption{Result computed by monitoring the {\sc declare}\xspace model of Figure~\ref{fig:declare} against the noncompliant trace $\activity{pay}\cdot\activity{acc}\cdot\activity{cancel}$, considering local monitors for each constraint separately, and the global monitor accounting for all of them at once.\label{fig:monitoring}} \end{figure} \begin{example} \label{ex:monitoring} Figure~\ref{fig:monitoring} depicts the result computed by monitoring the {\sc declare}\xspace model introduced in Example~\ref{ex:declare} and shown in Figure~\ref{fig:declare}, against a trace where a registration is paid, the corresponding regulation is accepted, and then the registration is canceled.
When monitoring starts, all local monitors are in state $\mathit{temp\_true}$, and so is the global monitor. Task \activity{get ticket} is forbidden, since according to the \constraint{precedence} constraint connecting that task to \activity{pay registration} (i.e., formula $\ensuremath{(\neg \activity{get} \Until \activity{pay})\lor \neg \Diamond \activity{pay}}$), a previous execution of \activity{pay registration} is needed. When the payment is executed: \begin{compactitem}[$\bullet$] \item the local monitor for the \constraint{responded existence} constraint linking \activity{pay registration} to \activity{accept regulation} (i.e., formula \ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}) moves to $\mathit{temp\_false}$, because the constraint now requires the acceptance of the regulation (which has not occurred yet); \item the local monitor for the \constraint{precedence} constraint linking \activity{get ticket} to \activity{pay registration} (i.e., formula \ensuremath{(\neg \activity{get} \Until \activity{pay})\lor \neg \Diamond \activity{pay}}) moves to $\mathit{perm\_true}$, enabling once and for all the possibility of executing \activity{get ticket}; \item the local monitor for the \constraint{response} constraint linking \activity{pay registration} to \activity{get ticket} (i.e., formula \ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})}) moves to $\mathit{temp\_false}$, because its satisfaction now demands a consequent execution of the \activity{get ticket} task. \end{compactitem} The global monitor also moves to $\mathit{temp\_false}$, since there are two tasks that must be executed to satisfy the \constraint{responded existence} and \constraint{response} constraints, and it is indeed possible to execute them without violating other constraints. At the same time, further payments are now forbidden, due to the \constraint{absence 2} constraint attached to the \activity{pay registration} task (i.e., formula \ensuremath{\neg\Diamond (\activity{pay} \land \raisebox{0.4ex}{\tiny$\bigcirc$}\Diamond \activity{pay})}). The consequent execution of \activity{accept regulation} turns the state of the \constraint{responded existence} constraint linking \activity{pay registration} to \activity{accept regulation} (i.e., formula \ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}) to $\mathit{perm\_true}$: since the regulation has now been accepted, the constraint is satisfied and will stay so no matter how the execution is continued. The most interesting transition is the one triggered by the consequent execution of the \activity{cancel registration} task. While this event does not trigger any state change in the local monitors, it actually induces a transition of the global monitor to the stable RV state $\mathit{perm\_false}$. In fact, no continuation of the trace will be able to satisfy all constraints of the considered model. More specifically, the sequence of events received so far induces a so-called \emph{conflict} \cite{MWM12} for the \constraint{response} constraint linking \activity{pay registration} to \activity{get ticket} (i.e., formula \ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})}), and the \constraint{not coexistence} constraint relating \activity{get ticket} and \activity{cancel registration} (i.e., formula \ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}).
In fact, the \constraint{response} constraint requires a future execution of the \activity{get ticket} task, which is, however, forbidden by the \constraint{not coexistence} constraint. Consequently, no continuation of the current trace will satisfy both constraints at once. Since no further task execution actually happens, the trace is finally declared to be complete, with no execution of the \activity{get ticket} task. This has the effect of moving the \constraint{response} and \constraint{not coexistence} constraints to $\mathit{perm\_false}$ and $\mathit{perm\_true}$, respectively. Also the \constraint{absence 2} constraint on payment becomes $\mathit{perm\_true}$, witnessing that no double payment occurred in the trace. \end{example} \section{Modeling and Monitoring Metaconstraints} \label{sec:monitoring-metaconstraints} In Sections~\ref{sec:rtm} and~\ref{sec:monitor}, we have demonstrated that {\sc ldl}$_f$\xspace has the ability of expressing formulae that capture the RV state of other formulae. This can be interpreted as the ability of {\sc ldl}$_f$\xspace to express meta-level properties of {\sc ldl}$_f$\xspace constraints within the logic itself. Such properties, which we call \emph{metaconstraints}, can, in turn, be themselves monitored using the automata-theoretic approach described in the previous sections. In this section, we elaborate on this observation, discussing how metaconstraints can be built, and illustrating interesting metaconstraint patterns. \subsection{Modeling Metaconstraints} Theorem~\ref{thm:rv-ltl} shows that, for an arbitrary {\sc ldl}$_f$\xspace formula $\varphi$, four {\sc ldl}$_f$\xspace formulae can be automatically constructed to express whether $\varphi$ is in one of the four RV states. Consequently, given $s \in \set{\mathit{temp\_true},\mathit{temp\_false},\mathit{perm\_true},\mathit{perm\_false}}$ and an {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace formula $\varphi$, we can consider formulae of the form $\rvass{\varphi}{s}$ as special atoms of the logic itself. Such special atoms are used to check whether a trace brings $\varphi$ into state $s$. However, they cannot be used to explicitly characterize which are the paths that lead $\varphi$ to RV state $s$, i.e., that make formula $\rvass{\varphi}{s}$ true. Such paths can be readily obtained by constructing a regular expression for the language $\L(\rvass{\varphi}{s})$, which we denote as $re_{\rvass{\varphi}{s}}$. For example, $re_{\rvass{\varphi}{\mathit{perm\_false}}}$ is a regular expression for the language $\L(\DIAM{\mathsf{pref}_{\lnot\varphi}}\mathit{end}\land \lnot \DIAM{\mathsf{pref}_{\varphi}}\mathit{end})$, and hence describes all paths culminating in a permanent violation of $\varphi$. With these notions at hand, we can build {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace metaconstraints as standard {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace formulae that include: \begin{compactitem}[$\bullet$] \item formulae of the form $\rvass{\varphi}{s}$ as atoms; \item expressions of the form $re_{\rvass{\varphi}{s}}$ as path expressions. \end{compactitem} A metaconstraint is then translated back into a standard {\sc ldl}$_f$\xspace formula by replacing each sub-formula of the form $\rvass{\varphi}{s}$ with its corresponding {\sc ldl}$_f$\xspace formula according to Theorem~\ref{thm:rv-ltl}, and each sub-expression of the form $re_{\rvass{\varphi}{s}}$ with its corresponding regular expression.
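Operationally, this translation is a purely syntactic, recursive substitution. The following Python sketch illustrates it under our own (hypothetical) representation of formulae as nested tuples; the helpers \texttt{rv\_formula} and \texttt{rv\_regexp}, which produce the {\sc ldl}$_f$\xspace formula of Theorem~\ref{thm:rv-ltl} and the regular expression $re_{\rvass{\varphi}{s}}$, respectively, are assumed to be given.
\begin{verbatim}
# A sketch: formulae are nested tuples ("and", f, g), ("diam", rho, f),
# ("rv", phi, s), ("rv_re", phi, s), ...; atoms are plain strings.

def expand(f, rv_formula, rv_regexp):
    """Translate a metaconstraint into a standard LDLf formula by
    replacing rv-atoms and rv path expressions, recursively."""
    if not isinstance(f, tuple):
        return f                          # propositional atom
    if f[0] == "rv":                      # [phi]_s used as an atom
        return rv_formula(expand(f[1], rv_formula, rv_regexp), f[2])
    if f[0] == "rv_re":                   # re_{[phi]_s} used as a path
        return rv_regexp(expand(f[1], rv_formula, rv_regexp), f[2])
    return (f[0],) + tuple(expand(g, rv_formula, rv_regexp)
                           for g in f[1:])
\end{verbatim}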
A direct (non-optimized) way to calculate the regular expression $re_{\rvass{\varphi}{s}}$ is to construct the automaton for $\rvass{\varphi}{s}$, and then to fold this automaton back into a regular expression (using standard techniques). \subsection{Some Relevant Metaconstraint Patterns} We present three types of metaconstraints, demonstrating the sophistication and versatility of the resulting framework. \medskip \noindent \emph{Contextualizing constraints.} This type of metaconstraint is used to express that a constraint must hold \emph{while} another constraint is in some RV state. The latter constraint, together with the specified state, consequently provides a monitoring \emph{context} for the former, \emph{contextualized} constraint. Let us specifically consider the case of a \emph{contextualized absence}, where, given a task $\activity{a}$, the contextualized constraint has the form $\Box\neg\activity{a}$, and the context is provided by an arbitrary constraint $\varphi$ being in a given RV state $s$. This is formalized as: \begin{equation} \BOX{re_{\rvass{\varphi}{s}}} (\neg \activity{a} \lor \mathit{end}) \label{eq:context} \end{equation} where $\mathit{end}$ denotes the end of the trace, as defined in Section~\ref{sec:LTLf-LDLf}; this is needed since, in {\sc ldl}$_f$\xspace, $\neg \activity{a}$ expresses that some task different from $\activity{a}$ is executed, while we also want to accept the case where no task is performed at all (and the trace completes). The idea of formula \eqref{eq:context} is to relativize the unrestricted $\Box$ operator to all and only those paths leading to RV state $s$ for $\varphi$, which are, in turn, characterized by the regular expression $re_{\rvass{\varphi}{s}}$. A monitor for formula \eqref{eq:context} returns $\mathit{temp\_true}$ either when $\varphi$ is not in state $s$, but may evolve into such a state, or when $\varphi$ is in state $s$. In the latter situation, by inspecting the monitor one can see that task $\activity{a}$ is forbidden; this also means that, upon the execution of $\activity{a}$, the monitor evolves into $\mathit{perm\_false}$. Finally, the monitor returns $\mathit{perm\_true}$ if $\varphi$ is not in state $s$ and cannot enter state $s$ in the future, no matter how the trace is continued. \begin{example} \label{ex:context} Consider the constraint model in Figure~\ref{fig:declare}. We now want to express that it is not possible to get the ticket after the payment is done, until the regulation is accepted (if it was accepted before, no restriction applies). This can be seen as a \emph{contextualized absence} constraint forbidding \activity{get ticket} when the \constraint{responded existence} that links \activity{pay registration} to \activity{accept regulation} (i.e., formula \ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}) is \emph{temporarily violated}. Formally, to encode this, we instantiate formula \eqref{eq:context} into: \[ \BOX{re_{\rvass{\{\Diamond \activity{pay} \mathbin{\rightarrow} \Diamond \activity{acc}\}}{\mathit{temp\_false}}}} (\neg \activity{get} \lor \mathit{end}) \] which, in turn, expands into: \[ \BOX{(\neg \activity{pay})^*;\activity{pay};(\neg\activity{acc})^*} (\neg \activity{get} \lor \mathit{end}) \] \end{example} \medskip \noindent \emph{Compensation constraints.} In general terms, compensation refers to a behavior that has to be enforced when the current execution reaches an unexpected/undesired state.
In our setting, the undesired state triggering a compensation is the \emph{permanent violation} of a property that captures a desired behavior, which, in turn, requires that \emph{another} formula, capturing the compensating behavior, be satisfied. We call the first formula the \emph{default constraint}, and the second formula its \emph{compensating constraint}. Let us consider the general case of a default {\sc ldl}$_f$\xspace constraint $\varphi$, and a compensating {\sc ldl}$_f$\xspace constraint $\psi$. By noticing that, once a trace permanently violates a constraint, every possible continuation still permanently violates that constraint, we capture the \emph{compensation} of $\varphi$ by $\psi$ as: \begin{equation} \rvass{\varphi}{\mathit{perm\_false}} \mathbin{\rightarrow} \psi \label{eq:compensation} \end{equation} The intuitive interpretation of formula \eqref{eq:compensation} is that either $\varphi$ never enters into the $\mathit{perm\_false}$ RV state, or $\psi$ holds. No requirement is placed regarding \emph{when} $\psi$ should be monitored in case $\varphi$ gets permanently violated. In fact, the overall compensation formula \eqref{eq:compensation} gets temporarily/permanently satisfied even when the compensating constraint $\psi$ is temporarily/permanently satisfied \emph{before} the moment when the default constraint $\varphi$ gets permanently violated. This may sound counterintuitive, as it is usually intended that the compensating behavior has to be exhibited \emph{as a reaction} to the violation. We can capture this intuition by turning formula \eqref{eq:compensation} into the following \emph{reactive compensation} formula: \begin{equation} \rvass{\varphi}{\mathit{perm\_false}} \mathbin{\rightarrow} \DIAM{re_{\rvass{\varphi}{\mathit{perm\_false}}}}\psi \label{eq:compensation-refined} \end{equation} This formula imposes that, in case of a permanent violation of $\varphi$, the compensating constraint $\psi$ must hold \emph{after} $\varphi$ has become permanently violated. Assuming that $\varphi$ can be potentially violated (which is the reason why we want to express a compensation), a monitor for formula \eqref{eq:compensation-refined} starts by emitting $\mathit{temp\_true}$. As soon as the monitored execution is such that $\varphi$ cannot be permanently violated anymore, the monitor switches to $\mathit{perm\_true}$. If, instead, the monitored execution leads to permanently violating $\varphi$, from the moment of the violation onwards, the evolution of the monitor follows that of $\psi$. \begin{example} \label{ex:compensation} Consider the \constraint{not coexistence} constraint in Figure~\ref{fig:declare}. We now want to model that, whenever this constraint is permanently violated, that is, whenever a ticket is retrieved and the registration is canceled, then a \activity{return ticket} (\activity{return} for short) task must be executed. This has to occur in reaction to the permanent violation.
Hence, we rely on template \eqref{eq:compensation-refined} and instantiate it into: \[ \rvass{\{\neg(\Diamond\activity{get}\land\Diamond\activity{cancel})\}}{\mathit{perm\_false}} \mathbin{\rightarrow} \DIAM{re_{\rvass{\{\neg(\Diamond\activity{get}\land\Diamond\activity{cancel})\}}{\mathit{perm\_false}}}}\Diamond \activity{return} \] This formula is equivalent to \[ (\Diamond\activity{get}\land\Diamond\activity{cancel})\mathbin{\rightarrow} \DIAM{re_{\{\Diamond\activity{get}\land\Diamond\activity{cancel}\}}}\Diamond \activity{return} \] which, in turn, becomes \newcommand{\myregexp}{ \begin{array}{@{}l@{}l@{}} &(o^*;\activity{get};(\neg \activity{cancel})^*;\activity{cancel};\mathit{true}^*)\\ +&(o^*;\activity{cancel};(\neg \activity{get})^*;\activity{get};\mathit{true}^*) \end{array} } \[ (\Diamond\activity{get}\land\Diamond\activity{cancel}) \mathbin{\rightarrow} \left\langle\myregexp\right\rangle\Diamond \activity{return} \] where $o$ is a shortcut notation for any task different from $\activity{get}$ and $\activity{cancel}$. \end{example} \medskip \noindent \emph{Constraint priority for conflict resolution.} Thanks to the fact that RV states take into consideration all possible future evolutions of a monitored execution, our framework handles the subtle situation where the execution reaches a state of affairs in which the conjunction of two constraints is permanently violated, while neither of the two is so if considered in isolation. This situation of \emph{conflict} has already been recalled in Section~\ref{sec:declare} in the case of {\sc declare}\xspace. A situation of conflict involving two constraints $\varphi$ and $\psi$ witnesses that, even though neither $\varphi$ nor $\psi$ is permanently violated, they contradict each other, and hence in every possible future course of execution at least one of them will eventually become permanently violated. In such a state of affairs, it may become relevant to specify which one of the two constraints has \emph{priority} over the other, that is, which one should preferably be satisfied. Formally, a trace culminates in a conflict for two {\sc ldl}$_f$\xspace constraints $\varphi$ and $\psi$ if it satisfies the following metaconstraint: \begin{equation} \rvass{\{\varphi\land\psi\}}{\mathit{perm\_false}} \land \neg \rvass{\varphi}{\mathit{perm\_false}} \land \neg \rvass{\psi}{\mathit{perm\_false}} \label{eq:conflict} \end{equation} Specifically, assuming that $\varphi$ and $\psi$ can potentially enter into a conflict, a monitor for formula \eqref{eq:conflict} proceeds as follows: \begin{compactitem}[$\bullet$] \item Initially, the monitor outputs $\mathit{temp\_false}$, witnessing that no conflict has been observed so far, but one may actually occur in the future. \item From this initial situation, the monitor can evolve in one of the following two ways: \begin{compactitem}[$-$] \item the monitor turns to $\mathit{perm\_false}$, witnessing that from this moment on the two constraints can never enter into a conflict anymore, irrespective of how the trace continues; \item the monitor turns to $\mathit{temp\_true}$ whenever the monitored execution indeed culminates in a conflict; this witnesses that a conflict is currently in place.
\end{compactitem} \item From the latter situation, witnessing the presence of a conflict, the monitor then evolves to $\mathit{perm\_false}$ when one of the two constraints indeed becomes permanently violated; this witnesses that the conflict is no longer in place, due to the fact that the permanent violation can now be ascribed to one of the two constraints taken in isolation from the other. \end{compactitem} Using this monitor, we can identify all points in the trace where a conflict is in place by simply checking when the monitor returns $\mathit{temp\_true}$. Notice that the monitor never outputs $\mathit{perm\_true}$, since a conflicting situation will always eventually permanently violate $\varphi$ or $\psi$, thereby permanently violating \eqref{eq:conflict}. In addition, the notion of conflict defined in formula \eqref{eq:conflict} is inherently ``non-monotonic'', as it ceases to exist as soon as one of the two involved constraints becomes permanently violated alone. This is the reason why we cannot directly employ formula \eqref{eq:conflict} as a basis to define which constraint we \emph{prefer} over the other when a conflict arises. To declare that $\varphi$ is \emph{preferred over} $\psi$, we then relax formula \eqref{eq:conflict} by simply considering the violation of the composite constraint $\varphi \land \psi$, which may occur due to a conflict or due to the permanent violation of one of the two constraints $\varphi$ and $\psi$. We then create a formula expressing that, whenever the composite constraint is violated, we want to satisfy the preferred constraint $\varphi$: \begin{equation} \DIAM{re_{\rvass{\{\varphi\land\psi\}}{\mathit{perm\_false}}}} \mathtt{true} \mathbin{\rightarrow} \varphi \label{eq:preference} \end{equation} This pattern can be generalized to conflicts involving $n$ formulae, using their maximal proper subsets as building blocks. In the typical situation where a permanent violation of $\varphi \land \psi$ does not manifest itself at the beginning of the trace, but may indeed occur in the future, a monitor for \eqref{eq:preference} starts by emitting $\mathit{temp\_true}$. When the composite constraint $\varphi \land \psi$ becomes permanently violated (either because of a conflict, or because of a permanent violation of one of its components), formula $\rvass{\{\varphi\land\psi\}}{\mathit{perm\_false}}$ turns to $\mathit{perm\_true}$, and the monitor consequently switches to observe the evolution of $\varphi$ (that is, of the right-hand side of the implication in \eqref{eq:preference}).
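All the monitors described in this subsection follow the same mechanics: the automaton of the (meta)constraint is executed over the growing trace, and the verdict in the current state is determined by which acceptance statuses remain reachable from it. As a purely illustrative sketch (class and method names are ours, and do not reflect the implementation discussed later in Section~\ref{sec:backend}), the four RV values can be derived as follows:
\begin{verbatim}
import java.util.*;

// Illustrative sketch: deriving the four RV values from the current
// state of the constraint's DFA via forward reachability.
enum RvState { PERM_TRUE, PERM_FALSE, TEMP_TRUE, TEMP_FALSE }

class RvMonitor {
    private final boolean[] accepting; // accepting[s]: is state s accepting?
    private final int[][] delta;       // delta[s][symbol]: successor state
    private int current;

    RvMonitor(boolean[] accepting, int[][] delta, int initial) {
        this.accepting = accepting; this.delta = delta; this.current = initial;
    }

    void step(int symbol) { current = delta[current][symbol]; }

    RvState verdict() {
        // Reach(current): all states reachable from the current one
        Deque<Integer> todo = new ArrayDeque<>(List.of(current));
        Set<Integer> seen = new HashSet<>(todo);
        while (!todo.isEmpty())
            for (int t : delta[todo.pop()])
                if (seen.add(t)) todo.push(t);
        boolean canFlip = false;       // is the opposite status reachable?
        for (int s : seen)
            if (accepting[s] != accepting[current]) { canFlip = true; break; }
        if (accepting[current])
            return canFlip ? RvState.TEMP_TRUE : RvState.PERM_TRUE;
        return canFlip ? RvState.TEMP_FALSE : RvState.PERM_FALSE;
    }

    public static void main(String[] args) {
        // toy DFA of the responded existence "pay -> eventually acc";
        // states: 0 = nothing relevant seen (accepting), 1 = pay without
        // acc (rejecting), 2 = acc seen (accepting sink);
        // symbols: 0 = pay, 1 = acc, 2 = any other task
        boolean[] acc = {true, false, true};
        int[][] delta = {{1, 2, 0}, {1, 2, 1}, {2, 2, 2}};
        RvMonitor m = new RvMonitor(acc, delta, 0);
        System.out.println(m.verdict());            // TEMP_TRUE
        m.step(0); System.out.println(m.verdict()); // after pay: TEMP_FALSE
        m.step(1); System.out.println(m.verdict()); // after acc: PERM_TRUE
    }
}
\end{verbatim}
On this toy DFA of the \constraint{responded existence} constraint, the sketch emits $\mathit{temp\_true}$, then $\mathit{temp\_false}$ after $\activity{pay}$, and finally $\mathit{perm\_true}$ after $\activity{acc}$, which is also the evolution reported in the first row of Figure~\ref{fig:monitoring-metaconstraints} below.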
\newcommand{\myrelaxedconflictregexp}{ \begin{array}{@{}l@{}l@{}} &(o^*;\activity{pay};(\neg \activity{cancel})^*;\activity{cancel};(\mathit{true})^*) \\ +& (o^*;\activity{get};(\neg \activity{cancel})^*;\activity{cancel};(\mathit{true})^*)\\ +& (o^*;\activity{cancel};(\activity{cancel}+o)^*;(\activity{get}+\activity{pay});(\mathit{true})^*) \end{array} } \begin{example} \label{ex:conflict} Consider again Figure~\ref{fig:declare}, and in particular the \constraint{response} and \constraint{not coexistence} constraints respectively linking \activity{pay registration} to \activity{get ticket}, and \activity{get ticket} to \activity{cancel registration}, which we compactly refer to as $\psi_r$ and $\varphi_{nc}$. These two constraints conflict when a registration is paid and canceled, but the ticket is not retrieved (retrieving it would indeed lead to a permanent violation of $\varphi_{nc}$ alone). Let $o$ denote any task that is different from $\activity{pay}$, $\activity{get}$, and $\activity{cancel}$. The traces that culminate in a conflict for $\psi_r$ and $\varphi_{nc}$ are those that satisfy the regular expression: \begin{equation} (o^*;\activity{pay};(o+\activity{pay})^*;\activity{cancel};(\neg \activity{get})^*)+(o^*;\activity{cancel};(o+\activity{cancel})^*;\activity{pay};(\neg \activity{get})^*) \label{eq:myconflict} \end{equation} Recall that, as specified in Section~\ref{sec:LTLf-LDLf}, testing whether a trace satisfies this regular expression can be done by encoding it in {\sc ldl}$_f$\xspace as: \begin{equation} \DIAM{(o^*;\activity{pay};(o+\activity{pay})^*;\activity{cancel};(\neg \activity{get})^*)+(o^*;\activity{cancel};(o+\activity{cancel})^*;\activity{pay};(\neg \activity{get})^*)} \mathit{end} \label{eq:myconflict-check} \end{equation} We now want to express that we prefer the \constraint{not coexistence} constraint over the \constraint{response} one, i.e., that, upon cancelation, the ticket should not be retrieved even if the payment has been done. To this end, we first notice that, for an evolving trace, the composite constraint $\psi_r \land \varphi_{nc}$ is permanently violated either when $\varphi_{nc}$ is so, or when a conflict arises. The first situation arises when the trace contains both $\activity{cancel}$ and $\activity{get}$ (in either order), whereas the second arises when the trace contains both $\activity{cancel}$ and $\activity{pay}$ (in either order). Consequently, $re_{\rvass{\{\varphi_{nc}\land\psi_r\}}{\mathit{perm\_false}}}$ corresponds to the regular expression: \[\myrelaxedconflictregexp\] We then use this regular expression together with $\varphi_{nc}$ to instantiate formula \eqref{eq:preference} as follows: \[ \left\langle\myrelaxedconflictregexp\right\rangle\mathtt{true} \mathbin{\rightarrow} \ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})} \] \end{example} We conclude by showing the evolution of the monitors for the metaconstraints discussed in the various examples of this section. \tikzstyle{eventline}=[thick,densely dashed,-] \tikzstyle{monstate}=[ultra thick] \begin{figure}[t!]
\centering \resizebox{\textwidth}{!}{ \begin{tikzpicture}[node distance=9mm,y=1cm,x=2.5cm] \node[dot] (start) at (0,1) {}; \node at (0,1.5) {\activity{begin}}; \node[dot] (e1) at (1,1) {}; \node at (1,1.5) {do \activity{pay}}; \node[dot] (e2) at (2,1) {}; \node at (2,1.5) {do \activity{acc}}; \node[dot] (e3) at (3,1) {}; \node at (3,1.5) {do \activity{cancel}}; \node[dot] (e4) at (4,1) {}; \node at (4,1.5) {do \activity{get}}; \node[dot] (e5) at (5,1) {}; \node at (5,1.5) {do \activity{return}}; \node[dot] (end) at (6,1) {}; \node at (6,1.5) {\activity{complete}}; \path[-stealth',ultra thick] (start) edge (e1) (e1) edge (e2) (e2) edge (e3) (e3) edge (e4) (e4) edge (e5) (e5) edge (end); \draw[eventline,solid] (start) edge (0,-12); \draw[eventline] (e1) edge (1,-12); \draw[eventline] (e2) edge (2,-12); \draw[eventline] (e3) edge (3,-12); \draw[eventline] (e4) edge (4,-12); \draw[eventline] (e5) edge (5,-12); \draw[eventline,solid] (end) edge (6,-12); \node[anchor=west] at (-1.3,0) {\ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}}; \node[temptruemonstate,fit={(0,.5*.6) (1,-.5*.6)}] {}; \node[tempfalsemonstate,fit={(1,.5*.6) (2,-.5*.6)}] {}; \node[truemonstate,fit={(2,.5*.6) (7,-.5*.6)}] {}; \node[anchor=west] at (-1.3,-1) {\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}}; \node[temptruemonstate,fit={(0,-1+.5*.6) (4,-1-.5*.6)}] {}; \node[falsemonstate,fit={(4,-1+.5*.6) (7,-1-.5*.6)}] {}; \node[fill=lightgray,minimum width=8*2.5cm,draw,very thick,anchor=west,minimum height=7mm] at (-1,-2){}; \node[anchor=center] at (3,-2) {\constraint{Contextual absence}: $\activity{get}$ task forbidden while $\ensuremath{\Diamond\activity{pay} \mathbin{\rightarrow} \Diamond\activity{acc}}$ is $\mathit{temp\_false}$ }; \node[anchor=east] at (0,-3) {\textsc{rv state}}; \node[temptruemonstate,fit={(0,-3+.5*.6) (2,-3-.5*.6)}] {}; \node[truemonstate,fit={(2,-3+.5*.6) (7,-3-.5*.6)}] {}; \node[anchor=east] at (0,-4) {\textsc{forb.~tasks}}; \node at (.5,-4) {\activity{--}}; \node at (1.5,-4) {\activity{get}}; \node at (2.5,-4) {\activity{--}}; \node at (3.5,-4) {\activity{--}}; \node at (4.5,-4) {\activity{--}}; \node at (5.5,-4) {\activity{--}}; \node[fill=lightgray,minimum width=8*2.5cm,draw,very thick,anchor=west,minimum height=7mm] at (-1,-5){}; \node[anchor=center] at (3,-5) { \constraint{Reactive compensation}: permanent violation of $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$ compensated by a consequent $\Diamond \activity{return}$ }; \node[anchor=east] at (0,-6) {\textsc{rv state}}; \node[temptruemonstate,fit={(0,-6+.5*.6) (4,-6-.5*.6)}] {}; \node[tempfalsemonstate,fit={(4,-6+.5*.6) (5,-6-.5*.6)}] {}; \node[truemonstate,fit={(5,-6+.5*.6) (7,-6-.5*.6)}] {}; \node[fill=lightgray,minimum width=8*2.5cm,draw,very thick,anchor=west,minimum height=7mm] at (-1,-7){}; \node[anchor=center] at (3,-7) { \constraint{Conflict}: presence of a conflict for $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$ and $\ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})}$ }; \node[anchor=east] at (0,-8) {\textsc{rv state}}; \node[tempfalsemonstate,fit={(0,-8+.5*.6) (3,-8-.5*.6)}] {}; \node[temptruemonstate,fit={(3,-8+.5*.6) (4,-8-.5*.6)}] {}; \node[falsemonstate,fit={(4,-8+.5*.6) (7,-8-.5*.6)}] {}; \node[anchor=east] at (0,-9) {\textsc{conflict}}; \node at (.5,-9) {}; \node at (1.5,-9) {}; \node at (2.5,-9) {}; \node at (3.5,-9) {X}; \node at (4.5,-9) 
{}; \node at (5.5,-9) {}; \node[fill=lightgray,minimum width=8*2.5cm,draw,very thick,anchor=west,minimum height=7mm] at (-1,-10){}; \node[anchor=center] at (3,-10) { \constraint{Preference}: preference of $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$ over $\ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})}$ }; \node[anchor=east] at (0,-11) {\textsc{rv state}}; \node[temptruemonstate,fit={(0,-11+.5*.6) (4,-11-.5*.6)}] {}; \node[falsemonstate,fit={(4,-11+.5*.6) (7,-11-.5*.6)}] {}; \end{tikzpicture} } \caption{Result computed by monitoring the metaconstraints in Examples~\ref{ex:context}, \ref{ex:compensation}, and \ref{ex:conflict} against the trace $\activity{pay}\cdot\activity{acc}\cdot\activity{cancel}\cdot\activity{get}\cdot\activity{return}$; for readability, we also report the evolution of the monitors for the constraints mentioned by the metaconstraints. \label{fig:monitoring-metaconstraints}} \end{figure} \begin{example} Figure~\ref{fig:monitoring-metaconstraints} reports the result computed by the monitors for the metaconstraints discussed in Examples~\ref{ex:context}, \ref{ex:compensation}, and \ref{ex:conflict} on a sample trace. When the payment occurs, the \constraint{contextual absence} metaconstraint forbids getting the ticket. The prohibition is then permanently removed upon the consequent acceptance of the regulation, which ensures that the selected context will never appear again. The execution of the third step, that is, the cancelation of the registration, induces a conflict for $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$ and $\ensuremath{\Box (\activity{pay} \mathbin{\rightarrow} \raisebox{0.4ex}{\tiny$\bigcirc$} \Diamond \activity{get})}$, since they respectively forbid and require eventually getting the ticket. The monitor for the \constraint{conflict} metaconstraint witnesses this by switching to $\mathit{temp\_true}$. The \constraint{preference} metaconstraint instead stays in $\mathit{temp\_true}$: up to this point it was emitting $\mathit{temp\_true}$ because no conflict had occurred yet, whereas it now emits $\mathit{temp\_true}$ because this is the current RV state of the preferred \constraint{not coexistence} constraint. The execution of the \activity{get ticket} task induces a permanent violation of constraint $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$, which, in turn, triggers a number of effects: \begin{compactitem} \item Since the \constraint{preference} metaconstraint is now following the evolution of the preferred constraint $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$, it also moves to $\mathit{perm\_false}$. \item The conflict is not present anymore and will never be encountered again, given that one of the two constraints is permanently violated on its own. Thus, the monitor for the \constraint{conflict} metaconstraint turns to $\mathit{perm\_false}$. \item The \constraint{reactive compensation} metaconstraint is triggered by the permanent violation of $\ensuremath{\neg(\Diamond \activity{get} \land \Diamond \activity{cancel})}$, and asserts that, from now on, the compensating constraint $\Diamond \activity{return}$ must be satisfied; since the ticket is yet to be returned, the metaconstraint turns to $\mathit{temp\_false}$.
\end{compactitem} The execution of the last step, consisting in returning the ticket, has the effect of permanently satisfying the \constraint{compensation} metaconstraint, which was indeed waiting for this task to occur. \end{example}
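As a final, self-contained illustration, the conflict expression \eqref{eq:myconflict} can be fed to any off-the-shelf regular expression engine to identify the points of a trace where a conflict is in place: a conflict holds exactly for those prefixes that match the expression. The following Java fragment (again purely illustrative) replays the trace of Figure~\ref{fig:monitoring-metaconstraints} in this way, abbreviating tasks to single letters ($\activity{p}$ for $\activity{pay}$, $\activity{a}$ for $\activity{acc}$, $\activity{c}$ for $\activity{cancel}$, $\activity{g}$ for $\activity{get}$, $\activity{r}$ for $\activity{return}$), so that $o$ becomes the character class \texttt{[ar]} and $\neg\activity{get}$ becomes \texttt{[pcar]}:
\begin{verbatim}
import java.util.regex.Pattern;

// Illustrative replay of the conflict expression over the trace
// pay.acc.cancel.get.return, with tasks abbreviated to single letters.
public class ConflictReplay {
    public static void main(String[] args) {
        // (o*;pay;(o+pay)*;cancel;(!get)*)
        //   + (o*;cancel;(o+cancel)*;pay;(!get)*)
        Pattern conflict =
            Pattern.compile("[ar]*p[arp]*c[pcar]*|[ar]*c[arc]*p[pcar]*");
        String trace = "pacgr";
        for (int i = 1; i <= trace.length(); i++) {
            String prefix = trace.substring(0, i);
            System.out.println(prefix + " -> conflict in place: "
                + conflict.matcher(prefix).matches());
        }
        // output: p, pa: false; pac: true; pacg, pacgr: false
    }
}
\end{verbatim}
The output flags a conflict exactly for the prefix $\activity{pay}\cdot\activity{acc}\cdot\activity{cancel}$, matching the \constraint{conflict} row of the figure.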
\section{Implementation} \label{sec:implementation} The entire approach has been implemented as an \emph{operational decision support (OS) provider} for the \textsc{ProM} 6 process mining framework\footnote{\url{http://www.promtools.org/prom6/}}, called the \textsc{LDL Monitor}. \textsc{ProM} 6 provides a generic OS environment~\cite{Westergaard2011:OS,Maggi2017} that supports the interaction between an external workflow management system at runtime (producing events) and \textsc{ProM}. At the back-end of the \textsc{LDL Monitor} there is a software module specifically dedicated to the construction and manipulation of {{\sc nfa}\xspace}s from {\sc ldl}$_f$\xspace/{\sc ltl}$_f$\xspace formulae, concretely implementing the technique presented in Section~\ref{sec:automaton}. This software is called FLLOAT, which stands for ``From {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace To AuTomata''; its code is open source and publicly available at \url{https://github.com/RiccardoDeMasellis/FLLOAT}. We describe this reasoning component in Section~\ref{sec:backend}. In Section~\ref{general}, we then sketch some relevant aspects of the general architecture of the OS backbone implemented inside ProM 6. In Section~\ref{interaction}, we ground the discussion to the specific case of the \textsc{LDL Monitor}, discussing the skeleton of our compliance verification OS provider. The data exchanged between the \textsc{LDL Monitor} client and provider is illustrated in Section~\ref{data}. Finally, in Section~\ref{client}, we describe the implemented \textsc{LDL Monitor} clients. \subsection{Reasoning Component} \label{sec:backend} The reasoning component of the \textsc{LDL Monitor}, FLLOAT, implements the logic of runtime verification by building the automata of the reference-model {\sc ldl}$_f$\xspace constraints with the algorithms presented in Section~\ref{sec:automaton}. \begin{figure} \includegraphics[scale=1]{sw-component} \centering \caption{UML-like diagram of the backend main components.} \label{fig:conceptual-modules} \end{figure} The FLLOAT code is implemented in Java and exploits the inheritance features of object-oriented languages.
It is made up of several conceptual modules and makes use of external libraries, as shown by the UML-like diagram in Figure~\ref{fig:conceptual-modules}, where the main Java classes are depicted by the usual rectangles, their surrounding boxes represent the conceptual modules they belong to, and dashed arrows show the dependencies. In what follows we address each conceptual module separately. \begin{figure} \includegraphics[scale=.3]{uml-formula} \centering \caption{UML-like diagram of the classes for {\sc ldl}$_f$\xspace formulae.} \label{fig:uml-formula} \end{figure} \paragraph{Formulae} This module contains classes and methods to represent and manipulate logical formulae. Classes in this module have a complex hierarchy, as formulae are characterized by several independent aspects: the language ({\sc ldl}$_f$\xspace, {\sc ltl}$_f$\xspace, {\sc re}$_f$\xspace), the structure (atomic, unary, binary), and the temporal characterization (local or temporal). Since Java does not allow multiple inheritance, such a hierarchy has been reproduced by a suitable use of subclasses and interfaces. Besides, formulae are implemented with an inductive structure, i.e., a formula has as instance variable one subformula if unary, or two if binary, which allows us to elegantly implement all the recursive functions for their manipulation. Each formula implements the interface \texttt{Formula}, which is then extended by six interfaces, each representing a specific characteristic: \texttt{Temporal} if the formula contains a temporal operator; \texttt{Local} if it does not; \texttt{BooleanOp} if its main operator is a boolean operator; \texttt{Atomic} if it is atomic, i.e., if it is the (propositional) $true$ (abstract class \texttt{TrueLocal}), the (propositional) $false$ (abstract class \texttt{FalseLocal}), or a propositional variable (\texttt{LocalVar}); \texttt{Unary} if its main operator is unary; and, lastly, \texttt{Binary} if it is instead binary. Besides, \texttt{BooleanOp} is extended by interfaces representing the usual boolean operators, such as \texttt{Not}, which also extends \texttt{Unary}, and \texttt{And}, which also extends \texttt{Binary}, and so on for the other boolean connectives. We remark that it is necessary to express such different characteristics by means of interfaces, as each formula is indeed a combination of them. At a lower level of abstraction, we have three main types of formulae: {\sc ldl}$_f$\xspace, {\sc ltl}$_f$\xspace and {\sc re}$_f$\xspace, each of which is again an interface extending \texttt{Formula}. Here we provide a detailed description of the structure of {\sc ldl}$_f$\xspace formulae only, but the same ideas also hold for {\sc ltl}$_f$\xspace and {\sc re}$_f$\xspace formulae. Figure~\ref{fig:uml-formula} provides a UML-like class diagram for {\sc ldl}$_f$\xspace formulae, where dashed boxes represent interfaces, simple boxes are abstract classes, boxes with bold text are concrete classes, and arrows denote either \emph{extends} or \emph{implements}, depending on whether the extending/implementing entity is an interface or an (abstract) class. \texttt{LDLf} extends \texttt{Formula}, and is extended by the interfaces \texttt{LDLfTemp} (which also extends \texttt{Temporal}), \texttt{LDLfBooleanOp} (which also extends \texttt{BooleanOp}) and \texttt{LDLfLocal} (which also extends \texttt{Local}). Moreover, \texttt{LDLf} is implemented by the abstract classes \texttt{LDLfUnary} and \texttt{LDLfBinary}, which also extend \texttt{Unary} and \texttt{Binary}, respectively.
An atomic {\sc ldl}$_f$\xspace formula is clearly local and, as presented in Section~\ref{sec:LTLf-LDLf}, can be \texttt{LDLfLocalTrue}, i.e., the propositional $true$ (hence extending \texttt{TrueLocal} and implementing \texttt{LDLfLocal}), or, analogously, \texttt{LDLfLocalFalse} or \texttt{LDLfLocalVar}. Other local {\sc ldl}$_f$\xspace formulae are boolean combinations of other local formulae, hence they all implement the interface \texttt{LDLfBooleanOpLocal}, such as \texttt{LDLfLocalNot} (which also implements \texttt{Not}) and \texttt{LDLfLocalAnd} (which also implements \texttt{And}). \texttt{LDLfBooleanOpTemp} formulae have an analogous structure. The other {\sc ldl}$_f$\xspace temporal formulae are the atomic \texttt{LDLftt} and \texttt{LDLfff} formulae and the formulae with a temporal operator, which extend the abstract class \texttt{LDLfTempOpTemp}, that is, \texttt{LDLfBox} and \texttt{LDLfDiamond} formulae. Although such a hierarchical structure may seem cumbersome, when dealing with a large number of classes it is essential to keep the code modular. As an example, let us consider the negation normal form (NNF) of a formula: regardless of whether a formula is an \texttt{LDLfLocalAnd}, \texttt{LDLfTempAnd}, \texttt{LTLfLocalAnd} or \texttt{LTLfTempAnd}, the logic for transforming an \texttt{And} formula into NNF is the same. Indeed, the non-static method \emph{nnf()} has been implemented as a \emph{default} method in the \texttt{And} interface, and it is hence inherited by all implementing classes. Similar considerations also hold for the method \emph{getSig()}, which returns the set of propositional variables appearing in the formula, as it can be defined as a \emph{default} method in the interfaces \texttt{Unary} and \texttt{Binary}. Another notable example is the \emph{delta()} method, which implements the delta function of Figure~\ref{fig:delta}: being the same for all \texttt{LDLfLocal} formulae, it has been implemented as a \emph{default} method in the interface \texttt{LDLfLocal}. Also, the methods returning sub-formulae, such as \emph{getNested}, \emph{getLeft} and \emph{getRight}, are defined in the abstract classes \texttt{LDLfUnary} and \texttt{LDLfBinary}. \paragraph{Automaton construction} The main functionality FLLOAT provides is the generation of the automaton for an {\sc ltl}$_f$\xspace/{\sc ldl}$_f$\xspace formula $\varphi$ given as input, which is implemented in the static method \emph{ldlf2Automaton} of the class \texttt{AutomatonUtils}. The whole procedure works as follows. First, the formula is parsed and an \texttt{LTLf} or \texttt{LDLf} object is created. This is achieved by classes that are previously and automatically generated by ANTLR\footnote{\url{http://www.antlr.org}} starting from grammar files. If the input formula is {\sc ldl}$_f$\xspace, it is translated into negation normal form (\emph{nnf()} method) and the algorithm for automata generation is called. Conversely, if it is {\sc ltl}$_f$\xspace, it must first be converted to {\sc ldl}$_f$\xspace by the method \emph{toLDLf()}, implementing the translation explained in Section~\ref{sec:LTLf-LDLf}. Once an {\sc ldl}$_f$\xspace formula $\varphi$ in negation normal form has been obtained, the automaton is generated with Algorithm~\ref{fig:algo}.
The method \emph{ldlf2Automaton} consists of two nested cycles, the outer one on the states in $\S$ yet to be analyzed, and the inner one on the interpretations for $\P$ or the empty trace (line $8$ of Algorithm~\ref{fig:algo}): at each iteration $\delta(s, \Theta)$ is computed, where $s \in \S$ and $\Theta \in 2^\P \cup \set{\varepsilon}$, possibly generating new states $q'$ to be added to the set of states to be analyzed, along with the respective transitions (lines $8$--$9$ of Algorithm~\ref{fig:algo}). Since the function $\delta$ is recursively defined on the structure of {\sc ldl}$_f$\xspace formulae, it is implemented by the recursive non-static \emph{delta} method of the \texttt{LDLf} class, exploiting the Java inheritance features. The Tweety library\footnote{\url{http://tweetyproject.org}} is used to compute the models, i.e., the states $q'$, of the formula $\bigwedge_{(\atomize{\psi}\in q)} \delta(\atomize{\psi},\Theta)$ in line $8$ of Algorithm~\ref{fig:algo}. The procedure starts by analyzing $\atomize{\varphi}$, the only state initially in $\S$, and ends when all states have been analyzed and no new ones have been generated in the meantime. The data structures for automata are defined in the jautomata library\footnote{\url{https://github.com/abailly/jautomata}}, which also provides methods for automata manipulation, such as union, intersection, trimming and determinization. \paragraph{Runtime Verification} The runtime verification functionalities are provided by the \texttt{ExecutableAutomaton} class. An executable automaton is essentially a deterministic automaton (every {\sc nfa}\xspace can be determinized) with a reference to the current state. When an executable automaton is created from an automaton, the current state is set to the initial state (by construction there is always a unique initial state). The idea is to navigate the automaton and return \textsc{rv}\xspace truth values while events are executed. Recalling the results presented in Section~\ref{sec:colored-proof}, each automaton state represents an \textsc{rv}\xspace truth value. Hence, an operative way to implement an \textsc{rv}\xspace monitor is to analyze the occurring events one by one and to perform the corresponding transitions on the automaton of the constraints. Each time a state change is triggered by a transition leading to a state $s$, we calculate $Reach(s)$ and return the corresponding truth value. In our implementation, when a new event is executed, the non-static method $step$, taking $\Theta \in 2^\P \cup \set{\varepsilon}$ as input, is called, updating the current state by traversing the corresponding $\Theta$ transition. The method $currentRVTruthValue$ computes $Reach$ for the current state and returns one among $\mathit{perm\_true}$, $\mathit{perm\_false}$, $\mathit{temp\_true}$ and $\mathit{temp\_false}$, as explained at the end of Section~\ref{sec:colored-proof}, thus effectively implementing the \textsc{rv}\xspace semantics.
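Note that, instead of recomputing $Reach(s)$ upon every state change, one can equivalently color all the automaton states once, with two backward reachability passes, so that each monitoring step amounts to a constant-time lookup. The following Java sketch illustrates this alternative; the naming and data layout are ours, and are not meant to reflect the actual FLLOAT implementation:
\begin{verbatim}
import java.util.*;

// Illustrative sketch: coloring every DFA state with its RV verdict,
// once and for all, via two backward reachability passes.
public class RvColoring {
    enum RvState { PERM_TRUE, PERM_FALSE, TEMP_TRUE, TEMP_FALSE }

    static RvState[] color(boolean[] accepting, int[][] delta) {
        int n = accepting.length;
        List<List<Integer>> preds = new ArrayList<>();
        for (int s = 0; s < n; s++) preds.add(new ArrayList<>());
        for (int s = 0; s < n; s++)
            for (int t : delta[s]) preds.get(t).add(s);
        boolean[] canAccept = backward(preds, accepting, true);
        boolean[] canReject = backward(preds, accepting, false);
        RvState[] verdict = new RvState[n];
        for (int s = 0; s < n; s++)
            verdict[s] = accepting[s]
                ? (canReject[s] ? RvState.TEMP_TRUE : RvState.PERM_TRUE)
                : (canAccept[s] ? RvState.TEMP_FALSE : RvState.PERM_FALSE);
        return verdict;
    }

    // marks every state from which some state whose acceptance equals
    // the given target is reachable
    static boolean[] backward(List<List<Integer>> preds,
                              boolean[] accepting, boolean target) {
        boolean[] mark = new boolean[accepting.length];
        Deque<Integer> todo = new ArrayDeque<>();
        for (int s = 0; s < accepting.length; s++)
            if (accepting[s] == target) { mark[s] = true; todo.push(s); }
        while (!todo.isEmpty())
            for (int p : preds.get(todo.pop()))
                if (!mark[p]) { mark[p] = true; todo.push(p); }
        return mark;
    }

    public static void main(String[] args) {
        // toy DFA of a responded existence constraint: state 1 is the
        // only rejecting state, state 2 is an accepting sink
        boolean[] acc = {true, false, true};
        int[][] delta = {{1, 2, 0}, {1, 2, 1}, {2, 2, 2}};
        System.out.println(Arrays.toString(color(acc, delta)));
        // prints [TEMP_TRUE, TEMP_FALSE, PERM_TRUE]
    }
}
\end{verbatim}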
\begin{figure}[t] \centering \includegraphics[width = 1.\textwidth]{osSchema.pdf} \caption{ProM OS backbone architecture.} \vspace{-0.3cm} \label{archi} \end{figure} \subsection{General Architecture} \label{general} The ProM OS architecture (shown in Figure~\ref{archi}) relies on the well-known client-server paradigm \cite{DBLP:conf/fase/MaggiMA12}. More specifically, the ProM OS Service manages the interaction with running process instances and acts as a mediator between them and the registered specific OS providers. Sessions are created and handled by the OS Service to maintain the state of the interaction with each running client. To establish a stateful connection with the OS Service, the client creates a session handle for each managed running process instance, by providing the host and port of the OS Service. When the client sends a first query related to one of these running instances to the OS Service, it specifies information related to the initialization of the connection (such as reference models, configuration parameters, etc.) and to the type of the queries that will be asked during the execution. This latter information will be used by the OS Service to select, among the registered active providers, the ones that can answer the received query. The session handle takes care of the interaction with the service from the client point of view, hiding the connection details and managing the information passing in a lazy way. The interaction between the handle and the service takes place over a TCP/IP connection. \subsection{\textsc{LDL Monitor} Skeleton} \label{interaction} In the \textsc{LDL Monitor}, the interaction between a client and the OS Service mainly consists of two aspects. First of all, before starting the runtime compliance verification task, the client sends to the OS Service the {\sc ldl}$_f$\xspace reference model to be used. This model is then placed inside the session by the OS Service. The reference model is a set of {\sc ldl}$_f$\xspace constraints represented as strings. The client can also set further information and properties. For example, each constraint in the {\sc ldl}$_f$\xspace reference model can be associated with a specific weight, which can then be exploited to compute metrics and indicators that measure the degree of adherence of the running instance to the reference model. Secondly, during the execution, the client sends queries about the current monitoring status for one of the managed process instances.
The session handle augments these queries with the partial execution trace containing the evolution that has taken place for the process instance after the last request. The OS Service handles a query by first storing the events received from the client, and then invoking the \textsc{LDL Monitor} provider. The \textsc{LDL Monitor} provider recognizes whether it is being invoked for the first time with respect to~that process instance. If this is the case, it takes care of translating the reference model into the underlying formal representation. The provider then returns a fresh result to the client, exploiting a reasoning component for the actual computation of the result. The reasoning component behind the provider is described in Section~\ref{sec:backend}. After each query, the generated result is sent back to the OS Service, which possibly combines it with the results produced by other relevant providers, finally sending the global response back to the client. \begin{figure}[t] \centering \includegraphics[width = 0.89\textwidth]{fluentsModel} \caption{Fluent model used to store the evolution of constraints.\label{fig:fluentsModel}\vspace{-0.3cm}} \end{figure} \subsection{Exchanged Data and Business Constraints States} \label{data} We now discuss the data exchanged by the \textsc{LDL Monitor} client and provider. The partial execution traces sent by the client to the OS Service use the XES format (\url{www.xes-standard.org/}) for event data. XES is an extensible XML-based standard recently adopted by the IEEE task force on process mining. The response produced by the \textsc{LDL Monitor} provider is composed of two parts. The first part contains the temporal information related to the evolution of each monitored business constraint from the beginning of the trace up to now. At each time point, a constraint can be in one state, which models whether it is currently: \emph{(permanently) satisfied}, i.e., the current execution trace complies with the constraint; \emph{possibly satisfied}, i.e., the current execution trace is compliant with the constraint, but it is still possible to violate it in the future; \emph{(permanently) violated}, i.e., the process instance is not compliant with the constraint; \emph{possibly violated}, i.e., the current execution trace is not compliant with the constraint, but it is possible to satisfy it by generating some sequence of events. This state-based evolution is encapsulated in a \emph{fluent model}, which obeys the schema sketched in Figure~\ref{fig:fluentsModel}. A fluent model aggregates fluent groups, containing sets of correlated fluents. Each fluent models a multi-state property that changes over time. In our setting, fluent names refer to the constraints of the reference model. The fact that a constraint was in a certain state along a (maximal) time interval is modeled by associating a closed MVI (Maximal Validity Interval) with that state. MVIs are characterized by their starting and ending timestamps. Current states are associated with open MVIs, which have a fixed initial timestamp but an end that will be bound to a currently unknown future value. \begin{figure}[t!]
\centering \includegraphics[scale=0.7]{monitor} \caption{Screenshot of one of the \textsc{LDL Monitor} clients.\label{fig:client}\vspace{-0.3cm}} \end{figure} \subsection{LDL Monitor Client} \label{client} We have developed two \textsc{LDL Monitor} clients, in order to deal with different settings: (a) replay of a process instance starting from a complete event log, and (b) acquisition of events from an information system. The first client is mainly used for testing and experimentation. The second client requires a connection to some information system, e.g., a workflow management system. The two clients differ in how the stream of events is provided by the user, but they both include an interface with a graphical representation of the obtained fluent model, showing the evolution of constraints and also reporting the trend of the compliance indicator. Figure~\ref{fig:client} shows the interface running with the example in Figure~\ref{fig:monitoring-metaconstraints}. \section{Conclusions} In this article, we have brought forward a foundational and practical approach to formalize and monitor linear temporal constraints and metaconstraints, under the assumption that the traces generated by the system under study are \emph{finite}. This is, e.g., the typical case in the context of business process management and service-oriented architectures, where each execution of a business process or service invocation leads from a starting state to a completion state in a possibly unbounded, yet finite, number of steps. The main novelty of our approach is to adopt a more powerful specification logic, that is, {\sc ldl}$_f$\xspace (which corresponds to Monadic Second-Order Logic over finite traces), instead of the typical choice of {\sc ltl}$_f$\xspace (which corresponds to First-Order Logic over finite traces). As in the case of {\sc ltl}$_f$\xspace, {\sc ldl}$_f$\xspace comes with an automata-theoretic characterization that employs standard finite-state automata. Differently from {\sc ltl}$_f$\xspace, though, {\sc ldl}$_f$\xspace can declaratively express, within the logic, not only constraints that predicate on the dynamics of task executions, but also constraints that predicate on the monitoring state of other constraints. The approach has been fully implemented as an independent library to specify {\sc ldl}$_f$\xspace/{\sc ltl}$_f$\xspace formulae as well as obtain and manipulate their corresponding automata, which is then invoked by a process monitoring infrastructure that has been developed within the state-of-the-art ProM process mining framework. As a next step, we intend to incorporate other monitoring perspectives, such as the data perspective, which takes into consideration the data carried by the monitored events. This setting is reminiscent of stream query languages and event calculi. For example, the logic-based Event Calculus has been applied to process monitoring against data-aware extensions of the {\sc declare}\xspace language in \cite{MMC13}, also considering some specific forms of compensation \cite{CMM08}. However, all these approaches are only meant to query and reason over (a portion of) the events collected so far in a trace, and not to reason upon its possible future continuations, as we do in our approach.
Further investigation is required to understand under which conditions it is possible to lift the automata-based techniques presented here to the case where events are equipped with a data payload and constraints are expressed in (fragments of) first-order temporal logics over finite traces. \bibliographystyle{ACM-Reference-Format-Journals}
\section{Introduction} The production of W(Z) bosons associated with jets at the LHC has a wide range of physics potential, which varies from Standard Model (SM) measurements to Supersymmetry (SUSY) searches. These processes can be used for tests of perturbative quantum chromodynamics (QCD)~\cite{QCD}. The predictions for W(Z)$+N$Jets, where $N>2$, are accessible only through matrix element (ME) plus parton shower (PS) computations and, in fact, can be considered as a prime testing ground for the accuracy of such predictions. Z+jet events can also be exploited to calibrate jets measured in the Calorimeter (see~\cite{jetmet} for details). Furthermore, W(Z)+jets form a relevant background to many interesting phenomena, including new physics. Therefore, these processes must be measured with great accuracy to allow precision measurements and increase the sensitivity of searches beyond the SM. However, the individual cross section measurements of W+Njets and Z+Njets will be affected by large systematic uncertainties associated mostly with the definition and measurement of jets. One of the measurements that CMS plans to perform is the ratio of the cross sections of W+jets to Z+jets as functions of jet multiplicity and boson $p_{\rm T}$. Such a measurement allows partial cancellation of the most relevant experimental systematic uncertainties as well as the theoretical uncertainties due to the choice of renormalization scale, the parton distribution functions, etc.~\cite{wzjets}. The jet energy scale forms the largest experimental uncertainty, as it increases rapidly with jet multiplicity. This uncertainty cancels in the ratio as long as the $p_{\rm T}$ spectra, the rapidity distribution and the composition of the jets in both processes are the same at the level of experimental sensitivity. Other uncertainties, e.g., the Underlying Event (UE), multiple interactions, luminosity and detector acceptances, will also cancel to a large extent in the ratio. A number of physics generators are available to simulate the major kinematic properties of W(Z)+jets. The measurements of W(Z)+jets at the Tevatron collider indicate a general agreement between the theoretical predictions based on LO ME plus PS and data~\cite{CDF}. In the studies presented here, the ME event generator ALPGEN is used to generate exclusive parton level W(Z)+Njets (N=0,1,2,3,4,5) events. PYTHIA is used for PS and hadronization. The MLM recipe is used in order to avoid double counting of processes from ME and PS. The SM processes $t\bar{t}$+jets, WW+jets, WZ+jets, ZZ+jets and QCD multi-jet are considered as backgrounds and generated with PYTHIA in fully inclusive decay modes for W and Z bosons. Figure~\ref{fig:zjets}(\ref{fig:wjets}) shows the $p_{\rm T}$ distribution of the Z(W) boson in selected Z(W)+$\ge 1$jet (left) and Z(W)+$\ge 4$jet (right) events for the signal and backgrounds. In both the W and Z boson cases, the events are selected in the electron and muon channels. High-$p_{\rm T}$ isolated leptons are selected in order to reduce contamination from QCD events. Furthermore, Z+jets events are selected by requiring the di-lepton invariant mass to lie in a tight window around the Z boson mass and the missing transverse energy to be small, whereas for W+jets a large missing transverse energy is required. Jet reconstruction is performed with the Iterative Cone algorithm using the energy deposited in the Calorimeter. Jets are calibrated using $\gamma$+jet events, and jets with $p_{\rm T} > 50~ {\rm GeV}$ are counted.
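To make the selection logic concrete, the following Python sketch mirrors the cuts described above. Only the $50~{\rm GeV}$ jet threshold is taken from the text; the lepton $p_{\rm T}$, mass-window and missing-energy values below are illustrative placeholders, not the actual CMS analysis cuts.

\begin{verbatim}
# Illustrative sketch of the W(Z)+jets selection and ratio measurement.
M_Z = 91.19  # GeV

def count_jets(jets, pt_min=50.0):
    # Count calibrated jets above the 50 GeV threshold used in the text.
    return sum(1 for j in jets if j["pt"] > pt_min)

def select_z(event):
    # Two high-pT isolated leptons, tight mass window, small MET.
    # The 20/10/20 GeV values are placeholders for illustration only.
    leptons = [l for l in event["leptons"] if l["pt"] > 20.0 and l["isolated"]]
    return (len(leptons) >= 2
            and abs(event["dilepton_mass"] - M_Z) < 10.0
            and event["met"] < 20.0)

def select_w(event):
    # One high-pT isolated lepton and large MET (placeholder values).
    leptons = [l for l in event["leptons"] if l["pt"] > 20.0 and l["isolated"]]
    return len(leptons) >= 1 and event["met"] > 30.0

def w_over_z_ratio(events, n_jets):
    # Ratio of W to Z event counts at inclusive jet multiplicity >= n_jets,
    # in which common systematic uncertainties partially cancel.
    n_w = sum(1 for e in events
              if select_w(e) and count_jets(e["jets"]) >= n_jets)
    n_z = sum(1 for e in events
              if select_z(e) and count_jets(e["jets"]) >= n_jets)
    return n_w / n_z if n_z > 0 else float("nan")
\end{verbatim}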
The current Z+jets selection provides a rather ``clean'' sample, and with $1~fb^{-1}$ of data, up to the fourth jet multiplicity can be measured. One crucial point will be the reduction of the background to the W+$>$2 jets sample from $t\bar{t}$ events (see Fig.~\ref{fig:wjets} right), since the $t\bar{t}$ production rate increases by about a factor of 100 from the Tevatron to the LHC, while W production increases by just a factor of 5. In the studies presented, the QCD contribution as background to W(Z)+jets is found to be negligible and is not shown in the figures. However, we should note that the background processes are simulated using the PYTHIA program, which is known not to reproduce high jet multiplicities correctly. Data-driven methods for background estimation are therefore extremely important. The cross section measurements of W(Z)+jets versus the jet multiplicity will be one of the early measurements carried out with the CMS detector. The ratio measurement of W+Njets to Z+Njets will allow partial cancellation of the most relevant systematic uncertainties. This is an extremely important advantage at startup, when it will be relatively difficult to control the systematic uncertainties. The ratio measurement can also benefit from types of jets other than the standard Calorimeter-based jets (e.g., jets reconstructed with Tracker tracks only). Unlike Calorimeter jets, Track-based jets preserve the vertex information, which helps to count only the jets originating from the signal vertex, eliminating Pile-Up contamination of the jets. \begin{figure} \begin{center} \includegraphics[scale=0.60]{Z+ts-1.ps} \includegraphics[scale=0.60]{Z+ts-4.ps} \caption{$p_{\rm T}$ distribution of the Z boson in selected Z+$\ge 1$jet (left) and Z+$\ge 4$jet (right) events for signal and background for an integrated luminosity of $1~fb^{-1}$.} \label{fig:zjets} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.60]{W+jets-1.ps} \includegraphics[scale=0.60]{W+jets-4.ps} \caption{$p_{\rm T}$ distribution of the W boson in selected W+$\ge 1$jet (left) and W+$\ge 4$jet (right) events for signal and background for an integrated luminosity of $1~fb^{-1}$.} \label{fig:wjets} \end{center} \end{figure}
\section{Introduction}\label{1} In this work, the mixing behavior of irreversible dynamics is considered on the hypercube $\Sigma_{n} = Q^{V_{n}}$, where $Q = \{1, 2, 3\}$ and $V_{n} = \{1,...,n\}$. One widely known case is that of discrete time Glauber dynamics for the uniform measure on $\Sigma_{n}$. At each time step, a vertex $v\in V_{n}$ is uniformly chosen. Then, we reassign the color of vertex $v$ uniformly on $Q$. The mixing of these dynamics is fully understood, and the sharp convergence they exhibit is known as the cutoff phenomenon. \par In this study, the result is extended to the cyclic dynamics defined by the following rule. At each time step, a vertex $v\in V_{n}$ is uniformly chosen. Assume that $v$ has color $i$ in $Q$. Then, we reassign the color of vertex $v$ as $i$ with probability $1-p$ and $i+1$ with probability $p$, where $0<p<1$ and $i+1$ is understood cyclically in $Q$, so that $3+1=1$. In particular, the cutoff phenomenon described below is proved. \par The descriptions of the cutoff phenomenon are based on \cite{2}. Let the total variation distance between two probability distributions $\mu$ and $\nu$ on a discrete state space $\mathcal{X}$ be defined as \begin{equation*} \Vert\mu-\nu\Vert_{\textnormal{TV}} = \max_{A\subseteq\mathcal{X}}\,\vert\,\mu(A)-\nu(A)\,\vert. \end{equation*} Then, consider the Markov chain $(\sigma_{t})$ on state space $\mathcal{X}$ with the transition matrix $P$ and stationary distribution $\pi$. The maximal distance $d(t)$ of the Markov chain $(\sigma_{t})$ is defined as \begin{equation*} d(t)=\max_{x\in\mathcal{X}}\,\Vert P^{t}(x,\cdot)- \pi\Vert_{\textnormal{TV}}, \end{equation*} while the $\epsilon$-mixing time is defined as \begin{equation*} t_{\textnormal{mix}}(\epsilon)=\min\{t\,:\,d(t)\leq\epsilon\}. \end{equation*} By convention, the mixing time $t_{\textnormal{mix}}$ denotes $t_{\textnormal{mix}}(\frac{1}{4})$.\par Suppose that, for all $\epsilon\in(0,1)$, the sequence of Markov chains $ \{(\sigma_{t}^{n})\}=(\sigma^{1}_{t}),\,(\sigma^{2}_{t}),\,\dots$ satisfies \begin{equation*} \lim_{n\to\infty}\frac{t^{(n)}_{\textnormal{mix}}(\epsilon)}{t^{(n)}_{\textnormal{mix}}(1-\epsilon)} = 1, \end{equation*} where $t^{(n)}_{\text{mix}}(\epsilon)$ is the $\epsilon$-mixing time of the chain $(\sigma^{n}_{t})$. The mixing time of the $n$-th chain is denoted as $t_{\textnormal{mix}}^{(n)} = t_{\textnormal{mix}}^{(n)}(\frac{1}{4})$, and the maximal distance is denoted as $d^{(n)}(t)$. Such Markov chains show a sharp decrease in the total variation distance from $1$ to $0$ close to the mixing time, and the sequence is then said to exhibit the cutoff phenomenon. Further, it is said to have a window of size $O(w_{n})$ if $\lim_{n\to\infty}\big(w_{n}/{t_{\textnormal{mix}}^{(n)}}\big)=0$, \begin{gather*} \lim_{\alpha\to- \infty}\liminf_{n\to\infty}d^{(n)}\big(t_{\textnormal{mix}}^{(n)}+\alpha w_{n}\big) = 1\quad\text{and}\quad \lim_{\alpha\to\infty}\limsup_{n\to\infty}d^{(n)}\big(t_{\textnormal{mix}}^{(n)}+\alpha w_{n}\big) = 0. \end{gather*} The cutoff phenomenon was first observed in card shuffling, as demonstrated in \cite{3}. Since then, cutoff phenomena for Markovian dynamics have been observed and rigorously verified for a multitude of models. In recent times, there have been several breakthroughs in the verification of cutoff phenomena for Glauber-type dynamics on spin systems. For example, the cutoff phenomenon for the Glauber dynamics on the Curie-Weiss model, which corresponds to the mean-field Ising model, is proven in \cite{4} for the high temperature regime.
This work has been further generalized in \cite{1}, where Glauber dynamics for the Curie-Weiss-Potts model have been considered. These two results concern mean-field models defined on the complete graph, where geometry is irrelevant.\par On the other hand, the cutoff phenomenon for spin systems on lattices is more complicated. The first development was achieved for the Ising model on the lattice in \cite{5}, and in \cite{6}, it was extended to a general spin system in the high temperature regime. In \cite{7}, a novel method called ``information percolation'' was developed, and the cutoff for the Ising model with a precise window size was obtained. This information percolation method has also been successfully applied to Swendsen-Wang dynamics for the Potts model and to Glauber dynamics for the random-cluster model in \cite{8} and \cite{9}, respectively.\par In the present work, the uniform measure, which corresponds to infinite temperature spin systems, is considered. From this perspective, the proposed model is simpler than existing models in which finite temperature has been considered. However, this model has a critical difference in that the dynamics being considered are irreversible. We emphasize here that the cutoff phenomenon for irreversible chains is barely understood. \subsection{Main Result} Theorem \ref{1.1}, the main result of the current article, presents the cutoff phenomenon of the cyclic dynamics considered herein. \begin{thm}\label{1.1} The cyclic dynamics defined on $\Sigma_{n}$ with probability $0<p<1$ exhibit cutoff at mixing time \begin{equation*} t(n) =\frac{1}{3p}n\log n \end{equation*} with a window of size $O(n)$. \end{thm} As the theorem can be proved similarly for all $0 < p < 1$, the proof is presented for $p=\frac{1}{2}$. In Section \ref{2}, the notations are set and the contraction properties of the proportion chain are provided; the proof of the lower bound of the cutoff is then presented. Section \ref{3} describes the analysis of the coalescence of the proportion and basket chains; following this, the upper bound of the cutoff is proved.\par Note that the cutoff phenomenon for the Glauber dynamics corresponding to the current model is proven in \cite{2}. We remark that these reversible Glauber dynamics exhibit cutoff at $\frac{1}{2}n\log n$ with a window of size $O(n)$. For that reversible case, spectral analysis can be applied to obtain the upper bound (see \cite[Chapter 12]{2}). In particular, a direct relationship between the eigenvalues of the transition matrix and the bound of the total variation distance is crucially used. For our irreversible case, we are not able to use spectral analysis and the proof becomes more complex. \section{Lower Bound}\label{2} This section presents the proof of the lower bound, given in Section \ref{2.14}. The proof is based on the analyses of the statistical features of the cyclic dynamics and of the same features evaluated under the stationary distribution, described in Section \ref{2.12} and Section \ref{2.13}, respectively. Prior to the proof, the notations used are defined, and the proportion chain used throughout this paper is described. \subsection{Preliminaries}\label{2.21} The cyclic dynamics on $\Sigma_{n}$ is denoted as $(\sigma^{n}_{t})_{t=0}^{\infty}$, and the superscript $n$ is suppressed for simplicity.
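As a concrete illustration of the dynamics (it plays no role in the proofs), the following minimal Python sketch simulates the cyclic dynamics for $p=\frac{1}{2}$ and tracks the proportions of the three colors, i.e., the proportion chain defined below; the names and the chosen run length are ours, the latter matching the cutoff time of Theorem \ref{1.1}.

\begin{verbatim}
import math
import random

def cyclic_step(sigma, p):
    # One step: choose a uniform vertex and advance its color
    # i -> i + 1 (cyclically) with probability p.
    v = random.randrange(len(sigma))
    if random.random() < p:
        sigma[v] = (sigma[v] + 1) % 3  # colors 1, 2, 3 encoded as 0, 1, 2

def proportions(sigma):
    # Proportion of each color among the n vertices.
    n = len(sigma)
    return tuple(sigma.count(k) / n for k in range(3))

n, p = 1000, 0.5
sigma = [0] * n                          # monochromatic initial state
steps = int(n * math.log(n) / (3 * p))   # cutoff time (1/(3p)) n log n
for _ in range(steps):
    cyclic_step(sigma, p)
print(proportions(sigma))                # approximately (1/3, 1/3, 1/3)
\end{verbatim}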
When the cyclic dynamics $(\sigma_{t})$ begin at state $\sigma_{0}$, the probability measure is denoted as $\mathbb{P}_{\sigma_{0}}$, and the expectation with respect to this probability measure is denoted as $\mathbb{E}_{\sigma_{0}}$. \par Then, the vector $s\in\mathbb{R}^{3}$ is considered, and its $i$-th element is denoted as $s^{i}$. The $\ell^{p}$-norm of the vector $s$ is denoted as $\Vert s \Vert_{p}$. Let the vector $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})\in\mathbb{R}^{3}$ be $\bar{e}$, and let $\hat{s} = s - \bar{e}$. Consider the $3\times3$ matrix $\textbf{Q}$, and let $\textbf{Q}^{i,k}$ be the ${(i,k)}$ element of matrix $\textbf{Q}$. Let $\textbf{Q}^{i}$ be its $i$-th row. For $\rho>0$, the subsets of $\mathbb{R}^{3}$ are denoted as \begin{alignat*}{2} &\mathcal{S}=\big\{\,x \in \mathbb{R}^{3}_{+}:\Vert x \Vert_{1}=1\,\big\},&& \mathcal{S}_{n}=\mathcal{S}\cap\frac{1}{n}\mathbb{Z}^{3}, \\ &\mathcal{S}^{\rho} = \big\{\,s\in\mathcal{S}:\Vert \hat{s}\Vert_{\infty}<\rho\,\big\}, &&\mathcal{S}_{n}^{\rho} = \mathcal{S}^{\rho}\cap\frac{1}{n}\mathbb{Z}^{3},\\ &\mathcal{S}^{\rho +} = \big\{\,s\in\mathcal{S}: s^{k}<\frac{1}{3}+\rho,\, 1 \leq k \leq 3\, \big\},\quad&&\mathcal{S}_{n}^{\rho +} = \mathcal{S}^{\rho +}\cap\frac{1}{n}\mathbb{Z}^{3}. \end{alignat*} Now, the \textit{proportion chain} $(S_{t})_{t=0}^{\infty}$ of the cyclic dynamics $(\sigma_{t})_{t=0}^{\infty}$ is defined as \begin{equation*} S_{t}= \big(\,S_{t}^{1},\, S_{t}^{2},\, S_{t}^{3}\,\big), \end{equation*} where \begin{equation*} S_{t}^{k} = \frac{1}{n}\sum_{v \in V_{n}}\,\textbf{1}_{\{ \sigma_{t}(v) = k\}} \quad k = 1,\, 2,\, 3. \end{equation*} Then, the proportion chain $(S_{t})$ is also a Markov chain on state space $\mathcal{S}_{n}$ with jump probabilities \begin{equation*} \big(\,S_{t+1}^{1},\, S_{t+1}^{2},\, S_{t+1}^{3}\,\big)= \begin{cases} \big(\,S_{t}^{1},\, S_{t}^{2},\, S_{t}^{3}\,\big)&\text{ w.p. }\frac{1}{2}\vspace{3pt} \\ \big(\,S_{t}^{1}-\frac{1}{n},\, S_{t}^{2}+\frac{1}{n},\, S_{t}^{3}\,\big)&\text{ w.p. }\frac{1}{2}S_{t}^{1}\vspace{3pt} \\ \big(\,S_{t}^{1},\, S_{t}^{2}-\frac{1}{n},\, S_{t}^{3}+\frac{1}{n}\,\big)&\text{ w.p. }\frac{1}{2}S_{t}^{2}\vspace{3pt} \\ \big(\,S_{t}^{1}+\frac{1}{n},\, S_{t}^{2},\, S_{t}^{3}-\frac{1}{n}\,\big)&\text{ w.p. }\frac{1}{2}S_{t}^{3}, \\ \end{cases} \end{equation*} where w.p. is an abbreviation of ``with probability''. This transition is well-defined on $\mathcal{S}_{n}$, because if $S_{t}^{i}=0$ for any $i \in \{1, 2, 3\}$, then the probability of $S_{t}^{i}$ decreasing in the next step is zero. \subsection{Statistical Properties of the Chain}\label{2.12} This section describes the derivation of the statistical properties of the proportion chain used in the proof. In particular, the $\ell^{2}$-norm of $\hat{S}_{t}$ and the variance of $S_{t}$ are analyzed. \begin{prop}\label{2.2} The proportion chain $(S_{t})$ has the following norm contraction: \begin{equation*} \mathbb{E}_{\sigma_{0}}\Vert \hat{S}_{t}\Vert_{2}^{2} = \Big(\,1- \frac{3}{2n}\,\Big)^{t}\,\Vert \hat{S}_{0}\Vert_{2}^{2} \,+\,O\,\Big(\,\frac{1}{n}\,\Big). \end{equation*} \end{prop} \begin{proof} Denote by $(\mathcal{F}_{t})$ the filtration of the cyclic dynamics $(\sigma_{t})$.
By the definition of the proportion chain, \begin{align*} \mathbb{E}_{\sigma_{0}}\Big[\,\Vert\hat{S}_{t+1}\Vert_{2}^{2}- \Vert\hat{S}_{t}\Vert_{2}^{2}\,\vert\, \mathcal{F}_{t}\Big] \, =&\, \sum_{i=1}^{3}\,\frac{1}{2}\,S_{t}^{i}\, \Big[\,\Big(\,\hat{S}_{t}^{i}-\frac{1}{n}\,\Big)^{2} + \Big(\,\hat{S}_{t}^{i+1}+\frac{1}{n}\,\Big)^{2}- \big(\,\hat{S}_{t}^{i}\,\big)^{2}-\big(\,\hat{S}_{t}^{i+1}\,\big)^{2}\, \Big] \\=&\, \frac{1}{n}\,\sum_{i=1}^{3}\,S_{t}^{i}S_{t}^{i+1}- \frac{1}{n}\,\sum_{i=1}^{3}\,\big(\,S_{t}^{i}\,\big)^{2}+\frac{1}{n^{2}}\,=\,- \frac{3}{2n}\,\Vert S_{t}-\bar{e}\Vert_{2}^{2} + \frac{1}{n^{2}}. \end{align*} Therefore, \begin{equation*} \mathbb{E}_{\sigma_{0}}\Vert\hat{S}_{t+1}\Vert_{2}^{2} = \Big(\,1- \frac{3}{2n}\,\Big)\,\mathbb{E}_{\sigma_{0}}\Vert\hat{S}_{t}\Vert_{2}^{2} + \frac{1}{n^{2}}. \end{equation*} Inductively using this equation, we obtain \begin{align*} \mathbb{E}_{\sigma_{0}}\Vert \hat{S}_{t}\Vert_{2}^{2} = \Big(\,1-\frac{3}{2n}\,\Big)^{t}\,\Vert \hat{S}_{0}\Vert_{2}^{2} + \frac{2}{3n}\Big[\,1-\Big(\,1-\frac{3}{2n}\,\Big)^{t}\,\Big]. \end{align*} Since $0\leq\frac{2}{3n}[\,1-(\,1-\frac{3}{2n}\,)^{t}\,]\leq\frac{2}{3n}$, we get the desired result. \end{proof} This shows the contraction of the expectation of the $\ell^{2}$-norm of $\hat{S}_{t}$. In Proposition \ref{2.8}, this result will be compared with the variance of $\hat{S}_{t}$. Next, the semi-synchronized coupling that contracts the norm between the two proportion chains is defined. It is similar to the synchronized coupling of \cite{1}, but it covers more cases. \subsubsection{Semi-Synchronized Coupling} Consider the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$ beginning from $\sigma_{0}$ and $\tilde{\sigma}_{0}$, respectively. Let their proportion chains be $(S_{t})$ and $(\tilde{S}_{t})$, respectively. At time $t+1$, the \textit{semi-synchronized coupling} for the case of $S^{1}_{t}\geq\tilde{S}^{1}_{t}$, $S^{2}_{t}\leq \tilde{S}^{2}_{t}$, $S^{3}_{t} \leq \tilde{S}^{3}_{t}$ is defined as follows: \begin{enumerate} \setlength\itemsep{3pt} \item Choose the colors $(\,I_{t+1},\, \tilde{I}_{t+1}\,)$ based on the probability, as stated below. \begin{equation*} (\,I_{t+1},\, \tilde{I}_{t+1}\,)\,= \begin{cases} (\,1,\,1\,)\text{ w.p. } \tilde{S}_{t}^{1}\\ (\,2,\,2\,)\text{ w.p. } S_{t}^{2}\\ (\,3,\,3\,)\text{ w.p. } S_{t}^{3}\\ (\,1,\,2\,)\text{ w.p. } \tilde{S}_{t}^{2}-S_{t}^{2}\\ (\,1,\,3\,)\text{ w.p. } \tilde{S}_{t}^{3}-S_{t}^{3} \end{cases} \end{equation*} \item Choose the colors $(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ depending on $(\,I_{t+1},\, \tilde{I}_{t+1}\,)$ based on the probability, as stated below. \begin{itemize} \setlength\itemsep{3pt} \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)\, =\, (\,1,\,1\,)\Rightarrow (\,J_{t+1},\, \tilde{J}_{t+1}\,)$ is $(\,1,\, 1\,)$ w.p. $\frac{1}{2}$, and is $(\,2,\,2\,)$ w.p. $\frac{1}{2}.$ \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)\, =\, (\,2,\,2\,)\Rightarrow(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ is $(\,2,\,2\,)$ w.p. $\frac{1}{2}$, and is $(\,3,\,3\,)$ w.p. $\frac{1}{2}$. \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,) \,=\, (\,3,\,3\,)\Rightarrow(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ is $ (\,3,\,3\,)$ w.p. $\frac{1}{2}$, and is $(\,1,\,1\,)$ w.p. $\frac{1}{2}$. \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)\, =\, (\,1,\,2\,)\Rightarrow(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ is $(\,1,\,3\,)$ w.p. $\frac{1}{2}$, and is $(\,2,\,2\,)$ w.p. $\frac{1}{2}$. \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)\, =\, (\,1,\,3\,)\Rightarrow(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ is $(\,1,\,1\,)$ w.p.
$\frac{1}{2}$, and is $(\,2,\,3\,)$ w.p. $\frac{1}{2}$. \end{itemize} \item Choose a vertex that has the color ${I}_{t+1}$ in ${\sigma}_{t}$ uniformly. Then, change its color to ${J}_{t+1}$ in ${\sigma}_{t+1}$. \item Choose a vertex that has the color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$ uniformly. Then, change its color to $\tilde{J}_{t+1}$ in $\tilde{\sigma}_{t+1}$. \end{enumerate} Semi-synchronized coupling for the other cases can be defined in a similar manner. Let $\mathbb{P}^{SC}_{\sigma_{0},\tilde{\sigma}_{0}}$ be the underlying probability measure of this coupling, and $\mathbb{E}^{SC}_{\sigma_{0},\tilde{\sigma}_{0}}$ the expectation with respect to it. This coupling is constructed to obtain the following $\ell^{1}$-contraction result. \begin{prop}\label{2.3} Consider the semi-synchronized coupling of two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$. Then, the following equation holds: \begin{equation*} \mathbb{E}^{SC}_{\sigma_{0},\tilde{\sigma}_{0}}\Vert S_{t}- \tilde{S}_{t}\Vert_{1} \leq\Big(1-\frac{1}{2n}\Big)^{t}\,\Vert S_{0}-\tilde{S}_{0}\Vert_{1}. \end{equation*} \end{prop} \begin{proof} In the case of $S^{1}_{t}\geq\tilde{S}^{1}_{t}$, $S^{2}_{t}\leq \tilde{S}^{2}_{t}$, $S^{3}_{t} \leq \tilde{S}^{3}_{t}$, by the definition of the coupling, \begin{align*} \mathbb{E}^{SC}_{\sigma_{0},\tilde{\sigma}_{0}}\big[\,\Vert S_{t+1}- \tilde{S}_{t+1}\Vert_{1}-\Vert S_{t}- \tilde{S}_{t}\Vert_{1}\,\vert\,\mathcal{F}_{t}\,\big]\leq- \frac{1}{n}\,\big(\tilde{S}_{t}^{2}-S_{t}^{2}\big)- \frac{1}{n}\,\big(\tilde{S}_{t}^{3}-S_{t}^{3}\big)=- \frac{1}{2n}\,\Vert S_{t}-\tilde{S}_{t}\Vert_{1}. \end{align*} For the other cases, the above inequality can be obtained in a similar manner. Inductively using this inequality, the desired result is obtained. \end{proof} The following propositions bound the variance of $S_{t}$ using the contraction of the norm between two chains. Proposition \ref{2.4} presents the relation between the contraction and the variance of the chain; its only difference from \cite[Lemma 2.4]{1} is the coefficient $c>1$. \begin{prop}\cite[Lemma 2.4]{1}\label{2.4} Consider the Markov chain $(Z_{t})$ taking values in $\mathbb{R}^{d}$ with transition matrix $P$. When $Z_{0} = z$, let $\mathbb{P}_{z}$ and $\mathbb{E}_{z}$ be its probability measure and expectation, respectively. Then, if there exist $0<\rho<1$ and $c>1$ that satisfy $\Vert\,\mathbb{E}_{z}[Z_{t}]- \mathbb{E}_{\tilde{z}}[Z_{t}]\,\Vert_{2}\leq c \rho^{t}\,\Vert z- \tilde{z}\Vert_{2}$ for every pair of starting points $(z, \tilde{z})$, then \begin{equation*} v_{t}\,=\,\sup_{z_{0}}\, \mathbb{V}ar_{z_{0}}\,(Z_{t})\,=\,\sup_{z_{0}}\,\mathbb{E}_{z_{0}}\,\Vert\, Z_{t}- \mathbb{E}_{z_{0}}Z_{t}\,\Vert_{2}^{2} \end{equation*} satisfies \begin{equation*} v_{t}\, \leq\, c^{2}\, v_{1}\,\min\big\{\,t, \big(1-\rho^{2}\big)^{- 1}\big\}. \end{equation*} \end{prop} \begin{prop}\label{2.5} For the cyclic dynamics $(\sigma_{t})$ starting from $\sigma_{0}$, \begin{equation*} \mathbb{V}ar_{\sigma_{0}}(S_{t}) = O\big(n^{-1}\big).
\end{equation*} \end{prop} \begin{proof} By Proposition \ref{2.3}, as \begin{align*} \Vert\,\mathbb{E}_{\sigma_{0}}[S_{t}]- \mathbb{E}_{\tilde{\sigma}_{0}}[\tilde{S}_{t}]\,\Vert_{2}&\leq\Vert\,\mathbb{E}_{\sigma_{0}}[S_{t}]- \mathbb{E}_{\tilde{\sigma}_{0}}[\tilde{S}_{t}]\,\Vert_{1}\leq\mathbb{E}^{SC}_{\sigma_{0},\tilde{\sigma}_{0}}\,\Vert S_{t}- \tilde{S}_{t}\Vert_{1}\\&\leq \Big(1-\frac{1}{2n}\Big)^{t}\,\Vert S_{0}- \tilde{S}_{0}\Vert_{1}\leq \sqrt{3}\, \Big(1-\frac{1}{2n}\Big)^{t}\,\Vert S_{0}-\tilde{S}_{0}\Vert_{2}, \end{align*} letting $Z_{t}=S_{t}$, $\rho = 1 - \frac{1}{2n}$, and $c=\sqrt{3}$ satisfies the condition of Proposition \ref{2.4}. Because \begin{equation*} \mathbb{E}_{\sigma_{0}}\,\Vert S_{1} -\mathbb{E}_{\sigma_{0}}S_{1}\Vert^{2}_{2}\,\leq\, 2\, \mathbb{E}_{\sigma_{0}}\,\big(\,\Vert S_{1}-S_{0}\Vert^{2}_{2}+\Vert S_{0}- \mathbb{E}_{\sigma_{0}}S_{1}\Vert^{2}_{2}\,\big), \end{equation*} $v_{1} = O(n^{-2})$ is obtained. The proof is completed by Proposition \ref{2.4}. \end{proof} \subsection{Statistics of Stationary Distribution}\label{2.13} This section presents the proof that $\mu_{n}$ is the stationary distribution of the cyclic dynamics. Here, $\mu_{n}$ is the probability measure that is uniform on the state space $\Sigma_{n}$, i.e. \begin{equation*} \mu_{n}(\sigma) = \frac{1}{3^{n}}\quad \forall\, \sigma\in\Sigma_{n}. \end{equation*} The underlying probability measure, expectation, and variance are denoted as $\mathbb{P}_{\mu_{n}}$, $\mathbb{E}_{\mu_{n}}$, and $\mathbb{V}ar_{\mu_{n}}$, respectively. First, recall \cite[Corollary 1.17]{2}, which guarantees a unique stationary distribution for an irreducible Markov chain. \begin{prop}\cite[Corollary 1.17]{2} Let $P$ be the transition matrix of an irreducible Markov chain. Then, there exists a unique stationary distribution of the chain. \end{prop} Then, the product chain suggested in \cite[Section 12.4]{2} is introduced. For $j = 1,\dots,n$, consider the irreducible Markov chain $(Z^{j}_{t})$ on state space $\mathcal{X}_{j}$ having transition matrix $P_{j}$. Let $w=(w_{1},\dots,w_{n})$ be a probability distribution on $\{1,\dots,n\}$, where $0<w_{j}<1$. Define the product chain on state space $\mathcal{X} = \mathcal{X}_{1}\times\cdots\times\mathcal{X}_{n}$ having transition matrix $P$ as \begin{equation*} P(x, y) = \sum_{j=1}^{n}w_{j}P_{j}(x_{j},y_{j})\prod_{i:i\neq j}\textbf{1}_{\{x_{i}=y_{i}\}} \end{equation*} for the two states $x = (x_{1},\dots,x_{n}),\, y = (y_{1},\dots,y_{n})\in\mathcal{X}$. For the functions $f^{(1)},\dots,f^{(n)}$, where $f^{(j)}\colon \mathcal{X}_{j}\to\mathbb{R}$, define the product on $\mathcal{X}$ as \begin{equation*} f^{(1)} \otimes f^{(2)} \otimes\cdots\otimes f^{(n)}(x_{1},\dots,x_{n})=f^{(1)}(x_{1})\cdots f^{(n)}(x_{n}). \end{equation*} \begin{prop}\label{2.16} Consider the product chain of the Markov chains $(Z^{1}_{t}),\dots,(Z^{n}_{t})$ as above. For $j = 1,\dots,n$, let $\pi^{(j)}$ be the stationary distribution of the chain $(Z^{j}_{t})$. Then, $ \pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}$ is the stationary distribution of the product chain. \end{prop} \begin{proof} Because the product chain is irreducible, it suffices to prove that \begin{equation*} \pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}\, P = \pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}.
\end{equation*} Since \begin{align*} &\pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}\, P\, (y_{1},\dots,y_{n})\\&=\sum_{x\in\mathcal{X}}\, \pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}\,(x_{1},\dots,x_{n})\,\sum_{j=1}^{n}\,w_{j}\,P_{j}(x_{j},y_{j})\,\prod_{i:i\neq j}\textbf{1}_{\{x_{i}=y_{i}\}}\\ &=\sum_{j=1}^{n}w_{j}\,\prod_{i:i\neq j}\pi^{(i)}(y_{i})\,\sum_{x_{j}\in\mathcal{X}_{j}}P_{j}(x_{j},y_{j})\,\pi^{(j)}(x_{j})\\&=\pi^{(1)} \otimes\pi^{(2)}\otimes\cdots\otimes \pi^{(n)}(y_{1},\dots,y_{n}), \end{align*} this completes the proof. \end{proof} \begin{prop}\label{2.6} The probability measure $\mu_{n}$ is the unique stationary distribution of the cyclic dynamics $(\sigma^{n}_{t})$. \end{prop} \begin{proof} Note that the cyclic dynamics $(\sigma^{n}_{t})$ is irreducible for all $n$, and it has a unique stationary distribution. The unique stationary distribution of the cyclic dynamics $(\sigma^{1}_{t})$ is $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. Then, the cyclic dynamics $(\sigma^{n}_{t})$ may be considered as the product chain of $n$ copies of $(\sigma^{1}_{t})$, where $w=(\frac{1}{n},\dots,\frac{1}{n})$. Therefore, by Proposition \ref{2.16}, the proof is complete. \end{proof} Now, define the function $S\colon\Sigma_{n}\to\mathcal{S}_{n}$ as $S(\sigma) = \big(\,S^{1}(\sigma), S^{2}(\sigma), S^{3}(\sigma)\, \big) $, where \begin{equation*} S^{k}(\sigma) = \frac{1}{n}\sum_{v \in V_{n}}\,\textbf{1}_{\{ \sigma(v) = k\}} \quad k = 1,\, 2,\, 3. \end{equation*} Consider the case where $\sigma\in\Sigma_{n}$ is distributed according to the probability distribution $\mu_{n}$. Because $\sigma$ is uniformly distributed, $n\cdot S^{k}(\sigma)$ may be interpreted as the sum of $n$ independent variables having the value $1$ with probability $\frac{1}{3}$ and $0$ with probability $\frac{2}{3}$. Thus, $n\cdot S^{1}(\sigma),n\cdot S^{2}(\sigma),n\cdot S^{3}(\sigma)\sim \text{Bin}(n,\, \frac{1}{3})$ and \begin{gather*} \mathbb{E}_{\mu_{n}}S(\sigma)=\big(\,\mathbb{E}_{\mu_{n}}S^{1}(\sigma),\,\mathbb{E}_{\mu_{n}}S^{2}(\sigma),\,\mathbb{E}_{\mu_{n}}S^{3}(\sigma)\,\big) = \Big(\,\frac{1}{3},\,\frac{1}{3},\,\frac{1}{3}\,\Big)\\ \mathbb{V}ar_{\mu_{n}}S(\sigma) = \mathbb{V}ar_{\mu_{n}}S^{1}(\sigma)+ \mathbb{V}ar_{\mu_{n}}S^{2}(\sigma)+\mathbb{V}ar_{\mu_{n}}S^{3}(\sigma) = \frac{2}{3n}. \end{gather*} \subsection{Proof of Lower Bound}\label{2.14} This section presents the proof of the lower bound of the mixing time. The proof of the proposition is based on \cite[Section 4.1]{1}. Denote $t(n)=\frac{2}{3}n \log n$ and $t_{\gamma}(n)=\frac{2}{3}n\log n+\gamma n$. The main principle is to compare the probability of the event $\{\Vert\hat{S}_{t_{\gamma}(n)}\Vert_{2}<\frac{r}{\sqrt{n}}\}$ under the probability measure $\mathbb{P}_{\sigma_{0}}$ and under $\mu_{n}$. \begin{prop}\label{2.8} Consider the cyclic dynamics $(\sigma_{t})$ and a fixed constant $\epsilon>0$. Then, for all sufficiently large values of $n$, there exists $\gamma < 0$ with $-\gamma$ sufficiently large that satisfies $t^{(n)}_{\textnormal{mix}}(1-\epsilon) \geq t_{\gamma}(n).$ \end{prop} \begin{proof} Note that \begin{equation*} \Big(\,1-\frac{1}{x}\,\Big)^{x-1}\,>\,e^{-1}\,>\,\Big(\,1- \frac{1}{x}\,\Big)^{x} \end{equation*} holds for all $x>1$. Set the constant $0<\rho<\frac{2}{3}$. Then, a configuration $\sigma_{0}\in \Sigma_{n}$ is chosen such that it satisfies $\rho<\Vert\hat{S}_{0}\Vert_{2}$.
For $t \leq t_{\gamma}(n)$, \begin{equation*} \mathbb{E}_{\sigma_{0}}\Vert\hat{S}_{t}\Vert_{2}^{2}\, =\, \Big(1- \frac{3}{2n}\Big)^{t}\,\Vert\hat{S}_{0}\Vert_{2}^{2} \,+\,O\,\Big(\,\frac{1}{n}\,\Big)\,\geq\,\frac{1}{n}\,e^{-\gamma} \end{equation*} holds for all sufficiently large $n$ and $-\gamma$, the latter depending on $\rho$. In addition, because $\mathbb{V}ar_{\sigma_{0}}(S_{t}) = O(n^{-1})$ by Proposition \ref{2.5}, $\mathbb{V}ar_{\sigma_{0}}(\hat{S}_{t}) = O(n^{-1})$ holds. Then, \begin{equation*} \big(\,\mathbb{E}_{\sigma_{0}}\,\Vert\hat{S}_{t}\Vert_{2}\,\big)^{2}\, \geq\,\mathbb{E}_{\sigma_{0}}\,\Vert\hat{S}_{t}\Vert_{2}^{2} - \mathbb{V}ar_{\sigma_{0}}\big(\,\hat{S}_{t}\,\big)\,\geq\,\frac{1}{n}\,e^{-\gamma}-O\,\big(\,n^{-1}\,\big), \end{equation*} and it implies that for all sufficiently large $n$ and $-\gamma$, \begin{equation*} \mathbb{E}_{\sigma_{0}}\,\Vert\hat{S}_{t}\Vert_{2}\,\geq\,\frac{1}{\sqrt{n}}\,e^{-\frac{\gamma}{3}}. \end{equation*} For $0<r<e^{-\frac{\gamma}{3}}$ and $t \leq t_{\gamma}(n)$, by Chebyshev's inequality and Proposition \ref{2.5}, \begin{align*} \mathbb{P}_{\sigma_{0}}\Big(\,\Vert\hat{S}_{t}\Vert_{2}\,<\,\frac{r}{\sqrt{n}}\,\Big) \leq& \,\mathbb{P}_{\sigma_{0}}\,\Big(\,\mathbb{E}_{{\sigma}_{0}}\,\Vert\hat{S}_{t}\Vert_{2}\,-\,\Vert\hat{S}_{t}\Vert_{2}\, >\, \frac{1}{\sqrt{n}}\,e^{-\frac{\gamma}{3}}\,-\,\frac{r}{\sqrt{n}}\,\Big) \\ \leq&\, \frac{\mathbb{V}ar_{\sigma_{0}}\,\big(\,\hat{S}_{t}\,\big)}{\big(\,\frac{1}{\sqrt{n}}\,e^{-\frac{\gamma}{3}}- \frac{r}{\sqrt{n}}\,\big)^{2}} = O\big(\,(\,e^{-\frac{\gamma}{3}}-r\,)^{-2}\,\big). \end{align*} It follows that \begin{equation*} \lim_{\gamma\rightarrow- \infty}\,\limsup_{n\rightarrow\infty}\,\mathbb{P}_{\sigma_{0}}\Big(\,\Vert\hat{S}_{t_{\gamma}(n)}\Vert_{2}\,<\,\frac{r}{\sqrt{n}}\,\Big) = 0. \end{equation*} \par Now, consider the cyclic dynamics $(\sigma_{t})$ where $\sigma_{0}$ follows the probability distribution $\mu_{n}$. By the properties in Section \ref{2.13} and an application of Chebyshev's inequality, \begin{equation*} \mu_{n} \Big(\, \Vert\hat{S}_{t}\Vert_{2}\,<\,\frac{r}{\sqrt{n}}\,\Big)\,\geq\,1\,- \,\frac{O(1)}{r^{2}} \end{equation*} for all $t \geq 0.$ It can be concluded that for all $r>0$, \begin{equation*} \lim_{\gamma\rightarrow- \infty}\,\liminf_{n\rightarrow\infty}\,d^{(n)}(t_{\gamma}(n))\,\geq\,1\,- \,\frac{O(1)}{r^{2}}. \end{equation*} Then, let $r \rightarrow \infty$. Hence, the proof is complete. \end{proof} \section{Upper Bound}\label{3} This section presents the proof of the upper bound of the mixing time. In \cite{1}, it is observed that the upper bound of the cutoff essentially follows from a precise bound on the coalescing time of two basket chains. In Section \ref{3.21}, semi-coordinatewise coupling is used to analyze the coalescence of the proportion chains. In Section \ref{3.22}, the basket chain is introduced, and basketwise coupling is used to analyze the coalescence of the basket chains. Based on those analyses, the upper bound of the cutoff is obtained, as presented in Section \ref{3.44}. \par The following proposition describes the range of $S_{t}$ at $t=t(n)$. It bounds the $\ell^{\infty}$-norm between $S_{t}$ and $\bar{e}$ on the $n^{-\frac{1}{2}}$ scale. \begin{prop}\label{3.1} Consider the cyclic dynamics $(\sigma_{t})$ starting at $\sigma_{0}$ and its proportion chain $(S_{t})$.
For all $r > 0$ and $\sigma_{0}\in\Sigma_{n}$, it holds that \begin{equation*} \mathbb{P}_{\sigma_{0}}\,\Big(\,S_{t(n)}\,\notin\, \mathcal{S}^{\frac{r}{\sqrt{n}}}\,\Big) = O\,\big(\,r^{-1}\,\big). \end{equation*} \end{prop} \begin{proof} By Proposition \ref{2.2} and a first moment argument, \begin{align*} \mathbb{P}_{\sigma_{0}}\,\Big(\,S_{t(n)}\,\notin\, \mathcal{S}^{\frac{r}{\sqrt{n}}}\,\Big)\,&\leq\,\mathbb{P}_{\sigma_{0}}\, \Big(\,\Vert\hat{S}_{t(n)}\Vert_{2}\,\geq\, rn^{-\frac{1}{2}}\,\Big)\\ &\leq\,\frac{\mathbb{E}_{\sigma_{0}}\,\Vert\hat{S}_{t(n)}\Vert_{2}}{rn^{- \frac{1}{2}}}\,\leq\,\frac{\big(\,\mathbb{E}_{\sigma_{0}}\,\Vert\hat{S}_{t(n)}\Vert^{2}_{2}\,\big)^{\frac{1}{2}}}{rn^{- \frac{1}{2}}}\,=\,O\,\Big(\frac{1}{r}\Big). \end{align*} \end{proof} \subsection{Coalescing Proportion Chains}\label{3.21} For the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, the proportion chains $S_{t}$ and $\tilde{S}_{t}$ are made to coalesce with high probability. First, $\hat{S}_{t}$ is bounded on the $n^{-\frac{1}{2}}$ scale. Then, $S_{t}$ and $\tilde{S}_{t}$ are matched via coupling under this condition. \subsubsection{Preliminaries} Here, the two well-known theorems used in the current section are introduced. \begin{prop} \cite[Lemma 2.1 (2)]{1}\label{3.2} Consider the discrete time process $(X_{t})_{t\geq0}$ adapted to filtration $(\mathcal{F}_{t})_{t\geq 0}$ that starts at $x_{0}\in\mathbb{R}$. Let the underlying probability measure be $\mathbb{P}_{x_{0}}$, and let $\tau_{x}^{+}=\inf\{t:X_{t}\geq x\}.$ Then, if $(X_{t})$ satisfies the three conditions below, the following statement holds: \begin{enumerate}[label=(\alph*)] \setlength\itemsep{3pt} \item$\exists\,\delta\geq0 \colon\mathbb{E}_{x_{0}}\,[\,X_{t+1}- X_{t}\,\vert\,\mathcal{F}_{t}\,]\,\leq\,-\delta$ on $ \{\,X_{t}\geq0\,\}$ for all $t\geq 0.$ \item$\exists\, R > 0\colon \vert\, X_{t+1}-X_{t}\,\vert\,\leq\, R,\, \forall\, t\geq0.$ \item $X_{0} = x_{0}.$ \end{enumerate} If $x_{0}\leq0$, then for $x_{1}>0$ and $t_{2}\geq0,$ \begin{equation*} \mathbb{P}_{x_{0}}\,\big(\,\tau_{x_1}^{+}\,\leq\,{t_{2}}\,\big)\,\leq\,2\, \exp\Big\{-\frac{(x_{1}-R)^{2}}{8t_{2}R^{2}}\Big\}. \end{equation*} \end{prop} \begin{prop}\cite[Lemma 2.3]{1}\label{3.4} Suppose that the non-negative discrete time process $(Z_{t})_{t\geq0}$ adapted to $(\mathcal{G}_{t})_{t\geq0}$ is a supermartingale. Let $N$ be a stopping time. If $(Z_{t})$ satisfies the three conditions below: \begin{enumerate}[label=(\alph*)] \setlength\itemsep{3pt} \item$Z_{0} = z_{0}$ \item$\vert\, Z_{t+1}-Z_{t}\,\vert \,\leq\, B$ \item$\exists \sigma>0$ such that $ \mathbb{V}ar\,(\,Z_{t+1}\,\vert\,\mathcal{G}_{t}\,)\,>\,\sigma^{2}$ on the event $\{\,N>t\,\}$, \end{enumerate} and $u\,>\,4B^{2}\,/\,(3\sigma^{2})$, then $\mathbb{P}_{z_{0}}\,(N>u)\,\leq\,4z_{0}\,/\,(\sigma\sqrt{u})$. \end{prop} \subsubsection{Restriction of Proportion Chain} \begin{prop}\label{3.3} Consider the cyclic dynamics $(\sigma_{t})$ and its proportion chain $(S_{t})$. For fixed constants $r_{0}>0$ and $\gamma>0$, there exist $C, c>0$ such that the following statement holds: \\For all sufficiently large $n$ and $r>\max\,\{\,3r_{0},2\,\}$, let $t = \gamma n$, $\rho_{0} = \frac{r_{0}}{\sqrt{n}}$ and $\rho = \frac{r}{\sqrt{n}}$.
Then, \begin{equation*} \mathbb{P}_{\sigma_{0}}\,\big(\,\exists\, 0\leq u \leq t: S_{u} \notin \mathcal{S}_{n}^{\rho+}\,\big)\,\leq\, Ce^{-cr^{2}} \end{equation*} holds for all $\sigma_{0}$ such that $S_{0}\in \mathcal{S}_{n}^{\rho_{0}+}.$ \end{prop} \begin{proof} Let the discrete time process $(Z_{t})$ with respect to the cyclic dynamics $(\sigma_{t})$ be \begin{equation*} Z_{t} =\big(\,S_{t}^{1}\,\big)^{2}+\big(\,S_{t}^{2}\,\big)^{2}+\big(\,S_{t}^{3}\,\big)^{2}- \max\Big\{\,\big(\,S_{0}^{1}\,\big)^{2}+\big(\,S_{0}^{2}\,\big)^{2}+\big(\,S_{0}^{3}\,\big)^{2},\, \frac{2}{3n}+\frac{1}{3}\,\Big\}, \end{equation*} and let the time $T=\min\,\{\,t:S_{t} \notin \mathcal{S}_{n}^{\rho+}\,\}$. Then, define the discrete time process $(X_{t})$, where $X_{t}\,=\,Z_{t}$ if $t<T$, and $X_{t} \,=\, Z_{T}$ if $t\geq T$.\par Now, consider the cyclic dynamics $(\sigma_{t})$ that satisfy $S_{0}\in \mathcal{S}_{n}^{\rho_{0}+}$. We will prove that $(X_{t})$ satisfies the three conditions of Proposition \ref{3.2}. For the first condition, if $t\leq T-1$, \begin{equation*} \mathbb{E}\,[\,X_{t+1}-X_{t}\,\vert\,\mathcal{F}_{t}\,] = \frac{3}{2n}\,\Big[\,\frac{2}{3n}+\frac{1}{3}- \big(\,S_{t}^{1}\,\big)^{2}-\big(\,S_{t}^{2}\,\big)^{2}- \big(\,S_{t}^{3}\,\big)^{2}\,\Big]. \end{equation*} Further, it is at most $0$ if $X_{t}\geq0.$ If $t\geq T$, $X_{t+1}=X_{t}$, and thus, the first condition holds. For the second condition, if $t\leq T-1$, \begin{equation*} \vert\, X_{t+1}- X_{t}\,\vert\,\leq\,\frac{6r}{n\sqrt{n}}+\frac{2}{n^{2}}.\end{equation*} If $t\geq T$, $X_{t+1}=X_{t}$, and therefore, the second condition holds. We obtain $x_{0}=X_{0}\leq 0.$ Therefore, letting $x_1=\frac{3r^{2}}{2n}- \max\,\{\frac{2}{3n},\frac{6r_{0}^{2}}{n}\}>0$ and $R =\frac{6r}{n\sqrt{n}}+\frac{2}{n^{2}}$, \begin{align*} \mathbb{P}_{\sigma_{0}}\,\big(\,\exists\, 0\leq u \leq t: S_{u} \notin \mathcal{S}_{n}^{\rho+}\,)\,\leq\, \mathbb{P}_{x_{0}}\,\big(\,\tau_{x_1}^{+}\leq{t}\,\big)\,\leq\,2\,\exp\,\Big\{-\frac{(x_{1}-R)^{2}}{8tR^{2}}\Big\}, \end{align*} and there exist $C$ and $c$ that satisfy $2\,\exp\,\{-\frac{(x_{1}-R)^{2}}{8tR^{2}}\}\leq Ce^{-cr^{2}}.$ \end{proof} In Proposition \ref{3.5}, we bound the $\ell^{1}$-norm between $S_{t}$ and $\tilde{S}_{t}$ on the $n^{-1}$ scale. The coupling is used to make the norm a supermartingale, and Proposition \ref{3.4} provides a bound on the time needed for the norm to become small. \subsubsection{Semi-Coordinatewise Coupling} This section introduces the semi-coordinatewise coupling of two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$. This type of coupling is used in Proposition \ref{3.14} to prove that $\Vert S_{t}- \tilde{S}_{t}\Vert_{1}$ is a supermartingale. \par First, the semi-independent and coordinatewise couplings are defined as in \cite{1}. For two probability distributions $\nu$ and $\tilde{\nu}$ on $\Omega=\{1,2,3\}$ and $i\in\{1,2,3\}$, the $\{i\}$-semi-independent coupling of $\nu$ and $\tilde{\nu}$ is a pair of random variables $(X,\tilde{X})$ on $\Omega\times\Omega$ constructed as follows: \begin{enumerate} \setlength\itemsep{3pt} \item Pick $U$ uniformly on [0, 1]. \item If $U\leq\min\big(\,\nu(i), \tilde{\nu}(i)\,\big)$, choose $(X, \tilde{X})$ as $(i, i)$. \item If $U>\min\big(\,\nu(i), \tilde{\nu}(i)\,\big)$, choose $X$ and $\tilde{X}$ independently according to the following rules: In the case of $X$, if $U<\nu(i)$, choose $i$.
Otherwise, choose $i+1$ with probability $\frac{\nu(i+1)}{\nu(i+1)+\nu(i+2)}$, and choose $i+2$ with probability $\frac{\nu(i+2)}{\nu(i+1)+ \nu(i+2)}.$ In the case of $\tilde{X}$, if $U<\tilde{\nu}(i)$, choose $i$. Otherwise, choose $i+1$ with probability $\frac{\tilde{\nu}(i+1)}{\tilde{\nu}(i+1)+\tilde{\nu}(i+2)}$, and choose $i+2$ with probability $\frac{\tilde{\nu}(i+2)}{\tilde{\nu}(i+1)+\tilde{\nu}(i+2)}.$ \end{enumerate} It is evident from the construction that the random variables $X$ and $\tilde{X}$ follow the distributions $\nu$ and $\tilde{\nu}$, respectively. Now, the $\{i\}$-coordinatewise coupling of two cyclic dynamics is defined as follows: \begin{enumerate} \setlength\itemsep{3pt} \item Choose two colors $I_{t+1}$ and $\tilde{I}_{t+1}$ based on the $\{i\}$-semi-independent coupling of $S_{t}$ and $\tilde{S}_{t}.$ \item Choose two colors $J_{t+1}$ and $\tilde{J}_{t+1}$ based on the $\{i\}$-semi-independent coupling of the distributions $\nu$ and $\tilde{\nu}$ given by $\nu(I_{t+1})=\frac{1}{2}$, $\nu(I_{t+1}+1)=\frac{1}{2}$, $\nu(I_{t+1}+2)=0$, and $\tilde{\nu}(\tilde{I}_{t+1})=\frac{1}{2}$, $\tilde{\nu}(\tilde{I}_{t+1}+1)=\frac{1}{2}$, $\tilde{\nu}(\tilde{I}_{t+1}+2)=0$. \item Uniformly choose the vertex of color $I_{t+1}$ in $\sigma_{t}$ and change its color to $J_{t+1}$, and uniformly choose the vertex of color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$ and change its color to $\tilde{J}_{t+1}.$ \end{enumerate} Finally, the \textit{semi-coordinatewise coupling} of two cyclic dynamics is defined as follows: \begin{enumerate} \setlength\itemsep{3pt} \item $\min\big(\,\vert S_{t}^{1}-\tilde{S}_{t}^{1}\vert ,\,\vert S_{t}^{2}-\tilde{S}_{t}^{2}\vert,\,\vert S_{t}^{3}- \tilde{S}_{t}^{3}\vert\, \big)\geq\frac{2}{n}$: Update the chains independently. \item$\min\big(\,\vert S_{t}^{1}-\tilde{S}_{t}^{1}\vert ,\,\vert S_{t}^{2}-\tilde{S}_{t}^{2}\vert,\,\vert S_{t}^{3}- \tilde{S}_{t}^{3}\vert\, \big)=\frac{1}{n}$: For this case, there exists $i\in\{1,2,3\}$ such that $\vert S_{t}^{i}-\tilde{S}_{t}^{i}\vert =\frac{1}{n}$. Choose the smallest $i$ satisfying the condition, and update the chains based on the $\{i\}$-coordinatewise coupling. \item$\min\big(\,\vert S_{t}^{1}-\tilde{S}_{t}^{1}\vert ,\,\vert S_{t}^{2}-\tilde{S}_{t}^{2}\vert,\,\vert S_{t}^{3}- \tilde{S}_{t}^{3}\vert\,\big)=0$: Find $i\in\{1, 2, 3\}$ such that $\vert S_{t}^{i}- \tilde{S}_{t}^{i}\vert =0$. It may be assumed that $S_{t}^{i+1}\geq\tilde{S}_{t}^{i+1}$. Then, \begin{enumerate} \setlength\itemsep{3pt} \item Choose the colors $(\,I_{t+1},\, \tilde{I}_{t+1}\,)$ based on the probability, as stated below: \begin{equation*} (\,I_{t+1},\, \tilde{I}_{t+1}\,)\,= \begin{cases} (\,i,\,i\,)&\text{ w.p. } S_{t}^{i}\,=\,\tilde{S}_{t}^{i}\\ (\,i+1,\,i+1\,)&\text{ w.p. } \min\,(\,S_{t}^{i+1},\,\tilde{S}_{t}^{i+1}\,)\\ (\,i+2,\,i+2\,)&\text{ w.p. } \min\,(\,S_{t}^{i+2},\,\tilde{S}_{t}^{i+2}\,)\\ (\,i+1,\,i+2\,)&\text{ w.p. } S_{t}^{i+1}-\tilde{S}_{t}^{i+1}. \end{cases} \end{equation*} \item Choose the colors $(\,J_{t+1},\, \tilde{J}_{t+1}\,)$ according to the discrete random variable $X$ dependent on $(\,I_{t+1},\, \tilde{I}_{t+1}\,)$, as stated below: \begin{itemize}[leftmargin=*] \setlength\itemsep{3pt} \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)= (\,i,\,i\,)$: Select $X$ uniformly from $\{\,(\,i,\,i\,),\,(\,i+1,\,i+1\,)\,\}$. \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)=(\,i+1,\,i+1\,)$: Select $X$ uniformly from $\{\,(\,i+1,\,i+1\,),\, (\,i+1,\, i+2\,),\,(\,i+2, \,i+1\,),\,(\,i+2,\,i+2\,)\,\}$.
\item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)=(\,i+2,\,i+2\,)$: Select $X$ uniformly from $\{\,(\,i,\,i\,),\,(\,i+2,\,i+2\,)\,\}$. \item$(\,I_{t+1},\, \tilde{I}_{t+1}\,)=(\,i+1,\,i+2\,)$: Select $X$ uniformly from $\{\,(\,i+1,\,i\,),\,(\,i+2,\,i+2\,)\,\}$. \end{itemize} \item Choose a vertex uniformly that has the color ${I}_{t+1}$ in ${\sigma}_{t}$. Then, change its color to ${J}_{t+1}$ in ${\sigma}_{t+1}$. \item Choose a vertex uniformly that has the color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$. Then, change its color to $\tilde{J}_{t+1}$ in $\tilde{\sigma}_{t+1}$. \end{enumerate} The coupling for the case of $S_{t}^{i+1}\leq \tilde{S}_{t}^{i+1}$ can be similarly defined. \end{enumerate} Let $\mathbb{P}_{\sigma_{0}, \tilde{\sigma}_{0}}^{CC}$ be the underlying measure, and $\mathbb{E}_{\sigma_{0}, \tilde{\sigma}_{0}}^{CC}$ the expectation under the semi-coordinatewise coupling. This coupling is used to prove that the $\ell^{1}$-norm between two proportion chains is a supermartingale. It is also used to guarantee a lower bound on its variance. \begin{prop}\label{3.14} Consider the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, and the corresponding proportion chains $(S_{t})$ and $(\tilde{S}_{t})$. Let $\mathbf{d}_{t} = \Vert S_{t+1}- \tilde{S}_{t+1}\Vert_{1}-\Vert S_{t}- \tilde{S}_{t}\Vert_{1}$. Suppose that $\Vert S_{t}-\tilde{S}_{t}\Vert_{1}\geq \frac{10}{n}$ for some $t\geq0$. If the semi-coordinatewise coupling is applied in the following step, then \begin{equation*} \mathbb{E}^{CC}_{\sigma_{0}, \tilde{\sigma}_{0}}\big[\,\mathbf{d}_{t}\,\vert\,\mathcal{F}_{t}\,\big]\leq 0. \end{equation*} \end{prop} \begin{proof} The proposition will be proved based on the definition of the coupling.\par First, if $\min\,\big(\,|S_{t}^{1}-\tilde{S}_{t}^{1}| ,\,|S_{t}^{2}- \tilde{S}_{t}^{2}|,\,|S_{t}^{3}-\tilde{S}_{t}^{3}|\,\big)\geq\frac{2}{n}$, the sign of $S_{t}^{i}-\tilde{S}_{t}^{i}$ does not change in the next step, for any $i\in \{1,2,3\}$. Consider the case of $S_{t}^{1}\geq \tilde{S}_{t}^{1}$, $S_{t}^{2}\geq \tilde{S}_{t}^{2}$, $S_{t}^{3}\leq \tilde{S}_{t}^{3}$. Then, \begin{align*} \mathbb{E}\big[\,\textbf{d}_{t}\,\big] \,=\,\frac{1}{n}\,\big(\,S_{t}^{3}- \tilde{S}_{t}^{3}+\tilde{S}_{t}^{2}-S_{t}^{2}\,\big)\,\leq\, 0. \end{align*} The result for the other cases can be similarly obtained.\par Second, if $\min\,\big(\,|S_{t}^{1}-\tilde{S}_{t}^{1}| ,\,|S_{t}^{2}- \tilde{S}_{t}^{2}|,\,|S_{t}^{3}- \tilde{S}_{t}^{3}|\,\big)\,=\,\frac{1}{n}$, there exists a unique value of $i$ that satisfies $|S_{t}^{i}-\tilde{S}_{t}^{i}|=\frac{1}{n}$, because $\Vert S_{t}-\tilde{S}_{t}\Vert_{1}\geq \frac{10}{n}$. Consider the case of $S_{t}^{1}-\tilde{S}_{t}^{1}= \frac{1}{n}$, $S_{t}^{2}\geq \tilde{S}_{t}^{2}$, $S_{t}^{3}\leq \tilde{S}_{t}^{3}$. Based on the definition of coordinatewise coupling, \begin{enumerate} \setlength\itemsep{3pt} \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,1,\, 1\,)$: $\textbf{d}_{t}$ is $0$. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,1,\, 2\,)$: $\textbf{d}_{t}$ is $\frac{2}{n}$ w.p. $\frac{\tilde{S}_{t}^{2}}{2n(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})}$ and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,1,\, 3\,)$: $\textbf{d}_{t}$ is $- \frac{2}{n}$ w.p. $\frac{\tilde{S}_{t}^{3}}{2n(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})}$ and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,2,\, 2\,)$: $\textbf{d}_{t}$ is $\frac{2}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{2}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, is $-\frac{2}{n}$ w.p.
$\frac{S_{t}^{2}\tilde{S}_{t}^{2}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,2,\, 3\,)$: $\textbf{d}_{t}$ is $- \frac{4}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{3}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, is $-\frac{2}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{3}}{2(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,3,\, 2\,)$: $\textbf{d}_{t}$ is $\frac{4}{n}$ w.p. $\frac{S_{t}^{3}\tilde{S}_{t}^{2}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, is $\frac{2}{n}$ w.p. $\frac{S_{t}^{3}\tilde{S}_{t}^{2}}{2(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,3,\, 3\,)$: $\textbf{d}_{t}$ is $0$. \end{enumerate} Summing these results, it can be proved that the expectation is at most zero. Then, consider the case of $S_{t}^{1}-\tilde{S}_{t}^{1}= \frac{1}{n}$, $S_{t}^{2}\leq \tilde{S}_{t}^{2}$, $S_{t}^{3}\geq \tilde{S}_{t}^{3}$. Based on the definition of coordinatewise coupling, \begin{enumerate} \setlength\itemsep{3pt} \item $(\,I_{t+1}, \,\tilde{I}_{t+1}\,)\,=\,(\,1,\, 1\,)$: $\textbf{d}_{t}$ is $0$. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,1,\, 2\,)$: $\textbf{d}_{t}$ is $- \frac{4}{n}$ w.p. $\frac{\tilde{S}_{t}^{2}}{4n(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})}$, is $- \frac{2}{n}$ w.p. $\frac{\tilde{S}_{t}^{2}}{2n(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})}$, and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,1,\, 3\,)$: $\textbf{d}_{t}$ is $- \frac{2}{n}$ w.p. $\frac{\tilde{S}_{t}^{3}}{2n(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})}$ and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,2,\, 2\,)$: $\textbf{d}_{t}$ is $\frac{2}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{2}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, is $-\frac{2}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{2}}{4(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $, and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,2,\, 3\,)$: $\textbf{d}_{t}$ is $\frac{2}{n}$ w.p. $\frac{S_{t}^{2}\tilde{S}_{t}^{3}}{2(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $ and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,3,\, 2\,)$: $\textbf{d}_{t}$ is $- \frac{2}{n}$ w.p. $\frac{S_{t}^{3}\tilde{S}_{t}^{2}}{2(\tilde{S}_{t}^{2}+\tilde{S}_{t}^{3})} $ and is $0$ otherwise. \item $(\,I_{t+1},\, \tilde{I}_{t+1}\,)\,=\,(\,3,\, 3\,)$: $\textbf{d}_{t}$ is $0$. \end{enumerate} Summing these results, it can be proved that the expectation is at most zero. The result for the other cases can be similarly obtained.\par Third, if $\min\,\big(\,|S_{t}^{1}-\tilde{S}_{t}^{1}|,\,|S_{t}^{2}- \tilde{S}_{t}^{2}|,\,|S_{t}^{3}-\tilde{S}_{t}^{3}|\,\big)=0$, consider the case of $S_{t}^{1} = \tilde{S}_{t}^{1}$, $S_{t}^{2}\geq \tilde{S}_{t}^{2}$, $S_{t}^{3}\leq \tilde{S}_{t}^{3}$. Since $\Vert S_{t}-\tilde{S}_{t}\Vert_{1}\geq \frac{10}{n}$, we obtain $S_{t}^{2}> \tilde{S}_{t}^{2}$, $S_{t}^{3}< \tilde{S}_{t}^{3}$. Then, \begin{align*} \mathbb{E}\big[\,\textbf{d}_{t}\,\big]=-\frac{1}{n}\,\big(\,S_{t}^{2}- \tilde{S}_{t}^{2}\,\big)\leq 0. \end{align*} The result for the other cases can be similarly obtained. \end{proof} Consider the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$ and their proportion chains. Define the time $T_{1} = \min\,\{\,t:\Vert S_{t}-\tilde{S}_{t}\Vert_{1}<\frac{10}{n}\,\}$. It is proved that $T_{1}$ is bounded with high probability in the next proposition.
Moreover, if $S_{0}$ and $\tilde{S}_{0}$ are bounded on the $n^{-\frac{1}{2}}$ scale, then the chains remain bounded up to time $T_{1}$ with high probability. \begin{prop}\label{3.5} Consider the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$ that satisfy $S_{0}, \tilde{S}_{0}\in \mathcal{S}_{n}^{\frac{r_{0}}{\sqrt{n}}}$ for some $r_{0}>0$. For a fixed value of $\epsilon>0$, the following statement holds:\par There exist constants $\gamma, r>0$ such that \begin{equation*} \mathbb{P}_{\sigma_{0}, \tilde{\sigma}_{0}}^{CC}\,\Big(\,T_{1}<\gamma n,\,\max\big(\,\Vert S_{t}-\bar{e}\Vert_{2},\, \Vert\tilde{S}_{t}- \bar{e}\Vert_{2}\,\big)<\frac{r}{\sqrt{n}}\,\, \forall\, t\leq T_{1}\, \Big)\,\geq\, 1-\epsilon \end{equation*} holds for all sufficiently large $n$ with $n>100r^{2}$. \end{prop} \begin{proof} Denote the time $T_{2}\,(r) = \min\,\{\,t:\max\big(\,\Vert S_{t}- \bar{e}\Vert_{2},\, \Vert\tilde{S}_{t}- \bar{e}\Vert_{2}\,\big)\geq\frac{r}{\sqrt{n}}\,\}$ and the stopping time $N(r)= T_{1}\wedge T_{2}\,(r).$ The two cyclic dynamics are iterated under the semi-coordinatewise coupling. Then, by Proposition \ref{3.14}, if $t < N(r)$, $\Vert S_{t}- \tilde{S}_{t}\Vert_{1}$ is a supermartingale. In addition, if $n > 100r^{2}$, the conditional variance of $\Vert S_{t}-\tilde{S}_{t}\Vert_{1}$ is larger than $C n^{-2}$ for some constant $C$ that is independent of $r$. By Proposition \ref{3.4}, for a sufficiently large $\gamma$ independent of $r$, it can be shown that \begin{equation*} \mathbb{P}_{\sigma_{0}, \tilde{\sigma}_{0}}^{CC}\,\big(\,N(r)>\gamma n\,\big)\leq \frac{\epsilon}{2} \end{equation*} holds for all sufficiently large $n$ with $n>100r^{2}$.\par Now, by Proposition \ref{3.3}, it can be concluded that \begin{equation*} \mathbb{P}^{CC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,T_{2}\,(r) \leq\gamma n\,\big)\leq\frac{\epsilon}{2} \end{equation*} for a sufficiently large $r$ depending on $\gamma$. Combining these inequalities, the desired result can be obtained. \end{proof} Because the $\ell^{1}$-norm between the two proportion chains is bounded, the coalescence of these chains can be proved with the semi-synchronized coupling. \begin{prop}\label{3.6} Consider two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, where $\Vert S_{0}-\tilde{S}_{0}\Vert_{1}<\frac{10}{n}$. For a fixed constant $\epsilon>0$, there exists a sufficiently large $\gamma > 0$ such that if $t\geq\gamma n,$ \begin{equation*} \mathbb{P}^{SC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,S_{t}=\tilde{S}_{t}\,\big)\,\geq\,1-\epsilon. \end{equation*} \end{prop} \begin{proof} By Proposition \ref{2.3}, \begin{equation*} \mathbb{E}^{SC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\Vert S_{t}- \tilde{S}_{t}\Vert_{1}\,\leq\,\Big(\,1-\frac{1}{2n}\,\Big)^{t}\,\Vert S_{0}-\tilde{S}_{0}\Vert_{1}\,\leq\,\Big(\,1- \frac{1}{2n}\,\Big)^{t}\,\frac{10}{n}. \end{equation*} Then, there exists a sufficiently large $\gamma$ that satisfies \begin{equation*} \mathbb{E}^{SC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\Vert S_{t}- \tilde{S}_{t}\Vert_{1}\,\leq\,\frac{\epsilon}{n} \end{equation*} for all $t\geq\gamma n.$ Therefore, by Markov's inequality, \begin{equation*} \mathbb{P}^{SC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,S_{t}\neq\tilde{S}_{t}\,\big)\,=\,\mathbb{P}^{SC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\Big(\,\Vert S_{t}- \tilde{S}_{t}\Vert_{1}\,\geq\,\frac{1}{n}\,\Big)\,\leq\, \epsilon.
\end{equation*} \end{proof} \subsection{Coalescing Basket Chains}\label{3.22} For the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, the basket chains are made to coalesce with high probability. The theorems and proofs presented throughout this section are similar to \cite{1}; however, because the conditions are slightly different, detailed analyses are provided here for completeness.\par First, $\mathcal{B}$ is defined as a $3$-partition of $V_{n}=\{1,\dots,n\}$, and we write $\mathcal{B} = (\mathcal{B}_{m})_{m=1}^{3}$. The sets $\mathcal{B}_{m}$ are called baskets, and $\mathcal{B}$ is called a $\lambda$-partition if $\vert\mathcal{B}_{m}\vert>\lambda n$ holds for all $m$. For the configuration $\sigma\in\Sigma_{n}$, a $3\times3$ matrix $\mathbf{S}(\sigma)$ representing the proportions of each color within each basket is defined, i.e. \begin{equation*} \mathbf{S}^{m, k}\,(\sigma) \,=\, \frac{1}{\vert\mathcal{B}_{m}\vert}\,\sum_{v\in\mathcal{B}_{m}}\textbf{1}_{\{\sigma(v) = k\}}\quad m,\, k\,\in\, \{\,1,\, 2, \,3\,\}. \end{equation*} The \textit{basket chain} $(\mathbf{S}_{t})$ of the cyclic dynamics $(\sigma_{t})$ is defined as $\mathbf{S}_{t}=\mathbf{S}(\sigma_{t})$. Note that the basket chain is a Markov chain.\par Recalling the sets $\mathcal{S}$, $\mathcal{S}^{\rho}$, $\mathcal{S}^{\rho+}$ in Section \ref{2.21}, $\mathbb{S}$ is defined as the set of $3\times3$ matrices whose rows all lie in $\mathcal{S}$. This set is denoted as $\prod_{m=1}^{3}\mathcal{S}$, and the sets $\mathbb{S}^{\rho}=\prod_{m=1}^{3}\mathcal{S}^{\rho}$ and $\mathbb{S}^{\rho+}=\prod_{m=1}^{3}\mathcal{S}^{\rho+}$ are similarly defined. \par Proposition \ref{3.7} provides the contraction of the basket chains. Proposition \ref{3.8} bounds the range of the basket chains. \begin{prop}\label{3.7} Suppose that $\mathcal{B}$ is a $\lambda$-partition for some $\lambda>0$. Consider the basket chain $(\mathbf{S}_{t})$ and the proportion chain $(S_{t})$ of the cyclic dynamics $(\sigma_{t})$, and define the $3\times3$ matrix $\mathbf{Q}_{t}$ as $\mathbf{Q}^{m,k}_{t}=\mathbf{S}^{m,k}_{t}- S^{k}_{t}.$ Then, \begin{equation*} \mathbb{E}_{\sigma_{0}}\big[\,\big(\,\textbf{Q}^{m,k}_{t+1}\,\big)^{2}\,\big]\,=\,\Big(\,1\,-\,\frac{1}{n}\,\Big)\,\mathbb{E}_{\sigma_{0}}\big[\,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\big]\,+\,\frac{1}{n}\,\mathbb{E}_{\sigma_{0}}\big[\,\textbf{Q}^{m,k-1}_{t}\,\textbf{Q}^{m,k}_{t}\,\big]\,+\,O\,\Big(\,\frac{1}{n^{2}}\,\Big). \end{equation*} \end{prop} \begin{proof} Define $\lambda_{0}$ as the ratio of the number of vertices in the basket $\mathcal{B}_{m}$ to the total number of vertices, i.e. $\lambda_{0}\,=\,\vert\mathcal{B}_{m}\vert\,/\,n\,> \,\lambda\,>\,0$.
Then, \begin{align*} &\mathbb{E}_{\sigma_{0}}\big[\,\big(\,\textbf{Q}^{m,k}_{t+1}\,\big)^{2}\,- \,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\vert\,\mathcal{F}_{t}\,\big]\\ &=\,p_{1}\,\Big[\,\Big(\,\textbf{Q}^{m,k}_{t}\,+\,\frac{1}{n}\,\Big)^{2}\,- \,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big]\,+\,p_{2}\,\Big[\,\Big(\,\textbf{Q}^{m,k}_{t}\,-\,\frac{1}{n}\,\Big)^{2}\,- \,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big]\\&+\,p_{3}\,\Big[\,\Big(\,\textbf{Q}^{m,k}_{t}\,- \,\frac{1}{\lambda_{0}n}\,+\,\frac{1}{n}\,\Big)^{2}\,-\,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big]\,+\,p_{4}\,\Big[\,\Big(\,\textbf{Q}^{m,k}_{t}\,+\,\frac{1}{\lambda_{0}n}\,- \,\frac{1}{n}\,\Big)^{2}\,- \,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big], \end{align*} where $p_{1},\, p_{2},\, p_{3},\, p_{4}$ are defined as \begin{equation*} \begin{split} p_{1}\,&=\,\mathbb{P}\,\big(\,V_{t+1}\notin\mathcal{B}_{m},\,\sigma_{t}(V_{t+1})=k,\,\sigma_{t+1}(V_{t+1})\neq k\,\vert\,\mathcal{F}_{t}\,\big)\,=\,\frac{1}{2}\,\big(\,S^{k}_{t}- \lambda_{0}\,\textbf{S}^{m,k}_{t}\,\big),\\p_{2}\,&=\,\mathbb{P}\,\big(\,V_{t+1}\notin\mathcal{B}_{m},\,\sigma_{t}(V_{t+1})\neq k,\,\sigma_{t+1}(V_{t+1})= k\,\vert\,\mathcal{F}_{t}\,\big)\,=\,\frac{1}{2}\,\big(\,S^{k-1}_{t}- \lambda_{0}\,\textbf{S}^{m,k-1}_{t}\,\big),\\p_{3}\,&=\,\mathbb{P}\,\big(\,V_{t+1}\in\mathcal{B}_{m},\, \sigma_{t}(V_{t+1})=k,\,\sigma_{t+1}(V_{t+1})\neq k\,\vert\,\mathcal{F}_{t}\,\big)\,=\,\frac{1}{2}\,\lambda_{0}\,\textbf{S}^{m,k}_{t},\\p_{4}\,&=\,\mathbb{P}\,\big(\,V_{t+1}\in\mathcal{B}_{m},\,\sigma_{t}(V_{t+1})\neq k,\,\sigma_{t+1}(V_{t+1})=k\,\vert\,\mathcal{F}_{t}\,\big)\,=\, \frac{1}{2}\,\lambda_{0}\,\textbf{S}^{m,k-1}_{t}, \end{split} \end{equation*} where $V_{t+1}$ is the vertex selected at time $t+1$. Then, \begin{equation*} \mathbb{E}_{\sigma_{0}}\big[\,\big(\,\textbf{Q}^{m,k}_{t+1}\,\big)^{2}\,- \,\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\vert\,\mathcal{F}_{t}\,\big]\, =\,\frac{\textbf{Q}^{m,k}_{t}}{n}\,\big(\,\textbf{Q}^{m,k-1}_{t}\,- \,\textbf{Q}^{m,k}_{t}\,\big)\,+\,O\,\big(\,n^{-2}\,\big). \end{equation*} Taking the expectation on both sides completes the proof. \end{prop
roof} \begin{prop}\label{3.8} For the cyclic dynamics $(\sigma_{t})$, let $\mathcal{B}$ be a $\lambda$-partition for some $\lambda > 0$ and consider the two conditions below. \begin{enumerate} \setlength\itemsep{3pt} \item $t\,\geq\, t(n).$ \item $\textbf{S}_{0}\in\mathbb{S}^{\frac{r_{0}}{\sqrt{n}}}$ and $t\,\leq\,\gamma_{0}n$ for some constants $r_{0}, \gamma_{0} > 0.$ \end{enumerate} If either of these conditions is satisfied, then for sufficiently large $r>0$, \begin{equation*} \mathbb{P}_{\sigma_{0}}\,\Big(\,\textbf{S}_{t}\,\notin\,\mathbb{S}^{\frac{r}{\sqrt{n}}}\,\Big)\,=\,O\,\big(\,r^{-2}\,\big) \end{equation*} for all sufficiently large $n$. \end{prop} \begin{proof} By Proposition \ref{3.7}, \begin{align*} \mathbb{E}_{\sigma_{0}}\,\Big[\,\sum_{k=1}^{3}\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big]\,&=\,\Big(\,1-\frac{1}{n}\,\Big)\,\mathbb{E}_{\sigma_{0}}\,\Big[\,\sum_{k=1}^{3}\big(\,\textbf{Q}^{m,k}_{t-1}\,\big)^{2}\,\Big]\,+\,\frac{1}{n}\,\mathbb{E}_{\sigma_{0}}\,\Big[\,\sum_{k=1}^{3}\textbf{Q}^{m,k-1}_{t-1}\,\textbf{Q}^{m,k}_{t-1}\,\Big]\,+\,O\,\Big(\,\frac{1}{n^{2}}\,\Big)\\&=\,\Big(\,1- \frac{3}{2n}\,\Big)\,\mathbb{E}_{\sigma_{0}}\,\Big[\,\sum_{k=1}^{3}\big(\,\textbf{Q}^{m,k}_{t-1}\,\big)^{2}\,\Big]\,+\,O\,\Big(\,\frac{1}{n^{2}}\,\Big).
\end{align*} Proceeding as in Proposition \ref{2.2}, we get \begin{equation*} \mathbb{E}_{\sigma_{0}}\,\Big[\,\sum_{k=1}^{3}\big(\,\textbf{Q}^{m,k}_{t}\,\big)^{2}\,\Big]\,=\,\Big(\,1-\frac{3}{2n}\,\Big)^{t}\,\mathbb{E}_{\sigma_{0}}\Big[\,\sum_{k=1}^{3}\big(\,\textbf{Q}_{0}^{m,k}\,\big)^{2}\,\Big]\,+\,O\,\Big(\,\frac{1}{n}\,\Big). \end{equation*} Therefore, $\mathbb{E}_{\sigma_{0}}\,[\,\sum_{k=1}^{3}(\textbf{Q}^{m,k}_{t})^{2}\,]=O\,(n^{-1})$ if either of the conditions is satisfied. By Markov's inequality, we obtain \begin{equation*} \mathbb{P}_{\sigma_{0}}\,\bigg(\,\sum_{m=1}^{3}\,\Vert\textbf{S}^{m}_{t}- S_{t}\Vert_{2}\,>\,\frac{r}{2\sqrt{n}}\,\bigg)\,=\,O\,\big(\,r^{-2}\,\big). \end{equation*} In addition, by Proposition \ref{2.2} and Proposition \ref{3.3}, \begin{equation*} \mathbb{P}_{\sigma_{0}}\,\big(\,\Vert S_{t}- \bar{e}\Vert_{2}\,>\,r\,/\,(\,2\sqrt{n}\,)\,\big)\,=\,O\,\big(\,r^{-2}\,\big). \end{equation*} Combining these two estimates yields the result. \end{proof} \subsubsection{Basketwise Coupling} The \textit{basketwise coupling} introduced in \cite{1} is utilized herein. The objective of the basketwise coupling is to make the two basket chains coalesce. It is used in Proposition \ref{3.9} to prove that $\Vert\textbf{S}{}^{m}_{t}-\tilde{\textbf{S}}{}^{m}_{t}\Vert_{1}$ is a supermartingale. \par Consider the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, where $S_{0} = \tilde{S}_{0}$. The coupling begins at $t = 0$ with $m = 1$. While $\textbf{S}{}_{t}^{m}\neq\tilde{\textbf{S}}{}^{m}_{t}$, \begin{enumerate} \setlength\itemsep{3pt} \item Choose the color $I_{t+1} = \tilde{I}_{t+1}$ according to the distribution $S_{t}= \tilde{S}_{t}$. \item Choose the color $J_{t+1} = \tilde{J}_{t+1}$ to be $I_{t+1}$ with probability $\frac{1}{2}$, and $I_{t+1}+1$ with probability $\frac{1}{2}$. \item Uniformly choose a vertex $V_{t+1}$ that has the color $I_{t+1}$ in $\sigma_{t}$. \item Choose the vertex $\tilde{V}_{t+1}$ based on the following rules: \begin{enumerate} \setlength\itemsep{3pt} \item If $V_{t+1}\in \mathcal{B}_{m_{0}}$ for some $m_{0}<m$, then uniformly choose $\tilde{V}_{t+1}$ in $\mathcal{B}_{m_{0}}$ among the vertices that have the color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$. \item If $V_{t+1}\in \mathcal{B}_{m_{0}}$ for some $m_{0}\geq m$, $\textbf{S}{}^{m, I_{t+1}}_{t}\neq \tilde{\textbf{S}}{}^{m, \tilde{I}_{t+1}}_{t}$ and $\textbf{S}{}^{m, J_{t+1}}_{t}\neq \tilde{\textbf{S}}{}^{m, \tilde{J}_{t+1}}_{t}$, then uniformly choose $\tilde{V}_{t+1}$ in $\mathcal{B}_{[m,3]}$ among the vertices that have the color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$, where $\mathcal{B}_{[m,3]}$ is defined as $\bigcup_{i=m}^{3}\mathcal{B}_{i}$. \item In the other cases, let $\{{v}_{i}\} = v_{1},v_{2}, \dots$ be an enumeration of the vertices in $\mathcal{B}_{[m, 3]}$ with the color $I_{t+1}$ in $\sigma_{t}$, ordered first by the index of the basket they belong to and then by their index in $V$. Let $\{\tilde{v}_{i}\}=\tilde{v}_{1}, \tilde{v}_{2}, \dots$ be the enumeration of the vertices in $\mathcal{B}_{[m, 3]}$ with the color $\tilde{I}_{t+1}$ in $\tilde{\sigma}_{t}$, ordered by the same rule. Since $V_{t+1} \in \{v_{i}\}$, there exists $j$ satisfying $V_{t+1} = v_{j}$; set $\tilde{V}_{t+1} = \tilde{v}_{j}\in\{\tilde{v}_{i}\}$.
\end{enumerate} \item Change the color of the vertex $V_{t+1}$ to $J_{t+1}$ in $\sigma_{t}$, and change the color of the vertex $\tilde{V}_{t+1}$ to $\tilde{J}_{t+1}$ in $\tilde{\sigma}_{t}$. \end{enumerate} When $\textbf{S}{}_{t}^{1} = \tilde{\textbf{S}}{}_{t}^{1}$ is reached, repeat the process with $m = 2$. Note that if $\textbf{S}{}_{t}^{1} = \tilde{\textbf{S}}{}_{t}^{1}$ and $\textbf{S}{}_{t}^{2} = \tilde{\textbf{S}}{}_{t}^{2}$, then $\textbf{S}{}_{t}^{3} = \tilde{\textbf{S}}{}_{t}^{3}$. Let $\mathbb{P}^{BC}_{\sigma_{0},\tilde{\sigma}_{0}}$, $\mathbb{E}^{BC}_{\sigma_{0},\tilde{\sigma}_{0}}$, and $\mathbb{V}ar^{BC}_{\sigma_{0},\tilde{\sigma}_{0}}$ denote the underlying probability measure of the coupling, the corresponding expectation, and the variance, respectively. The following proposition establishes the coalescence of the basket chains with high probability. \begin{prop}\label{3.9} For the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$, suppose that $S_{0}=\tilde{S}_{0}$ and $\textbf{S}_{0}, \tilde{\textbf{S}}_{0}\in \mathbb{S}^{\frac{r}{\sqrt{n}}}$ for some constant $r > 0$. Let $\mathcal{B}$ be a $\lambda$-partition, where $\lambda > 0$. Then, for a given $\epsilon > 0$, there exists a sufficiently large $\gamma$ such that \begin{equation*} \mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,\textbf{S}_{\gamma n}=\tilde{\textbf{S}}_{\gamma n}\,\big)\,\geq\,1-\epsilon. \end{equation*} \end{prop} \begin{proof} Note that while iterating the chains with the basketwise coupling, $S_{t}=\tilde{S}_{t}$ always holds. In addition, if $\textbf{S}{}_{T}^{m} = \tilde{\textbf{S}}{}_{T}^{m}$ holds for some $T$ and $m$, then $\textbf{S}{}_{t}^{m} = \tilde{\textbf{S}}{}_{t}^{m}$ for all $t\geq T$. \par Let $\textbf{W}_{t} = \textbf{S}_{t}-\tilde{\textbf{S}}_{t}$, $W^{m}_{t}=\Vert\textbf{W}^{m}_{t}\Vert_{1}$, $\tau^{(0)}=0$ and $\tau^{(m)}=\min\{t\geq\tau^{(m-1)}:W^{m}_{t}=0\}$ for $m = 1,\, 2.$ Then, let \begin{equation*} \tau_{*}=\inf \,\big\{\,t:\textbf{S}_{t}\notin \mathbb{S}^{\rho} \,\, \text{or} \,\, \tilde{\textbf{S}}_{t}\notin \mathbb{S}^{\rho}\,\big\} \end{equation*} for some constant $\rho > 0$, and $\tau_{*}^{(m)}=\tau^{(m)}\wedge\tau_{*}$. Now, fix $m$ and consider the case where $\tau^{(m-1)}\leq t<\tau^{(m)}$ and $t<\tau_{*}$.\par The expectation of $W^{m}_{t}$ will be evaluated using the definition of the basketwise coupling. Consider the three cases in step 4 of the coupling: \begin{enumerate}[label=(\alph*)] \setlength\itemsep{3pt} \item Because $\textbf{S}^{m}_{t+1}=\textbf{S}^{m}_{t}$ and $\tilde{\textbf{S}}{}^{m}_{t+1}=\tilde{\textbf{S}}{}^{m}_{t}$, we have $W^{m}_{t+1}=W^{m}_{t}.$ \item By the conditions $\textbf{W}^{m,I_{t+1}}_{t}\textbf{W}^{m,I_{t+1}}_{t+1}\geq0$ and $\textbf{W}^{m,J_{t+1}}_{t}\textbf{W}^{m,J_{t+1}}_{t+1}\geq0$, \begin{align*} &\mathbb{E}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big[\,W^{m}_{t+1}-W^{m}_{t}\,\vert\,\mathcal{F}_{t}\,\big]\\&=\,\vert\,\mathbb{E}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\big[\,\textbf{W}^{m,I_{t+1}}_{t+1}\,\vert\,\mathcal{F}_{t}\,\big]\,\vert\,- \,\vert\,\textbf{W}^{m,I_{t+1}}_{t}\,\vert\,+\,\vert\,\mathbb{E}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big[\,\textbf{W}^{m,J_{t+1}}_{t+1}\,|\,\mathcal{F}_{t}\,\big]\,\vert\,-\,\vert\,\textbf{W}^{m,J_{t+1}}_{t}\,\vert\\&\leq\,\frac{- \vert\textbf{W}^{m,I_{t+1}}_{t}\vert}{\sum_{m_{0}\geq m}\textbf{S}^{m_{0},I_{t+1}}_{t}\vert\mathcal{B}_{m_{0}}\vert}+\frac{\vert \textbf{W}^{m,I_{t+1}}_{t}\vert}{\sum_{m_{0}\geq m}\textbf{S}^{m_{0},I_{t+1}}_{t}\vert\mathcal{B}_{m_{0}}\vert}=0.
\end{align*} \item If $V_{t+1}$ and $\tilde{V}_{t+1}$ are both in $\mathcal{B}_{m}$ or both outside $\mathcal{B}_{m}$, then $W^{m}_{t+1}=W^{m}_{t}$. Otherwise, \begin{equation*} \vert\textbf{W}^{m,I_{t+1}}_{t+1}\vert- \vert\textbf{W}^{m,I_{t+1}}_{t}\vert=-\frac{1}{\vert\mathcal{B}_{m}\vert}\quad\text{ and }\quad \vert\textbf{W}^{m,J_{t+1}}_{t+1}\vert- \vert\textbf{W}^{m,J_{t+1}}_{t}\vert\leq \frac{1}{\vert\mathcal{B}_{m}\vert}. \end{equation*} Summing these two inequalities gives $W^{m}_{t+1}\leq W^{m}_{t}$. \end{enumerate} Therefore, $W^{m}_{t}$ is a supermartingale. We now show that the probability of case (b) occurring in step 4 is bounded below. Two adjacent colors $I_{t+1}, J_{t+1}$ satisfying the following conditions can be chosen: \begin{equation*} \textbf{S}^{m,I_{t+1}}_{t}\neq\tilde{\textbf{S}}{}^{m,I_{t+1}}_{t},\,\textbf{S}{}^{m,J_{t+1}}_{t}\neq\tilde{\textbf{S}}{}^{m,J_{t+1}}_{t},\, \big(\textbf{S}{}^{m,I_{t+1}}_{t}- \tilde{\textbf{S}}{}^{m,I_{t+1}}_{t}\big)\big(\textbf{S}{}^{m,J_{t+1}}_{t}- \tilde{\textbf{S}}{}^{m,J_{t+1}}_{t}\big)<0. \end{equation*} Since $\textbf{S}_{t},\tilde{\textbf{S}}_{t}\in \mathbb{S}^{\rho}$, the probability of choosing this color $I_{t+1}$ in step 1 is bounded below by a constant. In step 2, the probability of choosing the new color $J_{t+1}$ given the old color $I_{t+1}$ is $\frac{1}{2}$. Then, because $t < \tau_{*}$, the probability of case (b) occurring is bounded below by a constant. Based on the definition of the coupling, it can then be shown that $\mathbb{V}ar^{BC}\,(\,W^{m}_{t+1}|\mathcal{F}_{t}\,)\geq c n^{-2}$ for some $c>0$.\par Then, by Proposition \ref{3.4} and Proposition \ref{3.8}, constants $\gamma_{1}$, $\gamma_{2}$, $r_{1}$ can be obtained such that $\tau^{(1)}_{*}\leq \gamma_{1}n$ with probability at least $1-\epsilon/6$, and such that, if $t\leq \gamma_{1}n$, then $\textbf{S}_{t}, \tilde{\textbf{S}}_{t}\in \mathbb{S}^{\frac{r_{1}}{\sqrt{n}}}$ with probability at least $1-\epsilon/6$. If $\tau^{(1)}_{*}\leq \gamma_{1}n$ and $\textbf{S}_{\tau^{(1)}_{*}}, \tilde{\textbf{S}}_{\tau^{(1)}_{*}}\in \mathbb{S}^{\frac{r_{1}}{\sqrt{n}}}$, then $\tau^{(2)}_{*}\leq \gamma_{2}n$ with probability at least $1- \epsilon/6$. Thus, \begin{equation*} \mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,\tau^{(2)}_{*}\leq \gamma_{2}n\,\big)\,=\,\mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,\tau^{(2)}\wedge\tau_{*}\leq \gamma_{2}n\,\big)\geq1-\frac{\epsilon}{2}. \end{equation*}\par Next, we show that the probability of $\tau_{*}\leq\gamma_{2}n$ is $O(n^{-1})$. For $1\leq m,\,j\leq 3$, define the event $C^{m, j}$ as $\bigcup^{\gamma_{2} n}_{u=1}\,\{\,(\sigma_{t})\colon \,\vert\,\textbf{S}^{m,j}_{u}-\frac{1}{3}\vert\geq \rho\,\}$. Define the value $Y^{m,j}=\vert\,\{\,t\,\colon\,\vert\textbf{S}^{m,j}_{t}- \frac{1}{3}\vert\geq\rho/2,\,1\leq t \leq \gamma_{2} n\,\}\,\vert$ for the cyclic dynamics $(\sigma_{t})$. Then, by Proposition \ref{3.8}, \begin{equation*} \mathbb{E}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big[\,Y^{m,j}\,\big]\,\leq\, \gamma_{2} n\, O\big(n^{-1}\big)\,=\,O(\gamma_{2}). \end{equation*} For $\gamma_{2}>\frac{\lambda\rho}{2}$, if the cyclic dynamics $(\sigma_{t})$ lies in the set $C^{m,j}$, then $Y^{m,j}>\frac{n \lambda \rho}{2}$, since each step changes $\textbf{S}^{m,j}$ by at most $1/(\lambda n)$, so the chain must spend at least $\frac{n\lambda\rho}{2}$ steps with $\vert\textbf{S}^{m,j}_{t}-\frac{1}{3}\vert\geq\rho/2$ before the deviation can reach $\rho$.
Thus, by Markov's inequality, \begin{equation*} \mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,C^{m,j}\,\big)\,\leq\,\mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\Big(\,Y^{m,j}\geq\frac{n \lambda \rho}{2}\,\Big)\,\leq\, \frac{2\,\mathbb{E}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,[\,Y^{m,j}\,]}{n \lambda \rho}\,=\,O\big(n^{-1}\big). \end{equation*} Applying this to all $1\leq m,\,j\leq 3$ in $\textbf{S}_{t}$, and similarly for $\tilde{\textbf{S}}_{t}$, we obtain \begin{equation*} \mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,\tau_{*}\leq\gamma_{2} n\,\big)\,=\,O\big(n^{-1}\big). \end{equation*} Combining the above two estimates, \begin{equation*} \mathbb{P}^{BC}_{\sigma_{0}, \tilde{\sigma}_{0}}\,\big(\,\tau^{(2)}\leq \gamma_{2} n\,\big)\,\geq\, 1-\frac{\epsilon}{2}+O\big(n^{-1}\big). \end{equation*} This completes the proof. \end{proof} We now introduce the overall coupling, which combines the coupling methods proposed in the previous sections. With this coupling, the coalescence of the two cyclic dynamics is obtained with high probability, and the proof of the upper bound is completed. \subsection{Overall Coupling}\label{3.23} The \textit{overall coupling} of the two cyclic dynamics $(\sigma_{t})$ and $(\tilde{\sigma}_{t})$ is defined with parameters $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$, $\gamma_{4}\,>\,0$, taken from Proposition \ref{2.2}, Proposition \ref{3.5}, Proposition \ref{3.6}, and Proposition \ref{3.9}, respectively. The first cyclic dynamics $(\sigma_{t})$ begins at $\sigma_{0}$ and the second cyclic dynamics $(\tilde{\sigma}_{t})$ begins at $\tilde{\sigma}_{0}$. Here, $\sigma_{0}$ is arbitrary, while $\tilde{\sigma}_{0}$ is distributed according to $\mu_{n}$. The coupling evolves through the following procedure: \begin{enumerate} \setlength\itemsep{3pt} \item Iterate the two chains independently until time $t_{(1)}(n)=\gamma_{1}n$. \item Configure the baskets $\mathcal{B} = \bigcup_{k = 1}^{3}\mathcal{B}_{k}$ with the colors in $\sigma_{t_{(1)}(n)}$, i.e. $\mathcal{B}_{k} =\{\,v:\sigma_{t_{(1)}(n)}(v) = k\,\},\,k = 1,\,2,\,3.$ \item Iterate the two chains independently until time $t_{(2)}(n)=t_{(1)}(n)+t(n).$ \item Iterate the two chains with the semi-coordinatewise coupling until time $t_{(3)}(n)=t_{(2)}(n)+\gamma_{2}n.$ \item Iterate the two chains with the semi-synchronized coupling until time $t_{(4)}(n)=t_{(3)}(n)+\gamma_{3}n.$ \item Iterate the two chains with the basketwise coupling until time $t_{(5)}(n)=t_{(4)}(n)+\gamma_{4}n.$ \end{enumerate} Let $\mathbb{P}^{OC}_{\sigma_{0}}$ denote the underlying probability measure. \subsection{Proof of Upper Bound}\label{3.44} Here, the upper bound of the mixing time is proved using the overall coupling. The proof is similar to that in \cite[Section 4.7]{1}; however, because the conditions are slightly different, the detailed proof is provided here to ensure completeness. \begin{prop}\label{3.10} For the cyclic dynamics $(\sigma_{t})$ and any constant $\epsilon > 0$, there exists $\gamma > 0$ such that, for all sufficiently large $n$, \begin{equation*} \Vert\,\mathbb{P}_{\sigma_{0}}(\sigma_{t_{\gamma}(n)}\in\cdot)- \mu_{n}\,\Vert_{\textnormal{TV}}\leq\epsilon. \end{equation*} \end{prop} \begin{proof} First, the overall coupling is applied via the following seven steps: \begin{enumerate} \setlength\itemsep{3pt} \item Choose $\rho > 0$.
By Proposition \ref{2.2}, a sufficiently large $\gamma_{1}$ can be chosen such that $S_{t_{(1)}(n)}\in \mathcal{S}^{\rho}$ with probability at least $1-\epsilon/6$ for all large $n$. \item Then, $\mathcal{B}$ (defined in step 2 of the overall coupling procedure) is a $(\frac{1}{3}-\rho)$-partition. \item By Proposition \ref{3.1}, for some $r > 0$, $S_{t_{(2)}(n)}\in \mathcal{S}^{\frac{r}{\sqrt{n}}}$ with probability at least $1-\epsilon/6$. \item By the proof of Proposition \ref{2.8}, if $r$ is sufficiently large, $\tilde{S}_{t_{(2)}(n)}\in \mathcal{S}^{\frac{r}{\sqrt{n}}}$ with probability at least $1-\epsilon/6$. \item By Propositions \ref{3.5} and \ref{3.6}, $S_{t_{(4)}(n)}= \tilde{S}_{t_{(4)}(n)}$ with probability at least $1-3\epsilon/6$. \item By Proposition \ref{3.8}, for a sufficiently large $r_{1}>0$, $\textbf{S}_{t_{(4)}(n)}, \tilde{\textbf{S}}_{t_{(4)}(n)}\in \mathbb{S}^{\frac{r_{1}}{\sqrt{n}}}$ with probability at least $1-\epsilon/6$. \item By Proposition \ref{3.9}, $\textbf{S}_{t_{(5)}(n)}= \tilde{\textbf{S}}_{t_{(5)}(n)}$ with probability at least $1-5\epsilon/6$. \end{enumerate} For $t\geq t_{(1)}(n)$, conditionally on $\mathcal{F}_{t_{(1)}(n)}$, the distribution of $\sigma_{t}$ is invariant under permutations of the vertices within each basket $\mathcal{B}_{m}$, by the manner in which the baskets $\mathcal{B}$ were defined. As $\mu_{n}$ is the uniform distribution on $\Sigma_{n}$, the same holds for $\tilde{\sigma}_{t}$. Thus, \begin{align*} &\Vert\,\mathbb{P}^{OC}_{\sigma_{0}}\,\big(\,\sigma_{t_{(5)}(n)}\in \cdot\,\vert\,\mathcal{F}_{t_{(1)}(n)},\,S_{t_{(1)}(n)}\in \mathcal{S}^{\rho}\,\big)\,-\, \mu_{n}\,\Vert_{\textnormal{TV}}\\&\quad=\Vert\,\mathbb{P}^{OC}_{\sigma_{0}}\,\big(\,\textbf{S}_{t_{(5)}(n)}\in \cdot\,\vert\,\mathcal{F}_{t_{(1)}(n)},\,S_{t_{(1)}(n)}\in \mathcal{S}^{\rho}\,\big)\,-\,\mu_{n}\circ \textbf{S}^{-1}\,\Vert_{\textnormal{TV}}\,\\&\quad\leq\, \mathbb{P}^{OC}_{\sigma_{0}}\big(\,\textbf{S}_{t_{(5)}(n)}\neq\tilde{\textbf{S}}_{t_{(5)}(n)}\,\vert\,\mathcal{F}_{t_{(1)}(n)},\,S_{t_{(1)}(n)}\in \mathcal{S}^{\rho}\,\big)\,\leq\,5\epsilon/6. \end{align*} Therefore, \begin{align*} &\Vert\,\mathbb{P}_{\sigma_{0}}\,(\,\sigma_{t_{(5)}(n)}\in \cdot\,)- \mu_{n}\,\Vert_{\textnormal{TV}}\\&\quad\leq\,\mathbb{E}^{OC}_{\sigma_{0}}\,\big[\,\Vert\,\mathbb{P}^{OC}_{\sigma_{0}}\,\big(\,\sigma_{t_{(5)}(n)}\in \cdot\,\vert\,\mathcal{F}_{t_{(1)}(n)}\,\big)\,-\, \mu_{n}\,\Vert_{\textnormal{TV}}\,\vert\,S_{t_{(1)}(n)}\in \mathcal{S}^{\rho}\,\big]\,+\,\mathbb{P}^{OC}_{\sigma_{0}}\,\big(\,S_{t_{(1)}(n)}\notin \mathcal{S}^{\rho}\,\big) \,\leq\,\epsilon. \end{align*} Finally, let $\gamma$ be $\gamma_{1}+\gamma_{2}+\gamma_{3}+\gamma_{4}$. This completes the proof. \end{proof} \subsection{Proof of Theorem 1.1} In Theorem \ref{1.1}, the lower bound of the mixing time is guaranteed by Proposition \ref{2.8}, and the upper bound is guaranteed by Proposition \ref{3.10}. This establishes the cutoff phenomenon for the cyclic dynamics.\vskip 7pt \paragraph{\textit{Acknowledgement.}} The author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1C1B6006896). This work was supervised by I. Seo. \printbibliography \end{document}
\section{Introduction} \label{introduction} After the pioneering discovery of the giant planet orbiting 51 Peg by \citet{may95}, two decades ago, the literature\footnote{\url{http://exoplanet.eu/}} reports to date the discovery of more than 3700 planets, in about 2800 planetary systems. Solar-type stars in the field host the vast majority of these exoplanets. The characteristics of field stars may, however, limit our ability to answer very basic questions. For example, more than 70\% of the known planets orbit stars with masses $M_* < 1.30$~M$_{\odot}$. Planet formation as a function of the mass of the host star and of the stellar environment is therefore still poorly understood. In addition, it has been observed that main sequence stars hosting giant planets are metal-rich \citep{gon97,san04}, while evolved stars hosting giant planets are likely not (\citealt{pas07}; see, however, \citealt{jon16} for different conclusions). There is no clear explanation for this discrepancy, and several competing scenarios have been proposed, including stellar pollution acting on main-sequence stars \citep[e.g.,][]{lau97}, a planet formation (core-accretion) mechanism favoring the birth of planets around metal-rich stars \citep{pol96}, and an effect of stellar migration (radial mixing) in the Galactic disk \citep{hay09}. Open cluster stars formed simultaneously from a single molecular cloud with uniform physical properties, and thus have the same age, chemical composition and galactocentric distance. As a result, they are valuable testbeds for studying how the planet occurrence rate depends on stellar mass and environment. Furthermore, comparing homogeneous sets of open-cluster stars with and without planets is an ideal method for determining whether the presence of a planetary companion alters the chemical composition of the host stars \citep[e.g.,][]{isr09}. The number of planetary-mass companions discovered around open cluster stars is rapidly growing, amounting to date to 25~planets. Two hot Jupiters and a massive outer planet in the Praesepe open cluster \citep{qui12,mal16}, a hot Jupiter in the Hyades \citep{qui14}, two sub-Neptune planets in NGC~6811 \citep{mei13}, five Jupiter-mass planets in~M67 \citep{bru14,bru16,bru17}, a Neptune-sized planet transiting an M4.5~dwarf in the Hyades \citep{man16,dav16}, three Earth-to-Neptune-sized planets around a mid-K dwarf in the Hyades \citep{man17b}, a Neptune-sized planet orbiting an M~dwarf in Praesepe \citep{obe16}, and eight planets from~K2 campaigns \citep{pop16,bar16,lib16,man17a,cur18} have been recently reported. Three planet candidates were also announced in the~M67 field \citep{nar16}, although all the host stars appear to be non-members. Previous radial velocity studies focusing on evolved stars revealed a giant planet around one of the Hyades clump giants \citep{sat07} and a substellar-mass object in NGC~2423 \citep{lov07}. These studies confirm that giant planets around open cluster stars exist and can probably migrate in a dense cluster environment. \citet{mei13} found that the properties and occurrence rate of low-mass planets are the same in open clusters and field stars. Finally, the radial velocity (RV) measurements of~M67 show that the occurrence rate of giant planets is compatible with that observed in field stars ($\sim16\%$), albeit with an excess of hot Jupiters in this cluster \citep{bru16}.
These studies demonstrate the wealth of information which can be gained from open clusters, but have so far been limited to solar-mass stars. It is necessary to extend this work to a broader range of stellar masses and ages for a better understanding of how the planet occurrence rate depends on the mass, environment, and chemical composition of the host stars. However, massive hot stars show few, broad spectral lines, so cool stars in the red giant region are excellent candidates for extending these searches to higher masses \citep{set04,joh07}. Over the past three years we have carried out a search for massive planets around 152 evolved stars belonging to 29 open clusters. From these clusters we selected 114 targets with the best quality data and with a minimum of two observations per target, as described in Sect.~\ref{finalsample}. These targets were relatively well studied for duplicity, and also have good constraints for mass, composition, and age determinations. Our survey aims to estimate the planet occurrence rate of intermediate-mass late-type giant stars in young and intermediate-age open clusters. This paper provides an overview (as done in \citealt{pas12} for M67) of the stellar sample and the observations, discussing the clusters' characteristics and the RV distribution of the stars, and highlighting the most likely planetary host candidates. The paper is structured as follows. The observations, sample selection, and methods used in our analysis are described in Sect.~\ref{methods}. Several results are presented in Sect.~\ref{results}, including a detailed overview of the data we have collected so far, combined with observations of other programs, and the discovery of a new planet. Finally, our conclusions are stated in Sect.~\ref{conclusions}.
\begin{table} \caption{Number of observed stars ($N_{\rm obj}$) of our original list of 29 open clusters and total number of HARPS observations$^a$ ($N_{\rm obs}$) carried out by our and other programs for each cluster.} {\centering \begin{tabular}{c c c | c c c} \hline\hline Cluster & $N_{\rm obj}$ & $N_{\rm obs}$ & Cluster & $N_{\rm obj}$ & $N_{\rm obs}$ \\ \hline IC 2714 & 8 & 217 & NGC 2972 & 2 & 7 \\ IC 4651 & 13 & 150 & NGC 3114 & 7 & 89 \\ IC 4756 & 13 & 52 & NGC 3532 & 6 & 45 \\ Melotte 71 & 6 & 11 & NGC 3680 & 6 & 32 \\ NGC 1662 & 2 & 4 & NGC 3960 & 3 & 6 \\ NGC 2204 & 8 & 20 & NGC 4349 & 3 & 98 \\ NGC 2251 & 3 & 3 & NGC 5822 & 11 & 96 \\ NGC 2324 & 3 & 3 & NGC 6067 & 3 & 13 \\ NGC 2345 & 4 & 8 & NGC 6134 & 9 & 14 \\ NGC 2354 & 8 & 22 & NGC 6208 & 2 & 6 \\ NGC 2355 & 1 & 1 & NGC 6281 & 2 & 7 \\ NGC 2477 & 10 & 24 & NGC 6425 & 2 & 6 \\ NGC 2506 & 6 & 8 & NGC 6494 & 2 & 8 \\ NGC 2818 & 3 & 8 & NGC 6633 & 4 & 22 \\ NGC 2925 & 2 & 14 & & \\ \hline\hline \end{tabular} \\ } \small \vspace{0.1in} {\bf Note.}\\ $^a$Total of 994 observations of 152 stars from which we selected our final sample of 826 effective observations of 114 stars, as described in Sect.~\ref{finalsample}.\vspace{0.1cm}\\ \label{tabclus} \end{table}
\section{Working sample, observations, and methods}\label{methods} The stellar sample was selected from \citet{mer08}, who determined cluster membership and binaries using CORAVEL. The stellar B and V magnitudes, $B_{\rm mag}$ and $V_{\rm mag}$, were obtained from the Simbad\footnote{\url{http://simbad.u-strasbg.fr/simbad/}} database, from which we also computed the $(B-V)$ color index. Absolute magnitudes, $M_V$, were estimated from $V_{\rm mag}$ and the cluster distance modulus obtained from \citet{kha05} and from the WEBDA\footnote{\url{http://webda.physics.muni.cz/}} cluster database \citep{mer95}. Both $(B-V)$ and $M_V$ were corrected for reddening, $E(B-V)$, from \citet{wu09}. Cluster ages were taken from \citet{wu09} and metallicities from \citet{wu09} and \citet{hei14}.
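As a concrete illustration of this photometric bookkeeping, the short sketch below (Python) dereddens the color index and computes an absolute magnitude for one target. The $B$ magnitude and the standard coefficient $R_V = 3.1$ are assumptions introduced for the example, and whether the adopted distance moduli already include extinction is not restated here; this is not the code used in this work.
\begin{verbatim}
# Minimal sketch for one target (V and cluster parameters follow the
# values tabulated in this paper for IC 2714; B_mag is a placeholder).
V_mag = 11.10        # apparent V magnitude (Simbad)
B_mag = 12.40        # apparent B magnitude (assumed for illustration)
mu    = 11.52        # cluster distance modulus
E_BV  = 0.34         # reddening E(B-V) (Wu et al. 2009)
R_V   = 3.1          # total-to-selective extinction ratio (assumed)

BV_0 = (B_mag - V_mag) - E_BV    # dereddened color index
A_V  = R_V * E_BV                # visual extinction implied by E(B-V)
M_V  = V_mag - mu - A_V          # extinction-corrected absolute magnitude

print(f"(B-V)_0 = {BV_0:.2f} mag, M_V = {M_V:.2f} mag")
\end{verbatim}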
The main cluster selection criteria were the age of the cluster (between 0.02 and $\sim$2~Gyr, with turnoff masses $\gtrsim 2$~M$_{\odot}$) and the apparent magnitude of the giant stars (brighter than $V_{\rm mag} \sim 14$~mag). We then rejected cool, bright stars with $(B-V) > 1.4$ as these are known to be RV unstable \citep[e.g.,][]{hek06}. Known binaries and non-members were removed from the sample. It is important to note that we selected only clusters with at least two giant members each and that the chosen open clusters span a rather narrow metallicity range (about $-0.2 < {\rm [Fe/H]} < 0.2$). Seven clusters were in common with \citet{lov07}. To first order we assumed that all giants in a given cluster have the same mass, which is approximately the mass at the main-sequence turnoff. \subsection{HARPS observations}\label{harpsobs} The {\em High Accuracy Radial velocity Planet Searcher\footnote{\url{http://www.eso.org/sci/facilities/lasilla/instruments/harps.html}}} (HARPS; \citealt{may03}) is the planet hunter at the ESO 3.6 m telescope in La Silla. In high accuracy mode (HAM) it has an aperture on the sky of one arcsecond, and a resolving power of 115\,000. The spectral range covered is 380--680 nm. In addition to being exceptionally stable, HARPS achieves the highest precision using the simultaneous calibration principle: the spectrum of a calibration (Th-Ar) source is recorded simultaneously with the stellar spectrum, through a second optical fiber. As a rule of thumb, the precision of HARPS scales as $\epsilon RV \propto (S/N)^{-1}$, where $\epsilon RV$ is the RV photon-noise error and $S/N$ is the signal-to-noise ratio. An $\epsilon RV$ of a few m/s ($\sim$10~m~s$^{-1}$ or better) is possible with limited $S/N$ observations, namely at least $S/N \sim 10$. In practice, our observed HARPS spectra typically have a peak $S/N$ of 10--20 for the faintest stars and of 50--100 for the brightest ones. HARPS is equipped with a very powerful pipeline \citep{may03} that provides on-line RV measurements, which are computed by cross-correlating the stellar spectrum with a numerical template mask. This on-line pipeline also provides the associated $\epsilon RV$. For all of our stars, irrespective of the spectral type and luminosity, we used the solar template (G2V) mask. Between April 4th, 2013 and April 1st, 2015 we obtained 500 observations of 152 targets with HARPS spread over our 29 open clusters. We then combined these data with 494 more HARPS observations of stars that were in our sample and collected using the same G2V mask, all available in the ESO Archive. This provided a total of 994 observations for these targets obtained with a decade-long baseline, from 2005 to 2015, from which we selected our final sample of 826 effective observations of 114 stars, as described in Sect.~\ref{finalsample}. Table~\ref{tabclus} lists the stars by cluster, with the initial number of objects and observations considered in this work. \begin{figure} \centering \includegraphics[width=\columnwidth]{snrfig1.eps} \caption{Photon noise error in the measurement of the CCF center of single observations vs.
$S/N$ at 490 nm for our sample of open cluster targets observed with HARPS. The overall RV errors are typically distributed around 1--4~m/s and range from 42~cm/s to 119~m/s. The overall $S/N$ distribution peaks around 20--40 and ranges from 2.4 to 223. Black filled circles are the selected data, and gray crosses are the discarded ones, all identified from a visual inspection of the CCFs. Black open circles depict the data with $S/N < 10$ that were also discarded and this threshold is represented by the vertical dashed line.} \label{snrfig} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{logg_vs_logg_0.eps} \caption{Comparison between spectroscopic and photometric $\log g$. The 1:1 relationship is shown by the black dashed line. A systematic trend is illustrated by the red dashed line. The final photometric $\log g$ values were corrected for this trend.} \label{loggphot} \end{figure} \begin{table*} \caption{Analysis of $\Delta RV/2$ for the final sample described in Sect.~\ref{finalsample}.} {\centering \begin{tabular}{l c c c | c c c c c c} \hline\hline Object & $V_{\rm mag}$ & $\log g$ & $RV_{\rm M08}$ & $N_{\rm eff}$$^a$ & $t_{\rm span}$ & $\left<{RV}\right>$ & $\Delta RV/2$ & $\Delta RV_{\rm H-C}$$^b$ & flag$^c$ \\ & (mag) & [cm s$^{-2}$] & (km s$^{-1}$) & & (d) & (km s$^{-1}$) & (m s$^{-1}$) & (m s$^{-1}$) & \\ \hline IC 2714 5 & 11.10 & 2.70 & $-$14.53 & 25 & 3661 & $-$14.388 & 27.76 & 83 & \\ IC 2714 53 & 11.522 & 2.75 & $-$13.37 & 21 & 3198 & $-$13.269 & 45.34 & 42 & p(A) \\ IC 2714 87 & 11.395 & 2.62 & $-$13.23 & 23 & 3661 & $-$13.205 & 44.84 & $-$34 & \\ IC 2714 110 & 11.73 & 2.85 & $-$13.80 & 27 & 3661 & $-$13.771 & 42.00 & $-$31 & p(B) \\ IC 2714 121 & 10.80 & 2.10$^d$ & $-$13.37 & 21 & 1570 & $-$13.358 & 27.40 & $-$47 & \\ IC 2714 126 & 11.04 & 2.68$^d$ & $-$14.42 & 30 & 3661 & $-$14.130 & 24.95 & 230 & \\ IC 2714 190 & 11.32 & 2.55$^d$ & $-$13.60 & 27 & 3662 & $-$13.541 & 25.26 & 0 & \\ IC 2714 220 & 11.13 & 2.62$^d$ & $-$13.03 & 28 & 3560 & $-$13.374 & 69.12 & $-$403 & p(C) \\ IC 4651 6333 & 10.44 & 2.05$^d$ & $-$30.87 & 3 & 216 & $-$30.489 & 12.75 & 92 & \\ IC 4651 7646 & 10.363 & 2.61 & $-$31.18 & 7 & 3285 & $-$31.103 & 4.15 & $-$211 & \\ IC 4651 8540 & 10.894 & 2.26 & $-$30.36 & 25 & 3284 & $-$30.177 & 21.88 & $-$105 & \\ IC 4651 9025 & 10.90 & 2.90 & $-$30.46 & 26 & 3284 & $-$30.261 & 25.03 & $-$89 & \\ IC 4651 9122 & 10.7 & 2.52 & $-$30.58 & 51 & 3284 & $-$30.253 & 116.39 & 38 & p(D) \\ IC 4651 9791 & 10.44 & 2.23 & $-$31.44 & 7 & 3050 & $-$31.152 & 30.02 & 0 & \\ IC 4651 11218 & 11.09 & 3.00 & $-$30.40 & 2 & 20 & $-$31.113 & 1.54 & $-$1001 & B \\ IC 4651 12935 & 11.00 & 4.38$^d$ & $-$30.26 & 3 & 215 & $-$29.727 & 8.30 & 244 & \\ IC 4756 12 & 9.54 & 2.75 & $-$25.25 & 3 & 81 & $-$25.128 & 2.99 & $-$73 & \\ IC 4756 14 & 8.86 & 2.47 & $-$24.78 & 3 & 79 & $-$22.843 & 21.62 & 1743 & B \\ IC 4756 28 & 9.01 & 2.42 & $-$25.26 & 4 & 105 & $-$24.982 & 10.98 & 84 & \\ IC 4756 38 & 9.83 & 3.00 & $-$25.78 & 8 & 3087 & $-$25.650 & 5.12 & $-$65 & \\ IC 4756 42 & 9.46 & 3.21 & $-$24.92 & 2 & 16 & $-$24.719 & 1.37 & 7 & \\ IC 4756 44 & 9.77 & 3.30 & $-$26.01 & 6 & 2593 & $-$25.814 & 21.17 & 0 & p \\ IC 4756 49 & 9.46 & 2.83 & $-$25.40 & 4 & 110 & $-$25.164 & 10.06 & 41 & \\ IC 4756 52 & 8.06 & 3.10 & $-$25.21 & 4 & 136 & $-$25.132 & 48.81 & $-$117 & p \\ IC 4756 81 & 9.46 & 3.00 & $-$23.25 & 3 & 76 & $-$23.060 & 36.99 & $-$4 & p \\ IC 4756 101 & 9.36 & 3.20 & $-$25.74 & 3 & 81 & $-$25.592 & 4.42 & $-$47 & \\ IC 4756 109 & 9.05 & 3.30 & $-$25.25 & 4 & 111 &
$-$24.693 & 14.80 & 362 & \\ IC 4756 125 & 9.36 & 3.11 & $-$24.85 & 3 & 81 & $-$24.751 & 5.36 & $-$95 & \\ IC 4756 164 & 9.27 & 3.40 & $-$25.51 & 4 & 106 & $-$25.294 & 7.93 & 23 & \\ Melotte 71 3 & 14.113 & 4.31$^d$ & +50.45 & 2 & 21 & +50.753 & 5.52 & 0 & \\ Melotte 71 19 & 11.880 & 2.62$^d$ & +49.64 & 2 & 21 & +50.244 & 8.27 & 301 & \\ Melotte 71 23 & 10.990 & 1.53$^d$ & +49.73 & 2 & 21 & +49.565 & 13.30 & $-$467 & \\ Melotte 71 121 & 12.800 & 2.69$^d$ & +50.91 & 2 & 22 & +51.185 & 6.60 & $-$28 & \\ Melotte 71 130 & 12.687 & 2.53$^d$ & +49.92 & 2 & 23 & +50.416 & 4.79 & 193 & \\ NGC 2204 1320 & 12.607 & 2.55$^d$ & +91.83 & 2 & 21 & +91.522 & 24.25 & $-$337 & \\ NGC 2204 2136 & 13.122 & 2.64$^d$ & +89.09 & 2 & 19 & +93.318 & 197.59 & 4199 & B \\ NGC 2204 2212 & 12.76 & 2.40$^d$ & +92.11 & 2 & 21 & +92.252 & 35.68 & 113 & \\ NGC 2204 3324 & 12.830 & 2.13$^d$ & +90.73 & 2 & 19 & +90.759 & 2.55 & 0 & \\ NGC 2204 3325 & 11.563 & $-$1.04$^d$ & +92.67 & 2 & 22 & +91.786 & 83.38 & $-$913 & B \\ NGC 2204 4137 & 11.97 & 2.82$^d$ & +91.13 & 2 & 23 & +92.709 & 2.99 & 1550 & B \\ NGC 2345 14 & 10.73 & 0.05$^d$ & +59.80 & 2 & 22 & +58.860 & 2.25 & $-$411 & \\ NGC 2345 43 & 10.70 & 0.37$^d$ & +58.82 & 2 & 22 & +58.492 & 7.87 & 201 & \\ NGC 2345 50 & 12.82 & $-$0.01$^d$ & +60.41 & 2 & 21 & +59.152 & 56.74 & $-$730 & B \\ NGC 2345 60 & 10.48 & 0.33$^d$ & +58.41 & 2 & 21 & +57.881 & 11.36 & 0 & \\ NGC 2354 66 & 11.73 & 1.74$^d$ & +34.08 & 3 & 35 & +34.281 & 15.48 & $-$115 & \\ NGC 2354 91 & 11.656 & 1.66$^d$ & +34.11 & 2 & 21 & +34.245 & 22.56 & $-$180 & \\ NGC 2354 125 & 11.73 & 1.73$^d$ & +32.44 & 3 & 35 & +33.347 & 4.98 & 591 & \\ NGC 2354 152 & 12.870 & 2.20$^d$ & +34.25 & 2 & 20 & +34.566 & 0.17 & 0 & \\ NGC 2354 183 & 11.555 & 2.90 & +34.25 & 2 & 21 & +34.524 & 6.66 & $-$41 & \\ NGC 2354 205 & 11.13 & 2.80 & +33.73 & 2 & 23 & +34.148 & 26.10 & 102 & \\ NGC 2354 219 & 11.001 & 1.69$^d$ & +31.50 & 3 & 37 & +32.330 & 11.37 & 514 & \\ \hline\hline \end{tabular} \\ } \small \vspace{0.1in} {\bf Notes.}\\ $^a$ $N_{\rm eff}$ refers to the effective number of observations after averaging those collected within a time interval of less than three days (see Sect.~\ref{finalsample}).\vspace{0.1cm}\\ $^b$ $\Delta RV_{\rm H-C} = \left<{RV}\right> - RV_{\rm M08} - {\it Offset}$, where {\it Offset} is provided in Table~\ref{tabif}.\\ $^c$ Flags are ``p'' for planet-host candidate, where the labels ``(A)'', ``(B)'', etc. mark the candidates analyzed in more detail (see Sect.~\ref{sectcandidates}); ``B'' for long-period binary; and ``[B]'' for short-period binary (see Sect.~\ref{binaries}).\vspace{0.1cm}\\ $^d$ Photometric estimation (see Sect.~\ref{sectloggphot}).\\ \label{tablemain} \end{table*} \setcounter{table}{1} \begin{table*} \caption{Continued.} {\centering \begin{tabular}{l c c c | c c c c c c} \hline\hline Object & $V_{\rm mag}$ & $\log g$ & $RV_{\rm M08}$ & $N_{\rm eff}$$^a$ & $t_{\rm span}$ & $\left<{RV}\right>$ & $\Delta RV/2$ & $\Delta RV_{\rm H-C}$$^b$ & flag$^c$ \\ & (mag) & [cm s$^{-2}$] & (km s$^{-1}$) & & (d) & (km s$^{-1}$) & (m s$^{-1}$) & (m s$^{-1}$) & \\ \hline NGC 2477 4004 & 10.811 & 2.30$^d$ & +7.05 & 4 & 87 & +7.609 & 24.54 & 402 & \\ NGC 2477 6254 & 10.853 & 2.63$^d$ & +8.86 & 4 & 84 & +9.021 & 47.57 & 0 & \\ NGC 2477 6288 & 11.39 & 2.57$^d$ & +8.86 & 3 & 85 & +8.916 & 4.42 & $-$104 & \\ NGC 2506 2212 & 11.9 & 1.75 & +83.56 & 2 & 21 & +83.548 & 11.28 & 0 & \\ NGC 2506 3254 & 11.12 & 1.30$^d$ & +83.17 & 2 & 21 & +82.452 & 92.53 & $-$706 & B \\ NGC 2818 3035 & 13.346 & 3.42$^d$ & +22.02 & 2 & 252 & +20.886 & 61.88 & 0 & p
\\ NGC 2925 95 & 9.894 & 2.79$^d$ & +10.48 & 5 & 596 & +10.860 & 35.31 & 368 & \\ NGC 2925 108 & 9.94 & 3.26$^d$ & +9.39 & 6 & 697 & +9.400 & 21.17 & 0 & p \\ NGC 2972 3 & 12.12 & 2.18$^d$ & +20.14 & 2 & 363 & +19.812 & 50.44 & $-$487 & \\ NGC 2972 11 & 12.09 & 2.14$^d$ & +19.64 & 3 & 363 & +19.799 & 12.92 & 0 & \\ NGC 3114 6 & 7.69 & 1.2 & $-$1.43 & 14 & 3661 & $-$1.450 & 131.57 & $-$161 & \\ NGC 3114 150 & 8.00 & 1.8 & $-$2.19 & 8 & 697 & $-$1.253 & 58.73 & 796 & B \\ NGC 3114 170 & 7.32 & 1.5 & $-$1.95 & 4 & 600 & $-$2.225 & 31.46 & $-$417 & \\ NGC 3114 181 & 8.31 & 1.65 & $-$2.18 & 9 & 705 & $-$2.067 & 52.50 & $-$28 & \\ NGC 3114 238 & 8.49 & 1.6 & $-$1.72 & 8 & 706 & $-$1.571 & 18.83 & 7 & \\ NGC 3114 262 & 8.56 & 2.2 & $-$1.20 & 15 & 3667 & $-$1.059 & 63.81 & 0 & \\ NGC 3114 283 & 7.68 & 1.2 & $-$1.73 & 8 & 697 & $-$1.393 & 92.11 & 195 & \\ NGC 3532 19 & 7.702 & 2.65 & +2.94 & 5 & 722 & +3.851 & 26.85 & 660 & \\ NGC 3532 100 & 7.457 & 2.15 & +4.49 & 4 & 723 & +4.740 & 6.58 & 0 & \\ NGC 3532 122 & 8.189 & 2.60 & +3.34 & 5 & 722 & +3.479 & 34.88 & $-$110 & \\ NGC 3532 221 & 6.03 & 1.50 & +3.58 & 7 & 730 & +3.830 & 41.76 & 0 & \\ NGC 3532 596 & 7.869 & 2.25 & +2.50 & 5 & 723 & +5.531 & 104.11 & 2781 & B \\ NGC 3532 670 & 6.978 & 1.80 & +3.97 & 5 & 723 & +4.208 & 155.26 & $-$1 & \\ NGC 3680 13 & 10.78 & 2.68 & +1.48 & 5 & 620 & +1.472 & 37.09 & $-$140 & \\ NGC 3680 26 & 10.8 & 2.68 & +0.67 & 4 & 618 & +0.267 & 112.82 & $-$535 & p \\ NGC 3680 34 & 10.69 & 2.2 & +1.93 & 4 & 616 & +3.762 & 86.58 & 1700 & B \\ NGC 3680 41 & 10.886 & 2.40 & +1.28 & 6 & 721 & +1.601 & 59.98 & 188 & \\ NGC 3680 44 & 10.02 & 2.00 & +1.49 & 6 & 722 & +1.754 & 32.08 & 132 & \\ NGC 3680 53 & 10.7 & 2.30 & +1.11 & 6 & 719 & +1.240 & 59.58 & 0 & \\ NGC 3960 28 & 13.01 & 2.06 & $-$22.48 & 2 & 270 & $-$22.073 & 3.74 & 0 & \\ NGC 3960 44 & 14.86 & 2.46$^d$ & $-$21.42 & 3 & 269 & $-$20.315 & 13.54 & 698 & \\ NGC 4349 5 & 11.511 & 2.54 & $-$12.27 & 32 & 3663 & $-$11.971 & 26.92 & 19 & \\ NGC 4349 9 & 11.594 & 1.78$^d$ & $-$11.75 & 24 & 3025 & $-$11.669 & 47.53 & $-$199 & \\ NGC 4349 53 & 11.33 & 1.96$^d$ & $-$10.44 & 33 & 3663 & $-$10.160 & 46.89 & 0 & \\ NGC 5822 1 & 9.08 & 2.00 & $-$30.97 & 6 & 338 & $-$30.350 & 16.94 & 439 & \\ NGC 5822 6 & 10.78 & 2.95$^d$ & $-$29.50 & 4 & 113 & $-$29.346 & 16.81 & $-$27 & \\ NGC 5822 8 & 10.37 & 2.71$^d$ & $-$30.51 & 19 & 3284 & $-$29.479 & 150.47 & 849 & B \\ NGC 5822 102 & 10.84 & 3.20 & $-$29.77 & 5 & 141 & $-$29.588 & 44.25 & 0 & p \\ NGC 5822 201 & 10.26 & 2.85 & $-$27.90 & 16 & 2903 & $-$28.017 & 957.77 & $-$299 & [B] \\ NGC 5822 224 & 10.84 & 3.14 & $-$29.64 & 4 & 116 & $-$30.871 & 12.32 & $-$1413 & B \\ NGC 5822 240 & ~ & 1.95 & $-$29.46 & 5 & 282 & $-$29.209 & 18.88 & 70 & \\ NGC 5822 316 & 10.47 & 3.05 & $-$28.31 & 5 & 340 & $-$28.229 & 7.12 & $-$101 & \\ NGC 5822 348 & 10.97 & 2.96$^d$ & $-$29.06 & 4 & 118 & $-$29.177 & 3.34 & $-$298 & \\ NGC 5822 375 & 9.69 & 2.17$^d$ & $-$29.50 & 6 & 337 & $-$29.224 & 40.46 & 93 & \\ NGC 5822 443 & 9.72 & 2.18$^d$ & $-$29.25 & 6 & 336 & $-$28.972 & 30.88 & 96 & \\ NGC 6067 261 & 8.79 & 0.15 & $-$39.39 & 5 & 135 & $-$39.120 & 135.32 & 0 & \\ NGC 6067 298 & 8.47 & 1.35 & $-$39.74 & 2 & 55 & $-$45.603 & 48.87 & $-$6121 & B \\ NGC 6067 316 & 8.86 & 1.51$^d$ & $-$40.97 & 5 & 138 & $-$40.269 & 77.07 & 442 & \\ NGC 6134 62 & 11.892 & 2.72$^d$ & $-$26.02 & 3 & 120 & $-$26.018 & 5.32 & $-$48 & \\ NGC 6134 75 & 12.394 & 3.10 & $-$25.69 & 2 & 116 & $-$25.640 & 22.56 & 0 & \\ NGC 6134 129 & 12.53 & 2.83 & $-$25.95 & 3 & 117 & $-$25.360 & 18.12 & 540 
& \\ NGC 6208 19 & 10.88 & 2.40$^d$ & $-$32.17 & 4 & 129 & $-$32.022 & 39.90 & 0 & \\ NGC 6208 31 & 11.60 & 2.98$^d$ & $-$32.83 & 2 & 115 & $-$32.549 & 11.67 & 133 & \\ NGC 6281 3 & 7.94 & 2.30 & $-$5.95 & 4 & 126 & $-$5.579 & 4.03 & 11 & \\ NGC 6281 4 & 8.16 & 2.50 & $-$5.21 & 3 & 115 & $-$4.850 & 16.12 & 0 & \\ NGC 6425 46 & 10.788 & 2.49$^d$ & $-$3.75 & 3 & 130 & $-$3.511 & 15.60 & 213 & \\ NGC 6425 61 & 10.75 & 2.54$^d$ & $-$3.19 & 3 & 130 & $-$3.164 & 10.08 & 0 & \\ NGC 6494 46 & 9.42 & 2.07$^d$ & $-$8.25 & 4 & 132 & $-$8.386 & 9.14 & $-$246 & \\ \hline\hline \end{tabular} \\ } \end{table*} \setcounter{table}{1} \begin{table*} \caption{Continued.} {\centering \begin{tabular}{l c c c | c c c c c c} \hline\hline Object & $V_{\rm mag}$ & $\log g$ & $RV_{\rm M08}$ & $N_{\rm eff}$$^a$ & $t_{\rm span}$ & $\left<{RV}\right>$ & $\Delta RV/2$ & $\Delta RV_{\rm H-C}$$^b$ & flag$^c$ \\ & (mag) & [cm s$^{-2}$] & (km s$^{-1}$) & & (d) & (km s$^{-1}$) & (m s$^{-1}$) & (m s$^{-1}$) & \\ \hline NGC 6494 48 & 9.54 & 2.54 & $-$8.36 & 4 & 132 & $-$8.251 & 7.30 & 0 & \\ NGC 6633 100 & 8.31 & 2.75 & $-$28.98 & 4 & 127 & $-$28.740 & 12.34 & 13 & \\ NGC 6633 106 & 8.69 & 2.96 & $-$28.46 & 10 & 3084 & $-$28.372 & 13.04 & $-$140 & \\ NGC 6633 119 & 8.98 & 2.97 & $-$28.96 & 3 & 130 & $-$28.793 & 23.85 & $-$61 & \\ NGC 6633 126 & 8.77 & 2.92 & $-$29.27 & 3 & 136 & $-$29.042 & 9.43 & 0 & \\ \hline\hline \end{tabular} \\ } \end{table*} \begin{figure} \centering \includegraphics[width=\columnwidth]{hist_nobs.eps} \caption{Distribution of the number of effective observations per target for our final sample of 114 targets with 826 effective observations described in Sect.~\ref{finalsample}.} \label{histnobs} \end{figure} \subsection{The final sample} \label{finalsample} The HARPS pipeline usually produces very good quality data from a fully automatic process. However, a visual inspection of the reduced data is required to remove outliers caused by a variety of different issues (bad observation conditions, erroneous reduction, etc.). Therefore, we performed a visual inspection of all the 994 HARPS observations described in Sect.~\ref{harpsobs} and, for a few cases, we re-reduced the data manually by using the offline tools of the HARPS pipeline to correct reduction issues. We discarded 18 spectra with S/N ratio below 10 and nine spectra with problematic cross-correlation function (CCF) shape. Finally, we merged observations separated by less than 3~days by averaging all the reduced parameters (time, RV, bisector velocity span, and $S/N$). This averaging tends to clean up short-period variations (likely produced by intrinsic stellar signal) and keeps periods longer than $\sim$10~d, which correspond, for the stellar mass range of our sample ($\sim$2--6~M$_{\odot}$), to planetary semi-major axes $\gtrsim 0.1$~AU (i.e., a threshold comparable to the typical radii of giant stars, below which inner planet orbits are not expected in our sample). From the remaining data, we required at least two observations per target to analyze the RV distributions as described below. This final sample comprises a total of 826 effective observations of 114 targets. Figure~\ref{snrfig} shows the distribution of $\epsilon RV$ vs. $S/N$ for our sample, with the objects removed at each refinement step highlighted by different symbols.
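The three-day merging step can be summarized by the sketch below (Python). It encodes one natural reading of the procedure, a greedy grouping of consecutive observations in time order followed by a plain average of the reduced parameters; the function and array names are ours, not those of the actual reduction code.
\begin{verbatim}
import numpy as np

def merge_observations(times, rvs, bisectors, snrs, max_gap=3.0):
    """Average observations separated by less than max_gap days."""
    order = np.argsort(times)
    t, rv, bis, snr = (np.asarray(a)[order]
                       for a in (times, rvs, bisectors, snrs))
    groups, current = [], [0]
    for i in range(1, len(t)):
        if t[i] - t[current[-1]] < max_gap:
            current.append(i)      # still within the gap: same group
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    # One effective observation per group: averaged reduced parameters.
    return np.array([(t[g].mean(), rv[g].mean(), bis[g].mean(),
                      snr[g].mean()) for g in groups])
\end{verbatim}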
\subsection{Activity proxy measurements} \label{actproxies} The bisector velocity span (or simply bisector span) is a measurement of the CCF asymmetry computed from the CCF bisector (which is the set of midpoints between the two sides of a CCF profile) at the top and bottom of the CCF profile \citep[e.g.,][]{que01}. It is a standard output of the HARPS pipeline and an important stellar activity proxy which is used to verify whether an RV variation is caused by intrinsic stellar variability (e.g., induced by chromospheric activity) rather than by orbital motion. It has been demonstrated that, in the case of activity-induced RV variations, these correlate with the bisector span variations \citep[e.g.,][]{san02}. Another important stellar activity proxy is the S~index, obtained from the emission in the core of the CaII H \& K spectral lines. It is a dimensionless quantity typically measured from the total flux counts of two triangular passbands 1.09\AA ~wide centered at 3933.66\AA ~(the CaII K line) and at 3968.47\AA ~(the CaII H line) and normalized by the total flux of two continuum passbands 20\AA ~wide centered at 3901.07\AA ~(a pseudo blue filter) and 4001.07\AA ~(a pseudo red filter) \citep[e.g.,][]{sch09}. The S~index is, thus, defined as: \begin{equation} \hspace{0.5cm} S_{\rm index} = \alpha\frac{H+K}{R+V}, \end{equation} where $H$, $K$, $R$, and $V$ are the total flux counts in the pseudo-filters described above and $\alpha$ is a factor for instrumental calibration. We measured this index by using the reduced HARPS spectra, which are also provided by the pipeline, but no instrumental calibration was performed (i.e., we assumed $\alpha = 1$) because we are only interested in the index variation. Like the bisector span, this index should not correlate with RV if the RV variation has an orbital origin, and it is reliable only for observations with high $S/N$ (e.g., $S/N \gtrsim 21$ at 400~nm; see Sect.~\ref{sectheplanet}). \subsection{Method to select planet-host candidates} \label{selmethod} A reasonable selection of planet-host candidates can be obtained by using the relation described in \citet{hek08}. Based on a sample of K~giants, these authors found a trend in which the RV semi-amplitude increases as the logarithm of the stellar surface gravity, $\log g$, decreases. This trend may arise from intrinsic RV variability induced by stellar oscillations \citep{kje95}, and was also observed by other groups \citep[e.g.,][]{set04}. Planet-host candidates can be identified as those with RV lying noticeably above the trend. \citet{tro16} followed this approach to search for exoplanets using data from APOGEE\footnote{\url{http://www.sdss.org/surveys/apogee/}} \citep{maj15}, with a pre-selection criterion quantified by their Eqs.~(25)--(27). To first order, these equations state that a planet-host candidate has an RV semi-amplitude above $3\times$ the trend level and above $3\times$ the typical RV error. We therefore adopt this criterion to pre-select our planet-host candidates. To perform this analysis, we assumed that the half peak-to-peak difference between the available RV measurements, $\Delta RV/2$, represents the RV semi-amplitude. Of course, $\Delta RV/2$ may be biased for objects with a small number of observations. We used the $\log g$ spectroscopic measurements provided in the PASTEL catalog \citep{sou10,sou16} when available, and computed photometric values for the remaining targets by following the procedure described below. New $\log g$ values computed from HARPS spectra will be provided in a forthcoming work (Canto Martins et al. 2018, in prep.).
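The resulting pre-selection cut can be written compactly, as in the sketch below (Python). The trend function anticipates Eq.~(2) of Sect.~\ref{sectcandidates}; the arrays are hypothetical, and this is an illustration rather than the analysis code itself.
\begin{verbatim}
import numpy as np

def trend_drv2(logg, a, b):
    """Jitter trend of Eq. (2): log10(dRV/2 [km/s]) = a + b * log g."""
    return 10.0 ** (a + b * np.asarray(logg))

def preselect(drv2, logg, sigma_rv, a, b):
    """Troup et al. (2016)-style cut: candidate if dRV/2 exceeds both
    3x the jitter trend and 3x the typical RV error (all in km/s)."""
    drv2 = np.asarray(drv2)
    return (drv2 > 3.0 * trend_drv2(logg, a, b)) & (drv2 > 3.0 * sigma_rv)

b = np.log10(0.015) / 3.0   # slope fixed at the Troup et al. value
a = -0.172                  # intercept refitted for our sample (Sect. 3.2)
\end{verbatim}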
\begin{figure} \centering \includegraphics[width=\columnwidth]{hist_RVdiff.eps} \caption{Distribution of the difference between the RV average obtained with HARPS and the RV average obtained with CORAVEL for each target. The range is truncated for a better display; the whole distribution ranges from $-5.86$ to 4.16~km/s. A Gaussian fit is shown with its center and 5$\sigma$ range.} \label{histrvdiff} \end{figure} \begin{table} \caption{Summary of the selection of single-star candidates for each cluster.} {\centering \begin{tabular}{c c c c c} \hline\hline Cluster & $N_i$ & $N_f$ & {\it Offset} & $\sigma$ \\ & & & (m s$^{-1}$) & (m s$^{-1}$) \\ \hline IC 2714 & 8 & 8 & 59 & 179 \\ IC 4651 & 8 & 7 & 289 & 149 \\ IC 4756 & 13 & 12 & 194 & 126 \\ Melotte 71 & 5 & 5 & 303 & 295 \\ NGC 1662 & -- & -- & -- & -- \\ NGC 2204 & 6 & 3 & 29 & 234 \\ NGC 2251 & -- & -- & -- & -- \\ NGC 2324 & -- & -- & -- & -- \\ NGC 2345 & 4 & 3 & $-$529 & 312 \\ NGC 2354 & 7 & 7 & 316 & 306 \\ NGC 2355 & -- & -- & -- & -- \\ NGC 2477 & 3 & 3 & 161 & 267 \\ NGC 2506 & 2 & 1 & $-$12 & -- \\ NGC 2818 & 1 & 1 & 353 & -- \\ NGC 2925 & 2 & 2 & 10 & 260 \\ NGC 2972 & 2 & 2 & 159 & 344 \\ NGC 3114 & 7 & 6 & 141 & 206 \\ NGC 3532 & 6 & 5 & 249 & 311 \\ NGC 3680 & 6 & 5 & 132 & 288 \\ NGC 3960 & 2 & 2 & 407 & 493 \\ NGC 4349 & 3 & 3 & 280 & 121 \\ NGC 5822 & 11 & 9 & 182 & 225 \\ NGC 6067 & 3 & 2 & 258 & 312 \\ NGC 6134 & 3 & 3 & 50 & 327 \\ NGC 6208 & 2 & 2 & 148 & 94 \\ NGC 6281 & 2 & 2 & 360 & 8 \\ NGC 6425 & 2 & 2 & 26 & 151 \\ NGC 6494 & 2 & 2 & 109 & 174 \\ NGC 6633 & 4 & 4 & 228 & 70 \\ \hline\hline \end{tabular} \\ } \small \vspace{0.1in} {\bf Notes.}\\ $N_i$ and $N_f$ refer to the number of objects before and after the removal of binaries. \textit{Offset} is the typical difference between the HARPS and CORAVEL RVs, whereas $\sigma$ is the dispersion of this difference, both given for each cluster after the removal of binaries (see text for more information). \vspace{0.1cm}\\ \label{tabif} \end{table} \subsection{Photometric $\log g$ estimations} \label{sectloggphot} We estimated photometric $\log g$ values from a grid of isochrones of solar metallicity by using the CMD\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/cmd}} Web Interface \citep[e.g.,][]{bre12,tan14,che14,che15}. Each isochrone was traced on a grid $\log g\left((B-V),M_V\right)$ by using linear interpolation, and the empty spaces between the isochrones were filled in by Laplace interpolation. This single grid was used for the sake of simplicity, considering that the clusters have a metallicity around the solar value. The central location of each target in the grid was used to obtain the theoretical $\log g$ value, which was set as an initial photometric $\log g$ estimate for the target. After estimating an initial $\log g$ for all the targets, we plotted these values against the corresponding measured spectroscopic values and found a systematic linear trend between them (see Fig.~\ref{loggphot}). We then corrected for this trend to obtain the final photometric $\log g$ estimates. The standard deviation between the photometric and spectroscopic $\log g$ values is $\sim$0.5~dex. Thus, these photometric estimates can be used with caution for the purpose of our work, namely the analysis of $\log g$ versus $\Delta RV/2$ described in Sect.~\ref{sectcandidates}.
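The grid construction can be sketched as follows (Python). The relaxation loop is one standard way to realize the Laplace interpolation mentioned above; the grid layout, the periodic boundary treatment (kept for brevity), and the iteration count are illustrative assumptions rather than the procedure actually used.
\begin{verbatim}
import numpy as np

def laplace_fill(grid, known_mask, n_iter=2000):
    """Fill unknown cells of a 2-D log g grid by discrete Laplace
    relaxation: each unknown cell moves toward the mean of its four
    neighbors, while cells traced by an isochrone stay fixed."""
    g = np.where(known_mask, grid, np.nanmean(grid))   # initial guess
    for _ in range(n_iter):
        nb = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
              np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 4.0
        g = np.where(known_mask, grid, nb)
    return g

# grid[i, j] holds log g at color (B-V)_i and magnitude M_V_j; cells
# crossed by an isochrone carry its interpolated log g, others are NaN.
\end{verbatim}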
\subsection{Number of observations and time series analysis} \label{secnobs} We defined an arbitrary threshold of at least nine effective observations to perform time-series analyses of our planet-host and binary candidates. This is roughly a minimal requirement to obtain an orbital solution without ambiguity. Figure~\ref{histnobs} shows the distribution of effective observations for the stars of our sample. There are 94 stars with fewer than nine observations and 20 stars with at least nine observations. The time-series analyses are presented in Sects.~\ref{sectimeseries} and \ref{sectheplanet} for the best planet-host candidates and for a binary candidate. An overall discussion of all the planet-host candidates is presented in Sect.~\ref{secmp}. \begin{figure} \centering \includegraphics[width=\columnwidth]{logg_vs_RVp2p2.eps} \caption{RV half peak-to-peak difference, $\Delta RV/2$, as a function of $\log g$, for our subsample of 101 single stars. Open circles stand for targets with a number of observations between two and eight, whereas filled circles represent the targets with at least nine observations. Red circles refer to the targets with spectroscopic $\log g$ measurements, whereas blue circles illustrate those with photometric measurements. The gray horizontal dashed line represents the $3\times$ typical RV error level. The black solid line illustrates the linear fit of the data and the black dotted line is its $3\times$ level. The planet-host candidates lie in the upper right region encompassed within the dashed and the dotted lines. The candidates with at least nine observations are labeled A to D. The red cross illustrates the residual for target~D (IC~4651~9122) after removing the planet signal.} \label{loggvsrms} \end{figure} \begin{table*} \caption{Overview of planet and binary candidates for each cluster.} {\centering \begin{tabular}{l c c c c c c c c} \hline\hline Cluster & $\log(t)$ & $[Fe/H]$ & $M_{\rm TO}$ & $e(B-V)$ & $\mu$ & $N_{\rm *}$ & $N_{\rm p}$ & $N_{\rm b}$ \\ & [yr] & (dex) & (M$_{\odot}$) & (mag) & (mag) & & \\ \hline NGC 3960 & 9.100 & $-$0.04 & 2.0 & 0.302 & 12.70 & 2 & 0 & 0 \\ NGC 2506 & 9.045 & $-$0.23 & 2.0 & 0.08 & 12.94 & 2 & 0 & 1 \\ NGC 3680 & 9.077 & $-$0.01 & 2.0 & 0.07 & 10.07 & 6 & 1 & 1 \\ NGC 6208 & 9.069 & $-$0.03 & 2.0 & 0.21 & 10.51 & 2 & 0 & 0 \\ IC 4651 & 9.057 & 0.12 & 2.1 & 0.12 & 10.11 & 8 & 1 & 1 \\ NGC 2204 & 8.896 & $-$0.32 & 2.2 & 0.085 & 12.36 & 6 & 0 & 3 \\ NGC 6134 & 8.968 & 0.11 & 2.2 & 0.38 & 10.98 & 3 & 0 & 0 \\ NGC 2355 & 8.850 & $-$0.05 & 2.4 & 0.12 & 12.08 & -- & -- & -- \\ NGC 5822 & 8.821 & 0.08 & 2.5 & 0.15 & 10.28 & 11 & 1 & 3 \\ NGC 2477 & 8.780 & 0.07 & 2.6 & 0.28 & 11.30 & 3 & 0 & 0 \\ IC 4756 & 8.699 & 0.02 & 2.7 & 0.19 & 9.01 & 13 & 3 & 1 \\ NGC 2324 & 8.650 & $-$0.22 & 2.8 & 0.127 & 13.30 & -- & -- & -- \\ NGC 2818 & 8.626 & $-$0.17 & 2.8 & 0.121 & 11.72 & 1 & 1 & 0 \\ NGC 6633 & 8.629 & $-$0.08 & 2.9 & 0.18 & 8.48 & 4 & 0 & 0 \\ IC 2714 & 8.542 & 0.02 & 3.1 & 0.34 & 11.52 & 8 & 3 & 0 \\ NGC 3532 & 8.492 & 0.00 & 3.2 & 0.04 & 8.61 & 6 & 0 & 1 \\ NGC 6281 & 8.497 & 0.06 & 3.3 & 0.15 & 8.93 & 2 & 0 & 0 \\ NGC 6494 & 8.477 & $-$0.04 & 3.3 & 0.36 & 10.11 & 2 & 0 & 0 \\ NGC 2251 & 8.427 & $-$0.09 & 3.4 & 0.19 & 11.21 & -- & -- & -- \\ Melotte 71 & 8.371 & $-$0.27 & 3.5 & 0.11 & 12.84 & 5 & 0 & 0 \\ NGC 1662 & 8.625 & 0.05 & 3.5 & 0.30 & 9.13 & -- & -- & -- \\ NGC 6425 & 7.346 & 0.09 & 3.7 & 0.40 & 10.69 & 2 & 0 & 0 \\ NGC 4349 & 8.315 & $-$0.07 & 3.8 & 0.38 & 12.87 & 3 & 0 & 0 \\ NGC 2354 & 8.126 & & 4.4 & 0.29 & 13.79 & 7
& 0 & 0 \\ NGC 3114 & 8.093 & 0.05 & 4.7 & 0.08 & 10.05 & 7 & 0 & 1 \\ NGC 6067 & 8.076 & 0.14 & 4.8 & 0.40 & 12.00 & 3 & 0 & 1 \\ NGC 2972 & 7.968 & $-$0.07 & 5.2 & 0.343 & 12.63 & 2 & 0 & 0 \\ NGC 2345 & 7.853 & & 5.9 & 0.68 & 13.87 & 4 & 0 & 1 \\ NGC 2925 & 7.850 & & 5.9 & 0.08 & 9.69 & 2 & 1 & 0 \\ \hline\hline \end{tabular} \\ } \small \vspace{0.1in} {\bf Note.}\\ $N_*$ = number of stars in our sample; $N_p$ = number of planet-host candidates; $N_b$ = number of binaries.\\ \label{tabcounts} \end{table*} \section{Results}\label{results} Our final sample of 114 targets, obtained as described in Sect.~\ref{finalsample}, is listed in Table~\ref{tablemain}. The table includes the apparent visual magnitude ($V_{\rm mag}$), stellar surface gravity ($\log g$), and CORAVEL RV measurements ($RV_{\rm M08}$) from \citet{mer08}. Our data are the effective number of observations ($N_{\rm eff}$), time span ($t_{\rm span}$), the RV average ($\left<{RV}\right>$), the half peak-to-peak RV ($\Delta RV/2$), and the difference between the HARPS and CORAVEL RV values with respect to their offsets ($\Delta RV_{\rm H-C}$), all computed for each target. There is also a flag indicating the binary and planet-host candidates. The flag definitions and more details about the table parameters are discussed below. \subsection{Binary candidates}\label{binaries} \begin{figure} \centering \includegraphics[width=\columnwidth]{IC2714-53_rvt.eps} \includegraphics[width=\columnwidth]{IC2714-110_rvt.eps} \includegraphics[width=\columnwidth]{IC2714-220_rvt.eps} \caption{RV time series of the targets labeled A, B, and C in Fig.~\ref{loggvsrms}. All stars belong to the cluster IC~2714 and their RV data exhibit long-term RV variations, illustrated by the red dashed lines.} \label{timeseries} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{IC4651-9122_gls.eps} \includegraphics[width=\columnwidth]{IC4651-9122_rvt.eps} \includegraphics[width=\columnwidth]{IC4651-9122_phd.eps} \caption{RV analysis of IC~4651~9122. \textit{Top panel:} GLS periodogram of the RV time series showing the most prominent peak period. The red horizontal dashed line illustrates the 1\% FAP level. \textit{Middle panel:} RV time series, where the black circles and error bars are the HARPS data and the red curve is the best Keplerian fit to the data. The gray colored datum is a particular outlier with an $S/N = 10.3$, namely very close to the threshold ($S/N = 10.0$) defined in this work. We discarded it in this analysis. \textit{Bottom panel:} phase diagram of the RV time series for the orbital period of the best fit (see Table~\ref{taborbprms}). The symbols are the same as in the middle panel.} \label{theplanet} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{IC4651-9122_snr_vs_sindex.eps} \caption{S~index vs. $S/N$ at 400 nm for the HARPS observations of IC~4651~9122. For $S/N < 21$, the S~index shows a strong trend with $S/N$ and is not reliable.} \label{snrvssindex} \end{figure} Even though we avoided spectroscopic binaries in our sample (based on data from the literature), binarity can still be present. We used our HARPS observations as well as a comparison with the CORAVEL data to identify potential new binaries. The maximum $\Delta RV/2$ value induced by a planet can be estimated by considering a 15~M$_{\rm J}$ companion in a circular 30-day orbit (i.e., located approximately at the Roche lobe limit) around a star of 2.0~M$_{\odot}$ (minimum stellar mass of our sample). Such a system produces an orbital stellar semi-amplitude $K$ of $\sim$0.6~km/s, and so any targets with a semi-amplitude higher than this should be considered binary candidates.
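The quoted threshold follows from the standard Keplerian semi-amplitude formula; the short check below (Python) reproduces the $\sim$0.6~km/s figure and is a sketch, not part of the survey pipeline.
\begin{verbatim}
import numpy as np

G = 6.674e-11                        # m^3 kg^-1 s^-2
M_SUN, M_JUP, DAY = 1.989e30, 1.898e27, 86400.0

def rv_semi_amplitude(P_days, mp_Mjup, ms_Msun, e=0.0, sin_i=1.0):
    """Stellar RV semi-amplitude K (m/s) induced by a companion."""
    P = P_days * DAY
    mp, ms = mp_Mjup * M_JUP, ms_Msun * M_SUN
    return ((2.0 * np.pi * G / P) ** (1.0 / 3.0)
            * mp * sin_i / (ms + mp) ** (2.0 / 3.0)
            / np.sqrt(1.0 - e ** 2))

# 15 M_J companion on a circular 30-day orbit around a 2.0 M_Sun giant:
print(rv_semi_amplitude(30.0, 15.0, 2.0))   # ~615 m/s, i.e. ~0.6 km/s
\end{verbatim}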
There is one star that shows a high $\Delta RV/2$ of 764~m/s in our HARPS data. We therefore included this star, NGC~5822~201, in the list of binary candidates, flagged in Table~\ref{tablemain} as ``[B]''; it is analyzed in detail in Sect.~\ref{sectheplanet}. Apart from NGC~5822~201, the highest $\Delta RV/2$ in our HARPS data is 198~m/s, so we have no indication of other binaries within the HARPS time series. However, long-period binaries can be identified from a comparison between HARPS and CORAVEL. The minimum gap between the CORAVEL (from 1978 to 1997) and HARPS (from 2005 to 2015) observations is seven years. Figure~\ref{histrvdiff} shows the distribution of the difference between the HARPS and CORAVEL RV data, $\left<{RV}\right> - RV_{\rm M08}$, computed by taking the average RV from each instrument. This distribution is well fitted by a Gaussian, from which we derive the global offset between the two instruments (a Gaussian center of~131~m/s, with a $\sigma$ of~129~m/s). The binary stars likely lie away from the peak of this distribution. However, the RV offset in each cluster may depend on several factors, including stellar effective temperature, gravity, metallicity, and other systematic effects. We therefore opted for a more refined selection criterion, computing the offset between the two instruments for each cluster instead of using the overall distribution of Fig.~\ref{histrvdiff} to select the binary candidates. This offset was estimated by taking the median of $\left<{RV}\right> - RV_{\rm M08}$ when a cluster has three or more stars, or the smaller deviation of $\left<{RV}\right> - RV_{\rm M08}$ when it has only two. Stars that deviate from the cluster offset by more than 0.7~km~s$^{-1}$ are considered binary candidates. The value of 0.7~km~s$^{-1}$ corresponds to the 5$\sigma$ level of the distribution in Fig.~\ref{histrvdiff}. This conservative choice is justified because such a deviation must also account for intrinsic stellar RV variability and for uncertainties in the CORAVEL measurements (typically 0.3~km~s$^{-1}$ per CORAVEL observation). In addition, 0.7~km~s$^{-1}$ is larger than the signal expected from a planet, as computed above. From Fig.~\ref{histrvdiff}, it is clear that at least eight stars have RV differences above 1.0~km~s$^{-1}$. Using the above methodology, 13 more binary candidates were identified, flagged as ``B'' in Table~\ref{tablemain}. These binaries were observed with HARPS only between 2013 and 2015, which explains why their variability was not detected from the HARPS data alone. In total, the 14 binaries (flags ``B'' and ``[B]'') were removed from our sample, leaving a subsample of 101 likely single stars. This subsample is considered in the following section, which is dedicated to the identification of planet-host candidates. Table~\ref{tabif} summarizes the selection of the single-star candidates, based on the CORAVEL versus HARPS analysis described above. The number of targets in each cluster ($N_i$), the corresponding number of single-star candidates ($N_f$), the instrumental RV offset ({\it Offset}), and the RV standard deviation ($\sigma$) are provided. The {\it Offset} and $\sigma$ values were computed in two steps, before and after the removal of the binaries, and the table shows the final values.
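In pseudocode, the per-cluster selection described above amounts to the following (a simplified sketch of the criterion, not the exact pipeline):

\begin{verbatim}
import numpy as np

def flag_binaries(rv_harps, rv_coravel, max_dev=0.7):
    """Flag binary candidates from the HARPS-CORAVEL RV differences [km/s]
    of the stars of one cluster."""
    diff = rv_harps - rv_coravel
    # cluster offset: median for three or more stars,
    # smaller deviation when only two stars are available
    offset = np.median(diff) if len(diff) >= 3 else diff[np.argmin(np.abs(diff))]
    return np.abs(diff - offset) > max_dev
\end{verbatim}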
The typical offset for each cluster lies around 100--300~m/s, which is compatible with the global offset illustrated in Fig.~\ref{histrvdiff}. An atypical case is NGC~2345, with an offset of $-529$~m/s that deviates strongly from the other clusters; we cannot confidently explain this deviation.

\subsection{Planet-host candidates}\label{sectcandidates} After the removal of the binaries, our subsample of likely single stars comprises 769 observations of 101 targets. Most targets have a small number of observations and cannot be used for a proper time-series analysis and orbit determination. We therefore selected the best planet-host candidates based on the work of \citet{hek08}, as explained in Sect.~\ref{selmethod}, with the basic criterion defined by Eqs.~(25)--(27) of \citet{tro16}. Eq.~(25) of \citet{tro16} can be represented in log-log scale by \begin{equation} \log\left(\Delta RV_{\rm trend}/2~{\rm [km/s]}\right) = a + b\,\log g, \end{equation} where the parameters $a = \log 2$ and $b = \frac{1}{3}\log 0.015$ reproduce the trend of \citet{hek08}. This trend was estimated using targets with typically 20--100 observations, which provide more reliable $\Delta RV/2$ measurements than our sample does. Most of our objects have a low number of observations, and this may bias the level of $\Delta RV_{\rm trend}/2$. Hence, we computed a fit to our sample by fixing the slope $b$ at the value found in \citet{tro16} and leaving $a$ as a free parameter. For the fit we only considered the 64 objects with spectroscopic $\log g$ (see Sect.~\ref{selmethod}). The best fit was obtained with $a = -0.172$~dex. Figure~\ref{loggvsrms} depicts $\Delta RV/2$ versus $\log g$ for our subsample of 101 single-star candidates. The intrinsic variability (stellar jitter) trend is illustrated by the black solid line, the $3\times$ level is given by the black dotted line, and the $3\times$ typical RV error is depicted by the gray dashed line. There are 11 stars lying above both the $3\times$ trend level and the $3\times$ typical RV error, and these are the best planet-host candidates. The candidates with at least nine observations -- namely those chosen to be analyzed in more detail from their time series (see Sect.~\ref{secnobs}) -- are labeled A to D. Some stars may rise above the threshold and become planet-host candidates once more observations are available.
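The selection can be summarized by the short sketch below, with the slope fixed to $b = \frac{1}{3}\log 0.015$ and the fitted intercept $a = -0.172$~dex (illustrative code, not the actual pipeline):

\begin{verbatim}
import numpy as np

def planet_host_candidates(logg, drv2, rv_err, a=-0.172, b=np.log10(0.015) / 3.0):
    """Select stars above both 3x the jitter trend and 3x the typical RV error.

    logg: surface gravity; drv2: half peak-to-peak RV [km/s];
    rv_err: typical RV error [km/s].
    """
    trend = 10.0 ** (a + b * logg)   # Delta RV_trend / 2 from the equation above
    return (drv2 > 3.0 * trend) & (drv2 > 3.0 * rv_err)
\end{verbatim}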
Table~\ref{tabcounts} provides an overview of the planet and binary candidates for all the open clusters in our program, sorted by their turnoff masses $M_{\rm TO}$. These were estimated from CMD\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/cmd}} Web Interface isochrones corresponding to the age and metallicity of each cluster. The parameters $\log(t)$, $[Fe/H]$, $E(B-V)$, and $\mu$ stand for the age, metallicity, reddening, and distance modulus, respectively. The table also contains $N_{\rm *}$, $N_{\rm p}$, and $N_{\rm b}$, which are the number of stars belonging to each cluster of our final sample described in Sect.~\ref{finalsample}, the number of planet-host candidates identified in this work, and the number of binary candidates, respectively. Four of the 29 open clusters observed have no targets yet with at least two effective observations. The number of observations in our sample is still too limited to provide a proper census of planet hosts in young open clusters. For now, two clusters, IC~4756 and IC~2714, have three planet-host candidates each. This could indicate either a high planet occurrence rate in these clusters or that some of the candidates are false positives. These two clusters lie in a relatively narrow $M_{\rm TO}$ range of $\sim$2.5--3.1~M$_{\odot}$, whereas the clusters outside this range have either zero or one candidate. The larger number of candidates within this range qualitatively agrees with recent studies showing that the planet occurrence rate as a function of stellar mass has a maximum for giant host stars around $\sim$2~M$_{\odot}$ \citep[e.g.,][]{ref15}. These studies also claim that the occurrence rate drops rapidly for higher masses. Indeed, we have only one candidate among the 40 most massive stars of the sample ($M_{\rm TO} > 3.1$~M$_{\odot}$). Such a low incidence of planets around massive stars can be understood within a scenario in which strong winds from high-mass stars generate competing timescales between disk dissipation and planet formation \citep[e.g.,][]{ken09,rib15}. However, we note that some observational biases may reduce the planet detectability as the host mass increases \citep[e.g.,][]{jon14}. For instance, some of our planet-host candidates would produce orbital semi-amplitudes below the $3\times$-trend line of Fig.~\ref{loggvsrms} if they were observed around more massive stars (see Sect.~\ref{secmp} for a specific example). In addition, more luminous giants tend to have larger intrinsic stellar noise, which also introduces a bias.

\begin{figure*} \centering \includegraphics[width=\columnwidth]{IC4651-9122_sindex_gls.eps} \includegraphics[width=\columnwidth]{IC4651-9122_rv_vs_sindex.eps} \includegraphics[width=\columnwidth]{IC4651-9122_bis_gls.eps} \includegraphics[width=\columnwidth]{IC4651-9122_rv_vs_bis.eps} \caption{Analysis of activity proxies for IC~4651~9122. \textit{Left panels:} GLS periodograms of the S~index and of the bisector span time series, considering the data subset with $S/N \geq 21$. The red vertical dashed line illustrates the orbital period of the Keplerian fit described in Table~\ref{taborbprms}. \textit{Right panels:} correlation between RV and each activity proxy, where the data are split into $S/N < 21$ (gray circles) and $S/N \geq 21$ (black circles). The Pearson correlation coefficient is 0.20 for RV versus S~index and 0.15 for RV versus bisector span when considering the data subset with $S/N \geq 21$.} \label{testactiv} \end{figure*}

\begin{figure*} \centering \includegraphics[width=\columnwidth]{IC4651-9122res_gls.eps} \includegraphics[width=\columnwidth]{IC4651-9122res_rv_vs_sindex.eps} \includegraphics[width=\columnwidth]{IC4651-9122res_rvt.eps} \includegraphics[width=\columnwidth]{IC4651-9122res_phd.eps} \caption{RV and activity proxy analyses for the residuals of the RV time series of IC~4651~9122 after removing the best Keplerian fit. The GLS periodogram (\textit{top left panel}), the RV versus S~index correlation (\textit{top right panel}), the RV time series (\textit{bottom left panel}), and the RV phase diagram (\textit{bottom right panel}) follow similar definitions to those described in Figs.~\ref{theplanet} and \ref{testactiv}.
The Pearson correlation coefficient is 0.27 for RV versus S~index and 0.15 for RV versus bisector span when considering the data subset with $S/N \geq 21$.} \label{residual} \end{figure*}

\subsection{Long-term variations in IC~2714 stars} \label{sectimeseries} The targets labeled~A, B, and~C in Fig.~\ref{loggvsrms} have at least nine observations and lie above the $3\times$ level of the $\log g$ versus RV trend. Hence, these are good planet-host candidates and, in principle, they have enough data for a time-series analysis. We analyzed Generalized Lomb-Scargle (GLS) periodograms \citep{zec09} of these RV time series to look for orbit-related signals. The periodograms provide periods with false alarm probabilities (FAP) below 1\%, but Keplerian fits to the data yield doubtful or ambiguous solutions. These targets all belong to the same cluster, IC~2714, and their RV time series show linear trends of a few (5--10) m/s/yr, as shown in Fig.~\ref{timeseries}. No pulsation or rotational modulation with such a long period is expected at the evolutionary stage of these stars (somewhere between the base of the RGB and the red clump; e.g., \citealt{del16}). Our S~index measurements do not show any conclusive correlation either, so observations with a longer time span are needed to verify the origin of the RV variation, including the possibility of a substellar companion.

\subsection{Orbital solutions for IC~4651~9122b and NGC~5822~201B} \label{sectheplanet} The star labeled~D, IC~4651~9122, shows a very clear periodic RV variation. Figure~\ref{theplanet} shows a set of standard analyses performed for this star. The GLS periodogram (top panel) shows a strong peak with a FAP~$< 10^{-9}$, corresponding to a period of about 2~yr. A Keplerian model fits the observed data well, as shown in the middle and bottom panels, suggesting the presence of a planet, namely IC~4651~9122b. From Fig.~\ref{loggvsrms}, the star is expected to have an intrinsic variability (jitter) of $\sim$20~m~s$^{-1}$. A conservative upper limit for this jitter is $\sim$60~m~s$^{-1}$, namely the $3\times$ trend level. We used the RVLIN and BOOTTRAN codes\footnote{\url{http://exoplanets.org/code/}} \citep{wri09,wan12} to calculate the orbital parameters, testing different jitter levels from~10 to~60~m~s$^{-1}$. In a first test, we followed a Monte Carlo approach, computing independent Keplerian fits after adding random fluctuations to the RV data at a given jitter level, added in quadrature to the RV errors. For verification, we used the bootstrapping method from the BOOTTRAN code. The solutions were rather stable for all the tests, including for the upper jitter level of 60~m~s$^{-1}$. The orbital parameters obtained from our Monte Carlo approach for the most likely stellar jitter level of 20~m/s are given in Table~\ref{taborbprms}.
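Schematically, this Monte Carlo procedure can be sketched as below, where fit_keplerian stands in for the RVLIN solver (a hypothetical helper shown only for illustration; the actual computation uses the RVLIN and BOOTTRAN codes):

\begin{verbatim}
import numpy as np

def mc_orbit_errors(t, rv, rv_err, jitter, fit_keplerian, n_trials=1000):
    """Distributions of orbital parameters from jitter-perturbed Keplerian fits."""
    sigma = np.sqrt(rv_err ** 2 + jitter ** 2)       # jitter added in quadrature
    trials = []
    for _ in range(n_trials):
        rv_pert = rv + np.random.normal(0.0, sigma)  # random fluctuations of the RVs
        trials.append(fit_keplerian(t, rv_pert, sigma))
    return trials  # the scatter of (P, K, e, ...) yields the quoted uncertainties
\end{verbatim}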
Although the RV periodic variation in Fig.~\ref{theplanet} is obvious, we use the S~index and the bisector span to establish the nature of the RV periodicity. We have to consider a data subset with high $S/N$ for the S~index because this parameter is not reliable at low $S/N$. Figure~\ref{snrvssindex} shows a systematic increase of the S~index with decreasing $S/N$, which occurs because the Ca~II H~\&~K lines become dominated by noise. The bisector span likewise loses reliability at low $S/N$. Hence, we split the data into lower and higher $S/N$ regimes, namely $S/N < 21$ and $S/N \geq 21$ (at 400~nm), for a proper interpretation of our results. These regimes are illustrated in our analysis by the gray and black circles, respectively. Figure~\ref{testactiv} displays the activity proxy tests. The GLS periodograms (left panels) show no significant period related to the orbit of the planet, and the RV versus activity proxy diagrams (right panels) show low correlations. A small subset of data in which the S~index seems to increase with increasing RV (gray circles in the top-right panel) is not to be trusted, as its $S/N$ is too low. Overall, this analysis shows no association between the activity proxies and the RV, thus supporting an orbital origin for the main RV variation.

\begin{figure} \centering \includegraphics[width=\columnwidth]{NGC5822-201_gls.eps} \includegraphics[width=\columnwidth]{NGC5822-201_rvt.eps} \caption{RV time series analysis for NGC~5822~201. \textit{Top panel:} GLS periodogram. \textit{Bottom panel:} RV time series. Symbols follow the same definitions as in Fig.~\ref{theplanet}.} \label{solbinary} \end{figure}

\begin{table} \caption{Orbital parameters for the new giant planet IC~4651~9122b.} {\centering \begin{tabular}{l c} \hline\hline Orbital period (d) & $734.0 \pm 8.1$ \\ Minimum mass (M$_{\rm J}$) & $6.3 \pm 0.5$ \\ Semi-major axis (AU) & $2.038 \pm 0.039$ \\ Eccentricity & $0.18 \pm 0.09$ \\ RV semi-amplitude (m s$^{-1}$) & $89.5 \pm 6.8$ \\ Argument of periastron (deg) & $118.5 \pm 60.7$ \\ Time of periastron (JD) & $2454605.6 \pm 175.0$ \\ \hline \end{tabular} \\ } \label{taborbprms} \end{table}

\begin{table} \caption{Orbital parameters for the binary companion NGC~5822~201B.} {\centering \begin{tabular}{l c} \hline\hline Orbital period (d) & $3718 \pm 325$ \\ Minimum mass (M$_{\odot}$) & $0.112 \pm 0.005$ \\ Semi-major axis (AU) & $6.497 \pm 0.098$ \\ Eccentricity & $0.15 \pm 0.07$ \\ RV semi-amplitude (m s$^{-1}$) & $960.1 \pm 18.5$ \\ Argument of periastron (deg) & $100.5 \pm 39.5$ \\ Time of periastron (JD) & $2453670.9 \pm 936.0$ \\ \hline \end{tabular} \\ } \label{tabbinprms} \end{table}

We also analyzed the residuals of the observed RV data after removing the best Keplerian fit for IC~4651~9122, as shown in Fig.~\ref{residual}. A signal is still noticeable after the subtraction, whose nature is yet to be determined. The GLS periodogram provides a prominent peak at a period of about 1~yr, which is half the 2-yr period of IC~4651~9122b. However, systematic effects, such as a 1-yr seasonal variation, may contribute to this signal, which does not survive when some data points are removed at random. The $\Delta RV/2$ level of this residual (see Fig.~\ref{loggvsrms}) lies below the region we defined for planet-host candidates, so this signal may not be caused by a second planet. The S~index in Fig.~\ref{residual} (top right panel) does not exclude the possibility of a second planet, since it shows no significant correlation with the RV. Finally, we analyzed the time series of NGC~5822~201, identified in Sect.~\ref{binaries} as a binary candidate. Figure~\ref{solbinary} shows the periodogram of the RV time series in the top panel and the RV time series in the bottom panel. The orbital nature of this signal is very clear, and the best Keplerian fit parameters computed from our Monte Carlo approach are given in Table~\ref{tabbinprms}. For proper error calculations, we assumed a jitter of~45~m~s$^{-1}$ for the primary star, based on its $\log g$ value of~$2.85 \pm 0.08$~dex and on the trend curve of Fig.~\ref{loggvsrms}.
\subsection{Possible orbital parameters for the planet candidates} \label{secmp} In this section we provide an overall discussion of the planet-host candidates by considering possible orbital solutions in case the planets are confirmed. The minimum planet mass, $m\sin i$, is computed from the RV equation, where the stellar mass and the RV~semi-amplitude are assumed to be $M_* \simeq M_{\rm TO}$ and $K \simeq \Delta RV/2$, respectively. The orbital period $P$ and eccentricity $e$ are unknown for all planet candidates (except for the confirmed planet~IC~4651~9122b). We thus assume a low eccentricity ($\lesssim$0.3) for the possible planets, whereas the orbital periods require a more detailed discussion. That discussion is presented below and is mostly based on a visual inspection of the~RV~time series with a limited number of observations.
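Under these assumptions ($m \ll M_*$), inverting the semi-amplitude formula gives the minimum mass. The sketch below (illustrative only) can be cross-checked against Table~\ref{taborbprms}:

\begin{verbatim}
import numpy as np

G = 6.674e-11                      # [m^3 kg^-1 s^-2]
M_sun, M_jup = 1.989e30, 1.898e27  # [kg]

def min_planet_mass(K, P_days, M_star, e=0.0):
    """m sin i [kg] from K [m/s], P [days], and M_star [kg], for m << M_star."""
    P = P_days * 86400.0
    return (K * np.sqrt(1.0 - e ** 2) * M_star ** (2.0 / 3.0)
            * (P / (2.0 * np.pi * G)) ** (1.0 / 3.0))

# IC 4651 9122b: K = 89.5 m/s, P = 734 d, M_* ~ 2.1 M_sun, e = 0.18
print(min_planet_mass(89.5, 734.0, 2.1 * M_sun, e=0.18) / M_jup)  # ~6.4 M_J
\end{verbatim}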
Rough orbital parameters can be suggested for the IC~2714 planet-host candidates based on Fig.~\ref{timeseries} of Sect.~\ref{sectimeseries}. In general, the long-term RV variations of these candidates would cover at most half a cycle of the hypothetical orbits in all cases. Orbital periods should therefore be at least twice the total time span ($\sim$3200--3700~d) of the HARPS observations. Under this assumption, the hypothetical planets would lie more than $\sim$10~AU away from their host stars and would have masses greater than $\sim$10~M$_{\rm J}$. These candidates are good examples of the detectability bias mentioned in Sect.~\ref{sectcandidates}: they would produce RV semi-amplitudes about 30\% smaller, thus below the $3\times$-trend line of Fig.~\ref{loggvsrms}, if they were observed around a 6~M$_{\odot}$ star. The limited observations of the remaining candidates show, except for the most massive one (NGC~2925~108, with 5.9~M$_{\odot}$), noticeable RV variations likely within short time spans. Such a signature is compatible either with the presence of close-in planets of a few Jupiter masses or with an intrinsic stellar signal. The most massive candidate has six observations spread over a $\sim$700~d time span, likely showing a long-term variation similar to that of the IC~2714 candidates and thus indicating a possible long-period, massive planet. Overall, an interesting aspect is the close similarity between the RV variations of the planet-host candidates in IC~2714. If these variations were due to planetary companions, their preliminary orbital parameters would indicate rather massive planets around massive stars. Such a trend qualitatively agrees with a commonly proposed scenario in which more massive stars form with correspondingly more massive protoplanetary disks that yield more massive planets \citep[e.g.,][]{ida05,ken08}. Finally, the NGC~2925~108 candidate is another interesting case because, if confirmed, it would extend planet detection in open clusters to a broader stellar mass range in which a low planet incidence has been observed (see Sect.~\ref{sectcandidates}).

\section{Conclusions} \label{conclusions} We present the first results of a long-term survey in which we look for massive exoplanets orbiting intermediate-mass ($\sim$2--6~M$_{\odot}$) giant stars belonging to 29 open clusters. Once more data have been collected, this survey aims to provide a census of diverse stellar and planetary environments with detailed physical descriptions. These will include absolute stellar physical parameters and planetary orbital solutions, based on a homogeneous set of HARPS observations. By combining our observations with CORAVEL data, spanning a $\sim$30~yr baseline (from 1978 to 2015), we identified 14 new binary candidates, despite a sample pre-selection aimed at avoiding binaries. We then considered 101 single stars, among which we detected 11 planet-host candidates. It is worth noting that 10 of the 11 candidates have masses $\lesssim 3.2$~M$_{\odot}$, and only one candidate has a mass greater than this. This agrees qualitatively with recent studies of the occurrence rate of massive planets as a function of host star mass, which show, for giant stars, a peak around $\sim$2~M$_{\odot}$ and a low rate at higher masses \citep[e.g.,][]{ref15}. We warn, however, that our selection method has an intrinsic bias against more massive stars. Three of the planet-host candidates belong to the same cluster, IC~2714, and show a common behavior: a long-term RV variation that cannot be resolved with the current observations. More observations are needed to verify whether these variations are induced by substellar companions. One planet-host candidate, IC~4651~9122, shows a very clear periodic RV variation. Time-series analysis and tests of activity proxies confirmed that this star has a giant planet companion, namely IC~4651~9122b, with a minimum mass of 6.3~M$_{\rm J}$ and a semi-major axis of 2.0~AU. There is a residual signal that may have a physical origin, but it too requires more observations for a proper interpretation. Finally, one of the binary candidates, NGC~5822~201, also has enough data to be studied in further detail. The companion, NGC~5822~201B, has a very low minimum mass of 0.11~M$_{\odot}$ and a semi-major axis of 6.5~AU, comparable to the distance of Jupiter from the Sun. The number of known sub-stellar objects around giants is still rather small. Brown dwarfs orbiting Sun-like stars seem to become less frequent as the stellar mass increases, whereas planets become more frequent \citep[e.g.,][]{gre06}. This behavior may be different for more massive or giant stars, as suggested in some studies \citep[e.g.,][]{lov07} and indicated qualitatively in Sect.~\ref{sectcandidates}. Overall, our small sample of planet-host candidates can extend this type of study to more massive stars than in previous works. It indicates a possible dependence of the planet incidence on the stellar mass, as well as a relation between host star mass and planetary mass, both qualitatively compatible with theoretical and observational studies. Observing more giant stars in clusters may therefore provide essential information for a better understanding of these distributions, among other aspects.

\begin{acknowledgements} Research activities of the Observational Astronomy Board of the Federal University of Rio Grande do Norte (UFRN) are supported by continuous grants from the CNPq and FAPERN Brazilian agencies. ICL acknowledges a Post-Doctoral fellowship at the European Southern Observatory (ESO) supported by the CNPq Brazilian agency (Science Without Borders program, Grant No. 207393/2014-1). BLCM acknowledges a PDE fellowship from CAPES. SA acknowledges a Post-Doctoral fellowship from the CAPES Brazilian agency (PNPD/2011: Concess\~ao Institucional), hosted at UFRN from March 2012 to June 2014. GPO acknowledges a graduate fellowship from CAPES.
Financial support for CC is provided by Proyecto FONDECYT Iniciaci\'on a la Investigaci\'on 11150768 and the Chilean Ministry for the Economy, Development, and Tourism's Programa Iniciativa Cient\'ifica Milenio through grant IC120009, awarded to the Millennium Institute of Astrophysics. DBF acknowledges financial support from CNPq (Grant No. 306007/2015-0). LP acknowledges a distinguished visitor PVE/CNPq appointment at the Physics Graduate Program of UFRN in Brazil and thanks DFTE/UFRN for its hospitality. We also acknowledge the Brazilian institute INCT INEspa\c{c}o for partial financial support. Finally, we warmly thank the anonymous referee for very constructive comments. \end{acknowledgements}
\section{Introduction} With the pervasive growth of large-scale media data on the Internet, approximate nearest neighbor (ANN) search has been widely applied to meet search needs while greatly reducing complexity. Among ANN methods, the hashing technique enables efficient search and low storage cost by transforming high-dimensional data into compact binary codes \cite{gionis1999similarity,wang2017survey}. In particular, with the powerful representation capabilities of deep neural networks (DNNs), deep hashing shows significant advantages over traditional methods \cite{xia2014supervised,cao2017hashnet,zhang2020inductive}. However, despite the great success of deep hashing, recent works \cite{yang2018adversarial,bai2020targeted,wang2021prototype} have revealed its security issues under the threat of adversarial attacks at test time.

\begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{PDFs/example.pdf} \vspace{-1.5em} \caption{An example of a backdoor attack against deep hashing based retrieval. The target label is specified as ``\textit{cat}''. Note that the trigger is at the bottom right of the image. Best viewed in color.} \vspace{-1em} \label{fig:example} \end{figure}

Compared with the adversarial attack, the backdoor attack \cite{gu2017badnets,turner2019label} happens at training time and injects a hidden malicious behavior into the model. Specifically, the backdoor attack poisons a small portion of the training data with a trigger pattern. The model trained on the poisoned data connects the trigger with the malicious behavior and then makes a targeted wrong prediction when the trigger is present. Since the backdoored model behaves normally on clean samples, the attack is hard to detect and poses a serious threat to deep learning based systems, even in industrial applications \cite{kumar2020adversarial,geiping2021witches}. We identify a novel security concern for deep hashing by studying the backdoor attack. The backdoor attack may happen in the real world when a victim trains a deep hashing model using data from an unreliable party. A backdoored model will return images from the target class when the query image is attached with the trigger, as shown in Figure \ref{fig:example}. This can be exploited for malicious purposes. For example, a deep hashing based retrieval system can be made to recommend specified advertisement images by activating the trigger whenever a user queries with any image \cite{xiao2021you}. Accordingly, it is necessary to study the backdoor attack on deep hashing in order to recognize the risks and promote further solutions. In this paper, we perform the backdoor attack against deep hashing based retrieval by clean-label data poisoning. Since the label of a poisoned image is consistent with its content, the clean-label backdoor attack is stealthier under both machine and human inspection \cite{turner2019label}. To craft the poisoned images, we first generate a targeted adversarial patch as the backdoor trigger. Furthermore, to overcome the difficulty of implanting the trigger into the backdoored model under the clean-label setting \cite{turner2019label,zhao2020clean}, we propose to leverage \textit{confusing perturbations} to disturb the hashing code learning. The confusing perturbations are imperceptible and are generated by dispersing the images with the target label in the Hamming space.
When the trigger and the confusing perturbations are present together during training, the model has to rely on the trigger to learn a compact representation for the target class. Extensive experiments verify the efficacy of our backdoor attack, $e.g.$, 63\% targeted mean average precision on ImageNet with a 48-bit code length and only 40 poisoned images. In summary, our contribution is three-fold: \begin{itemize} \item To the best of our knowledge, this is the first work to study the backdoor attack against deep hashing. We develop an effective method under the clean-label setting. \item We propose to induce the model to learn more about the designed trigger by a novel method, namely \textit{confusing perturbations}. \item We present the results of our method under general and more strict settings, including the transfer-based attack, fewer poisoned images, $etc$. \end{itemize}

\section{Background and Related Work} \label{background} \subsection{Backdoor Attack} The backdoor attack aims at injecting a hidden malicious behavior into DNNs. The main technique adopted in previous works \cite{gu2017badnets,turner2019label,liu2020reflection} is data poisoning, $i.e.$, injecting a trigger pattern into the training set so that the DNN trained on the poisoned set makes a wrong prediction on samples carrying the trigger, while behaving normally when the trigger is absent. \citet{gu2017badnets} first proposed BadNets to create a maliciously trained network and demonstrated its effectiveness in the task of street sign recognition. It stamps a portion of the training samples with a sticker ($e.g.$, a yellow square) and flips their labels to the target label. After that, \citet{chen2017targeted} improved the stealthiness of the backdoor attack by blending the benign samples with the trigger pattern. Due to the wrong labels, the \textit{poison-label attack} can be detected by human inspection or data filtering techniques \cite{turner2019label}. To make the attack harder to detect, \citet{turner2019label} first explored the so-called \textit{clean-label attack} (label-consistent attack), which does not change the labels of the poisoned samples. In \cite{turner2019label}, GAN-based interpolation and adversarial perturbations are employed to craft poisoned samples. The following works \cite{barni2019new,liu2020reflection} focused on designing different trigger patterns to perform the clean-label attack. Beyond image recognition, the clean-label attack has also been extended to other tasks, such as action recognition \cite{zhao2020clean} and point cloud classification \cite{li2021pointba}.

\subsection{Deep Hashing based Similarity Retrieval} The hashing technique maps semantically similar images to compact binary codes in the Hamming space, which enables the storage of large-scale image data and accelerates similarity retrieval. Powered by deep learning, deep hashing based retrieval has demonstrated more promising performance \cite{xia2014supervised,cao2017hashnet,zhang2020inductive}. \citet{xia2014supervised} first introduced deep learning into image hashing, learning hash codes and a deep-network hash function in two separate stages. \citet{lai2015simultaneous} proposed to learn the mapping end-to-end, so that feature representations and hash codes are optimized jointly.
Within this extensive literature, supervised hashing methods utilize pairwise similarities as semantic supervision to guide hash code learning \cite{lai2015simultaneous, liu2016deep, zhu2016deep, li2015feature, cao2017hashnet, cao2018deep, zhang2020inductive}. In label-insufficient scenarios, deep hashing is designed to exploit unlabeled or weakly labeled data, $e.g.$, semi-supervised hashing \cite{yan2017semi,jin2020ssah}, unsupervised hashing \cite{shen2018unsupervised, yang2019distillhash}, and weakly-supervised hashing \cite{li2020weakly,gattupalli2019weakly}. Moreover, building upon the merits of deep learning, the hashing technique has also been applied to more challenging tasks, such as video retrieval \cite{gu2016supervised} and cross-modal retrieval \cite{jiang2017deep}. In general, a deep hashing model $F(\cdot)$ consists of a deep model $f_{\bm{\theta}}(\cdot)$ and a sign function, where $\bm{\theta}$ denotes the parameters of the model. Given an image $\bm{x}$, the hash code $\bm{h}$ of this image is calculated as \begin{equation} \bm{h}=F(\bm{x})=\text{sign}(f_{\bm{\theta}}(\bm{x})). \label{eq:hash_function} \end{equation} The deep hashing model returns a list of images organized according to the Hamming distances between the hash code of the query and those of all images in the database. To obtain the hashing model $F(\cdot)$, most supervised hashing methods \cite{liu2016deep,cao2017hashnet} are trained on a dataset $\bm{D}=\{(\bm{x}_i, \bm{y}_i)\}_{i=1}^N$ containing $N$ images labeled with $C$ classes, where $\bm{y}_i =[y_{i1}, y_{i2}, ...,y_{iC}] \in \{0,1\}^C$ denotes the label vector of image $\bm{x}_i$, and $y_{ij}=1$ means that $\bm{x}_i$ belongs to class $j$. Two images compose a similar training pair if they share at least one label. The main idea of hashing model training is to minimize the predicted Hamming distances of similar training pairs and to enlarge the distances of dissimilar ones. Besides, to overcome the ill-posed gradient of the sign function, it is approximately replaced by the hyperbolic tangent function $\text{tanh}(\cdot)$ during training, which we denote as $F'(\bm{x})=\text{tanh}(f_{\bm{\theta}}(\bm{x}))$ in this paper.

\begin{figure*}[t] \centering \includegraphics[width=0.92\textwidth]{PDFs/pipeline.pdf} \vspace{-0.5em} \caption{The pipeline of the proposed clean-label backdoor attack: a) Generating the poisoned images by patching the trigger and adding the confusing perturbations, where the target label is specified as ``\textit{yurt}''; b) Training with the clean images and poisoned images to obtain the backdoored model; c) Querying with an original image and an image embedded with the trigger.} \label{fig:pipeline} \vspace{-0.5em} \end{figure*}
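For $\pm1$ codes of length $K$, the Hamming distance reduces to $d_H(\bm{h}_1,\bm{h}_2) = (K - \langle\bm{h}_1,\bm{h}_2\rangle)/2$, so the model of Eqn. (\ref{eq:hash_function}) and its relaxation take only a few lines. A minimal PyTorch sketch (illustrative, not the exact implementation):

\begin{verbatim}
import torch

def hash_code(f_theta, x, relaxed=False):
    """Eqn. (1): binary code via sign; tanh relaxation F'(x) during training."""
    out = f_theta(x)
    return torch.tanh(out) if relaxed else torch.sign(out)

def hamming_dist(h1, h2):
    """For +-1 codes of length K: d_H(h1, h2) = (K - <h1, h2>) / 2."""
    return 0.5 * (h1.shape[-1] - (h1 * h2).sum(dim=-1))
\end{verbatim}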
\subsection{Adversarial Perturbations for Deep Hashing} Due to the promising performance of deep hashing, its robustness has also attracted increasing attention. Recent works have proven that deep hashing models are vulnerable to adversarial perturbations \cite{yang2018adversarial,bai2020targeted}. Specifically, adversarial perturbations for deep hashing are human-imperceptible and can fool deep hashing models into returning irrelevant images. According to the attacker's goals, previous works have proposed to craft untargeted \cite{yang2018adversarial,xiao2020evade} and targeted \cite{bai2020targeted,wang2021prototype,xiao2021you} adversarial perturbations for deep hashing. Untargeted adversarial perturbations \cite{yang2018adversarial} aim to fool deep hashing into returning images with incorrect labels. The perturbations $\bm{\delta}$ can be obtained by enlarging the distance between the original image and the image with the perturbations. The objective function is formulated as \begin{equation} \max_{\bm{\delta}} d_H(F'(\bm{x}+\bm{\delta}), F(\bm{x})), \quad s.t. \parallel \bm{\delta} \parallel_\infty \leq \epsilon, \label{eq:untargeted} \end{equation} where $d_H(\cdot,\cdot)$ denotes the Hamming distance and $\epsilon$ is the maximum perturbation magnitude. In contrast, targeted adversarial perturbations \cite{bai2020targeted} mislead the deep hashing model into returning images with the target label. They are generated by optimizing the following objective function: \begin{equation} \min_{\bm{\delta}} d_H(F'(\bm{x}+\bm{\delta}), \bm{h}_a), \quad s.t. \parallel \bm{\delta} \parallel_\infty \leq \epsilon, \label{eq:dhta} \end{equation} where $\bm{h}_a$ is the anchor code, serving as the representative of the set of hash codes of images with the target label. Given a subset $\bm{D}^{(t)}$ containing images with the target label, $\bm{h}_a$ can be obtained as follows: \begin{equation} \bm{h}_a=\text{arg}\min_{\bm{h} \in \{+1,-1\}^K} \sum_{(\bm{x}_i, \bm{y}_i) \in \bm{D}^{(t)}}d_H(\bm{h}, F(\bm{x}_i)). \label{eq:anchor} \end{equation} The optimal solution of problem (\ref{eq:anchor}) is given by the component-voting scheme proposed in \cite{bai2020targeted}.
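The component-voting solution is simply a per-bit majority vote over the hash codes of the target-class images; a short sketch (ties broken toward $+1$, an arbitrary choice in this illustration):

\begin{verbatim}
import torch

def anchor_code(codes):
    """Component-voting solution of Eqn. (4) for an (M, K) tensor of +-1 codes."""
    votes = codes.sum(dim=0)  # per-bit vote over the M images
    return torch.where(votes >= 0, torch.ones_like(votes), -torch.ones_like(votes))
\end{verbatim}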
\subsection{Threat Model} We consider the threat model used in previous poison-based backdoor attack studies \cite{turner2019label,zhao2020clean}. The attacker has access to the training data and is allowed to inject the trigger pattern into the training set by modifying a small portion of the images. Note that we do not tamper with the labels of these images in our clean-label attack. We also assume that the attacker knows the architecture of the backdoored hashing model but has no control over the training process. Moreover, we also consider a stricter assumption in which the attacker has no knowledge of the backdoored model and performs the backdoor attack based on models with other architectures, as demonstrated in the experimental part. The attacker's goal is for the model trained on the poisoned training data to return images with the target label when a trigger appears on the query image. In addition to this malicious purpose, the attack also requires that the retrieval performance of the backdoored model is not significantly affected when the trigger is absent.

\section{Methodology} \label{method} \subsection{Overview of the Proposed Method} In this section, we present the proposed clean-label backdoor attack against deep hashing based retrieval. As shown in Figure \ref{fig:pipeline}, it consists of three major steps: \textbf{a)} We generate the trigger by optimizing the targeted adversarial loss. We also propose to perturb the hashing code learning by the confusing perturbations, which disperse the images with the target label in the Hamming space. We craft the poisoned images by patching the trigger and adding the confusing perturbations to the images with the target label; \textbf{b)} The deep hashing model trained with the clean images and the poisoned images is injected with the backdoor; \textbf{c)} In the retrieval stage, the deep hashing model returns images with the target label if the query image is embedded with the trigger; otherwise, the returned images are normal.

\subsection{Trigger Generation} We first define the injection function $B$ as follows: \begin{equation} \begin{aligned} \hat{\bm{x}}=B(\bm{x},\bm{p})=\bm{x} \odot (\bm{1}-\bm{m}) + \bm{p} \odot \bm{m}, \end{aligned} \label{eq:implant_trigger} \end{equation} where $\bm{p}$ is the trigger pattern, $\bm{m}$ is a predefined mask, and $\odot$ denotes the element-wise product. For the clean-label backdoor attack, a well-designed trigger is key to making the model establish the relationship between the trigger and the target label \cite{zhao2020clean}. In this work, we generate the trigger using a clean-trained deep hashing model $F$ and the training set $\bm{D}$. We want any sample with the trigger to be moved close to the samples with the target label $\bm{y}_t$ in the Hamming space. Inspired by a recent work \cite{bai2020targeted}, we propose to generate a universal adversarial patch as the trigger pattern by minimizing the following loss: \begin{equation} \min_{\bm{p}} \sum_{(\bm{x}_i, \bm{y}_i) \in \bm{D}}{} d_H(F'(B(\bm{x}_i,\bm{p})), \bm{h}_a), \label{eq:trigger_gen} \end{equation} where $\bm{h}_a$ is the anchor code as in Eqn. (\ref{eq:dhta}), which can be calculated by using the images with the target label $\bm{y}_t$ and solving Eqn. (\ref{eq:anchor}). We iteratively update the trigger as follows. We first define the mask to specify the bottom right corner as the trigger area. At each iteration of the generation process, we randomly select some images to calculate the loss function of Eqn. (\ref{eq:trigger_gen}). The trigger pattern is then optimized under the guidance of the gradient of the loss function until a preset number of iterations is reached. We summarize this algorithm in Appendix A.
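Since $d_H(F'(\hat{\bm{x}}), \bm{h}_a) = (K - \langle \tanh(f_{\bm{\theta}}(\hat{\bm{x}})), \bm{h}_a\rangle)/2$ under the relaxation, minimizing Eqn. (\ref{eq:trigger_gen}) amounts to maximizing the inner product with the anchor code. A minimal PyTorch sketch of the update (step size and iteration count are illustrative, not the paper's exact settings):

\begin{verbatim}
import torch

def generate_trigger(f_theta, loader, h_a, mask, steps=2000, lr=0.01):
    """Universal adversarial patch of Eqn. (6); pixel values assumed in [0, 1]."""
    p = torch.rand_like(mask)                   # initial trigger pattern
    for _ in range(steps):
        x, _ = next(iter(loader))               # random batch of training images
        p.requires_grad_(True)
        x_hat = x * (1.0 - mask) + p * mask     # injection function B(x, p)
        # minimizing d_H(F'(x_hat), h_a) == maximizing <tanh(f_theta(x_hat)), h_a>
        loss = -(torch.tanh(f_theta(x_hat)) @ h_a).mean()
        loss.backward()
        with torch.no_grad():
            p = (p - lr * p.grad.sign()).clamp(0.0, 1.0)
    return p.detach()
\end{verbatim}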
\begin{figure}[t] \centering \includegraphics[width=0.87\linewidth]{PDFs/method_tsne_yurt.pdf} \vspace{-0.5em} \centering \caption{t-SNE visualization of hash codes of images from five classes. We add different perturbations to the images from the class ``\textit{yurt}''. ``None'': the original images; ``Noise'': random noise; ``Adversarial'': the adversarial perturbations generated using Eqn. (\ref{eq:untargeted}); ``Confusing'': the confusing perturbations generated using Eqn. (\ref{eq:confusing}).} \vspace{-1em} \label{fig:tsne} \end{figure}

\subsection{Perturbing Hashing Code Learning} Since the clean-label attack does not tamper with the labels of the poisoned images, forcing the model to pay attention to the trigger is a challenging problem \cite{turner2019label}. To this end, we propose to perturb the hashing code learning by adding intentional perturbations to the poisoned images before applying the trigger. First, the perturbations should be imperceptible so that the backdoor attack remains stealthy. Moreover, the perturbations should disturb the training on the poisoned images and induce the model to learn more about the trigger pattern. Previous works on the clean-label attack \cite{turner2019label,zhao2020clean} introduce adversarial perturbations for this purpose. Therefore, for backdooring deep hashing, a natural choice is the untargeted adversarial perturbations proposed in \cite{yang2018adversarial}. Reviewing their objective function in Eqn. (\ref{eq:untargeted}), we find that they enlarge the distance between the original query image and the perturbed query, resulting in very poor retrieval performance. However, because these perturbations only focus on the relationship between the original image and its adversarial counterpart, they may not be optimal for disturbing the hashing code learning in a backdoor attack against deep hashing. Therefore, we propose a novel method, namely \textit{confusing perturbations}, which considers the relationships among the images with the target label. Specifically, we encourage the images with the target label to disperse in the Hamming space after adding the confusing perturbations. Given $M$ images with the target label, we achieve this goal by maximizing the following objective: \begin{equation} \begin{aligned} L_{c}(\{\bm{\eta}_i\}_{i=1}^{M}) = \frac{1}{M(M-1)} \sum_{i=1}^M \sum_{j=1, j \neq i}^M d_H \big(F'(\bm{x}_i+\bm{\eta}_i), F'(\bm{x}_j+\bm{\eta}_j)\big), \label{eq:loss_c} \end{aligned} \end{equation} where $\bm{\eta}_i$ denotes the perturbations on image $\bm{x}_i$. To keep the perturbations imperceptible, we adopt an $\ell_\infty$ restriction on them. The overall objective function for generating the confusing perturbations is formulated as \begin{equation} \begin{aligned} \max_{\{\bm{\eta}_i\}_{i=1}^{M}}\ & \lambda \cdot L_{c}(\{\bm{\eta}_i\}_{i=1}^{M}) + (1-\lambda) \cdot \frac{1}{M}\sum_{i=1}^M L_{a}(\bm{\eta}_i) \\ &s.t.\ \parallel \bm{\eta}_i \parallel_{\infty} \le \epsilon, \quad i=1,2,...,M, \label{eq:confusing} \end{aligned} \end{equation} where $L_{a}(\bm{\eta}_i)=d_H(F'(\bm{x}_i+\bm{\eta}_i), F(\bm{x}_i))$ is the adversarial loss of Eqn. (\ref{eq:untargeted}) and $\lambda \in [0,1]$ is a hyper-parameter balancing the two terms. Due to memory constraints, we calculate and optimize the above loss in batches. In the experimental part, we discuss the influence of the batch size. The algorithm for generating the confusing perturbations is provided in Appendix A.

\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{PDFs/ham_distribution_yurt.pdf} \centering \vspace{-0.5em} \caption{Distribution of Hamming distances calculated on the original images, the images with the adversarial perturbations, and the images with the confusing perturbations.} \label{fig:dist} \vspace{-0.5em} \end{figure}

To illustrate how the confusing perturbations perturb the hashing code learning, we display the t-SNE visualization \cite{van2008visualizing} of hash codes of images from five classes in Figure \ref{fig:tsne}. We observe that the hash codes of the original images are compact and that random noise has little influence on the representation. Even though adversarial perturbations move the images labeled ``\textit{yurt}'' far from the original images in the Hamming space, the intra-class distances remain small, which may fail to induce the model to learn about the trigger pattern. The images with the proposed confusing perturbations are highly separated, so the model has to depend on the trigger to learn a compact representation for the target class. We also calculate the Hamming distances between different images with the same type of perturbations and plot the distribution in Figure \ref{fig:dist}. The figure shows that the confusing perturbations successfully disperse the images. The experimental results reported later also verify the effectiveness of the confusing perturbations.
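A PGD-style sketch of Eqn. (\ref{eq:confusing}) for one batch of target-class images is given below (step size and iteration count are illustrative, not the paper's exact settings):

\begin{verbatim}
import torch

def confusing_perturbations(f_theta, x, eps=0.032, lam=0.8, steps=100, alpha=0.004):
    """Maximize lam * L_c + (1 - lam) * mean(L_a) in an l_inf ball of radius eps."""
    h0 = torch.sign(f_theta(x)).detach()   # original codes F(x_i)
    eta = torch.zeros_like(x, requires_grad=True)
    M, K = x.shape[0], h0.shape[1]
    for _ in range(steps):
        u = torch.tanh(f_theta(x + eta))   # relaxed codes F'(x_i + eta_i)
        gram = u @ u.t()                   # pairwise inner products
        mean_inner = (gram.sum() - gram.diagonal().sum()) / (M * (M - 1))
        L_c = 0.5 * (K - mean_inner)       # mean pairwise Hamming distance, Eqn. (7)
        L_a = (0.5 * (K - (u * h0).sum(dim=1))).mean()  # untargeted term, Eqn. (2)
        loss = lam * L_c + (1.0 - lam) * L_a
        loss.backward()
        with torch.no_grad():
            eta += alpha * eta.grad.sign() # gradient ascent step
            eta.clamp_(-eps, eps)          # keep the perturbations imperceptible
        eta.grad = None
    return eta.detach()
\end{verbatim}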
\subsection{Model Training and Retrieval} After generating the trigger and the confusing perturbations, we craft poisoned images by adding them to images with the target label. Note that we randomly select a portion of the images from the target class to generate the poisoned images and leave the rest unchanged. Except for the poisoned data, all other settings are the same as those used in normal training. The deep hashing model is successfully injected with the backdoor after training on the poisoned dataset. In the retrieval stage, the attacker can patch the same trigger onto query images, which fools the deep hashing model into returning images with the target label. Meanwhile, the backdoored model behaves normally on original query images.

\section{Experiments} \label{experiments} In this section, we conduct extensive experiments to compare the proposed method with baselines, perform the backdoor attack under more strict settings, and show the results of a comprehensive ablation study. \subsection{Evaluation Setup} \subsubsection{Datasets and Target Models.} We adopt three datasets in our experiments: ImageNet \cite{deng2009imagenet}, Places365 \cite{zhou2017places}, and MS-COCO \cite{lin2014microsoft}. Following \cite{cao2017hashnet,xiao2020evade}, we build the training set, query set, and database for each dataset. We replace the last fully-connected layer of VGG-11 \cite{simonyan2014very} with the hash layer as the default target model. Following \cite{yang2018adversarial}, we employ the pairwise loss function to fine-tune the feature extractor copied from the model pre-trained on ImageNet and train the hash layer from scratch. We also evaluate our attack on more network architectures, including ResNet \cite{he2016deep} and WideResNet \cite{zagoruyko2016wide}, and on advanced hashing methods, including HashNet \cite{cao2017hashnet} and DCH \cite{cao2018deep}. More details of the datasets and target models are provided in Appendix B.

\begin{table*}[t] \centering \small \setlength{\tabcolsep}{2.21mm}{ \begin{tabular}{lc|cccc|cccc|cccc} \hline \multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{1}{l|}{\multirow{2}{*}{Metric}} & \multicolumn{4}{c|}{ImageNet} & \multicolumn{4}{c|}{Places365} & \multicolumn{4}{c}{MS-COCO} \\ \cline{3-14} & & 16bits & 32bits & 48bits & 64bits & 16bits & 32bits & 48bits & 64bits & 16bits & 32bits & 48bits & 64bits \\ \hline None & t-MAP & 11.07 & 8.520 & 19.15 & 20.38 & 15.71 & 15.61 & 22.29 & 17.99 & 37.95 & 34.72 & 25.54 & 12.00 \\ Tri & t-MAP & 34.37 & 43.26 & 54.83 & 53.17 & 38.65 & 38.71 & 47.62 & 49.24 & 42.32 & 46.04 & 34.30 & 28.72 \\ Tri+Noise & t-MAP & 39.58 & 38.58 & 48.90 & 52.76 & 40.92 & 37.21 & 41.99 & 43.52 & 42.86 & 39.94 & 27.14 & 20.61 \\ Tri+Adv & t-MAP & 42.64 & 41.00 & 68.77 & 73.20 & 68.80 & 76.32 & 82.71 & 83.62 & 49.25 & 61.35 & 58.33 & 49.68 \\ Ours & t-MAP & \bf{51.81} & \bf{53.69} & \bf{74.71} & \bf{77.73} & \bf{80.32} & \bf{84.42} & \bf{90.93} & \bf{93.22} & \bf{51.42} & \bf{63.06} & \bf{63.53} & \bf{58.95} \\ \hline None & MAP & 51.04 & 64.28 & 68.06 & 69.58 & 72.50 & 78.62 & 79.81 & 79.80 & 65.53 & 76.08 & 80.68 & 82.63 \\ Ours & MAP & 52.36 & 64.67 & 68.30 & 69.88 & 71.94 & 78.55 & 79.82 & 79.80 & 66.52 & 76.14 & 80.80 & 82.60 \\ \hline \end{tabular}} \vspace{-0.5em} \caption{t-MAP (\%) and MAP (\%) of the clean-trained models (``None'') and backdoored models with various code lengths on three datasets.
Best t-MAP results are highlighted in bold.} \label{t-MAP of different methods} \vspace{-0.5em} \end{table*}

\subsubsection{Baseline Methods.} We apply the trigger generated by optimizing Eqn. (\ref{eq:trigger_gen}) to images without perturbations as a baseline (dubbed ``\textit{Tri}''). We further compare methods that disturb the hashing code learning by adding noise sampled from the uniform distribution $U(-\epsilon, \epsilon)$ or adversarial perturbations generated using Eqn. (\ref{eq:untargeted}), denoted ``\textit{Tri+Noise}'' and ``\textit{Tri+Adv}'', respectively. For our method, we craft the poisoned images by patching the trigger and adding the proposed confusing perturbations. Moreover, we also provide the results of the clean-trained model.

\subsubsection{Attack Settings.} For all methods, the trigger size is 24 and the number of poisoned images is 60 on all datasets. For comparison, the total number of images in the training set is approximately 10,000 for each dataset. We set the perturbation magnitude $\epsilon$ to 0.032. For our method, $\lambda$ is set to 0.8 and the batch size is set to 20 for optimizing Eqn. (\ref{eq:confusing}). To alleviate the influence of the target class, we randomly select five classes as the target labels and report the average results. Note that all settings for training on the poisoned dataset are the same as those used for training on the clean datasets. More details are described in Appendix B. Besides, to reduce the visibility of the trigger, we study the blend strategy proposed in \cite{chen2017targeted} in Appendix C. We adopt the t-MAP (targeted mean average precision) proposed in \cite{bai2020targeted} to measure the attack performance; it calculates the mean average precision (MAP) \cite{zuva2012evaluation} with the original label of the query image replaced by the target one. A higher t-MAP means a stronger backdoor attack. We calculate the t-MAP on the top 1,000 retrieved images for all datasets. We also report the MAP results of the clean-trained model and our method to show the influence on original query images.

\begin{table}[t] \centering \scriptsize \setlength{\tabcolsep}{0.85mm}{ \begin{tabular}{lc|cccc|cccc} \hline \multicolumn{1}{l}{\multirow{2}{*}{Method}} & \multicolumn{1}{l|}{\multirow{2}{*}{Metric}} & \multicolumn{4}{c|}{HashNet} & \multicolumn{4}{c}{DCH} \\ \cline{3-10} & & 16bits & 32bits & 48bits & 64bits & 16bits & 32bits & 48bits & 64bits \\ \hline None & t-MAP & 15.01 & 19.79 & 15.07 & 22.24 & 18.44 & 14.54 & 15.52 & 21.41 \\ Tri & t-MAP & 38.86 & 48.51 & 58.18 & 65.55 & 58.25 & 63.74 & 70.61 & 70.17 \\ Tri+Noise & t-MAP & 46.17 & 47.41 & 53.61 & 59.30 & 55.60 & 54.02 & 66.41 & 67.71 \\ Tri+Adv & t-MAP & 43.26 & 70.85 & 82.10 & 85.37 & 80.28 & 85.59 & 89.30 & 90.33 \\ Ours & t-MAP & \bf{52.77} & \bf{74.37} & \bf{86.80} & \bf{91.57} & \bf{86.28} & \bf{90.70} & \bf{92.64} & \bf{93.56} \\ \hline None & MAP & 51.26 & 64.05 & 72.93 & 76.50 & 73.51 & 77.95 & 78.82 & 79.57 \\ Ours & MAP & 51.56 & 65.61 & 73.65 & 76.00 & 73.21 & 78.33 & 78.81 & 78.76 \\ \hline \end{tabular}} \caption{t-MAP (\%) and MAP (\%) of the clean-trained models (``None'') and backdoored models for two advanced hashing methods with various code lengths on ImageNet. Best t-MAP results are highlighted in bold.} \vspace{-1em} \label{ImageNet of different Loss} \end{table}
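For reference, a single-label sketch of the metric (in the multi-label case, a retrieved image counts as a hit whenever it shares the target label):

\begin{verbatim}
import numpy as np

def t_ap(retrieved_labels, target_label, k=1000):
    """Average precision at k, treating the target label as the query's label."""
    rel = (np.asarray(retrieved_labels[:k]) == target_label).astype(float)
    if rel.sum() == 0:
        return 0.0
    precision_at = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((precision_at * rel).sum() / rel.sum())

# t-MAP is the mean of t_ap over all trigger-patched queries
\end{verbatim}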
\subsection{Main Results} The results of the clean-trained models and all attack methods are reported in Table \ref{t-MAP of different methods}. The t-MAP results of applying the trigger alone, or the trigger together with random noise, are relatively poor, which illustrates the importance of designing suitable perturbations for the clean-label backdoor. Even though the t-MAP values obtained by adding adversarial perturbations are higher, they are still worse than those of our method on all datasets. Specifically, the average t-MAP improvements of our method over the adversarial perturbations are 8.08\%, 9.36\%, and 4.59\% on ImageNet, Places365, and MS-COCO, respectively. These results demonstrate the superiority of the proposed confusing perturbations in perturbing the hashing code learning. Besides, the average MAP difference between our backdoored models and the clean-trained models is less than 1\%, which demonstrates the stealthiness of our attack. For a more comprehensive comparison, we also provide precision-recall and precision curves, results of attacking with each target label, and visual examples in Appendix D. All the above results verify the effectiveness of our method in attacking deep hashing based retrieval.

\subsection{Attacking Advanced Hashing Methods} To verify the effectiveness of our backdoor attack against advanced deep hashing methods, we conduct experiments with HashNet \cite{cao2017hashnet} and DCH \cite{cao2018deep}. We keep all settings unchanged and show the results for various code lengths on ImageNet in Table \ref{ImageNet of different Loss}. Both HashNet and DCH achieve higher MAP values for the clean-trained models, yet they are still vulnerable to backdoor attacks. Specifically, among all attacks, our method achieves the best attack performance in all cases. Compared with adding the adversarial perturbations, the t-MAP improvements of our method are 5.98\% and 4.42\% on average for HashNet and DCH, respectively.

\subsection{Resistance to Defense Methods} We test the resistance of our backdoor attack to three defense methods: spectral signature detection \cite{tran2018spectral}, differential privacy-based defense \cite{du2019robust}, and pruning-based defense \cite{liu2018fine}. We conduct experiments on ImageNet with the target label ``\textit{yurt}'' and a 48-bit code length.

\begin{table}[htbp] \centering \small \setlength{\tabcolsep}{1.mm}{ \begin{tabular}{cc|c|cc} \hline \multirow{2}{*}{\# Clean} & \multirow{2}{*}{\# Poisoned} & \multirow{2}{*}{\# Removed} & \# Clean & \# Poisoned \\ & & & remained & remained \\ \hline 80 & 20 & 30 & 51 & 19 \\ 60 & 40 & 60 & 17 & 23 \\ 40 & 60 & 90 & 5 & 5 \\ \hline \end{tabular}} \caption{Results of the spectral signature detection against our attack on ImageNet.} \label{Spectral Signature Detection} \end{table}

\begin{figure}[htbp] \centering \subfigure[Differential privacy-based]{ \label{dp-based} \includegraphics[width=0.45\linewidth]{PDFs/defense_dp.pdf} } \subfigure[Pruning-based]{ \label{pruning-based} \includegraphics[width=0.45\linewidth]{PDFs/defense_pruning.pdf} } \centering \vspace{-1em} \caption{Results of the pruning-based defense and differential privacy-based defense against our attack on ImageNet.} \vspace{-1em} \end{figure}

\subsubsection{Resistance to Spectral Signature Detection.} Spectral signature detection thwarts the backdoor attack by removing suspect samples from the training set, based on the feature representations learned by the neural network. Following \cite{tran2018spectral}, we set the number of removed images according to the number of poisoned images. The results are shown in Table \ref{Spectral Signature Detection}.
We find that it fails to defend against our backdoor attack, owing to the large number of remaining poisoned images. For example, when the number of poisoned images is 40, the number of remaining poisoned images is still 23 even after 60 images are removed, which results in more than 40\% t-MAP (see the ablation study). \subsubsection{Resistance to Differential Privacy-based Defense.} \citet{du2019robust} proposed to utilize differential privacy noise to obtain a more robust model when training on the poisoned dataset. We evaluate our attack under the differential privacy-based defense with a clipping bound of 0.3 and a varying noise scale, as shown in Figure \ref{dp-based}. Even though the backdoor is successfully eliminated when the noise scale is larger than 0.03, the retrieval performance on original query images is then also poor. Therefore, training with differential privacy noise may not be effective against our clean-label backdoor. \subsubsection{Resistance to Pruning-based Defense.} The pruning-based defense weakens the backdoor in the attacked model by pruning the neurons that are dormant on clean inputs. We show the MAP and t-MAP results with an increasing number of pruned neurons in Figure \ref{pruning-based}. At no point is the MAP substantially higher than the t-MAP, making it hard to eliminate the backdoor injected by our method. These results verify that our backdoor attack is resistant to the three existing defense methods.

\begin{table}[t] \centering \small \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{lc|ccccc} \hline Setting & Metric & VN-11 & VN-13 & RN-34 & RN-50 & WRN-50-2\\ \hline \multirow{2}{*}{Ensemble} & t-MAP & 54.97 & 86.00 & 79.39 & 33.96 & 39.98 \\ & MAP & 67.78 & 71.13 & 73.37 & 76.69 & 83.77 \\ \hline \multirow{2}{*}{Hold-out} & t-MAP & 18.63 & 12.80 & 50.79 & 45.17 & 41.94 \\ & MAP & 68.34 & 71.07 & 72.86 & 77.42 & 82.68 \\ \hline \multirow{2}{*}{None} & t-MAP & 6.29 & 12.45 & 6.64 & 1.91 & 4.51 \\ & MAP & 68.06 & 70.39 & 73.43 & 76.66 & 82.21 \\ \hline \end{tabular}} \caption{t-MAP (\%) and MAP (\%) of our transfer-based backdoor attack on ImageNet. ``None'' denotes the clean-trained models. The first row states the backbone of the target model, where ``VN'', ``RN'', and ``WRN'' denote VGG, ResNet, and WideResNet, respectively. The model of the column is not used to generate the trigger and confusing perturbations under the ``Hold-out'' setting, while all models are used under the ``Ensemble'' setting.} \label{transferability} \end{table}

\subsection{Transfer-based Attack} In the above experiments, we assume that the attacker knows the network architecture of the target model. In this section, we consider a more realistic scenario in which the attacker has no knowledge of the target model and performs a transfer-based backdoor attack. Specifically, to craft poisoned images against the unknown target model, we generate the trigger and confusing perturbations using multiple clean-trained models. We present the results in Table \ref{transferability}. We adopt two settings: ``Ensemble'' means that we craft the poisoned images equally using all models listed in Table \ref{transferability}, while ``Hold-out'' means that we equally use all models except the target one. We set the trigger size to 56 and keep the other attack settings unchanged. Compared with the clean-trained model, our backdoor attack achieves higher t-MAP values under both settings.
Even for the target models with ResNet or WideResNet architectures, the t-MAP values of our attack are more than 40\% under the ``Hold-out'' setting. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{PDFs/ablation_poi_trigger.pdf} \centering \caption{t-MAP (\%) of three attacks with different numbers of poisoned images and trigger sizes under 48 bits code length on ImageNet. The target label is specified as ``\textit{yurt}''.} \vspace{-1em} \label{poison number and trigger size} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{PDFs/ablation_lambda_batch.pdf} \centering \caption{t-MAP (\%) of our method with different $\lambda$ and batch sizes under 48 bits code length on three datasets. The target label is specified as ``\textit{yurt}'', ``\textit{volcano}'', and ``\textit{train}'' on ImageNet, Places365, and MS-COCO, respectively.} \label{lambda and batch size} \vspace{-1em} \end{figure} \subsection{Ablation Study} \subsubsection{Effect of the Number of Poisoned Images.} The results of the three backdoor attacks under different numbers of poisoned images are shown in Figure \ref{poison number and trigger size}. Compared with other methods, our attack achieves the highest t-MAP across all numbers of poisoned images. In particular, the t-MAP values of our attack are higher than 60\% when the number of poisoned images is more than 40. \subsubsection{Effect of the Trigger Size.} We present the results of the three attacks under trigger sizes $\in \{16, 24, 32, 40, 48\}$ in Figure \ref{poison number and trigger size}. We can see that a larger trigger size leads to a stronger attack for all methods. When the trigger size is larger than 24, our method successfully injects the backdoor into the target model and achieves the best performance among the three attacks. Being effective with a smaller trigger is critical for keeping the backdoor attack stealthy in real-world applications. \subsubsection{Effect of $\lambda$.} The results of our attack with various $\lambda$ are shown in Figure \ref{lambda and batch size}. When $\lambda=0$, which corresponds to using only the adversarial perturbations, the attack performance is relatively poor on all datasets. The best $\lambda$ for ImageNet, Places365, and MS-COCO is 1.0, 1.0, and 0.8, respectively. These results demonstrate that dispersing the images with the target label in the Hamming space is necessary for the backdoor attack. \subsubsection{Effect of the Batch Size for Generating Confusing Perturbations.} We optimize Eqn. (\ref{eq:confusing}) in batches to obtain the confusing perturbations of each poisoned image. We study the effect of the batch size here, as shown in Figure \ref{lambda and batch size}. We observe that our attack achieves relatively steady results when the batch size is larger than 10. Therefore, our method is insensitive to the batch size, and the default value (\textit{i.e.}, 20) used in this paper is feasible for all datasets. \section{Conclusion} In this paper, we have studied the problem of clean-label backdoor attacks against deep hashing based retrieval. To craft poisoned images, we first generate a universal adversarial patch as the trigger. To induce the model to learn more about the trigger, we propose confusing perturbations, which exploit the relationships among the images with the target label. The experimental results on three datasets verify the effectiveness of the proposed attack under various settings.
We hope that the proposed attack can serve as a strong baseline and encourage further investigation into improving the robustness of retrieval systems.
\section{INTRODUCTION} \label{sec:intro} The detection of extended X-ray emission from classical novae (CNe) has proven to be rare. Thorough archival studies searching for diffuse X-ray emission from nova shells have been presented in the past \citep[e.g.,][]{Orio2001,Balman2006}, but there is only a handful of novae with reported extended X-ray emission. The first detection of extended X-ray emission in a CN was obtained for GK\,Per using \emph{ROSAT} Position Sensitive Proportional Counter (PSPC) observations \citep{Balman1999}. Subsequent observations of GK\,Per by the \emph{Chandra} X-ray Observatory \citep[][]{Balman2005} demonstrated that its extended X-ray emission can be described by a non-equilibrium thermal plasma component with additional synchrotron emission. With a total energy $\sim$10$^{-7}$ times that of classical supernova explosions ($\sim$10$^{51}$~erg), the diffuse X-ray emission from GK\,Per is a scaled-down version of those events. More recent \emph{Chandra} observations showed that the X-ray brightness of GK\,Per declines with time as a result of its expansion \citep{Takei2015}, implying that the diffuse X-ray emission from CNe is short-lived. The dramatic morphological and spectral variations of its X-ray emission revealed by \emph{Suzaku} observations probe the interactions of this nova remnant with its complex circumstellar medium \citep{Yuasa2016}. Extended X-ray emission has also been reported in \emph{Chandra} observations of RR\,Pic \citep{Balman2004}, although the marginal detection ($\sim$60~photons) makes it difficult to assess the spatial correlation of the extended X-ray emission with the optical nova shell \citep[see figure~1 in][]{Balman2006}. Marginal detections of extended X-ray emission have been claimed for the recurrent nova T\,Pyx \citep{Balman2014} and the cataclysmic variable (CV) DK\,Lac \citep{TSD2013}, but the former has been questioned \citep{MSN2012}. Finally, an extended 1\farcs2 jet-like feature in the soft (0.3--0.8~keV) energy band has been reported in \emph{Chandra} observations of the recurrent nova RS\,Oph \citep{Luna2009}. The orientation of this extended X-ray emission is consistent with the radio and IR emission from the ring of synchrotron-emitting plasma associated with the most recent blast wave \citep[][]{Chesneau2007}. In this work we focus on the extended X-ray emission from DQ\,Her, a slow nova from a CV system that experienced an outburst in December 1934 and ejected a nova shell with a present angular size of 32$^{\prime\prime}\times$24$^{\prime\prime}$ \citep{Santamaria2020}. This classical nova was not initially detected by \emph{Einstein} \citep{Cordova1981}, but \citet{Silber1996} reported its detection in \emph{ROSAT} PSPC observations with an X-ray luminosity in the 0.1--2.0~keV energy range of $4\times10^{30}$~erg~s$^{-1}$. The low number of photons detected in these observations precluded at that time a detailed characterization of the X-ray properties of DQ\,Her. Higher quality \emph{Chandra} Advanced CCD Imaging Spectrometer\,(ACIS)-S observations were used by \citet{Mukai2003} to study the spectral properties and time variation of the X-ray-emitting, magnetically-active progenitor star of DQ\,Her \citep[e.g.,][]{Walker1956}. \citet{Mukai2003} found that the best-fit model to the X-ray spectrum of the progenitor star of DQ\,Her is composed of an optically-thin plasma emission model plus a power-law, the latter component being consistent with the magnetic nature of DQ\,Her.
Furthermore, their analysis of radial profiles of the X-ray emission hinted at the presence of extended X-ray emission with energies below 0.8~keV at distances up to $\sim10''$ from DQ\,Her, which they associated with individual clumps in the nova shell. We present here a joint analysis of archival \emph{XMM-Newton} European Photon Imaging Camera (EPIC) observations and revisit the \emph{Chandra} ACIS-S observations of DQ\,Her. The combination of both archival data sets confirms that the extended X-ray emission from DQ\,Her is indeed real and originates from emission filling the nova shell and from a bipolar (jet-like) feature. This paper is organised as follows. In Section~2 we describe the observations analysed here. The results of the imaging and spectral analyses are presented in Sections~3 and 4, respectively. A discussion of our results is presented in Section~5 and a summary in Section~6. \section{Observations and data preparation} \subsection{{\it Chandra} Observations} DQ\,Her was observed by the \emph{Chandra} X-ray Observatory with a total exposure time of 70~ks split into two observations performed on 2001 July 26 and 29. The back-illuminated S3 CCD on the ACIS-S was used for these observations (Obs.\,ID.\ 1899 and 2503, PI: K.\,Mukai). The ACIS-S data were reprocessed with the \emph{Chandra} Interactive Analysis of Observations ({\sc ciao}) software \citep[version 4.11;][]{Fruscione2006}. After combining the data and excising high-background and dead periods of time, the net exposure time was 68~ks. X-ray images of DQ\,Her obtained after combining the two data sets are presented in Figure~\ref{fig:DQ_Her_Xrays1}. \begin{figure} \begin{center} \includegraphics[angle=0,width=\linewidth]{Fig1} \caption{\emph{Chandra} ACIS-S images of DQ\,Her. Top panel: an image of the event file using the native 0\farcs5 ACIS-S pixel size. Bottom: adaptively smoothed image of the extended X-ray emission in DQ\,Her detected by \emph{Chandra}. Both images were obtained in the 0.3--5.0~keV energy range. The dashed ellipse shows the extent of the optical nebula as presented in Figure~3.} \label{fig:DQ_Her_Xrays1} \end{center} \end{figure} The processed \emph{Chandra} event image of DQ\,Her in the 0.3--5.0 keV energy range (Fig.~\ref{fig:DQ_Her_Xrays1} - top) undoubtedly shows that the central star is a point source of X-ray emission. To unveil the true extent of the diffuse X-ray emission in DQ\,Her, we created a smoothed image in the 0.3--5.0~keV energy range using the {\sc ciao} task {\it csmooth}. The smoothing was performed using a Gaussian kernel and a fast-Fourier transform (FFT) convolution method. Regions in the event file above the $3\sigma$ confidence level remained unsmoothed, preventing the emission from the central star from being overly smoothed. The resultant image is shown in the bottom panel of Figure~\ref{fig:DQ_Her_Xrays1}, where highly elongated extended emission along the NE-SW direction is clearly seen. Furthermore, we used the {\sc marx} 5.5.0 suite \citep{Davis_etal2012} to model the \emph{Chandra} point spread function (PSF) of a point source with the spectral properties of the central star of DQ\,Her \citep[as described by][but see also Section 4.1 below]{Mukai2003}. The comparison of this synthetic X-ray point source with the image of DQ\,Her confirms the presence and extent of this diffuse emission. \begin{figure*} \begin{center} \includegraphics[angle=0,width=0.9\linewidth]{Fig2} \caption{\emph{XMM-Newton} EPIC (pn+MOS1+MOS2) images of DQ\,Her.
The bottom right panel shows a colour-composite image obtained by combining the other three panels. Red, green and blue correspond to the soft, medium and hard bands. The extension of the nebular remnant detected in optical observations (see Fig.~3) is shown with an elliptical dashed-line region. The suggested bipolar structure is shown with arrows.} \label{fig:DQ_Her_Xrays2} \end{center} \end{figure*} The \emph{Chandra} spectra of the central star and of the extended emission were extracted separately from each ACIS observation using the {\sc ciao} task {\it specextract}, which produces the corresponding calibration matrices. The spectra and calibration matrices from the different data sets were subsequently merged using the {\sc ciao} task {\it combine\_spectra}. The spectrum of the central star of DQ\,Her was extracted from a circular aperture with a radius of 2$''$, and that of the extended X-ray emission from an elliptical aperture with semi-minor and semi-major axes of 10$''$ and 18$''$ encompassing the emission detected in the bottom panel of Figure~\ref{fig:DQ_Her_Xrays1}. The emission from the central source was excised from the latter. The background was extracted from a region without contribution from extended emission or point sources. The net count rates in the 0.3--5.0~keV energy range are 22.6~counts~ks$^{-1}$ for the central star and 1.32~counts~ks$^{-1}$ for the extended emission, for total count numbers of $\simeq$1,500 and $\simeq$90 counts, respectively. \subsection{{\it XMM-Newton} Observations} DQ\,Her was observed by \emph{XMM-Newton} on 2017 April 19 with the three EPIC cameras for a total exposure time of 41.9~ks (PI: H.\,Worpel; Obs.\,ID.: 0804111201). The EPIC pn, MOS1 and MOS2 cameras were operated in the Full Frame Mode with the thin optical blocking filter. The individual observing times for the pn, MOS1, and MOS2 cameras were 39.0~ks, 40.6~ks, and 40.5~ks, respectively. The \emph{XMM-Newton} data were processed with the Science Analysis Software ({\sc sas}; version 17.0), using the {\it epproc} and {\it emproc} {\sc sas} tasks to apply the most recent calibrations available in February 2020. After excising periods of high background, the total useful times of the pn, MOS1 and MOS2 cameras were 15.2~ks, 25.4~ks and 26.2~ks, respectively. We used the Extended Source Analysis Software ({\sc esas}) tasks to map the distribution of the X-ray-emitting gas in DQ\,Her. Background-subtracted, exposure-corrected EPIC pn, MOS1, and MOS2 images were created and merged. EPIC images in the soft 0.3--0.7~keV, medium 0.7--1.2~keV, and hard 1.2--5.0~keV energy bands were created. The individual images and a colour-composite X-ray picture are presented in Figure~\ref{fig:DQ_Her_Xrays2}. Spectra and their associated calibration matrices were obtained from a circular aperture with a radius of 24$''$ centered on the central star of DQ\,Her using the {\it evselect}, {\it arfgen} and {\it rmfgen} {\sc sas} tasks. Due to the lower spatial resolution of the EPIC cameras compared to that of ACIS-S, the contribution from the central star cannot be properly resolved from that of the extended X-ray emission. Therefore, the EPIC spectra encompass the emission from both the point source and the extended component of DQ\,Her. The net count rates of the pn, MOS1 and MOS2 cameras are 41.9~counts~ks$^{-1}$, 8.4~counts~ks$^{-1}$, and 11.6~counts~ks$^{-1}$, respectively.
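As a simple sanity check on the rates quoted in this section, a net count rate is just the background-subtracted source counts divided by the useful exposure time. A minimal sketch of this bookkeeping is given below; the count numbers and area-scaling factor are illustrative placeholders, not values from the actual event files.
\begin{verbatim}
def net_count_rate(src_counts, bkg_counts, area_ratio, exposure_ks):
    # area_ratio rescales the background counts to the area of the
    # source-extraction region; exposure_ks is the filtered exposure.
    return (src_counts - bkg_counts * area_ratio) / exposure_ks

# Illustrative: ~640 net counts in 15.2 ks of useful EPIC-pn time
# correspond to ~42 counts/ks, close to the quoted 41.9 counts/ks.
print(net_count_rate(700, 300, 0.2, 15.2))
\end{verbatim}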
\begin{figure*} \begin{center} \includegraphics[angle=0,width=0.75\linewidth]{Fig3} \caption{Comparison between the optical emission from DQ\,Her and the X-ray observations obtained with \emph{Chandra} (left) and \emph{XMM-Newton} (right). The nebular images from 2017 were obtained at the NOT, and the corresponding 2001 H$\alpha$ image was obtained by expanding a 1997 WHT image by 6\% according to \citet{Santamaria2020} (see Section~3 for details).} \label{fig:DQ_Her_Xrays3} \end{center} \end{figure*} \section{Extended X-ray emission from DQ\,Her} The \emph{Chandra} ACIS-S images presented in Figure~\ref{fig:DQ_Her_Xrays1} clearly confirm the presence of extended emission, as previously reported by \citet{Mukai2003}. More importantly, the X-ray image in the bottom panel of this figure shows that this emission has a bipolar, jet-like shape $\simeq$16$''$ wide and $\simeq$32$''$ long in the NE to SW direction (PA$\approx$45$^\circ$). The direction of this bipolar feature is orthogonal to the apparent semi-major axis of the nebula. The \emph{Chandra} X-ray image shows that the bipolar feature presents hints of an S-shape, more clearly seen in its SW section. The \emph{XMM-Newton} EPIC images presented in Figure~\ref{fig:DQ_Her_Xrays2} also show extended X-ray emission. The images in the three X-ray bands present a peak at the location of the central star of DQ\,Her, as well as extended emission that uncovers different features depending on the X-ray band. The soft (0.3--0.7~keV) EPIC X-ray image (Fig.~\ref{fig:DQ_Her_Xrays2} top left) is indicative of a bipolar morphology protruding from the central star and extending towards the NE and SW directions, very similar to that of the extended X-ray emission detected in the \emph{Chandra} images. The diffuse emission detected in the medium (0.7--1.2~keV) EPIC X-ray band (Fig.~\ref{fig:DQ_Her_Xrays2} - top right) also extends towards the NE and SW regions, but this component is not spatially coincident with the bipolar features detected in the soft band. Instead, it seems to surround the soft emission. Finally, the spatial distribution of the emission in the hard (1.2--5.0~keV) EPIC X-ray band is more centrally concentrated than in the other two EPIC bands (see Fig.~\ref{fig:DQ_Her_Xrays2} - bottom left) and is essentially consistent with a point source with only a minor contribution from extended emission. All these characteristics are illustrated in the colour-composite X-ray picture presented in the bottom right panel of Figure~\ref{fig:DQ_Her_Xrays2}. To further peer into the spatial distribution of the X-ray-emitting material in DQ\,Her, we show in Figure~\ref{fig:DQ_Her_Xrays3} a comparison between the X-ray and narrowband optical images. As noted by \citet{Santamaria2020}, DQ\,Her has an angular expansion rate sufficiently large (0\farcs188 yr$^{-1}$ along its major axis) to result in a noticeable angular expansion within a few years. To produce consistent comparisons between the optical nebular remnant and the X-ray images, optical images obtained at epochs similar to those of the X-ray images have to be used. For comparison with the \emph{Chandra} images, the closest available contemporaneous image of DQ\,Her is an H$\alpha$ image taken at the William Herschel Telescope (WHT, La Palma, Spain) on 1997 October 25, i.e., about 4 years before the X-ray observation.
An expansion factor of 6\% was applied to this optical image, following the expansion rate reported by \citet{Santamaria2020}, to produce a synthetic 2001 H$\alpha$ image suitable for comparison with the \emph{Chandra} X-ray image (Fig.~\ref{fig:DQ_Her_Xrays3} - left). For comparison with the \emph{XMM-Newton} images, we used H$\alpha$ and [N\,{\sc ii}] images obtained at the Nordic Optical Telescope (NOT, La Palma, Spain) on 2017 May 27, just about one month after the X-ray observation (Fig.~\ref{fig:DQ_Her_Xrays3} - right). The inspection of the pictures in Figure~\ref{fig:DQ_Her_Xrays3} clearly reveals that the bipolar jet-like feature extends beyond the optical nebula and is oriented along its minor axis. \section{Spectral Analysis} The superb angular resolution of the \emph{Chandra} ACIS camera has allowed us to extract individual spectra for the point source and the extended X-ray emission of DQ\,Her (Fig.~\ref{fig:DQ_Her_spec} - top left and right panels, respectively). These ACIS spectra reveal remarkable spectral differences between the central and the diffuse emission, as illustrated in the bottom-left panel of Figure~\ref{fig:DQ_Her_spec}, despite the lower quality of the spectrum of the diffuse component. As shown by \citet{Mukai2003}, the spectrum of the star peaks between 0.8--1.0~keV, with some contribution to the soft energy range below 0.7~keV. The spectrum then declines for energies above 1.0~keV, showing the contribution from a spectral line, very likely Si\,{\sc xiii} at 1.8~keV. On the other hand, the spectrum of the extended emission peaks at softer energies, $\lesssim$0.6~keV, declining towards higher energies, with hints of the presence of spectral features at 0.9~keV and 1.4~keV. The former could be attributed to the Fe complex at $\approx$0.8~keV and the Ne\,{\sc ix} lines at 0.9~keV, with the O\,{\sc vii} triplet at 0.58~keV contributing at softer energies, and the latter to Mg\,{\sc xi} at 1.4~keV. In the following we present the details of the spectral analysis of the {\it Chandra} ACIS-S and {\it XMM-Newton} EPIC spectra of DQ\,Her. The parameters of the best-fit models for the different spectra shown in Figure~\ref{fig:DQ_Her_spec}, as well as their significance, are listed in Table~1. \begin{figure*} \begin{center} \includegraphics[angle=0,width=0.8\linewidth]{Fig4} \caption{Background-subtracted spectra of DQ\,Her. The top panels show the \emph{Chandra} ACIS spectra of the point-source (left) and diffuse emission (right). The two spectra are shown together in the bottom-left panel, where the emission from the diffuse component has been scaled for an easy comparison. The bottom-right panel presents the \emph{XMM-Newton} EPIC spectra of DQ\,Her, with different symbols and colours for the spectra extracted from different EPIC cameras. In all panels the solid lines represent the best-fit model to the data, while the dotted and dashed lines show the contribution of {\it apec} and non-thermal power-law components, respectively.} \label{fig:DQ_Her_spec} \end{center} \end{figure*} \subsection{X-rays from DQ\,Her} Following \citet{Mukai2003}, we fitted a two-component model to the \emph{Chandra} ACIS-S spectra of the central star of DQ\,Her consisting of an optically-thin {\it apec} emission model and a power-law component. The former component can be attributed to a hot plasma and the latter to non-thermal synchrotron emission.
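Fits of this kind are straightforward to reproduce; the following is a minimal, schematic PyXspec session for an absorbed {\it apec} plus power-law model. The file name is an illustrative placeholder, the starting values are taken from Table~1, and the exact fitting setup used in our analysis may differ in detail.
\begin{verbatim}
from xspec import AllData, Model, Fit

# Load a background-subtracted spectrum (illustrative file name).
AllData("dqher_central.pi")

# Absorbed two-component model: optically-thin plasma (apec)
# plus a non-thermal power law.
m = Model("tbabs*(apec+powerlaw)")
m.TBabs.nH = 0.034        # in 10^22 cm^-2, i.e. 3.4e20 cm^-2
m.apec.kT = 0.77          # keV
m.powerlaw.PhoIndex = 2.45

Fit.statMethod = "chi"    # chi-squared minimisation
Fit.perform()
\end{verbatim}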
Our best-fit model, which is presented in the top-left panel of Figure~\ref{fig:DQ_Her_spec} in comparison with the ACIS-S spectrum, has parameters consistent with those obtained by \citet{Mukai2003} (see Table~1). The non-thermal emission, with a power-law index of $\Gamma=2.45$, dominates at energies below 0.7~keV and above 1.0~keV, contributing $\simeq$43\% of the total intrinsic flux. The intrinsic flux of DQ\,Her is found to be $F_\mathrm{X}=(9.4\pm1.2)\times10^{-14}$~erg~s$^{-1}$~cm$^{-2}$, implying a luminosity $L_\mathrm{X}=(2.8\pm0.4)\times10^{30}$~erg~s$^{-1}$ at a distance of 501$\pm$6~pc \citep{Schaefer2018}. \subsection{Extended X-ray emission} The spectrum of the extended bipolar X-ray emission detected in the \emph{Chandra} ACIS-S observations was initially fitted with a single optically-thin {\it apec} emission model, but this resulted in a poor-quality fit with $\chi^{2}$/DoF=1.60. A power-law model results in an even worse fit ($\chi^{2}/$DoF$>$2.4), whereas a two-temperature plasma emission model does not improve the fit ($\chi^{2}$/DoF=1.60), as it is not able to appropriately fit the spectrum at energies above 2~keV. The best fit is achieved by using a model similar to that for the central star, that is, a plasma emission model plus a non-thermal power law. This best-fit model, whose parameters are listed in Table~1, is shown in the top-right panel of Figure~\ref{fig:DQ_Her_spec} in comparison with the ACIS-S spectrum. This panel shows that the {\it apec} component dominates the emission at energies below 1.0~keV and the power-law component at higher energies. Indeed, the non-thermal component contributes 40\% of the total unabsorbed flux in the 0.3--5.0~keV energy range, but the optically-thin plasma component contributes 75\% of the flux for energies between 0.3 and 1~keV. The luminosity of the extended bipolar emission detected in \emph{Chandra} is $L_\mathrm{X,diff}=(2.1\pm1.3)\times10^{29}$~erg~s$^{-1}$.
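The luminosities quoted above, as well as the electron density derived below from the normalization of the thermal component (see the note to Table~1), follow from simple arithmetic. A minimal sketch, using the adopted distance and the ellipsoidal emitting volume discussed in the next paragraphs, is:
\begin{verbatim}
import numpy as np

PC_TO_CM = 3.086e18
ARCSEC = np.deg2rad(1.0 / 3600.0)        # radians per arcsec

d = 501.0 * PC_TO_CM                     # distance to DQ Her (cm)
scale = d * ARCSEC                       # cm per arcsec

# Flux-to-luminosity conversion for the central star:
F_X = 9.4e-14                            # erg/s/cm^2 (0.3-5.0 keV)
print(4.0 * np.pi * d**2 * F_X)          # ~2.8e30 erg/s

# Electron density from the apec normalization of the extended
# component, A = 1e-14 * int(n_e^2 dV) / (4 pi d^2), with
# A = 0.26e-5 cm^-5 (Table 1) and an ellipsoidal volume with
# semi-axes of 8", 8" and 16".
A = 0.26e-5
V = 4.0 / 3.0 * np.pi * (8 * scale) * (8 * scale) * (16 * scale)
print(np.sqrt(A * 4.0 * np.pi * d**2 / (1e-14 * V)))   # ~2 cm^-3
\end{verbatim}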
\begin{table*} \begin{center} \caption{\label{tab:elines} Details of the spectral modelling of the X-ray observations of DQ\,Her.} \setlength{\tabcolsep}{0.7\tabcolsep} \begin{tabular}{llccccccccc} \hline & Instrument & $N_\mathrm{H}$ & $kT$ & $A_{1}^\mathrm{b}$ & $\Gamma$ & $A_{2}^\mathrm{b}$ & $f_\mathrm{X}^\mathrm{c}$ & $F_\mathrm{X}^\mathrm{c}$ & $F_{2}/F_\mathrm{X}^\mathrm{d}$ &$\chi^2$/DoF\\ & & (10$^{20}$~cm$^{-2}$) &(keV) & ($10^{-5}$~cm$^{-5}$)& & ($10^{-5}$~cm$^{-5}$)& (cgs) & (cgs) & & \\ \hline DQ\,Her & ACIS-S & 3.4$\pm$2.0 & 0.77$^{+0.05}_{-0.05}$& 1.33 & 2.45$^{+0.50}_{-0.40}$ & 1.05 & 8.00$\pm$1.10 & 9.40$\pm$1.20 & 0.43 & 1.12\\ Extended& ACIS-S & 1.8$\pm$0.2 & 0.18$^{+0.05}_{-0.07}$& 0.26 & 1.10$^{+0.09}_{-0.09}$ & 0.04 & 0.64$\pm$0.41 & 0.70$\pm$0.47 & 0.40 & 1.16\\ DQ\,Her$+$Extended & EPIC-pn & 3.4 & 0.78$^{+0.06}_{-0.07}$& 1.15 & 2.34$^{+0.25}_{-0.21}$ & 1.20 & 7.60$\pm$1.10 & 8.90$\pm$1.00 & 0.60 & 1.08\\ DQ\,Her$+$Extended & EPIC(pn+MOS)$^\mathrm{a}$& 3.4 & 0.77$^{+0.04}_{-0.05}$& 1.20& 2.37$^{+0.20}_{-0.20}$ & 1.20 & 7.60$\pm$1.10 & 8.90$\pm$0.90 & 0.60 & 0.90\\ \hline \end{tabular}\\ $^\mathrm{a}$Joint model fit of the EPIC pn, MOS1 and MOS2 spectra.\\ $^\mathrm{b}$The normalization parameter is defined as $A \approx 10^{-14}\int n_\mathrm{e}^{2} dV/4 \pi d^2$, where $n_\mathrm{e}$ and $d$ are the electron number density and the distance, respectively.\\ $^\mathrm{c}$ The fluxes are computed for the 0.3--5~keV energy range and are presented in $10^{-14}$~erg~s$^{-1}$~cm$^{-2}$ units.\\ $^\mathrm{d}$The $F_{2}/F_\mathrm{X}$ ratio represents the contribution from the power-law component to the total flux. \end{center} \end{table*} To estimate the electron density of the extended X-ray emission, we used the definition of the normalization parameter (see Table~1), adopting an ellipsoidal morphology with semi-axes of 8$''$, 8$''$ and 16$''$. The electron density of the bipolar emission is estimated to be $n_\mathrm{e}\approx2$~cm$^{-3}$, which corresponds to a mass of the X-ray-emitting material of $\approx3\times$10$^{-6}$~M$_\odot$, well below the typical ejecta mass of nova events \citep[e.g.,][]{Gehrz1998,DellaValle2020} and the ionised mass of DQ\,Her \citep[$2.3\times$10$^{-4}$~M$_\odot$;][]{Santamaria2020}. \subsection{The \emph{XMM-Newton} Spectra} Due to the large PSF of the EPIC cameras, it is not possible to extract independent spectra for the central star and the extended X-ray emission of DQ\,Her. The three EPIC spectra are shown in the bottom-right panel of Figure~\ref{fig:DQ_Her_spec}. Similarly to the \emph{Chandra} ACIS spectrum, the \emph{XMM-Newton} EPIC spectra of DQ\,Her show a main peak at energies around 0.8--1.0~keV with a secondary contribution at energies below 0.7~keV. In addition, these spectra show some contribution from the N\,{\sc vi} triplet at 0.43~keV that is not detected in the \emph{Chandra} spectrum due to the lower sensitivity of ACIS-S at softer energies. We first modelled the EPIC-pn spectrum of DQ\,Her because its count rate is larger than those of the MOS cameras (see Section~2). For simplicity, we fixed the column density to the value obtained from the best-fit model to the ACIS-S data ($N_\mathrm{H}=3.4\times10^{20}$~cm$^{-2}$) and adopted a similar model consisting of an {\it apec} plasma emission model for the hot gas and a power-law component for the non-thermal emission. The best-fit model is consistent with that obtained for the \emph{Chandra} ACIS-S spectrum of the central star of DQ\,Her (see Table~1).
The total luminosity of this model is $L_\mathrm{X}=(2.7\pm0.3)\times10^{30}$~erg~s$^{-1}$, with the power-law component contributing 60\% of the total intrinsic flux. Models simultaneously fitting the three EPIC spectra resulted in very similar best-fit parameters (Table~1). \section{Discussion} The spatial and spectral analyses of the \emph{Chandra} and \emph{XMM-Newton} observations of DQ\,Her presented in Sections~3 and 4 reveal the presence of diffuse X-ray emission with spectral properties differing from those of the central star. Owing to its better angular resolution, $\sim$1$''$ at $\lesssim$1~keV, \emph{Chandra} resolves more clearly the morphology of this emission, which is found to be elongated, extending $\approx$32$''$ along the NE-SW direction at PA$\approx45^\circ$ (Fig.~\ref{fig:DQ_Her_Xrays1} and Fig.~\ref{fig:DQ_Her_Xrays3} - left) with a subtle S-shape. Contrary to the interpretation of \citet{Mukai2003}, this emission is not associated with any specific clump in the nebular remnant, but extends beyond the optical nova shell along its minor axis. This morphology is confirmed in the \emph{XMM-Newton} images presented in Figures~\ref{fig:DQ_Her_Xrays2} and \ref{fig:DQ_Her_Xrays3} (right panel), although at a coarser angular resolution ($\sim6''$ at $\lesssim$1~keV). The misalignment of the arrows marking the tips of the elongated structure in the bottom right panel of Figure~\ref{fig:DQ_Her_Xrays2} is indeed consistent with its S-shape in the \emph{Chandra} images. In addition, the \emph{XMM-Newton} EPIC image in the soft band is suggestive of extended X-ray emission filling the nova shell around DQ\,Her. The extended emission in the soft \emph{XMM-Newton} image can be attributed to thermal emission from hot plasma produced by the nova explosion, an adiabatically-shocked hot bubble analogous to those produced by supernova explosions. This hot bubble would be spatially coincident with the nebular remnant, with no contribution to the bipolar structure detected in X-rays. On the other hand, the origin of the bipolar structure is intriguing. It cannot be attributed to hot gas escaping the nova shell, because its non-thermal component is not expected from shock-heated plasma inside a hot bubble and because it projects along the minor axis of the nova shell, whilst the shell is disrupted along its major axis at PA=$-45^\circ$ \citep{Vaytet2007}. The bipolar X-ray emission in DQ\,Her could be argued to have its origin in the nova explosion itself. Three-dimensional numerical simulations tailored to similar events, such as outbursts in symbiotic stars \citep[see, e.g.,][]{Walder2008,Orlando2017}, might help in interpreting the bipolar X-ray feature in DQ\,Her. In particular, the simulations presented in \citet{Orlando2017} to model the X-ray emission from the symbiotic star V745\,Sco only 17 days after its outburst are able to produce bipolar ejections of X-ray-emitting gas. This X-ray emission arises from a non-isotropic blast wave produced instantaneously at the nova event. Its emission would then be thermal, with physical conditions at early times similar to those of the thermal component of the jet-like feature of DQ\,Her. However, the non-thermal emission and the continuous collimation of the jet $\sim$80~yr after the nova event \citep[][]{Santamaria2020} make these models unsuitable for the case of DQ\,Her.
It is interesting to note that the CV at the center of DQ\,Her belongs to the class of magnetically active intermediate polars (IPs) exhibiting strong magnetic fields of the order of 1--10~MG \citep[see][and references therein]{Barrett2017}, which results in the presence of a truncated accretion disk. Indeed, \citet{Mukai2003} suggested that the X-ray emission from DQ\,Her is produced by X-ray photons scattered in an accretion disk wind, making it an unusual IP system. Furthermore, spectral mapping of DQ\,Her has revealed that the material in the disk is spiralling in \citep{Saito2010}. Thus, we suggest that the elongated structure of DQ\,Her could be interpreted as a magnetized jet produced by {\it hoop stress} at the inner regions of the accretion disk as it is {\it threaded} by the vertical magnetic field \citep{Livio1997}. In this scenario, which has been proven feasible in stellar systems such as the protostellar object HH80 \citep{CG2010}, the jet would be continuously fed by material falling into the accretion disk and then ejected by the hoop stress. The non-thermal X-ray emission from the bipolar feature in DQ\,Her seems to support this scenario. Non-thermal radio emission would lend additional support, as typically found in jets of symbiotic stars \citep[e.g., CH\,Cyg;][]{Ketal2010}. However, an inspection of Jansky Very Large Array (JVLA) observations of DQ\,Her discloses a lack of radio emission from either the central source or an extended component \citep[see][]{Barrett2017}. We note that the synchrotron radiation from relativistic electrons close to the accreting white dwarf in CVs \citep{Chan1982} has been found to be highly variable, as is the case for the central source of DQ\,Her \citep{Pavelin1994}. The disk$+$jet phenomenon is found in a variety of astrophysical systems, from protostellar and young stellar objects \citep{CG2010,CG2012}, evolved low-mass stars \citep[][]{Sahai1998}, and massive X-ray binaries \citep[][]{vanK1992} to AGNs \citep[][]{AlonsoHerrero2018}. White dwarfs can act as the compact object for jet collimation, and indeed disk$+$jet systems have been found in symbiotic stars in which a white dwarf accretes material from a main-sequence or red giant companion, for example, the well-studied R\,Aqr \citep{Ramstedt2018,Schmid2018,Melni2018} or MWC\,560 \citep{SS2009}. On the other hand, the conspicuous absence of jets in CVs has been explained in terms of the particular physical conditions in these systems \citep{SL2004}, although recent observational and theoretical results have found some evidence for transient jets \citep{CK2020}. As for nova shells, \citet{Shara2012} suggested that an elongated structure towards the NE region of GK\,Per was a jet, but \citet{Harvey2016} demonstrated that it is not dynamically related to the nova and has a low velocity. This leaves us only with the claims of jet-like structures in RS\,Oph and M31N 2008-12a, two recurrent novae. The presence of a jet in RS\,Oph is suggested by a jet-like morphological feature with an extent of $\sim1''$ discovered in \emph{Chandra} X-ray observations \citep{Luna2009}. Meanwhile, the presence of a jet in M31N 2008-12a is supported by high-velocity $\sim$4600$\pm$600~km~s$^{-1}$ features detected in \emph{Hubble Space Telescope} (\emph{HST}) Space Telescope Imaging Spectrograph (STIS) spectra \citep{Darnley2017}, which are interpreted as ejecta expanding in a direction close to the line of sight \citep[see also][and references therein]{Darnley2016}.
Since the jet in RS\,Oph has an X-ray extent close to \emph{Chandra}'s PSF and that of M31N 2008-12a is only detected kinematically, DQ\,Her presents the best case for the detection of a resolved X-ray jet in a nova shell. We remark that, unlike RS\,Oph and M31N 2008-12a, which are recurrent novae, the nova shell of DQ\,Her is associated with a CV. \section{SUMMARY} We have presented the analysis of archival \emph{Chandra} ACIS-S and \emph{XMM-Newton} EPIC observations of the CV DQ\,Her. Our analysis has shown the presence of diffuse emission with a bipolar, jet-like morphology that extends up to distances of 16$''$ from the progenitor star along the minor axis of the nova shell, thus protruding away from the nova shell. We have also shown that the \emph{XMM-Newton} soft band image traces emission both from the jet and from a hot bubble filling the nebula around DQ\,Her. The latter has been formed as a result of an adiabatically-shocked blast wave, in close analogy with supernova explosions. The spectrum of the extended X-ray emission is notably different from that of DQ\,Her, exhibiting the presence of emission lines from the O\,{\sc vii} triplet at 0.58~keV, the Ne and Fe complex at 0.9~keV, and Mg\,{\sc xi} at 1.4~keV. The bipolar structure has a plasma temperature of $2\times10^{6}$~K with an X-ray luminosity in the 0.3--5.0~keV energy range of $L_\mathrm{X,diff}=(2.1\pm1.3)\times10^{29}$~erg~s$^{-1}$. Its electron density and estimated mass are $n_\mathrm{e}\approx2$~cm$^{-3}$ and $m_\mathrm{X}\approx 3\times10^{-6}$~M$_{\odot}$, respectively. We propose that the bipolar structure detected with \emph{Chandra} and \emph{XMM-Newton} is a jet. Its non-thermal emission component strongly suggests that it is a magnetized jet, arising as a result of the {\it hoop stress} mechanism observed in other stellar systems. Under this scenario, the jet would be continuously fed by material that falls into the accretion disk and is then ejected by the hoop stress. The S-shaped morphology of the jet could then be associated with the precession of the accretion disk at the core of DQ\,Her or with erratic jet wobbling. The capabilities of the upcoming \emph{Athena} X-ray observatory will make it possible to resolve the morphological and spectral components of DQ\,Her and will help shed light on the scenario proposed in the present work. \section*{Acknowledgements} We would like to thank the referee for a prompt revision and useful comments that helped improve the presentation of our results. We also thank C.\,Carrasco-Gonz\'alez for analysing the available JVLA observations of DQ\,Her. JAT, MAG and GR-L are supported by the UNAM Direcci\'{o}n General de Asuntos del Personal Acad\'{e}mico (DGAPA) through the Programa de Apoyo a Proyectos de Investigaci\'{o}n e Innovaci\'{o}n Tecnol\'{o}gica (PAPIIT) projects IA100318 and IA100720. MAG acknowledges support from grant PGC2018-102184-B-I00, co-funded with FEDER funds. GR-L acknowledges support from Consejo Nacional de Ciencia y Tecnolog\'{i}a (CONACyT) and Programa para el Desarrollo Profesional (PRODEP) Mexico. LS acknowledges support from UNAM DGAPA PAPIIT project IN101819. This work has made extensive use of NASA's Astrophysics Data System. ES thanks CONACyT (Mexico) for a research studentship. This work was based on observations obtained with {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
The scientific results reported in this article are based on observations made by the {\it Chandra} X-ray Observatory and published previously in cited articles.
\section{Introduction} \label{sec1} Preliminary stages of drug discovery involve finding `blocker' or `lead' compounds that bind to a biomolecular target, which is a disease-causing pathogenic protein, in order to inhibit the function of the protein. Such compounds are later used to produce new drugs. These lead compounds have to be identified amidst billions of chemical compounds \cite{XTSY2001,Cli2014}, and hence drug discovery is a tedious process. A complementary problem involves identifying pathogenic proteins amidst non-pathogenic ones, both of which are structurally identical in some respects. For instance, out of five known species of ebolavirus, only four of them are pathogenic to humans (see p. $5$ in \cite{Cli2014}), and a similar example can be found in arenavirus \cite{XLL2014}. Some of these pathogenic proteins might share a common inhibitory mechanism against a lead compound which serves to distinguish them from the non-pathogenic ones \cite{XLL2014}. So, finding potential pathogenic proteins amidst a large collection of biomolecules by testing them against known inhibitory compounds is a problem complementary to that of lead compound discovery. The lead compounds can be abstracted as inhibitor items, the pathogenic proteins as defective items, and the others as normal items. Now, the above problems can be combined and viewed as an inhibitor-defective classification problem on the mixture of pathogenic and non-pathogenic proteins and billions of chemical compounds. This unifies the process of finding both the pathogenic proteins and the lead compounds. An efficient means of solving this problem could potentially be applied in high-throughput screening for drugs and pathogens or computer-assisted drug and pathogen identification. A natural consideration is that, while some pathogenic proteins might be inhibited by some lead compounds, other pathogenic proteins might be immune to some of these lead compounds present in the mixture of items. In other words, each defective item is possibly immune to the presence of some inhibitor items, so that its expression cannot be prevented by those inhibitors when tested together. By definition, an inhibitor inhibits at least one defective. Learning this inhibitor-defective interaction, as well as classifying the inhibitors and defectives efficiently through group testing, is presented in this work. A representation of this model, which we refer to as the Immune-Defectives Graph (IDG) model, is given in Fig. \ref{fig-IDG}. The presence of a directed edge between a pair of vertices $\left(w_{i_{k_1}},w_{j_{k_2}}\right)$ represents the inhibition of the defective $w_{j_{k_2}}$ by the inhibitor $w_{i_{k_1}}$, and the absence of a directed edge between a pair of vertices $\left(w_{i_{k_1}},w_{j_{k_2}}\right)$ indicates that the inhibitor $w_{i_{k_1}}$ does not affect the expression of the defective $w_{j_{k_2}}$ when tested together. A formal presentation of the IDG model and the goals of this paper appear in the next section. \begin{figure}[htbp] \centering \includegraphics[totalheight=2.8in,width=3.8in]{Bipartite_Def_Inh.eps} \vspace{-1cm} \caption{A representation of the IDG model, where ${\cal I}$ represents the set of inhibitors and ${\cal D}$ represents the set of defectives.} \label{fig-IDG} \end{figure} \begin{example}\label{eg-bipartite} An instance of the IDG model is given in Fig. \ref{fig-Eg1}.
In this example, the outcome of a test is positive iff a defective $w_{j_{k_2}}$, for some $k_2$, is present in the test and its associated inhibitor $w_{i_{k_2}}$ does not appear in the test. Observe that if the item-pair $\left(w_{i_{k_1}},w_{j_{k_2}}\right)$, for $k_1 \neq k_2$, appears in a test and $w_{i_{k_2}}$ does not appear in the test, then the outcome is positive. Also, if the item-pair $\left(w_{i_{k_2}},w_{j_{k_2}}\right)$ appears in a test and if $w_{j_{k'_2}}$ also appears in the test but not $w_{i_{k'_2}}$, then the test outcome is positive. But if the appearance of every defective $w_{j_{k'_2}}$ in a test is compensated by the appearance of its associated inhibitor $w_{i_{k'_2}}$ in the test, then the test outcome is negative. The outcome of a test is also negative when none of the defectives appear in a test. \begin{figure}[htbp] \centering \hspace{-1cm} \includegraphics[totalheight=2.8in,width=3.8in]{Bipartite_Eg1.eps} \vspace{-1cm} \caption{An example for the IDG model where each defective is associated with a distinct inhibitor so that $r=d$.} \label{fig-Eg1} \end{figure} \end{example} The IDG model can also be viewed as a generalization of the $1$-inhibitor model introduced by Farach et al. in \cite{FKKM1997}. This model was motivated by errors in blood testing, where blocker compounds (i.e., inhibitors) block the expression of defectives in a test \cite{PhS1994}. It is also motivated by drug discovery applications, where the inhibitors are actually desirable items that inhibit the pathogens \cite{LOGY1997}. In the $1$-inhibitor model, a test outcome is positive iff there is at least one defective and no inhibitor in the test. So, the presence of a single inhibitor is sufficient to ensure that the test outcome is negative. Efficient testing involves pooling different items together in every test so that the number of tests can be minimized \cite{Dor1943}. Such a testing methodology is called group testing. The pooling methodology can be of two kinds, namely non-adaptive and adaptive pooling designs. In non-adaptive pooling designs, every pool constructed for testing is independent of the previous test outcomes, while in adaptive pooling designs, some constructed pools might depend on the previous test outcomes. A $k$-stage adaptive pooling design comprises pool construction and testing in $k$ stages, where the pools constructed for (non-adaptive) testing in the $k^{\text{th}}$ stage depend on the outcomes of the previous stages. While adaptive group testing requires fewer tests than non-adaptive group testing, the latter inherently supports parallel testing of multiple pools. Thus, non-adaptive group testing is more economical (because it allows for automation) and saves time (because the pools can be prepared all at once), both of which are of concern in library screening applications \cite{BBTK-Springer1996}. The $1$-inhibitor model has been extensively studied, and several adaptive and non-adaptive pooling designs for classification of the inhibitors and the defectives are known (see \cite{DeV1998,DMTV2001,ADB2008,CCF2010}). A detailed survey of known non-adaptive and adaptive pooling designs for the $1$-inhibitor model is given in \cite{GEJS2014}. The best (in terms of the number of tests) known non-adaptive pooling design that guarantees high-probability classification of the inhibitors and defectives is proposed in \cite{GEJS2014}.
The non-adaptive pooling design proposed in \cite{GEJS2014} requires $O(d \log n)$ tests in the $r=O(d)$ regime and $O\left(\frac{r^2}{d} \log n\right)$ tests in the $d=o(r)$ regime to guarantee classification of both the inhibitors and defectives with high probability\footnote{The numbers of inhibitors, defectives and normal items are denoted by $r$, $d$, and $n-d-r$ respectively.}. In the small inhibitor regime, i.e., $r=O(d)$, the upper bound on the number of tests matches the lower bound, while in the large inhibitor regime, i.e., $d=o(r)$, the upper bound exceeds the lower bound of $\Omega\left(\frac{r^2}{d \log \frac{r}{d}} \log n\right)$ by a $\log {\frac{r}{d}}$ multiplicative factor. Nonetheless, the $1$-inhibitor model requires that every inhibitor inhibit every defective, which is likely to be a restrictive requirement in practice. So, the IDG model is a more practical variant of the $1$-inhibitor model. A formal presentation of the IDG model and the goals of this paper are given in the next section. {\it Notations:} The Bernoulli distribution with parameter $p$ is denoted by ${\cal B}(p)$, where $p$ denotes the probability of the Bernoulli random variable taking the value one. The set of binary numbers is denoted by $\mathbb{B}$. Matrices are indicated by boldface uppercase letters and vectors by boldface lowercase letters. The row-$i$, column-$j$ entry of a matrix $\mathbf{M}$ is denoted by $\mathbf{M}(i,j)$, and coordinate-$i$ of a vector $\mathbf{y}$ is denoted by $\mathbf{y}(i)$. All the logarithms in this paper are taken to base two. The probability of an event $\cal E$ is denoted by $\Pr \{\cal E\}$. The notation $f(n) \approx g(n)$ represents approximation of a function $f(n)$ by $g(n)$. Mathematically, the approximation denotes that for every $\epsilon>0$, there exists $n_0$ such that for all $n>n_0$, $1-\epsilon<\frac{|f(n)|}{|g(n)|}<1+\epsilon$. \section{The IDG Model} \label{sec2} Consider a set of items ${\cal W}$ indexed as $w_1,\cdots,w_n$ comprised of $r$ inhibitors, $d$ defectives, and $n-r-d$ normal items. It is assumed throughout the paper that $r,d=o(n)$. \begin{definition} An item pair $(w_i,w_j)$, for $i \neq j$, is said to be {\em associated} when the inhibitor $w_i$ inhibits the expression of the defective $w_j$. An item pair $(w_i,w_j)$, for $i \neq j$, is said to be {\em non-associated} if either the inhibitor $w_i$ does not inhibit the expression of the defective $w_j$, or if $w_i$ is not an inhibitor, or if $w_j$ is not a defective. \end{definition} In general, the mention of an item pair $(w_i,w_j)$ need not mean that $w_i$ is an inhibitor and $w_j$ is a defective. This is understood from the context. \begin{definition} An {\it association graph} is a left-to-right directed bipartite graph $\pmb{\cal B}=({\cal I},{\cal D},{\cal E})$, where the set of vertices (on the left hand side) ${\cal I}=\{w_{i_1},w_{i_2}, \cdots, w_{i_r}\} \subset {\cal W}$ denotes the set of inhibitors, the set of vertices (on the right hand side) ${\cal D}=\{w_{j_1},w_{j_2},\cdots,w_{j_d}\} \subset {\cal W}$ denotes the set of defectives, and ${\cal E}$ is a collection of directed edges from ${\cal I}$ to ${\cal D}$. A directed edge $e=(w_{i},w_{k}) \in {\cal E}$, for ${i} \in \{i_1,\cdots,i_r\}$, ${k} \in \{j_1,\cdots,j_d\}$, denotes that the inhibitor $w_{i}$ inhibits the expression of the defective $w_{k}$.
\end{definition} We refer to ${\cal E}({\cal I},{\cal D})$, {\em conditioned on the sets} $({\cal I},{\cal D})$, as the {\it association pattern} on $({\cal I},{\cal D})$. A pooling design is denoted by a test matrix $\mathbf{M}\in \mathbb{B}^{T \times n}$, where the $j^{\text{th}}$ item appears in the $i^{\text{th}}$ test iff $\mathbf{M}(i,j)=1$. {\em A test outcome is positive iff the test contains at least one defective without any of its associated inhibitors}. A positive outcome is denoted by one and a negative outcome by zero. It is assumed throughout the paper that the defectives are not mutually obscuring, i.e., a defective does not function as an inhibitor for any other defective. In other words, the set of inhibitors ${\cal I}$ and the set of defectives ${\cal D}$ are disjoint. The goal of this paper is to identify the association graph or, in informal terms, to learn the IDG. Thus, the objectives are two-fold, as represented by Fig. \ref{fig-obj}. \begin{enumerate} \item Identify all the defectives. \item Identify all the inhibitors and also their association pattern with the defectives. \end{enumerate} \begin{figure}[htbp] \vspace{-0.8cm} \centering \hspace*{-1cm} \includegraphics[totalheight=3.1in,width=4in]{Bipartite_All_Items.eps} \vspace{-1.5cm} \caption{Here, the presence of a directed arrow represents an association between an inhibitor and a defective. The problem statement is to identify the set of inhibitors ${\cal I}$, the set of defectives ${\cal D}$ and the association pattern ${\cal E}({\cal I},{\cal D})$.} \label{fig-obj} \end{figure} This problem is mathematically formulated as follows. Denote the actual sets of inhibitors, normal items, and defectives by ${\cal I}$, ${\cal N}$, and ${\cal D}$ respectively, so that ${\cal I}\cup{\cal N}\cup{\cal D}={\cal W}$. The actual association pattern between the actual inhibitor and defective sets is represented by ${\cal E}({\cal I},{\cal D})$. Let $\hat{\cal I}$, $\hat{\cal N}$, $\hat{\cal D}$, and $\hat{\cal E}(\hat{\cal I},\hat{\cal D})$ denote the declared set of inhibitors, normal items, defectives, and the declared association pattern between $(\hat{\cal I},\hat{\cal D})$, respectively. The target is to meet the following error metric. \begin{align} \label{eqn-error_metric} \underset{{\cal I}, {\cal D},{\cal E}\left({\cal I}, {\cal D}\right)}{\max} \Pr\left\{\left(\hat{\cal I}, \hat{\cal D},\hat{\cal E}\left(\hat{\cal I}, \hat{\cal D}\right)\right)\neq \left({\cal I}, {\cal D},{\cal E}({\cal I},{\cal D})\right)\right\} \leq cn^{-\delta}, \end{align} for some constants $c,\delta>0$. We propose pooling designs and decoding algorithms, and lower bounds on the number of tests required to satisfy the above error metric. It is assumed that the defective and inhibitor sets are distributed uniformly across the items, i.e., the probability that any given set of $r+d$ items constitutes all the defectives and inhibitors is given by $\frac{1}{{n \choose d}{n-d \choose r}}$. It is also assumed that the association pattern ${\cal E}({\cal I},{\cal D})$ is uniformly distributed over all possible association patterns on $({\cal I},{\cal D})$. We consider two variants of the IDG model. The first is the case where the maximum number of inhibitors that can inhibit any defective, denoted by $I_{max}$, is known. We refer to this model as the IDG with side information (IDG-WSI) model. For example, Fig. \ref{fig-Eg1} represents a case where $I_{max}=1$.
While it is known that $I_{max}=1$, it is unknown which among the items $w_1,\cdots,w_n$ represent which inhibitors and defectives. For a given value of $(r,d)$, not all positive integer values of $I_{max}\leq r$ might be feasible. For instance, if $(r,d)=(3,2)$, then $I_{max}=1$ is not feasible because, by definition, each inhibitor is associated with at least one defective, so the three inhibitors require at least three associations in total, whereas $I_{max}=1$ allows at most $I_{max}d=2$. In particular, if \mbox{$(c-1)d< r \leq cd$} for some integer $c \geq 1$, then $I_{max} \geq c$. This immediately follows from the fact that each inhibitor must be associated with at least one defective. The other variant of the IDG model we consider in this paper is the case where there is no side information about the inhibitor-defective associations, which means that each defective can be inhibited by as many as $r$ inhibitors. We refer to this model as the IDG-No Side Information (IDG-NSI) model. For both models, the goals (as stated in the beginning of this section) are the same. The contributions of this paper for the IDG models are summarized below. \begin{itemize} \item The sample complexity of the number of tests sufficient to recover the association graph while satisfying the error metric (\ref{eqn-error_metric}) using the proposed \begin{itemize} \item non-adaptive pooling design is given by $T_{NA}=O\left((r+d)^2 \log n\right)$ and $T_{NA}=O\left((I_{max}+d)^2 \log n\right)$ tests for the IDG-NSI and IDG-WSI models respectively (Theorem \ref{thm-NA_UB}, Section \ref{sec3}). \item two-stage adaptive pooling design is given by $T_{A}=O\left(rd \log n\right)$ and $T_{A}=O\left(I_{max}d \log n\right)$ tests for the IDG-NSI and IDG-WSI models respectively (Theorem \ref{thm-A_UB}, Section \ref{sec3}). \end{itemize} \item In Section \ref{sec4} (Theorem \ref{thm-LB_GTI_ID_NSI} and Theorem \ref{thm-LB_GTI_ID_WSI}), lower bounds of \begin{align*} &\max\left\{\Omega\left((r+d)\log n+rd\right), \Omega\left(\frac{r^2}{\log r} \log n\right),\Omega(d^2)\right\},\\ &\max\left\{\Omega\left((r+d)\log n+I_{max}d\right), \Omega\left(\frac{I_{max}^2}{\log I_{max}} \log n\right),\right.\\&\hspace{0.9cm}\left. \Omega(d^2)\right\} \end{align*}are obtained for non-adaptive pooling designs for the IDG-NSI and IDG-WSI models respectively. The first lower bounds for both models are valid for adaptive pooling designs also. The third lower bound for the IDG-WSI model is valid under some mild restrictions on $I_{max}$ and $r$, the details of which are given in Theorem \ref{thm-LB_GTI_ID_WSI}. \end{itemize} The pooling design matrices $\mathbf{M}$ constructed in this paper use carefully chosen ``random matrices'', i.e., the entries of the matrices are chosen independently from a suitable Bernoulli distribution. Such matrices are known to permit ease of analysis \cite{book-HD}. Notwithstanding the simplicity of the pooling design construction, figuring out a good decoding algorithm with reasonable computational complexity and good lower bounds, especially for non-adaptive pooling designs, is a challenging task. The goodness of the (pooling design, decoding algorithm) pair and of the proposed lower bounds is measured in terms of the closeness of the upper bounds to the lower bounds on the number of tests. For non-adaptive pooling designs, this can be observed from Table \ref{tab-UB_LB}.
For the proposed adaptive pooling design, the upper bound exceeds the lower bound by at most a $\log n$ multiplicative factor for both the IDG-NSI and IDG-WSI models. Also, the proposed decoding algorithms have a computational complexity of $O(n T_{NA})$ and $O(n T_{A})$ time units for the non-adaptive and adaptive pooling designs, respectively. This intuitively means that an item is ``processed'' at most a constant number of times per test. \begin{table*} \captionsetup{font=small} \centering \caption{Necessary and sufficient numbers of tests for various regimes of the number of inhibitors, defectives, and $I_{max}$ are given. In the large inhibitor regime, i.e., $d=O(r)$ for the IDG-NSI model and $d=O(I_{max})$ for the IDG-WSI model, the upper bounds exceed the lower bounds by multiplicative factors of $\log r$ and $\log I_{max}$ for the IDG-NSI and IDG-WSI models respectively. In the small inhibitor regime, i.e., $r=o(d)$ for the IDG-NSI model and $I_{max}=o(d)$ for the IDG-WSI model, the upper bounds exceed the lower bounds by multiplicative factors of $\log n$ for both the IDG-NSI and IDG-WSI models.} \vspace{0.5cm} \begin{tabular}{|c|c|c|} \hline Model & $d=O(r), d=O(I_{max})$ (large inhibitor regime) & $r=o(d), I_{max}=o(d)$ (small inhibitor regime)\\ \hline IDG-NSI & Upper Bound: $O\left(r^2 \log n\right)$ & Upper Bound: $O(d^2 \log n)$\\ & Lower Bound: $\Omega\left(\frac{r^2}{\log r} \log n\right)$ & Lower Bound: $\Omega(d^2)$\\ \hline IDG-WSI & Upper Bound: $O\left(I_{max}^2 \log n\right)$ & Upper Bound: $O(d^2 \log n)$\\ & Lower Bound: $\Omega\left(\frac{I_{max}^2}{\log I_{max}} \log n\right)$ & Lower Bound: $\Omega(d^2)$\\ \hline \end{tabular} \label{tab-UB_LB} \end{table*} {\it Extension of the results on the upper and lower bounds on the number of tests to the case where only upper bounds on the number of inhibitors (given by $R$) and defectives (given by $D$) are known, instead of their exact numbers, is straightforward. The target error metric in (\ref{eqn-error_metric}) is re-formulated as a maximum error probability criterion over all combinations of the number of inhibitors and defectives. The results for this case follow by replacing $r$ by $R$ and $d$ by $D$ in the upper and lower bounds on the number of tests.} There are various generalizations of the $1$-inhibitor model considered in the literature. These models are summarized in the following sub-section to show that the model considered in this paper, to the best of our knowledge, has not been studied in the literature. \subsection{Prior Works} The $1$-inhibitor model can be generalized in various directions, mostly influenced by generalizations of the classical group testing model. The various generalizations are listed below and briefly described. Though none of these generalizations include the model studied in this paper, it is worthwhile to understand the differences between these models and the IDG model. A generalization of the $1$-inhibitor model, namely the $k$-inhibitor model, was introduced in \cite{DeV2003_k_inh}. In the $k$-inhibitor model, an outcome is positive iff a test contains at least one defective and no more than $k-1$ inhibitors. So, the number of inhibitors must be no less than a certain threshold $k$ to cancel the effect of any defective. This model is different from the model introduced in this paper because, in the IDG model, a single associated inhibitor is enough to cancel the effect of a defective.
Further, in the IDG model, none of the inhibitors might be able to cancel the effect of a given defective, because that defective might not be associated with any inhibitor. A model loosely related to the $1$-inhibitor model, namely the mutually obscuring defectives model, was introduced in \cite{Dam1998_MOD}. Here, it was assumed that multiple defectives could cancel the effect of each other, and hence the outcome of a test containing multiple defectives could be negative. Thus, a defective can also function as an inhibitor. However, in this paper, the sets of defectives and inhibitors are assumed to be disjoint. The threshold (classical) group testing model is one where a test outcome is positive if the test contains at least $u$ defectives, negative if it contains no more than $l$ defectives, and arbitrarily positive or negative otherwise \cite{Dam2006_TGT}. This model was combined with the $k$-inhibitor model, and non-adaptive pooling designs for the resulting model were proposed in \cite{HTZWG}. A non-adaptive pooling design for the general inhibitor model was proposed in \cite{HwC2007_GIM}. Here, the goal was to identify all the defectives with no prior assumption on the cancellation effect of the inhibitors on the defectives, i.e., the underlying unknown inhibitor model could be the $1$-inhibitor model, the $k$-inhibitor model, or even the IDG model introduced in this paper. However, the difference from our work is that we aim to identify the association graph, i.e., the cancellation effect of the inhibitors, in addition to identifying the defectives. But this cancellation effect does not include the $k$-inhibitor model cancellation effect, as noted earlier. Group testing on the complex model was introduced in \cite{Tor1999_CM}. In the complex model, a test outcome is positive iff the test contains at least one of the defective sets. So, here the notion of defective items is generalized to sets of defective items called defective sets. This complex model was combined with the general inhibitor model, and non-adaptive pooling designs for identification of the defectives were proposed in \cite{CCH2011_ICM}. Our work differs from \cite{CCH2011_ICM} for the same reasons as stated for \cite{HwC2007_GIM}. Group testing on bipartite graphs was proposed in \cite{LTLW2005_BGT} as a special case of the complex model. Here, the left hand side of the bipartite graph represents the bait proteins and the right hand side represents the prey proteins. It is known a priori which items are baits and which ones are preys. The edges in the bipartite graph represent associations between the baits and the preys. A test outcome is positive iff the test contains associated items, and the goal was to identify these associations. Clearly, this model is different from the IDG model because, in the IDG model, there are three types of items involved and the interactions between the three types of items are different from those in \cite{LTLW2005_BGT}. In the next section, we propose a probabilistic non-adaptive and a probabilistic two-stage adaptive pooling design and decoding algorithms for both variants of the IDG model discussed in this section. \section{Pooling designs and Decoding Algorithm} \label{sec3} In this section, we propose a non-adaptive pooling design and decoding algorithm as well as a two-stage adaptive pooling design and decoding algorithm for the IDG-WSI model. The pooling designs and decoding algorithms for the IDG-NSI model follow from those for the IDG-WSI model by replacing $I_{max}$ by $r$.
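Before describing the designs, the following minimal Python sketch (our own illustration; the function names are hypothetical and not part of the formal development) makes the IDG test semantics concrete: a pool gives a positive outcome iff it contains at least one defective none of whose associated inhibitors is present in the pool. It also shows how the Bernoulli pooling matrices used below can be drawn.
\begin{verbatim}
import numpy as np

def random_pooling_matrix(T, n, p, rng):
    # Rows are pools; entries are i.i.d. Bernoulli(p).
    return (rng.random((T, n)) < p).astype(np.uint8)

def idg_outcome(pool, defectives, inhibitors_of):
    # pool: 0/1 vector of length n; inhibitors_of[u] is the set I(u).
    # Positive iff some defective u is in the pool and none of its
    # associated inhibitors is in the pool.
    return int(any(pool[u] and not any(pool[s] for s in inhibitors_of[u])
                   for u in defectives))

rng = np.random.default_rng(0)
n, T, p1 = 5, 5, 0.4
M = random_pooling_matrix(T, n, p1, rng)
defectives = [1, 3]               # items w_2 and w_4 (0-indexed)
inhibitors_of = {1: {0}, 3: {2}}  # w_1 inhibits w_2; w_3 inhibits w_4
y = np.array([idg_outcome(M[t], defectives, inhibitors_of)
              for t in range(T)])
\end{verbatim}
The association graph encoded by \texttt{inhibitors\_of} above matches the one used later in Example \ref{eg-NA_Algo}.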
{\it Non-adaptive pooling design:} The pools are generated from the matrix $\mathbf{M}_{NA}\in \mathbb{B}^{T_{NA} \times n}$. The entries of $\mathbf{M}_{NA}$ are i.i.d. as ${\cal B}(p_1)$. Test the pools denoted by the rows of $\mathbf{M}_{NA}$, and let the outcome vector be given by $\mathbf{y} \in \mathbb{B}^{T_{NA} \times 1}$. The exact value of $T_{NA}$ is specified in (\ref{eqn-beta1}) and (\ref{eqn-beta_NA}) (where $T_{NA}=\beta_{NA} \log n$) in Sub-section \ref{sec3-subsec1}, and its scaling is given in Theorem \ref{thm-NA_UB} (which appears before the beginning of Sub-section \ref{sec3-subsec1}). The exact value of $p_1$ is also given in Theorem \ref{thm-NA_UB}. {\it Adaptive pooling design:} A set of pools is generated from the matrix $\mathbf{M}_{1} \in \mathbb{B}^{T_1 \times n}$ whose entries are i.i.d. as ${\cal B}(p_1)$. The pools denoted by the rows of $\mathbf{M}_1$ are tested first, and all the defectives are classified from the outcome vector $\mathbf{y}_1 \in \mathbb{B}^{T_1 \times 1}$. Denote the number of items declared defective by ${\hat{d}}$ and the set of declared defectives by $\left\{\hat{u}_1, \hat{u}_2, \cdots, \hat{u}_{\hat{d}}\right\}$. If $\hat{d} \neq d$, an error is declared. We keep these declared defectives aside and generate another pooling matrix $\mathbf{M}_2 \in \mathbb{B}^{T_2 \times (n-d)}$, whose entries are i.i.d. as ${\cal B}(p_2)$, for the rest of the items. Now, test the pools denoted by the rows of the matrix $\mathbf{M}_2$ along with each of the items declared defective; the outcomes are denoted by $\mathbf{y}_{\hat{u}_1},\mathbf{y}_{\hat{u}_2},\cdots,\mathbf{y}_{\hat{u}_{{d}}} \in \mathbb{B}^{T_2 \times 1}$. The tests within each of the two stages are done non-adaptively, as represented in Fig. \ref{fig-adaptive_PD}, and hence the pooling scheme is a two-stage adaptive pooling design. The exact values of $p_1$ and $p_2$ are given in Theorem \ref{thm-A_UB} (which appears before the beginning of Sub-section \ref{sec3-subsec1}). The scaling of $T_1$ and $T_2$ is also given in Theorem \ref{thm-A_UB} and their exact values are given in (\ref{eqn-beta1}) and (\ref{eqn-beta2}) (where $T_{i}=\beta_{i} \log n$). The total number of tests is given by $T_1+dT_2$. \begin{figure}[htbp] \centering \includegraphics[totalheight=2.8in,width=3.6in]{Adaptive_PD.pdf} \vspace{-1.5cm} \caption{The proposed two-stage adaptive pooling design scheme is demonstrated here. The symbol $\bigoplus$ indicates that the pooling matrix $\mathbf{M}_2$ is tested along with the items $\hat{u}_i$ which are declared defective. The items non-associated with $\hat{u}_i$ are determined from the outcome vector $\mathbf{y}_{\hat{u}_i}$, for $i=1,2,\cdots,d$.} \label{fig-adaptive_PD} \end{figure} The defectives are expected to participate in a higher fraction of positive outcome tests than the normal items or the inhibitors. Once the defectives are identified, tests of each one of them with the rest of the items can be used to determine their associations; we show that this can be done non-adaptively as well. The decoding algorithm proceeds in two steps for both the non-adaptive and the adaptive pooling designs. The first step identifies the defectives from the outcome vectors $\mathbf{y}$ and $\mathbf{y}_1$ in the non-adaptive and adaptive pooling designs respectively, according to the fraction of positive outcome tests in which an item participates.
The second step identifies the inhibitors and their associations with the declared defectives using subsets of the outcome vector $\mathbf{y}$ in the non-adaptive pooling design and the outcome vectors $\mathbf{y}_{\hat{u}_1},\mathbf{y}_{\hat{u}_2},\cdots,\mathbf{y}_{\hat{u}_{d}}$ in the adaptive pooling design. Let us define the following notations\footnote{From here on, we reserve the notation $u$ to represent a defective, $v$ to represent a normal item and $s$ to represent an inhibitor.} with respect to the pools represented by $\mathbf{M}_{NA}$ and $\mathbf{M}_1$, which are eventually useful in characterizing the statistics of the different types of items that are used in the decoding algorithm.\\ {\it Notations:} \begin{itemize} \item ${\cal I}(u)$ denotes the set of inhibitors that the defective $u$ is associated with. \item ${\mathscr F}_{u_{k}}$ denotes the event that none of the inhibitors associated with a defective $u_k$ appears in a test, given that the defective $u_k$ appears in the test. \item ${\cal D}_i$ denotes the (arbitrarily) ordered set of all $i$-tuple subsets of the defective set, and ${\cal D}^{(j)}_i \subseteq {\cal P}(\{u_1,\cdots,u_d\})$ denotes the $j^{\text{th}}$-set in ${\cal D}_i$, for $j=1,\cdots,{d \choose i}$, where ${\cal P}(\{u_1,\cdots,u_d\})$ denotes the power set of the set of defectives. \item ${\cal D}(s)$ denotes the set of defectives associated with the inhibitor $s$, and its complement is given by $\overline{{\cal D}(s)}={\cal D}-{\cal D}(s)$. \item ${\overline{{\cal D}(s)}}_i$ denotes the (arbitrarily) ordered set of all $i$-tuple subsets of the defective set $\overline{{\cal D}(s)}$, and the $j^{\text{th}}$-set in $\overline{{\cal D}(s)}_i$ is denoted by ${\overline{{\cal D}(s)}}^{(j)}_i$. \end{itemize} \begin{example}\label{eg-realization-notation} Realizations of the above notations for the association graph in Fig. \ref{fig-Eg1} considered in Example \ref{eg-bipartite} are given below. The inhibitor set is given by ${\cal I}=\{s_1,\cdots,s_r\} \subset {\cal W}$ and the defective set is given by ${\cal D}=\{u_1,\cdots,u_d\} \subset {\cal W}$ with $r=d$. An inhibitor $s_i$ is associated with a distinct defective $u_i$, and so \begin{itemize} \item ${\cal I}(u)$ for $u=u_i$ is given by ${\cal I}(u_i)=\{s_i\}$. \item ${\mathscr F}_{u_{1}}$ represents the event that the inhibitor $s_1$ associated with the defective $u_1$ does not appear in a test, given that the defective $u_1$ appears in the test. \item Realizations of ${\cal D}_i$ for $i=1,2$ are given by \begin{align*} {\cal D}_1=&\left\{\{u_1\},\{u_2\},\cdots,\{u_d\}\right\},\\ {\cal D}_2=&\left\{\{u_1,u_2\},\{u_1,u_3\},\cdots,\{u_1,u_d\},\right.\\ &~\left.\{u_2,u_3\},\cdots,\{u_2,u_d\},\cdots,\{u_{d-1},u_{d}\}\right\}. \end{align*}Realizations of ${\cal D}^{(j)}_i$ for $(i,j)=(1,2)$ and $(i,j)=(2,3)$ are given by \[ {\cal D}^{(2)}_1=\{u_2\},{\cal D}^{(3)}_2=\{u_1,u_4\}. \] \item ${\cal D}(s)$ for $s=s_1$ is given by ${\cal D}(s_1)=\{u_1\}$ and its complement is given by $\overline{{\cal D}(s_1)}=\{u_2,\cdots,u_d\}$. \item Realizations of ${\overline{{\cal D}(s)}}_i$ for $s=s_1$ and $i=1,2$ are given by \begin{align*} {\overline{{\cal D}(s_1)}}_1=&\left\{\{u_2\},\cdots,\{u_d\}\right\},\\ {\overline{{\cal D}(s_1)}}_2=&\left\{\{u_2,u_3\},\{u_2,u_4\},\cdots,\{u_2,u_d\},\right.\\ &~\left.\{u_3,u_4\},\cdots,\{u_3,u_d\},\cdots,\{u_{d-1},u_{d}\}\right\}.
\end{align*}Realizations of ${\overline{{\cal D}(s)}}^{(j)}_i$ with $s=s_1$, for $(i,j)=(1,2)$ and $(i,j)=(2,3)$, are given by \[ \overline{{\cal D}(s)}^{(2)}_1=\{u_3\},\overline{{\cal D}(s)}^{(3)}_2=\{u_2,u_5\}. \] \end{itemize} \end{example} We now define the following statistics corresponding to the different types of items. The following statistics also hold when $\mathbf{y}_1$ is replaced by $\mathbf{y}$, as the entries of both $\mathbf{M}_{NA}$ and $\mathbf{M}_1$ have the same statistics. \begin{align} \nonumber &q^{(u)}_1 \triangleq \Pr \left\{\mathbf{y}_{1}(l)=1|\text{defective $u$ is present in the $l^{\text{th}}$-test}\right\}\\ \label{eqn-LB_q1} &\geq(1-p_1)^{|{\cal I}(u)|} \geq (1-p_1)^{I_{max}}, \\ \nonumber &q^{(v)}_2\triangleq \Pr \left\{\mathbf{y}_{1}(l)=1|\text{normal item $v$ is present in the $l^{\text{th}}$-test}\right\}\\ \label{eqn-q2} &= \sum_{i=1}^{d} p^{i}_1(1-p_1)^{d-i}\sum_{j=1}^{d \choose i}\text{ Pr}\left\{ \underset{u_k \in {\cal D}^{(j)}_i}{\bigcup}{\mathscr F}_{u_{k}}\right\} \triangleq q_2\\ \label{eqn-q2_UB} &\leq \sum_{i=1}^{d} p^{i}_1(1-p_1)^{d-i} {d \choose i}= 1-(1-p_1)^d \triangleq q^{UB}_{2},\\ \nonumber &q^{(s)}_3 \triangleq \Pr \left\{\mathbf{y}_{1}(l)=1|\text{inhibitor $s$ is present in the $l^{\text{th}}$-test}\right\}\\ \label{eqn-q3} &= \sum_{i=1}^{\left|\overline{{\cal D}(s)}\right|} p^{i}_1(1-p_1)^{\left|\overline{{\cal D}(s)}\right|-i}\sum_{j=1}^{\left|\overline{{\cal D} (s)}_i\right|}\text{ Pr}\left\{ \underset{u_k \in {\overline{{\cal D}(s)}}^{(j)}_i}{\bigcup}{\mathscr F}_{u_{k}}\right\},\\\nonumber & \hspace{0.5cm} \text{ if $\left|\overline{{\cal D}(s)}\right| \geq 1$},\\ \nonumber & =0, \text{ otherwise.} \end{align} Since the outer and inner summations in (\ref{eqn-q3}) are over subsets of those in (\ref{eqn-q2}), we have $\underset{s}{\max} ~q^{(s)}_3 \leq q^{(v)}_2=q_2$. It is also intuitive that a positive outcome for an inhibitor in a test is less probable than that for a normal item. The equality in (\ref{eqn-q2}) follows from the fact that a test outcome is positive iff at least one defective appears in the test (which is captured by the outer summation term) and none of the inhibitors associated with at least one of these defectives appears in the test (which is captured by the union of the events ${\mathscr F}_{u_k}$ over $u_k$). A similar explanation holds true for (\ref{eqn-q3}). The upper bound in (\ref{eqn-q2_UB}) follows by upper bounding the probability terms in (\ref{eqn-q2}) by one. In hindsight, the lower bound in (\ref{eqn-LB_q1}) and the upper bound in (\ref{eqn-q2_UB}) can be easily obtained as follows. The lower bound on the positive outcome statistic for a defective item in (\ref{eqn-LB_q1}) follows from the worst case statistics, when all the inhibitors inhibit the expression of every defective. The upper bound on the statistic for a normal item in (\ref{eqn-q2_UB}) follows by using the best case positive outcome statistics, in the absence of inhibitors, where the appearance of any defective gives a positive test outcome. In the sequel, we shall exploit the difference between (\ref{eqn-LB_q1}) and (\ref{eqn-q2_UB}) to identify the defectives, notwithstanding the fact that one of them could be a loose bound for specific association graphs. For example, (\ref{eqn-LB_q1}) is tight for the $1$-inhibitor model, whereas (\ref{eqn-q2_UB}) could be a loose upper bound for the same association graph, depending on the values of $p_1$, $r$, and $d$.
However, fortunately, $p_1$ can be chosen appropriately so that the looseness in the bounds does not affect the scaling of the upper bound on the number of tests required to identify the defectives, and the dominant scaling is determined by the number of tests required to identify the association pattern. Denote the worst case negative outcome statistic for a defective by \begin{align}\label{eqn-bmax} b_{max}=1-(1-p_1)^{I_{max}}. \end{align} Denote the set of tests corresponding to outcome vector $\mathbf{y}$ in which an item $w_j$ participates by ${\cal T}_{w_j}(\mathbf{y})$ and the set of positive outcome tests in which the item $w_j$ participates by ${\cal S}_{w_j}(\mathbf{y})$, for $j=1,2,\cdots,n$. The decoding algorithm is given as follows. \begin{enumerate} \item {\em Step $1$ (Identifying the defectives for \underline{both non-adaptive} \underline{and adaptive pooling designs}):} \\For the non-adaptive pooling design, if $|{\cal S}_{w_j}(\mathbf{y})| > \left|{\cal T}_{w_j}(\mathbf{y})\right|[1-b_{max}(1+\tau)]$ with $b_{max}$ as defined in (\ref{eqn-bmax}), declare the item $w_j$ to be a defective. For the adaptive pooling design, we use the same criterion, replacing $\mathbf{y}$ by $\mathbf{y}_1$. Denote the number of items declared as defectives by ${\hat{d}}$ and the set of declared defectives by $\left\{\hat{u}_1, \hat{u}_2, \cdots, \hat{u}_{\hat{d}}\right\}$. If $\hat{d}\neq d$, declare an error. Denote the remaining unclassified items in the population by $\left\{w'_1,\cdots,w'_{n-{d}}\right\} \triangleq \left\{w_1,\cdots,w_{n}\right\}-\left\{\hat{u}_1,\cdots, \hat{u}_{d}\right\}$. \item {\em Step $2$ (Identifying the inhibitors and their associations for \underline{non-adaptive pooling design}):} \\ Let ${\cal P}_k$ denote the set of pools in $\mathbf{M}_{NA}$ that contain the declared defective $\hat{u}_k$, none of the other declared defectives, and whose outcomes are positive, for $k=1,\cdots,{d}$. A positive outcome for such a pool means that it does not contain any inhibitor from the set ${\cal I}(\hat{u}_k)$, which denotes the set of inhibitors associated with the item $\hat{u}_k$ if $\hat{u}_k$ is indeed a defective. Now, consider only the outcomes corresponding to these pools, denoted by $\mathbf{y}_{{\cal P}_1} \subset \mathbf{y}, \cdots, \mathbf{y}_{{\cal P}_{d}} \subset \mathbf{y}$. The associations of the declared defectives are identified as follows. \begin{itemize} \item For each $k=1 \text{ to } {d}$, declare $(w'_j, \hat{u}_k)$ to be a non-associated inhibitor-defective pair if $w'_j$ participates in at least one of the tests corresponding to the outcome vector $\mathbf{y}_{{\cal P}_k}$, and declare the rest of the items to be associated with $\hat{u}_k$. \end{itemize}The items declared as non-associated for all $k$ are declared to be normal items. If ${\cal P}_k=\emptyset$ for some $k$, declare an error. \item {\em Step $2$ (Identifying the inhibitors and their associations for \underline{adaptive pooling design}):} \\ Let ${\cal S}(\mathbf{y}_{\hat{u}_k})$ denote the set of positive outcome tests corresponding to $\mathbf{y}_{\hat{u}_k}$, i.e., the pools which do not contain any inhibitor from the set ${\cal I}(\hat{u}_k)$ if $\hat{u}_k$ is a defective.
\begin{itemize} \item For each $k=1 \text{ to } {d}$, declare $(w'_j, \hat{u}_k)$ to be a non-associated inhibitor-defective pair if $w'_j$ participates in at least one of the tests in the set ${\cal S}(\mathbf{y}_{\hat{u}_k})$, and declare the rest of the items to be associated with $\hat{u}_k$. \end{itemize} The items declared as non-associated for all $k$ are declared to be normal items. If ${\cal S}(\mathbf{y}_{\hat{u}_k})=\emptyset$ for some $k$, declare an error. \end{enumerate} The following toy example demonstrates the operation of the above decoding algorithm for the non-adaptive pooling design. \begin{example}\label{eg-NA_Algo} Consider the following non-adaptive pooling design matrix $\mathbf{M}_{NA} \in \mathbb{B}^{5 \times 5}$ and the outcome vector $\mathbf{y}\in \mathbb{B}^{5 \times 1}$ for the underlying association graph shown in Fig. \ref{fig-NA_Algo}. The item $w_5$ is a normal item. Here, $r=d=2, n=5, T_{NA}=5$. \begin{figure}[htbp] \centering \includegraphics[totalheight=2in,width=3in]{Ass_Eg_NA_Algo.pdf} \vspace{-1cm} \caption{The underlying association graph for Example \ref{eg-NA_Algo}.} \label{fig-NA_Algo} \end{figure} \begin{align*} \mathbf{M}_{NA} = \begin{bmatrix} 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 1 & 0 & 1\\ 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 1 \end{bmatrix}\Rightarrow \mathbf{y}=\begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \\0 \end{bmatrix} \end{align*}We recall that column-$j$ of the matrix $\mathbf{M}_{NA}$ corresponds to the item $w_j$. The threshold for identifying the defectives in Step $1$ of the decoding algorithm is such that any item $w_j$ that satisfies the condition $\frac{|{\cal S}_{w_j}(\mathbf{y})|}{|{\cal T}_{w_j}(\mathbf{y})|}>\frac{1}{2}$ is declared to be a defective. Now, observe the operation of the decoding algorithm. {\it Step $1$:} We observe that \begin{align*} &\frac{|{\cal S}_{w_1}(\mathbf{y})|}{|{\cal T}_{w_1}(\mathbf{y})|}=\frac{1}{2},\frac{|{\cal S}_{w_2}(\mathbf{y})|}{|{\cal T}_{w_2}(\mathbf{y})|}=\frac{2}{3},\frac{|{\cal S}_{w_3}(\mathbf{y})|}{|{\cal T}_{w_3}(\mathbf{y})|}=\frac{1}{2}, \\ & \frac{|{\cal S}_{w_4}(\mathbf{y})|}{|{\cal T}_{w_4}(\mathbf{y})|}=\frac{2}{3},\frac{|{\cal S}_{w_5}(\mathbf{y})|}{|{\cal T}_{w_5}(\mathbf{y})|}=\frac{1}{2}. \end{align*}Items $w_2$ and $w_4$ are the only items that satisfy the condition $\frac{|{\cal S}_{w_j}(\mathbf{y})|}{|{\cal T}_{w_j}(\mathbf{y})|}>\frac{1}{2}$, and hence are declared defectives. Therefore, the declared defectives are given by $\hat{u}_1=w_2$, $\hat{u}_2=w_4$ and the remaining unclassified items are given by $w'_1=w_1, w'_2=w_3, w'_3=w_5$. {\it Step $2$:} The ``useful'' pools used for identifying the ``non-associations'' are obtained as ${\cal P}_1=\{3\},{\cal P}_2=\{4\}$. This is because the third test, in which $\hat{u}_1$ participates and $\hat{u}_2$ does not, has a positive outcome, and the fourth test, in which $\hat{u}_2$ participates and $\hat{u}_1$ does not, also has a positive outcome. Since the items $w'_2$ and $w'_3$ participate in the third test, $(w'_2,\hat{u}_1)=(w_3,w_2)$ and $(w'_3,\hat{u}_1)=(w_5,w_2)$ are declared to be non-associated inhibitor-defective pairs and $(w'_1,\hat{u}_1)=(w_1,w_2)$ is declared to be an associated inhibitor-defective pair. Similarly, $(w'_1,\hat{u}_2)=(w_1,w_4)$ and $(w'_3,\hat{u}_2)=(w_5,w_4)$ are declared to be non-associated inhibitor-defective pairs and $(w'_2,\hat{u}_2)=(w_3,w_4)$ is declared to be an associated inhibitor-defective pair.
Since the item $w'_3=w_5$ is declared to be non-associated with both $\hat{u}_1$ and $\hat{u}_2$, it is declared to be a normal item. We emphasize that this is a toy example meant to demonstrate the operation of the proposed decoding algorithm, and it is not representative of the values of $p_1$, $\tau$, or $T_{NA}$ for the given values of $r,d,n$. \end{example} \begin{remark} {\it (Step $1$)} The first step in the decoding algorithm, which is the same for both the non-adaptive and adaptive pooling designs, is similar to the defective classification algorithm used in \cite{GEJS2014} for the $1$-inhibitor model. The common underlying principle is that there exists a statistical difference between the defective items and the rest of the items. Hence, with a sufficient number of tests, the defectives can be classified by ``matching'' the tests in which an item participates and the positive outcome tests. The items involved in a large fraction of positive outcome tests are declared to be defectives. A similar decoding algorithm was used in the classical group testing framework with noisy tests \cite{CJSA_TIT2014}. Here, the inhibitors of a defective item, if any, behave like noise due to their probabilistic presence in a test. The (worst case) expected number of positive outcome tests in which a defective participates is at least $|{\cal T}_{w_j}(\mathbf{y})|[1-b_{max}]$. As in \cite{GEJS2014}, the Chernoff-Hoeffding concentration inequality \cite{Hof1963} is used to bound the error probability and to obtain the exact number of tests required to achieve a target (vanishing) error probability. {\it It is important to note that, a priori, it is not clear if a fixed threshold technique can sieve the defectives, under worst case positive outcome statistics, from the rest of the items, under best case positive outcome statistics, with vanishing error probability}. The fact that this is indeed possible will be proved in the following sub-section. \end{remark} \begin{remark} {\it (Step $2$)} In the IDG model, the inhibitors for each defective might be distinct. Hence, an inhibitor for one defective behaves as a normal item from the perspective of another defective. This defective-specific interaction is absent in the $1$-inhibitor model. There, any inhibitor can be identified using any defective, i.e., an inhibitor's behaviour is defective-invariant in the $1$-inhibitor model, which was exploited in identifying the inhibitors in \cite{GEJS2014}. Since each inhibitor's behaviour can be defective-specific in the IDG model, we need to identify the defectives first and then identify their associated inhibitors by observing the interaction of the other items with each of these defectives. \end{remark} The following theorems state the values of the parameters $p_1$, $p_2$, and $\tau$, and the scaling of the number of tests required for the proposed non-adaptive and adaptive pooling designs to determine the association graph with high probability. Similar results hold for the IDG-NSI model upon replacing $I_{max}$ by $r$ in the following theorems. \begin{theorem}[Non-adaptive pooling design]\label{thm-NA_UB} Choose the pooling design matrix $\mathbf{M}_{NA}$ of size $T_{NA} \times n$ with its entries chosen i.i.d. as ${\cal B}(p_1)$ with $p_1=\frac{1}{3(I_{max}+d)}$ for the IDG-WSI model. Test the pools denoted by the rows of the matrix $\mathbf{M}_{NA}$ non-adaptively.
The scaling of the number of tests sufficient to guarantee vanishing error probability (\ref{eqn-error_metric}) using the proposed decoding algorithm with $\tau=\frac{1-b_{max}-q^{UB}_2}{2b_{max}}$ is given by $T_{NA}=O\left((I_{max}+d)^2\log n\right)$, where $q^{UB}_2$ and $b_{max}$ are defined in (\ref{eqn-q2_UB}) and (\ref{eqn-bmax}) respectively. \end{theorem} \begin{theorem}[Adaptive pooling design]\label{thm-A_UB} Choose the pooling design matrices $\mathbf{M}_1$ and $\mathbf{M}_2$ of sizes $T_1 \times n$ and $T_2 \times (n-d)$ with their entries chosen i.i.d. as ${\cal B}(p_1)$ and ${\cal B}(p_2)$ respectively, with $p_1=\frac{1}{3(I_{max}+d)}$ and $p_2=\frac{1}{2I_{max}}$ for the IDG-WSI model. Test the pools denoted by the rows of the matrix $\mathbf{M}_1$ non-adaptively and classify the defectives. Now, test each of the pools from $\mathbf{M}_2$ along with the $d$ classified defectives individually. The scaling of the number of tests sufficient to guarantee vanishing error probability (\ref{eqn-error_metric}) using the proposed decoding algorithm with $\tau=\frac{1-b_{max}-q^{UB}_2}{2b_{max}}$ is given by $T_A=T_1+dT_2=O\left(I_{max}d\log n\right)$, where $q^{UB}_2$ and $b_{max}$ are defined in (\ref{eqn-q2_UB}) and (\ref{eqn-bmax}) respectively. \end{theorem} \begin{remark} The value of $\tau=\frac{1-b_{max}-q^{UB}_2}{2b_{max}}$ chosen in the above theorems implies that the decoding algorithm declares item $w_j$ to be a defective if $\frac{|{\cal S}_{w_j}(\mathbf{y})|}{|{\cal T}_{w_j}(\mathbf{y})|},\frac{|{\cal S}_{w_j}(\mathbf{y}_1)|}{|{\cal T}_{w_j}(\mathbf{y}_1)|}>\frac{(1-b_{max})+q^{UB}_2}{2}$. This threshold is simply the average of the worst-case positive outcome statistic for a defective and the best-case positive outcome statistic for a normal item or an inhibitor. The value of $p_1$ is chosen so that the former is greater than the latter. \end{remark} The following sub-section constitutes the proof of the above theorems. The exact numbers of tests required to guarantee vanishing error probability for recovery of the association graph are also obtained. The proof for the IDG-NSI model is exactly the same, with $I_{max}$ replaced by $r$. \subsection{Error Analysis of the Proposed Algorithm} \label{sec3-subsec1} As mentioned in Section \ref{sec2}, we require that \begin{align*} \underset{{\cal I}, {\cal D},{\cal E}\left({\cal I}, {\cal D}\right)}{\max} \Pr\left\{\left({\cal I}, {\cal D},{\cal E}({\cal I},{\cal D})\right)\neq\left(\hat{\cal I}, \hat{\cal D},\hat{\cal E}(\hat{\cal I}, \hat{\cal D})\right)\right\} \leq cn^{-\delta}, \end{align*}for some constant $c>0$ and fixed $\delta>0$. For the non-adaptive pooling design, we find the number of tests $T_{NA}$ required to upper bound the error probability of the first step of the decoding algorithm by $c_1n^{-\delta_1}$ and that of the second step of the decoding algorithm by $c_2n^{-\delta_2}$, for some constants $c_1$ and $c_2$. A similar approach is taken for the two-stage adaptive pooling design to find the values of $T_1$ and $T_2$. Finally, the values of $\delta_1$ and $\delta_2$ are chosen so that the total error probability is upper bounded by $cn^{-\delta}$, for some constant $c$ and given $\delta>0$.
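As a numeric sanity check of the parameter choices above, the following Python sketch (our own illustration, not part of the formal proof) computes $p_1$, $b_{max}$, $q^{UB}_2$, $\tau$, and the resulting Step $1$ threshold for the IDG-WSI model, and verifies the separation $1-b_{max} > q^{UB}_2$ on which the threshold rests.
\begin{verbatim}
def step1_parameters(i_max, d):
    # Parameter choices from Theorem thm-NA_UB (IDG-WSI model);
    # for the IDG-NSI model, replace i_max by r.
    p1 = 1.0 / (3 * (i_max + d))
    b_max = 1 - (1 - p1) ** i_max           # eqn-bmax
    q2_ub = 1 - (1 - p1) ** d               # eqn-q2_UB
    tau = (1 - b_max - q2_ub) / (2 * b_max)
    threshold = ((1 - b_max) + q2_ub) / 2   # declare defective above this
    # Separation between the defective and non-defective statistics:
    # 1 - b_max >= 1 - i_max*p1 > d*p1 >= q2_ub for p1 = 1/(3(i_max+d)).
    assert 1 - b_max > q2_ub
    return p1, b_max, q2_ub, tau, threshold

print(step1_parameters(i_max=4, d=6))
\end{verbatim}
For any positive $I_{max}$ and $d$, the asserted separation holds with a margin of at least $2/3$, since $1-b_{max}-q^{UB}_2 \geq 1-(I_{max}+d)p_1 = 2/3$; this is the same computation that appears in Sub-section \ref{sec3-subsec1}.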
\subsubsection{Error Analysis of the First Step} Since the first step of the decoding algorithm is the same for both the non-adaptive and adaptive pooling designs, the bounds on the number of tests obtained below for the adaptive pooling design apply to the non-adaptive pooling design as well. The three possible error events in the first step of the decoding algorithm for both non-adaptive and adaptive pooling designs are given by \begin{enumerate} \item A defective is not declared as one. \item A normal item is declared as a defective. \item An inhibitor is declared as a defective. \end{enumerate} Clearly, the defective that has the largest probability of a negative outcome, given by $b_{1_{max}}=\underset{u}{\max}\left(1- ~q^{(u)}_1\right)$, has the largest probability of not being declared as a defective. So, with $T_1=\beta_1 \log n$, the probability of the first error event for all the defectives can be upper bounded (using the union bound over all defectives) as {\small \vspace{-0.3cm} \begin{align*} &d \sum_{t=0}^{T_1} {T_1 \choose t} p^t_1 (1-p_1)^{T_1-t}\hspace{-0.5cm}\sum_{v=tb_{max}(1+\tau)}^{t} {t \choose v} b_{1_{max}}^v (1-b_{1_{max}})^{t-v}\\ &=d \sum_{t=0}^{T_1} {T_1 \choose t} p^t_1 (1-p_1)^{T_1-t}~~\times\\&\hspace{0.5cm}\sum_{v=tb_{1_{max}}+t(b_{max}-b_{1_{max}}+b_{max}\tau)}^{t} {t \choose v} b_{1_{max}}^v (1-b_{1_{max}})^{t-v}\\ &\overset{(a)}{\leq} d \sum_{t=0}^{T_1} {T_1 \choose t} p^t_1 e^{-2t{(b_{max}-b_{1_{max}}+b_{max}\tau)}^2} (1-p_1)^{T_1-t}\\ &\overset{(b)}{=} d \left[1-p_1+p_1 e^{-2{(b_{max}-b_{1_{max}}+b_{max}\tau)}^2}\right]^{\beta_1 \log n}\\ &\overset{(c)}{\leq} d\exp\left\{-\beta_1 p_1 \log n \left(1-e^{-2{(b_{max}-b_{1_{max}}+b_{max}\tau)}^2}\right) \right\} \leq n^{-\delta_1}\\ &\overset{(d)}{\Leftarrow} d\exp\left\{-\beta_1 p_1 \log n \left(1-e^{-2}\right)(b_{max}-b_{1_{max}}+b_{max}\tau)^2\right\}\\ & ~~~~\leq n^{-\delta_1}\\ &\Rightarrow \beta_1 \geq \frac{\left(\frac{\ln d}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})(b_{max}-b_{1_{max}}+b_{max}\tau)^2}, \end{align*}}where $(a)$ follows from the Chernoff-Hoeffding bound \cite{Hof1963}\footnote{If the term ${b_{max}(1+\tau)}>1$, then the probability of the error event under consideration is equal to zero. So, it can be assumed that ${b_{max}(1+\tau)}\leq 1$.}, $(b)$ follows from the binomial expansion, $(c)$ follows from the fact that $1-x \leq e^{-x}$, and $(d)$ follows from the fact that $\left(1-e^{-2 x^2}\right) \geq \left(1-e^{-2}\right)x^2$, for $0<x<1$. Using the fact that $b_{1_{max}} \leq b_{max}$, where $b_{max}$ is defined in (\ref{eqn-bmax}), the following bound on $\beta_1$ suffices. \begin{align} \label{eqn-beta1-def} & \beta_1 \geq \frac{\left(\frac{\ln d}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})(b_{max}\tau)^2}. \end{align}Similarly, to guarantee vanishing probability for the second error event (union-bounded over all normal items) and the third error event (union-bounded over all inhibitors), it suffices that \begin{align} \nonumber & \beta_1\geq\frac{\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})\left(1-b_{max}(1+\tau)-q_2\right)^2}, \\ \label{eqn-beta1-inh} & \beta_1\geq\frac{\left(\frac{\ln r}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})\left(1-b_{max}(1+\tau)-q^{(s)}_{3}\right)^2}. \end{align}Since $\underset{s}{\max}~q^{(s)}_{3}\leq q_2$ and $r=o(n)$, the bound in (\ref{eqn-beta1-inh}) is asymptotically redundant for all values of $\tau$.
So, substituting the upper bound $q^{UB}_2$ on $q_2$ defined in (\ref{eqn-q2_UB}), it suffices that \begin{align} \label{eqn-beta1-ndni} & \beta_1\geq\frac{\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})\left(1-b_{max}(1+\tau)-q^{UB}_2\right)^2}. \end{align}Now, the value of $\tau$ that optimizes the denominators of (\ref{eqn-beta1-def}) and (\ref{eqn-beta1-ndni}) is given by $\tau=\frac{1-b_{max}-q^{UB}_2}{2b_{max}}$. Therefore, we have \begin{align}\label{eqn-beta-max_p} \beta_1 \geq & \max\left\{\frac{4\left(\frac{\ln d}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})\left(1-b_{max}-q^{UB}_2\right)^2},\right.\\ \nonumber &\hspace{1.5cm}\left. \frac{4\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{p_1(1-e^{-2})\left(1-b_{max}-q^{UB}_2\right)^2} \right\}. \end{align}The term $1-b_{max}-q^{UB}_2$ can be lower bounded as follows. \begin{align*} 1-b_{max}-q^{UB}_2 & = (1-p_1)^{I_{max}}- (1-(1-p_1)^d)\\ & \geq 1-(I_{max}+d)p_1. \end{align*} The last lower bound above follows from the facts that \mbox{$(1-p_1)^{I_{max}} \geq (1-I_{max} p_1)$} and $(1-p_1)^d \geq (1-dp_1)$. Optimizing the denominator terms of (\ref{eqn-beta-max_p}) with respect to $p_1$, we have $p_1=\frac{1}{3(I_{max}+d)}$. Hence, using $r,d=o(n)$ in (\ref{eqn-beta-max_p}), for sufficiently large $n$ it suffices that \begin{align} \label{eqn-beta1} \beta_{NA},\beta_1 \geq \frac{27\left(I_{max}+d\right)\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{(1-e^{-2})}, \end{align}where $T_{NA}=\beta_{NA} \log n$. \subsubsection{Error Analysis of the Second Step} In the error analysis of the second step, we assume that all the defectives have been correctly declared. Errors due to error propagation from the first step shall be analyzed later.\\ \underline{\it Non-adaptive pooling design:}\\ The only error event for the non-adaptive pooling design in the second step, denoted by $\mathscr{U}(u_k)$, is that some item non-associated with the defective $u_k$ fails to appear in any pool that contains the defective $u_k$, none of its associated inhibitors ${\cal I}(u_k)$, and none of the other defectives. Note that any such pool necessarily has a positive outcome and hence belongs to ${\cal P}_k$. Further, none of the inhibitors associated with $u_k$ will be declared as non-associated with $u_k$; this follows from the definition of the set of pools ${\cal P}_k$ and the decoding algorithm. The probability of the favourable event that a given non-associated item appears along with the defective $u_k$, but none of its associated inhibitors and none of the other defectives appear, in a pool of $\mathbf{M}_{NA}$ is given by \[b^{(u_k)}\triangleq p^2_1(1-p_1)^{|{\cal I}(u_k)|}(1-p_1)^{d-1}.\]Now, the probability of the error event $\mathscr{U}(u_k)$ is upper bounded by \begin{align*} \Pr\{\mathscr{U}(u_k)\}&\leq(n-d-|{\cal I}(u_k)|)\left(1-b^{(u_k)}\right)^{T_{NA}}\\ &\leq (n-d-|{\cal I}(u_k)|) e^{-T_{NA}b^{(u_k)}} \leq n^{-\delta_2}, \text{ if}\\ & \hspace{-0.7in}~\beta_{NA} \geq \frac{\left(\frac{\ln (n-d-|{\cal I}(u_k)|)}{\ln n}+\delta_2\right)\ln 2}{b^{(u_k)}}. \end{align*}Since $(1-p_1)^{|{\cal I}(u_k)|}\geq (1-p_1)^{I_{max}} \geq (1-I_{max}p_1)$ and $(1-p_1)^{d-1} \geq (1-dp_1)$, substituting for $p_1$, it suffices that \begin{align}\label{eqn-beta_NA} \beta_{NA} \geq \frac{81}{4} (I_{max}+d)^2 \left(\frac{\ln (n-d)}{\ln n}+\delta_2\right)\ln 2.
\end{align}\\ \underline{\it Adaptive pooling design:}\\ As in the non-adaptive pooling design, the only error event, denoted by ${\mathscr E}{(u_k)}$, is that some item $w_j$ not associated with $u_k$ is declared as an associated inhibitor, i.e., the item $w_j$ does not appear in any of the positive outcome tests ${\cal S}(\mathbf{y}_{u_k})$. Clearly, none of the inhibitors associated with $u_k$ will be declared as non-associated with $u_k$. Let $T_2=\beta_2 \log n$. The number of tests required to guarantee vanishing error probability for the error event ${\mathscr E}{(u_k)}$ is evaluated as follows. Let $w_j \notin {\cal I}(u_k)$. Define \begin{align*} a^{(u_k)}_{w_j} &\triangleq \text{ Pr}\left\{\mathbf{y}_{{u}_{k}}(l)=1\left|\right. w_j \text{ is present in $l^{\text{th}}$-test}\right\}\\ & \geq (1-p_2)^{|{\cal I}(u_k)|} \triangleq a^{(u_k)}. \end{align*}Now, we have \begin{align*} & \text{Pr}\left\{{\mathscr E}{(u_k)}\right\}\leq \left(n-d-|{\cal I}(u_k)|\right)\left(1-a^{(u_k)}p_2\right)^{T_2} \leq n^{-\delta_2}\\ \Leftarrow ~& \beta_2 \geq \frac{\left(\frac{\ln \left(n-d-|{\cal I}(u_k)|\right)}{\ln n}+\delta_2\right)\ln 2}{p_2 \left(1-p_2\right)^{|{\cal I}(u_k)|}}. \end{align*}Using the fact that $\left(1-p_2\right)^{|{\cal I}(u_k)|} \geq 1-|{\cal I}(u_k)|p_2$, and substituting $p_2=\frac{1}{2I_{max}}$, we have the following bound. \begin{align}\nonumber &\beta_2 \geq 4 I_{max}\left(\frac{\ln \left(n-d-|{\cal I}(u_k)|\right)}{\ln n}+\delta_2\right)\ln 2\\ \label{eqn-beta2} \Leftarrow ~&\beta_2 \geq 4 I_{max}\left(\frac{\ln \left(n-d\right)}{\ln n}+\delta_2\right)\ln 2. \end{align} \subsubsection{Analysis of Total Error Probability} Assuming that the target total error probability is $O(n^{-\delta})$, the values of $\delta_1$ and $\delta_2$ need to be determined. Towards that end, define the following events. \begin{align*} {\mathscr E}_{ij} \triangleq & \text{ Event of declaring $(w_i,w_j), i\neq j$, to be an associated }\\&\text{ pair,}\\ {\mathscr W} \triangleq & \text{ Event that at least one actual defective has not been}\\&\text{ declared as a defective.} \end{align*}Let ${\cal E}$ denote the correct association pattern for some realization $\{{\cal I},{\cal D}\}$. Now, the total probability of error is upper bounded as \begin{align} \nonumber &\text{Pr}\left\{\underset{(w_i,w_j)\notin {\cal E}}{\bigcup}{\mathscr E}_{ij}\bigcup {\mathscr W}\right\} \leq \sum_{(w_i,w_j)\notin {\cal E}}\text{Pr}\left\{{\mathscr E}_{ij}\right\}+\text{Pr}\left\{{\mathscr W}\right\}\\ \label{eqn-tpe1} \leq &\sum_{w_i\neq w_j} \sum_{w_j\in {\cal N \cup \cal I}}\text{Pr}\left\{{\mathscr E}_{ij}\right\}+\sum_{w_i\notin {\cal I}(w_j)} \sum_{w_j\in {\cal D}}\text{Pr}\left\{{\mathscr E}_{ij}\right\} \\ \nonumber &~~~~ + \text{ Pr}\left\{{\mathscr W}\right\}\\ \label{eqn-tpe2} < & n\times 2n^{-\delta_1}+ d n^{-\delta_2} + n^{-\delta_1}. \end{align}There are two possible ways in which the event ${\mathscr E}_{ij}$, for $(w_i,w_j)\notin {\cal E}$, can occur. One possibility is that the item $w_j$ has been erroneously declared as a defective in the first step of the algorithm, and hence any item $w_i$ declared to be associated with $w_j$ is an erroneous association. The first term in (\ref{eqn-tpe1}) represents this possibility. The other possibility is that $w_j$ has been correctly identified as a defective, but the item $w_i$ is erroneously declared to be associated with $w_j$. The second term in (\ref{eqn-tpe1}) represents this possibility.
The last term accounts for the fact that a defective might be missed in the first step of the algorithm; note that the other two terms do not capture this error event. Finally, (\ref{eqn-tpe2}) follows from the error analysis of the first and second steps of the decoding algorithm. Therefore, if the target error probability is $O(n^{-\delta})$, then choose $\delta_1,\delta_2=\delta+1$. Recall that the numbers of tests required for the non-adaptive and adaptive pooling designs are given by $T_{NA}=\beta_{NA}\log n$ and $T_A=T_1+dT_2=(\beta_1 +d\beta_2) \log n$ respectively. Therefore, from (\ref{eqn-beta1}), (\ref{eqn-beta_NA}), and (\ref{eqn-beta2}) we have that $T_{NA}=O\left((I_{max}+d)^2 \log n\right)$ and $T_A=O\left(I_{max} d \log n\right)$. \subsection{Adaptation for the IDG-NSI Model} The only modification required in the pooling design and decoding algorithm proposed for the IDG-WSI model to adapt it to the IDG-NSI model is that $I_{max}$ is replaced by $r$. For the sake of clarity, we list the changes below. \begin{enumerate} \item The pooling design parameters are chosen as $p_1=\frac{1}{3(r+d)}$ and $p_2=\frac{1}{2r}$. \item In Step $1$ of the decoding algorithm, the threshold for identifying the defectives is chosen as $|{\cal S}_{w_j}(\mathbf{y}_1)| > |{\cal T}_{w_j}(\mathbf{y}_1)|[1-b_{max}(1+\tau)]$, where $b_{max}=1-(1-p_1)^r$. Intuitively, this worst-case threshold corresponds to a scenario where every inhibitor inhibits every defective, i.e., the $1$-inhibitor model. \item The values of $\beta_{NA}$, $\beta_1$ and $\beta_2$ are chosen as \begin{align*} \beta_{NA}\geq &\max \left\{\frac{27\left(r+d\right)\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{(1-e^{-2})}\right.,\\ \nonumber & \left.\frac{81}{4} (r+d)^2 \left(\frac{\ln (n-d)}{\ln n}+ \delta_2\right)\ln 2 \right\},\\ \beta_1 \geq & \frac{27\left(r+d\right)\left(\frac{\ln (n-d-r)}{\ln n}+\delta_1\right)\ln 2}{(1-e^{-2})},\\ \beta_2 \geq & 4 r\left(\frac{\ln \left(n-d\right)}{\ln n}+\delta_2\right)\ln 2. \end{align*} \end{enumerate} Hence, the total number of tests required for the IDG-NSI model scales as $T_{NA}=O\left((r+d)^2 \log n\right)$ for the non-adaptive pooling design and $T_A=O(rd \log n)$ for the two-stage adaptive pooling design. In the next section, lower bounds on the number of tests for non-adaptive and adaptive pooling designs are obtained. \section{Lower Bounds for Non-Adaptive and Adaptive Pooling Design} \label{sec4} In this section, lower bounds on the number of tests required for non-adaptive pooling designs for solving the IDG-NSI and IDG-WSI problems with vanishing error probability are obtained. The first lower bound is obtained by simply counting the entropy in the system, and it holds for adaptive pooling designs as well. The second lower bound is obtained using a lower bound result for the $1$-inhibitor model, which is stated below. We recall that all the inhibitors inhibit the expression of every defective in the $1$-inhibitor model. \begin{theorem}[Th.
$1$, \cite{GEJS2014}]\label{thm-LB_1_Inh_Model} An asymptotic lower bound on the number of tests required for non-adaptive pooling designs in order to classify $r$ inhibitors amidst $d$ defectives and $n-d$ normal items in the $1$-inhibitor model is given by $\Omega\left(\frac{r^2}{d \log{\frac{r}{d}}}\log n\right)$, in the $d=o(r), r=o(n)$ regime\footnote{Though Theorem $1$ in \cite{GEJS2014} is stated for the classification of both the defectives and inhibitors in the $1$-inhibitor model, it is also valid for the classification of the inhibitors alone. This is because the entropy in the system is dominated by the number of inhibitors in the large inhibitor regime.}. \end{theorem} The second lower bound in the following theorem dominates in the large inhibitor regime, i.e., when the number of inhibitors is large compared to the number of defectives. It conveys the number of tests required to identify the inhibitors alone. Though the inhibitors outnumber the defectives, they can be identified only in the presence of an associated defective. So, the worst case scenario (in terms of the number of tests) occurs when most of the inhibitors have to be identified using a single defective, or, in other words, when all the inhibitors happen to inhibit a single defective. The third lower bound in the following theorem exploits the intuition gained from Step $2$ of the decoding algorithm for the non-adaptive pooling design (given in Section \ref{sec3}). This lower bound is obtained by characterizing the minimum number of tests required to identify the associations of every defective. Since two defectives need not be associated with a common inhibitor, it is necessary that no two defectives participate in the same test from which the associations of a defective are identified. Otherwise, the non-associated defective masks the effect of the association of the associated inhibitor-defective pair. This might result in wrongly declaring the associated inhibitor-defective pair to be non-associated. Throughout this section, lowercase letters are used for defectives and inhibitors whose realizations are revealed by a genie, and uppercase letters are used for those whose realizations are unknown. \begin{theorem}\label{thm-LB_GTI_ID_NSI} An asymptotic lower bound on the number of tests required for non-adaptive pooling designs for solving the IDG-NSI problem with vanishing error probability for $r,d=o(n)$ is given by \begin{align*} \max\left\{\Omega\left((r+d)\log n+rd\right), \Omega\left(\frac{r^2}{\log r} \log n\right), \Omega(d^2)\right\}. \end{align*} \end{theorem} \begin{proof} The proof of the first lower bound on the number of tests follows by lower bounding the total number of possible realizations of the sets of inhibitors, defectives, and association patterns. \begin{align} \nonumber T_{NA} &\geq H\left({\cal I}, {\cal D},{\cal E}({\cal I},{\cal D})\right)\\ \label{eqn-LB_ent} &=H({\cal D})+H\left({\cal I}|{\cal D}\right)+H\left({\cal E}({\cal I},{\cal D})|({\cal I},{\cal D})\right)\\ \nonumber &=\log \left({n \choose d} {n-d \choose r} \sum_{i_1=1}^{d}\cdots \sum_{i_r=1}^{d} \prod_{j=1}^{r} {d\choose i_j}\right)\\ \nonumber &\geq \log \left({n \choose d} {n-d \choose r} \prod_{k=1}^r {d \choose \frac{d}{2}}\right)\\ \label{eqn-LB_ent_2} &=\Omega \left((r+d) \log n + rd\right), \end{align}where $i_j$ denotes the number of defectives that the $j^{\text{th}}$-inhibitor can be associated with, and the last step follows by lower bounding the central binomial coefficient as ${d \choose \frac{d}{2}}\geq 2^\frac{d}{2}$.
This lower bound on the number of tests is also valid for {\it adaptive pooling designs}. The second lower bound for non-adaptive pooling designs is obtained as follows. Assume that it is required to identify the inhibitors alone. Clearly, this requires fewer tests than the problem of identifying the association graph. Since the objective is to satisfy the error metric in (\ref{eqn-error_metric}), the error probability criterion \begin{align} \label{eqn-error_metric_real} \Pr\left\{\hat{\cal I}\neq {\cal I}\right\} \leq cn^{-\delta} \end{align}has to be satisfied for all possible association patterns ${\cal E}$ on all possible realizations of $({\cal I},{\cal D})$. Let $PD\text{-}DA$ denote a (pooling design, decoding algorithm) tuple that satisfies (\ref{eqn-error_metric_real}), and let $\mathscr P$ denote the set of all such tuples. Further, let $T_{NA}(PD\text{-}DA, {\cal I},{\cal D}, {\cal E})$ denote the minimum number of tests required by $PD\text{-}DA$ to satisfy (\ref{eqn-error_metric_real}) for a particular realization of $\left({\cal I},{\cal D}, {\cal E}\right)$. We have to determine the lower bound $\underset{{\mathscr P}}{\inf} \underset{\left({\cal I},{\cal D}, {\cal E}\right)}{\sup} T_{NA}$, and we have \begin{align*} \inf_{\mathscr P} \sup_{\left({\cal I},{\cal D}, {\cal E}\right)} T_{NA} \geq \inf_{\mathscr P} \sup_{\left({\cal I},{\cal D}, {\cal E}'\right)} T_{NA}, \end{align*} where ${\cal E}'$ denotes the specific class of association patterns represented in Fig. \ref{fig-ass_LB}. \begin{figure}[htbp] \centering \includegraphics[totalheight=2.5in,width=3.4in]{Ass_LB.pdf} \vspace{-1cm} \caption{The class of association patterns used to obtain the second lower bound, illustrated for some realization of $({\cal I},{\cal D})$. A single defective is associated with all the inhibitors, but none of the other defectives is associated with any inhibitor.} \label{fig-ass_LB} \end{figure}Now, assume that a genie reveals the set of defectives ${\mathscr D}'\triangleq \{u_2,\cdots,u_d\}$ which are not associated with any of the inhibitors. A lower bound for this problem with side information from the genie is clearly a lower bound for the original problem. A lower bound on the number of tests $T'_{NA}$ for this problem is given by \cite{GEJS2014}\footnote{A similar expression is used to obtain Theorem $3$. This is derived formally using Fano's inequality. The steps involved are illustrated in the proof of the third lower bound.} \begin{align*} \sum_{l=1}^{T'_{NA}}H[\mathbf{y}(l)]=\Omega\left(\log {n-d \choose r}\right). \end{align*} Note that the presence of any defective from the set ${\mathscr D}'$ in a pool always gives a positive outcome, and hence provides zero information for distinguishing the inhibitors from the rest of the items, as the entropy of such an outcome is zero. So, we assume that none of the tests contains items from the set ${\mathscr D}'$. Therefore, the inhibitor identification problem for items with the association pattern given in Fig. \ref{fig-ass_LB} is now reduced to the problem of identifying $r$ inhibitors amidst $n-d$ normal items and one defective item in the $1$-inhibitor model, where $d=o(n)$. For this problem, using Theorem \ref{thm-LB_1_Inh_Model}, it follows that a lower bound on the number of tests is given by $T'_{NA}= \Omega\left(\frac{r^2}{\log r} \log n\right)$. Hence, this is also a lower bound on the number of tests required to identify the association graph with vanishing worst case error probability.
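As an aside, the following Python sketch (our own illustration; absolute constants suppressed) compares the orders of the three lower bounds of Theorem \ref{thm-LB_GTI_ID_NSI} with the non-adaptive upper bound of Theorem \ref{thm-NA_UB}, illustrating the large- and small-inhibitor regimes of Table \ref{tab-UB_LB}.
\begin{verbatim}
import math

def nsi_lower_bound_order(n, r, d):
    # Orders of the three bounds of Theorem thm-LB_GTI_ID_NSI
    # (absolute constants suppressed).
    b1 = (r + d) * math.log2(n) + r * d
    b2 = (r * r / math.log2(r)) * math.log2(n) if r >= 2 else 0.0
    b3 = d * d
    return max(b1, b2, b3)

def nsi_upper_bound_order(n, r, d):
    # Order of T_NA from Theorem thm-NA_UB for the IDG-NSI model.
    return (r + d) ** 2 * math.log2(n)

n = 10 ** 6
for r, d in [(1000, 10), (10, 1000)]:  # large- / small-inhibitor regimes
    print((r, d),
          nsi_lower_bound_order(n, r, d),
          nsi_upper_bound_order(n, r, d))
\end{verbatim}
With $n=10^6$, the gap between the printed bounds is a $\log r$ factor in the first regime and a $\log n$ factor in the second, as recorded in Table \ref{tab-UB_LB}.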
The evaluation of the third lower bound is more involved and proceeds as follows. Since the second lower bound is tighter when $r\geq d$, we assume here that $r<d$. Using arguments similar to those for the second lower bound, a lower bound on the number of tests for the following reduced problem is a lower bound for the original problem. Let $\{S_2,\cdots,S_r\}$ denote a set of inhibitors, each of which is associated with exactly one defective $U_d$. The defective $U_d$ is not inhibited by the inhibitor $S_1$. Further, the inhibitor $S_1$ is associated with exactly one of the defectives $\{U_1,\cdots,U_{d-1}\}$. This association graph is depicted in Fig. \ref{fig-LB_d^2}. \begin{figure}[htbp] \centering \includegraphics[totalheight=2.8in,width=3.5in]{Ass_LB_NSI_d_sq.pdf} \vspace{-0.5cm} \caption{A possible association graph, where $r-1$ inhibitors and a single defective are associated only among themselves. The remaining inhibitor is associated with exactly one of the remaining defectives.} \label{fig-LB_d^2} \end{figure}Let a genie reveal the set of inhibitors ${\mathscr I}_{-S_1}\triangleq\{s_2,\cdots,s_r\}$ and the defective $u_d$. The ``residual message'' in the system is now given by $W'\triangleq \left\{S_1,{\cal D}_{-u_d} ,{\cal E}\left(S_1,{\cal D}_{-u_d}\right)\right\}$, where ${\cal D}_{-u_d} \triangleq {\cal D}\backslash \{u_d\}$. For the reduced problem, we have \begin{align}\label{eqn-Pe_max} &\underset{\underset{{\cal E}\left({S}_1, {{\cal D}}_{-u_d}\right)}{{S}_1, {{\cal D}}_{-u_d},}}{\max} \Pr\left\{\left(\hat{S}_1, \hat{{\cal D}}_{-u_d},\hat{\cal E}\left(\hat{S}_1, \hat{{\cal D}}_{-u_d}\right)\right)\right.\\\nonumber& \left.\hspace{2cm}\neq \left({S}_1, {{\cal D}}_{-u_d},{\cal E}\left({S}_1, {{\cal D}}_{-u_d}\right)\right)\right\}\\ \label{eqn-Pe_avg} \geq & ~\mathbb{E}_{f}\left[\Pr\left\{\left(\hat{S}_1, \hat{{\cal D}}_{-u_d},\hat{\cal E}\left(\hat{S}_1, \hat{{\cal D}}_{-u_d}\right)\right)\right.\right.\\\nonumber& \left.\left.\hspace{2cm}\neq \left({S}_1, {{\cal D}}_{-u_d},{\cal E}\left({S}_1, {{\cal D}}_{-u_d}\right)\right)\right\}\right]\triangleq P_{e_{avg}}, \end{align}where $f$ denotes a probability mass function of the association graph such that \begin{align}\label{eqn-f} &\Pr\{(s_1,u)\in {\cal E}\left({s}_1, {{\mathscr D}}_{-u_d}\right)\}=\frac{1}{d-1}, \quad u \in {\mathscr D}_{-u_d},\\ \nonumber &\Pr\{{s}_1, {{\mathscr D}}_{-u_d}\}=\frac{1}{{n-r \choose d-1}(n-r-d+1)}, \end{align}for any realization of $(S_1,{\cal D}_{-u_d})$ given by $({s}_1, {{\mathscr D}}_{-u_d})$. So, a lower bound on the number of tests required to achieve vanishing average error probability $P_{e_{avg}}$ in (\ref{eqn-Pe_avg}) is also a lower bound on the number of tests required to achieve vanishing maximum error probability in (\ref{eqn-Pe_max}). These in turn give a lower bound on the number of tests for the original problem.
Using Fano's inequality, we have\footnote{For brevity, we omit the conditioning on ${\cal E}\left({\mathscr I}_{-S_1},u_d\right)$, which is also revealed by the genie, in the entropy and mutual information terms.} {\begin{align}\nonumber &H[{\cal E}\left(S_1,{\cal D}_{-u_d}\right)|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d]\\ \label{eqn-sub1}&= \frac{1}{{n-r \choose d-1}(n-r-d+1)}\sum_{S_1,{\cal D}_{-u_d}} \log (d-1) =\log (d-1) \\\nonumber &\leq 1 + P_e H[{\cal E}\left(S_1,{\cal D}_{-u_d}\right)|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d] \\\nonumber&+ I\left[{\cal E}\left(S_1,{\cal D}_{-u_d}\right);\mathbf{y}|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d\right]\\ \label{eqn-cond_red_H} &\leq 1 + P_e \log (d-1) + H\left[\mathbf{y}|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d\right], \end{align}}where (\ref{eqn-sub1}) is obtained using (\ref{eqn-f}), and $P_e\triangleq \Pr\{\hat{\cal E}\left(S_1,{\cal D}_{-u_d}\right)\neq {\cal E}\left(S_1,{\cal D}_{-u_d}\right)\}\leq P_{e_{avg}}$ denotes the average error probability in declaring the residual association pattern\footnote{The inequality holds because $\mathbb{E}\left[\mathbb{I}\left(\hat{\cal E}\left(S_1,{\cal D}_{-u_d}\right)\neq {\cal E}\left(S_1,{\cal D}_{-u_d}\right) \right)\right]$ \mbox{$\leq \mathbb{E}\left[\mathbb{I}\left(\left(\hat{S}_1,\hat{\cal D}_{-u_d},\hat{\cal E}\left(S_1,{\cal D}_{-u_d}\right)\right)\neq \left(S_1,{\cal D}_{-u_d},{\cal E}\left(S_1,{\cal D}_{-u_d}\right)\right) \right)\right]$}, where $\mathbb{E}[.]$ and $\mathbb{I}(.)$ represent the expectation operator and the indicator function respectively.}. The summation term in (\ref{eqn-sub1}) denotes summation over all possible realizations of $S_1,{\cal D}_{-u_d}$. Using the fact that conditioning reduces entropy in (\ref{eqn-cond_red_H}), we have \begin{align} \nonumber &{\footnotesize \sum_{l=1}^{T_{NA}}} H\left[\mathbf{y}(l)|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d\right]\\ \label{eqn-lb_no_tests} &\geq (1- P_e)\log (d-1) -1. \end{align} The presence of items from the set $\{{\mathscr I}_{-S_1},u_d\}$ in a test can either reduce the entropy of the test outcome or leave it unaffected. So, we consider only pooling designs that do not contain any item from the set $\{{\mathscr I}_{-S_1},u_d\}$. Therefore, the entropy of a test outcome depends only on the realization of $S_1,{\cal D}_{-u_d}$, i.e., \begin{align*} & H\left[\mathbf{y}(l)|S_1,{\cal D}_{-u_d},{\mathscr I}_{-S_1},u_d\right]\\ =&H\left[\mathbf{y}(l)|S_1,{\cal D}_{-u_d}\right]\\=&\sum_{S_1,{\mathscr D}_{-u_d}}\Pr\left\{s_1,{\mathscr D}_{-u_d}\right\} H\left[\mathbf{y}(l)|s_1,{\mathscr D}_{-u_d}\right]. \end{align*} Suppose that we are given a pool of $g_l$ items for the $l^\text{th}$ test. The entropy of the $l^\text{th}$ test outcome is non-zero only for those realizations of $S_1,{\cal D}_{-u_d}$ for which the $l^\text{th}$ pool contains exactly one of the defectives in ${\cal D}_{-u_d}$ and the inhibitor $S_1$. This is because, otherwise, there is no randomness in the test outcome. There are $g_l(g_l-1){n-r-g_l \choose d-2}$ such possible realizations for $2\leq g_l\leq (n-r-d+2)$, and none for $g_l=0,1$ and for $g_l> (n-r-d+2)$.
For each of these realizations, with $2\leq g_l\leq n-r-d+2$, the entropy of the test outcome is given by \[h\triangleq \frac{1}{d-1}\log (d-1)+\frac{d-2}{d-1}\log \left(\frac{d-1}{d-2}\right).\] Therefore, we have \begin{align}\nonumber &H\left[\mathbf{y}(l)|S_1,{\cal D}_{-u_d}\right]\\\nonumber=&\frac{1}{{n-r \choose d-1}(n-r-d+1)} g_l(g_l-1){n-r-g_l \choose d-2} h\\ \label{eqn-LB_sub_gopt} <& \frac{1}{{n-r \choose d-1}(n-r-d+1)} g^2_l {n-r-g_l \choose d-2} h. \end{align}The term dependent on $g_l$ is re-written as \[g^2_l {n-r-g_l \choose d-2}=\frac{1}{(d-2)!}f(g_l),\] where \[f(g_l)\triangleq g^2_l \prod_{j=0}^{d-3} (n-r-g_l-j). \]We now maximize the above term with respect to $g_l\in [2,n-r-d+2]$ to obtain a lower bound on the number of tests. The following lemma gives the approximate optimum value of $g_l$ (denoted by $g_{opt}$). It is shown that $f(g_l) > f(g_l+\epsilon)$ for all $g_l \geq g_{opt}$ and $0<\epsilon\leq 1$, and $f(g_l-\epsilon) < f(g_l)$ for all $g_l \leq g_{opt}$. Since $g_{opt}$ is independent of $l$, hereupon the subscript $l$ is dropped. It must be noted that $(g+\epsilon),(g-\epsilon)\in [2,n-r-d+2]$. \begin{lemma} There exists $n_0$ so that for all $n \geq n_0$, the optimum value of $g$ that maximizes $f(g)$ is given by $g_{opt}\triangleq \frac{2n}{d}+k$, where $k=o\left(\frac{2n}{d}\right)$. \end{lemma} \begin{proof} To ensure $f(g) > f(g+\epsilon)$ for all $g\geq \frac{2n}{d-2}\triangleq g_1$ and $0<\epsilon \leq 1$, it suffices that \begin{align}\nonumber &g^2 \prod_{j=0}^{d-3} (n-r-g-j) > (g+\epsilon)^2 \prod_{j=0}^{d-3} (n-r-g-\epsilon-j)\\\nonumber \Leftrightarrow & \prod_{j=0}^{d-3} \left(1+ \frac{\epsilon}{n-r-g-\epsilon-j}\right) > \left(1+\frac{\epsilon}{g}\right)^2\\\nonumber \Leftarrow & \left(1+ \frac{\epsilon}{n-r-g-\epsilon}\right)^{d-2} > \left(1+\frac{\epsilon}{g}\right)^2\\\nonumber \Leftarrow & \left(1+ \frac{\epsilon}{n-r-g-\epsilon}\right)^{d-2} > e^{\frac{2\epsilon}{g}}\\ \label{eqn-ineq_gopt} \Leftrightarrow & \frac{g(d-2)}{2\epsilon}\ln \left(1+ \frac{\epsilon}{n-r-g-\epsilon}\right)> 1. \end{align}Since the above function is increasing in $g$, it is sufficient to prove that the above inequality is satisfied for $g=g_1$. Note that, since $r=o(n)$ and $d \underset{n\rightarrow \infty}{\longrightarrow} \infty$, we have $n-r-g_{1}=\Omega (n)$. So, in order to satisfy (\ref{eqn-ineq_gopt}) for all $n \geq n_0$ and some finite positive integer $n_0$ at $g=g_1$, it suffices that \begin{align*} \frac{1}{1-\frac{2}{d-2}-\frac{2}{n}} > 1, \end{align*}which is true. The above inequality follows by using the approximation $\ln (1+x)\sim x$ in (\ref{eqn-ineq_gopt}), for $x\ll 1$. To ensure $f(g-\epsilon) < f(g)$ for all $g\leq \frac{2n}{d} -4 \triangleq g_{2}$, it is required that \begin{align}\nonumber &(g-\epsilon)^2 \prod_{j=0}^{d-3} (n-r-g+\epsilon-j) < g^2 \prod_{j=0}^{d-3} (n-r-g-j)\\\nonumber \Leftrightarrow & \prod_{j=0}^{d-3} \left(1+ \frac{\epsilon}{n-r-g-j}\right) < \left(1+\frac{\epsilon}{g-\epsilon}\right)^2\\\nonumber \Leftarrow & \left(1+ \frac{\epsilon}{n-r-g-d+3}\right)^{d-2} < \left(1+\frac{\epsilon}{g-\epsilon}\right)^2\\\nonumber \Leftarrow & \exp \left( \frac{(d-2)\epsilon}{n-r-g-d+3}\right) < \left(1+\frac{\epsilon}{g-\epsilon}\right)^2\\ \label{eqn-ineq_gopt1} \Leftrightarrow & \frac{2(n-r-g-d+3)\ln \left(1+\frac{\epsilon}{g-\epsilon}\right)}{(d-2)\epsilon}> 1. \end{align}Since the above function is decreasing in $g$, it is sufficient to prove that the above inequality is satisfied for $g=g_2$.
In order to satisfy (\ref{eqn-ineq_gopt1}) for all $n \geq n_0$ and some finite positive integer $n_0$ at $g=g_2$, it suffices that \begin{align} \label{eqn-ineq_gopt2} & \frac{2(n-r-g_2-d+3)\epsilon}{\left(g_{2}-{\epsilon}\right)(d-2)\epsilon} >1\\ \nonumber \Leftrightarrow & \frac{(1-\frac{r}{n}-\frac{2}{d}-\frac{d}{n}+\frac{7}{n})}{\left(1-\frac{d(4+\epsilon)}{2n}\right)\left(1-\frac{2}{d}\right)} >1\\\nonumber \Leftrightarrow & \frac{(1-\frac{r}{n}-\frac{2}{d}-\frac{d}{n}+\frac{7}{n})}{1-\frac{2}{d}-\frac{2d+0.5\epsilon}{n}+\frac{4+\epsilon}{n}} >1, \end{align}which is true because $r <d$\footnote{Recall that this was assumed at the beginning of the proof of the third lower bound.}. The inequality (\ref{eqn-ineq_gopt2}) is obtained by using the approximation $\ln (1+x)\sim x$, for $x\ll 1$, in (\ref{eqn-ineq_gopt1}). Therefore, we have $g_{opt} \in \left[\frac{2n}{d}-4,\frac{2n}{d-2}\right]$, and so $g_{opt}=\frac{2n}{d}+k$, for $k=o\left(\frac{2n}{d}\right)$. \end{proof} From (\ref{eqn-LB_sub_gopt}) and (\ref{eqn-lb_no_tests}), using the approximation $h \approx \frac{1}{d} \log d$, an asymptotic lower bound on the number of tests for vanishing error probability is given by \begin{align}\label{eqn-scaling} T_{NA} \geq \Omega\left(d\frac{{n-r \choose d-1}(n-r-d+1)}{ g^2_{opt} {n-r-g_{opt} \choose d-2}}\right). \end{align}We now show that the fractional term above scales as $d$. \begin{align}\nonumber &\frac{{n-r \choose d-1}(n-r-d+1)}{ g^2_{opt} {n-r-g_{opt} \choose d-2}}\approx \frac{{n-r \choose d-1}(n-r-d+1)}{ \frac{4n^2}{d^2} {n-r-g_{opt} \choose d-2}}\\ \nonumber = &\frac{{\prod_{i=0}^{d-2}(n-r-i)}(n-r-d+1)}{ \frac{4n^2}{d^2} (d-1) {\prod_{i=0}^{d-3}(n-r-g_{opt}-i)}}\\ \label{eqn-LB_approx1} \approx &\frac{d}{4} \prod_{i=0}^{d-3}\frac{n-r-i}{n-r-g_{opt}-i}\\ \nonumber =& \frac{d}{4} \prod_{i=0}^{d-3}\left(1+\frac{g_{opt}}{ n-r-g_{opt}-i}\right) \\ \label{eqn-LB_approx2} \geq & \frac{d}{4} \left(1+\frac{g_{opt}}{ n-r-g_{opt}}\right)^{d-2} \approx \frac{d}{4}e^{\frac{g_{opt}(d-2)}{ n-r-g_{opt}}} \approx d\frac{e^2}{4}. \end{align} It must be noted that these approximations hold up to constant multiplicative ratios, and hence do not affect the scaling of the number of tests. The approximations in (\ref{eqn-LB_approx1}) and (\ref{eqn-LB_approx2}) make use of the fact that $r,d=o(n)$ and $g_{opt}=\frac{2n}{d}+o\left(\frac{2n}{d}\right)$. Therefore, from (\ref{eqn-scaling}) and (\ref{eqn-LB_approx2}), we have $T_{NA}=\Omega(d^2)$. \end{proof} The lower bounds for the IDG-WSI model are obtained in the following theorem. Since we are interested in asymptotic lower bounds, we assume that the limits $\underset{n \rightarrow \infty}{\lim} \frac{I_{max}}{r}$ and $\underset{n \rightarrow \infty}{\lim} \frac{r}{dI_{max}}$ exist, and $I_{max} \underset{n \rightarrow \infty}{\longrightarrow} \infty$. The ideas used to obtain the following theorem are similar to those used in Theorem \ref{thm-LB_GTI_ID_NSI}. However, the ``second constraint'' (mentioned in the proof of the following theorem) needs to be accounted for. \begin{theorem}\label{thm-LB_GTI_ID_WSI} An asymptotic lower bound on the number of tests required for non-adaptive pooling designs for solving the IDG-WSI problem with vanishing error probability for $r,d=o(n)$ is given by \begin{align*} \max\left\{\Omega\left((r+d)\log n+I_{max}d\right), \Omega\left(\frac{I^2_{max}}{\log I_{max}} \log n\right)\right\}.
\end{align*} An additional asymptotic lower bound is given by $\Omega(d^2)$ when either $1)$ $r= (c-1)d + kd$, for some constant $0<k<1$ and $I_{max}=c$ or $2)$ $r= (c-1)d + k$ and $(c-1)d< r \leq cd$, for positive integer $k=o(d)$ and $I_{max}=c$ or $3)$ $(c-1)d< r \leq cd$ and $I_{max} \geq c+1$. \end{theorem} \begin{proof} The first lower bound is obtained by lower bounding $H\left({\cal E}({\cal I},{\cal D})|({\cal I},{\cal D})\right)$ in (\ref{eqn-LB_ent}) as follows. Two constraints need to be satisfied while counting the entropy of the association pattern. \begin{itemize} \item {\it First constraint:} The minimum degree of a vertex in ${\cal I}$ is one. \item {\it Second constraint:} The maximum degree of a vertex in ${\cal D}$ is no more than $I_{max}$. \end{itemize} We now consider the three possible cases below and show that in each case the lower bound on the number of association patterns scales exponentially in $I_{max}d$. Let \mbox{$(c-1)d<r\leq cd$}, for some positive integer $c$, and so $I_{max}\geq c$. Define $\alpha_1=\underset{n \rightarrow \infty}{\lim} \frac{c}{I_{max}}$ and $\alpha_2=\underset{n \rightarrow \infty}{\lim} \frac{I_{max}}{r}$. {\it Case $1$:} $\alpha_1 <1$ and $\alpha_2 < 1$. There exist positive constants $\beta_1<1$ and $\beta_2<1$ so that $c \leq \beta_1 I_{max}$ and $I_{max} \leq \beta_2 r$, $\forall n \geq n_0$. Define an association pattern where each defective, starting from $u_1$, is assigned a disjoint set of $c$ inhibitors until every inhibitor is covered. Therefore, we have $\underset{u_i \in {\cal D}}\max ~|{\cal I}(u_i)| \leq c$. Since the first constraint is satisfied, each defective is now free to choose an association pattern so that $\underset{u_i \in {\cal D}}\max ~|{\cal I}(u_i)| \leq I_{max}$. The number of such possible association patterns can be lower bounded by \begin{align*} {r \choose I_{max}-c}^d \geq \left(\frac{r}{I_{max}-c}\right)^{(I_{max}-c)d}. \end{align*}Thus, the entropy of the association pattern in this case scales (asymptotically) as the logarithm of the above quantity, which is given by $\Omega(I_{max}d)$. {\it Case $2$:} $\alpha_1 <1$ and $\alpha_2 = 1$. There exist positive constants $\beta_1<1$ and $\beta_2\leq 1$ with $\beta_2>\beta_1$ so that $c \leq \beta_1 I_{max}$ and $I_{max} \geq \beta_2 r$, $\forall n \geq n_0$. So, we have $I_{max}-c \geq (\beta_2 -\beta_1) r$, $\forall n \geq n_0$. Using arguments similar to those in Case $1$, where after satisfying the first constraint, $\beta_2 r - c$ inhibitors are chosen to associate with each defective, we now have that the entropy of the association pattern in this case scales asymptotically as $\Omega(rd)=\Omega({I_{max}d})$. {\it Case $3$:} $\alpha_1 =1$. Note that this case constitutes a large inhibitor regime with respect to the number of defectives (because $I_{max}\rightarrow \infty$). There exists a positive constant $\beta_1 \leq 1$ so that $c \geq \beta_1 I_{max}$, $\forall n \geq n_0$.
The number of ways of assigning each defective to a disjoint set of $(c-1)$ inhibitors is given by {\small\begin{align*} &{r \choose c-1}{r-(c-1) \choose (c-1)}{r-2(c-1) \choose (c-1)}\cdots {r-(d-1)(c-1) \choose (c-1)}\\ &=\frac{r!}{((c-1)!)^d (r-d(c-1))!}\\ &\underset{(a)}{\geq} \frac{\sqrt{2\pi}r^{r+\frac{1}{2}}e^{-r}}{e^2 (c-1)^{d(c-1+\frac{1}{2})}e^{-d(c-1)}(r-d(c-1))^{r-d(c-1)+\frac{1}{2}} e^{-(r-d(c-1))}}\\ &\underset{(b)}{\geq} \frac{\sqrt{2\pi}(d(c-1))^{d(c-1)+\frac{1}{2}}e^{-dc}}{e^2 (c-1)^{d(c-1+\frac{1}{2})}e^{-d(c-1)}(r-d(c-1))^{r-d(c-1)+\frac{1}{2}} e^{-(r-d(c-1))}}\\ &\underset{(c)}{\geq} \frac{\sqrt{2\pi}(d(c-1))^{d(c-1)+\frac{1}{2}}e^{-dc}}{e^2 (c-1)^{d(c-1+\frac{1}{2})}e^{-d(c-1)}d^{d+\frac{1}{2}}}\\ &=\frac{\sqrt{2\pi}(c-1)^{\frac{1}{2}}d^{d(c-2)}}{e^2(c-1)^{\frac{d}{2}}e^d}=\frac{\sqrt{2\pi}(c-1)^{\frac{1}{2}}d^{d(c-2)}}{e^2 d^{\frac{d\log_d(c-1)}{2}}d^{d\log_d e}}\\ &=\frac{\sqrt{2\pi}}{e^2}(c-1)^{\frac{1}{2}}d^{d\left(c-\frac{\log_d(c-1)}{2}-\log_d e-2\right)}, \end{align*}}where $(a)$ follows from Stirling's lower and upper bounds for factorial functions, and $(b)$ and $(c)$ follow from the fact that $d(c-1) < r \leq cd$. Observe that the remaining $r-d(c-1)$ inhibitors can each be assigned to one defective without violating the second constraint. Thus, the entropy of the association pattern in this case scales asymptotically as $\Omega(cd)=\Omega(\beta_1 I_{max}d)=\Omega({I_{max}d})$. \begin{figure}[htbp] \centering \includegraphics[totalheight=2.5in,width=3.4in]{Ass_LB_WSI.pdf} \caption{A possible association pattern where, without loss of generality, $u_1$ is assumed to be a defective for which $|{\cal I}(u_1)|=I_{max}$. The set of inhibitors and defectives that are associated only among themselves (which the genie reveals) are inside the dotted ellipse.} \label{fig-ass_LB_WSI} \end{figure} The second lower bound is obtained as shown below. There could exist at least one defective $u_1 \in {\cal D}$ with $|{\cal I}(u_1)|=I_{max}$. Consider an association pattern where ${\cal I}(u_1) \cap {\cal I}(u_k) = \emptyset$, for $u_k \in {\cal D}, k\neq 1$, as depicted in Fig. \ref{fig-ass_LB_WSI}. Now, we use an argument similar to that in the proof of the second lower bound in Theorem \ref{thm-LB_GTI_ID_NSI}. Let a genie reveal the inhibitor subset ${\cal I}-{\cal I}(u_1)$, the defective subset ${\cal D}-u_1$ and their associations. Now, none of the items from the sets ${\cal I}-{\cal I}(u_1)$ and ${\cal D}-u_1$ is useful in distinguishing the inhibitors in the set ${\cal I}(u_1)$ from the unknown defective and the normal items. This is because the entropy of an outcome is zero if the test contains some defective from ${\cal D}-u_1$ but none of its associated inhibitors (which are only from the set ${\cal I}-{\cal I}(u_1)$), since such a test outcome is always positive. The entropy of an outcome does not change if any of the inhibitors ${\cal I}-{\cal I}(u_1)$, with or without its associated defectives (which are only from the set ${\cal D}-u_1$), is present in the test. Thus, the problem is now reduced to the $1$-inhibitor problem of finding $I_{max}$ inhibitors amidst $n-(r-I_{max})-(d-1)$ normal items and one (unknown) defective. A lower bound on the number of non-adaptive tests for this problem is clearly a lower bound on the number of tests for the original problem of determining the association graph for the IDG-WSI model.
Since $r,d=o(n)$ and $I_{max} \underset{n \rightarrow \infty}{\longrightarrow} \infty$, using Theorem \ref{thm-LB_1_Inh_Model}, we get the lower bound $\Omega\left(\frac{I^2_{max}}{\log I_{max}} \log n\right)$. The third lower bound is obtained below for the case where $r= (c-1)d + kd$, for some constant $0<k<1$ and $I_{max}=c$. The proofs for the other two cases mentioned in the statement of the theorem are similar. Parts of this proof are similar to the proof of the third lower bound in Theorem \ref{thm-LB_GTI_ID_NSI}, and hence we only point out the differences in this proof. As in the proof of Theorem \ref{thm-LB_GTI_ID_NSI}, we consider a reduced problem as follows. As depicted in Fig. \ref{fig-LB_WSI_d_sq}, a specific class of association graph is considered, where disjoint sets of $c-1$ inhibitors $\{{\cal I}_1, {\cal I}_2, \cdots, {\cal I}_d\}$ are associated with one defective each, i.e., each item in the set ${\cal I}_i$ with $|{\cal I}_i|=c-1$ is associated with the defective $U_i$, for $i=1,\cdots,d$. Each item in the set of inhibitors $ {\cal I}_{d+1}\triangleq\{S_{(c-1)d+1},\cdots,S_{(c-1)d+kd-1}\}$ is associated with one distinct defective with which the sets of inhibitors $\{{\cal I}_1,\cdots, {\cal I}_{kd-1}\}$ are also associated, i.e., $S_{(c-1)d+j}$ is associated with the defective $U_j$, for $j=1,\cdots,kd-1$. The remaining inhibitor $S_r$ is associated with exactly one of the defectives in the set ${\cal D}_{S_r}\triangleq\{U_{kd},\cdots,U_d\}$. It is now easily seen that the first constraint is satisfied, and $|{\cal I}(U_j)|\leq c$ for all $j$, which means that the second constraint is also satisfied. Now, let a genie reveal the realizations of $\{{\cal I}_1, {\cal I}_2, \cdots, {\cal I}_{d+1}\}$ and $\{U_1,\cdots,U_{kd-1}\}$, given by $\mathscr{I}\triangleq\{I_1, I_2, \cdots, I_{d+1}\}$ and $\overline{\mathscr{D}}_{S_r}\triangleq\{u_1,\cdots,u_{kd-1}\}$ respectively. The association pattern between them given by ${\cal E}(\mathscr{I},\overline{\mathscr{D}}_{S_r})$ is also revealed. \begin{figure*} \includegraphics[totalheight=5.3in,width=6.5in]{Ass_LB_WSI_d_sq.pdf} \caption{Association graph with realizations $\{I_1,\cdots, I_d,I_{d+1}, u_1,\cdots,u_{kd-1}\}$ considered for obtaining the third lower bound for the IDG-WSI model. The genie reveals the realizations $\{I_1,\cdots, I_{kd-1},I_{d+1}\}$ along with their association pattern with the realizations $\{u_1,\cdots,u_{kd-1}\}$. It also reveals the realizations $\{I_{kd},\cdots,I_{d}\}$ which are known to be associated with the unknown realization of the remaining defectives ${\cal D}_{S_r}$. It is also known that the unknown inhibitor $S_r$ is associated with exactly one of the defectives in the set ${\cal D}_{S_r}$. Such an association graph is chosen so that the constraint $I_{max}=c$ is not violated.} \label{fig-LB_WSI_d_sq} \end{figure*} The ``residual message'' in the system is now given by $W_1\triangleq \left\{S_r,{\cal D}_{S_r},{\cal E}(S_r,{\cal D}_{S_r}), {\cal E}(\{I_{kd},\cdots,I_d\},{\cal D}_{S_r})\right\}$. We now show that determining the component ${\cal E}(S_r,{\cal D}_{S_r})$ of $W_1$ alone requires on the order of $d^2$ tests. It is easy to see that there is no reduction in the mutual information $I\left[W_1;\mathbf{y}|S_r,{\cal D}_{S_r},\mathscr{I},\overline{\mathscr{D}}_{S_r},{\cal E}(\mathscr{I},\overline{\mathscr{D}}_{S_r})\right]$ if the items in the set $\{I_1,\cdots,I_{kd-1},I_{d+1},\overline{\mathscr{D}}_{S_r}\}$ do not participate in any of the tests.
So, we assume from here on that these items do not participate in any of the tests, and thus denote the rest of the items which participate in the tests by ${\cal W}'\triangleq {\cal N}\bigcup \{I_{kd},\cdots,I_d, S_r\} \bigcup {\cal D}_{S_r}$. For the reduced problem considered, we have \begin{align*} &\underset{W_1}{\max} ~\Pr\left\{\hat{W}_1\neq W_1\right\}\geq ~\mathbb{E}_{f}\left[\Pr\left\{\hat{W}_1\neq W_1\right\}\right]\triangleq P_{e_{avg}}, \end{align*}where $f$ denotes some probability mass function of the residual association graph. Let $f_1$ and $f_2$ denote independent probability mass functions of the residual association patterns ${\cal E}(s_r,\mathscr{D}_{s_r})$ and ${\cal E}(\{I_{kd},\cdots,I_d\},\mathscr{D}_{s_r})$, for any realization of $(S_r,{\cal D}_{S_r})$ given by $(s_r,\mathscr{D}_{s_r})$. The function $f_1$ is such that \begin{align}\label{eqn-f_WSI} \Pr\{(s_r,u)\in {\cal E}(s_r,\mathscr{D}_{s_r})\}=\frac{1}{d(1-k)+1}, \forall u \in \mathscr{D}_{s_r}, \end{align}for any realization $(s_r,\mathscr{D}_{s_r})$. Also, it is assumed that $f$ is such that the realizations of $(S_r,{\cal D}_{S_r})$ are uniformly distributed across the rest of the items, i.e., every realization occurs with probability $\frac{1}{{n-r-kd+2 \choose d(1-k)+1}(n-r-d+1)}$. Let $\mathbf{M}$ be the test matrix which is known a priori. Also, let the matrix $\mathbf{M}_1$ denote the test matrix $\mathbf{M}$ whose columns are restricted to the items ${\cal W}'\backslash \{I_{kd},\cdots,I_d\}$, and the matrix $\mathbf{M}_2$ denote the test matrix $\mathbf{M}$ whose columns are restricted to the items ${\cal W}'\backslash \{s_r\}$. Denote the ``virtual outcome vector'' obtained by testing the items using the matrices $\mathbf{M}_1$ and $\mathbf{M}_2$ by $\mathbf{y}_1\left({\cal E}(s_r,\mathscr{D}_{s_r})\right)$ and $\mathbf{y}_2\left({\cal E}(\{I_{kd},\cdots,I_d\},\mathscr{D}_{s_r})\right)$ respectively\footnote{The arguments of the virtual outcome vectors denote that the vectors are functions of their arguments.}. Note that $\mathbf{y}=\mathbf{y}_1.\mathbf{y}_2$, i.e., the actual outcome vector is equal to the component-wise Boolean AND of the two virtual outcome vectors for every realization $(s_r,\mathscr{D}_{s_r})$. Since ${\cal E}(s_r,\mathscr{D}_{s_r})$ and ${\cal E}(\{I_{kd},\cdots,I_d\},\mathscr{D}_{s_r})$ are statistically independent messages, using the data-processing inequality, we have \begin{align}\label{eqn-DP} &I\left[{\cal E}(s_r,{\mathscr D}_{s_r});\mathbf{y}|s_r,{\mathscr D}_{s_r},\{I_{kd},\cdots,I_d\}\right] \\\nonumber\leq ~& I\left[{\cal E}(s_r,{\mathscr D}_{s_r});\mathbf{y}_1|s_r,{\mathscr D}_{s_r}\right].
\end{align} Now, applying Fano's inequality, we have \begin{align}\nonumber &H[{\cal E}(S_r,{\cal D}_{S_r})|S_r,{\cal D}_{S_r},\{I_{kd},\cdots,I_d\}]\\ \nonumber&= \frac{1}{{n-r-kd+2 \choose d(1-k)+1}(n-r-d+1)}\sum_{S_r,{\cal D}_{S_r}} \log (d(1-k)+1)\\ \nonumber &\leq 1 + P_e H[{\cal E}(S_r,{\cal D}_{S_r})|S_r,{\cal D}_{S_r},\{I_{kd},\cdots,I_d\}] \\\nonumber&~~~~~~+ I\left[{\cal E}(S_r,{\cal D}_{S_r});\mathbf{y}|S_r,{\cal D}_{S_r},\{I_{kd},\cdots,I_d\}\right] \end{align} \begin{align} \nonumber &\leq 1 + P_e \log (d(1-k)+1)+\frac{1}{{n-r-kd+2 \choose d(1-k)+1}(n-r-d+1)}\times \\\label{eqn-MC}&~~~~~~~~\sum_{S_r,{\cal D}_{S_r}} I\left[{\cal E}(S_r,{\cal D}_{S_r});\mathbf{y}_1|\{S_r,{\cal D}_{S_r}\}=\{s_r,{\mathscr D}_{s_r}\}\right]\\ \nonumber &\leq 1 + P_e \log (d(1-k)+1) +\frac{1}{{n-r-kd+2 \choose d(1-k)+1}(n-r-d+1)}\times \\\nonumber&~~~~~~~~~\sum_{S_r,{\cal D}_{S_r}} H\left[\mathbf{y}_1|\{S_r,{\cal D}_{S_r}\}=\{s_r,{\mathscr D}_{s_r}\}\right], \end{align}where $P_e=\Pr \{\hat{{\cal E}}(S_r,{\cal D}_{S_r})\neq {\cal E}(S_r,{\cal D}_{S_r})\}\leq P_{e_{avg}}$ and (\ref{eqn-MC}) follows from (\ref{eqn-DP}). Now, following similar steps after (\ref{eqn-cond_red_H}) in the proof of Theorem \ref{thm-LB_GTI_ID_NSI}, we have the lower bound of $\Omega((d(1-k)+1)^2)=\Omega(d^2)$ tests. \end{proof} Thus, in the $d=O(I_{max})$ and $d=O(r)$ regimes, the upper bound on the number of tests for the proposed non-adaptive pooling design differs from the proposed (second) lower bounds for the IDG-WSI and IDG-NSI models by $\log I_{max}$ and $\log r$ multiplicative factors, respectively. In the $I_{max}=o(d)$ and $r=o(d)$ regimes, the upper bounds exceed the proposed (third) lower bounds by $\log n$ multiplicative factors for both the IDG models, with some restrictions on $I_{max}$ or $r$ in the IDG-WSI model. When these restrictions are removed, the evaluation of the lower bound might require consideration of other association graphs like in Fig. \ref{fig-Eg1}, as an extension of the association graph used in the proof of the third lower bound in Theorem \ref{thm-LB_GTI_ID_WSI}. But even for the graph in Fig. \ref{fig-Eg1}, the optimization of the entropy over the pool size becomes combinatorially cumbersome. We thus relegate the evaluation of the lower bound for the unconstrained IDG-WSI model to future work. For the proposed two-stage adaptive pooling design, the upper bound on the number of tests differs from the proposed (first) lower bound by $\log n$ multiplicative factors for both the IDG-WSI and IDG-NSI models in all regimes of the number of defectives and inhibitors. \section{Conclusion} A new generalization of the $1$-inhibitor model, termed the IDG model, was introduced. In the proposed model, an inhibitor can inhibit a non-empty subset of the defective set of items. A probabilistic non-adaptive pooling design and a two-stage adaptive pooling design were proposed, and lower bounds on the number of tests were identified. Both in the small and large inhibitor regimes, the upper bound on the number of tests for the proposed non-adaptive pooling design is shown to be close to the lower bound, with a difference of logarithmic multiplicative factors in the number of items. For the proposed two-stage adaptive pooling design, the upper bound on the number of tests is close to the lower bound in all regimes of the number of inhibitors and defectives, the difference being logarithmic multiplicative factors in the number of items.
Future work could include more practical versions of the IDG model, such as taking the following considerations into account. \begin{enumerate} \item Cancellation effect of the normal items on the inhibitors. \item Partial inhibition of expression of defectives by the inhibitors, which also naturally embraces the presence of inhibitors in the semi-quantitative group testing model \cite{EmM_TIT2014}. \item Inclusion of the $k$-inhibitor model, for unknown $k$, as a part of the association pattern in the IDG model. \end{enumerate} Obtaining lower and upper bounds on the number of tests for the aforementioned variants of the IDG model, along with the inclusion of noisy tests, would be more involved and is worth pursuing. \bibliographystyle{ieeetr}
\section{\label{sec:level1}Introduction} One of the essential tasks in quantum technology is to verify the integrity of a quantum state \cite{nilsen}. Quantum state tomography has become a standard technology for inferring the state of a quantum system through appropriate measurements and estimation \cite{paris,James16,rehacek,liu,lundeen,salvail,nunn}. To reconstruct a quantum state, one may first perform measurements on a collection of identically prepared copies of a quantum system (data collection) and then infer the quantum state from these measurement outcomes using appropriate estimation algorithms (data analysis). Measurement on a quantum system generally gives a probabilistic result, and an individual measurement outcome only provides limited information on the state of the system, even when an ideal measurement device is used. In principle, an infinite number of measurements are required to determine a quantum state precisely. However, practical quantum state tomography consists of only a finite number of measurements and appropriate estimation algorithms. Hence, the choice of optimal measurement sets and the design of efficient estimation algorithms are two critical issues in quantum state tomography. Many results have been presented for choosing optimal measurement sets to increase the estimation accuracy and efficiency in quantum state tomography \cite{adamson,Wootters1989,burgh}. Several sound choices that can provide excellent performance for tomography are, for instance, tetrahedron measurement bases, cube measurement sets, and mutually unbiased bases \cite{burgh}. However, for most existing results, the optimality of a given measurement set is only verified through numerical results \cite{burgh}. There are few methods that can analytically give an estimation error bound \cite{christandl,cramer,zhuhuangjundoctor}, which is essential to evaluate the optimality of a measurement set \cite{DArianoPRL2007,BisioPRL,RoyScott} and the appropriateness of an estimation method. For estimation algorithms, several useful methods including maximum-likelihood estimation (MLE) \cite{paris,teoPRL,teo,Blume-KohoutPRL,smolin}, Bayesian mean estimation (BME) \cite{paris,huszar,kohout} and least-squares (LS) inversion \cite{opatrny} have been proposed for quantum state reconstruction. The MLE method simply chooses the state estimate that gives the observed results with the highest probability. This method is asymptotically optimal in the sense that the estimation error can asymptotically achieve the Cram\'{e}r-Rao bound. However, MLE usually involves solving a large number of nonlinear equations whose solutions are notoriously difficult to obtain and often not unique. Recently, an efficient method has been proposed for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise, but this method is not general \cite{smolin}. Compared to MLE, BME can always give a unique state estimate, since it constructs a state from an integral averaging over all possible quantum states with proper weights. The high computational complexity of this method significantly limits its application. The LS inversion method can be applied when measurable quantities exist that are linearly related to all density matrix elements of the quantum state being reconstructed \cite{opatrny}. However, the estimation result may be a nonphysical state, and the mean squared error (MSE) bound of the estimate cannot be determined analytically.
In this Letter, we present a new linear regression estimation (LRE) method for quantum state tomography that can identify optimal measurement sets and reconstruct a quantum state efficiently. We first convert the quantum state reconstruction into a parameter estimation problem of a linear regression model \cite{rao}. Next, we employ an LS algorithm to estimate the unknown parameters. The positivity of the reconstructed state can be guaranteed by an additional least-squares minimization problem. The total computational complexity is $O(d^4)$ where $d$ is the dimension of the quantum state. In order to evaluate the performance of a chosen measurement set, an MSE upper bound for all possible states to be estimated is given analytically. This MSE upper bound depends explicitly upon the involved measurement bases, and can guide us to choose the optimal measurement set. The efficiency of the method is demonstrated by examples on qubit systems. \emph{Linear regression model.} We first convert the quantum state tomography problem into a parameter estimation problem of a linear regression model. Suppose the dimension of the Hilbert space $\mathcal{H}$ of the system of interest is $d$, and $\{\Omega_{i}\}^{d^2-1}_{i=0}$ is a complete basis set of orthonormal operators on the corresponding Liouville space, namely, $\textmd{Tr}(\Omega^{\dag}_i\Omega_j)=\delta_{ij}$, where $\dag$ denotes the Hermitian adjoint and $\delta_{ij}$ is the Kronecker function. Without loss of generality, let $\Omega_{i}=\Omega_{i}^{\dag}$ and $\Omega_{0}=(1/d)^{\frac{1}{2}}I$, such that the other bases are traceless. That is, $\textmd{Tr}(\Omega_{i})=0$ for $i = 1,\ 2,\ \cdots,\ d^2-1$. The quantum state $\rho$ to be reconstructed may be parameterized as \begin{equation}\label{rho} \rho=\frac{I}{d}+\sum^{d^2-1}_{i=1}\Theta_i\Omega_i, \end{equation} where $\Theta_i=\textmd{Tr}(\rho\Omega_i)$. Given a set of measurement bases $\{|\Psi\rangle\langle\Psi|^{(n)}\}^{M}_{n=1}$, each $|\Psi\rangle\langle\Psi|^{(n)}$ can be parameterized under the bases $\{\Omega_{i}\}^{d^2-1}_{i=0}$ as \begin{equation*}\label{psi} |\Psi\rangle\langle\Psi|^{(n)}=\frac{I}{d}+\sum^{d^2-1}_{i=1}\psi^{(n)}_i\Omega_i, \end{equation*} where $\psi^{(n)}_i=\textmd{Tr}(|\Psi\rangle\langle\Psi|^{(n)}\Omega_i)$. When one performs measurements with measurement set $\{|\Psi\rangle\langle\Psi|^{(n)}\}^{M}_{n=1}$ on a collection of identically prepared copies of a quantum system (with state $\rho$), the probability of obtaining the result $|\Psi\rangle\langle\Psi|^{(n)}$ is \begin{equation}\label{averageequation} p_n=\textmd{Tr}(|\Psi\rangle\langle\Psi|^{(n)}\rho) =\frac{1}{d}+\sum^{d^2-1}_{i=1}\Theta_i\psi_i^{(n)}\triangleq \frac{1}{d}+\Theta^{\top}\Psi^{(n)}. \end{equation} Assume that the total number of experiments is $N$ and $N/M$ experiments are performed on $N/M$ identically prepared copies of a quantum system for each measurement basis $|\Psi\rangle\langle\Psi|^{(n)}$. Denote the corresponding outcomes as $x^{(n)}_1, \cdots, x^{(n)}_{N/M} $, which are independent and identically distributed. Let $\hat{p}_n=\frac{x^{(n)}_1+\cdots+ x^{(n)}_{N/M}}{N/M}$ and $e_n=\hat{p}_n-p_n$. According to the central limit theorem \cite{chow}, $e_n$ converges in distribution to a normal distribution with mean 0 and variance $\frac{p_n-p_n^2}{N/M}$.
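As a brief illustration of this noise model, the following minimal simulation (with assumed toy values for $p_n$ and $N/M$; it is not part of the method) confirms that the empirical standard deviation of $e_n$ matches $\sqrt{(p_n-p^2_n)/(N/M)}$.
\begin{verbatim}
# Minimal check of the asymptotic noise variance (p_n - p_n^2)/(N/M):
# hat{p}_n is a binomial frequency over N/M shots, so e_n = hat{p}_n - p_n
# has standard deviation sqrt(p_n(1-p_n)/(N/M)) by the central limit theorem.
import numpy as np

rng = np.random.default_rng(1)
p_n, shots, trials = 0.3, 500, 200000   # toy values: N/M = 500 repetitions
e_n = rng.binomial(shots, p_n, size=trials) / shots - p_n
print(e_n.std(), np.sqrt(p_n * (1 - p_n) / shots))  # both are ~0.0205
\end{verbatim}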
Using (\ref{averageequation}), we have the linear regression equations for $n=1,\ 2,\ \cdots,\ M$, \begin{equation}\label{average2} \hat{p}_n=\frac{1}{d}+{\Psi^{(n)}}^{\top}\Theta+e_n, \end{equation} where $\top$ denotes the matrix transpose. Note that the variance of $e_n$ is asymptotically $\frac{p_n-p_n^2}{N/M}$. If $p_n=1$, we have already reconstructed the state as $|\Psi\rangle\langle\Psi|^{(n)}$; if $p_n=0$, we should choose the next measurement basis from the orthogonal complementary space of $|\Psi\rangle\langle\Psi|^{(n)}$. $\hat{p}_n$, $d$ and $\Psi^{(n)}$ are all available for $n=1,\ \cdots,\ M$, while $e_n$ may be considered as the observation noise. Hence, the problem of quantum state tomography is converted into the estimation of the unknown vector $\Theta$. Denote $Y=\left( \begin{array}{ccc} \hat{p}_1-\frac{1}{d}, & \cdots, & \hat{p}_M-\frac{1}{d} \\ \end{array} \right)^{\top} $, $X=\left( \begin{array}{ccc} \Psi^{(1)}, & \cdots, & \Psi^{(M)} \\ \end{array} \right)^{\top} $, $e=\left( \begin{array}{ccc} e_1, & \cdots, & e_M \\ \end{array} \right)^{\top} $. We can transform the linear regression equations (\ref{average2}) into a compact form \begin{equation}\label{average3} Y=X\Theta+e. \end{equation} We define the MSE as E$\textmd{Tr}(\hat{\rho}-\rho)^2$, where $\hat{\rho}$ is an estimate of the quantum state $\rho$ based on the measurement outcomes and E$(\cdot)$ denotes the expectation on all possible measurement outcomes. For a fixed tomography method, E$\textmd{Tr}(\hat{\rho}-\rho)^2$ depends on the state $\rho$ to be reconstructed and the chosen measurement bases. From a practical viewpoint, the optimality of a chosen set of measurement bases may rely upon a priori information but should not depend on any specific unknown quantum state to be reconstructed. In this Letter, no a priori assumption is made on the state $\rho$ to be reconstructed. Given a fixed tomography method, we use the maximum MSE for all possible states (i.e., $\sup_{\rho}$E$\textmd{Tr}(\hat{\rho}-\rho)^2$) as the index to evaluate the performance of a chosen set of measurement bases. Hence, it is necessary to consider the worst case by enlarging the variance of the observation noise $e_n$ in each linear regression equation. As a consequence, since $p_n-p_n^2\leq \frac{1}{4}$, $\{e_n\}^M_{n=1}$ may be treated as a set of independent identically distributed variables with asymptotic normal distribution $\text{N}(0, \frac{M}{4N})$. Another advantage of this treatment is that the effect of some other noises can be absorbed in the enlarged variance. \emph{Asymptotic properties of the LS estimate}. To give an estimate with high accuracy and low computational complexity, we employ the LS method, where the basic idea is to find an estimate $\hat{\Theta}_{LS}$ such that $$\hat{\Theta}_{LS}=\underset{\hat{\Theta}}{\text{argmin}}(Y-X\hat{\Theta})^{\top}(Y-X\hat{\Theta}),$$ where $\hat{\Theta}$ is an estimate of $\Theta$. Since the objective function is quadratic, one has the LS solution as follows: \begin{equation}\label{ls1} \hat{\Theta}_{LS}=(X^{\top}X)^{-1}X^{\top}Y=(X^{\top}X)^{-1}\sum^M_{n=1}\Psi^{(n)}(\hat{p}_n-\frac{1}{d}), \end{equation} where $X^{\top}X=\sum_{n=1}^M \Psi^{(n)}{\Psi^{(n)}}^{\top}.$ If the measurement bases $\{|\Psi\rangle\langle\Psi|^{(n)}\}^{M}_{n=1}$ are informationally complete or overcomplete, $X^{\top}X$ is invertible.
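To make the procedure concrete, below is a minimal single-qubit sketch of the whole pipeline (an illustrative toy example: the six-basis cube measurement set, the simulated state and all variable names are our choices; the final truncation step follows the fast algorithm of \cite{smolin} discussed below).
\begin{verbatim}
# Single-qubit LRE sketch: build Y = X Theta + e of Eq. (4), solve the LS
# problem of Eq. (5), then pull the estimate back to a physical state.
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Omega = [s / np.sqrt(2) for s in (sx, sy, sz)]  # traceless orthonormal basis

def eig_projectors(s):                           # two eigenprojectors of s
    vals, vecs = np.linalg.eigh(s)
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]
projectors = [P for s in (sx, sy, sz) for P in eig_projectors(s)]

# True state: a pure state mixed with a little identity (arbitrary demo choice)
psi = np.array([1.0, 0.5 + 0.3j]); psi /= np.linalg.norm(psi)
rho = 0.9 * np.outer(psi, psi.conj()) + 0.1 * I2 / 2

# Data collection: N/M shots per basis, hat{p}_n = empirical frequency
M, N = len(projectors), 6 * 10**4
p = np.array([np.trace(P @ rho).real for P in projectors])
p_hat = rng.binomial(N // M, p) / (N // M)

# LS estimate, Eq. (5); for the cube set X^T X is diagonal, hence invertible
X = np.array([[np.trace(P @ W).real for W in Omega] for P in projectors])
Y = p_hat - 1.0 / 2
Theta_LS = np.linalg.solve(X.T @ X, X.T @ Y)
mu = I2 / 2 + sum(t * W for t, W in zip(Theta_LS, Omega))  # PLRE, Eq. (1)

# Closest density matrix in 2-norm: truncate negative eigenvalues and
# redistribute the deficit over the remaining ones
vals, vecs = np.linalg.eigh(mu)
vals, vecs = vals[::-1], vecs[:, ::-1]           # sort descending
a, out = 0.0, np.zeros_like(vals)
for i in range(len(vals) - 1, -1, -1):
    if vals[i] + a / (i + 1) < 0:
        a += vals[i]                             # accumulate negative mass
    else:
        out[:i + 1] = vals[:i + 1] + a / (i + 1)
        break
rho_hat = (vecs * out) @ vecs.conj().T

print("MSE Tr(rho_hat - rho)^2 =",
      np.trace((rho_hat - rho) @ (rho_hat - rho)).real)
\end{verbatim}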
Using (\ref{average3}), (\ref{ls1}) and the statistical property of the observation noise $\{e_n\}^M_{n=1}$ (asymptotically Gaussian), the estimate $\hat{\Theta}_{LS}$ has the following properties for a fixed set of chosen measurement bases: 1. $\hat{\Theta}_{LS}$ is asymptotically unbiased; 2. The MSE $\text{E}(\hat{\Theta}_{LS}-\Theta)^{\top}(\hat{\Theta}_{LS}-\Theta)$ of $\hat{\Theta}_{LS}$ is asymptotically $\frac{M}{4N}\textmd{Tr}(X^{\top}X)^{-1}=\frac{M}{4N}\textmd{Tr}(\sum_{n=1}^M \Psi^{(n)}{\Psi^{(n)}}^{\top})^{-1}$; 3. $\hat{\Theta}_{LS}$ is asymptotically a maximum-likelihood estimate, and the estimation error can asymptotically achieve the Cram\'{e}r-Rao bound \cite{rao}. \emph{Positivity and computational complexity.} Based on the solution $\hat{\Theta}_{LS}$ obtained from (\ref{ls1}), we can obtain a Hermitian matrix $\hat{\mu}$ with $\textmd{Tr}\hat{\mu}=1$ using (\ref{rho}). However, $\hat{\mu}$ may have negative eigenvalues and be nonphysical due to the randomness of measurement results. In this sense, $\hat{\mu}$ is called the pseudo linear regression estimation (PLRE) of state $\rho$. A good method of pulling $\hat{\mu}$ back to a physical state can reduce the MSE. In this Letter, the physical estimate $\hat{\rho}$ is chosen to be the closest density matrix to $\hat{\mu}$ under the matrix 2-norm. In standard state reconstruction algorithms, this task is computationally intensive \cite{smolin}. However, we can employ the fast algorithm in \cite{smolin} with computational complexity $O(d^3)$ to solve this problem since we have obtained a Hermitian estimate $\hat{\mu}$ with $\textmd{Tr}\hat{\mu}=1$. Since an informationally complete measurement set $\{|\Psi\rangle\langle\Psi|^{(n)}\}^{M}_{n=1}$ requires $M$ to be $O(d^2)$, the computational complexity of (\ref{rho}) and $X^{\top}Y$ in (\ref{ls1}) is $O(d^4)$. Although the computational complexity of calculating $(X^{\top}X)^{-1}$ is generally $O(d^6)$, $(X^{\top}X)^{-1}$ can be computed off-line before the experiment once the measurement set is determined. Hence, the total computational complexity of LRE after the data have been collected is $O(d^4)$. It is worth pointing out that for $n$-qubit systems, $X^{\top}X=\sum_{n=1}^M \Psi^{(n)}{\Psi^{(n)}}^{\top}$ is diagonal for many preferred measurement sets such as tetrahedron and cube measurement sets. Fig.~1 compares the run time of our algorithm with that of a traditional MLE algorithm. Since the maximum possible MSE is 2 (attained by the worst estimate), the small amount of accuracy sacrificed by LRE is negligible, while LRE is much more efficient than MLE. \begin{figure} \center{\includegraphics[scale=0.47]{time_MSE_random_pure_states_mixed_with_the_identity.eps} \caption{\label{computation time}The run time and MSE of LRE and MLE for random $n$-qubit pure states mixed with the identity \cite{smolin}. The realization of MLE used the iterative method in \cite{paris}. The measurement bases are from the $n$-qubit cube measurement set and the resource is $N=3^9\times4^n$. The simulated measurement results for every basis ${|\Psi\rangle\langle\Psi|}^{(i)}$ are generated from a binomial distribution with probability $p_i=\textmd{Tr}(|\Psi\rangle\langle\Psi|^{(i)}\rho)$ and trials $N/M$. LRE is much more efficient than MLE with a small amount of accuracy sacrificed, since the maximum MSE could reach 2 for the worst estimate. All timings were performed in MATLAB on a computer with a 4-core 3GHz Intel i5-2320 CPU. } \end{figure} \emph{Optimality of measurement bases}.
One of the advantages of LRE is that the MSE upper bound can be given analytically as $\frac{M}{4N}\textmd{Tr}(\sum_{n=1}^M \Psi^{(n)}{\Psi^{(n)}}^{\top})^{-1}$, which depends explicitly upon the measurement bases. Note that if the PLRE $\hat{\mu}$ is a physical state, then the MSE upper bound is asymptotically tight for the evaluation of the performance of a fixed set of measurement bases. Hence, to choose an optimal set $\{|\Psi\rangle\langle\Psi|^{(n)}\}^M_{n=1}$, one can solve the following optimization problem: \begin{center} Minimize $\textmd{Tr}(\sum_{n=1}^M \Psi^{(n)}{\Psi^{(n)}}^{\top})^{-1}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ s.t. ${\Psi^{(n)}}^{\top}\Psi^{(n)}=\frac{d-1}{d},$ for $n=1,\ \cdots,\ M.$ \end{center} The optimization problem can be solved in an off-line way by employing appropriate algorithms, although it may be computationally intensive \cite{optimal}. With the help of the analytical MSE upper bound, we can ascertain which one is optimal among the available measurement sets. This is shown when we prove the optimality of several typical sets of measurement bases for 2-qubit systems below. For 2-qubit systems, it is convenient to choose $\Omega_{i}=\frac{1}{\sqrt{2}}\sigma_l\otimes\frac{1}{\sqrt{2}}\sigma_m$, where $i=4l+m$;\ $l,\ m=0,\ 1,\ 2,\ 3$; $\sigma_0=I_{2\times2}$, $\sigma_1=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right) $, $\sigma_2=\left( \begin{array}{cc} 0 & -i \\ i & 0 \\ \end{array} \right) $, $\sigma_3=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right) $. The MSE upper bound of 2-qubit states is \begin{equation*}\label{2qubitMSE} \frac{M}{4N}\textmd{Tr}(X^{\top}X)^{-1}=\frac{M}{4N}\textmd{Tr}(\sum^M_{n=1}\Psi^{(n)}{\Psi^{(n)}}^{\top})^{-1}. \end{equation*} Now we minimize this MSE upper bound or, equivalently, minimize $\textmd{Tr}(X^{\top}X)^{-1}$. Denote the eigenvalues of $X^{\top}X$ as $\lambda_1\geq\lambda_2\geq\cdots \geq\lambda_{15}$. Since we have $\psi^{(n)}_{0}=\frac{1}{2}$, $\sum^{15}_{i=0}{\psi^{(n)}_{i}}^2=1$, the subproblem is converted into minimizing $\sum_{i=1}^{15} \frac{1}{\lambda_i}$, subject to $\sum_{i=1}^{15} \lambda_i=\frac{3}{4}M$. It can be proven (e.g., by the convexity of $1/\lambda$, the minimum under a fixed sum is attained when all $\lambda_i$ are equal) that $\sum_{i=1}^{15} \frac{1}{\lambda_i}$ reaches its minimum $\frac{300}{M}$ when $\lambda_{1}=\cdots=\lambda_{15}=\frac{M}{20}$. Hence, the minimum of the MSE upper bound $\frac{M}{4N}\textmd{Tr}(X^{\top}X)^{-1}$ is $\frac{75}{N}$. This minimum MSE upper bound can be reached by using the mutually unbiased measurement bases. If only local measurements can be performed, i.e., $|\Psi\rangle\langle\Psi|^{(n)}=|\Psi\rangle\langle\Psi|^{(n,1)}\otimes |\Psi\rangle\langle\Psi|^{(n,2)},\ n=1,\ \cdots,\ M$, where $|\Psi\rangle\langle\Psi|^{(n,1)}$ and $|\Psi\rangle\langle\Psi|^{(n,2)}$ can be parameterized as $|\Psi\rangle\langle\Psi|^{(n,k)}=\sum^3_{l=0}\psi^{(n,k)}_l\frac{\sigma_l}{\sqrt{2}}$,\ $k=1,\ 2$, then we have $\psi^{(n)}_{i}=\psi^{(n,1)}_{l}\psi^{(n,2)}_{m}$, where $i=4l+m$. Due to the additional constraints $\psi^{(n,k)}_0=\frac{1}{\sqrt{2}},\ \sum^{3}_{l=0} {\psi^{(n,k)}_l}^2=1,$ for $k=1,\ 2$, $n=1,\ \cdots,\ M$, the subproblem of minimizing the MSE upper bound can be converted into minimizing $\sum_{i=1}^{15} \frac{1}{\lambda_i}$, subject to (i) $\sum_{i=1}^{3} \lambda_i\geq \frac{1}{4}M$; (ii) $\sum_{i=1}^{6} \lambda_i\geq \frac{1}{2}M$; (iii) $\sum_{i=1}^{15} \lambda_i=\frac{3}{4}M$.
It can be proven that $\sum_{i=1}^{15} \frac{1}{\lambda_i}$ reaches its minimum $\frac{396}{M}$ when $\lambda_{1}=\cdots=\lambda_{6}=\frac{M}{12}$, $\lambda_{7}=\cdots=\lambda_{15}=\frac{M}{36}$. Hence, the minimum of the MSE upper bound $\frac{M}{4N}\textmd{Tr}(X^{\top}X)^{-1}$ is $\frac{99}{N}$. This minimum MSE upper bound can be reached by using the 2-qubit cube or tetrahedron measurement set. \begin{figure} \center{\includegraphics[scale=0.50]{MSE_q_N.eps}} \caption{\label{LREvsMLE} Mean squared error (MSE) for Werner states \cite{wernerstate} with $q$ (varying from 0 to 1) and different numbers of copies $N$. The cube measurement set is used, where the MSE upper bound is $\frac{99}{N}$. It can be seen that the MSE of PLRE is almost unchanged for $q\in[0,1]$, and is larger than the MSE of LRE.} \end{figure} Fig. 2 shows the dependence of the MSEs for Werner states \cite{wernerstate} on $q$ (varying from 0 to 1) and on the number of copies $N$, using the cube measurement bases \cite{adamson}. The fact that the MSE of PLRE is larger than that of LRE demonstrates that the process of pulling $\hat{\mu}$ back to a physical state further reduces the estimation error. \emph{Discussions and conclusions}. In the LRE method, data collection is achieved by performing measurements on quantum systems with given measurement bases. This process can also be accomplished by considering the evolution of quantum systems with fewer measurement bases. For example, suppose only one observable $\sigma$ is given, and the system evolves according to a unitary group $\{U_t\}$. At a given time $t$, \begin{equation*}\label{sigma} \langle\sigma_t\rangle=\textmd{Tr}(U^\dag(t)\sigma U(t)\rho)=\textmd{Tr}(\sigma_t\rho). \end{equation*} Suppose one measures the observable $\sigma$ at time $t$ ($t=1,\ \cdots,\ M$) on $m$ identically prepared copies of a quantum system. Denote the obtained outcomes as $\sigma^t_1,\ \cdots,\ \sigma^t_{m}$, and their algebraic average as $\bar{\sigma}_t=\frac{\sigma^t_1+\cdots+\sigma^t_{m}}{m}$. Note that $\sigma^t_1,\ \cdots,\ \sigma^t_{m}$ are independent and identically distributed. According to the central limit theorem \cite{chow}, $e_t=\bar{\sigma}_t-\langle\sigma_t\rangle$ converges in distribution to a normal distribution with mean 0 and variance $\frac{\langle\sigma^2_t\rangle-\langle\sigma_t\rangle^2}{m}$. We have the following linear regression equations \begin{equation*} \bar{\sigma}_t=\textmd{Tr}(\sigma_t\rho)+e_t,\ \ \ \ t=1,\ \cdots,\ M, \end{equation*} which are similar to (\ref{average2}). Hence, we can use the proposed LRE method to accomplish quantum state tomography. The LRE method can also be extended to reconstruct quantum states with a priori information \cite{cramer,klimov,Gross2010,toth2010} or states of open quantum systems. Actually, LRE can be applied whenever there are measurable quantities that are linearly related to all density matrix elements of the quantum system under consideration. In conclusion, an efficient method of linear regression estimation has been presented for quantum state tomography. The computational complexity of LRE is $O(d^4)$, which is much lower than that of MLE and BME. We have analytically provided an MSE upper bound for all possible states to be estimated, which explicitly depends upon the measurement bases used. This analytical upper bound can help identify optimal measurement sets. The LRE method has potential for wide applications in real experiments.
The authors would like to thank Lei Guo, Huangjun Zhu and Chuanfeng Li for helpful discussions. The work in USTC is supported by National Fundamental Research Program (Grants No. 2011CBA00200 and No. 2011CB9211200), National Natural Science Foundation of China (Grants No. 61108009 and No. 61222504), and Anhui Provincial Natural Science Foundation (No. 1208085QA08). B. Q. acknowledges the support of National Natural Science Foundation of China (Grants No. 61004049, No. 61227902 and No. 61134008). D. D. is supported by the Australian Research Council (DP130101658).
\section{Introduction} D-branes can be described in terms of closed string states; hence, many of their interesting properties have been revealed by using the boundary state formalism \cite{1}-\cite{9}. By means of the boundary state, all relevant properties of a D-brane can be extracted. The boundary state formalism has been applied to various D-brane configurations in the presence of different background fields \cite{10}-\cite{14}. On the other hand, investigating the stability of D-branes is one of the most important subjects, and it can be studied via the tachyon dynamics of the open string and the tachyon condensation phenomenon \cite{15}. These concepts have been verified by various methods \cite{16}-\cite{18} and more recently by the boundary string field theory (BSFT) in different configurations \cite{19}-\cite{24}. It has been conjectured that open string tachyon condensation describes the decay of unstable D-branes into the closed string vacuum or to lower-dimensional unstable D-branes as intermediate states. The study of this physical process, namely the decay of unstable objects, is important because it interpolates between two different vacua and also provides a route toward a background-independent formulation of string theory. Some aspects of the boundary state, accompanied by the tachyon condensation, are as follows. The boundary state is a source for closed strings; therefore, by using this state and tachyon condensation, one can find the time evolution of the source for each closed string mode. Also, it has been argued that the boundary state description of the rolling tachyon is valid during a finite time determined by the string coupling, and the energy could be dissipated into the bulk beyond this time \cite{23}. Moreover, this method shows the decoupling of the open string modes at the non-perturbative minima of the tachyon potential \cite{25}. Previously we calculated the boundary states associated with a dynamical (rotating and moving) D$p$-brane in the presence of electromagnetic and tachyonic background fields \cite{12, 24}. Now, by making use of the same boundary state, we shall construct the corresponding partition function, as obtained by the BSFT method. Then, we shall examine the instability of a D$p$-brane. We demonstrate that tachyon condensation can make such a dynamical brane unstable and hence reduce the brane's dimension. \section{Boundary state of a dynamical brane} For constructing a boundary state corresponding to a dynamical (rotating-moving) D-brane in the presence of some background fields, we start with the action \bea S &=& -\frac{1}{4\pi\alpha'} {\int}_\Sigma d^{2}\sigma(\sqrt{-g}g^{ab}G_{\mu\nu}\partial_a X^{\mu}\partial_b X^{\nu}+\varepsilon^{ab} B_{\mu\nu}\partial_a X^{\mu}\partial_b X^{\nu}) \nonumber\\ &+& \frac{1}{2\pi\alpha'} {\int}_{\partial\Sigma} d\sigma ( A_\alpha \partial_{\sigma}X^{\alpha}+ \omega_{\alpha\beta}J^{\alpha\beta}_{\tau} +T(X^{\alpha})), \eea where $\Sigma$ and $\partial\Sigma$ are the closed string worldsheet and its boundary, respectively. This action contains the Kalb-Ramond field $B_{\mu\nu}$, a $U(1)$ gauge field $A_\alpha$, an $\omega$-term for the rotation and motion of the brane, and a tachyonic field. We shall use $\{X^\alpha|\alpha =0, 1, \cdot \cdot \cdot ,p \}$ for the worldvolume directions of the brane and $\{X^i| i= p+1, \cdot \cdot \cdot ,d-1\}$ for the directions perpendicular to it.
The background fields $G_{\mu \nu}$ and $B_{\mu \nu}$ are considered to be constant, and for the $U(1)$ gauge field we use the gauge $A_{\alpha}=-\frac{1}{2}F_{\alpha \beta }X^{\beta}$ which possesses a constant field strength. Besides, the tachyon profile $T=\frac{1}{2}U_{\alpha\beta}X^{\alpha}X^{\beta}$ will be used, where the symmetric matrix $U_{\alpha\beta}$ is constant. The $\omega$-term, which is responsible for the brane's rotation and motion, contains the anti-symmetric angular velocity ${\omega }_{\alpha \beta}$ and the angular momentum density $J^{\alpha \beta }_{\tau}$, given by ${\omega }_{\alpha \beta}J^{\alpha \beta }_{\tau}=2{\omega }_{\alpha \beta }X^{\alpha }{\partial }_{\tau }X^{\beta }$. In fact, the component $\omega_{0 {\bar \alpha}}|_{{\bar \alpha} \neq 0}$ denotes the velocity of the brane along the direction $X^{\bar \alpha}$ while $\omega_{{\bar \alpha}{\bar \beta}}$ represents its rotation. It should be noted that the rotation and motion of the brane are considered to be within its worldvolume. In fact, the various fields inside the brane break the Lorentz symmetry, and hence such a dynamics (rotation and motion) is sensible. Suppose that the following mixed elements vanish, i.e. $B_{\alpha i} =U_{\alpha i} =0 $. The oscillating part of the bosonic boundary state is given by \bea {|B_{\rm Bos}\rangle}^{\left({\rm osc}\right)}\ =\prod^{\infty }_{n=1} {[\det Q_{(n)}]^{-1}}\;{\exp \left[-\sum^{\infty }_{m=1} {\frac{1}{m}{\alpha }^{\mu }_{-m}S_{(m)\mu \nu } {\widetilde{\alpha }}^{\nu }_{-m}}\right]\ } {|0\rangle}_{\alpha} \otimes {|0\rangle}_{\widetilde{\alpha }} \;, \eea in which the matrices are as follows: \bea &~& Q_{(n){\alpha \beta }} = {\eta }_{\alpha \beta } -{{\mathcal F}}_{{\mathbf \alpha }{\mathbf \beta }}+\frac{i}{2n}U_{\alpha \beta }, \nonumber\\ &~& S_{(m)\mu\nu}=(\Delta_{(m)\alpha \beta}\; ,\; -{\delta}_{ij}), \nonumber\\ &~& \Delta_{(m)\alpha \beta} = (M_{(m)}^{-1}N_{(m)})_{\alpha \beta}, \nonumber\\ &~& M_{(m){\alpha \beta }} = {\eta }_{\alpha \beta }+4{\omega }_{\alpha \beta }-{{\mathcal F}}_{{\mathbf \alpha }{\mathbf \beta }}+\frac{i}{2m}U_{\alpha \beta }, \nonumber\\ &~& N_{(m){\alpha \beta }} = {\eta }_{\alpha \beta } +4{\omega }_{\alpha \beta } +{{\mathcal F}}_{{\mathbf \alpha }{\mathbf \beta }} -\frac{i}{2m}U_{\alpha \beta }, \nonumber\\ &~& {\cal{F}}_{\alpha \beta}=\partial_\alpha A_\beta -\partial_\beta A_\alpha - B_{\alpha \beta} . \eea The normalization factor $\prod^{\infty }_{n=1}{{[\det Q_{(n){\alpha \beta }}]}^{-1}}$ is an effect of the disk partition function. In addition, the zero-mode part of the bosonic boundary state takes the form \bea {{\rm |}B_{\rm Bos}\rangle}^{\left(0\right)} &=& \frac{T_p}{2}\int^{\infty }_{{\rm -}\infty } \exp\left\{i{\alpha }^{{\rm '}}\left[\sum^{p}_{\alpha =0} {\left(U^{{\rm -}{\rm 1}}{\mathbf A}\right)}_{\alpha \alpha} {\left(p^{\alpha}\right)}^{{\rm 2}}{\rm +} \sum^{p}_{\alpha ,\beta {\rm =0},\alpha \ne \beta}{{\left(U^{{\rm -}{\rm 1}}{\mathbf A}+{\mathbf A}^T U^{-1}\right)}_{\alpha \beta } p^{\alpha }p^{\beta}}\right]\right\}{\rm \ \ } \nonumber\\ &\times& \left( \prod_{\alpha}{\rm |}p^{\alpha}\rangle dp^{\alpha}\right) \otimes\prod_i{\delta {\rm (}x^i}{\rm -}y^i{\rm )} {\rm |}p^i{\rm =0}\rangle , \eea where ${\mathbf A}_{\alpha \beta}=\eta_{\alpha \beta} + 4\omega_{\alpha \beta}$.
The NS-NS and R-R sectors possess the following fermionic boundary states \bea &~& |B_{\rm Ferm} \rangle_{\rm NS}=\prod^{\infty}_{r=1/2}[\det Q_{(r)}]\exp \bigg{[}i\sum^{\infty}_{r=1/2}(b^{\mu }_{-r} S_{(r)\mu \nu}{\widetilde b}^{\nu}_{-r})\bigg{]}|0 \rangle ,\\ &~& |B_{\rm Ferm} \;\rangle_{\rm R} =\prod^{\infty }_{n=1}[\det Q_{(n)}] {\exp \left[i \sum^{\infty }_{m=1}{(d^{\mu }_{-m}S_{(m)\mu \nu } {\widetilde{d}}^{\nu }_{-m})} \right]\ } |B\rangle^{(0)}_{\rm R}. \eea The explicit form of the zero-mode state $|B\rangle^{(0)}_{\rm R}$ in the Type IIA and Type IIB theories and its contribution to the spin structure can be found in \cite{24} in complete detail. We do not need its explicit form here because, for obtaining the partition function, it will be projected onto the bra-vacuum; hence, only the part of the boundary state built on the vacuum survives. The total boundary state in the NS-NS and R-R sectors is given by \bea |B \rangle_{\rm NS,R}=|B_{\rm Bos}\rangle^{({\rm osc})} \otimes|B_{\rm Bos}\rangle^{(0)} \otimes|B_{\rm Ferm}\rangle_{\rm NS,R} . \eea In fact, the total boundary state also contains the ghost and superghost boundary states. Since these parts are free of the background fields, and in particular of the characteristic matrix of the tachyon, we omit them. Note that the boundary state (7) contains significant information about the nature of the brane. \section{Tachyon condensation and collapse of a D$p$-brane} The configuration space of the boundary string field theory (BSFT) can be described as follows: it is the space of 2-dimensional worldsheet theories on the disk with a fixed conformal worldsheet action in the bulk and arbitrary boundary interactions, and its central object is the disk partition function of the open string theory. It has been demonstrated that, at the tree level, the disk partition function in the BSFT appears as the normalization factor of the boundary state. In other words, the partition function can be obtained as the vacuum amplitude of the boundary state \bea Z^{\rm Disk}=\langle {\rm vacuum}|B\rangle. \eea Thus, in our setup the partition function takes the following form \bea Z_{\rm Bos}^{\rm Disk} &=& \frac{T_p}{2}\int^{\infty }_{{\rm -}\infty } {\prod_{\alpha }{dp^{\alpha }}}\exp\bigg{\{}{i{\alpha }^{{\rm '}}\left[\sum^{p}_{\alpha =0} {\left(U^{{\rm -}{\rm 1}}{\mathbf A}\right)}_{\alpha \alpha} {\left(p^{\alpha}\right)}^{{\rm 2}}{\rm +} \sum^{p}_{\alpha ,\beta {\rm =0},\alpha \ne \beta}{{\left(U^{{\rm -}{\rm 1}}{\mathbf A}+{\mathbf A}^T U^{-1}\right)}_{\alpha \beta } p^{\alpha }p^{\beta}}\right]\bigg{\}}{\rm \ \ }} \nonumber\\ & \times & \prod^{\infty }_{n=1}{[\det Q_{(n)}]^{-1}}, \eea for the bosonic part, and \bea Z_{\rm Ferm}^{\rm Disk}=\prod^{\infty}_{k>0}[\det Q_{(k)}], \eea for the fermionic part, where $k$ is half-integer (integer) for the NS-NS (R-R) sector.
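Let us also record the standard $\zeta$-function regularization identities (with the Riemann and Hurwitz zeta values $\zeta(0)=-\frac{1}{2}$, $\zeta'(0)=-\frac{1}{2}\ln (2\pi)$, $\zeta(0,\frac{1}{2})=0$ and $\zeta'(0,\frac{1}{2})=-\frac{1}{2}\ln 2$), which give a finite meaning to the infinite products above and enter the condensation limits below, \[ \prod^{\infty }_{n=1}a=a^{\zeta (0)}=\frac{1}{\sqrt{a}}\,,\qquad \prod^{\infty }_{n=1}n=e^{-\zeta' (0)}=\sqrt{2\pi}\,,\qquad \prod^{\infty }_{r=1/2}a=a^{\zeta (0,\frac{1}{2})}=1\,,\qquad \prod^{\infty }_{r=1/2}r=e^{-\zeta' (0,\frac{1}{2})}=\sqrt{2}\,. \] For instance, for a single mode with eigenvalue $u$ these identities yield $\prod^{\infty}_{r=1/2}\frac{u}{2r}\Big/\prod^{\infty}_{n=1}\frac{u}{2n}=\sqrt{\pi u/2}$, which, for $u=iU_{pp}$, is the origin of the square-root factors appearing below.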
Therefore, after integrating on the momenta and considering both fermionic and bosonic parts, the total partition function in superstring theory is given by \bea Z_{\rm total}^{\rm Disk} &=& \frac{T_p}{2} \left(\frac{i\pi}{\alpha'} \right)^{(p+1)/2} \frac{1}{\sqrt{{\det (D + H)}}}\frac{\prod^{\infty}_{k>0}[\det Q_{(k)}]}{\prod^{\infty }_{n=1} {[\det Q_{(n)}]}}, \eea where the diagonal matrix possesses the elements $D_{\alpha \beta}= (U^{-1}A)_{\alpha \alpha}\delta_{\alpha \beta}$, and the matrix $H_{\alpha \beta}$ is defined by \bea H_{\alpha \beta}= \bigg{\{} \begin{array}{c} (U^{-1}A + A^T U^{-1})_{\alpha \beta}\;\;,\;\;\;\alpha \neq \beta ,\\ 0 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;,\;\;\;\alpha = \beta . \end{array} \eea The partition function enables us to investigate the effect of the tachyon condensation on the instability of the D$p$-brane. According to the conventional literature, the tachyonic mode of the open string spectrum makes D-branes unstable. This phenomenon is called tachyon condensation. As the tachyon condenses, the dimension of the brane decreases and, in the final stage, one obtains the closed string vacuum. In the boundary sigma-model description, tachyon condensation usually starts with a conformal theory with $d$ Neumann boundary conditions in the UV; adding a relevant tachyon field then causes the theory to roll toward an IR fixed point, which is either the closed string vacuum or a D$p$-brane corresponding to a new vacuum with $(d-p-1)$ Dirichlet boundary conditions. Owing to the characteristic matrix of our tachyon, the investigation of tachyon condensation in this work is more general than in conventional studies, which usually consider a single parameter for the tachyon field. Now let us check the stability or instability of the D$p$-brane in our setup. The tachyon condensation can be realized by taking at least one of the tachyon's elements to infinity, i.e. $U_{pp} \to \infty$. First, look at the R-R sector. By making use of ${\mathop{\lim }_{U_{pp}\to \infty } (U^{-1})_{p\alpha }=\mathop{{\rm \lim}}_{U_{pp}\to \infty } (U^{-1})_{\alpha p}=0\ }$, the dimensional reduction of the matrices $U^{-1}{\mathbf A}$, ${\mathbf A}^T U^{-1}$ and $D$ is obvious. Therefore, according to Eq. (11), in the R-R sector we observe that the direction $x^p$ has been omitted from the resulting brane. Now concentrate on the factor $\prod^{\infty}_{r=\frac{1}{2}}[\det Q_{(r)}]/\prod^{\infty }_{n=1} {[\det Q_{(n)}]}$ in the NS-NS sector of the superstring partition function. Using the limit \bea {{\mathop{\lim }_{U_{pp}\to \infty }\prod^{\infty}_{n=1} {{\bigg [}\det {\bigg (}{\eta -{\cal{F}} +{\frac{iU}{2n}{\bigg )}} _{(p+1)\times (p+1)}{\bigg ]}}^{-1}\ }\ }} =\prod^{\infty}_{n=1}{\ \ \frac{2n}{iU_{pp}}\left[ \det{\left( \eta -{\cal{F}} +\frac{iU}{2n}\right) }_{p\times p} \ \ \right]^{-1}} \eea the effect of tachyon condensation on this factor is given by \bea \mathop{\lim }_{U_{pp}\to \infty }{\frac{\prod^{\infty}_{r=\frac{1}{2}}[\det (Q_{(r)})_{(p+1)\times (p+1)}]}{\prod^{\infty }_{n=1} {[\det (Q_{(n)})_{(p+1)\times (p+1)}]}}} \longrightarrow \sqrt{\frac{i\pi U_{pp}}{2}}\; \frac{\prod^{\infty}_{r=\frac{1}{2}}[\det (Q_{(r)})_{p\times p}]}{\prod^{\infty }_{n=1} {[\det (Q_{(n)})_{p\times p}]}} \nonumber\\ \longrightarrow \sqrt{ \frac{i\pi U_{pp}}{2}\det(\eta -{\cal{F}})}\det \left[\frac{\sqrt{\pi}\Gamma \left( 1+\frac{i}{2}(\eta -{\cal{F}})^{-1}U \right)}{\Gamma \left(\frac{1}{2}+\frac{i}{2}(\eta -{\cal{F}})^{-1}U \right) }\right]_{p\times p} .
\eea The $p \times p$ matrices are similar to the initial $(p+1) \times (p+1)$ matrices in which the last rows and last columns have been omitted. In order to avoid divergent quantities due to the existence of the infinite products, in the second and third factors we used the $\zeta$-function regularization. For this reason we used the arrow sign instead of equality. However, it is evident that in this sector the dimensional reduction also occurs. Let us check this factor after successive tachyon condensation, i.e., \bea \mathop{\lim }_{U\to \infty }{\frac{\prod^{\infty}_{r=\frac{1}{2}}[\det (Q_{(r)})_{(p+1)\times (p+1)}]}{\prod^{\infty }_{n=1} {[\det (Q_{(n)})_{(p+1)\times (p+1)}]}}}=\mathop{\lim }_{U\to \infty }{\frac{\prod^{\infty}_{r=\frac{1}{2}}[\det (\frac{iU}{2r})]}{\prod^{\infty }_{n=1} {[\det (\frac{iU}{2n})]}}} \longrightarrow \mathop{\lim }_{U\to \infty }\left(\frac{i\pi}{2}\right)^{(p+1)/2}\sqrt{\det U}\;, \eea where in the last term again, $\zeta$-function regularizations for infinite products have been used. Therefore, the total partition function takes the form \bea Z_{\rm total}^{\rm Disk} &=& \frac{T_p}{2} \left(-\frac{\pi^2}{2\alpha'} \right)^{(p+1)/2} \sqrt{ \frac{\det U}{\det (D + H)}}\;. \eea In this limit, condensation would take place along all directions of the brane's worldvolume. As can be seen, the dimensional reduction induced by the sequential condensation process cannot make the tachyon disappear. To complete the discussion, let us examine the effect of tachyon condensation directly via the boundary state approach. According to Eqs. (5) and (6), apart from the normalization factors, i.e., the partition functions, look at the $\Delta_{(m)}$ matrix in which the tachyon enters. After applying the limit $U_{pp} \to \infty$, this matrix possesses an eigenvalue ``-1'', i.e., we deduce that $x^p$ has been removed from the Neumann directions and instead added to the Dirichlet directions. This process would be the same as in the bosonic case. According to the above condensation processes, via the boundary state and the BSFT approaches, the result is that in our setup the dimensional reduction takes place in both the NS-NS and R-R sectors of the superstring theory. That is, after tachyon condensation, such a rotating-moving D$p$-brane with photonic and tachyonic background fields reduces to an unstable D$(p-1)$-brane with its own background fields, rotation and motion. Thus, imposing rotation and motion on an unstable D-brane does not protect it against collapse during the process of tachyon condensation.
\section{Introduction} Combinatorial optimization problems (COPs) concern a wide variety of real-world applications, including vehicle routing \cite{toth2002vehicle}, path planning \cite{pohl1970heuristic}, network design \cite{johnson1978complexity}, resource allocation \cite{manne1960job} and mechanism design \cite{de2003combinatorial} problems. Many of them are difficult to solve with limited computational resources due to their NP-hardness. Nonetheless, the widespread importance of COPs has inspired research in designing algorithms for solving them, including optimal and exact algorithms, approximation algorithms, heuristic algorithms and data-driven algorithms. In this paper, we focus specifically on Integer Linear Programs (ILPs) since they are a powerful tool for modeling and solving a broad collection of COPs. Branch-and-Bound (BnB) is an optimal and complete tree search algorithm and is one of the state-of-the-art algorithms for ILPs \cite{land2010automatic}. It is also the core of many ILP solvers such as SCIP \cite{BestuzhevaEtal2021OO} and Gurobi \cite{gurobi}. Substantial research effort has been made to improve it over the past decades \cite{achterberg2013mixed}. However, BnB still falls short of delivering practical impact due to scalability issues \cite{khalil2016learning,gasse2019exact}. On the other hand, Large Neighborhood Search (LNS) is a powerful heuristic algorithm for hard COPs and has recently been applied to solve ILPs \cite{song2020general,wu2021learning,sonnerat2021learning} in the machine learning (ML) community. To solve ILPs, LNS starts with an initial solution, i.e., a feasible assignment of values to the variables. It then iteratively improves the best solution found so far (i.e., the \textit{incumbent solution}) by applying {\it destroy heuristics} to select a subset of variables and solving a sub-ILP that optimizes only the selected variables while leaving the others fixed. ML-based destroy heuristics have been shown to be efficient and effective, but they are often tailored to a specific problem domain and require extensive computational resources for learning. A few non-ML destroy heuristics have been studied, such as the randomized heuristics \cite{song2020general,sonnerat2021learning} and the Local Branching (LB) heuristic \cite{fischetti2003local,sonnerat2021learning}, but they are either less efficient or less effective than the ML-based ones. The randomized heuristics select the neighborhood by quickly sampling a random subset of variables, which is often of low quality. LB computes the optimal solution among all solutions that differ from the current incumbent solution on a limited number of variables; however, LB is computationally expensive since it requires solving an ILP of the same size as the original problem. To strike a balance between efficiency and effectiveness, we propose a simple yet effective destroy heuristic \LBRELAX that is based on the linear programming (LP) relaxation of LB. Instead of solving an ILP to find the neighborhood as LB does, \LBRELAX computes its LP relaxation. It then selects the variables greedily based on the difference between their values in the incumbent solution and in the LP relaxation solution. We also propose two variants, \LBRELAXS and \LBRELAXRR, which respectively add randomized sampling and combine a randomized heuristic with \LBRELAX to help escape local optima more efficiently. 
In experiments, we compare \LBRELAX and its variants against LNS with baseline destroy heuristics and BnB on several ILP benchmarks and show that they achieve state-of-the-art anytime performance. We also show that \LBRELAX achieves competitive results with, and sometimes even outperforms, the ML-based destroy heuristics. We further test \LBRELAX and its variants on selected difficult MIPLIB instances \cite{MIPLIB} that encompass diverse problem domains, structures and sizes and show that they achieve the best performance on at least 40\% of the instances. We also empirically show that \LBRELAX and \LBRELAXS find neighborhoods of similar quality to LB but are much faster; they sometimes even outperform LB because LB is too slow to find good enough neighborhoods within a reasonable time cutoff. \vspace{-0.05in} \section{Background} \vspace{-0.05in} In this section, we first define the ILP and introduce its LP relaxation. We then introduce LNS for ILP solving and the Local Branching (LB) heuristic. \subsection{ILP and its LP Relaxation} An \textit{integer linear program (ILP)} is defined as \[\min \bc^{\mathsf{T}}\bx\quad \textrm{ s.t. } \bA \bx\leq \bb \textrm{ and } \bx \in \{0,1\}^n,\] where $\bx = (x_1,\ldots, x_n)^\sfT$ denotes the $n$ binary variables to be optimized, $\bc\in \mathbb{R}^n$ denotes the vector of objective coefficients and $\bA\in \mathbb{R}^{m\times n}$ and $\bb\in \mathbb{R}^{m}$ specify the $m$ linear constraints. A \textit{solution} to the ILP is a feasible assignment of values to the variables. The \textit{linear programming (LP) relaxation} of an ILP is obtained by relaxing the binary variables in the ILP to continuous variables between 0 and 1, i.e., by replacing the integrality constraint $\bx\in \{0,1\}^n$ with $\bx\in[0,1]^n$. Note that, in this paper, we focus on the formulation above that consists of only binary variables, but our methods can also be applied to mixed integer linear programs with continuous variables and/or non-binary integer variables. \begin{algorithm}[t] \small \caption{LNS for ILPs}\label{algo::LNSforILP} \begin{algorithmic}[1] \State {\bf Input: } An ILP. \State $\bx^0\gets$ Find an initial solution to the input ILP \State $t\gets 0$ \While {time limit not exceeded} \State $\calX^t\gets$ Select a subset of variables to destroy \State $\bx^{t+1}\gets$ Solve the ILP with additional constraints $\{x_i = x_i^t: x_i\notin \calX^t\}$ \State $t\gets t + 1$ \EndWhile \State \Return $\bx^t$ \end{algorithmic} \end{algorithm} \subsection{LNS for ILP solving} LNS is a heuristic algorithm that starts with an initial solution and then iteratively reoptimizes a part of the solution by applying the destroy and repair operations until a time limit is exceeded. Let $\bx^0$ be the initial solution. In iteration $t\geq 0$ of LNS, given the \textit{incumbent solution} $\bx^t$, defined as the best solution found so far, a destroy operation is performed by a \textit{destroy heuristic} that selects a subset of $k_t$ variables $\calX^t= \{x_{i_1},\ldots, x_{i_{k_t}}\}$. The repair operation is performed by solving a sub-ILP with $\calX^t$ as the variables while fixing the values of $x_j\notin \calX^t$ to be the same as in $\bx^t$. Compared to BnB, LNS is more effective in improving the objective value $\bc^{\mathsf{T}}\bx$, or the primal bound, especially on difficult instances \cite{song2020general,sonnerat2021learning,wu2021learning}. Compared to other local search methods, LNS explores a large neighborhood in each step and thus is more effective in avoiding local minima. LNS for ILPs is summarized in Algorithm \ref{algo::LNSforILP}. 
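As a companion to Algorithm \ref{algo::LNSforILP}, the following Python sketch shows one possible shape of this loop. It is only an illustration: the helpers \texttt{find\_initial\_solution}, \texttt{select\_destroy\_set} and \texttt{solve\_sub\_ilp}, as well as the \texttt{costs} attribute of the problem object, are hypothetical placeholders for the solver-specific steps (e.g., calls into SCIP), not part of any released code. \begin{verbatim}
import time

def objective(costs, solution):
    # c^T x for a solution given as a dict {variable index: 0 or 1}.
    return sum(costs[i] * v for i, v in solution.items())

def lns(ilp, time_limit, find_initial_solution,
        select_destroy_set, solve_sub_ilp):
    # Generic LNS loop of Algorithm 1 (minimization assumed).
    incumbent = find_initial_solution(ilp)                 # x^0
    t, deadline = 0, time.time() + time_limit
    while time.time() < deadline:
        destroyed = select_destroy_set(ilp, incumbent, t)  # destroy step
        # Repair step: re-solve with every variable outside `destroyed`
        # fixed to its value in the incumbent solution.
        candidate = solve_sub_ilp(ilp, incumbent, destroyed)
        if candidate is not None and (objective(ilp.costs, candidate)
                                      < objective(ilp.costs, incumbent)):
            incumbent = candidate                          # new incumbent
        t += 1
    return incumbent
\end{verbatim}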
\subsection{LB Heuristic} The LB heuristic \cite{fischetti2003local} was originally proposed as a primal heuristic in BnB but is also applicable in LNS for ILP solving \cite{sonnerat2021learning,liurevisiting}. Given the incumbent solution $\bx^t$ in iteration $t$ of LNS, the LB heuristic \cite{fischetti2003local} aims to find the subset of variables to destroy $\calX^t$ that leads to the optimal $\bx^{t+1}$ differing from $\bx^t$ on at most $k_t$ variables, i.e., it computes the optimal solution $\bx^{t+1}$ within a given Hamming ball of radius $k_t$ centered around $\bx^t$. To find $\bx^{t+1}$, the LB heuristic solves the LB ILP, which is exactly the same ILP as the input but with one additional constraint that limits the distance between $\bx^t$ and $\bx^{t+1}$: $$\sum_{i\in[n]:x^t_i=0}x^{t+1}_i + \sum_{i\in[n]:x^t_i=1}(1-x^{t+1}_i)\leq k_t. $$ The LB ILP is of the same size as the input ILP (i.e., it has the same number of variables and one more constraint); therefore, it is often slow to solve in practice. \section{Related Work} In this section, we summarize related work on LNS for ILPs, LNS-based primal heuristics in BnB and LNS for other COPs. \subsection{LNS for ILPs} While a lot of effort has been made to improve BnB for ILPs over the past decades, LNS for ILPs has not been studied extensively. Recently, Song et al. \cite{song2020general} showed that even a randomized destroy heuristic in LNS can outperform state-of-the-art BnB in runtime. In the same paper, they showed that an ML-guided decomposition-based LNS can achieve even better performance, applying reinforcement learning and imitation learning on features of the coefficients of the ILP to learn ML-based destroy heuristics. Since then, there have been a few more studies on ML-based LNS for ILPs. \cite{sonnerat2021learning} learn to select variables to destroy by imitating LB. Similar to \cite{sonnerat2021learning}, \cite{song2022learning} also learn to select variables to destroy in LNS but use reinforcement learning instead. Both \cite{song2022learning} and \cite{sonnerat2021learning} use the bipartite graph representation of the ILP to learn destroy heuristics represented by graph convolutional networks (GCNs). The main difference between \LBRELAX and the ML-based heuristics is that \LBRELAX does not require extra computational resources for learning and is agnostic to the underlying problem distribution. \LBRELAX also strikes a better balance between efficiency and effectiveness than the existing non-ML heuristics. \subsection{LNS-based primal heuristics in BnB} LNS-based primal heuristics form a rich subset of the primal heuristics in BnB for ILPs, and many techniques have been proposed over the past decades. While they share the purpose of improving the primal bound of the ILP, the main differences between LNS-based primal heuristics in BnB and LNS for ILPs are the following: (1) since LNS-based primal heuristics are often more expensive to run than the other primal heuristics in BnB, they are executed periodically at different search tree nodes during the main search, and the execution schedule is itself dynamic; (2) the destroy heuristics for LNS in BnB are often designed to use information, such as the dual bound and the LP relaxation at a search tree node, that is specific to BnB and not directly applicable in LNS for ILPs in our setting. Next, we briefly summarize the destroy heuristics used in LNS-based primal heuristics. 
The Crossover heuristics \cite{rothberg2007evolutionary} destroy variables that have different values in a set of selected known solutions (typically two). The Mutation heuristics \cite{rothberg2007evolutionary} destroy a random subset of variables. Relaxation Induced Neighborhood Search (RINS) \cite{danna2005exploring} destroys the variables whose values disagree between the LP relaxation solution at the current search tree node and the current incumbent solution. Relaxation Enforced Neighborhood Search (RENS) \cite{berthold2014rens} restricts the neighborhood to the feasible roundings of the LP relaxation at the current search tree node. Local Branching \cite{fischetti2003local} restricts the neighborhood to a ball around the current incumbent solution. Distance Induced Neighborhood Search (DINS) \cite{ghosh2007dins} takes the intersection of the neighborhoods of the Crossover, LB and RINS heuristics. Graph-Induced Neighborhood Search (GINS) \cite{maher2017scip} destroys the breadth-first-search neighborhood of a variable in the bipartite graph representation of the ILP. An adaptive LNS primal heuristic that essentially solves a multi-armed bandit problem has been proposed to combine the power of these heuristics \cite{hendel2022adaptive}. \LBRELAX is closely related to RINS \cite{danna2005exploring} since they both use LP relaxations to select neighborhoods. However, RINS is more suitable in BnB since it can adapt dynamically to the constraints added by branching. It uses the LP relaxation of the original problem, whereas \LBRELAX uses that of the LB ILP, which takes into account the incumbent solution that changes from iteration to iteration in LNS. \subsection{LNS for other COPs} LNS has been applied to solve a wide range of COPs, such as the vehicle routing problem \cite{ropke2006adaptive,azi2014adaptive}, the traveling salesman problem \cite{smith2017glns}, scheduling problems \cite{kovacs2012adaptive,vzulj2018hybrid} and path planning problems \cite{LiAAAI22,LiIJCAI21}. Recently, ML-based methods have been applied to improve LNS for those applications \cite{chen2019learning,lu2019learning,hottung2020neural,li2021learning,huang2022anytime}. \section{The Local Branching Relaxation Heuristic} \label{sec::methodolgy} Recently, designing effective destroy heuristics in LNS for ILPs has been a focus in the ML community \cite{song2020general,sonnerat2021learning,wu2021learning}. However, it is difficult to apply ML-based destroy heuristics to general ILPs since they are often customized for ILPs from certain problem distributions, e.g., graph optimization problems from a given graph distribution or scheduling problems where resources and demands follow the distribution of historical data, and they require extra computational resources for training. There has been a lack of study on destroy heuristics that are agnostic to the underlying distribution of the problem. Existing ones such as the randomized heuristics are simple and fast but sometimes not effective \cite{song2020general,sonnerat2021learning}. LB is effective but not efficient \cite{sonnerat2021learning,liurevisiting} since it exhaustively solves an ILP of the same size as the input to find the best improvement. There are well-known approximation algorithms for NP-hard COPs based on LP relaxation \cite{kleinberg2006algorithm}. Typically, they solve the LP relaxation of the ILP of the original problem and apply deterministic or randomized rounding afterwards to construct an integral solution. 
These algorithms often come with theoretical guarantees on effectiveness and are fast, since LPs can be solved in polynomial time. Inspired by those algorithms, we propose the destroy heuristic \LBRELAX that first solves the LP relaxation of the LB ILP and then constructs the neighborhood (selects the variables $\calX^t$ to destroy) based on the LP relaxation solution. Specifically, given an ILP and the incumbent solution $\bx^t$ in iteration $t$, we construct the LB ILP with neighborhood size $k_t$ and solve its LP relaxation. Let $\bar{\bx}^{t+1}$ be the LP relaxation solution of the LB ILP. Also, let $\Delta_i = |\bar{x}_i^{t+1}- x_i^t|$ and $\bar{\calX}^t = \{x_i: \Delta_i> 0, i\in[n]\}$. $\Bar{\calX}^t$ includes all the variables that are fractional in the LP relaxation solution and all integral variables whose values differ from $\bx^t$. In the following, we introduce (1) \LBRELAX, (2) \LBRELAXS, a variant of \LBRELAX with randomized sampling, and (3) \LBRELAXRR, another variant of \LBRELAX that combines a randomized destroy heuristic with \LBRELAX to help avoid local minima more effectively. {\bf \LBRELAX} first obtains the LP relaxation solution $\bar{\bx}^{t+1}$ of the LB ILP and then calculates $\Delta_i$ and $\bar{\calX}^t$ from $\bar{\bx}^{t+1}$ and $\bx^t$. To construct $\calX^t$ (the set of variables to destroy), it then greedily selects the $k_t$ variables with the largest $\Delta_i$ and breaks ties uniformly at random. Intuitively, \LBRELAX greedily selects the variables whose values are most likely to change in the incumbent solution $\bx^t$ after solving the LB ILP. \LBRELAX is summarized in Algorithm \ref{algo::LBRELAX}. One could argue for alternatively using the LP relaxation of the original ILP instead of that of the LB ILP, similar to RINS \cite{danna2005exploring}. However, the advantage of \LBRELAX over using the LP relaxation of the original problem is that, by approximating the solution of the LB ILP, \LBRELAX selects neighborhoods based on the incumbent solution, which changes from iteration to iteration, whereas the original LP relaxation is a static and less informative feature that is pre-computed before the LNS procedure. \def\NoNumber#1{{\def\alglinenumber##1{}\State #1}\addtocounter{ALG@line}{-1}} \begin{algorithm}[t] \small \caption{\LBRELAX ({\color{blue}\LBRELAXS})}\label{algo::LBRELAX} \begin{algorithmic}[1] \State {\bf Input: } An ILP, incumbent solution $\bx^t$ and neighborhood size $k_t$. \State Construct the LB ILP given $\bx^t$ and $k_t$ \State $\bar{\bx}^{t+1}\gets$ Solve the LP relaxation of the LB ILP \State $\Delta_i \gets |\bar{x}_i^{t+1}-x_i^t|$ for all $i\in[n]$ \State $\bar{\calX}^{t} \gets \{x_i:\Delta_i >0, i\in[n]\}$ \If {$|\bar{\calX}^{t}| \geq k_t$} \State $\calX^t\gets$ Select $k_t$ variables greedily with the largest $\Delta_i$ from $\bar{\calX}^{t}$ \NoNumber{({\color{blue}$\calX^t\gets$ Select $k_t$ variables uniformly at random from $\bar{\calX}^{t}$}) } \Else \State $\calX'\gets$ a random subset of $k_t-|\bar{\calX}^{t}|$ variables from $\{x_i:\Delta_i = 0, i\in [n]\}$ \State $\calX^{t} \gets \bar{\calX}^{t} \cup \calX'$ \EndIf \State \Return $\calX^t$ \end{algorithmic} \end{algorithm} {\bf \LBRELAXS} is a variant of \LBRELAX with randomized sampling. To construct $\calX^t$, instead of greedily choosing the variables with the largest $\Delta_i$, it selects $k_t$ variables from $\bar{\calX}^t$ uniformly at random. If $|\bar{\calX}^t|< k_t$, it selects all variables from $\bar{\calX}^t$ and $k_t-|\bar{\calX}^t|$ variables from the remaining ones uniformly at random. In Algorithm \ref{algo::LBRELAX}, the parts in blue highlight the differences between \LBRELAX and \LBRELAXS; a concrete sketch of the destroy step is also given below. 
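To make the selection rule concrete, the following Python sketch implements the destroy step of Algorithm \ref{algo::LBRELAX} on top of SciPy's LP solver. It is a minimal illustration under our own conventions (dense constraint matrices, ties broken by a tiny random perturbation), not the exact implementation used in the experiments. \begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def lb_relax_destroy(c, A_ub, b_ub, x_inc, k, sampling=False, rng=None):
    # Destroy step of LB-RELAX (sampling=False) / LB-RELAX-S (sampling=True).
    # c, A_ub, b_ub: data of  min c^T x  s.t.  A_ub x <= b_ub, x in {0,1}^n,
    # here relaxed to x in [0,1]^n.  x_inc: incumbent 0/1 vector x^t.
    rng = rng or np.random.default_rng()
    n = len(c)
    # LB constraint  sum_{x^t_i=0} x_i + sum_{x^t_i=1} (1 - x_i) <= k,
    # rewritten as  (1 - 2 x^t)^T x <= k - sum(x^t).
    A = np.vstack([A_ub, 1.0 - 2.0 * x_inc])
    b = np.append(b_ub, k - x_inc.sum())
    # LP relaxation of the LB ILP.
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method="highs")
    delta = np.abs(res.x - x_inc)
    support = np.flatnonzero(delta > 0)      # the set bar{X}^t
    if len(support) >= k:
        if sampling:                         # LB-RELAX-S: uniform sampling
            return rng.choice(support, size=k, replace=False)
        # LB-RELAX: k largest Delta_i, ties broken (approximately)
        # uniformly at random via a tiny random perturbation.
        keys = delta + 1e-12 * rng.random(n)
        return np.argsort(-keys)[:k]
    # Too few candidates: pad with random variables having Delta_i = 0.
    rest = np.flatnonzero(delta == 0)
    pad = rng.choice(rest, size=k - len(support), replace=False)
    return np.concatenate([support, pad])
\end{verbatim}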
Since $0\leq \Delta_i\leq 1$, one could treat the $\Delta_i$ as a probability distribution and sample $k_t$ variables accordingly (see \cite{sonnerat2021learning} for an example of how to normalize the distribution to sample $k_t$ variables). However, this variant performs similarly to or slightly worse than \LBRELAXS empirically and requires extra hyperparameter tuning for the normalization. We therefore omit it and focus on the simpler variant in this paper. {\bf \LBRELAXRR} is another variant of \LBRELAX that leverages a randomized destroy heuristic to avoid local minima more effectively. Once \LBRELAX fails to find an improving solution in iteration $t$, if we let $k_{t+1} = k_t$, it will solve the exact same LP relaxation of the LB ILP again in the next iteration, since the incumbent solution $\bx^{t+1}=\bx^{t}$ and the neighborhood size stay the same. Also, since \LBRELAX uses a greedy rule, it will deterministically select the same set of variables with the largest $\Delta_i$'s, except that it might need to break ties randomly when multiple variables have the same $\Delta_i$. Therefore, it is susceptible to getting stuck at local minima. To tackle this issue, once \LBRELAX fails to find a new incumbent solution, we update $k_{t+1}$ using the adaptive method described in the next paragraph. If it fails again in the next iteration, we switch to a randomized destroy heuristic that uniformly samples variables at random without replacement to construct the neighborhood. We switch back to \LBRELAX after the randomized destroy heuristic has run for at least $\gamma$ seconds and a new incumbent solution has been found. Next, we discuss an adaptive method to set the neighborhood size $k_t$ for \LBRELAX and its variants. The initial neighborhood size $k_0$ is set to a constant or a fraction of the number of variables in the input ILP. In iteration $t$, if LNS finds a new incumbent solution, we let $k_{t+1}=k_t$. Otherwise, we increase $k_t$ by a factor $\alpha >1$. Also, we upper bound the neighborhood size $k_t$ by a fraction $\beta<1$ of the number of variables to make sure the sub-ILP in each iteration is not too difficult to solve, i.e., we let $k_{t+1} =\min\{ \alpha\cdot k_t, \beta\cdot n\}.$ This adaptive way of choosing $k_t$ also helps address the issue of local minima by expanding the search neighborhood when LNS fails to improve the solution. It is applicable not only to \LBRELAX and its variants but also to any destroy heuristic that requires a given neighborhood size $k_t$. \section{Empirical Evaluation}\label{sec::experiment} In this section, we demonstrate the efficiency and effectiveness of \LBRELAX and its variants through extensive experiments on ILP benchmarks. All code is written in Python and will be made available upon publication. \subsection{Setup} \subsubsection{Instance Generation} We evaluate on four NP-hard problem benchmarks selected from previous work \cite{wu2021learning,song2020general,scavuzzo2022learning}, which consist of synthetic minimum vertex cover (MVC), maximum independent set (MIS), set covering (SC) and multiple knapsack (MK) instances. MVC and MIS instances are generated according to the Barabasi-Albert random graph model \cite{albert2002statistical}, with 9,000 nodes and average degree 5 following \cite{song2020general}. 
SC instances are generated with 4,000 variables and 5,000 constraints following \cite{wu2021learning}. MK instances are generated with 400 items and 40 knapsacks following \cite{scavuzzo2022learning}. For each problem, we generate 100 instances. \subsubsection{Baselines} We compare \LBRELAX, \LBRELAXRR and \LBRELAXS with the following baselines: \begin{itemize} \item BnB using SCIP (v8.0.1) as the solver with the aggressive mode turned on to focus on improving the primal bound; \item LB: LNS that selects the neighborhood with the LB heuristic; \item \RANDOM: LNS that selects the neighborhood by uniformly sampling a subset of variables of a given neighborhood size $k_t$; \item \GRAPH: LNS that selects the neighborhood based on the bipartite graph representation of the ILP, similar to GINS \cite{maher2017scip}. A bipartite graph representation consists of nodes representing the variables and constraints on two sides, respectively, with an edge connecting a variable and a constraint if the variable has a non-zero coefficient in the constraint. It runs a breadth-first search starting from a random variable node in the bipartite graph and selects the first $k_t$ variable nodes expanded. \end{itemize} Furthermore, we compare our approaches with state-of-the-art ML approaches: \begin{itemize} \item \DM: LNS that selects the neighborhood using a GCN-based policy obtained by learning to imitate the LB heuristic \cite{sonnerat2021learning}. We implement \DM since the authors have not fully open-sourced their code; \item \RL: LNS that selects the neighborhood using a GCN-based policy obtained by reinforcement learning \cite{wu2021learning}. Note that this approach does not require a given neighborhood size $k_t$ since the size is defined implicitly by how the trained policy is used. We use the code made available by the authors. \end{itemize} \subsubsection{Hyperparameters} We conduct our experiments on 2.5GHz Intel Xeon Platinum 8259CL CPUs with 32 GB RAM. All experiments use the hyperparameters described below unless stated otherwise. We use SCIP (v8.0.1) \cite{BestuzhevaEtal2021OO}, the state-of-the-art open source ILP solver, for the repair operations in LNS. To run LNS, we find an initial solution by running SCIP for 10 seconds for MVC, MIS and SC and 20 seconds for MK. We set the time limit to 60 minutes for solving each instance and 2 minutes for each repair operation in LNS. For LB, we instead set the time limit to 10 minutes for each repair operation, since LB solves a larger ILP than the other approaches in each iteration and typically requires a longer time limit. All approaches require a neighborhood size $k_t$ in LNS, except for BnB and \RL. The initial neighborhood size is set to $k_0=400, 200, 150$ and $400$ for MVC, MIS, SC and MK, respectively. For a fair comparison, all baselines use adaptive neighborhood sizes with $\alpha=1.02$ and $\beta=0.5$, except for BnB and \RL. For \LBRELAXRR, we set $\gamma=30$ seconds. \subsubsection{Metrics} We use the following metrics to evaluate the efficiency and effectiveness of the different approaches: (1) The \textit{primal bound} is the objective value of the ILP. (2) The \textit{primal gap} \cite{berthold2006primal} is the normalized difference between the primal bound $v$ and a precomputed best known objective value $v^*$, defined as $\frac{|v-v^*|}{\max(|v|,|v^*|,\epsilon)}$ if $v$ exists and $v\cdot v^*\geq 0$, or 1 otherwise. 
We use $\epsilon=10^{-8}$ to avoid division by zero, and $v^*$ is the best primal bound found within 60 minutes by any approach in the portfolio for comparison. (3) The \textit{primal integral} \cite{achterberg2012rounding} at time $q$ is the integral on $[0,q]$ of the primal gap as a function of time. It captures both the quality of the solutions found and the speed at which they are found. (4) The \textit{survival rate} to meet a certain primal gap threshold is the fraction of instances with the primal gap below the threshold \cite{sonnerat2021learning}. (A minimal sketch of the primal gap and primal integral computations is given below, after Fig.~\ref{res::survivalCurve}.) Since BnB and LNS are both anytime algorithms, we show the metrics as a function of time or of the number of iterations in LNS (when applicable) to demonstrate their anytime performance. \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{figure/legend_timeVSobj.eps} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_timeVSgap_MVC2_BA_9000_3600_nh_400.eps} \caption{MVC} \end{subfigure}\,\, \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_timeVSgap_INDSET2_BA_9000_3600_nh_200.eps} \caption{MIS} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_timeVSgap_SC_4000_5000_3600_nh_150.eps} \caption{SC} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_timeVSgap_MKNAPSACK_400_40_3600_nh_200.eps} \caption{MK} \end{subfigure} \caption{Comparison with non-ML approaches: The primal gap as a function of time, averaged over 100 instances. \label{res::gap}} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{figure/legend_timeVSobj.eps} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_survival_rate_0.0015MVC2_BA_9000_3600_nh_400.eps} \caption{MVC} \end{subfigure}\,\, \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_survival_rate_0.0040INDSET2_BA_9000_3600_nh_200.eps} \caption{MIS} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_survival_rate_0.0125SC_4000_5000_3600_nh_150.eps} \caption{SC} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/ML_survival_rate_0.0035MKNAPSACK_400_40_3600_nh_200.eps} \caption{MK} \end{subfigure} \caption{Comparison with non-ML approaches: The survival rate over 100 instances as a function of time to meet a certain primal gap threshold. The primal gap threshold is chosen from Table \ref{res::nonMLtable} as the median of the average primal gaps at the 60-minute time cutoff over all approaches, rounded to the nearest 0.05\%. 
\label{res::survivalCurve}} \end{figure} 
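For concreteness, the primal gap and primal integral defined above can be computed as in the following minimal Python sketch; representing the incumbent trajectory as a chronological list of (time, primal bound) events is our own convention, and the gap is taken to be 1 before the first incumbent is found. \begin{verbatim}
def primal_gap(v, v_star, eps=1e-8):
    # Gap between primal bound v and best known value v_star;
    # 1 if no solution exists or the bounds have opposite signs.
    if v is None or v * v_star < 0:
        return 1.0
    return abs(v - v_star) / max(abs(v), abs(v_star), eps)

def primal_integral(events, q, v_star):
    # Integral on [0, q] of the primal gap as a step function of time.
    # `events` lists (time, primal bound) pairs for each new incumbent,
    # in chronological order.
    total, prev_t, gap = 0.0, 0.0, 1.0
    for t, v in events:
        if t > q:
            break
        total += gap * (t - prev_t)          # gap held since prev_t
        prev_t, gap = t, primal_gap(v, v_star)
    return total + gap * (q - prev_t)        # last segment up to q
\end{verbatim}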
\subsection{Results} \subsubsection{Comparison with Non-ML Approaches} First, we compare \LBRELAX, \LBRELAXRR and \LBRELAXS with the non-ML approaches, namely BnB, LB, \RANDOM and \GRAPH. Figure \ref{res::gap} shows the primal gap as a function of time, averaged over 100 instances. The results show that \LBRELAX, \LBRELAXRR and \LBRELAXS consistently improve the primal gap considerably faster than the baselines in the first few minutes of LNS. \LBRELAX improves the primal gap slightly faster than \LBRELAXS in all cases. On average, \LBRELAX is always better than the baselines at any point in time on the MK instances, and \LBRELAXS is always better than the baselines on the SC and MK instances. However, both \LBRELAX and \LBRELAXS can get stuck at local minima. In those cases, they need some time to escape by adjusting the neighborhood size and are sometimes overtaken by some baselines at longer runtimes on the MVC and MIS instances. By adding randomization to \LBRELAX, \LBRELAXRR escapes local minima more efficiently than \LBRELAX and \LBRELAXS. On average, \LBRELAXRR is always better than the baselines at any point in time in the search on the MVC, MIS and MK instances. \begin{table}[tbp] \scriptsize \centering \caption{\small Primal gap (PG) (in percent) and primal integral (PI) at the 60-minute time cutoff, averaged over 100 instances, and their standard deviations. \label{res::nonMLtable}} \begin{tabular}{c|rr|rr} \hline & \multicolumn{2}{c|}{MVC} & \multicolumn{2}{c}{MIS} \\ \hline & \multicolumn{1}{c|}{PG (\%)} & \multicolumn{1}{c|}{PI} & \multicolumn{1}{c|}{PG (\%)} & \multicolumn{1}{c}{PI} \\ \hline BnB & \multicolumn{1}{r|}{1.01$\pm$0.46} & 128.6$\pm$14.6 & \multicolumn{1}{r|}{2.80$\pm$1.36} & 144.0$\pm$20.1 \\ \hline LB & \multicolumn{1}{r|}{0.15$\pm$0.08} & 22.1$\pm$3.6 & \multicolumn{1}{r|}{1.20$\pm$0.31} & 56.3$\pm$9.4 \\ \hline \RANDOM & \multicolumn{1}{r|}{0.11$\pm$0.05} & 32.3$\pm$2.3 & \multicolumn{1}{r|}{0.10$\pm$0.05} & 18.0$\pm$2.5 \\ \hline \GRAPH & \multicolumn{1}{r|}{0.17$\pm$0.04} & 40.8$\pm$2.5 & \multicolumn{1}{r|}{1.56$\pm$0.18} & 90.2$\pm$7.6 \\ \hline \LBRELAX & \multicolumn{1}{r|}{\bf0.04$\pm$0.03} & 10.3$\pm$1.7 & \multicolumn{1}{r|}{0.39$\pm$0.12} & 29.4$\pm$4.3 \\ \hline \LBRELAXRR & \multicolumn{1}{r|}{0.09$\pm$0.04} & {\bf9.6$\pm$1.7} & \multicolumn{1}{r|}{\bf 0.04$\pm$0.04} & {\bf 9.3$\pm$1.7} \\ \hline \LBRELAXS & \multicolumn{1}{r|}{0.42$\pm$0.20} & 28.8$\pm$8.1 & \multicolumn{1}{r|}{0.37$\pm$0.11} & 51.7$\pm$10.1 \\ \hline \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{SC} & \multicolumn{2}{c}{MK} \\ \hline BnB & \multicolumn{1}{r|}{1.15$\pm$0.98} & 87.4$\pm$38.6 & \multicolumn{1}{r|}{0.91$\pm$0.59} & 60.7$\pm$17.9 \\ \hline LB & \multicolumn{1}{r|}{1.23$\pm$0.98} & 114.1$\pm$35.7 & \multicolumn{1}{r|}{1.50$\pm$0.48} & 97.7$\pm$13.0 \\ \hline \RANDOM & \multicolumn{1}{r|}{2.68$\pm$1.31} & 124.4$\pm$45.7 & \multicolumn{1}{r|}{1.24$\pm$0.36} & 68.9$\pm$14.7 \\ \hline \GRAPH &\multicolumn{1}{r|}{8.75$\pm$2.15} & 338.2$\pm$77.0 & \multicolumn{1}{r|}{0.33$\pm$0.14} & 23.6$\pm$4.9 \\ \hline \LBRELAX & \multicolumn{1}{r|}{1.37$\pm$0.96} & {63.9$\pm$34.0} & \multicolumn{1}{r|}{0.20$\pm$0.09} & 11.3$\pm$3.0 \\ \hline \LBRELAXRR & \multicolumn{1}{r|}{1.14$\pm$0.90} & {\bf58.9$\pm$31.5} & \multicolumn{1}{r|}{\bf0.00$\pm$0.00} & {\bf3.7$\pm$0.4} \\ \hline \LBRELAXS & \multicolumn{1}{r|}{\bf0.88$\pm$0.85} & 63.8$\pm$32.4 & \multicolumn{1}{r|}{0.19$\pm$0.07} & 11.8$\pm$2.4 \\ \hline \end{tabular} 
\end{table} \begin{table}[htbp] \centering \caption{\small The time (in seconds) to improve the initial solution in one iteration and the improvement of the primal bound, averaged over 100 instances. The time for LB is the solving time of the LB ILP. The time for \LBRELAX and \LBRELAXS is the sum of the solving times of the LB relaxation and the sub-ILP. The numbers in parentheses are the speed-ups. The improvement is computed by taking the difference between the initial solution and the new incumbent solution, and the numbers in parentheses are the losses in quality in percent compared to LB. $\uparrow$ means higher is better, $\downarrow$ means lower is better. \label{res::firstIter}} \small \begin{tabular}{c|c|r|r|r|rllll} \cline{1-6} & & \multicolumn{1}{c|}{MVC} & \multicolumn{1}{c|}{MIS} & \multicolumn{1}{c|}{SC} & \multicolumn{1}{c}{MK} & & & & \\ \cline{1-6} \multirow{2}{*}{LB} & Time$\downarrow$ & 40.2 & 56.0 & 600.0 & 600.0 & & & & \\ \cline{2-6} & Imp.$\uparrow$ & 129.79 & 65.50 & 12.21 & 216.51 & & & & \\ \cline{1-6} \multirow{2}{*}{\LBRELAX} & Time$\downarrow$ & 12.1 (3.3x) & 19.5 (2.9x) & 125.3 (4.8x) & 5.87 (102.2x) & & & & \\ \cline{2-6} & Imp.$\uparrow$ & 129.41 (-0.3\%) & 65.19 (-0.5\%) & 15.77 (+29.2\%) & 141.10 (-34.8\%) & & & & \\ \cline{1-6} \multirow{2}{*}{\LBRELAXS} & Time$\downarrow$ & 12.0 (3.4x) & 19.5 (2.9x) & 24.51 (24.5x) & 5.12 (117.6x) & & & & \\ \cline{2-6} & Imp.$\uparrow$ & 128.61 (-0.9\%) & 62.46 (-4.6\%) & 5.65 (-53.7\%) & 113.48 (-47.6\%) & & & & \\ \cline{1-6} \end{tabular} \end{table} Table \ref{res::nonMLtable} presents the average primal gap and primal integral at the 60-minute time cutoff. (See results at the 15-, 30- and 45-minute time cutoffs in the Appendix.) On the MVC, SC and MK instances, \LBRELAX, \LBRELAXS and \LBRELAXRR all have lower primal gaps and primal integrals on average than any baseline, demonstrating that they not only find higher quality solutions but also find them at a faster speed. On the MIS and MK instances, \LBRELAXRR achieves the lowest primal gap and primal integral among all approaches. It also achieves the lowest primal integral on the MVC and SC instances. Overall, \LBRELAXRR always ranks in the top two in both metrics on all problems. Figure \ref{res::survivalCurve} shows the survival rate over 100 instances as a function of time to meet a certain primal gap threshold. On the MVC instances, \LBRELAX and \LBRELAXRR achieve final survival rates above 0.9, while the best baseline \RANDOM stays below 0.8. On the MIS instances, both \LBRELAXRR and \RANDOM achieve final survival rates of 1.0, but \LBRELAXRR reaches it in less time. On the SC instances, \LBRELAXS and \LBRELAXRR consistently have a higher survival rate than the baselines. On the MK instances, \LBRELAX and its variants achieve survival rates above 0.9 within 15 minutes, while the best baseline \GRAPH only gets to around 0.6 within 60 minutes. One limitation of \LBRELAX and its variants is that they do not perform well on some problem domains, for example the maximum cut and combinatorial auction problems. 
Please see the Appendix for more results. \begin{figure}[tbp] \centering \includegraphics[width=0.45\textwidth]{figure/legend_iterVSobj.eps}\\ \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/AS_primalBoundIteration_MVC2_BA_9000_3600_nh_400_iter11.eps} \caption{MVC} \end{subfigure}\,\, \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/AS_primalBoundIteration_INDSET2_BA_9000_3600_nh_200_iter11.eps} \caption{MIS} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/AS_primalBoundIteration_SC_4000_5000_3600_nh_150_iter11.eps} \caption{SC} \end{subfigure} \begin{subfigure}[htbp]{0.49\textwidth} \centering \includegraphics[height=3.5cm]{figure/AS_primalBoundIteration_MKNAPSACK_400_40_3600_nh_200_iter11.eps} \caption{MK} \end{subfigure} \caption{Comparison with LB: The primal bound as a function of the number of iterations, averaged over 100 instances. \label{res::PBperIter}} \end{figure} Next, we run LB, \LBRELAX and \LBRELAXS for 10 iterations to compare their effectiveness. We follow the same setup as described earlier, except that we do not use adaptive neighborhood sizes, to ensure that they have the same $k_t$ in each iteration $t$. Note that the time limit for solving the sub-ILP in each iteration is set to 10 minutes for LB and 2 minutes for \LBRELAX and \LBRELAXS. Table \ref{res::firstIter} shows the average time to improve the initial solution and the average improvement of the primal bound in the first iteration of LNS. This allows us to compare how closely \LBRELAX and \LBRELAXS approximate the quality of the neighborhood selected by LB and to study the trade-off between quality and time. Compared to LB, \LBRELAX and \LBRELAXS achieve 2.9x-117.6x speed-ups while losing at most 53.7\% in quality. In particular, on the MVC and MIS instances, both \LBRELAX and \LBRELAXS lose only 0.3\% to 4.6\% in quality while achieving at least a 2.9x speed-up; on the SC instances, \LBRELAX even gains 29.2\% in quality and saves 79.1\% in time, because LB cannot find a good enough neighborhood within its time limit. In Figure \ref{res::PBperIter}, we show the primal bound as a function of the number of iterations. This allows comparing the effectiveness of the different heuristics independently of their speed. On the MVC instances, both \LBRELAX and \LBRELAXS perform similarly to but slightly worse than LB. On the SC and MK instances, \LBRELAX achieves better performance than LB, again due to the scalability issues of LB, and \LBRELAXS achieves competitive performance with LB after 10 iterations. On the MIS instances, however, both \LBRELAX and \LBRELAXS quickly improve the primal bound in the first 2-3 iterations but afterwards converge to local minima, and the gaps between them and LB increase. To complete the first 10 iterations, both \LBRELAX and \LBRELAXS take less than 21 minutes on the SC instances and 3.3 minutes on the others, while LB takes at least 57 minutes and sometimes up to 100 minutes. 
\begin{figure}[tbp] \centering \includegraphics[width=0.75\textwidth]{figure/legend_ML_timeVSobj.eps} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_timeVSgap_MVC2_BA_9000_3599_nh_400.eps} \caption{MVC} \end{subfigure} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_timeVSgap_INDSET2_BA_9000_3599_nh_200.eps} \caption{MIS} \end{subfigure} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_timeVSgap_SC_4000_5000_3599_nh_150.eps} \caption{SC} \end{subfigure} \caption{Comparison with ML approaches: The primal gap as a function of time, averaged over 100 instances.\label{res::MLgap}} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.75\textwidth]{figure/legend_ML_timeVSobj.eps} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_survival_rate_0.0010MVC2_BA_9000_3599_nh_400.eps} \caption{MVC} \end{subfigure} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_survival_rate_0.0035INDSET2_BA_9000_3599_nh_200.eps} \caption{MIS} \end{subfigure} \begin{subfigure}[htbp]{0.325\textwidth} \centering \includegraphics[height=2.8cm]{figure/ML_survival_rate_0.0115SC_4000_5000_3599_nh_150.eps} \caption{SC} \end{subfigure} \caption{Comparison with ML approaches: The survival rate over 100 instances as a function of time to meet a certain primal gap threshold. The primal gap thresholds are chosen in the same way as in Fig. \ref{res::survivalCurve}. \label{res::MLsurvivalCurve}} \end{figure} \begin{figure}[tbp] \centering \centering \includegraphics[width=\textwidth]{figure/legend_timeVSobj.eps} \includegraphics[height=3.2cm]{figure/ML_winRate_MIPLIB_FILTERED_3600_nh_0.eps} \includegraphics[height=3.2cm]{figure/ML_survival_rate_0.0050MIPLIB_FILTERED_3600_nh_0.eps} \caption{Results on 31 selected MIPLIB instances: The best performing rate as a function of time (left) and the survival rate over 31 instances as a function of time to meet the primal gap threshold 0.50\% (right).\label{res::winrateMIPLIB}} \end{figure} \subsubsection{Comparison with ML Approaches} Next, we compare \LBRELAX, \LBRELAXRR and \LBRELAXS on the MVC, MIS and SC instances with the ML approaches, namely \DM and \RL. Figure \ref{res::MLgap} shows the primal gap as a function of time, averaged over 100 instances. The results show that \LBRELAX, \LBRELAXRR and \LBRELAXS consistently improve the primal gap considerably faster than \DM and \RL in the first few minutes of LNS. On the MVC instances, \DM surpasses \LBRELAXRR to attain the smallest average primal gap after 20 minutes and achieves (close-to-)zero gaps after 30 minutes. On the MIS instances, \LBRELAXRR has a smaller gap than both \DM and \RL throughout the first 60 minutes. On the SC instances, \DM is very competitive with \LBRELAX and converges to a similar but slightly higher gap than \LBRELAXRR and \LBRELAXS; \RL converges to almost the same primal gap as \LBRELAXRR on average but is worse than the best performer \LBRELAXS. Overall, \LBRELAX and its variants, which do not require extra computational resources for training, are competitive with, and often even better than, state-of-the-art ML approaches, suggesting that they are agnostic to the distributions of the instances and easily applicable to different problem domains. 
\subsubsection{Results on Selected MIPLIB Instances} Finally, we examine how well \LBRELAX and its variants perform on ILPs that are diverse in structure and size. We test them on the MIPLIB dataset \cite{MIPLIB}. MIPLIB contains COPs from various real-world domains. We follow a procedure similar to \cite{wu2021learning} to select instances: we first retain ILP instances with only binary variables. Among these, we select instances that are not too easy to solve but for which it is relatively easy to find a feasible solution. Specifically, we filter out those that BnB can solve optimally within 3 hours (too easy) or for which BnB cannot find any solution within 10 minutes (too hard), which gives us 35 instances. For all LNS approaches, we run BnB for 10 minutes to find the initial solution and set the time limit to 10 minutes for each repair operation. The initial neighborhood size $k_0$ is set to 20\% of the number of binary variables. We compare \LBRELAX, \LBRELAXRR and \LBRELAXS with the non-ML baselines. We further filter out 4 instances on which no approach can find a better solution than the initial one, which finally gives us 31 instances. Figure \ref{res::winrateMIPLIB} shows the best performing rate as a function of time for each approach on the 31 instances. The best performing rate at a time $q$ for an approach is the fraction of instances on which it achieves the best performance (including ties) compared to all approaches in the portfolio. \LBRELAX, \LBRELAXRR and \LBRELAXS achieve the best performance within 1,000 seconds on 25, 23 and 24 of the 31 instances, respectively. \LBRELAXRR has the highest best performing rates at different time cutoffs and ties with BnB on 14 instances at the 60-minute mark. Figure \ref{res::winrateMIPLIB} also shows the survival rate over the 31 instances as a function of time to meet the primal gap threshold of 0.50\%. It demonstrates that \RANDOM, \GRAPH and BnB are competitive with our approaches, but overall \LBRELAXRR has the highest survival rate over time. On some instances, \LBRELAX and its variants significantly outperform the baselines, and we show the anytime performance on those in the Appendix. \section{Conclusion} In this paper, we focused on designing effective and efficient destroy heuristics for selecting neighborhoods in LNS for ILPs. LB is an effective destroy heuristic but is slow to run. We therefore proposed \LBRELAX, \LBRELAXS and \LBRELAXRR to approximate LB's decisions by solving its LP relaxation, which is much faster. Empirically, we showed that \LBRELAX, \LBRELAXS and \LBRELAXRR efficiently select neighborhoods almost as effective as LB's and achieve state-of-the-art performance when compared against non-ML and ML approaches. One limitation of our approaches is that they do not work well on some problem domains; however, we showed that they still outperformed the baselines on 14 to 25 (depending on the time cutoff) out of 31 difficult MIPLIB instances that are diverse in problem domains, structures and sizes. The other limitation is that they can get stuck at local minima. To address this issue, we proposed techniques to randomize the heuristics and adaptively adjust the neighborhood sizes. For future work, one could improve \LBRELAX and its variants to make them applicable to more problem domains. In addition, instead of using hard-coded rules for scheduling the randomized heuristic in \LBRELAXRR, one could use adaptive LNS to select the destroy heuristic to run. 
Developing theoretical results that support and explain the effectiveness of \LBRELAX, \LBRELAXS, \LBRELAXRR and possibly their further variants is also left for future work. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} \raggedbottom A multi-agent system (MAS) is made up of multiple independently operated autonomous agents that can work together as a group through communication. The state consensus problem is a fundamental issue in the cooperative control of MASs. This problem is concerned with the synthesis of a distributed consensus protocol that drives the desired states of all the agents to a common value. To reach consensus with the distributed protocol, each agent must be able to access the states of its neighboring agents via a communication network or sensing devices. This type of MAS with communicating agents is modeled using the dynamics of each agent, a communication protocol that describes the interaction among the agents, and a graph that represents the interconnection topology between agents. The potential applications of consensus are in flocking, formation control, oscillation synchronization, firefighting, multi-agent rendezvous, and satellite reconfiguration. It is worth noting that some of the practical applications of consensus have issues. For example, in the consensus of mobile agents, the communication topologies between agents need to switch between several fixed topologies from time to time due to the finite communication radius as well as the possible presence of obstacles between agents. Furthermore, the number of agents in some applications, such as firefighting, can change over time as some agents are relieved or new agents join the existing network depending upon the workload. In this paper, the removal of agents from, and the addition of agents to, an existing network of agents are referred to as \textit{attrition} and \textit{inclusion} of agents, respectively. Moreover, the model of each agent is of higher order and could possess parametric as well as unmodeled-dynamics uncertainties. Considering all the aforementioned issues, some practical applications require a single distributed robust consensus protocol that can handle a varying number of higher-order agents, switching topologies, and model uncertainties. \par Concerning the cooperative control of MASs with attrition of agents, a cooperative relay tracking strategy is developed in \cite{dong} to ensure successful tracking even when a second-order agent quits tracking due to malfunction. For multi-agent tracking systems that are subjected to agent failure followed by agent replacement, a modified nonsingular terminal sliding mode control scheme and event-triggered coordination strategies were proposed in \cite{lijing}. In \cite{lululi}-\cite{Jianhua}, consensus recovery methods are proposed to compensate for the undesirable effects caused by the removal of agents while retaining the consensus property. The consensus problem for high-order MASs with switching topologies and time-varying delays is studied in \cite{cui}. Here, the consensus problem is converted into an $L_2$-$L_\infty$ control problem employing the tree-type transformation approach. Also, consensus with the prescribed $L_2$-$L_\infty$ performance is ensured through sufficient conditions that are derived using linear matrix inequalities (LMIs). In \cite{liu}, the consensus problem of MASs with switching topologies is transformed into an $H_\infty$ control problem. A sufficient condition is derived in terms of LMIs to ensure consensus of the MAS. Following this, a distributed dynamic output feedback protocol is developed, where the system matrix of the protocol is designed by solving two LMIs. 
Moreover, a distributed algorithm is developed in \cite{deyuan} using an iterative learning rule for the consensus tracking control of MASs with switching topologies and disturbances. LMI-based necessary and sufficient conditions for the convergence of the consensus tracking objectives are also presented. Further solutions to the consensus problem with switching topologies employing LMIs can be found in \cite{qin}-\cite{peng}. The consensus problems of MASs with fixed/switching topologies and time delays are discussed in \cite{olftai}. In this paper, a Lyapunov-function-based disagreement function is used to study the convergence characteristics of consensus protocols. In \cite{saboori}, sufficient conditions to design a distributed protocol for the consensus of identical linear time-invariant (LTI) MASs subjected to bounded external disturbances, switching topologies, and a directed communication network graph are proposed. These conditions are based on the $L_2$ gain and RMS-bounded disturbances. Lyapunov stability theory is then used to investigate the stability characteristics of the proposed controllers. Valcher et al. \cite{maria} describe each agent of the MAS using a single-input stabilizable state-space model and then investigate the consensus problem under arbitrary switching for identical MASs with switching communication topology. Also, the consensusability of this MAS is illustrated by constructing a common quadratic positive definite Lyapunov function describing the evolution of the disagreement vector for the switched system. Wen et al. \cite{Guanghui} discuss the distributed $H_\infty$ consensus problem of MASs with higher-order linear dynamics and switching directed topologies. It is demonstrated there that if the protocol's feedback gain matrix is properly designed and the coupling strength among neighboring agents is greater than a derived positive value, then the distributed $H_\infty$ consensus problem can be solved. The exponential state consensus problem for hierarchical multi-agent dynamical systems with switching topology and inter-layer communication delay is addressed in \cite{zhaoxia}. In this paper, the stability theory of switched systems and the graph theory of hierarchical network topology are utilized to derive sufficient conditions for accomplishing exponential hierarchical average consensus. The robust consensus of linear MASs with agents subject to heterogeneous additive stable perturbations is addressed in \cite{xianwei}. To design dynamic output-feedback protocols, two methods based on an algebraic Riccati equation and some scalar/matrix inequalities are proposed. Moreover, in \cite{yangliu}, a sufficient condition in terms of LMIs is derived for the robust $H_\infty$ consensus control of MASs with model parameter uncertainties and external disturbances. Further, the traditional $H_\infty$ controller design is utilized in \cite{plin} to design a consensus protocol for a MAS with second-order dynamics that is subjected to parameter uncertainties and external disturbances. Also, the asymptotic convergence of the agents along with the desired $H_\infty$ performance is assured through suitable sufficient conditions. For a class of second-order multi-agent dynamic systems with disturbances and unmodeled agent dynamics, continuous distributed consensus protocols that enable global asymptotic consensus tracking are designed in \cite{Guoqiang}. 
These protocols are developed with the help of an identifier that estimates the unknown disturbances and unmodeled agent dynamics.\par The determination of a single controller that stabilizes a finite number of systems is referred to as the simultaneous stabilization (SS) problem \cite{Saek}-\cite{vidya}. The SS problem for more than two systems has no closed-form solution due to its NP-hardness \cite{Gever}-\cite{onur}. Hence, iterative algorithms are utilized to solve the SS problem. For example, an LMI-based iterative algorithm is developed in \cite{CAO1}, and a bi-level optimization-based decomposition strategy is utilized in \cite{perez}, to solve the SS problem. The SS problem is solved in \cite{saif}-\cite{jinjgcd} by first determining a sufficient condition for the existence of the simultaneously stabilizing controller. Then, this condition is satisfied using a robust stabilizing controller that is synthesized around the central plant (system). In \cite{saif}, the central plant is obtained by solving a 2-block optimization problem. In \cite{jinsmc}-\cite{jinjgcd}, however, the central plant is identified using the maximum $\nu$-gap metric of the systems that require SS. The definitions of this $\nu$-gap metric-based central plant and of the maximum $\nu$-gap metric of the systems are given in Section \ref{PL}. It is shown in \cite{huy}-\cite{yuebing} that the state consensus problem of a MAS with $N$ identical LTI agents can be expressed as the SS problem of $N-1$ independent systems. In these papers, the consensus problem of MASs has been studied using an LMI-based SS approach. One needs to note that the methods discussed in the preceding articles do not generate a single distributed consensus protocol that achieves consensus of a MAS with attrition and inclusion of higher-order uncertain agents and switching topologies.\par In this paper, the existing MAS is assumed to have $N$ agents. From this MAS, either \textit{attrition} of $P$ agents or \textit{inclusion} of $M$ agents at a time is considered. Subsequently, the problem of finding a single robust distributed dynamic state-feedback consensus protocol that achieves consensus of a MAS under \textit{attrition} and \textit{inclusion} of LTI higher-order uncertain homogeneous agents and switching topologies is stated as the Robust Attrition-Inclusion (RAI) consensus problem. Also, the protocol that solves the RAI consensus problem is referred to as the Robust Attrition-Inclusion Distributed Dynamic (RAIDD) consensus protocol. In addition, the actual dynamics of every agent considered here is uncertain. The nominal linear dynamics of each agent is identical. Moreover, the uncertainty in the actual linear dynamics of each agent is also assumed to be homogeneous. This uncertainty is represented by bounded perturbations in the system and input matrices of the state-space model of the nominal linear dynamics. In this article, the RAIDD consensus protocol is synthesized in two steps. In the first step, the sufficient condition for the existence of this protocol is obtained using the $\nu$-gap metric-based SS method. Next, if the sufficient condition is satisfied, the RAIDD consensus protocol is attained using the Glover-McFarlane robust stabilization method presented in \cite{glover2}. The main contributions of this article are the following. 
\begin{enumerate} \item To the best of the authors' knowledge, this is the first paper to propose a method for generating a distributed consensus control that accomplishes consensus of a MAS with a varying number of higher-order agents, switching topologies, and model uncertainties. \item The sufficient condition for the existence of the RAIDD consensus protocol, which depends on the Hankel norm of the right coprime factors of the $\nu$-gap metric-based central plant and on its maximum $\nu$-gap metric, is developed using the $\nu$-gap metric-based SS method. \item A RAIDD consensus protocol is developed for the RAI consensus of four unmanned underwater vehicles (UUVs), with attrition and inclusion of one UUV. The performance of this protocol is validated by numerical simulations. \end{enumerate} The rest of this paper is organized as follows. The preliminaries and notation are given in Section \ref{PL}. In Section \ref{PS}, the problem statement is presented. The synthesis of the RAIDD consensus protocol is described in Section \ref{PR}. The simulation results are discussed in Section \ref{SR}. In Section \ref{CL}, conclusions are summarized.\par \section{Preliminaries and Notation}\label{PL} This section introduces various definitions and notations, as well as some fundamental concepts of graph theory and $\nu$-gap metric-based simultaneous stabilization. \subsection{Notation} In this paper, $\mathbb{R}$, $\mathbb{R}_{\geq 0}$, $\mathbb{R}^{n}$, and $\mathbb{R}^{n \times m}$ denote the set of real numbers, the set of non-negative real numbers, the set of $n$-column real vectors, and the set of all real matrices of dimension $n \times m$, respectively. $\mathbb{N}$ is the set of natural numbers, $\mathbb{N}^{+}$ is the set of positive natural numbers (excluding zero), and $\mathbb{N}_{a}^{b}$ is the set of natural numbers from $a$ to $b$ ($\mathbb{N}_{a}^{b}=\{a,\dots,b\}$, $a,b \in \mathbb{N}$, $a < b$). $n_{C_2}$ denotes $n(n-1)/2$. $A \bigotimes B$ indicates the Kronecker product of the matrices $A$ and $B$. $\mathcal{C}_-$ symbolizes the open left half of the complex plane. The superscript `$T$' denotes the matrix/vector transpose. $A=(a_{ij}) \in \mathbb{R}^{n \times m}$ denotes a real matrix with $n$ rows and $m$ columns, where $a_{ij}$ is the element at the $i$th row and $j$th column of the matrix. The zero matrix with $n$ rows and $m$ columns is represented by $\mathbf{0}_{n,m}$. $\max\{\cdot\}$ and $\min\{\cdot\}$ symbolize the maximum and the minimum element of a set, respectively. $\lambda_{max}[A]$ represents the largest eigenvalue of the matrix $A$. $\mathcal{RH_\infty}$ is the set of all proper and stable rational transfer function matrices. Likewise, $\mathcal{R}$ is the set of all proper rational transfer function matrices. For a transfer function matrix $\mathbf{P}(s)$, its $H_\infty$ norm, determinant, Hankel norm, and winding number are denoted by $\parallel\mathbf{P}(s)\parallel_\infty$, det$(\mathbf{P}(s))$, $\parallel\mathbf{P}(s)\parallel_H$, and wno$(\mathbf{P}(s))$, respectively. Besides these, $\mathbf{P}(s)=:(A,B,C,D)$ denotes the short form of $\mathbf{P}(s):=C(s\mathbf{I}-A)^{-1}B+D$. Also, $\mathbf{P}(s)^*$ represents $\mathbf{P}^T(-s)$. \subsection{Definitions} \newtheorem{defn}{Definition}[section] \begin{defn} \textit{Generalized stability margin:}~~{\normalfont Consider a closed-loop (CL) system, $[\mathbf{P}(s), \mathbf{K}]$, with the controller $\mathbf{K}$.
Then, the generalized stability margin, $b_{\mathbf{P},\mathbf{K}}$ $\in$ $[0, 1]$, is defined as \cite{steele} \begin{equation} b_{\mathbf{P},\mathbf{K}}= \begin{cases} {||\Upsilon||^{-1}_\infty} &\text{if}~ [\mathbf{P}(s), \mathbf{K}]~ \text{is internally stable}\\ 0&\text{otherwise} \end{cases} \label{eq:RCF1_1_1} \end{equation} where $\Upsilon=\left[ \begin{array}{c} {\mathbf{P}(s)} \\{\mathbf{I}} \end{array} \right ]\left(\begin{array}{c} {\mathbf{I-KP}(s)} \end{array} \right )^{-1}\left[ \begin{array}{cc} {\mathbf{-I}} & {\mathbf{K}} \end{array} \right ]$. } \end{defn} \begin{defn} \textit{$\nu$-gap metric:}~~{\normalfont Consider two systems, $\mathbf{P}_1(s)$ and $\mathbf{P}_2(s)$. Let $[\mathbf{N}_1(s) \in \mathcal{RH_\infty}, \mathbf{M}_1(s) \in \mathcal{RH_\infty}]$ and $[\widetilde{\mathbf{N}}_1(s) \in \mathcal{RH_\infty}, \widetilde{\mathbf{M}}_1(s) \in \mathcal{RH_\infty}]$ be the right and left coprime factors of $\mathbf{P}_1(s)$, respectively. Likewise, let $[\mathbf{N}_2(s) \in \mathcal{RH_\infty}, \mathbf{M}_2(s) \in \mathcal{RH_\infty}]$ and $[\widetilde{\mathbf{N}}_2(s) \in \mathcal{RH_\infty}, \widetilde{\mathbf{M}}_2(s) \in \mathcal{RH_\infty}]$ be the right and left coprime factors of $\mathbf{P}_2(s)$, respectively. Then, the $\nu$-gap metric, $\delta_\nu(\mathbf{P}_1(j\omega),\mathbf{P}_2(j\omega)) \in [0,1]$, of $\mathbf{P}_1(s)$ and $\mathbf{P}_2(s)$ is defined as \cite{82} \begin{equation} \delta_\nu(\mathbf{P}_1(j\omega),\mathbf{P}_2(j\omega)) = \begin{cases} \parallel\Phi(\mathbf{P}_1(j\omega),\mathbf{P}_2(j\omega))\parallel_\infty &\mbox{if } \text{det}\,\Theta(j\omega) \neq 0~\forall~\omega\\ &\quad \text{and}~\text{wno}~\text{det}(\Theta(j\omega)) = 0\\ 1 & \mbox{otherwise} \end{cases} \label{gapm} \end{equation} \noindent where $\Phi(\mathbf{P}_1(j\omega),\mathbf{P}_2(j\omega))=-\widetilde{\mathbf{N}}_2(s)\mathbf{M}_1(s) + \widetilde{\mathbf{M}}_2(s)\mathbf{N}_1(s)$ and $\Theta(j\omega)=\mathbf{N}^*_2(s)\mathbf{N}_1(s) + \mathbf{M}_2^*(s)\mathbf{M}_1(s)$. } \end{defn} The $\nu$-gap metric measures the distance between two systems; if this distance is close to zero, then any controller that works well with one system will also work well with the other. \begin{defn} \textit{Maximum $\nu$-gap metric of the plant:}~~{\normalfont Let $\mathcal{Q}=\{\mathbf{P}_1(s),\dots,\mathbf{P}_f(s),\dots, \mathbf{P}_\xi(s)\}$ be a finite set of systems. Then, the maximum $\nu$-gap metric of $\mathbf{P}_i(s) \in \mathcal{Q}$, $\epsilon_{i}$, is defined as \cite{jinjgcd} \begin{equation} \begin{split} \epsilon_{i}=\max\big\{&\delta_\nu\big(\mathbf{P}_{i}(j\omega),\mathbf{P}_f(j\omega)\big)~\big|~ \mathbf{P}_{i}(s), \mathbf{P}_f(s) \in \mathcal{Q}~~ \forall~f \in \mathbb{N}_1^\xi \big\}. \end{split} \label{epix} \end{equation} } \end{defn} \begin{defn} \textit{$\nu$-gap metric-based central plant of $\mathcal{Q}$:}~~{\normalfont The $\nu$-gap metric-based central plant of $\mathcal{Q}$ is defined as the system whose maximum $\nu$-gap metric is the smallest among the maximum $\nu$-gap metrics of all the plants of $\mathcal{Q}$ \cite{jinjgcd}.} \end{defn} This definition implies that the $\nu$-gap metric-based central plant is the system that is closest, in terms of the $\nu$-gap metric, to all other systems belonging to $\mathcal{Q}$. In this paper, the $\nu$-gap metric-based central plant of a set and any parameters associated with it are denoted by the subscript `$cp$'. \begin{defn} \textit{Hankel norm:}~~{\normalfont Consider a stable system, $\mathbf{P}(s)$.
Then, the Hankel norm of $\mathbf{P}(s)$ is given by \begin{equation} \parallel \mathbf{P}(s) \parallel_H=\sqrt{\lambda_{max}[W_cW_o]} \label{hnorm} \end{equation} where $W_c$ and $W_o$ are the controllability and observability Gramians of $\mathbf{P}(s)$, respectively, and $\lambda_{max}[W_cW_o]$ is the largest eigenvalue of $W_cW_o$. } \end{defn} \subsection{Graph Theory for Formulating the RAI Consensus Problem} The primary focus of this paper is the synthesis of a RAIDD consensus protocol that achieves consensus of a MAS with $N \in \mathbb{N}$, $(N-P) \in \mathbb{N}$, and $(N+M) \in \mathbb{N}$ agents as well as switching topologies. The communication network topologies among these agents are represented using Undirected Simple (US) graphs, which have no loops or parallel edges. The numbers of US graphs that can be formed with $N$, $N-P$, and $N+M$ nodes are $2^{N_{C_2}} < \infty$, $2^{(N-P)_{C_2}} < \infty$, and $2^{(N+M)_{C_2}} < \infty$, respectively. The graphs of the MAS need to be connected to achieve consensus. Accordingly, let the numbers of Connected Undirected Simple (CUS) graphs of the MAS with $N$, $N-P$, and $N+M$ agents be $k \leq 2^{N_{C_2}}$, $r\leq2^{{(N-P)}_{C_2}}$, and $p\leq2^{{(N+M)}_{C_2}}$, respectively. The CUS graphs of the MAS with $N$, $N-P$, and $N+M$ agents are denoted by $\breve{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^k$, $\grave{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^r$, and $\hat{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^p$, respectively. The node sets of $\breve{\mathcal{G}}_h$, $\grave{\mathcal{G}}_h$, and $\hat{\mathcal{G}}_h$ are $\breve{\mathcal{V}}=\{1,\dots,N\}$, $\grave{\mathcal{V}}=\{1,\dots,N-P\}$, and $\hat{\mathcal{V}}=\{1,\dots,N+M\}$, respectively. Likewise, $\breve{\varepsilon}_h=\{(i,j)~|~i,j~\in~\breve{\mathcal{V}}\}$, $\grave{\varepsilon}_h=\{(i,j)~|~i,j~\in~\grave{\mathcal{V}}\}$, and $\hat{\varepsilon}_h=\{(i,j)~|~i,j~\in~\hat{\mathcal{V}}\}$ are the edge sets of $\breve{\mathcal{G}}_h$, $\grave{\mathcal{G}}_h$, and $\hat{\mathcal{G}}_h$, respectively. The adjacency matrices of $\breve{\mathcal{G}}_h$, $\grave{\mathcal{G}}_h$, and $\hat{\mathcal{G}}_h$ are denoted by $\breve{\mathcal{A}}_h \in \mathbb{R}^{(N \times N)}$, $\grave{\mathcal{A}}_h \in \mathbb{R}^{((N-P) \times (N-P))}$, and $\hat{\mathcal{A}}_h \in \mathbb{R}^{((N+M) \times (N+M))}$, respectively. These matrices are defined as $\breve{\mathcal{A}}_h =(\breve{a}_{h_{ij}})$, $\grave{\mathcal{A}}_h =(\grave{a}_{h_{ij}})$, and $\hat{\mathcal{A}}_h =(\hat{a}_{h_{ij}})$, where $\breve{a}_{h_{ij}}$, $\grave{a}_{h_{ij}}$, and $\hat{a}_{h_{ij}}$ are given by \begin{equation} \breve{a}_{h_{ij}}= \begin{cases} 1 & i \neq j~ \text{and}~ (i,j) \in \breve{\varepsilon}_h\\ 0 & i = j ~\text{or}~ (i,j) \notin \breve{\varepsilon}_h \end{cases} \end{equation} \begin{equation} \grave{a}_{h_{ij}}= \begin{cases} 1 & i \neq j ~\text{and}~ (i,j) \in \grave{\varepsilon}_h\\ 0 & i = j ~\text{or} ~(i,j) \notin \grave{\varepsilon}_h \end{cases} \end{equation} \begin{equation} \hat{a}_{h_{ij}}= \begin{cases} 1 & i \neq j ~\text{and}~ (i,j) \in \hat{\varepsilon}_h\\ 0 & i = j ~\text{or} ~(i,j) \notin \hat{\varepsilon}_h \end{cases} \end{equation} The neighboring sets of the node $i$ of $\breve{\mathcal{G}}_h$, $\grave{\mathcal{G}}_h$, and $\hat{\mathcal{G}}_h$ are denoted by $\mathcal{N}_i(\breve{\mathcal{G}}_{h\in\mathbb{N}_1^k})$, $\mathcal{N}_i(\grave{\mathcal{G}}_{h\in\mathbb{N}_1^r})$, and $\mathcal{N}_i(\hat{\mathcal{G}}_{h\in\mathbb{N}_1^p})$, respectively.
These sets are defined as $\mathcal{N}_i(\breve{\mathcal{G}}_{h\in\mathbb{N}_1^k})=\{j~|~(i,j)~\in~\breve{\varepsilon}_h\}$, $\mathcal{N}_i(\grave{\mathcal{G}}_{h\in\mathbb{N}_1^r})=\{j~|~(i,j)~\in~\grave{\varepsilon}_h\}$, and $\mathcal{N}_i(\hat{\mathcal{G}}_{h\in\mathbb{N}_1^p})=\{j~|~(i,j)~\in~\hat{\varepsilon}_h\}$. Now, the graph Laplacian matrices of $\breve{\mathcal{G}}_h$, $\grave{\mathcal{G}}_h$, and $\hat{\mathcal{G}}_h$ are defined as $\breve{L}_h =(\breve{l}_{h_{ij}}) \in \mathbb{R}^{(N \times N)}$, $\grave{L}_h =(\grave{l}_{h_{ij}}) \in \mathbb{R}^{((N-P) \times (N-P))}$, and $\hat{L}_h =(\hat{l}_{h_{ij}}) \in \mathbb{R}^{((N+M) \times (N+M))}$, respectively. Here, $\breve{l}_{h_{ij}}$, $\grave{l}_{h_{ij}}$, and $\hat{l}_{h_{ij}}$ are given by \begin{equation} \breve{l}_{h_{ij}}= \begin{cases} \Sigma_{j \in \mathcal{N}_i(\breve{\mathcal{G}}_{h\in\mathbb{N}_1^k})}\breve{a}_{h_{ij}} & i = j\\ -\breve{a}_{h_{ij}} & i \neq j \end{cases} \end{equation} \begin{equation} \grave{l}_{h_{ij}}= \begin{cases} \Sigma_{j \in \mathcal{N}_i(\grave{\mathcal{G}}_{h\in\mathbb{N}_1^r})}\grave{a}_{h_{ij}} & i = j\\ -\grave{a}_{h_{ij}} & i \neq j \end{cases} \end{equation} \begin{equation} \hat{l}_{h_{ij}}= \begin{cases} \Sigma_{j \in \mathcal{N}_i(\hat{\mathcal{G}}_{h\in\mathbb{N}_1^p})}\hat{a}_{h_{ij}} & i = j\\ -\hat{a}_{h_{ij}} & i \neq j \end{cases} \end{equation} Note that all row sums of the Laplacian matrices are zero; hence, zero is an eigenvalue of these matrices with eigenvector $\mathbf{1}=[1,1,\dots,1]^T$. Besides this, the ranks of $\breve{L}_h$, $\grave{L}_h$, and $\hat{L}_h$ are $N-1$, $N-P-1$, and $N+M-1$, respectively, as the graphs associated with these matrices are undirected and connected. Moreover, these matrices are real, symmetric, and positive semi-definite. In this case, the zero eigenvalue of $\breve{L}_h$, $\grave{L}_h$, and $\hat{L}_h$ has multiplicity one, and their nonzero eigenvalues can be ordered increasingly as $0 < \breve{\lambda}_{h_2} \leq \breve{\lambda}_{h_3} \leq \dots \leq \breve{\lambda}_{h_N}$, $0 < \grave{\lambda}_{h_2} \leq \grave{\lambda}_{h_3} \leq \dots \leq \grave{\lambda}_{h_{N-P}}$, and $0 < \hat{\lambda}_{h_2} \leq \hat{\lambda}_{h_3} \leq \dots \leq \hat{\lambda}_{h_{N+M}}$, respectively.\par \subsection{$\nu$-gap Metric-Based Simultaneous Stabilization for Solving the RAI Problem}\label{SSCP} The $\nu$-gap metric-based SS method proposed in \cite{jinsmc} and \cite{jinjgcd} serves as the foundation for synthesizing the RAIDD consensus protocol. To explain this method, let $\mathbf{P}_{cp}(s) \in \mathcal{Q}$ be the $\nu$-gap metric-based central plant of $\mathcal{Q}$ and $\epsilon_{cp}$ be the maximum $\nu$-gap metric of $\mathbf{P}_{cp}(s)$. The concept of the $\nu$-gap metric-based SS method is stated as follows. \begin{itemize} \item A single controller can provide similar CL characteristics to all the plants in $\mathcal{Q}$ if that controller is synthesized around $\mathbf{P}_{cp}(s)$. This is because $\mathbf{P}_{cp}(s)$ is closest to all other plants belonging to $\mathcal{Q}$, as $\epsilon_{cp}$ is the smallest among the maximum $\nu$-gap metrics of all the plants of $\mathcal{Q}$. \item The possibility of providing similar CL characteristics to all the plants in $\mathcal{Q}$ with a stabilizing controller of $\mathbf{P}_{cp}(s)$ increases as $\epsilon_{cp}$ approaches zero.
\end{itemize} In the $\nu$-gap metric-based SS method, the simultaneously stabilizing controller is generated by solving a sufficient condition that depends on $\mathbf{P}_{cp}(s)$ and $\epsilon_{cp}$. This condition is given as \cite{jinjgcd} \begin{align} b_{\mathbf{P}_{cp},\mathbf{K}}>\epsilon_{{cp}} \label{sscondk1x} \end{align} where $b_{\mathbf{P}_{cp},\mathbf{K}}$ and $\epsilon_{{cp}}$ are given by (\ref{eq:RCF1_1_1}) and (\ref{epix}), respectively. The identification of $\mathbf{P}_{cp}(s)$ and $\epsilon_{{cp}}$ is required for solving \eqref{sscondk1x}. This is accomplished by following the steps given below. \begin{itemize} \item \textbf{Step~1:} Find the maximum $\nu$-gap metrics of all the plants of $\mathcal{Q}$ using (\ref{epix}). \item \textbf{Step~2:} Identify the smallest value among the maximum $\nu$-gap metrics. This value gives $\epsilon_{{cp}}$, and the plant associated with it is $\mathbf{P}_{cp}(s)$. \end{itemize} For more details about the $\nu$-gap metric-based SS method, one may refer to the supporting material of \cite{jinsmc}.
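To make these two steps concrete, the following minimal Python sketch identifies the central plant from a precomputed matrix of pairwise $\nu$-gap metrics. Computing the metrics themselves is assumed to be done elsewhere (e.g., with a robust-control toolbox), and the numerical entries below are purely hypothetical.
\begin{verbatim}
import numpy as np

def central_plant(delta):
    # delta[i, f] = nu-gap metric between plants i and f of Q
    # (symmetric with zero diagonal), computed beforehand.
    eps = delta.max(axis=1)  # Step 1: maximum nu-gap metric of each plant
    cp = int(eps.argmin())   # Step 2: smallest maximum nu-gap metric
    return cp, eps[cp]

# Hypothetical pairwise metrics for a set of four plants:
delta = np.array([[0.00, 0.30, 0.50, 0.40],
                  [0.30, 0.00, 0.20, 0.35],
                  [0.50, 0.20, 0.00, 0.45],
                  [0.40, 0.35, 0.45, 0.00]])
cp, eps_cp = central_plant(delta)
print(cp, eps_cp)  # -> 1 0.35 (the second plant is the central plant)
\end{verbatim}
If two plants attain the same smallest maximum $\nu$-gap metric, this sketch simply returns the first one; either choice serves as a valid central plant.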
\section{Problem Statement}\label{PS} The RAI consensus problem is formulated in this section. For this purpose, consider a MAS of identical but uncertain higher-order agents. The uncertainty in the actual dynamics of these agents is also identical and parametric in nature. The nominal and actual dynamics of these agents are described by LTI systems. Following this, let $\mathbf{P}_i(s)$ be the transfer function of the nominal dynamics of the $i$th agent of the MAS, and assume that all the states of $\mathbf{P}_i(s)$ are measured. The state-space form of $\mathbf{P}_i(s)$ is then given as \begin{equation} \mathbf{P}_i(s): \begin{cases} \mathbf{\dot{x}}_i=&A\mathbf{x}_i+B\mathbf{u}_i\\ \mathbf{y}_i=&C\mathbf{x}_i \end{cases} \label{eq:1} \end{equation} where $\mathbf{x}_i$ $\in$ $\mathbb{R}^{n}$, $\mathbf{u}_i$ $\in$ $\mathbb{R}^{m}$, $\mathbf{y}_i$ $\in$ $\mathbb{R}^{n}$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C=\mathbf{I}$ $\in$ $\mathbb{R}^{n \times n}$ represent the state vector, input vector, output vector, system matrix, input matrix, and output matrix of the $i$th agent, respectively. Moreover, $n$ and $m$ are the numbers of states and inputs of the $i$th agent, respectively. Furthermore, $A$ is not Hurwitz and $(A, B)$ is stabilizable. Let $\mathbf{\bar{P}}_i(s)$ be the transfer function of the actual (perturbed) dynamics of the $i$th agent. The state-space form of $\mathbf{\bar{P}}_i(s)$ is given as \begin{equation} \mathbf{\bar{P}}_i(s): \begin{cases} \mathbf{\dot{x}}_i=&\bar{A}\mathbf{x}_i+\bar{B}\mathbf{u}_i\\ \mathbf{y}_i=&C\mathbf{x}_i \end{cases} \label{eq:2} \end{equation} where $\bar{A}=A+\Delta A~\in~\mathbb{R}^{n \times n}$ and $\bar{B}=B+\Delta B~\in~\mathbb{R}^{n \times m}$ are the system and input matrices, respectively. Here, $\Delta A~\in~\mathbb{R}^{n \times n}$ and $\Delta B~\in~\mathbb{R}^{n \times m}$ are the perturbations of the $A$ and $B$ matrices of the nominal dynamics, respectively, due to the parametric uncertainties in the system dynamics. When $\Delta A=\mathbf{0}_{n,n}$ and $\Delta B =\mathbf{0}_{n,m}$, \eqref{eq:2} reduces to the nominal dynamics of the $i$th agent.\par In this paper, we consider three operational scenarios for the consensus of the MAS with \textit{attrition} and \textit{inclusion} of agents. These scenarios are the following. \begin{enumerate} \item \textbf{Scenario 1:} There is no \textit{attrition} or \textit{inclusion} of agents. The MAS operates with a fixed number of agents, say $N \in \mathbb{N}_2^O$ agents, over the whole operating time. \item \textbf{Scenario 2:} The MAS begins its operation with $N$ agents, and $P \in \mathbb{N}^{+}$ agents are later removed at a given time instant. Consequently, the number of agents is not constant and varies between $N$ and $N-P$ over the course of the operation. Note that the maximum value of $P$ is $N-2$. \item \textbf{Scenario 3:} The MAS commences its operation with $N$ agents. Afterward, at a given time instant, $M \in \mathbb{N}^{+}$ agents are included in the MAS. In this case, the number of agents varies between $N$ and $N+M$. Note that the dynamics of the $M$ added agents are identical to those of the $N$ existing agents. \end{enumerate} Moreover, the communication network topologies between the agents of the MAS are allowed to switch from time to time, provided that the network graph remains connected. We also assume that the graphs of the communication network topologies formed after the \textit{attrition} or \textit{inclusion} of agents remain connected. Consequently, when the MAS with \begin{enumerate} \item $N$ agents operates over a time interval, the CUS graphs and the corresponding Laplacian matrices, which belong to the sets $\{\breve{\mathcal{G}}_1, \dots,\breve{\mathcal{G}}_h,\dots,\breve{\mathcal{G}}_k\}$ and $\{\breve{\mathcal{L}}_1, \dots,\breve{\mathcal{L}}_h,\dots,\breve{\mathcal{L}}_k\}$, respectively, switch at desired time instants. \item $N-P$ agents operates over a time interval, the CUS graphs and the corresponding Laplacian matrices, which belong to the sets $\{\grave{\mathcal{G}}_1, \dots,\grave{\mathcal{G}}_h,\dots,\grave{\mathcal{G}}_r\}$ and $\{\grave{\mathcal{L}}_1, \dots,\grave{\mathcal{L}}_h,\dots,\grave{\mathcal{L}}_r\}$, respectively, switch at desired time instants. \item $N+M$ agents operates over a time interval, the CUS graphs and the corresponding Laplacian matrices, which belong to the sets $\{\hat{\mathcal{G}}_1, \dots,\hat{\mathcal{G}}_h,\dots,\hat{\mathcal{G}}_p\}$ and $\{\hat{\mathcal{L}}_1, \dots,\hat{\mathcal{L}}_h,\dots,\hat{\mathcal{L}}_p\}$, respectively, switch at desired time instants. \end{enumerate} In view of the aforementioned three scenarios and switching topologies, the RAIDD consensus protocol needs to achieve consensus of the MAS with $N$, $N-P$, and $N+M$ uncertain agents that are connected using the communication network topologies whose graphs are $\breve{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^k$, $\grave{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^r$, and $\hat{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^p$, respectively. Now, consider the distributed dynamic protocol, $\mathbf{K}_i(s)$, whose state-space form is given by \begin{equation} \mathbf{K}_i(s): \begin{cases} \mathbf{\dot{v}}_i=K_A\mathbf{v}_i+K_B \mathbf{\delta}_i \\ \mathbf{u}_i=K_C\mathbf{v}_i+K_D \mathbf{\delta}_i \end{cases} \label{eq:3} \end{equation} where $\mathbf{v}_i$, $\mathbf{\delta}_i$, $K_A$, $K_B$, $K_C$, and $K_D$ are the state vector, input vector, system matrix, input matrix, output matrix, and feed-forward matrix of appropriate dimensions, respectively.
Here, $\mathbf{\delta}_i$ is defined as \begin{align} \mathbf{\delta}_i= \begin{cases} \Sigma_{j \in \mathcal{N}_i(\breve{\mathcal{G}}_{h\in\mathbb{N}_1^k})}\breve{a}_{h_{ij}}(\mathbf{x}_i-\mathbf{x}_j) &\text{if}~\psi=N\\ \Sigma_{j \in \mathcal{N}_i(\grave{\mathcal{G}}_{h\in\mathbb{N}_1^r})}\grave{a}_{h_{ij}}(\mathbf{x}_i-\mathbf{x}_j) &\text{if}~\psi=N-P\\ \Sigma_{j \in \mathcal{N}_i(\hat{\mathcal{G}}_{h\in\mathbb{N}_1^p})}\hat{a}_{h_{ij}}(\mathbf{x}_i-\mathbf{x}_j) &\text{if}~\psi=N+M\\ \end{cases} \end{align} where $\psi \in \mathbb{N}_2^O$ ($2 < O \in \mathbb{N} < \infty$) is the number of agents of the MAS during its operating time. Let $\mathbf{x}_a$=$[\mathbf{x}_1^T,\dots,\mathbf{x}_N^T]^T$, $\mathbf{x}_b$=$[\mathbf{x}_1^T,\dots,\mathbf{x}_{N-P}^T]^T$, $\mathbf{x}_c$=$[\mathbf{x}_1^T,\dots,\mathbf{x}_{N+M}^T]^T$, $\mathbf{v}_a$=$[\mathbf{v}_1^T,\dots,\mathbf{v}_N^T]^T$, $\mathbf{v}_b$=$[\mathbf{v}_1^T,\dots,\mathbf{v}_{N-P}^T]^T$, and $\mathbf{v}_c$=$[\mathbf{v}_1^T,\dots,\mathbf{v}_{N+M}^T]^T$. Then, the CL systems of the MAS with switching topologies are obtained by interconnecting the agents, whose dynamics are given in \eqref{eq:2}, using \eqref{eq:3}. The CL systems associated with $N$, $N-P$, and $N+M$ uncertain agents are given by \begin{equation} \begin{aligned} \begin{bmatrix} \mathbf{\dot{x}}_a \\ \mathbf{\dot{v}}_a \end{bmatrix} =\begin{bmatrix} \mathbf{I}_N \bigotimes \bar{A} + \breve{\mathcal{L}}_h \bigotimes \bar{B}K_D& \mathbf{I}_N \bigotimes \bar{B} K_C \\ \breve{\mathcal{L}}_h \bigotimes K_B& \mathbf{I}_N \bigotimes K_A \end{bmatrix}\begin{bmatrix} \mathbf{x}_a \\ \mathbf{v}_a \end{bmatrix}\\~\forall~h~\in~\mathbb{N}_1^k, \end{aligned} \label{eq:4} \end{equation} \begin{equation} \begin{aligned} \begin{bmatrix} \mathbf{\dot{x}}_b \\ \mathbf{\dot{v}}_b \end{bmatrix} =\begin{bmatrix} \mathbf{I}_{N-P} \bigotimes \bar{A} + \grave{\mathcal{L}}_h \bigotimes \bar{B}K_D& \mathbf{I}_{N-P} \bigotimes \bar{B} K_C \\ \grave{\mathcal{L}}_h \bigotimes K_B& \mathbf{I}_{N-P} \bigotimes K_A \end{bmatrix}\begin{bmatrix} \mathbf{x}_b \\ \mathbf{v}_b \end{bmatrix}\\~\forall~h~\in~\mathbb{N}_1^r, \end{aligned} \label{eq:6} \end{equation} and \begin{equation} \begin{aligned} \begin{bmatrix} \mathbf{\dot{x}}_c \\ \mathbf{\dot{v}}_c \end{bmatrix} =\begin{bmatrix} \mathbf{I}_{N+M} \bigotimes \bar{A} + \hat{\mathcal{L}}_h \bigotimes \bar{B}K_D& \mathbf{I}_{N+M} \bigotimes \bar{B} K_C \\ \hat{\mathcal{L}}_h \bigotimes K_B& \mathbf{I}_{N+M} \bigotimes K_A \end{bmatrix}\begin{bmatrix} \mathbf{x}_c \\ \mathbf{v}_c \end{bmatrix}\\~\forall~h~\in~\mathbb{N}_1^p, \end{aligned} \label{eq:8} \end{equation} respectively.
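The following Python sketch (using only \texttt{numpy}) illustrates how the block matrix of \eqref{eq:4} is assembled for one Laplacian, and how consensus can be checked through the reduced systems $(A, \lambda B)$ built from the nonzero Laplacian eigenvalues; this reduction is the equivalence invoked later in the proof of Theorem \ref{thm1} via \cite{huy}. It is a sketch under this paper's notation, not part of the formal development.
\begin{verbatim}
import numpy as np

def closed_loop_matrix(L, Abar, Bbar, KA, KB, KC, KD):
    # Block matrix of the closed loop for one Laplacian L; the
    # N-P and N+M cases differ only in the size of L.
    N = L.shape[0]
    I = np.eye(N)
    top = np.hstack([np.kron(I, Abar) + np.kron(L, Bbar @ KD),
                     np.kron(I, Bbar @ KC)])
    bot = np.hstack([np.kron(L, KB), np.kron(I, KA)])
    return np.vstack([top, bot])

def reduced_loops_hurwitz(lams, A, B, KA, KB, KC, KD):
    # Consensus check via the reduced systems (A, lam*B), where lam
    # runs over the nonzero Laplacian eigenvalues: each reduced
    # closed loop must be Hurwitz.
    for lam in lams:
        Acl = np.block([[A + lam * (B @ KD), lam * (B @ KC)],
                        [KB,                 KA]])
        if np.linalg.eigvals(Acl).real.max() >= 0:
            return False
    return True
\end{verbatim}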
Subsequently, the protocol given in \eqref{eq:3} achieves: \begin{enumerate} \item consensus of the MAS of $N$ uncertain agents with $\breve{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^k$ when the CL systems given in \eqref{eq:4} satisfy \begin{align} \lim_{t \to \infty} (\mathbf{x}_i-\mathbf{x}_j)=0~\forall~i, j~\in~\mathbb{N}_1^N \label{eq:5} \end{align} \item consensus of the MAS of $N-P$ uncertain agents with $\grave{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^r$ when the CL systems given in \eqref{eq:6} satisfy \begin{align} \lim_{t \to \infty} (\mathbf{x}_i-\mathbf{x}_j)=0~\forall~i, j~\in~\mathbb{N}^{N-P}_1 \label{eq:7} \end{align} \item consensus of the MAS of $N+M$ uncertain agents with $\hat{\mathcal{G}}_h~\forall~h~\in~\mathbb{N}_1^p$ when the CL systems given in \eqref{eq:8} satisfy \begin{align} \lim_{t \to \infty} (\mathbf{x}_i-\mathbf{x}_j)=0~\forall~i, j~\in~\mathbb{N}^{N+M}_1 \label{eq:9} \end{align} \end{enumerate} Considering the CL systems given by \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8}, the protocol given by \eqref{eq:3} becomes the RAIDD consensus protocol when it achieves \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively. Therefore, the RAI consensus problem is stated as follows: determine the conditions for the existence of $K_A$, $K_B$, $K_C$, and $K_D$ such that the CL systems given in \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8} satisfy \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively. \section{Synthesis of the Robust Attrition-Inclusion Distributed Consensus Protocol}\label{PR} The sufficient condition for the existence of $K_A$, $K_B$, $K_C$, and $K_D$, as well as the design procedure for the RAIDD consensus protocol, are proposed in this section. To develop this sufficient condition, we define the following. \par \begin{enumerate} \item Let $\mathcal{P}$ be the finite set that contains all the eigenvalues of $\breve{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^k$, $\grave{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^r$, and $\hat{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^p$ except their first eigenvalue, zero. The $i$th element of $\mathcal{P}$ is denoted by $\lambda_{i}$, and the cardinality of $\mathcal{P}$ is $\textbf{\textit{n}}(\mathcal{P})=\xi$, where $\xi=k(N-1)+r(N-P-1)+p(N+M-1)$. Now, $\mathbf{\hat{P}}_{i}(s)~\forall~i \in \mathbb{N}_1^{\xi}$ are defined as \begin{equation} \mathbf{\hat{P}}_{i}(s): \begin{cases} \mathbf{\dot{\hat{x}}}_i=A\mathbf{\hat{x}}_{i}+\lambda_{i} B\mathbf{\hat{u}}_{i}\\ \mathbf{\hat{y}}_{i}=C\mathbf{\hat{x}}_{i};~\forall~i \in \mathbb{N}_1^{\xi} \end{cases} \label{eq:10} \end{equation} where $\mathbf{\hat{x}}_{i} \in \mathbb{R}^n$, $\mathbf{\hat{u}}_{i} \in \mathbb{R}^m$, and $\mathbf{\hat{y}}_{i} \in \mathbb{R}^n$ are the state, input, and output vectors, respectively. Also, $(A, \lambda_{i}B)$ is stabilizable. \item Define $\mathcal{Q}$ as $\mathcal{Q}=\{\mathbf{\hat{P}}_i(s)~|~\mathbf{\hat{P}}_i(s)=:(A,\lambda_{i}B,C,D=\mathbf{0}_{n,m})~\forall~ i \in \mathbb{N}_1^\xi\}$. The maximum $\nu$-gap metric of the $i$th system belonging to $\mathcal{Q}$ is given by \begin{equation} \begin{split} \epsilon_i=&\max\{\delta_{\nu}(\hat{\mathbf{P}}_i(j\omega), \hat{\mathbf{P}}_f(j\omega))~|~\hat{\mathbf{P}}_i(j\omega), \hat{\mathbf{P}}_f(j\omega)~\in \\&\qquad \mathcal{Q}, \forall~f ~\in~\mathbb{N}_1^\xi \}\in [0,1]. \end{split} \label{eq:13} \end{equation} Also, let the central plant of $\mathcal{Q}$ and its maximum $\nu$-gap metric be $\hat{\mathbf{P}}_{cp}(s) \in \mathcal{Q}$ and $\epsilon_{{cp}}$, respectively.
Furthermore, the normalized right coprime factors of $\hat{\mathbf{P}}_{cp}(s)$ are $\hat{\mathbf{N}}_{cp}(s) \in \mathcal{RH_\infty}$ and $\hat{\mathbf{M}}_{cp}(s) \in \mathcal{RH_\infty}$ with $\det(\hat{\mathbf{M}}_{cp}(s)) \neq 0$. \item Let $\mathbf{\acute{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$ be the perturbed systems of $\mathbf{\hat{P}}_i(s) \in {\mathcal{Q}}~\forall~i~\in~\mathbb{N}_1^\xi$, respectively. These systems arise from the perturbations of the $A$ and $B$ matrices in \eqref{eq:10}, which form $\bar{A}=A+\Delta A$ and $\bar{B}=B+\Delta B$. Subsequently, the function $\Psi:\mathbb{R}^{n \times n} \times \mathbb{R}^{n \times m} \rightarrow [0,1]$ is defined for $dom(\Psi)=\{(\Delta A, \Delta B)~|~\Delta A \in \mathbb{R}^{n \times n}, \Delta B \in \mathbb{R}^{n \times m}\}$ as \begin{equation} \begin{split} \Psi=&\max\{\delta_{\nu}(\mathbf{\hat{P}}_{cp}(j\omega),\mathbf{\acute{P}}_i(j\omega))~|~\mathbf{\hat{P}}_{cp}(s)~\in~\mathcal{Q}, \\&\qquad ~\mathbf{\acute{P}}_i(s) =:(\bar{A}, \lambda_{i}\bar{B}, C,D )~\forall~ i \in~\mathbb{N}_1^\xi\}. \end{split} \label{eq:15} \end{equation} Also, note that \eqref{eq:13}, \eqref{eq:15}, and $dom(\Psi)$ indicate that $\Psi=\epsilon_{{cp}}$ when $\Delta A=\mathbf{0}_{n,n}$ and $\Delta B=\mathbf{0}_{n,m}$. \end{enumerate}\par In the following theorem, the sufficient condition for the existence of $K_A$, $K_B$, $K_C$, and $K_D$ is proposed. \newtheorem{theorem}{Theorem}[section] \begin{theorem} There exist $K_A$, $K_B$, $K_C$, and $K_D$ such that the closed-loop systems given in \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8} satisfy \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively, if the condition \begin{equation} \sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}>\Psi \label{h1} \end{equation} is true. \label{thm1} \end{theorem} \begin{proof} \textbf{Case~a}: Let $\Delta A=\mathbf{0}_{n,n}$ and $\Delta B=\mathbf{0}_{n,m}$. In this case, \textit{Proposition 2} of \cite{huy} implies that the CL systems given in \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8} satisfy \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively, if all the systems belonging to $\mathcal{Q}$ can be simultaneously stabilized using a full state feedback controller, $\mathbf{\hat{K}}(s)$, given as \begin{equation} \mathbf{\hat{K}}(s): \begin{cases} \mathbf{\dot{\hat{v}}}&=K_A\mathbf{\hat{v}}+K_B \mathbf{\hat{x}}_{i} \\ \mathbf{\hat{u}}_{i}&=K_C\mathbf{\hat{v}}+K_D \mathbf{\hat{x}}_{i} \end{cases} \label{eq:17} \end{equation} where $\mathbf{\hat{v}}$ is the state vector of appropriate dimension. Consequently, the consensus problem of the MAS with $N$, $N-P$, and $N+M$ agents with nominal dynamics (given in \eqref{eq:1}) is equivalent to the following SS problem: determine a full state feedback controller of the form given in \eqref{eq:17} that simultaneously stabilizes all the systems belonging to $\mathcal{Q}$. Assume that $\mathbf{\hat{P}}_{cp}(s)$ and $\epsilon_{{cp}}$ have been identified, so that the existence condition for the RAIDD consensus protocol can be developed based on the $\nu$-gap metric-based SS method proposed in \cite{jinsmc}-\cite{jinjgcd}.
Then, the sufficient condition for the SS of all the systems belonging to $\mathcal{Q}$ is given as \begin{equation} b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}>\epsilon_{{cp}}. \label{sscondk1} \end{equation} Equation \eqref{eq:RCF1_1_1} implies that a controller must first be available in order to evaluate \eqref{sscondk1}. Hence, \eqref{sscondk1} cannot establish the existence of a simultaneously stabilizing controller without synthesizing a controller. To derive a controller-independent sufficient condition for the SS of all the systems belonging to $\mathcal{Q}$, let $b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}$ be the maximum generalized stability margin of $\mathbf{\hat{P}}_{cp}(s)$. This margin indicates the largest infinity norm of the left/right coprime factor perturbations for which $[\mathbf{\hat{P}}_{cp}(s), \mathbf{\hat{K}}(s)]$ remains stable. From \cite{glover1}, $b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}$ is given by \begin{equation} b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}=\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}. \label{sscondk2} \end{equation} Further, regard $\mathcal{Q}$ as an uncertainty set with $\mathbf{\hat{P}}_{cp}(s) \in \mathcal{Q}$ as its nominal system and the systems belonging to $\mathcal{Q}\setminus\{\mathbf{\hat{P}}_{cp}(s)\}$ as the perturbed systems of $\mathbf{\hat{P}}_{cp}(s)$. Let $\mathbf{\hat{P}}_{f}(s)$ $\in$ $\mathcal{Q} \setminus \{\mathbf{\hat{P}}_{cp}(s)\}$ be a perturbed system of $\mathbf{\hat{P}}_{cp}(s)$, and let $\epsilon_{{}_{{cp}f}}$ be the least upper bound on the normalized right coprime factor perturbations of $\mathbf{\hat{P}}_{cp}(s)$ that form $\mathbf{\hat{P}}_f(s)$. Additionally, let $\mathbf{\Delta}_{\hat{N}_{{cp}f}}(s) \in \mathcal{RH_\infty}^{}$ and $\mathbf{\Delta}_{\hat{M}_{{cp}f}}(s) \in \mathcal{RH_\infty}^{}$ be the normalized right coprime factor perturbations of $\mathbf{\hat{N}}_{cp}(s)$ and $\mathbf{\hat{M}}_{cp}(s)$, respectively. These perturbations satisfy \begin{equation} \big|\big|[\begin{array}{cc}\mathbf{\Delta}_{\hat{N}_{{cp}f}}(s)& \mathbf{\Delta}_{\hat{M}_{{cp}f}}(s)\end{array}]^{T}\big|\big|_\infty\leq\epsilon_{{}_{{cp}f}}. \label{eq:per1} \end{equation} Subsequently, $\mathbf{\hat{P}}_{cp}(s)$ and $\mathbf{\hat{P}}_f(s)$ are defined as \begin{align} \mathbf{\hat{P}}_{cp}(s)&=\mathbf{\hat{N}}_{cp}(s)\mathbf{\hat{M}}_{cp}^{-1}(s)\label{copr}\\ \mathbf{\hat{P}}_f(s)&=\big(\mathbf{\hat{N}}_{cp}(s)+\mathbf{\Delta}_{\hat{N}_{{cp}f}}(s)\big)\big(\mathbf{\hat{M}}_{cp}(s)+ \mathbf{\Delta}_{\hat{M}_{{cp}f}}(s)\big)^{-1}. \label{copr1} \end{align} Now, there always exists a full state feedback controller of the form given in \eqref{eq:17} that stabilizes both $\mathbf{\hat{P}}_{cp}(s)$ and $\mathbf{\hat{P}}_{f}(s)$ when the condition \begin{equation} b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}=\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}>\epsilon_{{}_{cpf}} \label{eq:cond} \end{equation} holds \cite{glover2}. Even though $\mathcal{Q}$ is treated as an uncertainty set, the systems belonging to it are known. Therefore, the inequality given in \eqref{eq:per1} becomes \begin{equation} \epsilon_{{}_{{cp}f}}=\big|\big|[\begin{array}{cc}\mathbf{\Delta}_{\hat{N}_{{cp}f}}(s)& \mathbf{\Delta}_{\hat{M}_{{cp}f}}(s)\end{array}]^{T}\big|\big|_\infty.
\end{equation} Also, the relation between $\delta_\nu(\mathbf{\hat{P}}_{cp}(j\omega),\mathbf{\hat{P}}_{f}(j\omega))$ and the normalized right coprime factor perturbations is given as \cite{feyel} \begin{equation} \delta_\nu(\mathbf{\hat{P}}_{cp}(j\omega),\mathbf{\hat{P}}_{f}(j\omega))=\big|\big|[\begin{array}{cc}\mathbf{\Delta}_{\hat{N}_{{cp}f}}(s)& \mathbf{\Delta}_{\hat{M}_{{cp}f}}(s)\end{array}]^{T}\big|\big|_\infty. \label{eq:per2} \end{equation} Let the largest infinity norm of the right coprime factor perturbations between $\mathbf{\hat{P}}_{cp}(s)$ and the systems belonging to $\mathcal{Q}\setminus\{\mathbf{\hat{P}}_{cp}(s)\}$ be $\epsilon_{{}_{cpf}}^{max}$. Following \eqref{eq:per1} and \eqref{eq:per2}, $\epsilon_{{}_{cpf}}^{max}$ can be written as \cite{jinjgcd} \begin{equation} \begin{split} \epsilon_{{}_{cpf}}^{max}=&\max\{\delta_\nu(\mathbf{\hat{P}}_{cp}(j\omega),\mathbf{\hat{P}}_{f}(j\omega))~|~\mathbf{\hat{P}}_{cp}(s), \mathbf{\hat{P}}_{f}(s) \in \mathcal{Q}, \\ &~\qquad \forall~f \in \mathbb{N}_1^\xi\}. \end{split} \label{eq:per3} \end{equation} Using the definition of the maximum $\nu$-gap metric, \eqref{eq:per3} can be written as \begin{equation} \epsilon_{{}_{cpf}}^{max}=\epsilon_{{cp}}. \label{eq:per4} \end{equation} Substituting \eqref{eq:per4} into \eqref{eq:cond} results in \begin{equation} b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}=\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}> \epsilon_{{cp}}. \label{sscondk3} \end{equation} When \eqref{sscondk3} holds, there always exists a full state feedback controller of the form given in \eqref{eq:17} that simultaneously stabilizes all the plants of $\mathcal{Q}$, since $\epsilon_{{cp}} \geq \epsilon_{{}_{cpf}}~\forall~f~\in~\mathbb{N}_1^\xi$. Hence, there exist $K_A$, $K_B$, $K_C$, and $K_D$ such that the CL systems given in \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8} with $\Delta A=\mathbf{0}_{n,n}$ and $\Delta B=\mathbf{0}_{n,m}$ satisfy \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively. Moreover, it is important to note that validating \eqref{sscondk3} does not require any controller. \par \noindent \textbf{Case b:} Here, let $\Delta A\neq\mathbf{0}_{n,n}$ and $\Delta B\neq\mathbf{0}_{n,m}$. Furthermore, consider two balls of systems, $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max})=\{\mathbf{G}(s)~|~\delta_{\nu}(\mathbf{\hat{P}}_{cp}(s),\mathbf{{G}}(s))<b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max}\}$ and $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), \epsilon_{{cp}})=\{\mathbf{\hat{G}}(s)~|~\delta_{\nu}(\mathbf{\hat{P}}_{cp}(s),\mathbf{\hat{G}}(s))\leq\epsilon_{{cp}}\}$. The $\nu$-gap metric between $\mathbf{\hat{P}}_{cp}(s)$ and every system belonging to $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), \epsilon_{{cp}})$ is less than or equal to $\epsilon_{{cp}}$, which is less than $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}$. Hence, if \eqref{sscondk3} holds, then there exists a full state feedback controller that simultaneously stabilizes all the systems belonging to $\mathcal{Q}$, $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), \epsilon_{{cp}})$, and $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max})$.
Subsequently, the existence of a full state feedback controller that simultaneously stabilizes all the perturbed systems, $\mathbf{\acute{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$, necessitates \begin{align} \mathbf{\acute{P}}_i(s) \in \mathcal{B}(\mathbf{\hat{P}}_{cp}(s), b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max})~\forall~i \in \mathbb{N}_1^\xi. \label{eq:18} \end{align} For \eqref{eq:18} to hold, the $\nu$-gap metrics between $\mathbf{\hat{P}}_{cp}(s)$ and $\mathbf{\acute{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$ need to be less than $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}$, i.e., $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)} > \delta_\nu(\mathbf{\hat{P}}_{cp}(s), \mathbf{\acute{P}}_i(s))~\forall~i \in \mathbb{N}_1^\xi$, which requires the condition given in \eqref{h1} to be true. Note that the conditions given in \eqref{h1} and \eqref{sscondk3} coincide when $\Delta A=\mathbf{0}_{n,n}$ and $\Delta B=\mathbf{0}_{n,m}$, as $(\Delta A=\mathbf{0}_{n,n}, \Delta B=\mathbf{0}_{n,m}) \in dom(\Psi)$. Hence, there exists a full state feedback controller that simultaneously stabilizes all the systems $\mathbf{\hat{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$ and their perturbed systems $\mathbf{\acute{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$ if the condition given in \eqref{h1} holds true. Therefore, $K_A$, $K_B$, $K_C$, and $K_D$ exist such that the closed-loop systems given in \eqref{eq:4}, \eqref{eq:6}, and \eqref{eq:8} satisfy \eqref{eq:5}, \eqref{eq:7}, and \eqref{eq:9}, respectively, if the condition given in \eqref{h1} holds. This establishes the proof. \end{proof} Realizing the sufficient condition given in \eqref{h1} requires $\mathbf{\hat{P}}_{cp}(s)$, $\epsilon_{cp}$, $\Delta A$, and $\Delta B$. Here, $\mathbf{\hat{P}}_{cp}(s)$ and $\epsilon_{cp}$ are identified by following the steps given in Section \ref{SSCP}. Once $\mathbf{\hat{P}}_{cp}(s)$ is identified, $\mathbf{\hat{N}}_{cp}(s)$ and $\mathbf{\hat{M}}_{cp}(s)$ are obtained, and thereafter $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}$ is computed. In this paper, we consider real parameter perturbations; therefore, $\Delta A$ and $\Delta B$ are specified through individual upper and lower bounds on their elements. These bounds define a set of $(\Delta A, \Delta B)$ given by \begin{align} \begin{split} \Xi=&\{( \Delta A,\Delta B)~|~\Delta A \in \mathbb{R}^{n \times n},\Delta B \in \mathbb{R}^{n \times m}, b_{ij}\geq \Delta A_{ij} \\\qquad & \geq -a_{ij}, d_{iw}\geq \Delta B_{iw} \geq -c_{iw}, \forall ~ i,j \in \mathbb{N}_1^n, w \in \mathbb{N}_1^m \} \end{split} \end{align} where $a_{ij} \in \mathbb{R}_{\geq 0}$, $b_{ij} \in \mathbb{R}_{\geq 0}$, $c_{iw} \in \mathbb{R}_{\geq 0}$, and $d_{iw}\in \mathbb{R}_{\geq 0}$. It is possible to compute the value of $\Psi$ associated with each element of $\Xi$ and check whether these values satisfy the condition given in \eqref{h1}. The values of $\Psi$ that satisfy \eqref{h1} yield the upper and lower bounds on the elements of $\Delta A$ and $\Delta B$ for which consensus of the $N$, $N-P$, and $N+M$ uncertain agents can be achieved. Consequently, the sufficient condition stated in \eqref{h1} is tractable, as $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}$ and $\Psi$ are determinable.
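A minimal sketch of this tractability check is given below. It sweeps a sampled grid of $\Xi$, evaluates $\Psi$, and tests \eqref{h1}. The helpers \texttt{nu\_gap} (returning $\delta_\nu$ between two models) and \texttt{make\_system} (constructing a state-space model) are assumed to be supplied externally, e.g., by a robust-control toolbox; both names are hypothetical.
\begin{verbatim}
def check_condition_h1(b_max, nu_gap, make_system, P_cp, A, B, C,
                       lams, xi_samples):
    # b_max       : sqrt(1 - ||[N_cp M_cp]^T||_H^2), computed beforehand
    # nu_gap      : assumed external helper returning delta_nu(P1, P2)
    # make_system : assumed constructor of a model (A, lam*B, C)
    # lams        : nonzero Laplacian eigenvalues (the set P)
    # xi_samples  : sampled (Delta A, Delta B) pairs from Xi
    psi = 0.0
    for dA, dB in xi_samples:
        for lam in lams:
            P_pert = make_system(A + dA, lam * (B + dB), C)
            psi = max(psi, nu_gap(P_cp, P_pert))
    return b_max > psi, psi
\end{verbatim}
For the UUV example of Section \ref{SR}, \texttt{xi\_samples} would be the grid obtained by varying $\Delta v$ over $[-0.075, 0.075]$ in \eqref{eq:20} with $\Delta B=\mathbf{0}_{n,m}$.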
Now, $\mathbf{\hat{K}}(s)$ is obtained by utilizing the Glover-McFarlane method proposed in \cite{glover2}, via the Matlab function \textit{ncfsyn}, when the condition given in \eqref{h1} holds for the desired set of $(\Delta A, \Delta B)$. \textit{ncfsyn} finds a $\mathbf{\hat{K}}(s)$ that achieves the maximum generalized stability margin, which is equal to $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)] ^T\parallel_H^2)}$. Hence, $\mathbf{\hat{K}}(s)$ stabilizes all the systems belonging to $\mathcal{B}(\mathbf{\hat{P}}_{cp}(s), b_{\mathbf{\hat{P}}_{cp},\mathbf{\hat{K}}}^{max})$. Once $\mathbf{\hat{K}}(s)$ is attained, $\mathbf{K}(s)$ is established using the state-space matrices of $\mathbf{\hat{K}}(s)$. Eventually, the RAIDD consensus protocol given in \eqref{eq:3} can be synthesized by following the steps given below. \begin{enumerate} \item \textbf{Step~1:} Find the maximum $\nu$-gap metrics of all the plants of $\mathcal{Q}$ using \eqref{eq:13}. \item \textbf{Step~2:} Identify the smallest value among the maximum $\nu$-gap metrics. This value gives $\epsilon_{{cp}}$, and the system associated with it is the central plant. \item \textbf{Step~3:} \textbf{If} $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}>\epsilon_{{cp}}$ is true, \textbf{then} \begin{enumerate} \item Obtain the set of $(\Delta A, \Delta B)$ by varying their elements within the lower and upper bounds, and compute the corresponding set of $\Psi$. \item \textbf{If} $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)]^T\parallel_H^2)}>\Psi~\forall~(\Delta A, \Delta B)$, \textbf{then} synthesize $\mathbf{\hat{K}}(s)$ using \textit{ncfsyn} and thereafter establish $\mathbf{K}(s)$ using the state-space matrices of $\mathbf{\hat{K}}(s)$; \textbf{else} \textbf{stop}. \end{enumerate} \item \textbf{Step~4:} \textbf{Else} \textbf{stop}. \end{enumerate} \section{Simulation Results}\label{SR} In this section, a RAIDD consensus protocol is synthesized for the consensus of a MAS with $N=4$, $N-P=3$, and $N+M=5$ unmanned underwater vehicles (UUVs), whose nominal dynamics \cite{saboori} in state-space form are described by \eqref{eq:1}, where \begin{equation} A=\begin{bmatrix} -0.7&-0.3&0\\1&0&0\\0&-v_0&0 \end{bmatrix}, B=\begin{bmatrix} 0.035\\0\\0 \end{bmatrix}, C=\begin{bmatrix} 1&0&0\\0&1&0\\0&0&1 \end{bmatrix}. \label{SM} \end{equation} The pitch angular velocity, pitch angle, depth, and deflection of the control surface from the stern plane of the $i$th UUV are symbolized by $q_i$, $\theta_i$, $d_i$, and $u_i$, respectively. Then, the state vector, $\mathbf{x}_i$, and the input vector, $\mathbf{u}_i$, of \eqref{eq:1} are defined as $\mathbf{x}_i=[q_i, \theta_i, d_i]^T$ and $\mathbf{u}_i=[u_i]$, respectively. The uncertain parameter of the UUV's state-space model is the surge velocity, $v$. The nominal value of $v$ is represented by $v_0$ and equals 0.3~m/s. The lower and upper bounds of $v$ are 0.225~m/s and 0.375~m/s, respectively. Hence, $\Delta A$ in its interval matrix form is given by \begin{equation} \Delta A=\begin{bmatrix} 0&0&0\\0&0&0\\0&\Delta v&0 \end{bmatrix}; \Delta v \in [-0.075, +0.075]. \label{eq:20} \end{equation} The input matrix has no uncertainty; hence, $\Delta B=\mathbf{0}_{n,m}$. Also, the numbers of CUS graphs considered for the synthesis of the RAIDD protocol are $k=3$, $r=4$, and $p=3$. These graphs are shown in Figs. \ref{fig:g1}-\ref{fig:g10}. \begin{figure}[h!]
\centering \subfigure[ \label{fig:g5}]{\includegraphics[width=0.9in, height=0.8in]{G41.eps}} \subfigure[ \label{fig:g6}]{\includegraphics[width=0.9in, height=0.8in]{G42.eps}} \subfigure[ \label{fig:g7}]{\includegraphics[width=0.9in, height=0.8in]{G43.eps}} \caption{CUS graphs associated with 4 UUVs} \end{figure} \begin{figure}[h!] \centering \subfigure[ \label{fig:g1}]{\includegraphics[width=0.8in, height=0.8in]{G31.eps}} \subfigure[ \label{fig:g2}]{\includegraphics[width=0.8in, height=0.8in]{G32.eps}} \subfigure[ \label{fig:g3}]{\includegraphics[width=0.8in, height=0.8in]{G33.eps}} \subfigure[ \label{fig:g4}]{\includegraphics[width=0.8in, height=0.8in]{G34.eps}} \caption{CUS graphs associated with 3 UUVs} \end{figure} \begin{figure}[h!] \centering \subfigure[ \label{fig:g8}]{\includegraphics[width=0.9in, height=0.85in]{G51.eps}} \subfigure[ \label{fig:g9}]{\includegraphics[width=0.95in, height=0.85in]{G52.eps}} \subfigure[ \label{fig:g10}]{\includegraphics[width=0.9in, height=0.85in]{G53.eps}} \caption{CUS graphs associated with 5 UUVs} \end{figure} The numbers of agents and the values of $k$, $r$, and $p$ give $\xi=29$. $\mathcal{P}$ is then formed using the eigenvalues of $\breve{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^3$, $\grave{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^4$, and $\hat{\mathcal{L}}_h~\forall~ h\in\mathbb{N}_1^3$. Subsequently, the state-space forms of $\mathbf{\hat{P}}_i(s)~\forall~i \in \mathbb{N}_1^{29}$ are determined using the matrices given in \eqref{SM} and the eigenvalues belonging to $\mathcal{P}$. These systems are thereafter used to constitute $\mathcal{Q}$. Now, following \textbf{Step~1} and \textbf{Step~2} described in Section \ref{PR}, $\epsilon_{{cp}}$ is identified as 0.4293 and $\mathbf{\hat{P}}_{cp}(s) \in \mathcal{Q}$ as \begin{equation} \mathbf{\hat{P}}_{cp}(s): \begin{cases} \mathbf{\dot{x}}_i=A\mathbf{x}_{i}+2 B\mathbf{u}_{i}\\ \mathbf{y}_{i}=C\mathbf{x}_{i}. \end{cases} \label{eq:19} \end{equation} Using this $\mathbf{\hat{P}}_{cp}(s)$, $\hat{\mathbf{N}}_{cp}(s)$ and $\hat{\mathbf{M}}_{cp}(s)$ are obtained, and $\sqrt{(1-\parallel [\mathbf{\hat{N}}_{{cp}}(s) ~\mathbf{\hat{M}}_{{cp}}(s)] ^T\parallel_H^2)}$ is computed to be 0.6539, which is greater than 0.4293. Hence, the condition given by \eqref{sscondk3} holds. Subsequently, $\Xi$ is formed using the $\Delta A$ given in \eqref{eq:20} and $\Delta B=\mathbf{0}_{n,m}$. For this $\Xi$, $\Psi$ is determined, and its values are less than 0.6539, as indicated by Fig. \ref{fig:2}. \begin{figure}[h!] \centering {\includegraphics[width=2.5in, height=1.65in]{psi.eps}} \caption{Trajectory of $\Psi$ when $v$ is varied from 0.3750~m/s to 0.2250~m/s.} \label{fig:2} \end{figure} Then, the state-space matrices of $\mathbf{\hat{K}}(s)$ are computed using the Glover-McFarlane method proposed in \cite{glover2} and are given as \begin{align} K_A=&\begin{bmatrix} -0.3227& -0.3283\\0.658 &-0.5469 \end{bmatrix},\\ K_B=&\begin{bmatrix} 0.01976 & -0.05098 & 0.4598\\-0.01496 & 0.1107 & -0.4072 \end{bmatrix},\\ K_C=&\begin{bmatrix} -0.2959& 0.09703 \end{bmatrix}, K_D=\begin{bmatrix} -0.003565 & -0.2504 & 1.13 \end{bmatrix} \label{eq:21} \end{align} All the closed-loop systems, $[\mathbf{\acute{P}}_i(s), \mathbf{\hat{K}}(s)]~ \forall~i \in \mathbb{N}_1^{29}$, are stable, because the trajectories of all their eigenvalues remain in $\mathcal{C}_-$, as shown in Fig. \ref{fig:3}, even when $\Delta v$ is varied from $-0.075$ to $0.075$.
Hence, $\mathbf{\hat{K}}(s)$ simultaneously stabilizes $\mathbf{\hat{P}}_i(s)~\forall~i \in \mathbb{N}_1^\xi$ and their perturbed systems. \begin{figure}[h!] \centering {\includegraphics[width=2.5in, height=1.65in]{eigenall.eps}} \caption{Trajectories of all the eigenvalues of $[\mathbf{\acute{P}}_i(s), \mathbf{\hat{K}}(s)]~ \forall~i \in \mathbb{N}_1^{29}$ when $v$ is varied from 0.3750~m/s to 0.2250~m/s.} \label{fig:3} \end{figure} Consequently, $\mathbf{K}(s)$ is formed using the state-space matrices of $\mathbf{\hat{K}}(s)$ given in \eqref{eq:21}. The effectiveness of this $\mathbf{K}(s)$ is evaluated using the four simulation cases listed below.\\ \noindent \textbf{Case~1:} The nominal dynamics of the UUVs are used in this case. The MAS begins its operation with four UUVs. The fifth UUV is then added at the 200th~s. Following that, two UUVs are removed at the 500th~s.\\ \textbf{Case~2:} Here also, the nominal dynamics of the UUVs are used. Initially, the MAS begins its operation with 4 UUVs. Later, one UUV is removed at the 200th~s. Thereafter, at the 500th~s, the 4th and 5th UUVs are added.\\ \textbf{Case~3} and \textbf{Case~4:} In these cases, \textbf{Case~1} and \textbf{Case~2} are repeated with the uncertain dynamics, in which $v$ is varied from 0.3750~m/s to 0.2250~m/s.\par \noindent In all four cases, the desired communication network topologies are switched at 1~s. The simulation results of all four cases are shown in Figs. \ref{fig:a}-\ref{fig:n}. The state trajectories shown in these figures indicate that the consensus of the MAS with 4, 3, and 5 UUVs is accomplished even with parametric uncertainties and switching topologies. \begin{figure}[H] \centering \subfigure[\textbf{case 1}: pitch angular velocity response \label{fig:4}]{\includegraphics[width=3.5in, height=1.65in]{q453.eps}} \subfigure[\textbf{case 1}: pitch angle response \label{fig:5}]{\includegraphics[width=3.5in, height=1.65in]{theta453a.eps}} \subfigure[\textbf{case 1}: depth response \label{fig:6}]{\includegraphics[width=3.5in, height=1.65in]{d453a.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v_0$=0.3~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:a} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 2}: pitch angular velocity response \label{fig:7}]{\includegraphics[width=3.5in, height=1.65in]{NNq3000-.eps}} \centering \subfigure[\textbf{case 2}: pitch angle response \label{fig:8}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta3000-.eps}} \subfigure[\textbf{case 2}: depth response \label{fig:9}]{\includegraphics[width=3.5in, height=1.65in]{NNd3000-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s.
Also, $v_0$=0.3~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:b} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:10}]{\includegraphics[width=3.5in, height=1.65in]{Nq03750.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 3}: pitch angle response \label{fig:11}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta03750.eps}} \centering \subfigure[\textbf{case 3}: depth response \label{fig:12}]{\includegraphics[width=3.5in, height=1.65in]{Nd03750.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v$=0.3750~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:c} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:16}]{\includegraphics[width=3.5in, height=1.65in]{Nq03450.eps}} \subfigure[\textbf{case 3}: pitch angle response \label{fig:17}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta03450.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 3}: depth response \label{fig:18}]{\includegraphics[width=3.5in, height=1.65in]{Nd03450.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v$=0.3450~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:d} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:19}]{\includegraphics[width=3.5in, height=1.65in]{Nq03150.eps}} \subfigure[\textbf{case 3}: pitch angle response \label{fig:20}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta03150.eps}} \subfigure[\textbf{case 3}: depth response \label{fig:21}]{\includegraphics[width=3.5in, height=1.65in]{Nd03150.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v$=0.3150~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:e} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:22}]{\includegraphics[width=3.5in, height=1.65in]{Nq02850.eps}} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angle response \label{fig:23}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta02850.eps}} \subfigure[\textbf{case 3}: depth response \label{fig:24}]{\includegraphics[width=3.5in, height=1.65in]{Nd02850.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s.
Also, $v$=0.2850~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:f} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:25}]{\includegraphics[width=3.5in, height=1.65in]{Nq02550.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 3}: pitch angle response \label{fig:26}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta02550.eps}} \centering \subfigure[\textbf{case 3}: depth response \label{fig:27}]{\includegraphics[width=3.5in, height=1.65in]{Nd02550.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v$=0.2550~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:g} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 3}: pitch angular velocity response \label{fig:28}]{\includegraphics[width=3.5in, height=1.65in]{Nq02250.eps}} \subfigure[\textbf{case 3}: pitch angle response \label{fig:29}]{\includegraphics[width=3.5in, height=1.65in]{Ntheta02250.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 3}: depth response \label{fig:30}]{\includegraphics[width=3.5in, height=1.65in]{Nd02250.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, the fifth UUV is added to the CL MAS at the 200th~s and two UUVs are removed from the CL MAS at the 500th~s. Also, $v$=0.2250~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:h} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:31}]{\includegraphics[width=3.5in, height=1.65in]{NNq3750-.eps}} \subfigure[\textbf{case 4}: pitch angle response \label{fig:32}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta3750-.eps}} \subfigure[\textbf{case 4}: depth response \label{fig:33}]{\includegraphics[width=3.5in, height=1.65in]{NNd3750-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s. Also, $v$=0.3750~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:i} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:34}]{\includegraphics[width=3.5in, height=1.65in]{NNq3450-.eps}} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angle response \label{fig:35}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta3450-.eps}} \subfigure[\textbf{case 4}: depth response \label{fig:36}]{\includegraphics[width=3.5in, height=1.65in]{NNd3450-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s.
Also, $v$=0.3450~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:j} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:37}]{\includegraphics[width=3.5in, height=1.65in]{NNq3150-.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 4}: pitch angle response \label{fig:38}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta3150-.eps}} \centering \subfigure[\textbf{case 4}: depth response \label{fig:39}]{\includegraphics[width=3.5in, height=1.65in]{NNd3150-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s. Also, $v$=0.3150~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:k} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:40}]{\includegraphics[width=3.5in, height=1.65in]{NNq2850-.eps}} \subfigure[\textbf{case 4}: pitch angle response \label{fig:41}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta2850-.eps}} \end{figure} \begin{figure}[H] \subfigure[\textbf{case 4}: depth response \label{fig:42}]{\includegraphics[width=3.5in, height=1.65in]{NNd2850-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s. Also, $v$=0.2850~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:l} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:43}]{\includegraphics[width=3.5in, height=1.65in]{NNq2550-.eps}} \subfigure[\textbf{case 4}: pitch angle response \label{fig:44}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta2550-.eps}} \subfigure[\textbf{case 4}: depth response \label{fig:45}]{\includegraphics[width=3.5in, height=1.65in]{NNd2550-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s. Also, $v$=0.2550~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:m} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angular velocity response \label{fig:46}]{\includegraphics[width=3.5in, height=1.65in]{NNq2250-.eps}} \end{figure} \begin{figure}[H] \centering \subfigure[\textbf{case 4}: pitch angle response \label{fig:47}]{\includegraphics[width=3.5in, height=1.65in]{NNtheta2250-.eps}} \subfigure[\textbf{case 4}: depth response \label{fig:48}]{\includegraphics[width=3.5in, height=1.65in]{NNd2250-.eps}} \caption{Response of the CL MAS that begins its operation with four UUVs. Later, one UUV is removed from the CL MAS at the 200th~s and two UUVs are added to the CL MAS at the 500th~s. Also, $v$=0.2250~m/s and the communication network topologies of the CL MAS are switched at 1~s.} \label{fig:n} \end{figure} \section{Conclusion}\label{CL} Based on the $\nu$-gap metric-based simultaneous stabilization method, a RAIDD consensus protocol is developed for the consensus of MASs with \textit{attrition} and \textit{inclusion} of LTI higher-order uncertain homogeneous agents and switching topologies.
The sufficient condition developed for the existence of the RAIDD consensus protocol is easily testable for a MAS with a varying number of agents and switching topologies. Moreover, the tractability of this condition is demonstrated by generating a feasible RAIDD consensus protocol for the MAS with 4, 3, and 5 UUVs and switching topologies. The effectiveness of this protocol is validated through four cases of numerical simulations of the closed-loop system comprising the RAIDD protocol and the MAS with 4, 3, and 5 UUVs. The state trajectories of the agents indicate that consensus of the MAS with 4, 3, and 5 UUVs is achieved even in the presence of model uncertainties and switching topologies.
\section{Introduction} In recent years, deep neural networks have revolutionized many computing problems once thought to be intractable. However, they are often referred to as black boxes, and the mechanisms through which they learn have remained elusive. Insight into how a neural network internally represents the dataset it is trained on provides understanding of what makes for effective training data and of how a neural network extracts relevant features from a dataset. \subsection{Layer Representations} A feedforward neural network $N$ that acts on input data $x$ can be viewed as a composition of $n$ functions, where each function represents a layer $L_i$, $$N(x)=L_n(L_{n-1}(\dots L_1(x)))$$ Each layer, acting on input $z$, consists of a weight matrix $W$, a bias vector $b$, and a (usually nonlinear) activation function $\sigma$, $$L_i(z)=\sigma(Wz+b)$$ The output of $L_i$ is what we will refer to as a layer representation. It is important to note that more complicated architectures are not restricted to a linear chain structure and can have skip and recurrent connections that complicate their graph structure. However, skip connections simply take in multiple inputs, and recurrent (and recursive) connections can be unrolled, so layer representations can still be recovered. \subsection{Overview of Paper} Section 2 gives relevant background about topology and its relation to neural networks. Section 3 gives an overview of persistent homology, a technique to compute the topological features of a point-cloud dataset. Section 4 outlines the experiment performed and its results. Finally, Section 5 discusses the results and directions for future work. \section{Background and Previous Work} We will define topology, give motivation for the homology groups of a topological space, discuss their applicability to neural networks, and review previous work on the subject. \subsection{Definition of Topology} Topology is the study of geometric objects with a particular structure that allows for a rigorous treatment of the concepts of ``bending'' and ``twisting'' a space, and of how properties of the space are preserved under these transformations. Formally, a topological space is a tuple $(X,\tau)$, where $X$ is some set and $\tau$ is a collection of subsets of $X$. Armstrong (1983) gives the following definition of a topological space (a concrete check of the axioms is sketched at the end of this subsection), \begin{definition}[Topological space] \label{topology} Given a set $X$ and a collection $\tau$ of subsets of $X$, a topological space is the tuple $(X,\tau)$ that fulfills the following axioms, \begin{enumerate} \item $\emptyset\in\tau$ and $X\in\tau$ \item $\forall S_i\in\tau, \bigcup_iS_i\in\tau$ for arbitrary (finite or infinite) unions \item $\forall S_i\in\tau, \bigcap_iS_i\in\tau$ for finite intersections only \end{enumerate} \end{definition} Any member of $\tau$ is termed an open set (and its complement is a closed set), and $\tau$ is the topology on $X$. Another central concept in topology is continuous deformation between topological spaces, which is formalized in the idea of a homeomorphism, defined as, \begin{definition}[Homeomorphism] \label{homeomorphism} A homeomorphism $f:X\to Y$ is an isomorphism between two topological spaces $X$ and $Y$ and therefore fulfills the following criteria, \begin{enumerate} \item $f$ is bijective \item $f$ is continuous \item $f^{-1}$ is continuous \end{enumerate} \end{definition}
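As promised above, the following minimal Python sketch checks the axioms of Definition \ref{topology} for a finite collection of subsets; the set $X$ and the candidate collection $\tau$ below are our own toy choices, purely for illustration. On a finite set, closure under arbitrary unions and finite intersections reduces to closure under pairwise ones.
\begin{verbatim}
# Toy check of the topology axioms (Definition 1) on X = {1, 2, 3}.
# X and tau are illustrative choices, not taken from the paper.
from itertools import combinations

X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({2, 3}), X}

def is_topology(X, tau):
    # Axiom 1: the empty set and X itself must be open.
    if frozenset() not in tau or X not in tau:
        return False
    # Axioms 2 and 3: closure under unions and finite intersections.
    # Pairwise closure suffices here, by induction on the finite tau.
    return all(A | B in tau and A & B in tau
               for A, B in combinations(tau, 2))

print(is_topology(X, tau))  # prints: True
\end{verbatim}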
\subsection{Homology} The homology of a topological space informally characterizes the number of ``holes'' in the space. If we take a cycle to be a generalization of a closed loop on some space, such as on the surface of the sphere $S^2$, we can classify cycles by whether or not they can be continuously deformed into each other. If two cycles cannot be deformed into each other, it is said that there exists a hole in the topological space. A 0-dimensional hole is a connected component, a 1-dimensional hole is a loop, a 2-dimensional hole is a shell, and so on. The study of these holes requires a formal description of boundaries on topological spaces. Boundaries in general are linear combinations of more basic geometric objects, which motivates us to introduce some algebraic structure to allow for this. The homology of a topological space $X$ is formally represented by its homology groups, $$H_0(X),H_1(X),H_2(X),\dots$$ Each homology group $H_k(X)$ has an abelian group structure, and as coefficients we naturally use $\mathbb{Z}$. We refer to the rank of a homology group as the Betti number $b_k$. In general (ignoring torsion), $$H_k(X)\cong\mathbb{Z}\times\cdots\times\mathbb{Z}=\mathbb{Z}^{b_k}$$ A Betti number counts the number of holes in a topological space and is topologically invariant, which makes it an ideal candidate for judging the topological features of a space. Under a homeomorphism, the Betti numbers of the domain and codomain remain unchanged. It is important to note that the Betti numbers do not account for all topological invariants, such as torsion. Torsion refers to features such as the twist of a Möbius strip. The Betti numbers of $S^1$, the circle, are $b_0=1,b_1=1$, and 0 otherwise. This is because there is one connected component and one loop (1-dimensional boundary). The Betti numbers of $S^2$ are $b_0=1,b_2=1$, and 0 otherwise. This is because there is one connected component and one 2-dimensional boundary (around the interior of the sphere). Notably, every loop drawn on the surface can be deformed to a single point. The torus $T^2$, a doughnut-shaped object, has Betti numbers $b_0=1,b_1=2,b_2=1$, and 0 otherwise. There is one connected component and two independent loops (one around the ring and another around the ``main hole''). Lastly, there is a 2-dimensional boundary around the interior. It is interesting to note that $T^2=S^1\times S^1$. \subsection{Application to Neural Networks} We are interested in whether the layer representation $L_i$ acts similarly to a homeomorphism, in that it allows a neural network to represent the topological features of a dataset. Unfortunately, neural network layers are not in general homeomorphisms. \begin{theorem} A neural network layer, $L(x)=\sigma(Wx+b)$, need not be a homeomorphism. \begin{proof} If $W$ is not a member of the general linear group $\mathrm{GL}_n(\mathbb{R})$, it is not invertible and therefore no homeomorphism exists by Definition \ref{homeomorphism}. Additionally, if $\sigma$ is not bijective, no homeomorphism can exist, also by Definition \ref{homeomorphism}. \end{proof} \end{theorem} However, the topological approach is still useful. If the layer representations are examined and found to resemble the topological features of the dataset, this suggests that one reason neural networks are effective is that they learn a robust representation of the topological features of a space. \subsection{Previous Work} Studying neural network learning with topology has been a popular approach in theoretical machine learning. This is best exemplified by the concept of the manifold hypothesis.
Fefferman et al.\ (2013) give the following definition, \begin{definition}[Manifold Hypothesis] \label{manifold} High-dimensional data tends to lie near low-dimensional manifolds. \end{definition} This concept is best explained with an example. Imagine there exists a model to classify $m\times n$ images as cats or dogs. The data space is $\mathbb{R}^{mn}$. However, this space clearly contains data not relevant to the task at hand (such as images of flowers, or random noise), and the manifold hypothesis conjectures that there exists a much lower-dimensional submanifold of the data space that approximates the relevant data. Given that manifolds are topological spaces, it is reasonable to assume that methods from topology can be used to gain insight into the learning process, should the manifold hypothesis be correct. There exists a field called topological data analysis (TDA) focused precisely on extracting topological features from data. The most common technique used in TDA is persistent homology. Persistent homology will be described in the next section; it essentially computes whether the features corresponding to each homology group exist in a point-cloud, e.g.\ whether one can draw loops via linear combinations of datapoints. Montúfar et al.\ (2020) showed that a neural network can be trained to approximate topological features similarly to persistent homology. While this paper will use persistent homology to compute topological features, it is encouraging that a neural network can be trained to compute the topological features of a dataset. \section{Persistent Homology} Persistent homology is a technique to compute the topological features of a point-cloud dataset; it returns these results in a persistence diagram. \subsection{Method} Given a point-cloud, persistent homology grows $n$-dimensional balls around each point and records when these balls intersect the balls of other points. By connecting these balls together, one can represent the data as simplicial complexes, built by chaining together simplices (generalizations of points, lines, and triangles in higher-dimensional space). The validity of this construction is a result of the abelian group structure imposed on the homology of our dataset. These simplicial complexes can then represent topological features such as loops. Eventually all balls intersect, as they grow to encompass the whole dataset. We record the birth and death times of these features and plot them in a persistence diagram. \subsection{Persistence Diagrams} We generated a variation of a torus with a Klein bottle-like twist, $X$, according to the following parametrization, $$x = (R + 2P\cos\theta)\cos\phi\;,$$ $$y = (R + 2P\cos\theta)\sin\phi\;,$$ $$z = 2P\sin\theta\cos\frac{\phi}{2}\;,$$ $$w=2P\sin\theta\sin\frac{\phi}{2}$$ We display the point-cloud (only the first three dimensions) and its persistence diagram in figure 1. For $H_0(X)$, we can see that all features but one die soon after they are born, which is indicated by points residing close to the diagonal of the diagram. The single point that persists at infinity corresponds to the single connected component obtained when all the balls intersect in persistent homology. For $H_1(X)$, we similarly see most features die close to birth. However, there are two features that appear to persist for a significant time before ultimately dying when they merge. These correspond to loops found by the algorithm by chaining together simplices generated from datapoints to form a simplicial complex.
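Figure 1 can be reproduced, at least qualitatively, with a short script. The radii $R$ and $P$ and the sample size below are our own illustrative assumptions (the paper does not state them); \texttt{ripser} and \texttt{persim} are the tools used later in the experiments.
\begin{verbatim}
# Sample the twisted-torus parametrization and compute its persistence
# diagram with ripser.py. R, P and the sample size are assumptions.
import numpy as np
import matplotlib.pyplot as plt
from ripser import ripser
from persim import plot_diagrams

rng = np.random.default_rng(0)
R, P = 4.0, 1.0
theta = rng.uniform(0, 2 * np.pi, 2000)
phi = rng.uniform(0, 2 * np.pi, 2000)

X = np.stack([(R + 2 * P * np.cos(theta)) * np.cos(phi),
              (R + 2 * P * np.cos(theta)) * np.sin(phi),
              2 * P * np.sin(theta) * np.cos(phi / 2),
              2 * P * np.sin(theta) * np.sin(phi / 2)], axis=1)

dgms = ripser(X, maxdim=1)['dgms']   # H_0 and H_1 diagrams
plot_diagrams(dgms)
plt.show()
\end{verbatim}
The two long-lived $H_1$ points discussed above should then appear well off the diagonal.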
In general, the Betti numbers are difficult to extract from complicated persistence diagrams; however, we can make a reasonable guess that, were this manifold to be taken in the continuum limit, we would get Betti numbers $b_0=1$ and $b_1=2$. \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/kleinplot.png} \caption{The three-dimensional projection of the point cloud.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/kleinpersis.png} \caption{The persistence diagram of the point-cloud.} \end{subfigure} \caption{A torus with a Klein bottle-like twist along with its persistence diagram.} \end{figure} \section{Experiment and Results} \begin{table} \caption{Architecture of Neural Network} \centering \begin{tabular}{lll} \toprule Name & Units & Activation Function \\ \midrule Input & 4 & ReLU \\ Layer 1 & 10 & ReLU \\ Layer 2 & 30 & ReLU \\ Layer 3 & 10 & ReLU \\ Output & 1 & Sigmoid \\ \bottomrule \end{tabular} \end{table} This section outlines the experiment conducted, the layer representations obtained, their persistence diagrams, and a PCA projection of them. \subsection{Experiment Design} The dataset consisted of the modified torus parametrized in the previous section, together with noise sampled from the uniform distribution. The total dataset size was 9800 points. The network, whose architecture is given in table 1, was trained on the binary classification task of distinguishing whether or not a point lies on the modified torus. The model was built using TensorFlow. Using the Adam optimizer, the network was trained for 300 epochs, at which point an accuracy of approximately 98\% was obtained. Using the HDBSCAN clustering algorithm, the output of each layer was clustered and projected back onto the data space. Persistence diagrams of the clusters were calculated using Ripser. Lastly, principal component analysis (PCA) was applied to the layer representations corresponding to the torus to project them onto three dimensions (a code sketch of this pipeline is given at the end of this section). The experiment was then repeated using the Tanh activation function with the network architecture described in table 1. \subsection{Results} The results from the experiments are summarized in figures 2 to 13. We notice that the clustering becomes progressively less defined through the layers as the network's representation becomes more consolidated. Notably, despite the visual resemblance to the data, persistent homology appeared unable to recover topological features with the same degree of accuracy as on the raw data. The PCA projections obtained resemble the original data less and less as the layers become deeper. We note that we were unable to compute the persistence diagrams of the clusterings of the third layer of the ReLU network due to these clusterings being extremely noisy.
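The pipeline referenced above can be assembled roughly as follows. This is a sketch, not the original code: every detail not fixed by the text (the radii $R$ and $P$, the noise box, the 50/50 split between torus and noise points, the loss, the batch size, and the HDBSCAN settings) is an assumption made for illustration.
\begin{verbatim}
# Sketch of the experiment pipeline: train the table-1 network, extract
# layer representations, cluster them, and project with PCA.
import numpy as np
import tensorflow as tf
import hdbscan
from ripser import ripser
from sklearn.decomposition import PCA

def sample_torus(n, R=4.0, P=1.0):            # R, P are assumptions
    theta = np.random.uniform(0, 2 * np.pi, n)
    phi = np.random.uniform(0, 2 * np.pi, n)
    return np.stack([(R + 2 * P * np.cos(theta)) * np.cos(phi),
                     (R + 2 * P * np.cos(theta)) * np.sin(phi),
                     2 * P * np.sin(theta) * np.cos(phi / 2),
                     2 * P * np.sin(theta) * np.sin(phi / 2)], axis=1)

X = np.vstack([sample_torus(4900),                    # label 1: torus
               np.random.uniform(-6, 6, (4900, 4))])  # label 0: noise
y = np.concatenate([np.ones(4900), np.zeros(4900)])   # 9800 points

model = tf.keras.Sequential([                 # architecture of table 1
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(30, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=300, batch_size=64, verbose=0)

for layer in model.layers[:-1]:
    probe = tf.keras.Model(model.input, layer.output)
    Z = probe.predict(X, verbose=0)           # layer representation
    labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(Z)
    for c in range(labels.max() + 1):         # label -1 = noise, skipped
        dgms = ripser(Z[labels == c], maxdim=1)['dgms']
    Z3 = PCA(n_components=3).fit_transform(Z[y == 1])   # torus points
\end{verbatim}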
\begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu11.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu12.png} \end{subfigure} \caption{First two clusters of the layer 1 representations of the ReLU network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu1per.png} \caption{Persistence diagrams of clusters.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu1pca.png} \caption{PCA projection of torus.} \end{subfigure} \caption{Persistence diagrams of layer 1 clusters for the ReLU network and PCA projection.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu21.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu22.png} \end{subfigure} \caption{First two clusters of the layer 2 representations of the ReLU network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu2per.png} \caption{Persistence diagrams of clusters.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu2pca.png} \caption{PCA projection of torus.} \end{subfigure} \caption{Persistence diagrams of layer 2 clusters for the ReLU network and PCA projection.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu31.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/relu32.png} \end{subfigure} \caption{First two clusters of the layer 3 representations of the ReLU network.} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{./images/relu3pca.png} \caption{PCA projection of the layer 3 representation of the ReLU network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh11.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh12.png} \end{subfigure} \caption{First two clusters of the layer 1 representations of the Tanh network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh1per.png} \caption{Persistence diagrams of clusters.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh1pca.png} \caption{PCA projection of torus.} \end{subfigure} \caption{Persistence diagrams of layer 1 clusters for the Tanh network and PCA projection.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh21.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh22.png} \end{subfigure} \caption{First two clusters of the layer 2 representations of the Tanh network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh2per.png} \caption{Persistence diagrams of clusters.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh2pca.png} 
\caption{PCA projection of torus.} \end{subfigure} \caption{Persistence diagrams of layer 2 clusters for the Tanh network and PCA projection.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh31.png} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh32.png} \end{subfigure} \caption{First two clusters of the layer 3 representations of the Tanh network.} \end{figure} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh3per.png} \caption{Persistence diagrams of clusters.} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{./images/tanh3pca.png} \caption{PCA projection of torus.} \end{subfigure} \caption{Persistence diagrams of layer 3 clusters for the Tanh network and PCA projection.} \end{figure} \section{Discussion and Further Work} The visual results from the experiments seem to indicate that the network approximates homeomorphisms in the early layers, before deeper layer representations acquire a topology markedly different from that of the original data. This could result from the repeated application of non-homeomorphic layers making recovery of the underlying topology difficult, and it appears to be correlated with the inability of HDBSCAN to properly cluster the layer representations of deeper layers. However, we notice that, while not as well defined as for the raw data, the persistence diagrams indicate the presence of topological features similar to those of the original data for the first two layers of both networks, albeit with much shorter lifetimes due to interference from the noise of the clusterings. This raises the possibility that the network is attempting to isolate the topological features relevant for classification while excising superfluous ones. This would appear to explain the progressively sparser PCA projections obtained as the layers get deeper: the deep PCA projections cluster points in more localized areas, leaving larger empty portions of the space, unlike the spread-out representations of earlier layers. Furthermore, the PCA projections seem to indicate that the network approximates a continuous deformation for the first two layers of the Tanh network and the first layer of the ReLU network, before the topology becomes unrecognizable. This could be a result of ReLU not being injective (it maps all non-positive inputs to zero), whereas Tanh is a bijection onto $(-1,1)$, making it harder for a ReLU layer to approximate a homeomorphism. We note that nothing in this study is conclusive; these are observations gathered from a preliminary experiment. Further work must be done in testing neural networks on more varied topologies, and perhaps in using alternative clustering algorithms, as the noise included in these clusters appears to disrupt persistent homology. \section*{References} { \small [1] Abadi, M. et al. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https://doi.org/10.5281/zenodo.4724125. [2] Armstrong, M. A. (1983). Basic Topology. \textit{Springer}, https://doi.org/10.1007/978-1-4757-1793-8. [3] Fefferman, C., Mitter, S., \& Narayanan, H. (2013). Testing the Manifold Hypothesis. \textit{arXiv}, https://doi.org/10.48550/arXiv.1310.0425. [4] McInnes, L., Healy, J., \& Astels, S. (2017). hdbscan: Hierarchical density based clustering. \textit{The Journal of Open Source Software}, 2. https://doi.org/10.21105/joss.00205. [5] Montúfar, G., Otter, N., \& Wang, Y. (2020).
Can neural networks learn persistent homology features? \textit{arXiv}, https://doi.org/10.48550/arXiv.2011.14688. [6] Tralie, C., Saul, N., \& Bar-On, R. (2018). Ripser.py: A Lean Persistent Homology Library for Python. \textit{The Journal of Open Source Software}, 3(29), 925. https://doi.org/10.21105/joss.00925. } \end{document}
\section{Introduction} The Hamilton-Jacobi formalism has been applied in many different gravitational settings; for example, in the context of AdS/CFT \cite{Maldacena:1997re} it was used to find counterterms \cite{Martelli:2002sp,Batrachenko:2004fd}. On the other hand, the Hamilton-Jacobi equations of canonical gravity are the main tool in the derivation of renormalization group equations \cite{deBoer:2000cz,Heemskerk:2010hk}. The scalar field plays a fundamental role in string theory, where its expectation value controls the string coupling constant $g_{s}=\langle e^{\phi}\rangle$. In high energy physics the scalar field is the Higgs particle \cite{Aad:2012tfa}, and in cosmology it is the \textit{inflaton} that describes the cosmological perturbations observed by the COBE, WMAP, and Planck experiments \cite{Spergel:2006hy,Planck:2013jfk}. In the context of AdS/CFT the scalar field is a source for the boundary operators\footnote{This depends on the Dirichlet, Neumann, or mixed boundary conditions of the scalar field \cite{Anabalon:2015xvl,Henneaux:2006hk}.}. Interestingly, the hairy solutions studied here can be obtained from their asymptotically AdS counterparts: by suitably adjusting the cosmological constant or the scalar field potential so that the effective cosmological constant vanishes, one can obtain asymptotically flat black holes \cite{Anabalon:2013qua}. We consider boundary conditions such that the scalar field vanishes at spatial infinity, $\phi=0$\footnote{In \cite{Gibbons:1996af,Astefanesei:2006sy} another interesting boundary condition was considered, in which the moduli have a non-trivial radial dependence; the properties of these black holes then depend on the values $\phi_{\infty}$ of the moduli at spatial infinity.}. Various exact and regular (charged) hairy black hole solutions have been constructed, both asymptotically flat \cite{Anabalon:2013qua,Anabalon:2012ih,Gibbons:1987ps} and asymptotically AdS \cite{Anabalon:2012ta,Anabalon:2012ih,Anabalon:2012dw}. It is important to understand that the hair (the scalar field) lives between the horizon and the boundary, and that there is no conserved quantity associated with the scalar field. In \cite{Anabalon:2013qua,Nunez:1996xv} it is argued that the back-reaction ensures that the scalar field can hover in a strong gravitational field without collapsing completely.\\ In the present paper we consider minimally coupled Einstein-dilaton theories with non-trivial potentials. In \cite{Nunez:1996xv,Anabalon:2012ih,Anabalon:2012dw} hairy black holes were constructed in charged dilaton gravity theories without a scalar potential; in these cases the coupling of the dilaton to the gauge fields ensures the existence of an effective potential. In the extremal case the near-horizon data are completely determined by the electric and magnetic charges, so the attractor mechanism \cite{Ferrara:1995ih,Strominger:1996kf,Ferrara:1996dd} works like a no-hair theorem.\\ To describe the geodesics (including null geodesics) we need to consider two actions: one for the gravitational field, which determines the geometry of space-time for a given distribution of matter $T_{\mu\nu}$, and another which describes how a particle of mass $m$ moves in that space-time, \begin{equation} I=-mc\int_{a}^{b}ds \end{equation} where $ds$ is the infinitesimal line element, $ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}$.
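For the variation below it is convenient to record the same action in parametrized form, with $\lambda$ an arbitrary parameter along the world line (the absolute value keeps the expression valid for either signature convention), \begin{equation} I=-mc\int_{a}^{b}\sqrt{\left|g_{\mu\nu}\frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda}\right|}\;d\lambda \end{equation}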
Taking the variation we obtain the geodesic equation \begin{equation} \frac{d^{2}x^{\mu}}{ds^{2}}+\Gamma_{\alpha\beta}^{\mu}\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds}=0 \end{equation} but to describe null geodesics ($ds=0$) the equation above is not directly applicable\footnote{This problem can be solved easily, but we consider another option.}. The Hamilton-Jacobi formalism is a powerful method to obtain the orbit equations. Landau and Lifshitz \cite{Landau} showed that the Hamilton-Jacobi equation for a particle in any space-time $(\mathcal{M},g)$ is \begin{equation} g^{\mu\nu}\frac{\partial I}{\partial x^{\mu}}\frac{\partial I}{\partial x^{\nu}}-m^{2}c^{2}=0 \end{equation} There is an extensive literature on the study of the orbits of several black holes in different theories \cite{Frye:2013xia,Cruz:2004ts,Magnan:2007uw,Cruz:2011yr,Wells:2011st,Hackmann:2008zz,Olivares:2013zta}. Recently, light propagation in a plasma on Kerr space-time was studied using the Hamilton-Jacobi equation \cite{Perlick:2017fio}; in four dimensions, however, it is complicated to find rotating hairy black holes, although several solutions are currently known in three dimensions \cite{Correa:2012rc,Natsuume:1999at}.\\ \\ For the light trajectory (null geodesic) we impose the null condition on the wave four-vector, $k_{\mu}k^{\mu}=0$. Replacing $k_{\mu}=\partial\psi/\partial x^{\mu}$ we find the eikonal equation in a gravitational field, \begin{equation} g^{\mu\nu}\frac{\partial \psi}{\partial x^{\mu}}\frac{\partial \psi}{\partial x^{\nu}}=0 \label{iconal} \end{equation} The present paper is structured as follows. In section 2, we give a brief review of the Newtonian results and their general relativistic corrections. In section 3, we present the two hairy solutions and their properties; in section 4 we use the holographic stress tensor method to determine the mass of the hairy black holes. Finally, in section 5, following the intuitive procedure of section 2, we calculate the periapsis shift and the deflection of light for both hairy black holes, and in the final part we present the conclusions. \section{Hamilton-Jacobi method in Schwarzschild} The Kepler problem of celestial mechanics consists in finding and solving the orbit equations of a two-body stellar system. Classically it is solved (exactly) with the Newtonian potential \begin{equation} U_{N}(r)=-\frac{G_{N}mm^{'}}{r} \end{equation} It can alternatively be solved by the Hamilton-Jacobi method. Consider the Lagrangian for a particle in a central force field (in the plane $\theta=\pi/2$), \begin{equation} L=\frac{m}{2}(\dot{r}^{2}+r^{2}\dot{\varphi}^{2})-U_{N}(r), \qquad M=\frac{\partial L}{\partial\dot{\varphi}}=mr^{2}\dot{\varphi} \end{equation} The Hamilton-Jacobi equation is \begin{equation} \frac{1}{2m}\biggl{(}\frac{\partial I}{\partial r}\biggr{)}^{2}+\frac{1}{2mr^{2}}\biggl{(}\frac{\partial I}{\partial \varphi}\biggr{)}^{2}+U_{N}(r)=\mathcal{E}^{'} \end{equation} where the ansatz is $I=-\mathcal{E}^{'}t+M\varphi+I_{r}^{(0)}$ and its solution is \begin{equation} I=-\mathcal{E}^{'}t+M\varphi+\int{\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}dr} \label{Ir} \end{equation} The trajectory is given by the equation $\frac{\partial I}{\partial M}=\mathrm{const}$, \begin{equation} \varphi(r)=\int\frac{Mdr}{r^{2}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}}+\varphi_{0} \Rightarrow \frac{p}{r}=1+e\cos{(\varphi-\varphi_{0})} \label{phi0} \end{equation} where $p=M^{2}/m^{2}m^{'}G_{N}$ and the eccentricity is $e=\sqrt{1+2\mathcal{E}^{'}p/G_{N}mm^{'}}$. 
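For completeness, the last step is the standard Kepler integral: carrying out the integration in (\ref{phi0}) gives \begin{equation*} \varphi-\varphi_{0}=\cos^{-1}{\frac{\frac{M}{r}-\frac{G_{N}m^{2}m^{'}}{M}}{\sqrt{2m\mathcal{E}^{'}+\frac{G_{N}^{2}m^{4}m^{'2}}{M^{2}}}}} \end{equation*} and solving for $1/r$ reproduces the conic $p/r=1+e\cos{(\varphi-\varphi_{0})}$ with exactly the values of $p$ and $e$ quoted above. 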
It is easy to show that for an elliptical orbit ($e<1$)\footnote{In this case we have $\mathcal{E}^{'}<0$ and $r_{min}=p/(1+e)=a(1-e)$, $r_{max}=p/(1-e)=a(1+e)$.} \begin{equation} \Delta\varphi^{(0)}=-\frac{\partial \Delta I_{r}^{(0)}}{\partial M}=2\int_{r_{min}}^{r_{max}}\frac{Mdr}{r^{2}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}}=2\pi \label{phichan} \end{equation} This is the angle swept by the position vector as $r$ changes from $r_{max}$ to $r_{min}$ and back to $r_{max}$\footnote{Another interesting result is $\mathcal{E}^{'}=-G_{N}mm^{'}/2a$, where $m^{'}$ is the mass of the star, black hole or other large gravitational source, and $a$ is the length of the semi-major axis of the elliptical orbit.}. Astronomical observations showed that the perihelion of Mercury deviates from this result, $\Delta\varphi^{(0)}\rightarrow \Delta\varphi^{(0)}+\delta\varphi$~\footnote{The closest point of a celestial object in a closed orbit around another is known as the \emph{periapsis}; for the particular case of the solar system one uses \emph{perihelion}.}. There are different ways to deal with this problem: classically, one can consider perturbations of the Newtonian potential, $U_{N}\rightarrow U_{N}+\delta U(r)$ \cite{Landau,LandauL,Wells:2011st}. Another option is to consider the corrections of general relativity. The usual method consists in solving the geodesic equation \cite{Magnan:2007uw}, but here we focus on the Hamilton-Jacobi method \cite{Vasudevan:2005js}. \subsection{General relativistic corrections} \label{Gr} In this section, we describe step by step the procedure proposed by Landau and Lifshitz \cite{Landau}. This gives us the correct intuition when we face the hairy case. We will consider the well-known asymptotically flat Schwarzschild solution, for which the action is \begin{equation} S[g_{\mu\nu}]=\frac{1}{2\kappa}\int_{\mathcal{M}}{d^{4}x\sqrt{-g}R}+\frac{1}{\kappa}% \int_{\partial\mathcal{M}}{d^{3}xK\sqrt{-h}} \label{actionShw}% \end{equation} Here $\kappa=8\pi G_{N}$, and the last term is the Gibbons-Hawking boundary term, where $h_{ab}$ is the boundary metric and $K$ is the trace of the extrinsic curvature. The solution is \begin{equation} ds^{2}=-c^2 N(r)dt^{2}+\frac{dr^{2}}{N(r)}+r^{2}(d\theta^{2}+\sin^{2}{\theta}d\varphi^{2}) \end{equation} where $N(r)=1-r_{g}/r$ and $r_{g}=2G_{N}m^{'}/c^{2}$. Here $m^{'}$ is the gravitational mass. We consider geodesics in the plane $\theta=\pi/2$; the Hamilton-Jacobi equation for the trajectory is \begin{equation} \frac{1}{c^{2}N(r)}\biggl{(}\frac{\partial I}{\partial t}\biggr{)}^{2} -N(r)\biggl{(}\frac{\partial I}{\partial r}\biggr{)}^{2}-\frac{1}{r^{2}} \biggl{(}\frac{\partial I}{\partial \varphi}\biggr{)}^{2}-m^{2}c^{2}=0 \label{HJ1} \end{equation} The ansatz is $I(t,r,\varphi)=-\mathcal{E}_{0}t+M\varphi+I_{r}(r)$, where $\mathcal{E}_{0}$ and $m$ are the energy and mass of the particle, and $M$ is its angular momentum. 
Replacing the expression for $I(t,r,\varphi)$ in (\ref{HJ1}) we can solve for $I_{r}$, \begin{equation} I_{r}(r)=\int{\biggl{[}\frac{r^{2}(\mathcal{E}_{0}^{2}-m^{2}c^{4})+m^{2}c^{4}rr_{g}}{c^{2}(r-r_{g})^2}-\frac{M^{2}}{r(r-r_{g})}\biggr{]}}^{1/2}dr \label{IGR} \end{equation} The trajectory equation is determined by $\varphi(r)=-\frac{\partial I_{r}}{\partial M}+\mathrm{const}$, and it is easy to show that \begin{equation} \varphi(r)=\int \biggl{[}r^{2}\sqrt{\frac{\mathcal{E}_{0}^{2}}{c^{2}}-\biggl{(}m^{2}c^{2}+\frac{M^{2}}{r^{2}}\biggr{)}\biggl{(}1-\frac{r_{g}}{r}\biggr{)}}\biggr{]}^{-1}Mdr+\mathrm{const} \label{phi1} \end{equation} The classical limit given in (\ref{phi0}) can be obtained by considering $c\rightarrow\infty$, $r_{g}=0$ and $\mathcal{E}_{0}=mc^{2}+\mathcal{E}^{'}$, where $\mathcal{E}^{'}$ is the non-relativistic energy. To obtain the relevant corrections of general relativity we consider velocities of the planets small with respect to the speed of light\footnote{For example, for the star S2 that orbits Sagittarius $A^{*}$ (a supermassive black hole), the velocity at periapsis is five percent of the speed of light \cite{Schodel:2002vg}.}, which means $r_{g}/r\ll1$. The angular change of closed orbits can be calculated in a form similar to (\ref{phichan}), \begin{equation} \Delta\varphi(r)=-\frac{\partial\Delta I_{r}}{\partial M} \label{varphii} \end{equation} For that purpose, we need to expand (\ref{IGR}) and identify the Newtonian term and its relativistic correction. Comparing the $M^{2}/r^{2}$ part of (\ref{Ir}) and (\ref{IGR}), it is easy to conclude that we need to make the change of variable \begin{equation} r(r-r_{g})=r^{'2} \Rightarrow r-\frac{r_{g}}{2}\approx r^{'} \label{newcoor} \end{equation} and introduce the non-relativistic energy $\mathcal{E}^{'}$. Expanding in $r_{g}/r\ll1$ keeping $M^{2}/r^{2}$ fixed (and dropping the prime), we have \begin{equation} I_{r}=\int\sqrt{2m(E-U)-\frac{M^{2}}{r^{2}}+\frac{3m^{2}c^{2}r_{g}^{2}}{2r^{2}}+O\biggl{(}\frac{r_{g}^{3}}{r^{3}}\biggr{)}}dr \end{equation} where \begin{equation} E=\mathcal{E}^{'}+\frac{\mathcal{E}^{'2}}{2mc^{2}}, \qquad U=U_{N}\biggl{(}1+\frac{4\mathcal{E}^{'}}{mc^{2}}\biggr{)}, \qquad U_{N}=-\frac{mm^{'}G_{N}}{r} \label{EUN} \end{equation} We can see that the fundamental correction to $M^{2}/r^{2}$ is given by the term $r_{g}^{2}/r^{2}$, and it is this term that describes the periapsis shift of the orbit. 
We consider $E\approx\mathcal{E}^{'}$, $U\approx U_{N}$ and expand only in $r_{g}/r\ll1$, keeping $M^{2}/r^{2}$ fixed: \begin{equation} I_{r}\approx\int\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}~dr+\frac{3m^{2}c^{2}r_{g}^{2}}{4}\int\frac{1}{r^{2}}\biggl{[}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}\biggr{]}^{-1}dr+\ldots \end{equation} Considering the limits of integration $(r_{min}, r_{max})$, we have $I_{r}\rightarrow \Delta I_{r}$ with \begin{equation} \Delta I^{(0)}_{r}=\int_{r_{min}}^{r_{max}}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}~dr \label{deltaI0} \end{equation} so that\footnote{Note that (\ref{deltaI0}) is the same expression given in (\ref{Ir}).} \begin{equation} \Delta I_{r}\approx\Delta I_{r}^{(0)}-\frac{3m^{2}c^{2}r_{g}^{2}}{4M}~\frac{\partial\Delta I_{r}^{(0)}}{\partial M} \end{equation} Then, using (\ref{varphii}), it is easy to obtain the correction given by general relativity\footnote{For the perihelion shift of Mercury this gives $43^{''}$ per century, while astronomical observations give $43.1^{''}\pm 0.4^{''}$.}: \begin{equation} \Delta\varphi\approx2\pi+\frac{6\pi G_{N}^{2}m^{2}m^{'2}}{c^{2}M^{2}} \label{Schwaphi} \end{equation} To determine the trajectory of a light ray (null geodesic) we need the eikonal equation (\ref{iconal}). This equation is similar to (\ref{HJ1}) with $m^{2}=0$ and $I\rightarrow\psi$; in this case, instead of the energy $\mathcal{E}=-\partial I/\partial t$ of the particle, we consider the light frequency $\omega_{0}=-\partial\psi/\partial t$, so that \begin{equation} \psi=-\omega_{0}t+M\varphi+\psi_{r}(r) \end{equation} From the eikonal equation we have \begin{equation} \psi_{r}(r)=\frac{\omega_{0}}{c}\int \biggl{[}\frac{r^{2}}{(r-r_{g})^{2}}-\frac{\varrho^{2}}{r(r-r_{g})}\biggr{]}^{1/2}dr, \qquad \varrho=\frac{Mc}{\omega_{0}} \end{equation} where $\varrho$ is the impact parameter. Once more we consider the transformation given in (\ref{newcoor}), \begin{equation} \psi_{r}(r)\approx\frac{\omega_{0}}{c}\int\biggl{(}1+\frac{2r_{g}}{r}-\frac{\varrho^{2}}{r^{2}}\biggr{)}^{1/2}dr \label{psiSchw} \end{equation} The equation for the trajectory of a light ray is $\varphi(r)=-\frac{\partial\psi_{r}}{\partial M}+\mathrm{const}$, so \begin{equation} \varphi(r)=\int\frac{1}{r^{2}}\biggl{[}\sqrt{\frac{1}{\varrho^{2}}-\frac{1}{r^{2}}\biggl{(}1-\frac{r_{g}}{r}\biggr{)}} \biggr{]}^{-1}dr+\mathrm{const} \label{lightrayec} \end{equation} The gravitational correction is encoded in $r_{g}$; if we take $r_{g}=0$ the integral gives $r=\varrho/\cos{\varphi}$, namely a straight line passing at a distance $\varrho$ from the origin. Integrating (\ref{psiSchw}) with $r_{g}=0$ along the path from a large distance $R$ to the turning point $r=\varrho$ and back, \begin{equation} \Delta\psi_{r}^{(0)}=\frac{2\omega_{0}}{c}\int_{\varrho}^{R}\sqrt{1-\frac{\varrho^{2}}{r^{2}}}~dr=\frac{2\omega_{0}}{c}\biggl{[}\sqrt{R^{2}-\varrho^{2}}-\varrho\cos^{-1}{\frac{\varrho}{R}}\biggr{]} \end{equation} whose $M$-dependent part for $R\rightarrow\infty$ is $-M\pi$. The angle formed by the asymptotes of a light ray that comes from a very great distance $(r\rightarrow\infty, \varphi=-\pi/2)$, approaches the nearest point $(r=\varrho, \varphi=0)$ and moves away again to a great distance $(r\rightarrow\infty, \varphi=\pi/2)$ is $\Delta\varphi^{(0)}=-\partial\Delta\psi_{r}^{(0)}/\partial M=\pi$. 
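As a quick numerical sanity check of the periapsis formula (\ref{Schwaphi}): for a Keplerian ellipse one has $M^{2}=G_{N}m^{2}m^{'}a(1-e^{2})$, so the shift per orbit is $\delta\varphi=6\pi G_{N}m^{'}/\left[c^{2}a(1-e^{2})\right]$. The following short Python sketch, with the standard textbook values of Mercury's orbital elements (these numbers are not taken from the present paper), reproduces the $43^{''}$ per century quoted in the footnote: \begin{verbatim}
import math

G, c  = 6.674e-11, 2.998e8    # SI units
m_sun = 1.989e30              # kg, plays the role of m'
a, e  = 5.791e10, 0.2056      # Mercury: semi-major axis (m), eccentricity
T     = 87.969 * 86400.0      # s, Mercury's orbital period

# periapsis advance per orbit from (Schwaphi) with M^2 = G m^2 m' a (1-e^2)
dphi = 6.0*math.pi*G*m_sun / (c**2 * a * (1.0 - e**2))

century = 100 * 365.25 * 86400.0
print(dphi * (century/T) * 206264.8)   # ~43.0 arcsec per century
\end{verbatim} 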
The relativistic corrections are given by \begin{equation} \Delta\varphi=-\frac{\partial\Delta\psi_{r}}{\partial M} \label{light} \end{equation} and for that purpose we need to expand (\ref{psiSchw}) in $r_{g}/r\ll1$ (keeping $\varrho^{2}/r^{2}$ constant), \begin{equation} \psi_{r}(r)\approx \psi^{(0)}_{r}+\frac{r_{g}\omega_{0}}{c}\ln{\varrho}+\frac{r_{g}\omega_{0}}{c}\cosh^{-1}\biggl{(}\frac{r}{\varrho}\biggr{)} \end{equation} The term $\psi^{(0)}_{r}$ corresponds to the classical rectilinear ray, $\psi_{r}(r_{g}=0)=\psi^{(0)}_{r}$ in (\ref{psiSchw}). The total change of $\psi_{r}$ during the propagation of the light, from a large distance $R$ to the point $r=\varrho$ closest to the center and back again to the distance $R$, is \begin{equation} \Delta\psi_{r}=\Delta\psi^{(0)}_{r}+ \frac{2r_{g}\omega_{0}}{c}\cosh^{-1}\biggl{(}\frac{R}{\varrho}\biggr{)} \end{equation} Replacing in (\ref{light}), \begin{equation} \Delta\varphi=-\frac{\partial\Delta\psi^{(0)}_{r}}{\partial M}+\frac{2r_{g}R}{\varrho\sqrt{R^{2}-\varrho^{2}}}=\Delta\varphi^{(0)}+\frac{2r_{g}R}{\varrho\sqrt{R^{2}-\varrho^{2}}} \end{equation} Taking the limit $R\rightarrow\infty$ and remembering that $\Delta\varphi^{(0)}=\pi$ for the rectilinear ray, we obtain \begin{equation} \Delta\varphi=\pi+\frac{2r_{g}}{\varrho} \label{Schwalighdeflex} \end{equation} \newpage \section{Hairy black hole solutions} We will consider two hairy solutions, the second of which is the smooth limit of the first when the hairy parameter $\nu\rightarrow\infty$. We are interested in asymptotically flat hairy black hole solutions with a spherical horizon \cite{Anabalon:2013sra, Anabalon:2013eaa}. The action is \begin{equation} S[g_{\mu\nu},\phi]=\frac{1}{2\kappa}\int_{\mathcal{M}}{d^{4}x\sqrt{-g}\biggl{[}R% -\frac{(\partial\phi)^{2}}{2}-V(\phi)\biggr{]}}+\frac{1}{\kappa}% \int_{\partial\mathcal{M}}{d^{3}xK\sqrt{-h}}+S^{ct} \label{action}% \end{equation} where $V(\phi)$ is the scalar potential, $\kappa=8\pi G_{N}$ and $S^{ct}$ is the boundary counterterm, which we use to construct the renormalized quasi-local stress tensor $\tau_{ab}$. The equations of motion for the dilaton and the metric are \[ \frac{1}{\sqrt{-g}}\partial_{\mu}\left( \sqrt{-g}g^{\mu\nu}\partial_{\nu}% \phi\right) -\frac{\partial V}{\partial\phi}=0 \] \[ E_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-\frac{1}{2}T_{\mu\nu}^{\phi}=0 \] where the stress tensor of the scalar field is \[ T_{\mu\nu}^{\phi}=\partial_{\mu}\phi\partial_{\nu}\phi-g_{\mu\nu}\left[ \frac{1}{2}\left( \partial\phi\right) ^{2}+V(\phi)\right] \] When one considers a convex or positive semi-definite potential, the no-hair theorems ensure that no regular black hole solutions exist \cite{Israel:1967wq,Bekenstein:1995un,Sudarsky:1995zg}. In \cite{Martinez:2006an,Acena:2013jya,Anabalon:2013sra,Anabalon:2013qua,Acena:2012mr,Anabalon:2012dw} some of those conditions were relaxed and a large (exact) family of hairy black holes was found. 
In the present article we will work with the following potentials, where $l_{\nu}^{-1}=\sqrt{(\nu^{2}-1)/2\kappa}$ and $j^{-1}=1/\sqrt{2\kappa}$\footnote{In \cite{Anabalon:2013eaa,DallAgata:2012mfj,deWit:2013ija} it was shown that, for asymptotically AdS space-times and some particular values of the parameters, the scalar potentials (\ref{pot1}), (\ref{pot2}) become the one of a truncation of $\omega$-deformed gauged $\mathcal{N}=8$ supergravity.}: \begin{align} V(\phi) & =\frac{2\alpha}{\nu^{2}}\biggl{[}\frac{\nu-1}{\nu+2}\sinh{\phi l_{\nu }(\nu+1)}-\frac{\nu+1}{\nu-2}\sinh{\phi l_{\nu}(\nu-1)}+4\frac{\nu^{2}-1}% {\nu^{2}-4}\sinh{\phi l_{\nu}}\biggr{]} \label{pot1} \end{align} \begin{equation} V(\phi)=\frac{\alpha j\phi}{\kappa}[2+\cosh (j\phi)]-\frac{3\alpha}{\kappa}\sinh(j\phi) \label{pot2} \end{equation} Throughout the present paper we focus on the negative branch, $\phi\in (-\infty,0]$, for which $\alpha>0$. \begin{figure}[h] \centering \includegraphics[scale=0.358]{Vvsphinu.eps} \includegraphics[scale=0.31]{Vvsphinuinfinity.eps} \caption{ The left-hand graphic (a) shows the potential (\ref{pot1}) at $\nu=5$; the right-hand graphic (b) shows the potential (\ref{pot2}). In both cases there are two families: $\phi\in (-\infty,0]$ for which $\alpha>0$, and $\phi\in [0,+\infty)$ for which $\alpha<0$. \label{Vphi}} \end{figure} \newpage \subsection{Black hole solution} \label{Sol1} Consider the action (\ref{action}) with the scalar potential (\ref{pot1}) and the metric ansatz \begin{equation} ds^{2}=\Omega(x)\left[ -c^{2}f(x)dt^{2}+\frac{\eta^{2}dx^{2}}{f(x)}+d\theta ^{2}+\sin^{2}\theta d\varphi^{2}\right] \label{Ansatz}% \end{equation} The equations of motion can be integrated for the conformal factor \cite{Anabalon:2013sra, Anabalon:2013qua, Acena:2012mr,Acena:2013jya}: \begin{equation} \Omega(x)=\frac{\nu^{2}x^{\nu-1}}{\eta^{2}(x^{\nu}-1)^{2}} \label{omega}% \end{equation} where the parameters that characterize the hairy solutions are $\alpha$ and $\nu$. The solution for the scalar field is \begin{equation} \phi(x)=l_{\nu}^{-1}\ln{x}% \end{equation} and the metric function is \begin{equation} f(x)=\alpha\biggl{[}\frac{1}{\nu^{2}-4}-\frac{x^{2}}{\nu^{2}% }\biggl{(}1+\frac{x^{-\nu}}{\nu-2}-\frac{x^{\nu}}{\nu+2}% \biggr{)}\biggr{]}+\frac{x}{\Omega(x)} \label{f}% \end{equation} where $\eta$ is the only integration constant and $l_{\nu}^{-1}=\sqrt{(\nu ^{2}-1)/2\kappa}$. The scalar potential (\ref{pot1}) and the solution for $\Omega(x), \phi(x), f(x)$ are invariant under the transformation $\nu\rightarrow-\nu$. The boundary, where $\Omega(x)$ blows up, corresponds to $x=1$, and the theory has a standard flat vacuum, $V(\phi=0)=0$. The hairy parameter varies in the range $\nu\in [1,+\infty)$\footnote{Since the potential and the solution are symmetric under $\nu\rightarrow -\nu$, the behavior is the same if we consider the range $\nu\in (-\infty,-1]$.}, and in the limit $\nu=1$ one gets $l_{\nu}\rightarrow\infty$ and $\phi\rightarrow0$, so that the (asymptotically flat) Schwarzschild black hole is smoothly recovered.\\ There are two distinct branches, one corresponding to $x\in(0,1]$ and the other to $x\in[1,\infty)$; the curvature singularities are at $x=0$ for the first branch and $x\rightarrow\infty$ for the second one (these are the locations where the scalar field also blows up). Considering the change of coordinates\footnote{For the negative branch $x<1$. 
The procedure for obtaining the coordinate transformation (\ref{trans1}) can be found in \cite{Anabalon:2015xvl}.} \begin{equation} x=1-\frac{1}{\eta r}+\frac{(\nu^{2}-1)}{24\eta^{3}r^{3}}\biggl{[}1+\frac {1}{\eta r}-\frac{9(\nu^{2}-9)}{80\eta^{2}r^{2}}\biggr{]}+O(r^{-6}) \label{trans1} \end{equation} we can read off the mass from the sub-leading term of $g_{tt}$: \begin{equation} -g_{tt}=f(x)\Omega(x)=1-\frac{\alpha+3\eta^{2}}{3\eta ^{3}r}+O(r^{-3})\label{Lapse}% \end{equation} In section \ref{massBh} we use the quasilocal stress tensor to show that the mass of this black hole is \begin{equation} m_{1}^{'}=\frac{4\pi c^{2}}{\kappa}\biggl{(}\frac{\alpha+3\eta^{2}}{3\eta^{3}}\biggr{)} \end{equation} where the gravitational radius (or mass parameter) is $r_{g}=2 m_{1}^{'}G_{N}/c^{2}$. \subsection{Black hole solution $\nu=\infty$} \label{Sol2} The following solution was studied in \cite{Anabalon:2013qua}; here we consider the neutral case.\\ Once more, consider the action (\ref{action}), now with the scalar potential (\ref{pot2}) and the metric ansatz \begin{equation} ds^{2}=\Omega(x)\left[ -c^{2}f(x)dt^{2}+\frac{\eta^{2}dx^{2}}{x^{2}f(x)}% +d\theta^{2}+\sin^{2}{\theta}d\varphi^{2}\right] \label{Ansatz2}% \end{equation} The equations of motion can be integrated for the conformal factor \cite{Anabalon:2013sra, Anabalon:2013qua}: \begin{equation} \Omega(x)=\frac{x}{\eta^{2}\left( x-1\right) ^{2}} \end{equation} Here $\alpha$ is the parameter that characterizes the hairy solution. With this choice of the conformal factor it is straightforward to obtain the expression for the scalar field \begin{equation} \phi(x)=j^{-1}\ln{x}, \qquad j^{-1}=\frac{1}{\sqrt{2\kappa}} \end{equation} and the metric function \begin{equation} f(x)=\alpha\left[ \frac{(x^{2}-1)}{2x}-\ln(x)\right] +\frac{1}{\Omega(x)} \label{f1} \end{equation} $\Omega(x)$ blows up at the boundary, which corresponds to $x=1$, and the theory has a standard flat vacuum, $V(\phi=0)=0$. There are two distinct branches, one corresponding to $x\in(0,1]$ and the other to $x\in[1,\infty)$; the curvature singularities are at $x=0$ for the first branch and $x\rightarrow\infty$ for the second one (these are the locations where the scalar field also blows up). 
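Expansions of this kind are straightforward to check with a computer algebra system. As an illustration (anticipating the asymptotic coordinate transformation (\ref{trans2}) quoted below), the following minimal sympy sketch, included purely as a consistency check, verifies the fall-off of $-g_{tt}=\Omega f$ used in the next paragraph: \begin{verbatim}
import sympy as sp

r, eta, alpha = sp.symbols('r eta alpha', positive=True)

# inverse coordinate transformation (trans2), negative branch
x = 1 - 1/(eta*r) + 1/(2*eta**2*r**2) - 1/(8*eta**3*r**3)

Omega = x/(eta**2*(x - 1)**2)                        # conformal factor
f = alpha*((x**2 - 1)/(2*x) - sp.log(x)) + 1/Omega   # metric function (f1)

# expect: 1 - alpha/(6*eta**3*r) + O(1/r**3)
print(sp.series(Omega*f, r, sp.oo, 3))
\end{verbatim} 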
The difference with the previous hairy solution is that here there is no hairy parameter, so we cannot obtain the Schwarzschild case.\\ Here $\eta$ is the constant of integration, and from the horizon equation we can solve for it: \begin{equation} f(\eta,x_{h})=0 \rightarrow \eta^{2}=\frac{x_{h}}{(x_{h}-1)^{2}}\biggl{[}\frac{\alpha (2x_{h}\ln{x_{h}}-x_{h}^{2}+1)}{2x_{h}}\biggr{]} \label{ecuhori} \end{equation} Considering the following asymptotic coordinate transformation\footnote{For the negative branch.} \begin{equation} x=1-\frac{1}{\eta r}+\frac{1}{2\eta^{2}r^{2}}-\frac{1}{8\eta^{3}r^{3}}+O(r^{-5}) \label{trans2} \end{equation} the gravitational radius $r_{g}$ (or mass parameter) can be read off from the $-g_{tt}$ component in the $r$ coordinate, \begin{equation} \Omega(x)f(x)=1-\frac{\alpha}{6\eta^{3}r}+O(r^{-3}) \end{equation} so that $r_{g}=\alpha/6\eta^{3}$, and the mass of the hairy black hole is \begin{equation} m_{2}^{'}=\frac{4\pi c^{2} }{\kappa}r_{g}=\frac{c^{2}}{2G_{N}}\biggl{(}\frac{\alpha}{6\eta^{3}}\biggr{)} \end{equation} In the next section (\ref{massBh}) we verify that this is the gravitational mass of the black hole. \section{Mass of hairy black holes} \label{massBh} Similarly to the holographic formalism for asymptotically AdS space-times \cite{Myers:1999psa,Anabalon:2015ija,Anabalon:2015xvl,Anabalon:2016izw,Balasubramanian:1999re}, for asymptotically flat space-times there exists an analogous proposal given in \cite{Astefanesei:2005ad,Astefanesei:2009wi,Astefanesei:2006zd,Astefanesei:2010bm}. If the boundary has the topology $S^{2}\times R\times S^{1}$, the gravitational counterterm is \begin{equation} S^{ct}=-\frac{1}{\kappa}\int_{\partial\mathcal{M}}d^{3}x\sqrt{2\mathcal{R}}~\sqrt{-h} \end{equation} The (quasilocal) stress tensor was defined in \cite{Brown:1992br} as \begin{equation} \tau_{ab}\equiv\frac{2}{\sqrt{-h}}\frac{\delta S}{\delta h^{ab}} \end{equation} where, for the total action (including the boundary terms) given in (\ref{action}), we have \cite{Astefanesei:2005ad} \begin{equation} \tau_{ab}=-\frac{1}{\kappa}\left[K_{ab}-h_{ab}K-\Psi(\mathcal{R}_{ab}-\mathcal{R}h_{ab})-h_{ab}\Box\Psi+\Psi_{;ab}\right] \end{equation} Consider the foliation ($x=\mathrm{const}$) of the metrics (\ref{Ansatz}) and (\ref{Ansatz2}) (both foliations are similar), \begin{equation} h_{ab}dx^{a}dx^{b}=\Omega(x)[-c^{2}f(x)dt^{2}+d\theta^{2}+\sin^{2}{\theta}d\varphi^{2}] \label{hmetric} \end{equation} where $\mathcal{R}_{ab}$ and $\mathcal{R}$ are the Ricci tensor and the Ricci scalar of the foliation (\ref{hmetric}). The expression for $\Psi$ was given in \cite{Astefanesei:2006zd}: $\Psi=\sqrt{2/\mathcal{R}}$, so that \begin{equation} \mathcal{R}_{00}=0 \quad,\quad \mathcal{R}_{\theta\theta}=1 \quad,\quad \mathcal{R}_{\varphi\varphi}=\sin^2\theta \quad,\quad \mathcal{R}=\frac{2}{\Omega(x)} \quad,\quad \Psi=\sqrt{\Omega(x)} \end{equation} Here $K_{ab}$, $K$ and $n_{a}$ are the extrinsic curvature, its trace and the normal of the time-like hypersurface defined by the induced metric (\ref{hmetric}). 
We use the following very useful expressions\footnote{These expressions are correct only when the metric $g_{\mu\nu}$ and the induced metric $h_{ab}$ are diagonal.}: \begin{equation} K_{ab}=\frac{\sqrt{g^{xx}}}{2}\partial_{x}h_{ab}, \qquad n_{a}=\frac{\delta_{a}^{x}}{\sqrt{g^{xx}}}, \qquad K\sqrt{-h}=n^{a}\partial_{a}\sqrt{-h} \end{equation} It is then easy to obtain $T^{\nu}_{tt}$ for the solution (\ref{Ansatz}) and $T^{\infty}_{tt}$ for (\ref{Ansatz2}): \begin{equation} T^{\nu}_{tt}=-\frac{c^{2}}{\kappa}\biggl{[}-\frac{(\Omega f)^{'}}{2\eta}\sqrt{\frac{f}{\Omega}} +\frac{\sqrt{\Omega f}}{2\eta}\biggl{(}\frac{3\Omega^{'}f}{\Omega}+f^{'}\biggr{)}-2\sqrt{\Omega}f\biggr{]} \end{equation} \begin{equation} T^{\infty}_{tt}=-\frac{c^{2}}{\kappa}\biggl{[}-\frac{x(\Omega f)^{'}}{2\eta}\sqrt{\frac{f}{\Omega}} +\frac{x\sqrt{\Omega f}}{2\eta}\biggl{(}\frac{3\Omega^{'}f}{\Omega}+f^{'}\biggr{)}-2\sqrt{\Omega}f\biggr{]} \end{equation} The conserved quantity associated with the generator of time translation symmetry, $\xi=\partial/c\partial t$, is the energy. For both metrics (\ref{Ansatz}) and (\ref{Ansatz2}) we have \begin{equation} E=c^{4}\int d^{2}\Sigma\sqrt{\sigma}m^{a}\xi^{b}T_{ab}=4\pi c^{2} T_{tt}\sqrt{\frac{\Omega}{f}}~\biggr{\vert}_{x\rightarrow 1} \end{equation} where $\sigma_{ab}$ is the metric of the (spherical) transverse section, with $\sqrt{\sigma}=\Omega\sin{\theta}$, and the normal to the foliation $(t=\mathrm{const})$ is $m_{a}=\delta_{a}^{t}c\sqrt{\Omega f}$. The energies, or masses, of the hairy black holes (\ref{Ansatz}) and (\ref{Ansatz2}) are, respectively ($E=m^{'}c^{2}$), \begin{equation} m^{'}_{1}=\frac{4\pi c^{2}}{\kappa}\biggl{[}\frac{\alpha+3\eta^{2}}{3\eta^{3}}+O(r^{-1})\biggr{]}_{r=\infty} \end{equation} \begin{equation} m^{'}_{2}=\frac{4\pi c^{2}}{\kappa} \biggl{[}\frac{\alpha}{6\eta^{3}}+O(r^{-1})\biggr{]}_{r=\infty} \end{equation} \section{Hamilton-Jacobi method} In this section, we use the Hamilton-Jacobi method to obtain the corrections to the periapsis shift and the deflection of light for the hairy solutions of sections \ref{Sol1} and \ref{Sol2}, following the intuitive procedure of section \ref{Gr}. \subsection{Corrections of hairy solution} We study the geodesics and null geodesics of the metric given in (\ref{Ansatz}). 
Setting $\theta=\pi /2$, we have \begin{equation} ds^{2}=\Omega(x)\biggl{[}-c^{2}f(x)dt^{2}+\frac{\eta^{2}dx^{2}}{f(x)}+d\varphi^{2}\biggr{]} \end{equation} The Hamilton-Jacobi equation for the geodesic of a celestial body of mass $m$ orbiting a hairy black hole is \begin{equation} -\frac{1}{c^{2}\Omega f}\biggl{(}\frac{\partial I}{\partial t}\biggr{)}^{2}+\frac{f}{\Omega\eta^{2}}\biggl{(}\frac{\partial I}{\partial x}\biggr{)}^{2}+\frac{1}{\Omega}\biggl{(}\frac{\partial I}{\partial\varphi}\biggr{)}^{2}+m^{2}c^{2}=0 \end{equation} With the ansatz $I=-\mathcal{E}_{0}t+M\varphi+I_{x}(x)$, the solution is \begin{equation} I_{x}(x)=\int\sqrt{\frac{\eta^{2}}{f^{2}}\biggl{(}\frac{\mathcal{E}_{0}^{2}}{c^{2}}-m^{2}c^{2}\Omega f\biggr{)}-\frac{\eta^{2}M^{2}}{f}}~dx \end{equation} Considering the coordinate transformation (\ref{trans1}) and, as in the Schwarzschild case, the shift $r\rightarrow r+\frac{r_{g}}{2}$ with $\mathcal{E}_{0}=mc^{2}+\mathcal{E}^{'}$, we have \begin{equation} x=1-\frac{1}{\eta (r+\frac{r_{g}}{2})}+\frac{(\nu^{2}-1)}{24\eta^{3}(r+\frac{r_{g}}{2})^{3}}\biggl{[}1+\frac {1}{\eta (r+\frac{r_{g}}{2})}-\frac{9(\nu^{2}-9)}{80\eta^{2}(r+\frac{r_{g}}{2})^{2}}\biggr{]}+O(r^{-6}) \label{transhairy1} \end{equation} We can then write $I_{x}$ as $I_{r}$ when $\frac{r_{g}}{r}\ll1$: \begin{equation} I_{r}=\int\sqrt{2m(E-U)+\frac{3\mathcal{A}m^{2}c^{2}r_{g}^{2}}{2r^{2}}-\frac{M^{2}}{r^{2}}+O\biggl{(}\frac{r_{g}^{3}}{r^{3}}\biggr{)}}dr, \qquad \mathcal{A}=1-\frac{\mathcal{E}^{'}(\nu^{2}-1)}{3mc^{2}\eta^{2}r_{g}^{2}} \label{hairy1} \end{equation} where the expressions for $E$ and $U$ were given in (\ref{EUN}). The new factor $\mathcal{A}$ depends on the hairy parameter $\nu$ and the integration constant $\eta$, and we can show that it gives a hairy correction to the periapsis. For this purpose, we expand (\ref{hairy1}) in $r_{g}/r\ll1$ (keeping $M^{2}/r^{2}$ fixed) with $E\approx\mathcal{E}^{'}$, $U\approx U_{N}$: \begin{equation} \Delta I_{r}=\int_{r_{min}}^{r_{max}}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}~dr+\frac{3m^{2}c^{2}r_{g}^{2}\mathcal{A}}{4}\int_{r_{min}}^{r_{max}}\frac{1}{r^{2}}\biggl{[}\sqrt{2m(\mathcal{E}^{'}-U_{N})-\frac{M^{2}}{r^{2}}}\biggr{]}^{-1}dr+O\biggl{(}\frac{r_{g}^{4}}{r^{4}}\biggr{)} \end{equation} \begin{equation} \Delta I_{r}=\Delta I_{r}^{(0)}-\frac{3m^{2}c^{2}r_{g}^{2}\mathcal{A}}{4M}\frac{\partial\Delta I_{r}^{(0)}}{\partial M}+O\biggl{(}\frac{r_{g}^{4}}{r^{4}}\biggr{)} \end{equation} It is then easy to show that \begin{equation} \Delta\varphi\approx 2\pi+\frac{3\pi m^{2}c^{2}r_{g}^{2}\mathcal{A}}{2M^{2}}, \qquad \mathcal{A}=1+\frac{(\nu^{2}-1)}{4a}\biggl{(}\frac{\eta}{\alpha+3\eta^2}\biggr{)} \label{varphi1} \end{equation} When $\nu=1$ we have $\mathcal{A}=1$, i.e. the Schwarzschild case given in (\ref{Schwaphi}). Here the orbital parameter $a$ is the semi-major axis.\footnote{According to astronomical observations \cite{Eisenhauer:2005cv,Ghez:2003qj}, nine stars are currently known to orbit Sagittarius $A^{*}$, including $S1$, $S2$ and $S8$, and their semi-major axes range between $a\sim 900-3500~AU$. 
The astronomical unit $(AU)$ is the radius of the Earth's orbit, equivalent to $1.495\times 10^{11}~m$.\\ According to \cite{Ghez:1998ph,Ghez:2003qj,Schodel:2002vg} one finds $r_{g}=\frac{2m^{*}G_{N}}{c^{2}}\approx 0.1~AU$, where $m^{*}$ is the mass of Sagittarius $A^{*}$, so that $r_{g}\ll a$.} If we consider the horizon equation $f(x_{h},\eta,\alpha)=0$ and solve for $\eta=\eta (x_{h},\alpha)$, we can write the hairy correction $\mathcal{A}$ as a function $\mathcal{A}=\mathcal{A}(a,\nu,x_{h},\alpha)$. \begin{figure}[h] \centering \includegraphics[scale=0.39]{AvsX1.eps} \includegraphics[scale=0.39]{AvsX2.eps} \caption{ The left-hand graphic (c) and the right-hand graphic (d) show (\ref{varphi1}) versus $x_{h}$ for the negative branch, with $a=2000~AU$ and $\alpha=2~AU^{-2}$ fixed. According to our approximation $r_{g}/r\ll1$, which means $x_{h}\ll1$, all these hairy black holes ($\nu>1$) have a corrective factor $\mathcal{A}>1$; in (d) the correction is larger than in (c) because the hair is bigger. \label{AvsX}} \end{figure} \newpage Similarly to the Schwarzschild case, we can construct the eikonal equation, whose solution is \begin{equation} \psi=-\omega_{0}t+M\varphi+\psi_{x} \end{equation} \begin{equation} \psi_{x}=\frac{\omega_{0}}{c}\int{\sqrt{1-\varrho^{2}f}}\frac{\eta}{f}dx, \qquad \varrho=\frac{Mc}{\omega_{0}} \end{equation} The equation of the light trajectory is given by\footnote{It is easy to show that the Schwarzschild case ($\nu=1$) given in (\ref{lightrayec}) is obtained when % \begin{equation} x=1-\frac{1}{\eta r}, \qquad \Omega f=1-\frac{r_{g}}{r}, \qquad \Omega=r^{2}, \qquad \eta dx=\frac{dr}{r^{2}} \end{equation}} \begin{equation} \varphi(x)=\int\frac{\eta dx}{\sqrt{\frac{1}{\varrho^{2}}-f}}+\mathrm{const} \end{equation} To find the hairy-gravity corrections we consider once more the coordinate transformation given in (\ref{transhairy1}), which yields \begin{equation} \psi_{r}\approx\frac{\omega_{0}}{c}\int\sqrt{1+\frac{2r_{g}}{r}-\frac{\varrho_{\nu}^{2}}{r^{2}}}dr \end{equation} where the impact parameter $\varrho$ receives a new hairy correction, \begin{equation} \varrho^{2}_{\nu}=\varrho^{2}+\frac{\nu^{2}-1}{4\eta^{2}} \end{equation} Now, expanding in $r_{g}/r\ll1$, keeping $\varrho_{\nu}^{2}/r^{2}$ fixed, \begin{equation} \psi_{r}\approx\frac{\omega_{0}}{c}\int\sqrt{1-\frac{\varrho^{2}}{r^{2}}}dr+\frac{\omega_{0}r_{g}}{c}\int\frac{dr}{\sqrt{r^{2}-\varrho_{\nu}^{2}}} \end{equation} In the first term of the expansion of $\psi_{r}$ we have $\sqrt{1-\frac{\varrho^{2}}{r^{2}}-\frac{\nu^{2}-1}{4\eta^{2}r^{2}}}\approx\sqrt{1-\frac{\varrho^{2}}{r^{2}}}$: using the horizon equation, $\eta=\eta (x_{h},\alpha,\nu)$, one can prove that for $x_{h}\ll1$ ($r_{g}\ll r$) $\eta^{2}$ is large, so $\frac{\nu^{2}-1}{4\eta^{2}r^{2}}\approx 0$. In the second term, however, we have $\sqrt{r^{2}-\varrho^{2}-\frac{\nu^{2}-1}{4\eta^{2}}}$, in which $\frac{\nu^{2}-1}{4\eta^{2}}$ is relevant. 
This is precisely the relevant hairy correction: \begin{equation} \Delta\varphi=-\frac{\partial\Delta\psi_{r}^{(0)}}{\partial M}+\frac{2r_{g}Mc}{\omega_{0}\varrho_{\nu}^{2}}\frac{1}{\sqrt{1-\frac{\varrho_{\nu}^{2}}{R^{2}}}} \end{equation} and when $R\rightarrow\infty$, \begin{equation} \Delta\varphi\approx\pi+\frac{2r_{g}}{\varrho}~\frac{\varrho^{2}}{\varrho_{\nu}^{2}} \end{equation} Once more, when $\nu=1$ we obtain the Schwarzschild\footnote{For a ray of light passing near the edge of the Sun one obtains $1.75^{''}$.} case given in (\ref{Schwalighdeflex}). Using the horizon equation $f(\eta,x_{h},\nu,\alpha)=0$ we have $\eta=\eta (x_{h},\nu,\alpha)$, and \begin{equation} \frac{\varrho_{\nu}^{2}}{\varrho^{2}}\biggr{\vert}_{(x_{h},\nu,\alpha)}=1+\frac{(\nu^{2}-1)}{4\eta^{2}\varrho^{2}} \label{rho1} \end{equation} To plot $\varrho^{2}/\varrho_{\nu}^{2}$ versus $x_{h}$, we take $\varrho$ in multiples of $R_{\odot}$ (the solar radius). For example, the early-B hypergiant stars (BHGs) have radii in the range $\varrho=63.5R_{\odot}-246R_{\odot}$ \cite{Clark:2012ne}. The figures in (\ref{rhovsX}) show that the scalar field (the hair) screens the effect of the gravitational field, causing the deflection of light to be smaller than in the Schwarzschild case ($\nu=1$). \begin{figure}[h] \centering \includegraphics[scale=0.39]{rhovsX1.eps} \includegraphics[scale=0.39]{rhovsX2.eps} \caption{ The left-hand graphic (e) and the right-hand graphic (f) show (\ref{rho1}) versus $x_{h}$ for the negative branch, with $\varrho=200R_{\odot}$ and $\alpha=10R_{\odot}^{-2}$ fixed. According to our approximation $r_{g}/r\ll1$, which means $x_{h}\ll1$, all these hairy black holes ($\nu>1$) have a screening factor $\frac{\varrho^{2}}{\varrho_{\nu}^{2}}<1$; in (f) the screening is larger than in (e) because the hair is bigger. \label{rhovsX}} \end{figure} \newpage \subsection{Corrections of hairy solution $\nu=\infty$} We study the geodesics and null geodesics of the metric given in (\ref{Ansatz2}). 
Setting $\theta=\pi /2$, we have \begin{equation} ds^{2}=\Omega(x)\biggl{[}-c^{2}f(x)dt^{2}+\frac{\eta^{2}dx^{2}}{x^{2}f(x)}+d\varphi^{2}\biggr{]} \end{equation} The Hamilton-Jacobi equation is \begin{equation} -\frac{1}{c^{2}\Omega f}\biggl{(}\frac{\partial I}{\partial t}\biggr{)}^{2}+\frac{x^{2}f}{\Omega\eta^{2}}\biggl{(}\frac{\partial I}{\partial x}\biggr{)}^{2}+\frac{1}{\Omega}\biggl{(}\frac{\partial I}{\partial\varphi}\biggr{)}^{2}+m^{2}c^{2}=0 \end{equation} With the ansatz $I=-\mathcal{E}_{0}t+M\varphi+I_{x}(x)$, the solution for $I_{x}(x)$ is \begin{equation} I_{x}=\int\sqrt{\frac{\eta^{2}}{x^{2}f^{2}}\biggl{(}\frac{\mathcal{E}_{0}^{2}}{c^{2}}-m^{2}c^{2}\Omega f\biggr{)}-\frac{\eta^{2}M^{2}}{x^{2}f}}~dx \end{equation} Considering the coordinate transformation (\ref{trans2}) and, in a form similar to the Schwarzschild case, the shift $r\rightarrow r+\frac{r_{g}}{2}$ with $\mathcal{E}_{0}=mc^{2}+\mathcal{E}^{'}$, we can write $I_{x}$ as $I_{r}$ when $\frac{r_{g}}{r}\ll1$: \begin{equation} I_{r}=\int\sqrt{2m(E-U)+\frac{3\mathcal{A}m^{2}c^{2}r_{g}^{2}}{2r^{2}}-\frac{M^{2}}{r^{2}}+O\biggl{(}\frac{r_{g}^{3}}{r^{3}}\biggr{)}}~dr, \qquad \mathcal{A}=1-\frac{\mathcal{E}^{'}}{mc^{2}\eta^{2}r_{g}^{2}} \label{hairy2} \end{equation} where the expressions for $E$ and $U$ were given in (\ref{EUN}). The new factor $\mathcal{A}$ depends on the integration constant $\eta$, and we show that it gives a hairy correction to the periapsis shift. For this purpose we expand (\ref{hairy2}) in $r_{g}/r\ll1$ (keeping $M^{2}/r^{2}$ fixed) with $E\approx\mathcal{E}^{'}$, $U\approx U_{N}$, and we obtain \begin{equation} \Delta\varphi\approx 2\pi+\frac{3\pi m^{2}c^{2}r_{g}^{2}\mathcal{A}}{2M^{2}} \label{varphi2} \end{equation} where the orbital parameter $a$ is the semi-major axis\footnote{Here we use the horizon equation given in (\ref{ecuhori}).} and \begin{equation} \mathcal{A}(a,\alpha,x_{h})=1+\frac{3\sqrt{\alpha}}{2a}~\sqrt{\frac{2x_{h}\ln{x_{h}}-x_{h}^{2}+1}{2(x_{h}-1)^{2}}} \label{A2} \end{equation} In this case there is no hairy parameter for which $\mathcal{A}=1$; the only way to obtain $\mathcal{A}=1$ is $\alpha=0$, but according to (\ref{f}) and (\ref{f1}) this case corresponds to a naked singularity. Even if we consider asymptotically AdS black holes~\footnote{The hairy black holes (asymptotically AdS) constructed in \cite{Anabalon:2012dw,Anabalon:2013qua,Acena:2012mr,Acena:2013jya} have more general scalar potentials consisting of two parts, $V(\phi)\sim \alpha (\ldots)+\Lambda (\ldots)$, where $\Lambda=-3/l^{2}$ is the cosmological constant; when $\alpha=0$ they admit new interesting interpretations, such as domain walls, whose structure is ensured by the existence of the potential $V(\phi)\sim \Lambda (\ldots)$ \cite{Anabalon:2013eaa}.}, the special case $\alpha=0$ is a naked singularity. It is clear that $\alpha$ plays an important role in the existence of a regular horizon. The condition $\alpha\neq 0$ ensures the existence of the scalar potentials given in (\ref{pot1}), (\ref{pot2}); this means that if $\alpha=0$ there is no self-interaction of the scalar field to ensure its stability and, at the same time, we have a naked singularity. \begin{figure}[h] \centering \includegraphics[scale=0.39]{AvsXNu1.eps} \includegraphics[scale=0.39]{AvsXNu2.eps} \caption{ The left-hand graphic (g) shows (\ref{A2}) versus $x_{h}$ for the negative branch, with $a=2000~AU$ and $\alpha=(100-500)~AU^{-2}$ fixed; the Schwarzschild case ($Schw$) is included for comparison. 
According to our approximation $r_{g}/r\ll1$, which means $x_{h}\ll1$, all these hairy black holes have different (small) corrective factors, approximately $\mathcal{A}\sim 1$. \\ The right-hand graphic (h) shows (\ref{A2}) versus $x_{h}$ for the negative branch, with $a=2000~AU$ and $\alpha=(0.5-2)\times10^{4}~AU^{-2}$ fixed; the Schwarzschild case ($Schw$) is again included for comparison. Here the hairy black holes have different corrective factors, and the correction is larger than in the previous case because $\alpha$ is bigger. \label{AvsX1}} \end{figure} \newpage To describe the light trajectory we use the eikonal equation. The ansatz is $\psi=-\omega_{0}t+M\varphi+\psi_{x}$, and the solution for $\psi_{x}$ is \begin{equation} \psi_{x}=\frac{\omega_{0}}{c}\int{\sqrt{1-\varrho^{2}f}}\frac{\eta}{xf}dx, \qquad \varrho=\frac{Mc}{\omega_{0}} \end{equation} The equation of the light trajectory is given by \begin{equation} \varphi(x)=\int\frac{\eta dx}{x\sqrt{1/\varrho^{2}-f}}+\mathrm{const} \end{equation} Note that here we cannot obtain Schwarzschild as a smooth limit. To get the hairy-gravity corrections we consider once more the coordinate transformation given in (\ref{trans2}); for $\frac{r_{g}}{r}\ll1$ we can show that \begin{equation} \psi_{r}\approx\frac{\omega_{0}}{c}\int\sqrt{1+\frac{2r_{g}}{r}-\frac{\varrho_{\alpha}^{2}}{r^{2}}}dr \end{equation} where the impact parameter $\varrho$ acquires a new hairy term, \begin{equation} \varrho^{2}_{\alpha}=\varrho^{2}+\frac{3}{4\eta^{2}} \end{equation} Now, expanding in $r_{g}/r\ll1$, keeping $\varrho_{\alpha}^{2}/r^{2}$ fixed, we have \begin{equation} \psi_{r}\approx\frac{\omega_{0}}{c}\int\sqrt{1-\frac{\varrho^{2}}{r^{2}}}dr+\frac{\omega_{0}r_{g}}{c}\int\frac{dr}{\sqrt{r^{2}-\varrho_{\alpha}^{2}}} \end{equation} In the first term of the expansion of $\psi_{r}$ we have $\sqrt{1-\frac{\varrho^{2}}{r^{2}}-\frac{3}{4\eta^{2}r^{2}}}\approx\sqrt{1-\frac{\varrho^{2}}{r^{2}}}$: using $\eta=\eta (x_{h},\alpha)$ from (\ref{ecuhori}), one can prove that for $x_{h}\ll1$ ($r_{g}\ll r$) $\eta^{2}$ is large, so $\frac{3}{4r^{2}\eta^2}\approx 0$. In the second term, however, we have $\sqrt{r^{2}-\varrho^{2}-\frac{3}{4\eta^{2}}}$, in which $\frac{3}{4\eta^{2}}$ cannot be neglected. This is precisely the relevant hairy correction. Working in the same way as in the previous section, we can show that \begin{equation} \Delta\varphi=\pi+\frac{2r_{g}}{\varrho}~\frac{\varrho^{2}}{\varrho_{\alpha}^{2}}; \qquad \frac{\varrho_{\alpha}^{2}}{\varrho^{2}}\biggr{\vert}_{(x_{h},\alpha)}=1+\frac{3}{2\alpha\varrho^{2}}~\frac{(x_{h}-1)^{2}}{2x_{h} \ln{x_{h}}-x_{h}^{2}+1} \label{rho2} \end{equation} \begin{figure}[h] \centering \includegraphics[scale=0.39]{rhovsX1nu.eps} \includegraphics[scale=0.39]{rhovsX2nu.eps} \caption{ The left-hand graphic (i) shows (\ref{rho2}) versus $x_{h}$ for the negative branch, with the impact parameter fixed at $\varrho=2000R_{\odot}$; according to our approximation $r_{g}/r\ll1$, which means $x_{h}\ll1$, these hairy black holes have different screening factors $\frac{\varrho^{2}}{\varrho_{\alpha}^{2}}<1$ for values $\alpha\approx 0$. \\ The right-hand graphic (k) shows (\ref{rho2}) versus $x_{h}$ for the negative branch, with $\varrho=2000R_{\odot}$ fixed. 
According to our approximation $r_{g}/r\ll1$, which means $x_{h}\ll1$, these hairy black holes have a small screening factor; compared with the previous case the screening is small, with $\frac{\varrho^{2}}{\varrho_{\alpha}^{2}}\sim \frac{1}{\alpha}$ at fixed $\varrho$. \label{rhovsX2}} \end{figure} \newpage \section{Conclusions} Sagittarius $A^{*}$ is a stellar system located at the center of our galaxy. It consists of a supermassive black hole around which, up to now, the orbits of nine stars have been studied \cite{Eisenhauer:2005cv,Ghez:2003qj,Ghez:1998ph,Schodel:2002vg,Clark:2012ne}. This black hole has an accretion disk, and there are indications that dark matter influences the gravitational dynamics of these stars \cite{Ghez:1998ph}. In 2004, the black hole source IRS 13 (together with the stars that orbit it) was discovered orbiting Sagittarius $A^{*}$. The principal idea of the present paper is that the hairy solutions of sections \ref{Sol1} and \ref{Sol2} could provide an effective model for such a more complicated gravitational system. Our idea is to treat this gravitational system (for example Sagittarius $A^{*}$) as a black hole with scalar hair; in particular, the solution of section \ref{Sol1} has a hairy parameter $\nu$ that can be adjusted so as to describe, for example, Sagittarius $A^{*}$. We propose (as an elementary model) to use the results for the periapsis shift given in (\ref{varphi1}), (\ref{varphi2}) together with astronomical observations of the stars that orbit Sagittarius $A^{*}$~\footnote{There is no concrete evidence of the rotation of Sagittarius $A^{*}$, but in general all observed black holes rotate. We consider the hairy black hole as a simple static model of the stationary system of Sagittarius $A^{*}$, assuming that its rotation is slow enough.}, estimate their periapsis shifts and thereby fix the hair parameter $\nu$ (and $\alpha$). In a similar way it may be possible to use the equations for the deflection of light given in (\ref{rho1}), (\ref{rho2}). An interesting future direction is to study and classify the different orbits of these hairy black holes. \section{Acknowledgments} D. Choque would like to thank Dumitru Astefanesei for interesting discussions. D. Choque acknowledges the hospitality of the Universidad Nacional de San Antonio Abad del Cusco (UNSAAC) during the stages of this research. This work has been done with support from the grant 047-2017-FONDECYT-DE. The UNSAAC is funded by the Peruvian Government through the Financing Program of CONCYTEC. \bigskip \bigskip \bigskip \bigskip \newpage
1,477,468,750,813
arxiv
\section{Conclusion and Future Work} In this paper, we presented a parallel framework for fast computation of inverse and forward dynamics of articulated robots using parallel scan operations, and reported details of its implementation on a hybrid CPU-GPU platform. Our contributions are summarized as follows: \begin{enumerate} \item We reformulated the recursive Newton-Euler inverse dynamics algorithm into a forward-backward two-phase parallel scan operation, based on several generalizations of the standard scan operation. \item We reformulated the joint space inertia inversion algorithm (JSIIA) into $n+1$ data-independent scan operations along with a positive definite matrix inversion operation. \item We reformulated the complete articulated-body inertia algorithm (ABIA) into a nonlinear recursion and a forward-backward-forward three-phase parallel scan operation. \item We implemented the aforesaid algorithms on a hybrid CPU-GPU platform with extensive use of Nvidia CUDA standard libraries. This makes our experimental results easily reproducible for further study and comparison. \end{enumerate} Besides, since our parallel framework is based on a coordinate-free Lie group / semigroup formulation and is not representation-specific, the corresponding parallel algorithms apply equally well to representations other than matrices, such as dual quaternions. Our future work shall involve expansion of our parallel framework to include parallel scan of the gradient and Hessian of inverse dynamics \cite{lee2005newton} and other parallel FD algorithms such as CFA \cite{fijany1995parallel}, and eventually an automatic scheduler for hardware-specific optimal performance. \section{Implementation} \label{sec:gpu-impl} So far, we have presented a complete parallel framework for utilizing parallel scans to accelerate articulated robot ID and FD computation. In comparison to the classic recursive algorithms \cite{ploen1999coordinate}, we have eliminated a large portion of intermediate variables by assembling multiple linear recursions into a minimal number of large-dimension linear recursions (two for ID, and three for ABIA). However, such a mathematically compact form does not necessarily lead to an optimal data structure for implementation on a particular hardware platform. In this section, we materialize the corresponding parallel algorithms on a hybrid CPU-GPU platform, and illustrate how hardware constraints may weigh in to tailor our algorithms, using the flexibility of composing/decomposing linear recursions. We adopt Nvidia CUDA as our software platform so that our implementation may be easily reproduced for further study and comparison. \subsection{Parallel Inverse Dynamics Algorithm} In Section~\ref{sec:recap:ID}, we proposed a forward-backward double-scan parallelization of the recursive ID algorithm~\cite{park1995lie}, where the forward scan computes all link velocities and accelerations and the backward scan computes the bias forces. In the actual GPU implementation, we split the scan for velocities and accelerations into two successive scans so that the scan data may be fitted into the GPU's high-speed shared memory and local registers. The computation of the bias forces $\hat F_i$ becomes perfectly parallel after the velocity and acceleration scans, and may simply be carried out on $n$ GPU threads, with $n$ being the number of links. The details of our parallel ID algorithm are illustrated in Algorithm~\ref{alg:par_id_alg}. 
\begin{algorithm}[!h] \caption{Parallel Inverse Dynamics $\textsc{CalcInvDyn}([M_i], [q_i],[\dot{q}_i],[\ddot{q}_i])$} \label{alg:par_id_alg} \begin{algorithmic}[1] \PARCMPT Compute adjacent transformations in parallel. \STATE $[f_{i-1,i}] \gets \textsc{CalcTransform}([M_i],[S_i],[q_i]) $ \FWSCAN Compute body velocities. \STATE $ [V_i] \gets \textsc{InclusiveVelScan}([f_{i-1,i}],[S_i],[\dot{q}_i]) $ \FWSCAN Compute body accelerations. \STATE $ [\dot{V}_i] \gets \textsc{InclusiveAccScan}([f_{i-1,i}],[S_i], [\ddot{q}_i],[V_i]) $ \BWSCAN Compute body forces. \STATE $ [F_i] \gets \textsc{InclusiveForceScan}([f_{i-1,i}], [V_i], [\dot{V_i}]) $ \PARCMPT Compute joint torques. \STATE $ [\tau_i] \gets \textsc{CalcTorque}([S_i],[F_i]) $ \end{algorithmic} \end{algorithm} \subsection{Forward Dynamics Algorithm} As shown in Section~\ref{sec:dyn-scan}, the parallelization of FD algorithms such as JSIIA and ABIA relies on recognizing the parallel ID algorithm as a common subroutine to compute the bias torques and/or the inertia matrix. Although neither FD algorithm is completely scannable, the JSIIA may alternatively be accelerated by solving a positive definite linear system of equations in parallel. The ABIA, on the other hand, relies on a CPU recursion to compute the ABI, and may be accelerated by careful scheduling of the hybrid CPU/GPU system. \subsubsection{Parallel JSIIA} Given a robot with $n$ links, the JSIIA calls the inverse dynamics routine $n+1$ times with different inputs. The first call computes the bias torques $\tau^{\text{bias}}_i$ by setting zero-acceleration input, i.e., $\ddot{q}_i = 0, i=1,\dots,n$. The other $n$ inverse dynamics calls compute the $n$ columns of the JSI $M(q)$ by setting $\dot{q}=0$ and $\ddot{q} = \delta_{\cdot,j}, j=1,\dots,n$. All $n+1$ inverse dynamics calls are data-independent, and thus may be solved simultaneously on GPUs. Besides, the additional matrix operations involved in the JSIIA can also be computed in parallel on GPUs for further acceleration. The details of our parallel JSIIA are illustrated in Algorithm~\ref{alg:JSI}. \begin{algorithm}[!h] \caption{Parallel JSIIA $\textsc{CalcFwdDyn\_JSIIA}([\tau^{\text{in}}_i], [q_i], [\dot{q}_i])$} \begin{algorithmic}[1] \PARCMPT Compute the bias torques and the joint space inertia matrix. \STATE $\begin{aligned} ([\tau^{\text{bias}}_i],\ M_{\cdot,1,...,n}) \gets (&\textsc{CalcInvDyn}([q_i], [\dot{q}_i], [0]), \\ &\textsc{CalcInvDyn}([q_i], [0], [\delta_{i,1}]), \\ &..., \\ &\textsc{CalcInvDyn}([q_i], [0], [\delta_{i,n}]) \end{aligned}$ \PARCMPT Compute differential torques. \STATE $ [\tau^{\text{diff}}_i] \gets \textsc{CalcDiffTorques}([\tau^{\text{in}}_i],\ [\tau^{\text{bias}}_i]) $ \PARCMPT Compute the inverse of the joint space inertia matrix. \STATE $M^{-1} \gets \textsc{CalcMatrixInv}(M) $ \PARCMPT Compute joint accelerations. \STATE $ [\ddot{q}^{\text{out}}_i] \gets \textsc{CalcAcc}(M^{-1},\ [\tau^{\text{diff}}_i]) $ \end{algorithmic} \label{alg:JSI} \end{algorithm} \subsubsection{Hybrid ABIA Algorithm} In comparison to the JSIIA, the ABIA leverages the inverse dynamics routine only once, for the bias torque computation. The ABIs $\hat J_i$ are recursively computed using~\eqref{eq:abi} on the CPU and sent back to the GPU for the subsequent forward dynamics computation. The high latency of sending data from the CPU main memory to the GPU video memory is partially hidden through asynchronous computation of the ABIs and the bias torques. All remaining components can be performed in parallel on GPUs. 
This includes the computation of several intermediate variables, which are either perfectly parallel (lines 4, 6, 8 in Algorithm~\ref{alg:hybrid_abi}) or scannable (lines 5, 7 in Algorithm~\ref{alg:hybrid_abi}). The complete description of the hybrid ABIA algorithm is shown in Algorithm~\ref{alg:hybrid_abi}. \begin{algorithm} \caption{Hybrid ABIA Algorithm $\textsc{CalcFwdDyn\_ABIA}([\tau_{\text{in}_i}], [q_i], [\dot{q}_i])$} \label{alg:hybrid_abi} \begin{algorithmic}[1] \HETER \STATE Compute bias torques. \begin{align*} [\tau^{\text{bias}}_i] \gets \textsc{CalcInvDyn}([q_i], [\dot{q}_i], [0]) \end{align*} \STATE Compute differential torques. \begin{align*} [\tau^{\text{diff}}_i] \gets \textsc{CalcDiffTorques}([\tau^{\text{in}}_i],[\tau^{\text{bias}}_i]) \end{align*} \STATE Compute articulated body inertia on the CPU. \begin{align*} [\hat{J}_i] \gets \textsc{CalcABI}([J_i], [f_{i-1,i}], [S_i]) \end{align*} \STREAM Parallel compute intermediate variables on the GPU. \\ \STATE $ \begin{aligned} &\left( [\mathrm\Omega_i], [\mathrm\Pi_{i, i+1}], [Y_{i,i+1}]\right) \\ & \ \ \ \gets \textsc{CalcInterIntVar}([\hat{J}_i], [f_{i-1,i}], [S_i]) \end{aligned}$ \BWSCAN Compute intermediate variables $ [\hat{z}_i] $. \STATE $ [\hat{z}_i] \gets \textsc{InclusiveZhatScan}([Y_{i,i+1}], [\mathrm\Pi_{i, i+1}], [\tau^{\text{diff}}_i]) $ \PARCMPT Compute intermediate variables $ [\hat{c}_i] $. \STATE $ [\hat{c}_i] \gets \textsc{CalcChat}([\hat{z}_i],\ [S_i],\ [\tau^{\text{diff}}_i]) $ \FWSCAN Compute intermediate variables $ [\lambda_i] $. \STATE $ [\lambda_i] \gets \textsc{InclusiveLambdaScan}([Y_{i,i+1}],\ [S_i],\ [\hat{c}_i]) $ \PARCMPT Compute joint accelerations. \STATE $ [\ddot{q}^{\text{out}}_i] \gets \textsc{CalcAcc}([\lambda_i],\ [\hat{c}_i],\ [\mathrm\Pi_{i, i+1}]) $ \end{algorithmic} \end{algorithm} \section{Experiments} \label{sec:experiment} In this section, we evaluate the performance of our GPU-based parallel dynamics algorithms by comparing their running time to that of their multithreading CPU counterparts in two experiments. The first experiment investigates the speedup and scalability with respect to the number of links of a single robot. The second demonstrates the efficacy of our methods on a large group of robots with a moderate number of links, a setting that poses a bottleneck in many applications such as model-based control optimization. The robot configuration parameters such as twists are initialized randomly before the dynamics computation in all our experiments. Each experiment is repeated 1000 times with randomized joint inputs, and the average computation time is reported. All running times are recorded on a desktop workstation with an 8-core Genuine Intel i7-6700 CPU and 15.6 GB of memory. We implemented our GPU-based parallel dynamics algorithms using CUDA on a Tesla K40c GPU with 11520 MB of video memory and 288 GB/sec memory bandwidth. CPU multithreading is used to parallelize the large number of independent dynamics computations in the second set of experiments, where the number of threads equals the number of independent dynamics computations. \subsection{Experiments with Different Link Numbers} We first compare the GPU's and CPU's performance on computing a single inverse dynamics call for robots with different numbers of links; the result is shown in Figure~\ref{fig:comparison_id}. We observe that as the number of links increases, the serial CPU's computation time increases linearly, while the GPU's time cost grows roughly at an $O(\log n)$ rate. 
Next, we compare the time cost of a single forward dynamics call for robots with different numbers of links. We investigate four different implementations of the forward dynamics computation: GPU parallel JSIIA, CPU-GPU hybrid ABIA, serial JSIIA and serial ABIA. According to the results shown in Figure~\ref{fig:comparison_jsi_abi}, the serial JSIIA has the worst performance and is dominated by the other three methods. The GPU-based parallel JSIIA is the fastest for robots with a moderate number of links, but it is outperformed by the hybrid ABIA and the serial ABIA when the number of links exceeds $100$ and $190$, respectively. This is because the JSIIA involves the inversion of the inertia matrix. For robots with a large number of links, the inertia matrix inversion is more computationally expensive than any component of the ABIA algorithm, and may involve expensive data exchange between global memory and local memory. Our hybrid ABIA algorithm is the most efficient for robots with many links, thanks to the asynchronous data exchange which hides the latency of sending data from the CPU to the GPU. \begin{figure} \subfigure[Comparison of ID]{\label{fig:comparison_id} \includegraphics[width=.5\textwidth]{./fig/comparison_id.pdf}} \subfigure[Comparison of FD]{\label{fig:comparison_jsi_abi} \includegraphics[width=.5\textwidth]{./fig/comparison_jsi_abi_all.pdf}} \caption{Computation time comparison of different algorithms on a single inverse / forward dynamics call for robots with different numbers of links} \label{fig: link_diff} \end{figure} \subsection{Experiments with Different Group Numbers} We now compare the performance of the GPU-based parallel algorithms against the multithreading CPU algorithms when performing a large number of independent dynamics computations. In the CPU implementation, we allocate one thread for each independent dynamics computation. The comparison results for the inverse dynamics implementations are shown in Figure~\ref{fig:id_10} and Figure~\ref{fig:id_100} for robots with 10 links and 100 links, respectively. We see that the running time of the CPU inverse dynamics increases linearly with the group number. In contrast, the computation time of the GPU inverse dynamics increases much more slowly. Due to the limited memory and the necessary thread management of the GPU, the GPU inverse dynamics does not reach the ideal $O(\log(n))$ time complexity. Nevertheless, it provides a satisfactory $\sim$100x speedup when the group number is 1000. \begin{figure} \subfigure[Comparison for robots with 10 links]{\label{fig:id_10} \includegraphics[width=.5\textwidth]{./fig/comparison_id_10l.pdf}} \subfigure[Comparison for robots with 100 links]{\label{fig:id_100} \includegraphics[width=.5\textwidth]{./fig/comparison_id_100l.pdf}} \caption{Computation time comparison of different algorithms on many independent inverse dynamics calls} \label{fig: id_grp} \end{figure} Next, four different approaches for computing the forward dynamics are compared, namely GPU-based parallel JSIIA and ABIA, and CPU multithreading JSIIA and ABIA; the results are shown in Figure~\ref{fig:fd_grp}. From Figure~\ref{fig:fd_10}, we observe that the GPU-based JSIIA is always the most efficient. When the link number is $200$, the result is quite different, as shown in Figure~\ref{fig:fd_200}: the hybrid ABIA always outperforms the other three methods, and the multithreading JSIIA is the slowest approach. 
This phenomenon is consistent with the result in Figure~\ref{fig:comparison_jsi_abi}, where the GPU-based JSIIA and the hybrid ABIA have the shortest running times when the link number is 10 and 200, respectively. The reason is that when the link number is small (e.g., 10), the inertia matrix is small enough to fit in the high-speed memory of the GPU for high performance, and we can further accelerate the dynamics computation by parallelizing the matrix inversions across independent dynamics calls. When the link number is large (e.g., 200), the inertia matrix becomes so large that a single matrix inversion is significantly more expensive than the other components of the dynamics computation. Even worse, due to the limited GPU memory, we have to perform these time-consuming matrix inversions serially and cannot leverage the GPU's parallelism. \begin{figure} \subfigure[Comparison for robots with 10 links]{\label{fig:fd_10} \includegraphics[width=.5\textwidth]{./fig/comparison_fd_10l.pdf}} \subfigure[Comparison for robots with 200 links]{\label{fig:fd_200} \includegraphics[width=.5\textwidth]{./fig/comparison_fd_200l.pdf}} \caption{Computation time comparison of different algorithms on many independent forward dynamics calls} \label{fig:fd_grp} \end{figure} \section{Introduction} \label{sec:intro} Recent developments in humanoid robot online motion planning/replanning~\cite{yamane2003dynamics,lee2005newton,chitchian2012particle,lengagne2013generation,Chretien16} and model-based control optimization and learning~\cite{Todorov:2012:Mujoco,Erez:2012:TOD} have raised new computational challenges for classical articulated robot dynamics algorithms: hundreds to thousands of groups of inverse/forward dynamics and their derivatives under different states and control inputs must be evaluated within several milliseconds. Fortunately, such a computation task is inherently amenable to parallel computation on multiple levels, and may therefore take substantially less time than a sequential implementation. At the outset, the data-independent parallelism across different states and inputs suggests parallel computation at the group level. Within individual groups, the data-dependent parallelism in inverse and forward dynamics is well known to the robotics community, thanks to decades of extensive research on adapting recursive sequential algorithms~\cite{featherstone1983calculation,rodriguez1991spatial,park1995lie} to parallel algorithms for multi-processor systems~\cite{fijany1995parallel,featherstone1999divide1,featherstone1999divide2,anderson2000highly,yamane2007automatic}. Such algorithms are mainly intended for complex simulations with a large number (e.g., $\geq 1000$) of links, which is unusual for robotic applications. A realistic scenario for dynamics computation in articulated robotics may involve simultaneously solving a large number of groups ($>1000$) of inverse/forward dynamics problems for a robot with a moderate number ($10\sim 100$) of links~\cite{lengagne2013generation,Chretien16,Todorov:2012:Mujoco,Erez:2012:TOD,laflin2016enhancing,fijany2013new}. Such a workload is arguably more suitable for a hybrid CPU/GPU implementation in light of the recent popularization of GPGPU technology~\cite{Nick10}.
Several researchers~\cite{Chretien16,Zhang98} have identified part of the inverse dynamics as a prefix sum operation, also known as scan~\cite{blelloch1990prefix,harris2007parallel}, which is a useful and easy-to-implement building block for many parallel algorithms and is well supported by GPU platforms like NVidia CUDA~\cite{bell2011thrust}. These early works do not exploit the Lie group structure within the inverse dynamics, and thus need to compute a large set of temporary parameters in parallel to prepare the data before applying a parallel prefix sum. In fact, articulated robot kinematics and dynamics can be considered a spatial propagation problem~\cite{rodriguez1991spatial,park1995lie,rodriguez1987kalman,featherstone2014rigid}, which naturally leads to linear recursions equivalent to scan operations~\cite{blelloch1990prefix}. This implies that the inverse/forward dynamics problem may be mostly (if not entirely) modeled as a sequence of scan operations, thereby applying state-of-the-art parallel computing technology to the classic dynamics computation problem without reinventing the wheel~\cite{negrut2014parallel}. In this paper, we show that the parallel scan perspective on robot dynamics computation can indeed be carried further: apart from the initialization and some intermediate variables that may be computed in parallel, both the inverse dynamics and the forward dynamics problem may be reformulated as a sequence of scan operations. The paper is organized as follows. In Section~\ref{sec:dyn-recap}, we give a brief review of the recursive Newton-Euler formulation for robot dynamics, following the Lie group approach formally stated in~\cite{park1995lie}. Our main contribution is presented in Sections~\ref{sec:dyn-scan} and~\ref{sec:gpu-impl}, where we formulate the recursive inverse and forward dynamics problems as sequences of scan operations. We show that the scan operator can be considered as the binary operation of certain semigroups. Detailed implementation algorithms are then described. In Section~\ref{sec:experiment}, we present experimental results of a hybrid CPU/GPU implementation of our parallel scan based algorithms and compare them with (multithreading) CPU based implementations. In particular, our method achieves a maximal 500x speedup on inverse dynamics computation, and a maximal 100x speedup on forward dynamics. Without loss of generality, we consider only single open-chain robots in this paper. \section*{Acknowledgments} This work is partially supported by NVidia Corp. and Hong Kong GRF 17204115. Yuanqing Wu is supported by the PRIN 2012 grant No. 20124SMZ88 and the MANET FP7-PEOPLE-ITN grant No. 607643. {\small \bibliographystyle{IEEEtran} \section{Lie Group Formulation of Robot Dynamics} \label{sec:dyn-recap} \subsection{Inverse dynamics} \label{sec:recap:ID} A comprehensive summary of state-of-the-art robot inverse and forward dynamics algorithms is given by~\cite{featherstone2014rigid}. The inverse dynamics problem of deriving the joint torques necessary to generate a specified robot motion is well known to be a forward-backward two-phase propagation process~\cite{park1995lie,rodriguez1987kalman,murray1994mathematical}.
In the forward propagation phase, the motion of each joint (with twist axis $S_i$ and joint variables $q_i$, $\dot{q}_i$ and $\ddot{q}_i$) propagates from the base link toward the end-effector of an $n$-link robot, resulting in a linear recursion for the link velocities and accelerations for $i=1,\dots,n$: \begin{eqnarray}\left\{ \begin{aligned} f_{i-1,i}&=M_ie^{S_iq_i}\\ V_i&=\Ad_{f_{i-1,i}^{-1}}( V_{i-1})+S_i\dot q_i\\ \dot V_i&=S_i\ddot q_i+\Ad_{f_{i-1,i}^{-1}}( \dot V_{i-1} )-\ad_{S_i\dot q_i}\Ad_{f_{i-1,i}^{-1}}(V_{i-1}). \end{aligned}\right. \label{eq:id-forward} \end{eqnarray} Here, we follow closely the notation of~\cite{park1995lie,murray1994mathematical,ploen1999coordinate}. In the backward propagation phase, the reaction force applied by each joint to the succeeding link is successively computed using the Newton-Euler equation, which leads to a second set of linear recursions for the joint reaction forces and joint torques for $i=n,\dots,1$: \begin{equation} \left\{ \begin{aligned} F_i&=\Ad_{f_{i,i+1}^{-1}}^T(F_{i+1})+J_i\dot V_i-\ad_{V_i}^T(J_iV_i)\\ \tau_i&=S_i^TF_i. \end{aligned} \right. \label{eq:id-backward} \end{equation} The base link velocity and acceleration $V_0,\dot V_0$ and the external force $F_{n+1}$ acting on the end link are provided to initiate the recursion. A sequential recursion of \eqref{eq:id-forward} and \eqref{eq:id-backward} leads to the $O(n)$ (for $n$ links) recursive Newton-Euler inverse dynamics algorithm. Here, emphasis should be placed on the coordinate-invariant Lie group description of robot kinematics and dynamics~\cite{murray1994mathematical,ploen1999coordinate}. We will show in Section \ref{sec:dyn-scan} that the scan operands and operators defined by \eqref{eq:id-forward} and \eqref{eq:id-backward} may be considered as the elements and binary operation of a certain matrix semigroup closely related to \SE (the special Euclidean group of $\mathds R^3$). This makes it possible to adapt our parallel framework to other mathematical representations of \SE, such as dual quaternions~\cite{dooley1991spatial} or other Clifford algebras~\cite{selig2010rigid}. For the rest of the paper, we denote the ID function by: \begin{equation} \tau=\ID(q,\dot q,\ddot q,V_0,\dot V_0,F_{n+1}). \end{equation} \subsection{Forward dynamics} \label{sec:racap:FD} The robot forward dynamics (FD) problem refers to the derivation of the joint accelerations $\ddot q=(\ddot q_1,\dots,\ddot q_n)^T$ as a function of the state $(q^T,\dot q^T)^T$, the applied joint torques $\tau$, the base link velocity and acceleration $V_0,\dot V_0$, and the external force $F_{n+1}$: \begin{equation} \ddot q=\FD(q,\dot q,\tau,V_0,\dot V_0,F_{n+1}) \label{eq:fd} \end{equation} It may be numerically integrated to compute joint position and velocity trajectories. The joint torques $\tau$ are linearly related to the joint accelerations $\ddot q$ via the \emph{joint space inertia} (JSI) $M(q)$, with a bias term $\tau^{\text{bias}}$ accounting for Coriolis, centrifugal and external forces: \begin{equation}\left\{ \begin{aligned} \tau=&M(q)\ddot q+\tau^{\text{bias}}\\ \tau^{\text{bias}}:=&\ID(q,\dot q,0,V_0,\dot V_0,F_{n+1}) \end{aligned}\right. \label{eq:tau-bias} \end{equation} Since both the joint accelerations $\ddot q_i$'s and the joint reaction forces $F_i$'s are unknown, the original FD problem does not admit a direct propagation formulation and cannot be solved by parallel scan operations alone.
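To fix ideas, the following Python sketch (ours, not part of the algorithm suite developed later) implements the serial $O(n)$ two-phase recursion \eqref{eq:id-forward}--\eqref{eq:id-backward}. It assumes the twist ordering $V=(v^T,w^T)^T$, a home configuration $M_i$ for each joint, and that the external force $F_{n+1}$ is expressed in the frame of the last link; all function names are merely illustrative.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def hat(w):
    # 3-vector -> 3x3 skew-symmetric matrix
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def twist_hat(xi):
    # twist xi = (v, w) -> 4x4 element of se(3)
    T = np.zeros((4, 4))
    T[:3, :3] = hat(xi[3:])
    T[:3, 3] = xi[:3]
    return T

def Ad(g):
    # adjoint of g in SE(3), acting on twists ordered as (v, w)
    R, p = g[:3, :3], g[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[:3, 3:] = hat(p) @ R
    A[3:, 3:] = R
    return A

def ad(xi):
    # Lie bracket operator ad_xi on twists (v, w)
    a = np.zeros((6, 6))
    a[:3, :3] = hat(xi[3:])
    a[:3, 3:] = hat(xi[:3])
    a[3:, 3:] = hat(xi[3:])
    return a

def inverse_dynamics(q, dq, ddq, M, S, J, V0, dV0, F_ext):
    # serial recursive Newton-Euler, eqs. (id-forward)/(id-backward);
    # M[i]: home configuration (4x4), S[i]: joint twist (6,),
    # J[i]: link inertia (6x6), F_ext: force on the end link
    n = len(q)
    f = [M[i] @ expm(twist_hat(S[i]) * q[i]) for i in range(n)]
    V, dV = [V0], [dV0]
    for i in range(n):                      # forward phase
        A = Ad(np.linalg.inv(f[i]))
        AV = A @ V[i]
        V.append(AV + S[i] * dq[i])
        dV.append(S[i] * ddq[i] + A @ dV[i] - ad(S[i] * dq[i]) @ AV)
    F, tau = F_ext, np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward phase
        g = f[i + 1] if i + 1 < n else np.eye(4)   # f_{n,n+1} taken as identity
        F = Ad(np.linalg.inv(g)).T @ F + J[i] @ dV[i + 1] \
            - ad(V[i + 1]).T @ (J[i] @ V[i + 1])
        tau[i] = S[i] @ F
    return tau
\end{verbatim}

The two loops make the data dependencies explicit: each $V_i,\dot V_i$ depends on its predecessor and each $F_i$ on its successor, which is exactly the structure that the scan reformulation of Section~\ref{sec:dyn-scan} exploits.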
However, $\ddot q$ may be derived by directly inverting the JSI, \begin{equation} \ddot q=M^{-1}(q)\hat\tau:=M^{-1}(q)(\tau-\tau^{\text{bias}}) \label{eq:JSIinv} \end{equation} leading to the $O(n^3)$ \emph{JSI inversion algorithm} (JSIIA)~\cite[ch. 6]{featherstone2014rigid}. Depending on the method used for computing the JSI, the JSIIA may be computationally appealing for a moderate number of links ($n\leq 200$). However, it does not take advantage of the structure that the JSI inherits from the robot chain topology. This was emphasized in a series of papers on the analytical factorization and inversion of the JSI~\cite{rodriguez1991spatial,fijany1995parallel,fijany2013new,rodriguez1987kalman,jain1991unified}. The goal of the factorization is to represent the inverse of the JSI as the product of block diagonal and block upper/lower triangular matrices, or as a sequence of linear recursions, which leads to the computationally efficient \emph{propagation} methods~\cite[ch. 7]{featherstone2014rigid}. The \emph{articulated-body inertia algorithm} (ABIA) proposed by Featherstone~\cite{featherstone1983calculation} is a well-known propagation method that involves a nonlinear recursion for deriving the equivalent inertias of the sub-system rooted at link $i$ (see~\cite{ploen1999coordinate} for the notation) for $i=n,\dots,1$: \begin{equation} \begin{aligned} \hat J_i=J_i&+\Ad_{f_{i,i+1}^{-1}}^T\hat J_{i+1}\Ad_{f_{i,i+1}^{-1}} \\ &-\frac{\Ad_{f_{i,i+1}^{-1}}^T\hat J_{i+1}S_{i+1}S_{i+1}^T\hat J_{i+1}\Ad_{f_{i,i+1}^{-1}}}{S_{i+1}^T\hat J_{i+1}S_{i+1}} \end{aligned} \label{eq:abi} \end{equation} which is essentially a recursive elimination of the unknown joint reaction forces $F_i$'s using the principle of virtual work. The availability of the ABI allows us to derive the joint accelerations from the joint torques $\tau_i$'s with a backward-forward two-phase propagation process~\cite{featherstone2014rigid}, which may be summarized as follows \cite{ploen1999coordinate}: \begin{equation} \begin{array}{c} \text{Backward recursion} \\ (\text{for }i=n,\dots,1) \\ \\ \left\{ \begin{aligned} \hat z_i&=Y_{i,i+1}\hat z_{i+1}+\mathrm\Pi_{i,i+1}\hat\tau_{i+1}\\ c_i&=\hat\tau_i-S_i^T\hat z_i\\ \hat c_i&=\mathrm\Omega_i^{-1}c_i:=(S_i^T\hat J_iS_i)^{-1}c_i \end{aligned}\right.\\ \\ \text{Forward recursion}\\ (\text{for }i=1,\dots,n)\\ \\ \left\{ \begin{aligned} \lambda_i&=Y_{i-1,i}^T\lambda_{i-1}+S_i\hat c_i\\ \ddot q_i&=\hat c_i-\mathrm\Pi_{i-1,i}^T\lambda_{i-1} \end{aligned}\right.\\ \end{array} \label{eq:recursive-fd} \end{equation} where $\displaystyle Y_{i,i+1}:=\Ad_{f_{i,i+1}^{-1}}^T\left( I-\frac{\hat J_{i+1}S_{i+1}S_{i+1}^T}{S_{i+1}^T\hat J_{i+1}S_{i+1}} \right)$ and $\displaystyle \mathrm\Pi_{i,i+1}:=\frac{\Ad_{f_{i,i+1}^{-1}}^T\hat J_{i+1}S_{i+1}}{S_{i+1}^T\hat J_{i+1}S_{i+1}}$. The connection of the ABI to the square factorization of the JSI is emphasized in~\cite{rodriguez1991spatial}. All recursions involved in the ABIA except \eqref{eq:abi} are linear and may eventually be formulated as scan operations. However, \eqref{eq:abi} is a well-known nonlinear recursion that cannot be sped up beyond a constant factor by parallel algorithms~\cite{miklovsko1984complexity,hyafil1977complexity}, and therefore does not conform to a parallel scan operation.
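To illustrate why \eqref{eq:abi} resists a scan formulation, the following minimal Python sketch (reusing the \texttt{Ad} helper and \texttt{numpy} import from the inverse-dynamics sketch above; the function name and 0-based array layout are ours) spells out the backward ABI recursion. Each step feeds $\hat J_{i+1}$ through the denominator $S_{i+1}^T\hat J_{i+1}S_{i+1}$, so the dependence on the previous iterate is nonlinear and the loop must run serially; this is precisely the part that our hybrid algorithm keeps on the CPU.

\begin{verbatim}
def calc_abi(J, S, f):
    # nonlinear backward recursion of eq. (abi); inherently serial
    n = len(J)
    Jhat = [None] * n
    Jhat[n - 1] = J[n - 1]
    for i in range(n - 2, -1, -1):
        A = Ad(np.linalg.inv(f[i + 1]))   # Ad_{f_{i,i+1}^{-1}}
        Jn, s = Jhat[i + 1], S[i + 1]
        Js = Jn @ s                       # \hat J_{i+1} S_{i+1}
        Jhat[i] = (J[i] + A.T @ Jn @ A
                   - np.outer(A.T @ Js, Js @ A) / (s @ Js))
    return Jhat
\end{verbatim}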
The joint reaction forces may also be eliminated by defining appropriate (not necessarily orthogonal) complements for the joint constraint forces, leading to the \emph{constraint force algorithm} (CFA)~\cite{fijany1995parallel}; the inverse of the JSI is directly factorized into a product of block diagonal, block upper/lower bi-diagonal and block tri-diagonal matrices. Block cyclic-reduction algorithms~\cite{heller1976some,sweet1977cyclic} may be applied to solve a block tri-diagonal system efficiently. \section{Prefix-Sum Re-formulation of Robot Dynamics} \label{sec:dyn-scan} \subsection{A brief review of scan operation} The all-prefix-sums (scan) operation~\cite{blelloch1990prefix} takes a binary associative operator $\oplus$ and an array of $n$ elements \begin{equation} \left[ a_0, a_1, \dots, a_{n-1}\right] \end{equation} and returns the array \begin{equation} \begin{aligned} \left[ x_0, x_1,\dots, x_{n-1}\right]:= & [ a_0, (a_0 \oplus a_1),\dots, \\ & (a_0 \oplus a_1 \oplus\cdots\oplus a_{n-1})] \end{aligned} \label{eq:scan} \end{equation} It is obvious that the scan operation implies the following recursion: \begin{equation} x_i=x_{i-1} \oplus a_i \end{equation} Generalizations of the standard scan operation and their applications are discussed in~\cite{blelloch1990prefix}. \subsection{Synchronous forward scan of link velocities and accelerations} \vskip-10pt The following development is inspired by Zhang \emph{et al.}'s work~\cite{Zhang98} on scanning successive multiplications of link rotation transformations. Its practical application in humanoid motion planning is considered in~\cite{Chretien16}. More generally, we may rewrite the second and third equations of \eqref{eq:id-forward} as a single linear recursion: \begin{equation} \begin{aligned} \begin{bmatrix} \dot V_i\\ V_i\\ \hline 1 \end{bmatrix}&=\underbrace{\left[ \begin{array}{cc|c} \Ad_{f_{i-1,i}^{-1}} & -\ad_{S_i\dot q_i}\Ad_{f_{i-1,i}^{-1}} & S_i\ddot q_i\\ 0 & \Ad_{f_{i-1,i}^{-1}} & S_i\dot q_i\\ \hline 0 & 0 & 1 \end{array} \right]}_{\displaystyle A_i\in\GL{13}}\begin{bmatrix} \dot V_{i-1}\\ V_{i-1}\\ \hline 1 \end{bmatrix}\quad \\ A_0&=\left[\begin{array}{cc|c} I & 0 & \dot V_0\\ 0 & I & V_0\\ \hline 0 & 0 & 1 \end{array}\right] \end{aligned} \label{eq:scan-id-vel-acc} \end{equation} leading to a synchronous scan of both the link velocities $V_i$'s and accelerations $\dot V_i$'s. Emphasis should be placed on the Lie group structure of the operands: each operand $A_i$ corresponds, under an isomorphism, to the triple $(f_{i-1,i}^{-1},S_i\ddot q_i,S_i\dot q_i)\in\SE\times\se^2$, a Lie group with binary operation $\oplus$ defined by: \begin{equation} \begin{aligned} (g,\xi_1,\xi_2)\oplus(g',\xi_1',\xi_2'):=&(gg',\Ad_g(\xi_1')+\xi_1 \\ &-\ad_{\xi_2}(\Ad_g(\xi_2')),\Ad_g(\xi_2')+\xi_2) \end{aligned} \end{equation} The identity element of this Lie group is $(I,0,0)$, and the inverse of $(g,\xi_1,\xi_2)\in\SE\times\se^2$ is given by: \begin{equation} (g,\xi_1,\xi_2)^{-1}=(g^{-1},-\Ad_{g^{-1}}(\xi_1),-\Ad_{g^{-1}}(\xi_2)) \end{equation} We may take advantage of this isomorphism by first scanning the isomorphic operands $(f_{i-1,i}^{-1},S_i\ddot q_i,S_i\dot q_i)$'s and then applying the isomorphism in parallel to recover the $V_i$'s and $\dot V_i$'s.
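For concreteness, here is a minimal Python sketch of this strategy (ours; it reuses \texttt{Ad}, \texttt{ad} and \texttt{numpy} from the earlier sketches). The sequential \texttt{itertools.accumulate} is only a stand-in for a GPU scan primitive such as \texttt{thrust::inclusive\_scan}~\cite{bell2011thrust}; the essential point is that $\oplus$ is associative, so the very same operator may be handed to a parallel scan.

\begin{verbatim}
from itertools import accumulate

def group_op(a, b):
    # (g, xi1, xi2) "oplus" (g', xi1', xi2') on SE(3) x se(3)^2
    g, x1, x2 = a
    gp, x1p, x2p = b
    A = Ad(g)
    return (g @ gp,
            A @ x1p + x1 - ad(x2) @ (A @ x2p),
            A @ x2p + x2)

def velocities_and_accelerations(f, S, dq, ddq, V0, dV0):
    # operands: a_0 = (I, dV_0, V_0), a_i = (f_{i-1,i}^{-1}, S_i*ddq_i, S_i*dq_i);
    # the prefix a_i oplus ... oplus a_0 carries (dot V_i, V_i) in its se(3) slots
    ops = [(np.eye(4), dV0, V0)]
    ops += [(np.linalg.inv(f[i]), S[i] * ddq[i], S[i] * dq[i])
            for i in range(len(f))]
    prefix = list(accumulate(ops, lambda x, a: group_op(a, x)))
    return ([p[2] for p in prefix],    # V_i   (index 0 is the base)
            [p[1] for p in prefix])    # dot V_i
\end{verbatim}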
\subsection{Backward scan of bias force and joint torques for inverse dynamics} After scanning the link velocities and accelerations, the bias term $\hat F_i:=J_i\dot V_i-\ad_{V_i}^T(J_iV_i)$ in the first equation of \eqref{eq:id-backward} may be pre-computed in parallel before a second scan computes the linear recursion for the $F_i$'s. Alternatively, $\hat F_i$ is quadratic in $V_i$ and linear in $\dot V_i$, and may be synchronously computed along with the scan of \eqref{eq:scan-id-vel-acc}. Moreover, it can be proved that $\hat F_i$ involves only $9$ out of the $21$ quadratic terms of $V_i=(v_i^T,w_i^T)^T$, namely the six products $w_{ij}w_{ik}$, $1\leq j\leq k\leq 3$, where $w_i=(w_{i1},w_{i2},w_{i3})^T$, together with the three components of $w_i\times v_i\in\mathds R^3$. For convenience, we define $Q_i=((w_i\times v_i)^T,w_{i1}^2,w_{i1}w_{i2},w_{i1}w_{i3},w_{i2}^2,w_{i2}w_{i3},w_{i3}^2)^T\in\mathds R^9$. A synchronous scan may be expressed as follows: \begin{equation} \begin{aligned} \begin{bmatrix} \dot V_i\\ \hline Q_i\\ V_i\\ \hline \hat F_i\\ \hline 1 \end{bmatrix}&=\underbrace{ \left[ \begin{array}{c|cc|c|c} *&0&*&0&*\\ \hline 0&*&*&0&*\\ 0&0&*&0&*\\ \hline *&*&*&0&*\\ \hline 0&0&0&0&1 \end{array} \right]}_{\displaystyle A_i\in\mathds R^{28\times 28}} \begin{bmatrix} \dot V_{i-1}\\ \hline Q_{i-1}\\ V_{i-1}\\ \hline \hat F_{i-1}\\ \hline 1 \end{bmatrix}\\ \hat F_0&=0 \end{aligned} \label{eq:id-fwd-scan} \end{equation} where only the block pattern of the scan operand $A_i$ is shown. A similar simplification of the operand via an isomorphism is left for future investigation. After scanning the forward propagation \eqref{eq:id-fwd-scan}, we proceed with the scan of the backward propagation \eqref{eq:id-backward} using the following linear recursion: \begin{equation} \begin{aligned} \begin{bmatrix} F_i\\ \hline \tau_{i+1}\\ \hline 1 \end{bmatrix}&= \underbrace{\left[ \begin{array}{c|c|c} \Ad_{f_{i,i+1}^{-1}}^T & 0& \hat F_i\\ \hline S_{i+1}^T & 0& 0 \\ \hline 0 & 0 & 1 \end{array} \right]}_{\displaystyle A_i\in\mathds R^{8\times 8}} \begin{bmatrix} F_{i+1}\\ \hline \tau_{i+2}\\ \hline 1 \end{bmatrix}\\ F_{n+1}&=\tau_{n+1}=\tau_{n+2}=0 \end{aligned} \label{eq:id-bkwd-scan} \end{equation} (assuming, for simplicity, no external force on the end link). Therefore, we have shown that the ID problem may be completely solved by two scan operations. \vskip-10pt \subsection{Accelerating JSIIA using scan operations} Following the discussion about \eqref{eq:JSIinv} in Section \ref{sec:racap:FD}, we may accelerate the JSIIA by parallelizing the data-independent computation of the columns $M_{\cdot,j}(q)$ of the JSI $M(q)$ (see~\cite[Ch. 6]{featherstone2014rigid} for more details) using our parallel ID algorithm: \begin{equation} \begin{aligned} M_{\cdot,j}(q)=&M(q)\delta_{\cdot,j}\\ =&\ID(q,\dot q,\delta_{\cdot,j},V_0,\dot V_0,F_{n+1})\\ &-\ID(q,\dot q,0,V_0,\dot V_0,F_{n+1})\\ =&\ID(q,0,\delta_{\cdot,j},0,0,0)\\ \delta_{\cdot,j}:=&(\delta_{1,j},\cdots,\delta_{n,j})^T\quad j=1,\cdots,n \end{aligned} \end{equation} where $\delta_{i,j}$ denotes the Kronecker delta function. The joint accelerations $\ddot q$ may then be evaluated from \eqref{eq:JSIinv} using state-of-the-art parallel linear system solvers, such as parallel Cholesky decomposition~\cite{volkov2008lu}. Consequently, a total of $n+1$ IDs (including the one for computing $\tau^{\text{bias}}$) are computed in parallel for the JSIIA. \subsection{Accelerating ABIA using scan operations} The details of the square factorization of the JSI and its analytical inversion may be found in \cite{ploen1999coordinate}.
Following our discussion in Section \ref{sec:racap:FD}, the two-phase propagation summarized in \eqref{eq:recursive-fd} may be reformulated into a backward scan: \begin{equation} \begin{aligned} \begin{bmatrix} \hat z_i\\ \hat c_{i+1}\\ \hline 1 \end{bmatrix}&=\underbrace{\left[ \begin{array}{c|c|c} Y_{i,i+1} & 0 & \mathrm\Pi_{i,i+1}\hat\tau_{i+1}\\ -\mathrm\Omega_{i+1}^{-1}S_{i+1}^T&0&\mathrm\Omega_{i+1}^{-1}\hat\tau_{i+1}\\ \hline 0&0&1 \end{array} \right]}_{\displaystyle A_i\in\mathds R^{8\times 8}} \begin{bmatrix} \hat z_{i+1}\\ \hat c_{i+2}\\ \hline 1 \end{bmatrix}\\ \hat{z}_{n+1}&=\hat{c}_{n+1}=\hat{c}_{n+2}=0 \end{aligned} \label{eq:fd-bkwd-scan} \end{equation} followed by a forward scan: \begin{equation} \begin{aligned} \begin{bmatrix} \lambda_i\\ \ddot q_i\\ \hline 1 \end{bmatrix}&=\underbrace{\left[ \begin{array}{c|c|c} Y_{i-1,i}^T&0&S_i\hat c_i\\ -\mathrm\Pi_{i-1,i}^T&0&\hat c_i\\ \hline 0&0&1 \end{array} \right]}_{\displaystyle A_i\in\mathds R^{8\times 8}} \begin{bmatrix} \lambda_{i-1}\\ \ddot q_{i-1}\\ \hline 1 \end{bmatrix}\\ \lambda_0&=\ddot q_0=0 \end{aligned} \label{eq:fd-fwd-scan} \end{equation} Therefore, the complete ABIA comprises two scan operations with operands $A_i$'s given in \eqref{eq:id-fwd-scan} and \eqref{eq:id-bkwd-scan} for the computation of the bias torques $\tau_i^{\text{bias}}$'s, a nonlinear recursion \eqref{eq:abi} for the computation of the ABI, and finally two scan operations with operands $A_i$'s given in \eqref{eq:fd-bkwd-scan} and \eqref{eq:fd-fwd-scan} for the computation of the joint accelerations $\ddot q_i$'s. We may also combine the two backward scans pertaining to \eqref{eq:id-bkwd-scan} and \eqref{eq:fd-bkwd-scan} into a single backward scan, with the corresponding linear recursion given by: \begin{equation} \begin{aligned} \begin{bmatrix} F_{i-1}\\ \hat\tau_i\\ \hline \hat z_i\\ \hat c_{i+1}\\ \hline 1 \end{bmatrix}=&\underbrace{\left[ \begin{array}{c|cc|c|c} \Ad_{f_{i-1,i}^{-1}}^T & 0 & 0 & 0 & \hat F_{i-1}\\ -S_i^T & 0 & 0 & 0 & \tau_i^{\text{in}}\\ \hline 0& \mathrm\Pi_{i,i+1} & Y_{i,i+1} & 0 & 0\\ 0&\mathrm\Omega_{i+1}^{-1}& -\mathrm\Omega_{i+1}^{-1}S_{i+1}^T&0&0\\ \hline 0&0&0&0&1 \end{array} \right]}_{\displaystyle A_i\in\mathds R^{15\times 15}} \\ & \times \begin{bmatrix} F_i\\ \hat\tau_{i+1}\\ \hline \hat z_{i+1}\\ \hat c_{i+2}\\ \hline 1 \end{bmatrix} \end{aligned} \label{eq:fd-bkwd-scan-large} \end{equation} thereby reducing the hybrid ABIA to one nonlinear recursion to be executed on the CPU, and a sequence of three parallel scans to be executed on the GPU.
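Since every recursion above except \eqref{eq:abi} has the homogeneous form $x_i=A_ix_{i-1}$ (or its backward counterpart), a single generic routine covers all of them. The sketch below is ours and again uses \texttt{accumulate} (imported in the earlier sketch) as a placeholder for a parallel inclusive scan; matrix multiplication is associative, which is what licenses the parallel evaluation of the prefix products.

\begin{verbatim}
def linear_recursion_scan(mats, x0):
    # x_i = A_i x_{i-1}, x_0 given; the prefix products
    # P_i = A_i ... A_1 are computable by an inclusive scan
    prefix = accumulate(mats, lambda P, A: A @ P)
    return [P @ x0 for P in prefix]

# e.g., for the forward scan (eq. fd-fwd-scan) the state is
# x = [lambda_i; qddot_i; 1], with A_i assembled from Y, Pi and S.
\end{verbatim}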
\section{Introduction} Consider the Caputo-Fabrizio derivative of order $\alpha\in[0,1]$ for a function $f\in H^1((a,b))$, $a<b$, $$D^{\text{CF}}_\alpha f(t):=\frac{M(\alpha)}{1-\alpha}\int_a^t f'(\tau) e^{-\frac{\alpha}{1-\alpha}(t-\tau)}d\tau\,,$$ where $M(\alpha)$ is a scaling factor satisfying $M(0)=M(1)=1$. \medskip\noindent We study the fractional SIS system \begin{equation}\label{SIS} \begin{cases} D^{\text{CF}}_\alpha S= -\left(\frac{\beta}{N} S-\gamma\right)I\\ D^{\text{CF}}_\alpha I = \left(\frac{\beta}{N} S-\gamma\right)I\\ S(0)=S_0\\ I(0)=I_0\,, \end{cases} \end{equation} where $\beta,\gamma, N>0$, and $I_0,S_0\geq 0$ satisfy $I_0+S_0=N$. Moreover, we make the technical assumption \begin{equation}\label{gammacond} \alpha+(1-\alpha)(\beta-\gamma)\leq M(\alpha). \end{equation} Note that the above condition is trivially satisfied if $\alpha=1$, since $M(1)=1$ by definition. The functions $S(t)$ and $I(t)$ represent the sizes of the susceptible and infective populations, respectively. $N$ is assumed to be the size of the total population at the initial time. $\beta$ is the average number of contacts per person per unit time, multiplied by the probability of disease transmission in a contact between a susceptible and an infectious subject, and $\gamma$ is the recovery rate. {Fractional epidemic models have attracted the interest of researchers (see \cite{chen2021AMM} for a recent review) due to the possibility of tuning the derivative order $\alpha$ for applications to real data fitting, see for instance \cite{li2019NA}. An important feature is the ability to incorporate memory effects into the model: in particular, we refer to \cite{SISCF} (and the references therein) for a detailed study of the inclusion of memory in epidemic models by means of the Caputo-Fabrizio operator. The peculiarity of this fractional operator, introduced in \cite{CapFab}, is the presence of a non-singular, exponential kernel.} We show below that \eqref{SIS} can be rewritten as \begin{equation}\label{saturatedSIS} \begin{cases}S'=-\frac{\lambda_{\alpha}}{1+k_{\alpha} I}SI+\frac{r_{\alpha}}{1+k_{\alpha} I} I\\ I'=\frac{\lambda_{\alpha}}{1+k_{\alpha} I}SI-\frac{r_{\alpha}}{1+k_{\alpha} I} I \end{cases}\end{equation} where $$\lambda_{\alpha}:=\frac{\alpha\beta}{N(M(\alpha)-(1-\alpha)(\beta-\gamma))},\quad r_{\alpha}:=\frac{\alpha\gamma}{M(\alpha)-(1-\alpha)(\beta-\gamma)},$$ and $$ k_{\alpha}:=\frac{2\beta(1-\alpha)}{N(M(\alpha)-(1-\alpha)(\beta-\gamma))}.$$ { Dynamics such as \eqref{saturatedSIS} underlie the class of SIS models with \emph{saturated incidence rate} $H_{\alpha}(I):= \frac{\lambda_{\alpha} I}{1+k_{\alpha} I}$ and \emph{saturated treatment function} $T_{\alpha}(I):=\frac{r_{\alpha} I}{1+k_{\alpha} I}$, see \cite{saturatedSIS,saturatedsis2}. The system \eqref{saturatedSIS} models a stable population in which the natural recovery rate is zero}: individuals recover from the disease only if they are treated, and they are healed at the rate $T_\alpha(I)$. In particular, $r_\alpha$ is the cure rate, whereas $1/(1+k_{\alpha} I)$ measures the adverse effect of delayed treatment, due, for instance, to limited health system capacity. The incidence rate $H_\alpha(I)$ is also assumed to saturate as $I$ increases: this models the psychological effect of awareness, in the susceptible population, of the existence of a large number of infected individuals, which induces a more cautious behavior \cite{capasso1978}.
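Before proceeding, a short numerical sketch may help fix ideas. The Python snippet below (ours; the choice $M(\alpha)\equiv1$, compatible with $M(0)=M(1)=1$, and the function names are merely illustrative) approximates $D^{\text{CF}}_\alpha f(t)$ for $\alpha\in[0,1)$ by a trapezoidal quadrature of the exponential-kernel integral. For $f(t)=t$ and $a=0$ the integral can be computed by hand, giving $\frac{M(\alpha)}{\alpha}\bigl(1-e^{-\frac{\alpha}{1-\alpha}t}\bigr)$, which the quadrature reproduces.

\begin{verbatim}
import numpy as np

def caputo_fabrizio(f, t, alpha, a=0.0, M=lambda al: 1.0, n=2000):
    # trapezoidal quadrature of the Caputo-Fabrizio derivative at time t,
    # for alpha in [0,1); f must be callable on [a, t]
    tau = np.linspace(a, t, n)
    df = np.gradient(f(tau), tau)            # numerical f'(tau)
    kernel = np.exp(-alpha / (1.0 - alpha) * (t - tau))
    return M(alpha) / (1.0 - alpha) * np.trapz(df * kernel, tau)

# sanity check against the closed form for f(t) = t:
print(caputo_fabrizio(lambda s: s, 2.0, 0.5))   # ~ 2*(1 - exp(-2)) = 1.7293...
\end{verbatim}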
It is worth noting that \eqref{saturatedSIS} is also related to equations arising in the dynamics between tumor cells, immune-effector cells, and immunotherapy \cite{tumor-immune}. We refer to \cite{piccoli}, and the references therein, for an application of control theory to the optimization of cancer therapies in a general, nonlinear setting. \medskip Using the conservation of the population, i.e., the identity $S=N-I$, the system \eqref{SIS} reduces to a non-linear ordinary differential equation for the infected population $I(t)$ only, see Theorem \ref{p2}. We then complete this equation with a linear control term, and we address the problem of minimizing the size of the infected population plus a quadratic cost on the control. To this end, we study the associated dynamic programming equation, namely an evolutive Hamilton-Jacobi equation for which the value function $u^\alpha(x,t)$ is proved to be a viscosity solution, see Theorem \ref{thmexistence}. Our main result is Theorem \ref{thm1}, in which we prove the convergence, as $t\to+\infty$, of $u^\alpha(x,t)$ to the value function $v^\alpha(x)$ of an associated stationary problem. Theorem \ref{thmexistence} and Theorem \ref{thm1}, and the techniques adopted for their proofs, are inspired by the paper \cite{FIL06}. For a general introduction to the topic, we refer to \cite{lions}. Finally, we introduce a suitable finite difference scheme for solving the Hamilton-Jacobi equation and building the corresponding optimal trajectories. Some numerical tests complete the presentation, validating our results and providing a qualitative analysis of the solutions. \section{The SIS model with Caputo-Fabrizio derivative} We begin by noting the following identity, valid for $f\in H^1((a,b))$: \begin{equation}\label{dD} \frac{d}{dt} D^{\text{CF}}_\alpha f(t)=\frac{M(\alpha)}{1-\alpha}f'(t)-\frac{\alpha}{1-\alpha}D^{\text{CF}}_\alpha f(t)\quad \forall t\in(a,b),\ \alpha\in[0,1). \end{equation} We have the following result, relating \eqref{SIS} to an algebraic identity and to an ordinary differential equation. \begin{theorem}\label{p2} Assume \eqref{gammacond}. Let $I_0,S_0>0$, $N:=S_0+I_0$, and let $(S,I)$ be a solution of \eqref{SIS}. Then $S(t)=N-I(t)$ for all $t\geq 0$, and $I$ is the unique, global positive solution of the Cauchy problem \begin{equation}\label{ordSIS} \begin{cases} I'=b_\alpha(I):=(\beta-\gamma-\frac{\beta}{N}I)I \dfrac{\alpha}{M(\alpha)-(1-\alpha)(\beta-\gamma-\frac{2\beta}{N}I)}\\ I(0)=I_0. \end{cases} \end{equation} \end{theorem} \begin{proof} Fix $I_0,S_0>0$, $N:=S_0+I_0$, and a solution $(S,I)$ of \eqref{SIS} with initial datum $(S_0,I_0)$. Define $N(t):=S(t)+I(t)$. By the linearity of the Caputo-Fabrizio operator, summing the two equations in \eqref{SIS} we get $D_\alpha^{\text{CF}}N(t)\equiv 0$. Then, also in view of \eqref{dD}, $$0=\frac{d}{dt}D_\alpha^{\text{CF}}N(t)=\frac{M(\alpha)}{1-\alpha}N'(t)\,,$$ from which we deduce $N(t)=N(0)=S(0)+I(0)=N$. Substituting $S=N-I$ into the second equation of \eqref{SIS}, we have \begin{align*} \frac{d}{dt}D^{\text{CF}}_\alpha I& =\frac{d}{dt} \left(\left(\frac{\beta}{N} S-\gamma\right)I\right)\\ &=\frac{d}{dt} \left(\left(\beta-\gamma-\frac{\beta}{N} I\right)I\right)= \left(\beta-\gamma-\frac{2\beta}{N} I\right)I'\,. \end{align*} On the other hand, using \eqref{dD} and the second equation in \eqref{SIS}, we get \begin{align*} \frac{d}{dt}D^{\text{CF}}_\alpha I &=\frac{M(\alpha)}{1-\alpha}I'-\frac{\alpha}{1-\alpha}\left(\beta-\gamma-\frac{\beta}{N} I\right)I.
\end{align*} Therefore $$\left(\beta-\gamma-\frac{2\beta}{N} I\right)I'=\frac{M(\alpha)}{1-\alpha}I'-\frac{\alpha}{1-\alpha}\left(\beta-\gamma-\frac{\beta}{N} I\right)I.$$ Solving the above equation for $I'$, we obtain the second equation of \eqref{ordSIS}. Now, $b_\alpha$ is well defined in $(x_0,+\infty)$, where $$x_0:=((\beta-\gamma)(1-\alpha)-M(\alpha))\frac{N}{2\beta(1-\alpha)}\,,$$ and assumption \eqref{gammacond} implies that $x_0<0$. Moreover, for $\alpha\in[0,1)$, we get $$b_\alpha'(I)=\frac{\alpha \left(\beta -\gamma -\frac{2 \beta I}{N}\right)}{M(\alpha)-(1-\alpha ) \left(\beta -\gamma -\frac{2 \beta I}{N}\right)}-\frac{2 (1-\alpha ) \alpha \frac{\beta}{N} I \left(\beta -\gamma -\frac{\beta I}{N}\right)}{ \left( M(\alpha)-(1-\alpha ) \left(\beta -\gamma -\frac{2 \beta I}{N}\right)\right)^2}$$ and $$\lim_{I\to+\infty}b'_\alpha(I)=-\frac{\alpha}{2(1-\alpha)}.$$ This, together with the continuity of $b'_\alpha$ in $(x_0,+\infty)$, implies that $b'_\alpha$ is uniformly bounded on any closed subset of $(x_0,+\infty)$, in particular on $[x_1,+\infty)$ for some fixed $x_1\in(x_0,0)$. We conclude that $b_\alpha$ is Lipschitz continuous in $(x_1,+\infty)$, hence the second equation of \eqref{ordSIS} admits a unique, global solution in $(x_1,+\infty)$. It is left to show that if $I_0>0$ then $I(t)\geq0$ for all $t>0$. We argue by contradiction. Assume that $I_0>0$ and that the corresponding solution satisfies $I(\bar t)<0$ for some $\bar t>0$. Then, by continuity, $I(t^*)=0$ for some $t^*\in(0,\bar t)$, so that $\bar I(t):=I(t+t^*)$ is a solution of \eqref{ordSIS} with initial datum $0$. On the other hand, $b_\alpha(0)=0$ implies that $I(t)\equiv 0$ is the unique solution of \eqref{ordSIS} with initial datum $0$, and consequently $\bar I(t)\equiv 0$. It follows that $0=\bar I(\bar t-t^*)=I(\bar t)<0$, the required contradiction. In the remaining case $\alpha=1$, we recover the classical logistic equation $$I'= \left(\beta-\gamma-\frac{\beta}{N}I\right)I\,,$$ whose unique solution in $[0,+\infty)$ is explicitly given by $$I(t)=\frac{NI_0 (\beta-\gamma)}{e^{t (\gamma-\beta)} (N(\beta-\gamma)- \beta I_0)+\beta I_0}$$ and satisfies $I(t)\ge 0$ for all $t>0$ if $I_0>0$. \end{proof} \subsection{Equilibria and asymptotic behavior} We introduce the reproduction factor $\rho:=\beta/\gamma$ and describe the asymptotic behavior of \eqref{ordSIS} according to whether $\rho>1$ or $\rho\leq 1$. \begin{proposition}\label{equilibrium} If $\rho>1$ (respectively, $\rho\leq 1$) then the \emph{endemic population} $E:=N(1-\frac{1}{\rho})$ (resp.\ the equilibrium $0$) is an asymptotically stable equilibrium point for \eqref{ordSIS}. In particular, for $I_0>0$, the corresponding solution $I$ satisfies $I(t)\to E$ (resp.\ $I(t)\to 0$) as $t\to+ \infty$. \end{proposition} \begin{proof} We rewrite the second equation in \eqref{ordSIS} in terms of $\rho$: \begin{equation}\label{RordSIS} \begin{cases} I'=(\rho-1- \frac{\rho}{N}I)I \dfrac{\alpha}{M(\alpha)/\gamma-(1-\alpha)(\rho-1-\frac{2\rho}{N}I)}\\ I(0)=I_0. \end{cases} \end{equation} We set $I_e:=E$ if $\rho>1$ and $I_e:=0$ otherwise, and we define $V(x):=\frac{1}{2}(x-I_e)^2$. We prove that $V$ is a Lyapunov function for \eqref{RordSIS}. To this end, note that $V$ is smooth in $\mathbb R$, positive in $\mathbb R\setminus\{I_e\}$, and vanishes at $I_e$.
Moreover, by a direct computation, we get $$V'(x)b_\alpha(x)= \begin{cases} \dfrac{-\frac{\alpha N}{\rho}(\rho-1- \frac{\rho}{N}x)^2x}{M(\alpha)/\gamma-(1-\alpha)(\rho-1-\frac{2\rho}{N}x)}<0 \quad &\text{if } \rho>1,\, x>0,\, x\neq E\\\\ \dfrac{\alpha(\rho-1- \frac{\rho}{N}x)x^2}{M(\alpha)/\gamma-(1-\alpha)(\rho-1-\frac{2\rho}{N}x)}<0 \quad &\text{if } \rho\leq 1,\, x>0\,,\\ \end{cases} $$ and this concludes the proof. \end{proof} \begin{remark} This result was proved earlier in \cite{saturatedSIS} in the framework of saturated SIS models; here we propose an alternative proof for the fractional setting under examination. \end{remark} \section{An optimal control problem for fractional SIS}\label{s3} Fix $\beta,\gamma,N>0$ and $\rho=\beta/\gamma$ as before. We consider a controlled version of the infected population dynamics \eqref{ordSIS}: \begin{equation}\label{controlya} \begin{cases} I'(t)=b_\alpha(I(t))+\xi(t)\quad \text{for }t\in(0,T)\\ I(0)=x\,,\end{cases}\end{equation} where $x\geq 0$ and $\xi\in L^1((0,T))$ is the control function. We consider the following finite horizon optimal control problem: minimize with respect to $\xi$ \begin{equation}\label{pb}\begin{split} \int_0^T \frac{I^2(t)}{2}+\frac{\xi^2(t)}{2}dt+\phi(I(T)) \quad \text{subject to \eqref{controlya} and } I(t)\geq 0,\ t\in(0,T)\,,\end{split}\end{equation} where $T>0$, $[0,T]$ is the given planning horizon, and $\phi(I(T))$ is the expected future cost, depending on the size of the population remaining infected at time $T$. We assume that $\phi$ is a continuous, nonnegative function attaining its global minimum at $x=0$. The state constraint $I\geq 0$ is introduced for modeling reasons. We prove that the value function $u^\alpha(x,T)$ associated with the problem \eqref{pb} is a viscosity solution of a related dynamic programming equation. To this end, we denote by $I(t;\xi,x)$ the absolutely continuous solution of \eqref{controlya}. We say that a trajectory-control pair $(I(t;\xi,x),\xi(t))$ is admissible if $\xi\in L^1((0,T))$ and $I(t;\xi,x)\geq 0$ for all $t\in (0,T)$, and we denote by $\mathcal A_T\subset L^1((0,T))$ the set of admissible controls. Then, we define the value function associated with \eqref{pb} \begin{equation}\label{value} u^\alpha(x,T):=\inf_{\xi \in \mathcal A_T}\int_0^T \frac{I^2(t;\xi,x)}{2}+\frac{\xi^2(t)}{2}dt+\phi(I(T;\xi,x)) \end{equation} \noindent and we consider the following Hamilton-Jacobi equation \begin{equation}\label{hj} \begin{cases} u_t-b_\alpha(x)Du+\frac{1}{2}Du^2-\frac{1}{2}x^2=0& \text{in } (0,+\infty)\times(0,+\infty)\\ u|_{t=0}=\phi(x)& x\in(0,+\infty)\,, \end{cases} \end{equation} {in which the Hamiltonian is given by the Legendre transform $$\sup_{\xi\in \mathbb R} \left\{-(b_\alpha(x)+\xi) Du-\frac{1}{2}x^2-\frac{1}{2}\xi^2\right\}\,,$$ where the supremum is attained at the optimal control $\xi=-Du$.} \begin{remark} Equation \eqref{hj} is the dynamic programming equation associated with the problem of minimizing with respect to $\xi$ \begin{equation*}\int_0^T \frac{I^2(t)}{2}+\frac{\xi^2(t)}{2}dt+\phi(I(T)) \qquad \text{subject to \eqref{controlya}}\,,\end{equation*} namely a version of \eqref{pb} in which the state constraint $I(t)\geq 0$ is removed. The possibility of neglecting the state constraint, combined with the unboundedness of the control set and the nonlinearity of the drift $b_\alpha$ (which is not globally Lipschitz when $\alpha=1$), are elements of novelty that require an ad hoc analysis. \end{remark} In agreement with the classical theory, we have the following result.
\begin{theorem}\label{thmexistence} For all $\alpha\in(0,1]$, the value function $u^\alpha$ defined in \eqref{value} is a viscosity solution of \eqref{hj}. \end{theorem} \begin{proof} The proof is based on showing that $u^{\alpha}$ simultaneously fulfills the definitions of viscosity sub-solution and of viscosity super-solution. The technical computations can be easily adapted from \cite[Theorem 10]{FIL06} (see also the proof of Theorem \ref{thmva} below) and we omit them for brevity. The main ingredients are the Dynamic Programming Principle (Proposition \ref{DPP}) and the continuity of $u^{\alpha}$ (Proposition \ref{pc}), which are proved in detail in the following subsections. \end{proof} \subsection{Auxiliary results for the proof of Theorem \ref{thmexistence}}\label{thmexp} Let $\mathbb R_+:=(0,+\infty)$. For $(x,T) \in [0,+\infty)\times\mathbb R_+$, let $\mathcal C^+(x,T)$ denote the space of non-negative, absolutely continuous functions $X: [0,T]\to[0,+\infty)$ such that $X(0)=x$. Then, the value function $u^\alpha$ defined in \eqref{value} can be rewritten in a form suitable for our purposes: \begin{equation}\label{udef+} \begin{split} u^\alpha(x,T)=\inf\left\{\int_0^T\frac{1}{2}X(t)^2+ \frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\right.\\\mid X\in \mathcal C^+(x,T)\Bigg\}. \end{split}\end{equation} The proof of Theorem \ref{thmexistence} relies on several preliminary results, which can be summarized as a set of estimates for $u^\alpha$, proved in Section \ref{estss}, the Dynamic Programming Principle, and a continuity result for $u^\alpha$, proved in Section \ref{conss} below. \subsubsection{Estimates for $u^\alpha$}\label{estss} For $R>0$, we set \begin{align}&\label{C1} C_1^\alpha(R):=(R+||b_\alpha||_{L^\infty([0,R])})^2+||\phi||_{L^\infty([0,R])}\,, \\ & \label{C2} C_2^\alpha(R):=\sup\left\{y\in(0,+\infty)\mid \hat B_\alpha(y)\leq ||\hat B_\alpha||_{L^\infty([0,R])}+1+C^\alpha_1(R)\right\}\,,\end{align} where \begin{equation}\label{Bdef} \hat B_\alpha(x):=\int_0^x -b_\alpha(s)ds.\end{equation} Note that, since $\hat B_\alpha(0)=0$, $\hat B_\alpha$ is continuous, and $\hat B_\alpha(x)\to+\infty$ as $x\to+\infty$, we have $C_2^\alpha(R)\in(0,+\infty)$ for all $R>0$. In particular $C_2^\alpha(R)\geq R$. Indeed, clearly $\hat B_\alpha(R)\leq ||\hat B_\alpha||_{L^\infty([0,R])}$, hence $R\in \{y\in(0,+\infty)\mid \hat B_\alpha(y)\leq ||\hat B_\alpha||_{L^\infty([0,R])}+1+C^\alpha_1(R)\}$ and, consequently, $R\leq C_2^\alpha(R)$. Finally, define \begin{equation}C_3^\alpha(R):=R^2+||b_\alpha||_{L^\infty([0,R])}^2.\end{equation} We remark that $C_1^\alpha(R)$ and $C_2^\alpha(R)$ depend on $\phi$ only via $||\phi||_{L^\infty([0,R])}$, whereas $C_3^\alpha(R)$ is independent of $\phi$. \medskip The next three lemmas provide estimates for $u^\alpha$ related to $C_1^\alpha$, $C_2^\alpha$ and $C_3^\alpha$. \begin{lemma}\label{l28} For each $\alpha\in[0,1]$ and $R>0$ $$\phi(0)\leq u^\alpha(x,T)\leq C_1^\alpha(R) \quad \text{ for } (x,T)\in [0,R]\times(0,\infty).$$ In particular $u^\alpha(0,T)=\phi(0)$, i.e., for all $T>0$, $u^\alpha(\cdot,T)$ attains its global minimum at $x=0$. \end{lemma} \begin{proof} The lower estimate readily follows from the assumption that $\phi$ attains its global minimum at $x=0$.
Indeed, for all $T\geq0$ and $\varepsilon>0$ there exists $X\in \mathcal C^+(x,T)$ such that \begin{align*}u^\alpha(x,T)+\varepsilon&> \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\\&\geq \phi(X(T))\geq \phi(0).\end{align*} Therefore $u^\alpha(x,T)\geq \phi(0)$ for all $x,T\geq 0$. Moreover, choosing $X(t)\equiv 0$, we have for all $T>0$ $$u^\alpha(0,T)\leq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))=\phi(0).$$ Hence $\phi(0)=u^\alpha(0,T)\leq u^\alpha(x,T)$ for all $x\geq 0$, $T>0$. To prove the upper estimate, consider the curve $X\in \mathcal C^+(x,T)$ defined by $$X(t)=\begin{cases} x-tx &\text{for } 0\leq t\leq 1\\ 0 & \text{ for } t>1 \end{cases}$$ so that $0\leq X(t)\leq x\leq R$ and $\dot X(t)=-x$ for $t\in [0,1]$. Since $b_\alpha(0)=0$, if $T\geq 1$ then $$\int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt=\int_0^1\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt$$ and $\phi(X(T))=\phi(0)$. For a general $T>0$ we then have \begin{align*} u^\alpha(x,T)\leq& \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\\ \leq&\int_0^1 \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+||\phi||_{L^\infty([0,R])}\\ =&\int_0^1\frac{1}{2} x^2(1-t)^2+\frac{1}{2}(b_\alpha(x(1-t))+x)^2dt+||\phi||_{L^\infty([0,R])}\\ \leq&\int_0^1\frac{1}{2} R^2+\frac{1}{2}(||b_\alpha||_{L^\infty([0,R])}+R)^2dt+||\phi||_{L^\infty([0,R])}\\ =& \,R^2+\frac{1}{2}||b_\alpha||_{L^\infty([0,R])}^2+R||b_\alpha||_{L^\infty([0,R])}+||\phi||_{L^\infty([0,R])}\\ \leq&\, C^\alpha_1(R). \end{align*}\end{proof} \begin{lemma}\label{l29} Fix $\alpha\in[0,1]$. For each $R>0$, if $(x,T)\in [0,R]\times \mathbb R_+$, $X\in \mathcal C^+(x,T)$ and \begin{equation}\label{cond}u^\alpha(x,T)+1\geq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T)),\end{equation} then for $t\in(0,T]$ $$|X(t)|\leq C_2^\alpha(R).$$ \end{lemma} \begin{proof} Let $X\in\mathcal C^+(x,T)$ satisfy \eqref{cond}. Then $X(t)\geq 0$ by the definition of $\mathcal C^+(x,T)$. To prove $X(t)\leq C_2^\alpha(R)$, note that by Lemma \ref{l28} $$ \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\leq 1+C^\alpha_1(R).$$ In particular, also recalling that $\phi$ is assumed to be non-negative, for all $\tau \in [0,T]$ $$ \int_0^{\tau} \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\leq 1+C^\alpha_1(R).$$ Since $\frac{1}{2}(b_\alpha(X)-\dot X)^2\geq -b_\alpha(X)\dot X$, we deduce $$ \int_0^\tau-b_\alpha(X(t))\dot X(t)dt\leq 1+C_1^\alpha(R).$$ Integrating the left-hand side of the above expression, we get \begin{equation}\label{Balpha} \hat B_\alpha(X(\tau))\leq \hat B_\alpha(x)+1+C^\alpha_1(R)\leq ||\hat B_\alpha||_{L^\infty([0,R])}+1+C^\alpha_1(R).\end{equation} Since $X(\tau)\geq 0$, we deduce the claimed inequality $X(\tau)\leq C^\alpha_2(R)$ for all $\tau\in[0,T]$. \end{proof} \begin{lemma}\label{l42} For all $\alpha\in[0,1]$ and for all $(x,T)\in [0,+\infty)\times \mathbb R_+$ it holds that $$u^\alpha(x,T)\leq \phi(x)+\frac{1}{2}(x^2+(b_\alpha(x))^2)T.$$ \end{lemma} \begin{proof} Choose $X(t)\equiv x$ and remark that \begin{align*}u^\alpha(x,T)&\leq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\\ &=\frac{1}{2}(x^2+b_\alpha(x)^2)T+\phi(x).\end{align*} \end{proof} \subsubsection{Continuity of $u^\alpha$}\label{conss} The next two results investigate the dependence of $u^\alpha$ on the initial datum $\phi$ and the local uniform continuity of $u^\alpha$ with respect to the space variable $x$, respectively.
For any continuous function $f:{D}\subseteq [0,+\infty)\to \mathbb R$ we call \emph{modulus (of continuity)} of $f$ any increasing, continuous function $\omega : [0, \infty)\to [0, \infty)$ such that $\omega(0) = 0$, $\omega(r) > 0$ for every $r > 0$, and $|f(x_1) - f(x_2)| \leq \omega(|x_1 - x_2|)$ for all $x_1, x_2\in {D}$. We denote by $\omega_{\phi,R}$ the modulus of continuity of $\phi$ restricted to $[0,R]$. \begin{lemma} \label{l456} Let $\alpha\in[0,1]$ and $R>0$. Define $$C^\alpha(R):=\max\{C^\alpha_1(R),||b_\alpha||_{L^\infty([0,C^\alpha_2(R)])}\}.$$ For all $(x,T)\in [0,R]\times\mathbb R_+$, if $X\in \mathcal C^+(x,T)$ and $$u^\alpha(x,T)+1\geq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T)),$$ then \begin{equation}\label{sigma}|X(t)-x|\leq \sigma_R(t):=tC^\alpha(R)+\sqrt{t}(C^\alpha(R)+2)\quad \text{for } t\in[0,T].\end{equation} Moreover, for all $(x,t)\in [0,R]\times\mathbb R_+$ \begin{equation}\label{nu}|u^\alpha(x,t)-\phi(x)|\leq \nu_R(t):=\max\{C_3^\alpha(R)t,\omega_{\phi,R}( \sigma_R(t))\}.\end{equation} In particular, $\nu_R$ depends on $\phi$ only via $\omega_{\phi,R}$ and $||\phi||_{L^\infty([0,R])}$. \end{lemma} \begin{proof} Assume that $X\in \mathcal C^+(x,T)$ satisfies $$u^\alpha(x,T)+1\geq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T)).$$ By Lemma \ref{l28} and Lemma \ref{l29}, we respectively get \begin{align*}&u^\alpha(x,T)\leq C^\alpha_1(R)\leq C^\alpha(R) \qquad \text{for } (x,T)\in [0,R]\times (0,+\infty)\,,\\ &|X(t)|\leq C_2^\alpha(R)\qquad \text{for } t\geq 0.\end{align*} Moreover, it follows from the latter inequality --see also the definition of $C^\alpha(R)$-- that $$|b_\alpha(X(t))|\leq C^\alpha(R) \qquad \text{for } t\geq 0.$$ Fix $\tau\in(0,T]$. Since $\phi$ is non-negative, we have \begin{equation}\label{oo1}\begin{split}C^\alpha(R)+1\geq\int_0^\tau \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\\ \geq \int_0^\tau \frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\,.\end{split} \end{equation} Moreover, since for all $A>0$ and $y\in \mathbb R$ $$\frac{1}{2}y^2\geq A |y|-A^2,$$ choosing $A=1/\sqrt{\tau}$ and $y=y(t)=b_\alpha(X(t))-\dot X(t)$, we get \begin{align*} \int_0^\tau \frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt&\geq \frac{1}{\sqrt{\tau}}\int_0^\tau |b_\alpha(X(t))-\dot X(t)|dt-1\\ &\geq \frac{1}{\sqrt{\tau}}\int_0^\tau |\dot X(t)|- |b_\alpha(X(t))|dt-1\\ &\geq \frac{1}{\sqrt{\tau}}\int_0^\tau |\dot X(t)|- C^\alpha(R)dt-1\\ &\geq\frac{1}{\sqrt{\tau}}|X(\tau)-x|- \sqrt{\tau}C^\alpha(R)-1 \end{align*} and this, together with \eqref{oo1}, implies \eqref{sigma}. Now, by Lemma \ref{l42}, we readily get $$u^\alpha(x,t)-\phi(x)\leq C^\alpha_3(R)t\leq \nu_R(t)\quad \text{for }(x,t)\in [0,R]\times\mathbb R_+.$$ On the other hand, for $t\in \mathbb R_+$ and $\varepsilon\in(0,1)$ there exists $X\in \mathcal C^+(x,t)$ such that $$u^\alpha(x,t)+\varepsilon> \int_0^t \frac{1}{2}X(s)^2+\frac{1}{2}(b_\alpha(X(s))-\dot X(s))^2ds+\phi(X(t))\geq \phi(X(t)).$$ In view of the arguments above, $X$ verifies \eqref{sigma}. This, together with the arbitrariness of $\varepsilon$, implies $$u^\alpha(x,t)-\phi(x)\geq -\omega_{\phi,R}(\sigma_R(t))\geq -\nu_R(t) $$ and completes the proof. \end{proof} \begin{lemma}\label{gammamodulus} Let $\alpha\in[0,1]$. For each $R>0$ there exists a modulus $\gamma_R$ for $u^\alpha(\cdot,T)$ in $[0,R]$, valid for all $T>0$.
More precisely, for each $x,y\in [0,R]$ and $T>0$ $$|u^\alpha(x,T)-u^\alpha(y,T)|\leq \gamma_R(|x-y|).$$ Moreover, $\gamma_R$ depends on $\phi$ only via $\omega_{\phi,R}$ and $||\phi||_{L^\infty([0,R])}$. \end{lemma} \begin{proof} We first consider the case $T\leq |x-y|$. Taking $\nu_R$ as in Lemma \ref{l456}, and enlarging it if necessary, we may assume without loss of generality that $|\phi(x)-\phi(y)|\leq \nu_R(|x-y|)$, so that \begin{align*} |u^\alpha(x,T)-u^\alpha(y,T)|&\leq |u^\alpha(x,T)-\phi(x)|+|\phi(x)-\phi(y)|+|\phi(y)-u^\alpha(y,T)|\\ &\leq 3\nu_R(|x-y|).\end{align*} Assume now $T>|x-y|$ and, swapping the names of the variables $x$ and $y$ if necessary, assume that $u^\alpha(x,T)\leq u^\alpha(y,T)$. Fix $\varepsilon\in(0,1)$ and select $X\in\mathcal C^+(x,T)$ such that $$u^\alpha(x,T)+\varepsilon\geq \int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T)).$$ Since $\varepsilon<1$, we know by Lemma \ref{l29} that $$|X(t)|\leq C^\alpha_2(R)\quad \text{for } t\in(0,T].$$ On the other hand, replacing $\nu_R$ by $\nu_{C_2^\alpha(R)}$ if necessary, we have $$|u^\alpha(\hat x,|x-y|)-\phi(\hat x)|\leq \nu_R(|x-y|)\quad \text{for } \hat x\in [0,C_2^\alpha(R)].$$ Applying the above inequality to $\hat x=X(T-|x-y|)$, we get \begin{equation}\label{bu}\begin{split}u^\alpha(x,T)+\varepsilon&\geq \int_0^{T-|x-y|} \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\\ &\quad +u^\alpha(X(T-|x-y|),|x-y|)\\ &\geq \int_0^{T-|x-y|} \frac{1}{2}X(t)^2+ \frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\\ &\quad +\phi(X(T-|x-y|))-\nu_R(|x-y|).\end{split} \end{equation} Now, let $Y\in \mathcal C^+(y,T)$ be defined by $$Y(t):=\begin{cases} y+\frac{t}{|x-y|}(x-y)& \text{for } 0\leq t\leq |x-y|\\ X(t-|x-y|)& \text{for } |x-y|< t\leq T. \end{cases}$$ Then, for $t\in[0,|x-y|]$ one has $Y(t)\in[0,R]$ and $\dot Y(t)\in[-1,1]$. Setting $$\tilde C^\alpha(R):=\frac{1}{2}\sup\{\xi^2+(b_\alpha(\xi)-\eta)^2\mid (\xi,\eta)\in [0,R]\times [-1,1]\}$$ one gets, also using the last inequality in \eqref{bu}, \begin{align*} u^\alpha(y,T)-u^\alpha(x,T)\leq &\int_0^T \frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt + \phi(Y(T))+\varepsilon \\ &-\int_0^{T-|x-y|} \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\\ & -\phi(X(T-|x-y|))+\nu_R(|x-y|)\\ =&\int_0^{|x-y|} \frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt +\nu_R(|x-y|)+\varepsilon\\ \leq &\tilde C^\alpha(R)|x-y|+\nu_R(|x-y|)+\varepsilon.\end{align*} In particular, since $u^\alpha(y,T)\geq u^\alpha(x,T)$, \begin{align*} |u^\alpha(y,T)-u^\alpha(x,T)|&=u^\alpha(y,T)-u^\alpha(x,T)\\&\leq \tilde C^\alpha(R)|x-y|+\nu_R(|x-y|)\,, \end{align*} so that in both cases the modulus $\gamma_R(r):=3\nu_R(r)+\tilde C^\alpha(R)r$ works, and this concludes the proof of the local uniform continuity of $u^\alpha$ in $x$. Since $\tilde C^\alpha(R)$ is independent of $\phi$, we deduce from Lemma \ref{l456} and from the definitions of $C_2^\alpha(R)$ and $C^\alpha(R)$ that $\nu_R$, hence $\gamma_R$, depends on $\phi$ only via $\omega_{\phi,R}$ and $||\phi||_{L^\infty([0,R])}$. \end{proof} \begin{proposition} [Dynamic Programming Principle]\label{DPP} Let $\alpha\in[0,1]$. For all $x\geq 0$ and for all $S,T>0$ \begin{equation*} \begin{split} u^\alpha(x,T+S)=\inf\left\{\int_0^T \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+u^\alpha(X(T),S)\right.\\ \mid X\in \mathcal C^+(x,T)\Bigg\}.
\end{split} \end{equation*} \end{proposition} \begin{proof} Set for brevity \begin{equation*} \begin{split} \tilde u^\alpha(x,T,S):=\inf\left\{\int_0^T \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+u^\alpha(X(T),S)\right.\\ \mid X\in \mathcal C^+(x,T)\Bigg\}. \end{split} \end{equation*} Fix $T,S>0$, $x\geq0$ and $\varepsilon>0$. Let $X\in \mathcal C^+(x,T)$ be such that $$\int_0^T \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+u^\alpha(X(T),S)\leq \tilde u^\alpha(x,T,S)+\frac{\varepsilon}{2}.$$ Let $Y\in\mathcal C^+(X(T),S)$ be such that $$\int_0^S \frac{1}{2}Y^2(t)+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt+\phi(Y(S))< u^\alpha(X(T),S)+\frac{\varepsilon}{2}.$$ Then the map $$Z(t):=\begin{cases} X(t)& \text{if } t\in[0,T]\\ Y(t-T)& \text{if } t\in[T,T+S]\end{cases}$$ belongs to $\mathcal C^+(x,T+S)$ and satisfies \begin{align*} u^\alpha(x,T+S)&\leq \int_0^{T+S} \frac{1}{2}Z^2(t)+\frac{1}{2}(b_\alpha(Z(t))-\dot Z(t))^2dt+\phi(Z(S+T))\\ &< \int_0^T \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+u^\alpha(X(T),S)+\frac{\varepsilon}{2}\\ &\leq\tilde u^\alpha(x,T,S)+\varepsilon. \end{align*} By the arbitrariness of $\varepsilon$, one deduces $u^\alpha(x,T+S)\leq \tilde u^\alpha(x,T,S)$. To prove the reverse inequality, let $\varepsilon>0$ and $X\in \mathcal C^+(x,T+S)$ be such that $$\int_0^{T+S} \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(S+T))<u^\alpha(x,T+S)+\varepsilon.$$ It follows that \begin{align*} \tilde u^\alpha(x,T,S)\leq & \int_0^{T} \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+u^\alpha(X(T),S)\\ \leq&\int_0^{T} \frac{1}{2}X^2(t)+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\\ &+\int_0^{S} \frac{1}{2}X^2(t+T)+\frac{1}{2}(b_\alpha(X(t+T))-\dot X(t+T))^2dt\\ &+\phi(X(T+S))\\ <& u^\alpha(x,T+S)+\varepsilon. \end{align*} Then $\tilde u^\alpha(x,T,S)\leq u^\alpha(x,T+S)$, which concludes the proof. \end{proof} We extend the definition of $u^\alpha$ by setting $u^\alpha(x,0)=\phi(x)$ for $x\geq0$. \begin{proposition}\label{pc} $u^\alpha\in C([0,+\infty)\times [0,+\infty))$. \end{proposition} \begin{proof} Let $R>0$. By Lemma \ref{gammamodulus} there exists a modulus of continuity $\gamma_R$ such that for every $S>0$ and $x,y\in [0,R]$ one has $|u^\alpha(x,S)-u^\alpha(y,S)|\leq \gamma_R(|x-y|)$. In other words, setting $\bar \phi(\cdot):=u^\alpha(\cdot,S)$, we have that $\bar \phi$ is a locally uniformly continuous map, and by Lemma \ref{l28} it attains its global minimum at $x=0$. By Proposition \ref{DPP}, for every $T>0$ \begin{equation*} \begin{split} u^\alpha(x,T+S)=\inf\left\{\int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\bar \phi(X(T))\right.\\\mid X\in \mathcal C^+(x,T)\Bigg\}. \end{split} \end{equation*} Applying Lemma \ref{l456} to $\bar u^\alpha(x,T):=u^\alpha(x,T+S)$ and to $\bar \phi$, we deduce that there exists a modulus of continuity $\bar \nu_R$ satisfying $$|u^\alpha(x,S+T)-u^\alpha(x,S)|=|\bar u^\alpha(x,T)-\bar \phi(x)|\leq \bar \nu_R(T).$$ Note that $\bar \nu_R$ depends on $\bar \phi$ (hence on $u^\alpha(\cdot,S)$) only via $||\bar \phi||_{L^\infty([0,R])}$ and via the modulus of continuity of $\bar \phi$. In particular, by Lemma \ref{l28}, we have $||\bar \phi||_{L^\infty([0,R])}\leq C_1^\alpha(R)$. On the other hand, the modulus of continuity of $\bar \phi$ is simply $\gamma_R$, i.e., the modulus of continuity of $u^\alpha(\cdot, S)$, which is independent of $S$ by Lemma \ref{gammamodulus}.
We then conclude that $$|u^\alpha(x,s)-u^\alpha(x,t)|\leq \bar \nu_R(|s-t|)\qquad \text{for }x\in [0,R],\, t,s\in[0,+\infty)$$ and, consequently, combining this with the spatial modulus $\gamma_R$, we deduce the continuity of $u^\alpha$ on $[0,R]\times[0,+\infty)$. Since $R$ is arbitrary, the proof is complete. \end{proof} \section{Asymptotic solutions} In this section, we consider the infinite horizon problem of minimizing with respect to $\xi$ \begin{equation}\label{pbinf}\int_0^{+\infty} \frac{I^2(t)}{2}+\frac{\xi^2(t)}{2}dt \qquad \text{subject to \eqref{controlya} and } I(t)\geq 0,\ t>0,\end{equation} and the associated value function $$v^\alpha(x):=\inf_{\xi\in \mathcal A_\infty}\int_0^{+\infty} \frac{I^2(t;\xi,x)}{2}+\frac{\xi^2(t)}{2}dt\,,$$ where $\mathcal A_\infty\subset L^1((0,+\infty))$ is the set of admissible controls, i.e., $\xi\in \mathcal A_\infty$ if $I(t;\xi,x)\geq 0$ for all $t>0$. Our aim is to prove that, in a suitable sense, the value function $u^\alpha(x,T)$ of the finite horizon problem of Section \ref{s3} tends to $v^\alpha(x)$ as $T\to+\infty$. We point out the lack of a discount factor in the optimization problem \eqref{pbinf}, so our first goal is to prove that $v^\alpha(x)$ is finite for all $x\geq 0$. To this end, we introduce a representation formula for $v^\alpha$, based on the functions $d^\alpha:[0,+\infty)\times [0,+\infty)\to [0,+\infty)$ and $\psi^\alpha:[0,+\infty)\to [0,+\infty)$ respectively defined by \begin{equation*} \begin{split} d^\alpha(x,y):=\inf\left\{\int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt\right.\\\mid T>0,\, X\in \mathcal C^+(x,y,T)\Bigg\} \end{split} \end{equation*} and \begin{equation*} \begin{split} \psi^\alpha(x):=\inf\left\{\int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\right.\\\mid T>0,\, X\in \mathcal C^+(x,T)\Bigg\}\,, \end{split} \end{equation*} where, for $x,y\geq0$, $\mathcal C^+(x,y,T)$ denotes the space of non-negative, absolutely continuous functions $X: [0,T]\to[0,+\infty)$ satisfying $(X(0),X(T))=(x,y)$. \begin{theorem}\label{thmva} For all $\alpha\in[0,1]$ and $x\geq 0$ \begin{equation}\label{vchar} v^\alpha(x)=d^\alpha(x,0)+\psi^\alpha(0). \end{equation} In particular \begin{equation}\label{vcharbond}0\leq v^\alpha(x)\leq(x+||b_\alpha||_{L^\infty([0,x])})^2+\psi^\alpha(0)\end{equation} for all $x\geq 0$. Moreover, $v^\alpha$ is a viscosity solution of \begin{equation}\label{hjstat}\begin{cases} -b_\alpha(x)Dv+\frac{1}{2}Dv^2-\frac{1}{2}x^2=0, \quad x\in\mathbb R_+\\ v(0)=\phi(0). \end{cases} \end{equation} \end{theorem} The proof is postponed to Section \ref{s41} below. We finally state our main result, whose proof is given in Section \ref{s42}. \begin{theorem}\label{thm1} For all $\alpha\in[0,1]$, the value function $u^\alpha$ defined in \eqref{value} satisfies, for all $R>0$, \begin{equation}\label{limit} \lim_{T\to+\infty}\max_{x\in[ 0,R]}|u^\alpha(x,T)-v^\alpha(x)|=0. \end{equation} \end{theorem} \begin{remark}\label{rmk1} An explicit viscosity solution of \eqref{hjstat} is provided by $$\bar v^\alpha(x):=\phi(0)+\int_0^x b_\alpha(s)+\sqrt{b_\alpha^2(s)+s^2}\,ds\,.$$ In particular, $\bar v^\alpha$ is smooth, nonnegative, increasing, and satisfies $\bar v^\alpha(x)\to+\infty$ as $x\to +\infty$.
Moreover, for $\alpha=1$, using the Wolfram Mathematica software, we obtain the following closed form for $\bar v^1$: \begin{align*} \bar v^1(x)=&\phi(0)+C_0(\beta,\gamma,N)\\ &-\frac{\beta x^3}{3 N}+\frac{1}{2} x^2 (\beta -\gamma )+\frac{N^2}{\beta^2}\left(\frac{1}{3} \left(y^2(x)+1\right)^{3/2}\right.\\&\left.-\frac{1}{2} (\beta -\gamma)y(x) \sqrt{y^2(x)+1} -\frac{1}{2} (\beta -\gamma ) \sinh ^{-1}\left(y(x)\right)\right)\,, \end{align*} where $y(x):=\beta-\gamma-\frac{\beta}{N}x$ and \begin{equation*} \begin{split} C_0(\beta,\gamma,N)=\frac{N^2}{\beta^2}\left(\frac{1}{2}((\beta -\gamma )^2+1)^{1/2} (\beta -\gamma )^2-\frac{1}{3} \left((\beta -\gamma )^2+1\right)^{3/2}\right.\\\left.+\frac{1}{2} (\beta -\gamma ) \sinh ^{-1}(\beta -\gamma )\right). \end{split} \end{equation*} \end{remark} \subsection{Preliminary results and proof of Theorem \ref{thmva}}\label{s41} Our first result proves the first part of the claim of Theorem \ref{thmva}, that is, \eqref{vchar} and \eqref{vcharbond}. \begin{lemma} For all $\alpha\in[0,1]$ and $x\geq 0$ \begin{equation*} v^\alpha(x)=d^\alpha(x,0)+\psi^\alpha(0). \end{equation*} In particular $$0\leq v^\alpha(x)\leq(x+||b_\alpha||_{L^\infty([0,x])})^2+\psi^\alpha(0)$$ for all $x\geq 0$. \end{lemma} \begin{proof} Fix $\varepsilon>0$ and let $T>0$ and $X\in \mathcal C^+(x,0,T)$ be such that $$\int_0^T \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt<d^\alpha(x,0)+\varepsilon.$$ Then, prolonging the definition of $X$ to $(T,+\infty)$ by setting $X(t)=0$ for all $t>T$, and letting $\xi\in L^1((0,+\infty))$ be such that $\xi(t)=\dot X(t)-b_\alpha(X(t))$ a.e., one deduces that $X(t)=I(t;\xi,x)$ is an admissible trajectory, and consequently $v^\alpha(x)< d^\alpha(x,0)+\varepsilon$. By the arbitrariness of $\varepsilon$ one deduces $v^\alpha(x)\leq d^\alpha(x,0)$ and, since $\psi^\alpha(0)$ is nonnegative, also $v^\alpha(x)\leq d^\alpha(x,0)+\psi^\alpha(0)$. The reverse inequality can be proved similarly. In view of \eqref{vchar}, taking $X\in \mathcal C^+(x,0,1)$ given by $X(t):=x(1-t)$, one has $X(t)\in[0,x]$ for all $t\in[0,1]$ and \begin{align*} v^\alpha(x)&=d^\alpha(x,0)+\psi^\alpha(0)\\ &\leq \int_0^1 \frac{1}{2}(x(1-t))^2+\frac{1}{2}(b_\alpha(x(1-t))+x)^2dt+\psi^\alpha(0)\\ &\leq (x+||b_\alpha||_{L^\infty([0,x])})^2+\psi^\alpha(0). \end{align*} \end{proof} \begin{lemma}\label{l1} For all $\alpha\in[0,1]$, the function $d^\alpha$ is locally Lipschitz continuous in $[0,+\infty)\times [0,+\infty)$. In particular, $v^\alpha$ is locally Lipschitz continuous in $[0,+\infty)$. Moreover, the following properties hold for all $x,y,z\in [0,+\infty)$: $$d^\alpha(x,y)\geq0;\quad d^\alpha(x,y)\leq d^\alpha(x,z)+d^\alpha(z,y);\quad d^\alpha(x,x)=0.$$ \end{lemma} \begin{proof} \begin{enumerate} \item[1.] Fix $x,y\in[0,+\infty)$ and $\varepsilon>0$. Choose $T>0$ and $X\in \mathcal C^+(x,y,T)$ so that $$d^\alpha(x,y)+\varepsilon>\int_0^T\frac{1}{2} X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt.$$ Since the integrand in the above expression is non-negative, we deduce by the arbitrariness of $\varepsilon$ that $d^\alpha(x,y)\geq 0$. \item[2.] Fix $R>0$ and $x,y\in [0,R]$. As in the proof of Lemma \ref{gammamodulus}, we set \begin{align*} \tilde C^\alpha_R:=&\max\left\{ \frac{1}{2}x^2+\frac{1}{2}(b_\alpha(x)-\xi)^2\mid (x,\xi)\in[0,R]\times [-1,1]\right\}\,.
\end{align*} We first assume that $x\neq y$. We define the curve $X\in \mathcal C^+(x,y,|x-y|)$ by $$X(t)=x-\frac{ t}{|x-y|}(x-y)\qquad \text{for } 0\leq t\leq |x-y|,$$ and we observe that $X(t)\in[0, R]$ and $\dot X(t)\in [-1,1]$ for all $t\in[0,|x-y|]$. Hence \begin{align*} d^\alpha(x,y)&\leq \int_{0}^{|x-y|}\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2 dt\\ &\leq \tilde C^\alpha_R |x-y|. \end{align*} Now, we consider the case $x=y$. Fix any $T>0$ and set $X(t)=x$ for $t\in[0,T]$. Then we have \begin{align*} d^\alpha(x,x)&\leq \int_{0}^{T}\frac{1}{2}x^2+\frac{1}{2}b^2_\alpha(x) dt\leq \tilde C^\alpha_R T, \end{align*} and, letting $T\to 0^+$, we deduce $d^\alpha(x,x)=0$. Therefore \begin{equation}\label{dcont} d^\alpha(x,y)\leq \tilde C^\alpha_R|x-y|\quad \forall x,y\in[ 0,R]. \end{equation} \item[3.] Let $x,y,z\in[0,+\infty)$. Let $T,S>0$, $X\in \mathcal C^+(x,z,T)$ and $Y\in \mathcal C^+(z,y,S)$. Define $Z\in \mathcal C^+(x,y,T+S)$ by $$ Z(t)=\begin{cases} X(t) \quad &\text{ for } 0\leq t\leq T,\\ Y(t-T)\quad &\text{ for } T< t\leq T+S. \end{cases} $$ We have \begin{equation*}\begin{split} d^\alpha(x,y)\leq& \int_{0}^{T+S}\left(\frac{1}{2}Z(t)^2+\frac{1}{2}(b_\alpha(Z(t))-\dot Z(t))^2\right) dt\\ =&\int_{0}^{T}\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2 dt\\ &+\int_{0}^{S}\frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt\,. \end{split}\end{equation*} By the arbitrariness of $X$ and $Y$, we deduce \begin{equation}\label{triangular} d^\alpha(x,y)\leq d^\alpha(x,z)+d^\alpha(z,y).\end{equation} \item[4.] Finally, using \eqref{dcont} and \eqref{triangular}, we deduce that for all $R>0$ and for all $x,y,\xi,\eta\in[0,R]$ $$|d^\alpha(x,y)-d^\alpha(\xi,\eta)|\leq \tilde C^\alpha_R(|x-\xi|+|y-\eta|)\leq 2\tilde C^\alpha_R||(x,y)-(\xi,\eta)||.$$ The local Lipschitz continuity of $v^\alpha$ readily follows from the identity $v^\alpha(x)=d^\alpha(x,0)+\psi^\alpha(0)$. \end{enumerate} \end{proof} \subsubsection{Proof of Theorem \ref{thmva}} It remains to prove that, for all $\alpha\in[0,1]$, $v^\alpha$ is a viscosity solution of \eqref{hjstat}. \begin{proof} Let $\varphi\in C^1((0,+\infty))$ and $\hat x\in (0,+\infty)$. We first assume that $v^\alpha-\varphi$ attains a local maximum at $\hat x$. We may assume, without loss of generality, that $v^\alpha(\hat x)-\varphi(\hat x)= 0$, so that $v^\alpha\leq \varphi$ in $B_\delta(\hat x)$ for some $\delta>0$. Setting $X(t)=\hat x+ (b_\alpha(\hat x)-D\varphi(\hat x))t$, by Lemma \ref{l1} and by the definition of $d^\alpha$, it follows that, for all $\varepsilon>0$ such that $X(\varepsilon)\in B_\delta(\hat x)$, \begin{align*}\varphi (\hat x)&=v^\alpha(\hat x)=d^\alpha(\hat x,0)+\psi^\alpha(0)\\ &\leq d^\alpha(\hat x,X(\varepsilon))+d^\alpha(X(\varepsilon),0)+\psi^\alpha(0)\\ &= d^\alpha(\hat x,X(\varepsilon))+v^\alpha(X(\varepsilon))\\ &\leq d^\alpha(\hat x,X(\varepsilon))+\varphi(X(\varepsilon)) \\ &\leq \int_0^\varepsilon \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2 dt+ \varphi(X(\varepsilon)) \,. \end{align*} Since $X(\varepsilon)=\hat x+(b_\alpha(\hat x)-D\varphi(\hat x))\varepsilon$, then for all sufficiently small $\varepsilon>0$, $$ \frac{\varphi(\hat x)-\varphi(\hat x+(b_\alpha(\hat x)-D\varphi(\hat x))\varepsilon)}{\varepsilon}-\frac{1}{\varepsilon}\int_0^\varepsilon \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2 dt\leq0\,.
$$ Letting $\varepsilon\to 0^+$ and noting that $X(0)=\hat x$ and $\dot X(0)=b_\alpha(\hat x)-D\varphi(\hat x)$, we obtain after a few computations $$\frac{1}{2}D\varphi(\hat x)^2-\frac{1}{2}\hat x^2-b_\alpha(\hat x)D\varphi(\hat x)\leq 0.$$ We deduce, by the arbitrariness of $\varphi$, that $v^\alpha$ is a viscosity subsolution of \eqref{hjstat}. Now we assume that $v^\alpha-\varphi$ attains a local minimum at $\hat x$. Again, we may assume, without loss of generality, that $v^\alpha(\hat x)-\varphi(\hat x)= 0$, so that $v^\alpha\geq \varphi$ in $B_\delta(\hat x)$ for some $\delta>0$. Let $C^\alpha(\hat x)$ be as in Lemma \ref{l456} (with $R=\hat x$), let $\varepsilon\in(0,\min\{1,\delta^2 /(2(C^\alpha(\hat x)+1))^2\})$, and choose $T>0$ and $X_\varepsilon \in \mathcal{ C}^+(\hat x, 0,T)$ such that $$d^\alpha(\hat x ,0)+\varepsilon^2> \int_0^T \frac{1}{2} X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2 dt\,.$$ If $T<1$, we may prolong the definition of $X_\varepsilon$ to $[0,1]$ by setting $X_\varepsilon\equiv 0$ in $(T,1]$, so that the above inequality holds with $T$ replaced by $1$ as well. Therefore we may assume, without loss of generality, that $T\geq1$. Arguing as in the proof of Lemma \ref{l456}, one can deduce that $$|X_\varepsilon(t)-\hat x|\leq tC^\alpha(\hat x)+\sqrt{t}(C^\alpha(\hat x)+2)\quad \text{for } t\in[0,T].$$ In particular, we have $\varepsilon<1\leq T$ and $$|X_\varepsilon(t)-\hat x|\leq t C^\alpha(\hat x)+\sqrt{t}(C^\alpha(\hat x)+2)<2\sqrt{\varepsilon}(C^\alpha(\hat x)+1)\leq \delta \qquad \forall t\in[0,\varepsilon].$$ Hence, for all $t\in[0,\varepsilon]$, $X_\varepsilon(t)\in B_\delta(\hat x)$ and therefore $v^\alpha(X_\varepsilon(t))\geq \varphi(X_\varepsilon(t))$. Then \begin{align*}\varphi (\hat x)+\varepsilon^2=&v^\alpha(\hat x)+\varepsilon^2=d^\alpha(\hat x,0)+\varepsilon^2+\psi^\alpha(0)\\ >& \int_0^T \frac{1}{2}X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2 dt+\psi^\alpha(0)\\ =&\int_0^{\varepsilon} \frac{1}{2}X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2 dt\\+ &\int_0^{T-\varepsilon} \frac{1}{2}X_\varepsilon(t+\varepsilon)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t+\varepsilon))-\dot X_\varepsilon(t+\varepsilon))^2 dt+\psi^\alpha(0)\\ \geq&\int_0^{\varepsilon} \frac{1}{2}X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2 dt+v^\alpha(X_\varepsilon(\varepsilon))\\ \geq&\int_0^{\varepsilon} \frac{1}{2}X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2 dt+\varphi(X_\varepsilon(\varepsilon)). \end{align*} Now, $$\varphi(X_\varepsilon(\varepsilon))-\varphi(\hat x)=\varphi(X_\varepsilon(\varepsilon))- \varphi(X_\varepsilon(0))=\int_0^\varepsilon D\varphi(X_\varepsilon(t))\dot X_\varepsilon(t)dt\,,$$ hence \begin{equation}\label{ah} \varepsilon^2>\int_0^{\varepsilon} \frac{1}{2}X_\varepsilon(t)^2+\frac{1}{2}(b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))^2+ D\varphi(X_\varepsilon(t))\dot X_\varepsilon(t)dt\,.
\end{equation} Since $y^2/2\geq z y -z^2/2 $ for all $y,z\in\mathbb R$, setting $y=y(t)=b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t)$ and $z=z(t)=D\varphi(X_\varepsilon(t))$, we get $$\frac{1}{2}(b_\alpha(X_\varepsilon(t))- \dot X_\varepsilon(t))^2\geq (b_\alpha(X_\varepsilon(t))-\dot X_\varepsilon(t))D\varphi(X_\varepsilon(t))-\frac{1}{2}D\varphi(X_\varepsilon(t))^2\quad \forall t\in[0,\varepsilon]\,.$$ This, together with \eqref{ah}, implies \begin{align*}\varepsilon&> \frac1\varepsilon\int_0^{\varepsilon} \frac{1}{2}(X_\varepsilon(t))^2+b_\alpha(X_\varepsilon(t))D\varphi(X_\varepsilon(t)) -\frac{1}{2}D\varphi(X_\varepsilon(t))^2dt\,. \end{align*} Letting $\varepsilon\to 0^+$, we finally conclude $$\frac{1}{2}D\varphi(\hat x)^2-\frac{1}{2}\hat x^2-b_\alpha(\hat x)D\varphi(\hat x)\geq 0.$$ We deduce, by the arbitrariness of $\varphi$, that $v^\alpha$ is also a viscosity supersolution of \eqref{hjstat}, and this concludes the proof. \end{proof} \subsection{Preliminary results and proof of Theorem \ref{thm1}}\label{s42} We begin with the following regularity result for $\psi^\alpha$. \begin{lemma}For all $\alpha\in[0,1]$, $\psi^\alpha$ is locally Lipschitz continuous. \end{lemma} \begin{proof} Fix $R>0$, $\varepsilon>0$, and choose $x,y\in [0,R]$ such that $\psi^\alpha(x)<\psi^\alpha(y)$. Recall from Lemma \ref{gammamodulus} the definition \begin{align*} \tilde C^\alpha_R:=&\max\left\{ \frac{1}{2}x^2+\frac{1}{2}(b_\alpha(x)-\xi)^2\mid (x,\xi)\in [0,R]\times [-1,1]\right\}. \end{align*} Choose $T>0$ and $X\in \mathcal C^+(x,T)$ such that $$\psi^\alpha(x)+\varepsilon>\int_0^T\frac{1}{2} X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T)).$$ Define $Y\in \mathcal C^+(y, T+|x-y|)$ by $$Y(t)=\begin{cases} y+t\dfrac{x-y}{|x-y|}\quad&\text{ for } 0\leq t< |x-y|,\\ X(t-|x-y|)&\text{ for } |x-y|\leq t\leq T+|x-y|. \end{cases}$$ We have \begin{align*} \psi^\alpha(y)\leq&\int_0^{T+|x-y|}\frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt+\phi(Y(T+|x-y|))\\ =&\int_0^{|x-y|}\frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt\\ &+\int_0^{T} \frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(T))\\ <&\tilde C^\alpha_R|x-y|+\varepsilon+\psi^\alpha(x). \end{align*} By the arbitrariness of $\varepsilon$, we deduce $|\psi^\alpha(y)-\psi^\alpha(x)|\leq \tilde C^\alpha_R|x-y|$, and this completes the proof. \end{proof} \begin{lemma}\label{l25}Let $\alpha\in[0,1]$, $R>0$, and $\varepsilon>0$. There exists a constant $T>0$ such that, for each $x,y \in [0,R]$, there exist $S\in(0,T]$ and $X\in \mathcal C^+(x,y,S)$ such that $$d^\alpha(x,y)+\varepsilon>\int_0^S\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt.$$ \end{lemma} \begin{proof} Let $R>0$ and $\varepsilon\in(0,1)$.
Fix $\bar x,\bar y\in [0,R]$ and choose $\bar T>0$ and $Y\in \mathcal C^+(\bar x,\bar y, \bar T)$ so that $$d^\alpha(\bar x,\bar y)+\frac{\varepsilon}{4}>\int_0^{\bar T} \frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt.$$ Let $$C_R:=\max\left\{\frac{1}{2}x^2+\frac{1}{2}(b_\alpha(x)+y)^2\mid |x|\leq 1+R,\ |y|\leq 1\right\}.$$ Fix $\delta\in(0,1)$ so that $$2 C_R\delta\leq \frac{\varepsilon}{4}$$ and $$|d^\alpha(x,y)-d^\alpha(\bar x,\bar y)|<\frac{\varepsilon}{2}\quad \text{for } x\in B(\bar x,\delta),\,y\in B(\bar y,\delta).$$ Let $x\in B(\bar x,\delta)\cap [0,+\infty)$ and $y\in B(\bar y,\delta)\cap [0,+\infty)$. Define $\xi \in \mathcal C^+(x,\bar x,\delta)$ and $\eta \in \mathcal C^+(\bar y,y,\delta)$, respectively, by $$ \xi(t)=x+\frac{t}{\delta}(\bar x-x) \quad \text{for } 0\leq t\leq \delta,$$ $$ \eta(t)=\bar y+\frac{t}{\delta}(y-\bar y) \quad \text{for } 0\leq t\leq \delta.$$ Noting that $\xi(t),\eta(t)\in [0,R+1]$ and $\dot \xi(t),\dot \eta(t) \in B (0,1)$ for all $t\in [0,\delta]$, we have $$\int_0^\delta \frac{1}{2}\xi(t)^2+\frac{1}{2}(b_\alpha(\xi(t))-\dot \xi(t))^2dt\leq C_R\delta,$$ $$\int_0^\delta \frac{1}{2}\eta(t)^2+\frac{1}{2}(b_\alpha(\eta(t))-\dot \eta(t))^2dt\leq C_R\delta.$$ Define the function $X\in \mathcal C^+(x,y,\bar T+2\delta)$ by $$X(t)=\begin{cases}\xi(t)& \text{ for } t\in [0,\delta],\\ Y(t-\delta) & \text{ for } t\in [\delta,\bar T+\delta],\\ \eta(t-\bar T-\delta) &\text{ for } t\in [\bar T+\delta,\bar T+2\delta]. \end{cases}$$ Then we have \begin{align*} \int_0^{\bar T+2\delta} \frac{1}{2}X(t)^2&+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt=\int_0^\delta \frac{1}{2}\xi(t)^2+\frac{1}{2}(b_\alpha(\xi(t))-\dot \xi(t))^2dt\\ &+ \int_0^{\bar T} \frac{1}{2}Y(t)^2+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt\\ &+ \int_0^{\delta} \frac{1}{2}\eta(t)^2+\frac{1}{2}(b_\alpha(\eta(t))-\dot \eta(t))^2dt\\ \leq&2C_R\delta+d^\alpha(\bar x,\bar y)+\frac{\varepsilon}{4}\leq d^\alpha(\bar x,\bar y)+\frac{\varepsilon}{2}\\ <&d^\alpha(x,y)+\varepsilon. \end{align*} We deduce that for each $(\bar x,\bar y)\in[0,R]\times [0,R]$ there exist constants $\bar S>0$ and $\delta>0$ such that, for any $x\in B(\bar x,\delta)\cap [0,+\infty)$ and $y\in B(\bar y,\delta)\cap[0,+\infty)$, we have $$d^\alpha(x,y)+\varepsilon>\int_0^{\bar S}\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt$$ for some $X\in \mathcal C^+(x,y,\bar S)$. From now on the proof proceeds as in \cite[Lemma 2.5]{FIL06} (with small adaptations): we report it for completeness. By the compactness of $[0,R]\times[0,R]$, there exists a finite collection of $(x_k,y_k)\in [0,R]\times[0,R]$, $S_k,\delta_k>0$, for $k=1,\dots,K$, such that $$[0,R]\times[0,R]\subseteq \bigcup_{k=1}^K B(x_k,\delta_k)\times B(y_k,\delta_k)$$ and such that for any $k \in \{1,2\dots,K\}$, $x \in B(x_k, \delta_k)\cap[0,+\infty)$, and $y \in B(y_k, \delta_k)\cap[0,+\infty)$, $$d^\alpha(x,y)+\varepsilon>\int_0^{S_k}\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt$$ for some $X\in \mathcal C^+(x,y, S_k)$. Setting $T = \max_{1\leq k \leq K} S_k$, we observe that for any $(x, y) \in [0,R] \times [0,R]$, $$d^\alpha(x, y) + \varepsilon >\int_0^{S}\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt$$ for some $S \in (0,T]$ and $X \in \mathcal C^+(x,y,S)$, and this concludes the proof. \end{proof} \begin{lemma}\label{l26}Let $\alpha\in[0,1]$, $R>0$, and $\varepsilon>0$.
There exists a constant $T>0$ such that, for each $x\in [0,R]$, there exist $S\in(0,T]$ and $X\in \mathcal C^+(x,S)$ such that $$\psi^\alpha(x)+\varepsilon>\int_0^S\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt+\phi(X(S)).$$ \end{lemma} \begin{proof}The proof is similar to that of Lemma \ref{l25}, hence we omit it. \end{proof} \subsubsection{Proof of Theorem \ref{thm1}} Fix $\alpha\in[0,1]$, $R>0$ and $\varepsilon>0$. Using Lemma \ref{l25} and Lemma \ref{l26}, let $T_{R,\varepsilon}$ be such that, for all $x\in [0,R]$, there exist $S,S'\in(0,T_{R,\varepsilon}]$, $X\in \mathcal C^+(x,0,S)$ and $Y\in \mathcal C^+(0,S')$ satisfying $$d^\alpha(x,0)+\varepsilon\geq\int_0^S\frac{1}{2}X(t)^2+\frac{1}{2}(b_\alpha(X(t))-\dot X(t))^2dt$$ and $$\psi^\alpha(0)+\varepsilon\geq\int_0^{S'}\frac{1}{2}Y^2(t)+\frac{1}{2}(b_\alpha(Y(t))-\dot Y(t))^2dt+\phi(Y(S')).$$ Fix $t\geq 2T_{R,\varepsilon}$ and define $$Z(s):=\begin{cases} X(s) & \text{for } s\in [0,S]\\ 0 & \text{for } s\in (S,t-S']\\ Y(s-t+S')& \text{for } s\in (t-S',t]. \end{cases}$$ By Proposition \ref{DPP}, we get \begin{align*} u^\alpha(x,t)&\leq \int_0^{t}\frac{1}{2}Z^2(s)+\frac{1}{2}(b_\alpha(Z(s))-\dot Z(s))^2ds+\phi(Z(t))\\ &=\int_0^{S}\frac{1}{2}X^2(s)+\frac{1}{2}(b_\alpha(X(s))-\dot X(s))^2ds\\ &+ \int_0^{S'}\frac{1}{2}Y^2(s)+\frac{1}{2}(b_\alpha(Y(s))-\dot Y(s))^2ds+\phi(Y(S'))\\ &\leq d^\alpha(x,0)+\varepsilon+\psi^\alpha(0)+\varepsilon=v^\alpha(x)+2\varepsilon. \end{align*} Therefore, \begin{equation}\label{upper} u^\alpha(x,t)\leq v^\alpha(x)+2\varepsilon \quad \text{for } x\in [0,R],\,t\geq 2T_{R,\varepsilon}. \end{equation} Now, choose $\varepsilon\in(0,1)$. We prove that there exists a sufficiently large $T$ such that \begin{equation}\label{lower} v^\alpha(x)\leq u^\alpha(x,t)+2\varepsilon \quad \text{for } x\in[0,R],\,t\geq T. \end{equation} Recall the definition \eqref{Bdef} and consider the integral function $$\hat B_\alpha(y,x):=\hat B_\alpha(y)-\hat B_\alpha(x)=\int_x^y b_\alpha(z)dz.$$ Recall the definition of the endemic population $E:=N(1-1/\rho)$ and note that, for all $x\in [0,R]$ and $y\geq 0$, $$\hat B_\alpha(y,x)\leq \hat B_\alpha(E)-\hat B_\alpha(x)\leq \hat B_\alpha(E)-\min\{\hat B_\alpha(x) \mid x\in [0,R]\}=: C_4^\alpha(R)\,.$$ Let $C^\alpha_1(R)$ and $C^\alpha_2(R)$ be respectively as in \eqref{C1} and \eqref{C2}. By taking a larger $C^\alpha_1(R)$ if necessary, we also assume \begin{equation}\label{c1}C_1^\alpha(R)\geq \max\left\{\frac{1}{2}x^2+\frac{1}{2}(b_\alpha(x)+y)^2\mid |x|\leq C_2^\alpha(R),\, |y|\leq 1\right\}.\end{equation} Set $$\bar C^\alpha(R):=C_1^\alpha(R)+C_4^\alpha(R).$$ Fix $\delta\in(0,1)$ so that $$2\bar C^\alpha(R)\delta <\varepsilon$$ and define $\bar \gamma:=\min\{\frac{1}{2}z^2+\frac{1}{2}b_\alpha^2(z)\mid z\geq \delta\}$. Then define $$T:=\frac{1}{\bar\gamma}(2\bar C^\alpha(R)+1).$$ Let $t\geq T$ and $X\in \mathcal C^+(x,t)$ be such that $$u^\alpha(x,t)+\varepsilon>\int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}(b_\alpha(X(s))-\dot X(s))^2ds+\phi(X(t))\,.$$ Then \begin{align*} u^\alpha(x,t)+1&>\int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b^2_\alpha(X(s))ds -\int_0^t b_\alpha(X(s))\dot X(s)ds\\ &=\int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b_\alpha(X(s))^2ds -\int_x^{X(t)}b_\alpha(z)dz\\ &=\int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b_\alpha(X(s))^2ds -\hat B_\alpha(X(t),x)\\ &\geq \int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b_\alpha(X(s))^2ds - C_4^\alpha(R). \end{align*} In view of Lemma \ref{l28}, we get \begin{equation}\label{contr} \int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b_\alpha^2(X(s))ds< C_1^\alpha(R)+C_4^\alpha(R)+1=\bar C^\alpha(R)+1.
\end{equation} We now prove that \begin{equation}\label{minumy} X(s)\leq\delta\quad \text{for some $s\in [0,t]$.}\end{equation} Assume on the contrary that $X(s)>\delta$ for all $s\in[0,t]$. Then, by the definition of $\bar\gamma$, \begin{equation*} \int_0^{t}\frac{1}{2} X^2(s)+\frac{1}{2}b_\alpha(X(s))^2ds\geq \bar\gamma t\geq \bar \gamma T=2\bar C^\alpha(R)+1\,, \end{equation*} and we get the required contradiction with \eqref{contr}. Let $\tau \in [0,t]$ be such that $0\leq X(\tau)\leq \delta$ and note that, by Lemma \ref{l29}, $$|X(s)|\leq C^\alpha_2(R)\quad \text{for } s\in [0,t].$$ Now, define $Y\in \mathcal C^+(X(\tau),0,\delta)$ and $Z \in \mathcal C^+(0,X(\tau),\delta)$ respectively by $$Y(s):=X(\tau)-\frac{s}{\delta}X(\tau)\quad \text{for } s\in[0,\delta]\,,$$ $$Z(s):=\frac{s}{\delta}X(\tau)\quad \text{for } s\in[0,\delta]\,.$$ Noting that $$Y(s),Z(s) \in[0,C^\alpha_2(R)],\quad \dot Y(s),\dot Z(s)\in B(0,1)\qquad \text{for } s\in [0,\delta],$$ in view of \eqref{c1}, we have $$\int_0^\delta \frac{1}{2}Y^2(s)+\frac{1}{2}(b_\alpha(Y(s))-\dot Y(s))^2ds \leq C^\alpha_1(R)\delta$$ and $$\int_0^\delta \frac{1}{2}Z^2(s)+\frac{1}{2}(b_\alpha(Z(s))-\dot Z(s))^2ds \leq C^\alpha_1(R)\delta.$$ Define the function $\eta\in\mathcal C^+(x,X(t), t+2\delta)$ by $$\eta(s):=\begin{cases} X(s) &s\in[0,\tau]\\ Y(s-\tau)&s\in(\tau,\tau+\delta]\\ Z(s-\tau-\delta)&s\in(\tau+\delta,\tau+2\delta]\\ X(s-2\delta)&s\in(\tau+2\delta,t+2\delta]. \end{cases}$$ Then \begin{align*} \int_0^{t+2\delta} \frac{1}{2}\eta^2(s)&+\frac{1}{2}(b_\alpha(\eta(s))-\dot \eta(s))^2ds + \phi(\eta(t+2\delta))\\ &=\int_0^t \frac{1}{2}X^2(s)+\frac{1}{2}(b_\alpha(X(s))-\dot X(s))^2ds+\phi(X(t))\\ &+\int_0^\delta\frac{1}{2} Y(s)^2+\frac{1}{2}(b_\alpha(Y(s))-\dot Y(s))^2ds\\ &+\int_0^\delta \frac{1}{2}Z(s)^2+\frac{1}{2}(b_\alpha(Z(s))-\dot Z(s))^2ds\\ &<u^\alpha(x,t)+\varepsilon+2C^\alpha_1(R)\delta\\ &\leq u^\alpha(x,t)+\varepsilon+2\bar C^\alpha(R)\delta<u^\alpha(x,t)+2\varepsilon. \end{align*} On the other hand, we have $$d^\alpha(x,0)\leq \int_0^{\tau+\delta}\frac{1}{2} \eta^2(s)+\frac{1}{2}(b_\alpha(\eta(s))-\dot \eta(s))^2ds $$ and $$\psi^\alpha(0)\leq \int_{\tau+\delta}^{t+2\delta} \frac{1}{2}\eta^2(s)+\frac{1}{2}(b_\alpha(\eta(s))-\dot \eta(s))^2ds+\phi(\eta(t+2\delta)).$$ Therefore $$v^\alpha(x)=d^\alpha(x,0)+\psi^\alpha(0)\leq u^\alpha(x,t)+2\varepsilon\,,$$ which proves \eqref{lower}. This, together with \eqref{upper} and the arbitrariness of $\varepsilon$, concludes the proof. \begin{flushright} $\square$ \end{flushright} \section{Numerical approximation and simulations}\label{s5} In this section, we introduce a numerical scheme for solving the Hamilton-Jacobi equation \eqref{hj}, and we explore some properties of the numerical solution to confirm the results presented in the previous sections. In particular, we analyze its asymptotic behavior in time, in comparison with the solution of the stationary problem \eqref{hjstat}. Then, we employ a simple Euler integrator to build optimal trajectories for the control problem \eqref{value}, and we show the results in different scenarios. For the reader's convenience, we recall here the Hamilton-Jacobi equation \eqref{hj} together with the boundary condition in space at $x=0$, provided by Lemma \ref{l28}: $$ \left\{ \begin{array}{ll} u_t+H(x,Du)=0 & (x,t)\in [0,+\infty)\times[0,+\infty)\\ u(x,0)=\phi(x) & x\in [0,+\infty)\\ u(0,t)=\phi(0) & t\in [0,+\infty)\,,\\ \end{array} \right.
$$ where the Hamiltonian $H:[0,+\infty)\times \mathbb R\to \mathbb R$, given by $$ H(x,p)=\Big(-b_\alpha(x)+\frac{1}{2}p\Big)p-\frac{1}{2}x^2\,, $$ is rewritten in a form suitable for the discretization. We approximate the unbounded space-time domain by means of a rectangle $[0,x_{\max}]\times[0,T]$, for sufficiently large real numbers $x_{\max}>0$ and $T>0$, and we introduce a uniform grid with nodes $(x_i,t_n)=(i\Delta x,n\Delta t)$ for $i=0,\dots,N_x$ and $n=0,\dots,N_t$, where $\Delta x=\frac{x_{\max}}{N_x}$ and $\Delta t=\frac{T}{N_t}$ denote the space and time steps respectively, and $N_x, N_t$ are given integers. Moreover, we denote the approximations of $u$, $b_\alpha$, $\phi$ on the grid respectively by $U_i^n\simeq u(x_i,t_n)$, $B_{\alpha,i} \simeq b_\alpha(x_i)$, $\Phi_i \simeq \phi(x_i)$, and we collect them in the vectors $U^n=(U_0^n,\dots,U_{N_x}^n)$, $B_\alpha=(B_{\alpha,0},\dots,B_{\alpha,N_x})$, $\Phi=(\Phi_0,\dots,\Phi_{N_x})$. Then, we introduce finite differences for approximating the space and time derivatives. In particular, the discretization in space requires some care, in order to define a numerical Hamiltonian $H^\sharp$ which correctly approximates viscosity solutions of the equation. More precisely, using forward/backward differences, we introduce the following two-sided approximation of $Du$, $$ DU_i=(D_LU_i,D_RU_i):=\left(\frac{U_i-U_{i-1}}{\Delta x},\frac{U_{i+1}-U_{i}}{\Delta x} \right)\,, $$ for $i=1,\dots,N_x-1$, and we set $$ H^\sharp(x_i,DU_i):=\Big(-B_{\alpha,i}+\frac12 D_LU_i\Big)^+ D_LU_i+\Big(-B_{\alpha,i}+\frac12 D_RU_i\Big)^- D_RU_i-\frac12 x_i^2\,, $$ which selects the gradient components in an upwind fashion, according to the sign of $-B_\alpha+\frac12 DU$, where $(\cdot)^+=\max\{\cdot,0\}$, $(\cdot)^-=\min\{\cdot,0\}$ denote respectively the positive and negative parts of their arguments (for further details, we refer the interested reader to \cite{sethian,falfer}). Finally, we employ a forward difference in time, and we end up with the following explicit time-marching scheme: $$ \left\{ \begin{array}{ll} U_i^{n+1}=U_i^n-\Delta t H^\sharp(x_i,DU_i^n) & i=1,\dots,N_x,\,\, n=0,\dots,N_t-1\\ U_i^0=\Phi_i & i=0,\dots,N_x\\ U_0^n=\Phi_0 & n=0,\dots,N_t\,. \end{array} \right. $$ We remark that the forward difference $D_RU_i$ is not defined for $i=N_x$, corresponding to the point $x=x_{\max}$ at the right boundary. Since the feedback control for the underlying control problem is given by $\xi=-Du$, we can argue that, for $x_{\max}$ sufficiently large (e.g. greater than the endemic population $E=N(1-\frac{1}{\rho})$ if $\rho>1$), the optimal dynamics in \eqref{controlya} pushes the trajectory starting from $x_{\max}$ to the left, towards the origin. This implies that $Du(x_{\max},t)\ge 0$ for all $t\in(0,T)$, and we also have $b_{\alpha}(x_{\max})\le 0$. We conclude that $(-b_\alpha(x_{\max})+\frac12 Du(x_{\max},t))^-=0$, hence for the node $i=N_x$ we can always select just the contribution given by the backward approximation $D_LU_i$. We also remark that, due to the definition of $b_\alpha$, the right hand side of the above scheme becomes larger and larger as $x_{\max}$ increases. This requires a severe CFL restriction on the discretization steps $\Delta t$ and $\Delta x$, in order to preserve stability.
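To fix ideas, the following minimal Python sketch implements the explicit time-marching scheme above. It is only an illustration, not the code actually used for the experiments: the drift is the ordinary SIS advection speed for $\alpha=1$ (an assumption on the form of $b_\alpha$), the exit cost is the linear one of the first test below, and the parameter values are the illustrative ones of the next paragraph.

\begin{verbatim}
import numpy as np

# Sketch of the explicit upwind scheme above (illustrative values).
# Assumed drift for alpha = 1: ordinary SIS advection speed.
N, beta, gamma = 2.25, 1.5, 1.0
b   = lambda x: beta*x*(1.0 - x/N) - gamma*x
phi = lambda x: x                      # exit cost of the first test

x_max, T, Nx, Nt = 4.0, 5.0, 200, 4000
dx, dt = x_max/Nx, T/Nt
x = np.linspace(0.0, x_max, Nx + 1)
B = b(x)
U = phi(x).copy()                      # U^0 = Phi

for n in range(Nt):
    DL = np.zeros(Nx + 1)
    DR = np.zeros(Nx + 1)
    DL[1:]  = (U[1:] - U[:-1])/dx      # backward differences
    DR[:-1] = (U[1:] - U[:-1])/dx      # forward differences
    DR[-1]  = DL[-1]                   # i = Nx: only D_L is selected
    H = (np.maximum(-B + 0.5*DL, 0.0)*DL
         + np.minimum(-B + 0.5*DR, 0.0)*DR - 0.5*x**2)
    U = U - dt*H
    U[0] = phi(0.0)                    # boundary condition at x = 0
\end{verbatim}

The right boundary is handled exactly as in the remark above: at $i=N_x$ the forward difference is replaced by the backward one, which is the only contribution ever selected there.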
Once the solution has been computed, we reconstruct the following semi-discrete feedback control, by merging the two components of $DU$ and interpolating their values in space, via a linear interpolation operator $\mathbb{I}$: $$\xi^n(\cdot)=-\mathbb{I}[ (D_LU^n)^+ + (D_RU^n)^-](\cdot)\,.$$ Then, we build the optimal trajectories by integrating \eqref{controlya}, using a simple forward Euler scheme $$ \begin{cases} y^{n+1}=y^n+\Delta t\left(b_\alpha(y^n)+\xi^n(y^n)\right)\\ y^0=x\,.\end{cases} $$ Now, let us set up the parameters for the numerical experiments. We choose the space domain size $x_{\max}=4$, the final time $T=5$, and $N_x=200$, $N_t=4000$ nodes in space and time respectively. Moreover, we choose $M(\alpha)\equiv 1$, $N=\frac94$ and $\gamma=1$ as model parameters defining the advection speed $b_\alpha$ in \eqref{ordSIS}. On the other hand, $\alpha$ and $\rho$ (and accordingly $\beta=\gamma \rho$) will be set differently for each test. We start by choosing the exit cost $\phi(x)=x$ and a reproduction factor $\rho=\frac{3}{2}$, such that the corresponding endemic population $E=\frac{3}{4}$ is a stable equilibrium (attractive in the domain $[0,x_{\max}]$) for the uncontrolled system, see Proposition \ref{equilibrium}. In Figure \ref{Test1}, we show the results obtained for $\alpha=1$ and $\alpha=\frac{1}{2}$ at different times. In the left panels, we report the value functions compared to $\phi$, in the right ones the corresponding optimal controls. \begin{figure}[!h] \centering{ \includegraphics[width=0.45\textwidth]{test1a-U} \includegraphics[width=0.45\textwidth]{test1a-C.pdf}\\ \includegraphics[width=0.45\textwidth]{test1b-U.pdf} \includegraphics[width=0.45\textwidth]{test1b-C.pdf}\\ \includegraphics[width=0.45\textwidth]{test1c-U.pdf} \includegraphics[width=0.45\textwidth]{test1c-C.pdf}} \caption{Value functions (left panels) and optimal controls (right panels) at different times for $\alpha=1$, $\alpha=\frac12$ and $\rho=\frac{3}{2}$.}\label{Test1} \end{figure} Note that the time horizon $T=5$ has been set large enough to reveal the asymptotic behavior in time of the solutions, namely their convergence, up to machine error, to stationary regimes. Similarly, the space boundary $x_{\max}=4$ is large enough to distinguish the growth of the solutions for $x\to+\infty$. We observe a linear behavior for the case $\alpha=1$, and a quadratic behavior for the case $\alpha=\frac12$. This can be better appreciated by looking at the corresponding optimal controls, and it is confirmed by the simulation in Figure \ref{Test1-asymptotics}, in which we show, for $0<\alpha\le 1$, the asymptotic behavior of the $L^\infty$ norms in space of $U$ and $DU$ at the final time (achieved by monotonicity at $x_{\max}$) as both $x_{\max},\,T\to+\infty$. \begin{figure}[!h] \centering \includegraphics[width=0.46\textwidth]{asymptotics-U.pdf} \includegraphics[width=0.45\textwidth]{asymptotics-DU.pdf} \caption{$L^\infty$ space norms (at final time) of the value functions (left) and of their gradients (right) as $x_{\max},\,T\to+\infty$ for different values of $\alpha$ ranging in $[0,1]$.}\label{Test1-asymptotics} \end{figure} In particular, we find that $\alpha=1$ is the only value that produces a globally Lipschitz continuous solution. A rigorous proof of this statement is still under investigation. In Figure \ref{Test1-trj}, we compare some optimal trajectories obtained for $\alpha=1$ (top panels) and $\alpha=\frac12$ (bottom panels).
In each plot, we report the endemic population $E$ (dashed line), the uncontrolled/controlled trajectories (bold lines), and the corresponding optimal controls (thin lines). Moreover, we choose two different initial data for the dynamics \eqref{controlya}, $x=0.5$ and $x=1.25$, respectively below and above $E$. \begin{figure}[!h] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{test1a-T-0.5.pdf} & \includegraphics[width=0.45\textwidth]{test1a-T-1.25.pdf}\\ $\alpha=1,\,x=0.5$ & $\alpha=1,\,x=1.25$\\\\ \includegraphics[width=0.45\textwidth]{test1b-T-0.5.pdf} & \includegraphics[width=0.45\textwidth]{test1b-T-1.25.pdf}\\ $\alpha=\frac12,\,x=0.5$ & $\alpha=\frac12,\,x=1.25$\\ \end{tabular} \caption{Optimal trajectories for different fractional orders $\alpha$ and initial data $x$.}\label{Test1-trj} \end{figure} As discussed in the introduction, the fractional SIS system can be recast as the model \eqref{saturatedSIS}, with ordinary derivatives and saturated growth rates. In particular, the growth of infective individuals is softened as $\alpha$ decreases. This effect is apparent in the uncontrolled trajectories. In the same time horizon, we observe that the uncontrolled trajectory approaches $E$ for $\alpha=1$, while for $\alpha=\frac12$ it is ``lazier'' and still far from the endemic value at the final time. On the other hand, we observe that the optimal control always succeeds in steering the system to the origin (the unstable equilibrium in this case). Nevertheless, while the controlled trajectories (as well as their optimal controls) are quite similar when the evolution starts from $x=0.5<E$, the case $x=1.25>E$ for $\alpha=\frac12$ requires an additional effort to compensate for the slower decay of the corresponding dynamics. Indeed, we observe an optimal control with a larger amplitude in the initial portion $[0,0.5]$ of the time interval. The numerical results for the case of a reproduction factor $\rho\le 1$ are quite similar to the previous ones, and we omit them for brevity. We just remark that the endemic value now falls outside the space domain ($E\le 0$), while the state $x=0$ is a stable equilibrium for the system, see again Proposition \ref{equilibrium}. This implies that, for all initial data in $(0,x_{\max}]$, the corresponding uncontrolled trajectories eventually converge to $x=0$, whereas the controlled ones have a faster decay, in order to optimize the cost functional \eqref{value} of the optimal control problem. Let us now consider an example with a non-smooth exit cost, namely $\phi(x)=\min\{2x+\frac12,6x^2\}$, chosen so as to produce a kink in the solution. Moreover, we choose the parameters as in the previous tests, with the exception of the space domain size, which we set to $x_{\max}=2$ in order to work with a milder CFL condition and mitigate the numerical diffusion of the scheme. In Figure \ref{kink}, we show the results for the case $\rho=\frac{3}{2}$ and $\alpha=1$. In each plot we report, at different times, the value function compared to $\phi$ and also the optimal control.
\begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{test2a.pdf} \includegraphics[width=0.45\textwidth]{test2b.pdf}\\ \includegraphics[width=0.45\textwidth]{test2c.pdf} \includegraphics[width=0.45\textwidth]{test2d.pdf}\\ \includegraphics[width=0.45\textwidth]{test2e.pdf} \includegraphics[width=0.45\textwidth]{test2f.pdf} \caption{Non smooth exit cost, value function and optimal control at different times.}\label{kink} \end{figure} We clearly observe that the kink in the solution moves and eventually exits the domain as time increases. Asymptotically, we obtain a smooth solution as in the previous tests. We finally consider the case of a smooth exit cost $\phi(x)=x+\exp(-40(x-\frac12)^2)$, which corresponds to penalizing final distributions of infective individuals around the point $x=\frac12$ more than for larger values (up to about $x=\frac32$). The results for the case $\rho=\frac{3}{2}$ and $\alpha=1$ are reported in Figure \ref{kink-gen}. We observe that, in the first part of the evolution, the point $x=\frac12$ acts as a barrier, preventing some states of the system from being steered to the desired one, $x=0$. More precisely, the local minimizer of $\phi$ (at about $x=0.8$) is more favorable for states beyond this barrier, where the optimal control has a change of sign. This creates a kink in the solution, which starts moving towards the right boundary of the domain only at a later time. \begin{figure}[!h] \centering \includegraphics[width=0.45\textwidth]{test3a.pdf} \includegraphics[width=0.45\textwidth]{test3b.pdf}\\ \includegraphics[width=0.45\textwidth]{test3c.pdf} \includegraphics[width=0.45\textwidth]{test3d.pdf}\\ \includegraphics[width=0.45\textwidth]{test3e.pdf} \includegraphics[width=0.45\textwidth]{test3f.pdf} \caption{Kink generation from a smooth exit cost, value function and optimal control at different times.}\label{kink-gen} \end{figure} In Figure \ref{Test3-trj}, we compare the corresponding optimal trajectories, obtained for the initial data $x=0.48$ and $x=0.52$, respectively slightly below and above the barrier. \begin{figure}[!h] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{test3-T-0.48.pdf} & \includegraphics[width=0.45\textwidth]{test3-T-0.52.pdf}\\ $x=0.48$ & $x=0.52$ \end{tabular} \caption{Optimal trajectories for different initial data $x$.}\label{Test3-trj} \end{figure} In the first case, we obtain a controlled trajectory similar to the previous tests, with just a larger amplitude in the control due to the choice of $\phi$. On the other hand, the second case confirms the scenario discussed above. Indeed, the optimal control acts in the positive direction for a small amount of time, pushing the controlled trajectory close to the endemic population, then readily jumps to a negative value and starts steering the system to the origin. To conclude this section, we compare the stationary regime of the solution $u$ of the Hamilton-Jacobi equation \eqref{hj} with the smooth viscosity solution $\bar v^\alpha$ of the stationary equation \eqref{hjstat}, provided in Remark \ref{rmk1} by $$\bar v^\alpha(x)=\phi(0)+\int_0^x \left(b_\alpha(s)+\sqrt{b_\alpha^2(s)+s^2}\right)ds.$$ We compute $\bar v^\alpha$ on our numerical grid, approximating the integral by a simple trapezoidal quadrature rule, using the same space step $\Delta x$. Moreover, we set $x_{\max}=T=10$, and we choose the same exit cost $\phi$ as in the previous test (note that this affects the convergence of $u$ in time, while $\bar v^\alpha$ only depends on $\phi(0)$).
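For completeness, here is a minimal sketch of this comparison step, reusing the arrays \texttt{x}, \texttt{dx}, \texttt{U} and the placeholder drift \texttt{b} and exit cost \texttt{phi} of the earlier sketch (with $x_{\max}=T=10$ and the grid resized accordingly); only $\phi(0)$ enters the stationary solution:

\begin{verbatim}
# Sketch: evaluate the stationary solution of Remark rmk1 on the grid
# by cumulative trapezoidal quadrature and compare it with U at the
# final time (x, dx, U, b, phi as in the previous sketch).
f = b(x) + np.sqrt(b(x)**2 + x**2)
v_bar = phi(0.0) + np.concatenate(
    ([0.0], np.cumsum(0.5*dx*(f[1:] + f[:-1]))))
err_inf = np.max(np.abs(U - v_bar))            # L^infinity error
err_l2  = np.sqrt(dx*np.sum((U - v_bar)**2))   # discrete L^2 error
print(err_inf, err_l2)
\end{verbatim}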
In Table \ref{table}, we report the results of the comparison under grid refinement, for different choices of $\alpha$ and $\rho$, evaluating the difference $u(\cdot,T)-\bar v^\alpha(\cdot)$ both in the $L^\infty$ and in the $L^2$ space norm. As $\Delta x\to 0$, we clearly observe a decay of the errors, respectively of order $\mathcal{O}(\Delta x)$ and $\mathcal{O}(\Delta x^2)$ for the two norms, and also a slowdown in convergence as $\alpha$ and $\rho$ decrease. This numerical experiment is in agreement with the result proved in Theorem \ref{thm1}. In particular, convergence is obtained on bounded space intervals, and we have observed in all the experiments that possible irregularities of $u$ are pushed out of the domain towards infinity, before the solution approaches the stationary smooth solution $\bar v^\alpha$ in the limit $T\to +\infty$. \begin{table}[!h] \centering \resizebox{\columnwidth}{!}{\begin{tabular}{r|c|c|c|c|c|c|c|c|} & \multicolumn{2}{|c|}{\small $\alpha=1$, $\rho=\frac32$} & \multicolumn{2}{|c|}{\small $\alpha=\frac12$, $\rho=\frac32$} & \multicolumn{2}{|c|}{\small $\alpha=1$, $\rho=\frac12$} & \multicolumn{2}{|c|}{\small $\alpha=\frac12$, $\rho=\frac12$}\\ \hline $\Delta x$ & \small $L^\infty$ err & \small $L^2$ err & \small $L^\infty$ err & \small $L^2$ err & \small $L^\infty$ err & \small $L^2$ err & \small $L^\infty$ err & \small $L^2$ err\\ \hline 0.1 & 0.047 & 0.01758 & 0.334 & 0.40162 & 0.089 & 0.04607 & 0.341 & 0.41196 \\\hline 0.05 & 0.023 & 0.00439 & 0.167 & 0.09971 & 0.044 & 0.01147 & 0.170 & 0.10226\\\hline 0.025 & 0.012 & 0.00109 & 0.083 & 0.02484 & 0.022 & 0.00286 & 0.085 & 0.02547\\\hline 0.0125 & 0.006 & 0.00027 & 0.042 & 0.00619 & 0.011 & 0.00071 & 0.043 & 0.00636\\\hline 0.00625 & 0.003 & 0.00007 & 0.021 & 0.00155 & 0.005 & 0.00018 & 0.021 & 0.00158\\ \hline \end{tabular}} \caption{Comparison between $u$ and $\bar v^\alpha$ under grid refinement for different model parameters.} \label{table} \end{table} \bibliographystyle{alpha}
\begin{document} \vspace*{2.4cm} \noindent { \bf A DYNAMICAL MECHANISM FOR THE SELECTION OF PHYSICAL STATES IN `GEOMETRIC QUANTIZATION SCHEMES'}\vspace{1.3cm}\\ \noindent \hspace*{1in} \begin{minipage}{13cm} P. Maraner \vspace{0.3cm}\\ \makebox[3mm]Center for Theoretical Physics,\\ \makebox[3mm]Laboratory for Nuclear Science and INFN,\\ \makebox[3mm]Massachusetts Institute of Technology, \\ \makebox[3mm]Cambridge, MA 02139-4307, USA \end{minipage} \vspace*{0.5cm} \begin{abstract} \noindent Geometric quantization procedures usually go through an extension of the original theory (pre-quantization) and a subsequent reduction (selection of the physical states). In this context we describe a full geometrical mechanism which dynamically provides the desired reduction. \end{abstract} \section[]{\hspace{-4mm}.\hspace{2mm} THE STANDARD VIEWPOINT ON QUANTIZATION:\\ STATES, OBSERVABLES AND TIME EVOLUTION} \hspace*{0.8cm} The usual way to think of a physical system proceeds in two steps. First: {\sl kinematics}, that is, the specification of the possible states and of the observable quantities. Second: {\sl dynamics}, that is, the description of the time evolution of the representative point of the system over the space of all possible states. As a very typical example we may think of Hamiltonian mechanics, where the states of a system with $n$ degrees of freedom are specified by the canonical variables $q^\mu,p_\mu$, $\mu=1,...,n$, while observables are identified with smooth functions of $q$ and $p$. Dynamics is obtained by singling out a privileged observable, namely the energy of the system $h(q,p)$, by means of the canonical flow it generates on the phase space. Let us note that, already at this classical level, it turns out that only very few of all the possible observables of the theory play a concrete role in the physical description of the system. Furthermore, the selection of relevant observables typically goes through dynamical considerations, making the sharp separation of kinematics and dynamics an artificial one. Nevertheless, the standard way to look at quantization starts from this viewpoint, seeking a correspondence between the formal structures of classical and quantum mechanics: {\em states}, {\em observables} and {\em time evolution}. In order to make the quantization procedure a sensible one, that is, capable of reproducing standard quantum mechanics, four conditions are usually required: ($Q1$) the correspondence is required to be linear; ($Q2$) the constant function $1$ has to be mapped onto the identity operator; ($Q3$) the Poisson brackets should become $i$ times the commutators; and ($Q4$) the canonical variables $q$ and $p$ should act irreducibly on the quantum Hilbert space. There are of course many criticisms that can be levelled at such an approach. As a matter of fact, it turns out that it is impossible to quantize the whole algebra of classical observables without violating at least one of these conditions.
The quantization program fails already at the kinematical level. Though different viewpoints have been expressed in the literature, it is the common belief that the weak point of the above construction is the requirement that objects defined only locally, such as the canonical coordinates, be promoted to globally well defined operators acting irreducibly on the quantum Hilbert space. Once condition ($Q4$) is lifted, it is in fact possible to proceed in a very general and elegant manner to the construction of the desired correspondence: Kostant's {\em pre-quantization} scheme \cite{Wo80}. The whole huge algebra of smooth functions on the phase space is mapped into the algebra of formally self-adjoint operators on a suitable Hilbert space in such a way that conditions ($Q1$), ($Q2$) and ($Q3$) are fulfilled. The problem is that the pre-quantum Hilbert space is too large for physics. In maintaining such an approach it is therefore necessary to introduce a mechanism capable of selecting the subspace of physical states. Though many different approaches have been suggested, the general viewpoint is that of picking out a {\em real} or {\em complex polarization} on the classical phase space and requiring that the physical states are the ones preserving the polarization. Such a prescription---definitely of kinematical character---works well as far as the quantization of systems with a high degree of symmetry is concerned \cite{On76}, but it appears more and more problematic as soon as the dynamics of systems with less symmetry, or no symmetry at all, is considered. There is in fact no longer any guarantee that time evolution respects the polarization, and physical states may evolve into non-physical ones. It is the aim of this paper to present a slightly different approach to the problem, focusing on the dynamical aspects rather than on the kinematical ones \cite{KM97}. Working in a coordinate free manner, we will face the problem of directly defining the quantum dynamics without going through the quantization of the whole algebra of classical observables. This yields a dynamical mechanism that produces the selection of the right set of physical quantum states. \section{\hspace{-4mm}.\hspace{2mm} COORDINATE FREE QUANTIZATION} \hspace*{0.8cm} A sensible quantization scheme should not depend on the choice of coordinates. Before discussing quantization, let us therefore briefly recall how it is possible to formulate Hamiltonian mechanics in a coordinate free manner. {\it Coordinate Free Formulation of Hamiltonian Mechanics} \cite{HM}: It is convenient to denote phase space coordinates by means of a single variable $\xi=(q^1,...,q^n,p_1,...,p_n)$. In this canonical coordinate frame we introduce the skew-symmetric two-tensors $\omega_{ij}$, $i,j=1,...,2n$, \begin{equation} \omega_{ij}=\pmatrix{0 & -I \cr I & 0 }, \label{sf} \end{equation} $I$ being the $n$-dimensional identity matrix, and ${\bar\omega}^{ij}$, defined by the relation $\omega_{ik}{\bar\omega}^{kj} =\delta_i^j$. The fundamental Poisson bracket may thus be recast in the covariant form $\{\xi^i,\xi^j\}={\bar\omega}^{ji}$. We note that a canonical transformation does not affect the form of $\omega_{ij}$. Furthermore, the information on the canonical structure being contained in $\omega_{ij}$, we are now free to introduce arbitrary coordinate frames, not necessarily preserving \ref{sf}.
The phase space of a Hamiltonian system may thus be identified with a {\sl symplectic manifold}, that is, a $2n$-dimensional manifold ${\cal M}$ equipped with a closed nondegenerate two-form, the {\sl symplectic form} $\omega_{ij}$. For a Lagrangian system with configuration space $Q$ the symplectic manifold ${\cal M}$ has to be identified with the cotangent bundle $T^\star\!Q$, but this is not the most general case. Many systems of physical interest are not included in this class, and the discussion of more general phase spaces is necessary. In order to give a coordinate free formulation of the dynamics of the system we have to introduce the canonical one-form $\theta_i$, defined by the relation $\omega_{ij}=\partial_i\theta_j-\partial_j\theta_i$. $\theta_i$ is defined up to the total derivative of an arbitrary phase space function, $\theta_i\rightarrow\theta_i+\partial_i\chi$ (note the formal equivalence of the canonical one- and two-forms with a vector potential and a magnetic field!). Dynamics is then defined by means of Hamilton's principle \begin{equation} \delta\int(\theta_i{\dot\xi}^i- h(\xi))dt=0. \end{equation} {\it The Geometrical Background of (pre-)Quantization} \cite{Wo80}: In a coordinate free language the problem of pre-quantization may therefore be recast in the following terms. For every symplectic manifold ${\cal M}$, construct a Hilbert space $H({\cal M})$ such that it is possible to exhibit a map from the algebra of smooth functions on ${\cal M}$ into that of the formally self-adjoint operators on $H({\cal M})$ satisfying conditions ($Q1$), ($Q2$) and ($Q3$). This is achieved by identifying $H({\cal M})$ with the Hilbert space of the square integrable sections of the line bundle $L$ on ${\cal M}$ having the symplectic two-form $\omega_{ij}$ as curvature form. The correspondence between classical and quantum observables may then be constructed in terms of the covariant derivative on the line bundle. Without going too much into details, we recall that for ${\cal M}=T^\star\!Q$, in a canonical coordinate frame and having fixed the gauge $\theta=(0,...,0,-q^1,...,-q^n)$, the general rule yields \begin{eqnarray} p_\mu&\rightarrow& -i\hbar{\partial\over\partial q^\mu}, \label{qcin} \\ q^\mu&\rightarrow&\ i\hbar{\partial\over\partial p_\mu}+q^\mu, \label{pcin} \end{eqnarray} whereas $H(T^\star\!Q)$ roughly corresponds to the space of square integrable functions of $q$ and $p$, $\psi(q,p)$. In this simple case the selection of the right set of physical states is achieved by requiring the wave functions to be constant in the $p_\mu$ directions, that is, $\partial\psi/\partial p_\mu=0$ for $\mu=1,...,n$. This is of course a very simple case. Nevertheless, it somehow suggests thinking of the phase space ${\cal M}$ as a sort of configuration space on which the operators $-i\hbar{\partial\over\partial\xi}= (-i\hbar{\partial\over \partial q^\mu}, -i\hbar{\partial\over\partial p_\mu})$ play the role of the canonical momenta conjugate to $\xi=(q^\mu,p_\mu)$. We note that operators \ref{qcin} and \ref{pcin} may then be identified with the kinematical momenta of a charged particle moving on ${\cal M}$ in the magnetic field $\omega_{ij}$ represented by the vector potential $\theta_i$.
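As a quick consistency check (not contained in the original exposition), one can verify symbolically, for $n=1$, that the pre-quantum operators \ref{qcin} and \ref{pcin} satisfy the canonical commutation relation on wave functions $\psi(q,p)$, and that on the physical states with $\partial\psi/\partial p=0$ they reduce to the usual Schr\"odinger representation. A minimal sketch using the sympy library:

\begin{verbatim}
import sympy as sp

q, p, hbar = sp.symbols('q p hbar', real=True)
psi = sp.Function('psi')(q, p)

# Pre-quantum operators in the gauge theta = (0, -q):
P = lambda f: -sp.I*hbar*sp.diff(f, q)        # p -> -i hbar d/dq
Q = lambda f:  sp.I*hbar*sp.diff(f, p) + q*f  # q ->  i hbar d/dp + q

# Canonical commutator [Q, P] acting on psi(q, p)
comm = sp.simplify(Q(P(psi)) - P(Q(psi)))
print(comm)                        # prints I*hbar*psi(q, p)

# On physical states (psi independent of p), Q reduces to
# multiplication by q, and P to -i hbar d/dq:
chi = sp.Function('chi')(q)
print(sp.simplify(Q(chi) - q*chi)) # prints 0
\end{verbatim}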
\section[]{\hspace{-4mm}.\hspace{2mm} A SLIGHTLY DIFFERENT VIEWPOINT:\\ DYNAMICS AND QUANTIZATION ON $T^\star\!{\cal M}$} \hspace*{0.8cm} A slightly different way to look at the pre-quantization scheme may in fact be that of considering an extension of the mechanical system from the original phase space ${\cal M}$ to the enlarged phase space $T^\star\!{\cal M}$ (note: the cotangent bundle of the phase space!). Though this introduces many ambiguities, it has the advantage that the quantization of a cotangent bundle is definitely simpler than that of an arbitrary symplectic manifold. Ambiguities arise both in the extension and in the subsequent reduction of the system, and many different mechanisms may be thought of to achieve the aim. From this perspective Kostant's pre-quantization represents one of the possible extension schemes, and the selection of physical states by means of polarizations is only one of the possible reduction procedures. This way of facing quantization appears promising and has been adopted by many authors (see \cite{KlQ,Fe94,FL94,Go95,Jo96} for a few recent examples). In this context, we present a somewhat peculiar quantization procedure constructed in the following way. We first extend classical dynamics from the phase space ${\cal M}$ to its cotangent bundle $T^\star\!{\cal M}$ by constructing a full geometrical theory depending on the parameter $\hbar$. In the regime of small values of the parameter the theory reduces dynamically to Hamiltonian mechanics. We then proceed to the quantization of the extended theory and note how the same dynamical mechanism provides the reduction to the physical sector of the quantum theory. {\it A Full Geometrical Extension of Hamiltonian Mechanics}: We start with a Hamiltonian system with phase space ${\cal M}$ and Hamiltonian $h(\xi)$. Though our theory is covariant in character, it is useful to parameterize ${\cal M}$ by means of a canonical atlas, so that in every coordinate frame the symplectic structure $\omega_{ij}$ appears in the canonical form \ref{sf}. We now introduce a metric structure on ${\cal M}$, requiring that in every canonical frame the metric determinant satisfies the condition $g(\xi)=h^{-2n}(\xi)$. We finally extend our mechanical system to $T^\star\!{\cal M}$ by defining dynamics by means of the variational principle \begin{equation} \delta\int({1\over2}\hbar g_{ij}{\dot\xi}^i{\dot\xi}^j +\theta_i{\dot\xi}^i)dt=0. \label{gd} \end{equation} $\hbar$ is Planck's constant over $2\pi$. We claim that the phase space trajectories produced by \ref{gd} differ from the ones produced by Hamilton's principle only over scales of order $\hbar$. Hamiltonian mechanics may therefore be regarded as the effective theory describing the small $\hbar$ regime of the geometrical theory \ref{gd}. Observe that although $\hbar$ appears in the theory, we are not claiming that \ref{gd} describes quantum mechanics. We just find it very useful to incorporate Planck's constant in the extension of the theory to $T^\star\!{\cal M}$ in such a way that this parameter dynamically controls the reduction of the extended theory to the original one. Before proceeding to the demonstration of our claim, let us note that the variational principle \ref{gd} is formally equivalent to that describing the free motion of a particle of mass $\hbar$ on the metric manifold ${\cal M}$ in the universal magnetic field represented by the symplectic form $\omega_{ij}$.
It is therefore possible to visualize the mechanism responsible for the reduction of our theory by thinking of a particle of mass $m$ and charge $e$ moving in a plane under the influence of a magnetic field of magnitude $B$ normal to the plane. In the analogy, the plane represents the phase space of a one-dimensional system while the magnetic field represents its symplectic structure. The regime of a small mass corresponds to that of a strong magnetic field or, equivalently, to that of a nearly homogeneous one. This problem, sometimes called the guiding center problem, has been extensively discussed in the literature \cite{HGCM}. As long as the magnetic field may be considered as homogeneous, the particle follows a circular orbit of radius $r_m=mc|{\vec v}|/eB$, the center of which is motionless. For a very small mass the circle is so narrow that the particle appears at rest. However, as soon as a weak inhomogeneity is introduced, the center of the orbit---usually called the {\em guiding center}---starts drifting on the plane. Moreover, the guiding center motion is Hamiltonian. We shall identify the guiding center motion with the motion of our original system, and the rapid rotation around the effective trajectory with the degrees of freedom suppressed by the reduction. Having this picture in mind, we now sketch a formal demonstration. Starting from the Lagrangian ${\cal L}(\xi,\dot\xi)={1\over2}\hbar g_{ij} {\dot\xi}^i{\dot\xi}^j+\theta_i{\dot\xi}^i$ of the extended system, we proceed to the construction of the corresponding Hamiltonian formalism by introducing the canonical momenta $p_i^\xi=\partial{\cal L}/ \partial\dot\xi^i$ conjugate to the variables $\xi^i$. The Hamiltonian describing the dynamics of the extended system reads \begin{equation} {\cal H}={1\over2\hbar}g^{ij}(\xi)(p_i^\xi-\theta_i)(p_j^\xi-\theta_j), \label{ham1} \end{equation} where $g^{ij}$ denotes the inverse of the metric tensor. In order to discuss the small $\hbar$ regime of the theory it is very convenient to replace the set of canonical variables $\xi^i, p_i^\xi$, $i=1,...,2n$, with the gauge covariant {\em kinematical momenta} and {\em guiding center coordinates} $$ \Pi_i={1\over\hbar^{1/2}}(p_i^\xi-\theta_i) \hskip0.7cm\mbox{and}\hskip0.7cm X^i =\xi^i+\hbar^{1/2}{\bar\omega}^{ij}\Pi_j. $$ The new set of coordinates is canonical: $\Pi_\mu$ is conjugate to $\Pi_{n+\mu}$ and $X^{n+\mu}$ to $X^\mu$, $\mu=1,...,n$. Furthermore, in the new set of variables \ref{ham1} appears as the Hamiltonian of an $n$-dimensional harmonic oscillator with masses and frequencies depending on the parameters $X^i$ and weakly on the `positions' and `velocities' $\Pi_i$. Since we are only interested in the small $\hbar$ regime of the theory, it appears natural to expand $g^{ij}$ in powers of $\hbar^{1/2}$, \begin{equation} {\cal H}={1\over2}g^{ij}(X)\Pi_i\Pi_j+{\cal O}(\hbar^{1/2}), \label{ham2} \end{equation} which shows that only the dependence on the $X^i$ is relevant. A further analysis of the commutation relations makes it clear that the $X^i$ may be regarded as slow parameters of the system, so that it is possible to perform a second canonical transformation (see \cite{KM97} for details) bringing \ref{ham2} into the form \begin{equation} {\cal H}= h(X)\ {1\over2}\sum_i\Pi_i\Pi_i+{\cal O}(\hbar^{1/2}). \label{ham3} \end{equation} The condition $g=h^{-2n}$ has been used.
The guiding center motion, described by the set of canonical coordinates $X^i$, and the rapid rotation of the system around the guiding center trajectory are separated up to terms of order $\hbar^{1/2}$. Our demonstration is completed by observing that the radius of the circular phase space trajectory described by the $\Pi_i$ is of order $\hbar$. The effective dynamics produced by the full geometrical variational principle \ref{gd} therefore corresponds to Hamiltonian dynamics. {\it Quantizing Free Dynamics on $T^\star\!{\cal M}$}: The quantization of the extended dynamical system \ref{gd} proceeds in a straightforward manner. It is in fact equivalent to the quantization of a particle moving on a curved manifold in an external magnetic field. Though affected by ordering ambiguities arising from the non-trivial geometry, the solution of this problem has been extensively discussed in the literature \cite{MONO}. A little care has to be taken when the topology of the problem is non-trivial. The Hilbert space of the system has to be constructed as that of square integrable sections of a line bundle $L$ over the `configuration space' ${\cal M}$ having the magnetic field $\omega_{ij}$ as curvature form. The same mathematical framework of pre-quantization is therefore recovered by means of the analogy of our theory with a magnetic system. As an example, Kostant's quantization condition ensuring the existence of the line bundle $L$, $$ \int_\Sigma\omega=2\pi n, $$ $\Sigma$ an arbitrary compact surface in ${\cal M}$ and $n$ an integer, reappears as Dirac's condition on the monopole charge. In any coordinate frame the quantum Hamiltonian describing the theory is given by \begin{equation} {\cal H}={1\over g^{1/2}}\Pi_ig^{ij}g^{1/2}\Pi_j +\hbar{\cal I}_1 +\hbar^2{\cal I}_2 +... \label{hamq} \end{equation} where the kinematical momenta $\Pi_i=-i\hbar^{1/2}\partial_i- \theta_i/\hbar^{1/2}$ have been introduced and ${\cal I}_1$, ${\cal I}_2$, ... are `optional' invariants reflecting the ordering ambiguities inherent in the quantization procedure. It is worthwhile to stress that ${\cal H}$ is a globally well defined operator on the Hilbert space of the theory. Once quantization has been performed, the same mechanism producing the reduction of the classical theory to Hamiltonian mechanics dynamically provides the reduction to the physical sector of the quantum theory. Since it depends only on the canonical formalism, the argument goes exactly as in the classical case and will not be repeated. By introducing the guiding center operators $X^i=\xi^i+\hbar^{1/2}{\bar\omega}^{ij}\Pi_j$ we obtain a set of operators fulfilling the canonical commutation relations $[X^\mu,X^{\nu+n}]=i\hbar\delta^{\mu\nu}$ and $[\Pi_\mu,\Pi_{\nu+n}]= -i\delta_{\mu\nu}$, $\mu,\nu=1,...,n$. Up to irrelevant terms, the dynamics of the physical variables $X^i$ separates from that of the $\Pi_i$, and the Hamiltonian \ref{hamq} decomposes as in \ref{ham3}. The energy necessary to induce a transition in the spectrum of the fast variables $\Pi$ being of order $1$---to be compared with $\hbar$---the system behaves as if frozen in one of the harmonic oscillator eigenstates of ${1\over2}\sum_i\Pi_i\Pi_i$, and the dynamics is effectively reduced to the physical sector described by the $X$.
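The classical reduction mechanism is easy to visualize numerically. The following minimal sketch (not part of the original argument; units with $c=e=1$ and all parameter values are purely illustrative) integrates the planar guiding center problem for a small mass and exhibits the separation between the $O(m)$ fast gyration and the slow drift of the guiding center coordinates:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m = 0.01                       # small mass, playing the role of hbar
B = lambda x, y: 1.0 + 0.2*y   # weakly inhomogeneous field normal to
                               # the plane (illustrative choice)

def lorentz(t, s):
    x, y, vx, vy = s
    w = B(x, y)/m              # cyclotron frequency eB/mc, with e=c=1
    return [vx, vy, w*vy, -w*vx]

sol = solve_ivp(lorentz, (0.0, 20.0), [0.0, 0.0, 0.0, 1.0],
                max_step=1e-3, rtol=1e-8)
x, y, vx, vy = sol.y

# Guiding center coordinates: position plus rotated velocity / (eB/m)
Xg = x + m*vy/B(x, y)
Yg = y - m*vx/B(x, y)
print("gyration radius (fast) :", np.ptp(x - Xg)/2)  # O(m)
print("guiding center drift   :", Xg[-1] - Xg[0])    # slow and steady
\end{verbatim}

For a homogeneous field the guiding center coordinates are exact constants of motion; switching on the weak gradient makes them drift slowly, which is the classical counterpart of the reduction described above.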
\section{Introduction} \label{introduction} The past few decades have demonstrated how the Internet is playing an ever-increasing role in daily life, and has become an integral asset in society. In particular, the use of various digital technologies and online platforms for communication has been rapidly adopted into the home and work place alike. However, this has also introduced several implications, as various malicious actors, or cyber-criminals, are quickly exploiting both the benefits afforded by such technologies and the vulnerabilities presented by them for their own criminal gains. Digital communities not only bring people closer together but also, inadvertently, provide criminals with new ways to access potential victims online. This has included extremist organisations strategically shifting their radicalisation and recruitment processes to online platforms for the purpose of indoctrinating individuals~\cite{Sabouni2017}. The same technologies that allow for a globalised world to interact seamlessly are also being utilised, adapted and abused by extremist organisations to target individuals and ensure organisational longevity \cite{Bertram2016TerrorismTI}. Shifting to social media and online means of communication has also provided the additional benefits of granting extremists a perceived sense of anonymity, allowing access to an increased audience size, and offering interactive features that facilitate the exchange of radical thoughts among like-minded individuals \cite{Neo2016}. One of the leading examples of an extremist organisation making use of social media platforms for the purpose of radicalisation is ISIS. ISIS' early social media strategies on Twitter emphasised several of the unique characteristics of social media listed above. In many ways ISIS' highly energised recruitment efforts online and its reliance on the Internet have been central to its identity \cite{Greenberg2016}, in addition to prompting many initiatives from law enforcement agencies to monitor and remove offensive content online. The UK government specifically outlined online hate crime, with particular emphasis on online extremism, as one of the principal threats to cyber security in their National Cyber Security Strategy \cite{HMGovernmentUK2016}. Terrorist use of the Internet has also been highlighted as one of the primary forms of harmful and illegal content online in their Online Harms Paper \cite{HMGovernmentUK2019}. Although the development of legislative and policing capabilities to prevent acts of extremism is clearly required, constructing approaches to reduce the radicalisation effects and impacts of extremist propaganda is also crucial to counter them. Such approaches are referred to as countering violent extremism (CVE), and have been perceived by researchers and policy makers alike to be central to the process of addressing the pressing need to combat radicalisation to violence and extremism \cite{Davis2016}. CVE programs have generally included carrying out focus groups and community engagement programs in particular demographics to encourage discussion around identity and social integration, and, specifically within the UK, the teaching of fundamental British values to deter extremism \cite{Webber2019}.
More recently, the use of the Internet as an aid in CVE strategies has become increasingly apparent; for instance, some of the work carried out by Moonshot CVE, a social enterprise working to ``disrupt and eventually end violent extremism'' \cite{Moonshot}, makes use of Internet capabilities to develop counter-messaging campaigns and provide online interventions to vulnerable individuals. These are intended to carry out counter-extremism interventions through the same digital channels utilised by extremist groups, so as to reach the same vulnerable audiences. As social media becomes more present in daily life, CVE strategies must also embrace the same technologies to effectively discredit and nullify extremist groups \cite{Bertram2016TerrorismTI}. Within the research landscape, several questions have been raised regarding the evaluation of such online CVE strategies, though very little work has been carried out to explore how such initiatives compare against the content and strategies used by extremist organisations. Some studies have emphasised the influence that online messages can have on human behaviours and opinions. For instance, a study conducted by Frischlich et al. \cite{Frischlich2015} shows how people can be manipulated into agreeing with extremist viewpoints when under conditions of threat propagated by various media. Thus, it may be assumed that online CVE could influence opinions in a similar way; however, further research is required to strengthen this hypothesis. In this article, we seek to advance current research by computationally exploring both extremist content found online and the CVE strategy of counter-narratives \cite{Greenberg2016} designed to diminish the influence of extremist organisations on social media platforms. Specifically, we engage in a two-part study that considers, firstly, how extremist and counter-extremist organisations craft content and, secondly, how the psychological motivation behind the messages compares between them; thus, our contributions provide novel insight into both sides of online extremism. In particular, this study will apply computational techniques to analyse and compare the behaviour of various pro-extremist and counter-extremist Twitter accounts, and the effects this could have on influencing human behaviour. Through this research, we hope to understand the extent to which current CVE counter-extremist strategies on Twitter relate to extremist content, and identify potential avenues for future research on whether such CVE approaches can be made more effective. The remainder of this paper is structured as follows. Section~\ref{relatedwork} reviews the current literature on extremist use of the Internet. Section~\ref{methodology} provides a detailed account of our approach and methodology, including the datasets and data analysis tools that were used. The results and observations from the analysis of both the pro-ISIS tweets and the counter-extremism tweets are discussed in Section~\ref{results}. We then conclude and outline avenues for future work in Section~\ref{conclusion}. \section{Related Work} \label{relatedwork} The phenomenon of radicalisation through online platforms has been researched extensively over the past decade by counter-terrorism and cyber security researchers. Many previous studies have examined key narratives incorporated by ISIS, as well as some of the major themes and components used within their online materials for the purpose of recruiting and disseminating propaganda. 
One such study is carried out by El-Badawy et al. \cite{EmmanEl-BadawyMiloComerford2015}, which closely examines violent Jihadi propaganda in order to understand their extremist ideology. The findings from this study showed that common beliefs shared by a majority of the global Muslim community, which may not necessarily be extreme, are frequently used to form solidarity with a wider target audience. Moreover, justifications from the Quran, Hadith or from scholarship are also often used to resonate with their Muslim audience \cite{EmmanEl-BadawyMiloComerford2015}. Similarly, Torok \cite{Torok2013} provides a qualitative analysis of the social media accounts of a number of extremist groups; the results from this study identify a number of key discursive schemas, and highlight common themes used by various extremist Islamist groups, including `blaming of the West, unity of Islam, restoring the glory of Islam, and the embracing of death'. These findings also reinforce the observation that the unity of the wider Muslim community is a key radicalisation mechanism used to normalise extremist content and actions. More closely related to the research covered in this paper, numerous studies have made use of computational approaches to analyse online extremist content and detect radicalisation. One such study is detailed by Vergani and Bliuc in \cite{Vergani2015}, where computational text-analysis tools were used to analyse the first 11 issues of \textit{Dabiq} to investigate the evolution of ISIS' language. The results from this provided four key findings: affiliation or achievement plays a major role in motivating collective action of the group; ISIS is increasingly adopting emotional tones to increase influence, including anger and anxiety; ISIS texts exhibit more concern for women; and finally, they are making more use of Internet jargon to adapt to online environments and appeal to younger audiences. Another such study was conducted by Fernandez et al. in \cite{MiriamFernandezMoizzahAsif2018}, where they explored how online radical content could be detected, not just by searching for key terms and expressions associated with extremist discourse, but by further analysing the contextual semantics of such terms \cite{Fernandez2018}. This provided a more realistic and reliable radicalisation detection model by helping to discriminate radical content from content that only uses radical terminology, i.e., content simply reporting on events or sharing harmless religious rhetoric. Despite the extensive research analysing online extremist content, to our knowledge, few studies have been carried out to systematically or computationally analyse counter-extremism content currently existing online in a similar way. This could largely be due to the fact that, at present, there are few existing counter-extremism initiatives online, or at least few that exist on mainstream social media platforms such as Twitter. That being said, some of the work within this line of research includes a report by Ashour, which outlined a broad framework consisting of three major ``pillars'' that could be used to counter extremist narratives \cite{Ashour2010}. The first pillar is a comprehensive message that dismantles and counter-argues against every dimension of the extremist narrative, such as the theological and political aspects. 
Secondly, choosing effective `messengers', who could be credible sources of information, namely former extremists who have been successfully de-radicalised, would also be imperative. Finally, the media plays an essential role in effectively disseminating counter-narrative content and attracting a wider audience. More recently, Wakeford and Smith~\cite{Wakeford2020} reinforce this point by arguing that it is not enough to simply delegitimise extremist posts; law enforcement agencies need to learn from extremist organisations, investigate and understand what makes them so influential, and harness this in their own counter-extremism efforts. The research detailed in this paper will therefore aim to fill this gap in the research landscape by providing more extensive empirical and statistical insight into the strategies used by pro-extremist users and how extremist content is constructed online. In particular, our work focuses on the extent to which the first and third pillars described above are currently being used in counter-extremist posts. We additionally provide some understanding of the psychological motivation behind their posts. By this, we specifically refer to how language can be manipulated to influence human behaviour. By comparing these findings to the content shared by counter-extremism agencies, we provide unique insight into both sides of the problem, and also provide avenues for discussion regarding the extent to which counter-messages may be effective at diminishing the online influence of extremist organisations. \section{Methodology} \label{methodology} Our approach consists of analysing two datasets of tweets---one consisting of tweets from pro-ISIS accounts and the other consisting of tweets from counter-extremism agencies---to gain insight into the linguistic components used within them. We first use computational methods to carry out an empirical analysis to better understand the techniques used by pro-ISIS supporters and various counter-extremism organisations to promote their content. This includes comparing the usage of hashtags, links to external websites, and the most commonly used terms. Following this, we implement a more comprehensive linguistic analysis of the different sets of tweets to gain an understanding of how online narratives are framed. Through applying insights from previous research regarding the use of language and motivational theory (such as Regulatory Focus Theory, introduced by Higgins in \cite{Higgins1997}) in certain texts, this analysis allows us to further explore how certain linguistic components can be used to influence behavioural change. This also provides insight on whether online counter-narrative content can be crafted more effectively, for instance, by utilising more appropriate linguistic terms. Below, we describe the Twitter datasets that are used in the study, and the methods and tools used to analyse them. \subsection{Datasets} \label{datasets} In order to analyse extremist content on social media, we acquired a publicly available dataset of noticeably pro-ISIS tweets posted by key ISIS-supporting Twitter accounts\footnote{https://www.kaggle.com/fifthtribe/how-isis-uses-twitter/data}. The dataset was published on the Kaggle data science platform, and consists of over 17,000 English-only tweets retrieved from 112 distinct pro-ISIS supporter accounts over a period of three months during the aftermath of the November 2015 Paris terror attacks. 
These tweets were identified as being pro-ISIS after analysing specific indicators. These include using certain key terms within their username, Twitter bio, or the actual tweet itself; following or being followed by other known radical accounts; or utilising images of ISIS logos or well-known radical leaders. This particular dataset has been used in several previous studies, including \cite{Fernandez2018} and \cite{Nouh2019}. Our study focussed on English-only tweets as the counter-extremist tweets used in this study were retrieved from English-speaking organisations only---further details are given below---making the analysis of the extremist and counter-extremist tweets more comparable. Before using this dataset in our analysis, we first validated that these tweets were in fact posted by pro-ISIS Twitter accounts by manually checking the profiles of the 112 accounts using the Twitter API. Our assumption here is that, if an account no longer exists on Twitter or, in other words, has been blocked from the social media platform, then the account most likely did belong to an ISIS-supporting individual or group. This is because the suspension or blocking of an account suggests that it had displayed malicious behaviour that did not comply with the Twitter terms of service. From this, we identified that only two of the Twitter accounts were not blocked and still existed on the platform at the time this research was conducted, where one of the accounts belonged to a journalist, and the other belonged to a researcher focusing on Jihadi groups. These two accounts and any tweets posted by them were thus deleted from the pro-ISIS dataset, leaving a final total of 16,949 tweets from 110 pro-ISIS Twitter accounts. In addition to the pro-ISIS dataset, we used the Twitter API to retrieve a number of tweets from the Twitter accounts of major English-speaking organisations specifically dealing with counter-extremism, including Governments and Law Enforcement Agencies (GLEAs) and NGOs. The accounts that tweets were retrieved from belong to the following agencies: The Commission for Countering Extremism in the UK (@CommissionCE); Counter Terrorism Policing UK (@TerrorismPolice); the UK Home Office (@ukhomeoffice); the US Department of State Bureau of Counterterrorism (@StateDeptCT); the Counter Extremism Project (@FightExtremism), an international policy organisation; the Global Center on Cooperative Security (@GlobalCtr), an international policy organisation; and Tech Against Terrorism (@techvsterrorism), an NGO supporting tech industries. These accounts were selected on the basis of the volume of tweets relevant to counter-extremism that were available to retrieve. Additionally, since the Twitter account of the UK Home Office does not solely post counter-extremism content, the tweets retrieved from it were filtered with the criterion that they include an extremism-related term (e.g., \textit{CVE}, \textit{terrorism}, \textit{extremist}). To gain deeper insight into how current counter-narratives are constructed, we separated this collection of counter-extremism tweets into three further datasets: counter-extremism tweets from GLEAs in the UK, counter-extremism tweets from GLEAs in the US, and counter-extremism tweets from NGOs. 
Each of these datasets held between 2,000 and 3,000 tweets (with 2,481 tweets from UK GLEAs, 2,703 tweets from US GLEAs, and 2,649 tweets from NGOs) and were analysed separately to investigate whether counter-narratives are crafted differently in each of the three organisational bodies. We then created a further dataset combining the tweets from all of the above counter-extremism datasets (with 7,833 tweets in total) to more easily compare the results from extremist and counter-extremist posts. Before analysing the datasets, a series of pre-processing steps were carried out to clean the tweets and prepare them for further linguistic analysis. These steps included: (1) removing any duplicate tweets or retweets from the datasets to reduce the levels of noise; (2) removing all punctuation marks; and (3) removing any URLs from tweets. (A minimal code sketch of these pre-processing steps, together with the category scoring described below, is given at the end of this section.) Similar data cleaning methods were used by \cite{Fernandez2018} and \cite{Nouh2019} prior to working with the dataset. It should also be noted here that account names and usernames were not used throughout the duration of this analysis---only the text within the actual tweets was linguistically analysed after the specified Twitter accounts were chosen and organised into their appropriate sub-datasets. \subsection{Analysis Framework and Methodology} \label{framework} Our analysis of the extremist and counter-extremist tweets is conducted with particular regard to two research questions: \begin{itemize} \item RQ1: How are pro-extremist and counter-extremist messages constituted and what do they focus on promoting? \item RQ2: How do pro-extremist and counter-extremist Twitter accounts compare in terms of the methods used to disseminate content and the psychological motivation used within their tweets? \end{itemize} The initial empirical analysis examines data associated with each dataset of tweets, including the most commonly used hashtags and terms, and some of the `topics' that are associated with them. The results from this are then compared across all five datasets of tweets to identify any similarities or differences between their respective approaches to promoting their content. This analysis is carried out using the Pandas\footnote{https://pandas.pydata.org/} data analysis library and the `Natural Language Toolkit' (NLTK)\footnote{https://www.nltk.org/} available for the Python programming language. To complement the aforementioned analysis, we use a framework based primarily on the theory of utilising regulatory focus to influence the thoughts and actions of a target audience through motivational regulation. In particular, this analysis is underpinned by the idea that regulatory focus distinguishes between two types of motivational regulation, promotion and prevention, as detailed by Higgins~\cite{Higgins1997}. Here, a promotion focus places emphasis on desires and potential goals, and often views these goals as hopes and aspirations \cite{Johnsen2014}. In contrast, a prevention focus places emphasis on potential losses, and tends to view goals as duties and obligations \cite{Higgins1997}. Moreover, the findings of Fuglestad et al. \cite{Fuglestad2008} also supported the notion that the regulatory focus and the frame of a message could be highly relevant to behavioural change, which they exemplified with smoking cessation and weight-loss interventions. 
Their study showed that a promotion focus could be related to the initiation of behavioural change (such as quitting smoking and dieting), while a prevention focus predicted the long-term maintenance of new, healthy behaviours. Some of these findings were applied to our study to observe whether a prevention or promotion regulatory focus was used in the extremist or counter-extremist message frames to radicalise or de-radicalise target audiences, respectively. This was assessed using an analysis framework based on the findings from Vaughn \cite{Vaughn2018}, which provided some statistical insight into the linguistic components used in both prevention and promotion regulatory focus. To analyse the datasets of tweets, we used the programmatically coded dictionary from the Linguistic Inquiry and Word Count (LIWC) linguistic analysis tool to automate the process of extracting psychological meaning from textual content. A similar approach has been used in \cite{Vaughn2018} and other studies to examine and predict the psychological frames and behaviours of various groups as well as textual content, for instance, to predict depression \cite{Pennebaker2015}. LIWC is a tool widely used in lexical approaches for personality measurement; it statistically analyses textual content across 81 different categories by calculating the percentage of words in the input text that match predefined words in a given category. LIWC is used in our approach to assess the extent to which each of the datasets of tweets makes use of promotion and prevention regulatory focus, in accordance with an analysis framework detailed by Vaughn in \cite{Vaughn2018}. Here, Vaughn specifies the LIWC categories that show significant differences between promotion-focussed text and prevention-focussed text. Further details of this are provided in Section~\ref{liwc} below. The next section will detail the results from our analysis, and discuss the insights gained from this. 
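To make the methodology concrete, the following is a minimal, illustrative Python sketch of three of the steps described in this section: the account-validation check against the Twitter API, the pre-processing steps (1)--(3), and the category-percentage scoring that LIWC performs. All names are our own placeholders; the two small word lists stand in for the proprietary LIWC lexicon, and the tweepy calls assume version 4 of that library with placeholder credentials. \begin{verbatim}
import re
import string

import tweepy  # assumption: tweepy v4; `api` built from placeholder keys


def account_still_active(api, screen_name):
    """Validation step: deleted (404) or suspended (403) accounts are
    treated as likely pro-ISIS, per the heuristic described above."""
    try:
        api.get_user(screen_name=screen_name)
        return True
    except (tweepy.errors.NotFound, tweepy.errors.Forbidden):
        return False


URL_RE = re.compile(r"https?://\S+")


def clean(tweet):
    """Steps (2) and (3), applied URL-first so that punctuation
    removal does not mangle the URLs before they are stripped."""
    tweet = URL_RE.sub("", tweet)
    return tweet.translate(str.maketrans("", "", string.punctuation))


def preprocess(tweets):
    """Step (1): drop retweets and duplicates, then clean each tweet."""
    seen, out = set(), []
    for t in tweets:
        if t.startswith("RT "):
            continue
        c = clean(t).strip().lower()
        if c and c not in seen:
            seen.add(c)
            out.append(c)
    return out


# Toy stand-in for two LIWC categories (the real lexicon is proprietary).
CATEGORIES = {
    "posemo": {"good", "safe", "hope", "support"},
    "negemo": {"hate", "enemy", "threat", "attack"},
}


def category_percentages(tweets):
    """LIWC-style scoring: % of all words matching each category list."""
    words = [w for t in tweets for w in t.split()]
    total = max(len(words), 1)
    return {cat: 100.0 * sum(w in lex for w in words) / total
            for cat, lex in CATEGORIES.items()}
\end{verbatim} Per-dataset figures such as those reported in the next section would then be obtained by running \texttt{category\_percentages} over each pre-processed dataset in turn.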
\section{Results and Discussion} \label{results} \subsection{Empirical Analysis} \label{empircal analysis} \begin{table*}[ht] \caption{The 15 most used hashtags found in the pro-ISIS and counter-extremism tweets.} \centering \begin{tabular}{| l | l | l | l | l |} \hline \textbf{Pro-ISIS Supp.} & \textbf{Counter-extremists} & \textbf{NGOs} & \textbf{US GLEAs} & \textbf{UK GLEAs}\\ \hline \#isis--1577 & \#actioncountersterrorism--826 & \#cve--160 & \#counterterrorism--163 & \#actioncountersterrorism--826\\ \hline \#syria--1373 & \#extremism--348 & \#isis--116 & \#isil--159 & \#extremism--304\\ \hline \#is--677 & \#cve--255 & \#pve--96 & \#terrorist--102 & \#stop--97\\ \hline \#iraq--634 & \#counterterrorism--238 & \#terrorism--84 & \#cve--95 & \#runhidetell--87\\ \hline \#islamicstate--443 & \#isis--181 & \#counterterrosim--74 & \#terrorism--94 & \#gunsoffourstreets--74\\ \hline \#aleppo--406 & \#terrorism--179 & \#gifct--58 & \#hizballah--78 & \#ctiru--67\\ \hline \#amaqagency--332 & \#isil--163 & \#techvsterrorism--58 & \#isis--65 & \#extremists--54\\ \hline \#breaking--324 & \#terrorist--135 & \#cft--54 & \#ct--65 & \#ath--51\\ \hline \#russia--271 & \#pve--97 & \#aml--53 & \#syria--57 & \#terrorists--47\\ \hline \#breakingnews--252 & \#stop--97 & \#radicalization--50 & \#gctf--51 & \#ctaw2016pic--47\\ \hline \#turkey--229 & \#runhidetell--87 & \#violentextremism--44 & \#bokoharam--50 & \#knowthegameplan--40\\ \hline \#usa--216 & \#ct--85 & \#extremism--39 & \#iraq--39 & \#illegalguns--35\\ \hline \#palmyra--215 & \#hizballah--78 & \#humanrights--37 & \#nigeria--37 & \#ctpolicingcareers--34\\ \hline \#ypg--199 & \#syria--77 & \#technology--37 & \#iran--22 & \#worldcup--23\\ \hline \#assad--159 & \#bokoharam--76 & \#un--31 & \#turkey--20 & \#besafebesound--19\\ \hline \end{tabular} \label{table:1} \end{table*} \subsubsection{Most Commonly Used Hashtags} \label{hashtags} Hashtags used by the pro-ISIS Twitter accounts followed most of the obvious political and extremist interests of radical Islamists as well as ISIS-specific adherents. Across all 16,949 pro-ISIS tweets, a total of 2,418 distinct hashtags were detected, and 41\% of the tweets contained at least one hashtag. The 15 most used hashtags found in the pro-ISIS tweets---as well as those found in the other datasets of tweets---are summarised in Table~\ref{table:1}. The most popular hashtags by a wide margin were \textit{\#isis} and \textit{\#syria}, which were used 1,577 and 1,373 times respectively. Considering the fact that most ISIS-related activity was based in Syria and its surrounding areas, it is no surprise that a majority of the most common hashtags were related to locations where ISIS activity was most prevalent, including Iraq and Aleppo, as well as the states which had the most impact on ISIS activity---at the time of data collection---including Russia and the USA. The consistent usage of such hashtags helped amplify the coverage of ISIS-related news amongst supporter networks. The tweets from counter-extremism NGOs used a total of 647 distinct hashtags (where 44\% of the tweets contained at least one hashtag), with the tweets from US GLEAs using a similar number of 605 distinct hashtags (where 57\% of the tweets contained at least one hashtag). 
However, the counter-extremism tweets from UK GLEAs used considerably fewer distinct hashtags, as only 375 unique hashtags were detected; at the same time, we found that hashtags were used significantly more often, with 76\% of the tweets containing at least one hashtag. This suggests these tweets were more consistent in their usage of hashtags to promote their content. In terms of those which were most used, similar strategies were observed across all three counter-extremism datasets. As shown in Table~\ref{table:1}, the majority of the top hashtags used in the tweets were based around counter-terrorism and extremism (e.g., \textit{\#extremism}, \textit{\#cve}, \textit{\#terrorist}, \textit{\#counterterrorism}). A notable observation here is that the counter-extremism tweets from the NGOs dataset and the US GLEAs dataset use the most similar hashtags; for instance, both datasets frequently use hashtags related to ISIS, including \textit{\#isis}, \textit{\#isil} and \textit{\#syria}. It should be noted here that the difference between the use of such hashtags by the pro-ISIS accounts and these counter-extremism accounts is that pro-ISIS tweets would use these hashtags to inform their audience of attacks made by ISIS and to promote their cause, as quoted in the following tweet: \textit{``\#ISIS claims control in outskirts of south-\#Ramadi - 25 and Commander of 6th Regiment killed''}. US GLEAs would use such hashtags to inform on the US government's strategies on dealing with ISIS, as shown in the following tweet: \textit{``The US is dedicated to cutting off \#ISIL's financing, disrupting its plots \& stopping the flow of \#FTF's. \#CSISLive''}. This suggests that a majority of their counter-extremist policies were tailored to dealing with ISIS, since this is the only extremist organisation mentioned. NGOs would use hashtags relating to ISIS largely to promote research analysing ISIS activities and behaviours, for instance: \textit{``Successful terrorist operations have shifted from a 'tactical bonus' to 'strategic necessity' for \#ISIS, as 'online sphere has been tailored to facilitate these attacks more''.} Contrastingly, the tweets from UK GLEAs did not make frequent use of ISIS-related hashtags. Instead, the most commonly used hashtags were concentrated around informing audiences on how to report acts of terrorism, including \textit{\#actioncountersterrorism}, \textit{\#stop}, \textit{\#runhidetell}, and \textit{\#knowthegameplan}. Another noteworthy point is that the top hashtags used by UK GLEAs were used more consistently compared to the tweets from the other two datasets; the top hashtag in the UK dataset was used 826 times, which is considerably more than those used in the NGOs and US datasets, where the top hashtags were used 160 and 163 times respectively. \subsubsection{Key Words and Topic Modelling} \label{topics} The next part of the empirical analysis included determining which words and topics were mentioned the most in each dataset of tweets. The most used word in tweets from the pro-ISIS accounts was \textit{ISIS}, with \textit{Syria} being the second most common word. Along with the frequent mentioning of \textit{Aleppo}, \textit{Assad} and \textit{Iraq}, other common terms included \textit{killed}, \textit{army}, \textit{breaking}, \textit{soldiers} and \textit{attack}. Further details of the most common words in each dataset are provided in the word clouds in Figure~\ref{fig:wordclouds}. 
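Both the hashtag statistics in the previous subsection and the word frequencies above come down to simple counting. The snippet below is an illustrative sketch of how such counts can be computed; it assumes the raw tweets are available (hashtags are removed by the punctuation-stripping step, so they must be counted before cleaning) and that the NLTK stopword list has been downloaded. \begin{verbatim}
import re
from collections import Counter

from nltk.corpus import stopwords  # requires nltk.download("stopwords")

HASHTAG_RE = re.compile(r"#\w+")
STOP = set(stopwords.words("english"))


def hashtag_stats(raw_tweets, top_n=15):
    """Distinct hashtags, share of tweets with >= 1 hashtag, and top-N."""
    counts, with_tag = Counter(), 0
    for t in raw_tweets:
        tags = [h.lower() for h in HASHTAG_RE.findall(t)]
        with_tag += bool(tags)
        counts.update(tags)
    share = 100.0 * with_tag / max(len(raw_tweets), 1)
    return len(counts), share, counts.most_common(top_n)


def top_words(cleaned_tweets, top_n=15):
    """Most common non-stopword terms, e.g. as input for the word clouds."""
    words = [w for t in cleaned_tweets for w in t.lower().split()
             if w not in STOP]
    return Counter(words).most_common(top_n)
\end{verbatim}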
A topic model, built using the Non-Negative Matrix Factorization (NMF) topic detection model, also provided useful insight into the most discussed subjects amongst pro-ISIS users as well as key emerging themes of ISIS ideologies, with the top 15 terms of each topic being detailed in Table~\ref{table:2}. We found that NMF worked better with shorter texts, such as tweets, than other models like Latent Dirichlet Allocation (LDA) \cite{godfrey2014}. A large proportion of the most discussed topics revolved around the latest reports of attacks against ISIS as well as those instigated by ISIS. This includes Topic\#1, which seems to include discussions on Russia's and Turkey's involvement in ISIS territory in Syria (as reported in \cite{russia} and \cite{turkey}); as well as Topic\#5, seemingly discussing attacks from and on the US army during `The Battle of Mosul' (as reported in \cite{Ackerman2016} and \cite{Chulov2016}). Other topics heavily discussed the `fighters' of ISIS and their martyrdom, as well as prayers for `victory' and `reward' (Topic\#2 and Topic\#4), as shown in the following tweet: \textit{``Fighting Khawarij is greatest Jehad, whoever is killed by them receives the reward of a double martyr''}. \begin{figure*}[ht] \centering \includegraphics[width=0.35\linewidth]{is.png}\hspace{1em} \includegraphics[width=0.35\linewidth]{ngo.png}\par\medskip \medskip \includegraphics[width=0.35\linewidth]{us.png}\hspace{1em} \includegraphics[width=0.35\linewidth]{uk.png} \caption{Word clouds of most commonly used words in each dataset: pro-ISIS tweets (top-left), NGOs tweets (top-right), US GLEAs tweets (bottom-left), and UK GLEAs tweets (bottom-right).} \label{fig:wordclouds} \end{figure*} The most common words used in the counter-extremism tweets were similar across all three datasets, with the words \textit{terrorism}, \textit{extremism}, and \textit{counterextremism} being used most frequently in all sets of tweets. The counter-extremism tweets from both NGOs and US GLEAs also mentioned \textit{ISIS} or \textit{ISIL} on many occasions, whereas such terms were not amongst the most common words used by UK GLEAs. Tweets from UK GLEAs consistently made use of words relating to reporting extremist incidents, including \textit{report}, \textit{police}, \textit{suspicious} and \textit{actioncountersterrorism}. In contrast, the tweets from NGOs and US GLEAs focussed more on informing about terrorist incidents and counter-extremism initiatives. 
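As a sketch of the topic-modelling step, the snippet below shows how an NMF topic model of the kind used here can be fitted with scikit-learn. The TF-IDF vectorisation and the parameter values are our assumptions for illustration, not the exact configuration used to produce Table~\ref{table:2}. \begin{verbatim}
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer


def nmf_topics(tweets, n_topics=5, n_top_terms=15):
    """Fit an NMF topic model and return the top terms of each topic."""
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(tweets)          # tweets x terms TF-IDF matrix
    model = NMF(n_components=n_topics, init="nndsvd", random_state=0)
    model.fit(X)                           # factorise X ~ W x H
    terms = vec.get_feature_names_out()
    topics = []
    for component in model.components_:    # one row of H per topic
        top = component.argsort()[::-1][:n_top_terms]
        topics.append([terms[i] for i in top])
    return topics
\end{verbatim}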
\begin{table*}[ht] \centering \caption{A topic model of the most discussed topics in each dataset.} \begin{tabular}{| c | P{2.75cm} | P{2.75cm} | P{2.75cm} | P{2.75cm} | P{2.75cm} |} \hline & \textbf{Pro-ISIS Supporters} & \textbf{Counter-extremists} & \textbf{NGOs} & \textbf{US GLEAs} & \textbf{UK GLEAs}\\ \hline \textit{Topic\#1} & kill, soldier, today, airstrike, civilian, militant, wound, injure, bomb, Russian, children, yesterday, Turkish, dozen, Iraqi & Twitter, follow, status, find, visit, ISIS, use, social, media, discuss, internet, join, event, launch, watch & status, discuss, workshop, role, present, panel, participate, event, join, host, policies, brief, look, secure, first & icymi, yesterday, remark, ISIL, video, testimonies, destroy, strategies, coordinate, global, discuss, effort, envoy, countries, statement & extremist, content, terrorist, see, online, via, report, internet, social, media, act, remove, material, access, combat\\ \hline \textit{Topic\#2} & islam, state, fighter, capture, force, via, unit, fight, group, takfir, declare, call, war, muslim, martyredom & icymi, yesterday, remark, ISIL, video, testimonies, destroy, strategies, coordinate, global, discuss, effort, envoy, countries, statement & content, extremist, online, platform, facebook, media, social, white, youtube, group, nazi, video, remove, supremacist, hate & design, terrorist, special, global, foreign, organise, individual, leader, yesterday, announce, member, entities, group, case, today & run, hide, tell, attack, safe, rare, advice, knife, gun, keep, simple, terror, prepare, weapon, firearm\\ \hline \textit{Topic\#3} & Al Qaeda, sheikh, jabhat, leader, sham, jaish, release, ibn, jaysh, today, new, village, airstrike, Baghdadi, area & counter, terror, extreme, violent, police, UK, global, effort, right, discuss, prevent, support, coordinate, threat, ct & attack, kill, people, ISIS, bomb, Taliban, claim, Afghanistan, suicide, soldier, boko, haram, target, group, wound & read, latest, via, initial, program, article, policies, safe, remark, foreign, counterterror, recruit, prison, radical, challenge & report, suspicious, act, something, could, behaviour, anonymous, live, see, save, instinct, ignore, online, public, vigilant\\ \hline \textit{Topic\#4} & Allah, may, accept, brother, protect, one, pleas, make, jazak, victorious, Muslim, sake, reward, love, bless & report, online, see, content, suspicious, extremist, help, act, via, terrorist, presence, active, visit, behaviour, find & report, new, recommend, juvenile, offend, policies, brief, violent, effort, rehabilitate, prevent, develop, need, societial, program & violent, counter, extreme, effort, fact, global, coalition, summit, prevent, build, support, extremist, local, terror, partner & presence, help, online, report, via, content, step, find, visit, us, button, speak, get, advice, website\\ \hline \textit{Topic\#5} & ISI, US, Assad, fight, Muslim, support, rebel, Syrian, Mosul, help, want, group, back, Aleppo, YPG & run, hide, tell, safe, could, attack, rare, remember, simple, keep, knife, gun, terror, watch, weapon & counter, terror, global, extreme, forum, violent, internet, prevent, un, effort, strategies, threat, present, launch, nation & attack, terrorist, condemn, statement, us, honor, victim, kill, remember, unit, bomb, year, die, mark, ago, families & game, secure, enjoy, plan, weekend, great, safe, time, address, stay, go, stadium, look, listen, check\\ \hline \end{tabular} \label{table:2} \end{table*} The topic model provided further insight 
into what content the counter-extremism tweets promoted in each of the three datasets. Again, tweets from both the NGOs and US GLEAs discussed similar topics. The majority of these topics were focussed on new \textit{counter extremism} efforts and policies, and reports of threats in various countries. Additionally, both sets of tweets discuss the activities of specific terrorist organisations, namely ISIS or ISIL, and strategies to counter them. The tweets from NGOs also mentioned \textit{white supremacists} and their online presence, referring to other groups of extremists aside from radical Islamist organisations. Aside from this, NGOs would often promote workshops or events organised to discuss and promote counter-terrorism strategies and policies; hence, one of their frequently discussed topics was promoting tickets for such events (specifically Topic\#1 and Topic\#5). The tweets from US GLEAs would also refer to the US government and their response to terrorist incidents, as seen in Topic\#4 and Topic\#5. In contrast, a large majority of the counter-extremism tweets from UK GLEAs were concentrated around providing advice on how to respond to terrorist incidents as well as promoting the appropriate channels for reporting such incidents. In this way, the tweets were addressed to an audience from a more specific demographic, i.e., UK residents who could report incidents to appropriate law enforcement departments within the UK. Moreover, tweets from this dataset also promoted campaigns against violent crimes in general, such as gun and knife crime in Topic\#2, and did not refer to any specific form of extremism or organisation, for instance: \textit{``our advice to the public on what do if caught in a gun or knife terror attack. It could keep you, your friends and family safe \#ActionCountersTerrorism''}. Such tweets were often posted multiple times, showing that UK GLEAs often repeated tweets to emphasise their importance to their target audience. \subsection{LIWC Analysis} \label{liwc} \subsubsection{Exploring Motivational Theory} Although there are 81 categories in the LIWC standard dictionary, only the categories that were shown in \cite{Vaughn2018} to indicate the use of a regulatory focus were used in our analysis framework, though some observations were also made for LIWC categories which showed notable differences between the datasets. Table~\ref{table:3} summarises the results from the LIWC analysis whilst assessing the extent to which a promotion or prevention focus was used. The table shows the mean percentage of all the words used within the tweets that fall into a particular LIWC category. For instance, a mean percentage of 1.35 for \textit{positive emotion} words implies that 1.35\% of the words used within the respective dataset were associated with positive emotion. Example words of each LIWC category are also included in the table. Overall, the counter-extremism tweets make use of a promotion focus more than the pro-ISIS tweets. Counter-extremism content tended to use the most language associated with \textit{positive emotion}, which was specified as an indicator for descriptions of pursuing hopes, and therefore associated with a promotion focus. Similarly, counter-extremism tweets made use of words related to \textit{work}, \textit{achievement} and \textit{leisure} more than pro-ISIS tweets, which also indicated a stronger promotion focus. Further still, tweets from UK GLEAs had a stronger promotion focus than those of any other counter-extremism organisation. 
The results from the LIWC analysis also clearly show that the pro-ISIS tweets had a very small mean percentage of words associated with any of the four defining conditions for content with a promotion focus. This suggests that extremists do not tend to frame their messages around promotion, or view their goals as hopes and aspirations. When looking at the results for the prevention focus section of the LIWC analysis, we can observe, interestingly, that counter-extremism tweets posted by UK GLEAs also made use of language associated with a prevention focus more than any of the other sets of tweets. The results for the UK GLEAs showed a greater mean percentage for almost all of the distinctive LIWC categories, indicating strong use of prevention-focussed content. Pennebaker \cite{pennebakersecret} found that, when compared to descriptions of pursuing hopes, descriptions of pursuing duties were more likely to include stories about dynamic social interactions and processes. This also implies that a higher percentage of function words, including \textit{pronouns}, \textit{prepositions}, \textit{auxiliary verbs}, \textit{negations}, and \textit{conjunctions}, are used in the text, which can be observed from Table~\ref{table:3}. \begingroup \begin{table*}[ht] \centering \caption{Results from the LIWC analysis when observing regulatory focus.} \begin{tabular}{|m{5em}|c | c | c | c | c | c |} \hline \textbf{LIWC Categories} & \textbf{Examples} & \textbf{Pro-ISIS Supporters} & \textbf{Counter-extremists} & \textbf{NGOs} & \textbf{US GLEAs} & \textbf{UK GLEAs}\\ \hline \multicolumn{7}{|l|}{\textbf{Promotion Focus}} \\ \hline \textit{Positive Emotion} & happy, pretty, good & 1.35 & 2.46 & 2.24 & 2.22 & 3.78\\ \hline \textit{Work} & work, class, boss & 1.08 & 5.68 & 4.08 & 3.75 & 4.72\\ \hline \textit{Achievement} & try, goal, win & 0.97 & 1.69 & 2.04 & 2.50 & 2.23\\ \hline \textit{Leisure} & house, TV, music & 0.57 & 2.57 & 0.82 & 1.08 & 1.36\\ \hline \multicolumn{7}{|l|}{\textbf{Prevention Focus}} \\ \hline \textit{Function words} & it, to, no, very & 23.97 & 27.35 & 26.77 & 25.95 & 37.10\\ \hline \textit{Pronouns} & I, them, itself & 4.29 & 5.04 & 2.98 & 3.65 & 8.79\\ \hline \textit{Personal Pronouns} & I, them, her & 2.61 & 2.88 & 1.54 & 2.06 & 5.14\\ \hline \textit{Conjunctions} & but, whereas & 2.06 & 3.11 & 2.67 & 2.30 & 4.51\\ \hline \textit{Negations} & no, never, not & 0.64 & 0.32 & 0.20 & 0.26 & 0.53\\ \hline \textit{Negative Emotion} & hate, worthless, enemy & 2.63 & 3.60 & 3.60 & 3.70 & 4.08\\ \hline \textit{Social Processes} & talk, us, friend & 4.95 & 9.24 & 5.78 & 5.53 & 11.14\\ \hline \textit{Family} & mom, brother, cousin & 0.24 & 0.13 & 0.16 & 0.12 & 0.11\\ \hline \textit{Friends} & pal, buddy, coworker & 0.06 & 0.23 & 0.27 & 0.30 & 0.17\\ \hline \end{tabular} \label{table:3} \end{table*} \endgroup In addition to this, messages with a prevention focus tend to concentrate on the avoidance of negative outcomes, and because of this, often use more language associated with \textit{negative emotions}. The results show that UK GLEA tweets had a higher percentage for this particular LIWC category as well, supporting the observation that they made the most use of a prevention focus compared to the other datasets. On the other hand, the pro-ISIS tweets generally seemed to use prevention-focussed narratives less than the counter-extremism tweets, and so did not put as much emphasis on duties and obligations as the counter-extremism tweets did. 
Overall, the results from this linguistic analysis show that the pro-ISIS tweets used much less regulatory focus, whether promotion or prevention, in their narratives than the counter-extremism tweets. Further analysis would therefore be required to assess their radicalisation techniques. However, an observation that is common across the results for all five datasets is that they all, generally, used a prevention focus in their messages more than a promotion focus. This could suggest that both extremist and counter-extremist narratives view their goals more as duties and obligations than hopes and aspirations. Through knowledge gained from previous studies such as \cite{Higgins1997} and \cite{Johnsen2014}, we can infer that using such a focus could be useful to maintain behavioural change, though it does not necessarily inspire initial behavioural change as narratives with a promotion focus would do. Due to this, it could be beneficial for online counter-extremism narratives to make use of more promotion-focussed content and pursue positive end-states or goals in order to initiate de-radicalisation, although prevention-focussed narratives would still be necessary to maintain these efforts. \subsubsection{Additional Observations} \label{additional obvs} In addition to exploring the usage of regulatory focus, LIWC was also used to analyse each of the datasets for any other notable distinctions in the linguistic composition of the tweets. Significant observations are summarised in Table~\ref{table:4}. An immediate distinction that can be seen from this analysis is the usage of pronouns in each dataset. Overall, all of the counter-extremism datasets generally used fewer singular first-person pronouns (such as \textit{I}, \textit{me}, \textit{my}), and more plural first-person pronouns (such as \textit{we}, \textit{our}, \textit{us}) than the pro-ISIS tweets. Second-person pronouns (such as \textit{you}, \textit{yours}, \textit{yourself}) were present in the tweets from UK GLEAs significantly more than in any of the other datasets of tweets. This is in line with the previous observations made from the empirical analysis, where the counter-extremism tweets from UK GLEAs mainly addressed their audience directly to inform them of how to properly report and protect against terrorist incidents. When looking at third-person pronouns (such as \textit{she}, \textit{he}, \textit{they}), the results from the LIWC analysis show that they were used more in the tweets from the pro-ISIS accounts than in any of the counter-extremism tweets. The use of pronouns in speech and text has been studied extensively in previous works, and has often been identified as a discursive tool used to persuade audiences. This effect of persuasion is partly due to the variability of the scope of reference of the pronouns used: the audience determines whether a given pronoun includes or excludes them \cite{Wilson1990, Zupnik1994}. In particular, the use of personal pronouns such as \textit{we}, \textit{you}, \textit{our} and \textit{us} is a common persuasive technique used in writing to make audiences feel more immediately involved. The LIWC analysis shows that this particular strategy is used more in the counter-extremism tweets than the pro-ISIS tweets; more specifically, the tweets from the UK GLEAs used such pronouns significantly more than the other sets of tweets. However, it should be noted here that the pro-ISIS tweets used such second-person pronouns (e.g. 
\textit{you}, \textit{yours}) more than the counter-extremism tweets from NGOs and US GLEAs. \begin{table*}[ht] \centering \caption{Additional observations made from the LIWC analysis.} \begin{tabular}{| c | c | c | c | c | c |} \hline \textbf{LIWC Category} & \textbf{Pro-ISIS Supporters} & \textbf{Counter-extremists} & \textbf{NGOs} & \textbf{US GLEAs} & \textbf{UK GLEAs}\\ \hline \textit{I} & 0.50 & 0.08 & 0.06 & 0.09 & 0.15\\ \hline \textit{We} & 0.46 & 1.40 & 0.78 & 1.41 & 2.08\\ \hline \textit{You} & 0.52 & 0.94 & 0.19 & 0.19 & 2.38\\ \hline \textit{She/he} & 0.42 & 0.15 & 0.15 & 0.11 & 0.21\\ \hline \textit{They} & 0.71 & 0.31 & 0.35 & 0.26 & 0.38\\ \hline \textit{Anxiety} & 0.25 & 1.79 & 1.42 & 1.60 & 2.13\\ \hline \textit{Religion} & 1.30 & 0.43 & 0.57 & 0.40 & 0.27\\ \hline \textit{Death} & 0.85 & 0.25 & 0.40 & 0.27 & 0.07\\ \hline \end{tabular} \label{table:4} \end{table*} Another noteworthy point here is that making use of third-person pronouns in political discourse is a tactic that can be used to delineate the level of commitment and involvement of an organisation in the statement being made \cite{Ho2013}. This is used most in the pro-ISIS tweets, largely due to the fact that most of these tweets are from pro-ISIS supporters, and likely not ISIS themselves, whereas the counter-extremism tweets---especially those from GLEAs---were directly from official representatives of the organisations. This shows that, in general, the counter-extremism agencies were more involved in, or committed to, any future responsibilities declared in the tweets. The results from the LIWC analysis also showed significant differences in the use of language associated with anxiety (e.g., \textit{nervous}, \textit{afraid}, \textit{tense}). Generally, the counter-extremism tweets used more anxiety-related language than the pro-ISIS tweets, with those from UK GLEAs using such language the most; this is in line with the findings from Vergani and Bliuc in \cite{Vergani2015}. The use of language related to death (e.g., \textit{kill}, \textit{bury}, \textit{grave}) was more common in the pro-ISIS tweets, though this is to be expected considering that our analysis showed themes of martyrdom and attacks on ISIS were frequently discussed, consistent with the findings from Torok in \cite{Torok2013}. Another observation is that the pro-ISIS tweets used more religion-associated language than the counter-extremism tweets. Tweets from US GLEAs and NGOs referred to religion slightly more than those from UK GLEAs, though this could largely be due to the fact that these datasets were shown to discuss ISIS frequently, as noted earlier in Section~\ref{empircal analysis}. Recent research, such as the study carried out by El-Said in \cite{El-Said2012}, has shown that a major de-radicalisation strategy, particularly when countering ISIS narratives, is to involve clerics and scholars to promote authentic religious teachings, and use them to refute misinformed religious teachings propagated by extremists. This provides a further area of development for counter-extremism campaigns on social media platforms, where making use of such religious teachings could help to directly counter a significant amount of extremist content online. \subsection{Limitations} \label{limitations} Despite gaining some useful insights through this study, certain limitations of our approach could have impacted our observations. 
The first limitation, which affects any research carried out in counter-extremism, is that it is very difficult to ethically measure the effectiveness of counter-extremism initiatives. This makes it challenging to come to any concrete conclusions about how such counter-narratives can be improved, since there is no efficient way to evaluate them (other than first-hand experience with individuals impacted). Additionally, measuring the effect of online extremist or counter-extremist content on behavioural change is also hard to do with ethically-sound methodologies, and therefore can mainly be supported by the findings from previous studies and research, as done in this paper. However, since it is undeniable that online extremists played a major role in the radicalisation of their target audiences on mainstream social media, it should be possible for counter-extremism narratives to reach the same platforms as online extremists, and therefore distribute content that is accessible and influential among their target audiences \cite{Ashour2009}. Another point to note is that the counter-extremism tweets were gathered from different time frames than the pro-ISIS tweets. For instance, tweets from UK GLEAs were collected from October 2016 to September 2019, tweets from US GLEAs were collected from March 2013 to September 2019, and tweets from NGOs were collected from January 2015 to September 2019. This wide time span is largely due to the lack of counter-extremism content available on Twitter (an interesting observation in itself). In our study, we felt that it was more important to gain a dataset of tweets large enough to analyse and compare with the results of the dataset of pro-ISIS tweets (which were all from 2015 and 2016). It should be noted here, however, that most of the counter-extremism accounts started posting more frequently at around the same time the pro-ISIS tweets were posted (in 2015 and 2016), which is when ISIS supporters were more prevalent on Twitter \cite{Ceron}. \section{Conclusions and Future Work} \label{conclusion} Up until recently, regulation of the Internet against organisational crime and extremism in online spaces has mainly concentrated on disruption efforts. Ultimately, our work suggests that other initiatives, such as CVE, could also be used to combat online radicalisation. In this study, we sought to advance the current research on online extremism and counter-extremism narratives by comparing the online behaviours of Twitter accounts from both extremist and counter-extremist organisations, and assessing how the two sets of messages compare with each other. To our knowledge, this is the first work to explicitly compare extremist and counter-extremist content in this way, whilst also applying psychological motivational theory to explore how such posts can influence behavioural change in online audiences. Although our study analyses data from one particular use-case of online radicalisation through pro-ISIS tweets, we believe that a similar methodology could be applied to other use cases of radicalisation using online platforms to gain further insights into the effectiveness of counter-extremism strategies. Through performing linguistic analysis on datasets of tweets from pro-ISIS supporters and various counter-extremism organisations, we found that, oftentimes, counter-extremism tweets from certain agencies---namely US GLEAs and NGOs---would promote topics and use hashtags which were also used frequently by pro-ISIS supporters. 
This included frequent discussion around ISIS activity and use of the hashtag \textit{\#ISIS}. In contrast, counter-extremism tweets from UK GLEAs seemed to share completely different content when compared to each of the other datasets of tweets. In this case, the majority of their posts were crafted for the purpose of informing online audiences on how to report or protect themselves against possible extremist activity, and specific extremist groups were rarely referred to. Consequently, most of these posts were not constructed to directly counter extremist content being posted online. In terms of the psychological motivation behind the tweets, with specific regard to Regulatory Focus Theory, we found that counter-extremism tweets generally seemed to use regulatory focus more than the pro-ISIS tweets, with tweets from UK GLEAs using such motivational theory the most. An avenue for future work here would be to analyse the pro-ISIS tweets with more advanced methods, frameworks or tools to assess other techniques used by extremists to radicalise and manipulate their audience. Our findings also showed that, overall, both extremist and counter-extremist tweets used prevention-focussed narratives more than promotion-focussed narratives. An area for further study would therefore be to explore whether using more promotion-focussed narratives would be an effective counter-extremism strategy. Previous research conducted by Higgins in \cite{Higgins1997} suggested that promotion-focussed narratives inspire initial behavioural change, whereas a prevention focus facilitates the maintenance of this behavioural change. Thus, another hypothesis that may be worth investigating here is whether the regulatory focus of such online extremism and counter-extremism content changed chronologically: did such narratives make use of a promotion focus initially, and then shift to using a prevention focus? Such theories could also be explored on other online platforms, not just Twitter. \bibliographystyle{IEEEtran}
1,477,468,750,817
arxiv
\subsection{Why eBPF for Edge Programmability} eBPF, short for extended Berkeley Packet Filter~\cite{mccanne1993bsd}, is a software infrastructure that operates within the Linux kernel. BPF has been used for user-defined packet filtering for decades, but was extended and redesigned for additional functionalities. The extended BPF enables use cases such as advanced network packet processing, monitoring, tracing, and security~\cite{cilium, katran, falco}, and is constantly being expanded through Linux kernel development efforts. We argue that the versatility of eBPF makes it a good candidate for designing a programmable edge store due to: \begin{itemize}[leftmargin=10pt,itemindent=0em,nolistsep] \item\textit{Expressiveness:} Previous work on eBPF applicability in the storage domain highlights its performance and flexibility benefits~\cite{barbalace2019extos,bijlani2019extension}. The basic instruction set is expressive enough to capture many common storage-related use cases such as aggregation, filtering, and transformation. These use cases can be attached to the execution of virtually any function within the Linux kernel and can be (re)programmed without halting the system. \item\textit{Wide availability:} The eBPF toolchain is supported by the Linux kernel, and the clang and gcc compilers. Hence, it is immediately available on any device supporting Linux (including sensors and IoT). This availability decreases the inherent inertia when deploying a new software stack. \item \textit{Secure and bounded multi-tenant execution:} Thanks to its simple(r) ISA, eBPF is amenable to verification and extensions~\cite{2019-pldi-ebpf}. The current Linux/eBPF toolchain can inject and run user-defined code in the Linux kernel in a safe way by providing symbolic execution and termination guarantees that ensure the extension is safe and will not stall in lengthy or infinite computation. Reusing the rest of Linux's isolation machinery, we can ensure safe and secure multi-tenant execution of storage customization logic for all applications. \item \textit{A unified ISA for all:} Currently, eBPF toolchains exist for multi-arch CPUs, (smart) NICs and switches (P4 supports eBPF compilation), and even FPGAs~\cite{2020-osdi-hxdp,2020-csur-ebpf-xdp}, with support for JITing. As a result, we believe that eBPF is what comes the closest to a unified ISA with support for heterogeneous computing and I/O devices. Such unified support also opens possibilities for unified optimizations across the network and storage stacks. For example, network support can be used to replicate the data packets necessary for storage replication in a distributed setting. \end{itemize} \noindent\textbf{What are the alternatives:} Alternatives would have been hardware-supported programmability (FPGAs, ISCs, ASICs)~\cite{2019-atc-insider,barbalacecomputational}, though their broader applicability at the edge is yet to be seen~\cite{2018-hotedge-fpga}. We also considered other language-provided isolation, as with Rust, JavaScript, or WebAssembly. Such techniques have been used in the context of storage~\cite{2018-osdi-splinter,2020-socc-sfunct} in data centers, and with compute on the edge as well~\cite{2020-middleware-sledge}. However, they (i) have high runtime overheads (JVM); (ii) do not support enhancing kernel storage routines; and (iii) do not support multiple devices. 
A complete userspace-based solution with lightweight virtualization support~\cite{2020-nsdi-firecracker} would also be possible; however, building such an infrastructure depends heavily on the underlying hardware capabilities (support for hardware virtualization) of edge devices. In any case, in a modular system the mechanism for programmability can be changed to a better alternative, if necessary. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/griffin-last.pdf} \caption{An overview of the Griffin design.}~\label{fig:griffin} \end{figure} \begin{table*}[] \centering \small \begin{tabular}{ll} \toprule \textbf{API} & \textbf{Description} \\ \midrule \textit{ret = execute(ac, object)} & execution of an application-defined ac logic on an object, the basic eBPF functionality \\ \textit{t = register(ac\_trigger, event | code\_path)} & a generic trigger registration on a certain event or code path (similar to eBPF) \\ \textit{replica\_list = replica\_ac(nodes, state)} & selection of replica nodes based on arbitrary user criteria \\ \textit{new\_replica\_list = loadbalancer\_ac(replica\_nodes, state)} & ac executed with the current replicas and state, outputs the new list \\ \textit{\{state, action\} = consistency(object, new\_data, state)} & ac takes an object, new data, and the current state, and returns a new state and action \\ \textit{new\_replica\_list = migration\_ac(replica\_nodes, t, state)} & ac takes the trigger, old replica set, and monitoring state, and outputs the new replica list \\ \bottomrule \end{tabular} \caption{\label{tab:API} Abridged Griffin{} API. \textit{ac} stands for eBPF-powered AppCode.} \end{table*} \subsection{Griffin{} Design} We now present the design of Griffin---our proposed edge storage middleware. An overview of the system architecture is depicted in Figure~\ref{fig:griffin}. Griffin{} spans a large set of potentially heterogeneous storage nodes at the edge. Our proposed approach to software programmability is very similar in spirit to Malacology~\cite{2017-eurosys-malacology}. Malacology presents a customized re-use of the battle-hardened code of the mature and stable Ceph file system. Here, we argue for building the re-usable pieces using eBPF language support, which can be selectively used by applications based on their needs. The code snippets can be made available to the wider range of edge applications as a library. The goal of Griffin{} is then to provide an expressive API for the user to specify their application needs such as data format (e.g., key-value, timeseries, graph-based), data lifetime, replication, consistency (e.g., strong, read-after-write, or eventual), and common or customized data operations with service-level objectives (e.g., latency). In the following, we explain the system-wide services, such as data replication, consistency, and session migration, provided by Griffin{}. We also show how eBPF serves as the fundamental technology to enable light-weight programmable storage functionalities including garbage collection, encryption, data erasure and replication, and customized computation offloading, which are essential in implementing the APIs in Griffin{}. We refer to this eBPF-provided logic as \textit{appcode} that the system will run. See Table~\ref{tab:API} for a brief overview of the API. 
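Since appcode attachment relies on the stock Linux/eBPF toolchain, a minimal bcc example conveys the flavour of the mechanism on which Griffin{} builds. The probe below merely counts \texttt{vfs\_read()} invocations per process in a shared map---the kind of light-weight in-kernel hook that the monitoring and trigger appcode described next would use. It is a sketch of the underlying machinery, not of Griffin{}'s own API, which is layered above it. \begin{verbatim}
from time import sleep

from bcc import BPF  # assumption: the bcc toolchain is installed

# In-kernel appcode: count vfs_read() calls per PID in a BPF hash map.
PROG = r"""
BPF_HASH(counts, u32, u64);

int on_vfs_read(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
"""

b = BPF(text=PROG)
b.attach_kprobe(event="vfs_read", fn_name="on_vfs_read")

sleep(10)  # let the probe gather data

# Userspace side reads the shared map, e.g. to evaluate a trigger.
top = sorted(b["counts"].items(), key=lambda kv: -kv[1].value)[:10]
for pid, n in top:
    print("pid=%d reads=%d" % (pid.value, n.value))
\end{verbatim}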
\textbf{Computation offloading:} Griffin{} introduces a computation offloading service for applications to run customized \textit{computation} appcode directly on the storage device that holds the data needed for the computation. This can benefit a wide range of data-intensive edge applications where moving the data is typically more expensive than performing the computation itself. Griffin{} employs eBPF to implement such a service since it allows computation to be performed in the Linux kernel and supports runtime updates. Despite the limits on function complexity, eBPF allows chaining user-defined functions in order to implement an expressive set of functionalities that can be offloaded into the kernel. Thanks to this in-kernel function execution, it is possible to remove layers of abstraction from the computation. This helps to shrink the overhead of the execution, reducing the latency perceived by the application and improving CPU utilization for useful computation. \textbf{Monitoring:} Apart from basic code execution, light-weight monitoring of the infrastructure is the most significant operation that eBPF supports out of the box. Monitoring data must be gathered and maintained in a light-weight manner, and it is used by the other storage system service customizations. To this end, Griffin{} provides a health monitoring service to continuously monitor the status of the edge storage devices, including CPU utilization, storage utilization and health, and network statistics. Furthermore, eBPF/XDP can also be used to monitor network latencies. Based on the collected data, the system has some predefined triggers. For example, when a storage device is reaching the end of its lifetime, the health monitoring service carries out two tasks: (i) it notifies the data replication service to find a new device to replicate the data stored on this device and performs complete data erasure; (ii) it signals the edge provider about this situation so that the edge provider can perform maintenance in due time. Observability and monitoring are currently among the most widespread use cases of eBPF. We plan to leverage eBPF to implement a health monitoring service for the storage middleware so that data safety is always guaranteed and timely maintenance can be performed by the edge provider. \textbf{Load balancing and replication:} Replication of data is a crucial factor to consider in edge computing. First, data replication is key to the availability of edge applications. Edge nodes can become unavailable due to unfavorable network conditions or system failures. With data replication we can ensure that an edge application always has its required data at its disposal and remains operative. Second, data replication can be used to improve edge application performance. As we previously highlighted, many edge applications are collaborative and rely on data sharing to achieve their goals; since users are geographically distributed, placing data on multiple edge nodes can improve end-to-end latency for as many users as possible. On the other hand, the scarcity of edge resources highlights the need to perform replication only when needed, in order to save storage space across edge nodes. For this reason we believe that it is necessary to let the users specify the replication policies and preferences, in order to allow the storage middleware to allocate resources optimally.
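As a sketch of the kind of predefined trigger the monitoring service described above could evaluate, consider the following; the record fields and thresholds are our own illustrative assumptions. When such a trigger fires, the replication service described next takes over.

\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-device health record populated by the
 * monitoring service; field names and thresholds are illustrative. */
struct device_health {
    double   cpu_util;        /* 0.0 - 1.0 */
    double   storage_util;    /* 0.0 - 1.0 */
    uint64_t media_wear_pct;  /* SSD wear-out indicator, 0 - 100 */
};

/* End-of-life trigger: returns true when the device should be
 * drained, i.e., its data re-replicated elsewhere and then erased. */
bool end_of_life_trigger(const struct device_health *h)
{
    return h->media_wear_pct >= 90 || h->storage_util >= 0.95;
}
\end{verbatim}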
Griffin{} features a data replication service which provides an eBPF-powered API to help customize many decisions when doing replication and load balancing. The default policy is not to replicate data. Applications can provide eBPF appcode to enable replication. The input to the code is a set of edge nodes (with further properties like location, load, capacity), and the eBPF replication code returns a list of nodes where replicas should be placed. The application is free to apply any criteria it sees fit, such as best locality, least load, or maximum capacity. The head of the list is the primary node; the others are replicas in the chain-replication protocol~\cite{2004-osdi-chain} (the only one supported). Similarly, load-balancing appcode can be attached to a load balancing trigger (e.g., CPU or network utilization). The code takes the list of replica machines and can output a new replica node to which the data should be replicated. The collection of trigger metrics can be made light-weight and distributed using network-wide eBPF distribution. \textbf{Consistency:} Griffin{} also builds a consistency management service which is directly attached to the replication service. It allows each edge application to specify its required consistency model (e.g., strong, read-my-write, and eventual) via a high-level API and automatically enforces the selected consistency model when data is replicated by the data replication service. eBPF consistency appcode can be attached and executed every time a read and/or write is issued. The code can be attached for a particular object or for objects satisfying certain criteria (name, creation time, or location). The appcode initializes a state associated with the object at the start. Upon execution, the appcode is expected to return the new state and an action (hold, reject, accept) associated with the object. Using this basic mechanism one can implement multiple consistency models. For example, a state can be associated with a timestamp which can be used to resolve whether a write or read should be admitted or rejected from the system. A hold action can be used to wait for a quorum response. Presence or absence of the state can be used to implement the first-writer-wins or the last-writer-wins consistency model. The read-my-write consistency model can be implemented by comparing the timestamps (vector clocks can also be stored as the state). Once a read or write is accepted in the system, it follows the chain replication protocol for data reading and writing. This design is very similar to the client-driven semantic reconciliation of vector-clocked objects in Dynamo~\cite{2007-sosp-dynamo}. \textbf{Session migration:} Griffin{} includes a session migration service to handle user mobility. The decisions of what to migrate and when are taken based on input data from the monitoring service with user-defined triggers. These triggers can be on the capacity, geo-area bounds, load, and any other monitored property of the system. By default, user sessions do not migrate with user mobility. A user can register triggers to represent interests in system properties. When the session migration trigger executes, it takes as input the current replica servers and the cause of the trigger, and outputs a new list of replica servers. Griffin{} ensures that this happens in a safe and atomic manner (at an appropriate time) by coordinating user traffic between different edge storage nodes.
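The following plain-C sketch illustrates the state/action contract of the consistency appcode described above with a last-writer-wins policy; the simplified signature (a scalar timestamp instead of the full object and data of Table~\ref{tab:API}) is our own assumption for illustration.

\begin{verbatim}
#include <stdint.h>

/* Actions mirror the hold/reject/accept outcomes described above. */
enum action { ACT_HOLD, ACT_REJECT, ACT_ACCEPT };

/* Per-object state kept by the consistency service; keeping only
 * a timestamp is an illustrative simplification. */
struct obj_state {
    uint64_t last_write_ts;
};

/* Last-writer-wins: accept a write only if it is newer than the
 * last accepted one; otherwise reject it as stale. */
enum action consistency_ac(struct obj_state *state, uint64_t write_ts)
{
    if (write_ts <= state->last_write_ts)
        return ACT_REJECT;
    state->last_write_ts = write_ts;  /* new state for the object */
    return ACT_ACCEPT;
}
\end{verbatim}

A first-writer-wins policy would instead accept only when no state is present yet, and vector clocks could replace the scalar timestamp to support read-my-write guarantees, as noted above.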
\textbf{Data erasure and garbage collection:} While data storage and sharing are fundamental in stateful edge applications, the data may only be useful for the application within a certain time frame. In central clouds it is not crucial to dispose of old data immediately, since the resources are elastic and virtually infinite (for the user). However, edge computing does not possess such a luxury of resource abundance. Thus, it is important to proactively reclaim storage space as soon as old data is no longer actively being used. Valid options include performing a data backup to a central cloud or simply erasing the data. Griffin{} has a dedicated data erasure and garbage collection (GC) service, expanding what is already an essential component of modern storage technologies. This service exposes APIs for the user to explicitly specify the lifetime of the data and the policy to use for GC in their applications (for example, delete once read/written, delete after a certain time period, or delete when a certain event has happened). Such lifecycle-based data management is also explored in data centers for short-lived data, e.g., serverless~\cite{2018-osdi-pocket}. At runtime, Griffin{} reclaims storage space according to the specified data lifetime and policy defined in eBPF triggers. eBPF can also be used in this case to enforce different GC policy implementations as well as data erasure, instrumenting the storage accesses with garbage-collection and data-erasure behavior at runtime, depending on the lifetime of the data. \section{Introduction}\label{sec:introduction} \subfile{1_intro} \section{Edge Storage: A New Start?}\label{sec:differences} \subfile{3_infrastructure_challenge} \section{Programmable Edge Storage}\label{sec:programmability} \subfile{4_programmable_storage} \section{Conclusion} \subfile{5_conclusion} \Urlmuskip=0mu plus 1mu\relax \bibliographystyle{plain}
\section*{Introduction} In Topology, Differential Geometry and Algebraic Geometry, it is usual to study geometric objects by considering suitable finite open coverings and studying the associated finite ringed spaces. Let us recall how these finite ringed spaces are constructed: \begin{vacio} \label{Parr} \medskip Let $S$ be a topological space and let ${\mathcal U}=\{U_1,\dots,U_n\}$ be a finite open covering of $S$. For each $s\in S$ define $U_s:=\underset{s\in U_i} \cap U_i$. Observe that the topology generated by ${\mathcal U}$ is equal to the topology generated by $\{U_s\}_{s\in S}$. We shall say that ${\mathcal U}$ is a minimal open covering of $S$ if $U_i\neq U_j$ whenever $i\neq j$ and, for every $i$, $U_i=U_s$ for some $s\in S$. Define the following equivalence relation on $S$: $s\sim s'$ if the covering ${\mathcal U}$ does not distinguish them, i.e., if $U_s=U_{s'}$. Consider on $S$ the topology generated by the covering ${\mathcal U}$ and let $X:=S/\sim$ be the quotient topological space. $X$ is a finite $T_0$-topological space, hence it is a finite poset, as follows: $[s]\leq [s']$ if $U_{s'}\subseteq U_{s}$. Let $\pi\colon S\to X$, $s\mapsto [s]$ be the quotient morphism and let $U_{[s]}=\{[s']\in X\colon [s']\geq [s]\}$ be the minimal open neighborhood of $[s]$. One has that $\pi^{-1}(U_{[s]})=U_s$. Suppose now that $(S,{\mathcal O}_S)$ is a ringed space (a scheme, a differentiable manifold, an analytic space, etc.). We have then a sheaf of rings on $X$, namely ${\mathcal O}:=\pi_*{\mathcal O}_S$, so that $\pi\colon (S,{\mathcal O}_S)\to (X,{\mathcal O})$ is a morphism of ringed spaces. We shall say that $(X,{\mathcal O})$ is the {\it ringed finite space associated with the finite covering ${\mathcal U}$}. Observe that ${\mathcal O}_{[s]}={\mathcal O}(U_{[s]})={\mathcal O}_S(U_s)$. To fix ideas, suppose that $S$ is a quasi-compact quasi-separated scheme (see \cite{Gr} 1.2.1). There exists a minimal affine open covering ${\mathcal U}=\{U_{s_1},\ldots,U_{s_n}\}$ of $S$ (see \cite{SanchoHomotopy} 3.13). Consider the associated ringed finite space $X$. It is easy to prove that the functors $\mathcal M \functor \pi_*{\mathcal M}$, ${\mathcal N}\functor \pi^*{\mathcal N}$ establish an equivalence of categories between the category of quasi-coherent ${\mathcal O}_S$-modules and the category of quasi-coherent ${\mathcal O}_X$-modules. Besides, $H^i(S,{\mathcal M})=H^i(X,\pi_*{\mathcal M})$ for any quasi-coherent ${\mathcal O}_S$-module ${\mathcal M}$. Observe that for every $U_{s_j}, U_{{s_{j'}}}\subseteq U_{s_i}$: \begin{enumerate} \item[-] The restriction morphism ${\mathcal O}_S(U_{s_i})\to {\mathcal O}_S(U_{s_j})$ is a flat morphism, since the morphism ${\mathcal O}_S(U_{s_i})\to ({\mathcal O}_S(U_{s_i})\backslash{\mathfrak p})^{-1}{\mathcal O}_S(U_{s_i})={\mathcal O}_{S,{\mathfrak p}}=({\mathcal O}_S(U_{s_j})\backslash{\mathfrak p})^{-1}{\mathcal O}_S(U_{s_j})$ is flat, for any ${\mathfrak p}\in U_{s_j}=\Spec {\mathcal O}_S(U_{s_j})$. \item[-] The natural morphism ${\mathcal O}_S(U_{s_j})\otimes_{{\mathcal O}_S(U_{s_i})}{\mathcal O}_S(U_{{s_{j'}}})\to {\mathcal O}_S(U_{s_j}\cap U_{{s_{j'}}})$ is an isomorphism, because $U_{s_j}\times_{U_{s_i}} U_{{s_{j'}}}=U_{s_j}\cap U_{{s_{j'}}}$.
\item[-] The morphism ${\mathcal O}_S(U_{s_j}\cap U_{{s_{j'}}})\to \prod_{U_{s_k}\subseteq U_{s_j}\cap U_{{s_{j'}}}} {\mathcal O}_S(U_{s_k})$ is faithfully flat: it is flat because $U_{s_j}\cap U_{{s_{j'}}}$ and $U_{s_k}$ are affine, and it is faithfully flat because $\coprod_{U_{s_k}\subseteq U_{s_j}\cap U_{{s_{j'}}}} U_{s_k}\to U_{s_j}\cap U_{{s_{j'}}}$ is a surjective map. \end{enumerate} Therefore, for any points $x_j,x_{j'}\geq x_i$ in $X$: \begin{enumerate} \item[a.] The natural morphism ${\mathcal O}_{x_i}\to {\mathcal O}_{x_j}$ is flat. \item[b.] The natural morphism ${\mathcal O}_{x_j}\otimes_{{\mathcal O}_{x_i}}{\mathcal O}_{x_{j'}}\to {\mathcal O}(U_{x_j}\cap U_{x_{j'}})$ is an isomorphism. \item[c.] The morphism ${\mathcal O}(U_{x_j}\cap U_{x_{j'}})\to \prod_{x_k\geq x_j, x_{j'}} {\mathcal O}_{x_k}$ is faithfully flat. \end{enumerate} \end{vacio} \begin{vacio} We shall say that a ringed finite space is schematic if it satisfies a., b. and c. In \cite{KS} 4.4, 4.11, it is proved that a finite ringed space $X$ is schematic iff $X$ satisfies a. and $R^n\delta_*{\mathcal O}_X$ is a quasi-coherent module for any $n$, where $\delta\colon X\to X\times X$ is the diagonal morphism. In \cite{KG} 4.5, it is proved that $X$ is schematic iff $X$ satisfies a. and $R^ni_*{\mathcal O}_{U}$ is quasi-coherent for any open subset $U\overset i\subset X$ and for $n\in{\mathbb N}$. In Algebraic Geometry, it is usual to approach the study of schemes and their morphisms through the category of quasi-coherent modules; for example, intersection theory can be studied with the $K$-theory of quasi-coherent modules. We shall denote by ${\bf Qc\text{-}Mod}_X$ the category of quasi-coherent ${\mathcal O}_X$-modules. We prove that a finite ringed space $X$ is schematic iff ${\bf Qc\text{-}Mod}_X$ satisfies minimal conditions: \begin{enumerate} \item[-] A ringed finite space $X$ is schematic iff for any morphism $f\colon {\mathcal M}\to {\mathcal N}$ of quasi-coherent ${\mathcal O}_X$-modules, $\Ker f$ is quasi-coherent, and $\delta_*(\mathcal M)\in {\bf Qc\text{-}Mod}_{X\times X}$, for any ${\mathcal M}\in {\bf Qc\text{-}Mod}_X$, where $\delta\colon X\to X\times X$, $\delta(x)=(x,x)$ is the diagonal morphism. \item[-] A ringed finite space $X$ is schematic iff for any morphism $f\colon {\mathcal M}\to {\mathcal N}$ of quasi-coherent ${\mathcal O}_X$-modules, $\Ker f$ is quasi-coherent, and $i_*(\mathcal M)\in {\bf Qc\text{-}Mod}_X$ for any ${\mathcal M}\in {\bf Qc\text{-}Mod}_U$ and any open subset $U\overset i\hookrightarrow X$. \end{enumerate} Likewise, we study and characterize affine finite spaces. Let us use the previous notations. It can be proved that $S$ is an affine scheme iff the morphisms ${\mathcal O}(S)\to \prod_{i}{\mathcal O}_{s_i}$ and ${\mathcal O}(U_{s_i})\otimes_{{\mathcal O}(S)}{\mathcal O}(U_{s_j}) \to \prod_{U_{s_k}\subset U_{s_i}\cap U_{s_j}} {\mathcal O}(U_{s_k})$ are faithfully flat, for any $i,j$. We say that a finite ringed space $X$ is affine if \begin{enumerate} \item[-] The morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat. \item[-] The morphism ${\mathcal O}_{x}\otimes_{{\mathcal O}(X)}{\mathcal O}_{x'}\to \prod_{z\in U_{x}\cap U_{x'}} {\mathcal O}_z$ is faithfully flat, for any $ x,x'\in X$.\end{enumerate} \noindent Affine finite spaces are schematic and a finite ringed space $X$ is schematic iff $U_x$ is affine, for any $x\in X$.
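For instance (an illustration, easily checked from this definition): for a field $k$, the finite ringed space
$$\xymatrix@R=8pt{ k[t,1/t] \ar[rd] & \\ & k[t,1/t,1/(1-t)] \\ k[t,1/(1-t)] \ar[ru] & }$$
with two minimal points $y,y'$ and a maximal point $z$, where ${\mathcal O}_y=k[t]_t$, ${\mathcal O}_{y'}=k[t]_{1-t}$ and ${\mathcal O}_z=k[t]_{t(1-t)}$, is affine: one has ${\mathcal O}(X)=k[t]_t\cap k[t]_{1-t}=k[t]$ (intersection inside $k[t]_{t(1-t)}$); the morphism $k[t]\to k[t]_t\times k[t]_{1-t}\times k[t]_{t(1-t)}$ is faithfully flat, since it is a product of localizations and $\Spec k[t]_t\cup \Spec k[t]_{1-t}=\Spec k[t]$; and ${\mathcal O}_y\otimes_{k[t]}{\mathcal O}_{y'}=k[t]_{t(1-t)}={\mathcal O}_z=\prod_{z'\in U_y\cap U_{y'}}{\mathcal O}_{z'}$ (the remaining pairs of points are checked similarly). This space is the finite model of the affine line $\Spec k[t]$ associated with the covering $\{D(t),D(1-t)\}$; compare with the non-affine finite models of Examples \ref{Eejemplos} below.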
We prove that $X$ is affine iff ${\mathcal O}(X)\to {\mathcal O}_x$ is flat, ${\mathcal O}_X(U_x\cap U_{y})={\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_y$ and $X$ and $U_x\cap U_{y}$ are acyclic, for any $x, y\in X$. A schematic space $X$ is affine iff $H^1(X,{\mathcal M})=0$ for any quasi-coherent module ${\mathcal M}$ (see \cite{KS} 5.11), which is equivalent to saying that the functor ${\mathcal M}\functor \Gamma(X,{\mathcal M})$ is exact (see Corollary \ref{corotonto}).\end{vacio} \begin{vacio} Next, we study the morphisms between schematic finite spaces. Let $f\colon X\to Y$ be a morphism of ringed spaces between schematic finite spaces. We say that $f$ is affine if $f_*{\mathcal O}_X$ is a quasi-coherent module and $f^{-1}(U)$ is affine, for any affine open subset $U\subset Y$. We prove that $f$ is affine if and only if $f_*$ preserves quasi-coherence and is an exact functor. We say that $f$ is schematic if $f_*{\mathcal O}_X$ is quasi-coherent and the morphism $U_x\to U_{f(x)}$, $x'\mapsto f(x')$ is affine, for any $x\in X$. We prove that the following statements are equivalent: \begin{enumerate} \item[-] $f$ is schematic. \item[-] $f_*$ preserves quasi-coherence. \item[-] The natural flat morphism ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y \to \prod_{z\in U_x\cap f^{-1}(U_y)} {\mathcal O}_z$ is faithfully flat, for any $x\in X$ and $y\geq f(x)$. \item[-] $R^n\Gamma_{f*}{\mathcal O}_X\in {\bf Qc\text{-}Mod}_{X\times Y}$ for any $n$, where $\Gamma_f\colon X\to X\times Y$, $\Gamma_f(x)=(x,f(x))$ is the graph of $f$. \item[-] $R^n(f\circ i)_*{\mathcal O}_U\in {\bf Qc\text{-}Mod}_Y$, for any open subset $U\overset{i}\hookrightarrow X$ and any $n\geq 0$. \end{enumerate}\end{vacio} \begin{vacio} Now, we ask ourselves whether schematic finite spaces are determined by the category of their quasi-coherent modules. A morphism of schemes $F \colon S \to T$ between quasi-compact quasi-separated schemes is an isomorphism if and only if the functors $${\bf Qc\text{-}Mod}_S\,\, \dosflechaso{F_*}{F^*} \,\, {\bf Qc\text{-}Mod}_T$$ are mutually inverse. Given a schematic morphism $f\colon X\to Y$, we prove that the following statements are equivalent: \begin{enumerate} \item[-] The functors ${\bf Qc\text{-}Mod}_X\,\, \dosflechaso{f_*}{f^*} \,\, {\bf Qc\text{-}Mod}_Y$ are mutually inverse. \item[-] ${\mathcal O}_Y=f_*{\mathcal O}_X$ and $f$ is affine. \item[-] Essentially, $f$ is the quotient morphism defined on $X$ by a minimal affine open covering. \item[-] The cylinder $C(f)=X\coprod Y$ of $f$ is a schematic space and $f$ is a faithfully flat morphism. \end{enumerate} \noindent A morphism that satisfies any of these statements will be called a quasi-isomorphism.\end{vacio} \begin{vacio} \label{Pa2} Let us now speak less precisely. Given a schematic finite space $X$, consider the ringed space $$\tilde X:=\ilim{x\in X} \Spec {\mathcal O}_x.$$ $\Spec {\mathcal O}_x$ is a subspace of $\tilde X$ via the natural morphism $i_x\colon \Spec {\mathcal O}_x\to \tilde X$ and $\tilde X=\cup_{x\in X} \Spec {\mathcal O}_x$. $\tilde X$ is quasi-compact, the set of its quasi-compact open subsets is a basis of its topology and the intersection of two quasi-compact open subsets is quasi-compact. Given a quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$, let $\tilde {\mathcal M}_x$ be the ${\mathcal O}_{\Spec {\mathcal O}_x}$-module of localizations of ${\mathcal M}_x$ and consider the ${\mathcal O}_{\tilde X}$-module $\tilde {\mathcal M}:=\plim{x\in X} i_{x,*} \tilde{\mathcal M}_x$.
We prove that $H^n(X,{\mathcal M})=H^n(\tilde X,\tilde{\mathcal M})$, for any $n\geq 0$, and that the category of quasi-coherent modules of $X$ is equivalent to the category of quasi-coherent modules of $\tilde X$. Let $X$ and $Y$ be schematic finite spaces. We say that a morphism of ringed spaces $f\colon \tilde X\to \tilde Y$ is schematic if $f_*$ preserves quasi-coherence. Let ${\mathcal C}_W$ be the category of schematic finite spaces localized by the quasi-isomorphisms. We prove that $$\Hom_{{\mathcal C}_W}(X,Y)=\Hom_{sch}(\tilde X,\tilde Y).$$\end{vacio} \section{Finite ringed spaces: basic notions} Let $X$ be a finite set. It is well known ([\ref{Alexandrof}]) that giving a topology on $X$ is equivalent to giving a preorder relation on $X$: $$x \leq y \, \iff \, \bar{x} \subseteq \bar{y}, \ \ \text{ where } \bar{x}, \bar{y} \text{ are the closures of } x \text{ and } y.$$ In addition, the topology is $T_0$ if and only if the preorder is a partial order (i.e., it satisfies the antisymmetry property). Let $X$ be a finite topological space. For each point $x \in X$, let us denote $$ U_x = \text{smallest open subset containing } x,$$ that is, $U_x= \{y \in X : y \geq x \}$. Then, $ x \leq y \Leftrightarrow U_y \subseteq U_x$. The family of open subsets $\{U_x\}_{x \in X}$ constitutes a minimal basis of open subsets of $X$ (any other basis contains this one). A map $f\colon X \to Y$ between finite topological spaces is continuous if and only if it is monotone (i.e. $x \leq y$ implies $f(x) \leq f(y)$). Let $X$ be a finite topological space and let $F$ be a sheaf of abelian groups (resp. rings, etc.) on $X$. The stalk of $F$ at $x\in X$, $F_x$, is an abelian group (resp. ring, etc.) and coincides with the sections of $F$ on $U_x$. For each $x\leq y$, the natural morphism $r_{xy}\colon F_x\to F_y$ is just the restriction morphism $F(U_x)\to F(U_y)$, which satisfies: $r_{xx}=\Id$ for any $x$, and $r_{yz}\circ r_{xy}=r_{xz}$ for any $x\leq y\leq z$. Conversely, consider the following data: - An abelian group (resp. a ring, etc) $F_x$ for each $x\in X$. - A morphism of groups (resp. rings, etc) $r_{xy}\colon F_x\to F_y$ for each $x\leq y$, satisfying: $r_{xx}=\Id$ for any $x$, and $r_{yz}\circ r_{xy}=r_{xz}$ for any $x\leq y\leq z$. \noindent Let $\mathcal F$ be the following presheaf of groups (resp. rings, etc.): For each open subset $U\subset X$, $\mathcal F(U):=\plim{x\in U} F_x$. It is easy to prove that $\mathcal F_x=F_x$ and that $\mathcal F$ is a sheaf. \begin{definicion} A ringed space is a pair $(X,{\mathcal O})$, where $X$ is a topological space and ${\mathcal O}$ is a sheaf of (commutative with unit) rings on $X$. A morphism of ringed spaces $(X,{\mathcal O})\to (X',{\mathcal O}')$ is a pair $(f,f^\#)$, where $f\colon X\to X'$ is a continuous map and $f^\#\colon {\mathcal O}'\to f_*{\mathcal O}$ is a morphism of sheaves of rings (equivalently, a morphism of sheaves of rings $f^{-1}{\mathcal O}'\to {\mathcal O}$). A {\it finite ringed space} is a ringed space $(X,{\mathcal O})$ whose underlying topological space $X$ is finite. \end{definicion} A morphism of ringed spaces $(X,{\mathcal O})\to (X',{\mathcal O}')$ between two finite ringed spaces is equivalent to the following data: - a continuous (i.e.
monotone) map $f\colon X\to X'$, - for each $x\in X$, a ring homomorphism $f^\#_x\colon {\mathcal O}'_{f(x)}\to {\mathcal O}_x$, such that, for any $x\leq y$, the diagram \[ \xymatrix{ {\mathcal O}'_{f(x)} \ar[r]^{f^\#_{x}} \ar[d]_{r_{f(x)f(y)}} & {\mathcal O}_{x}\ar[d]^{r_{xy}}\\ {\mathcal O}'_{f(y)} \ar[r]^{f^\#_{y}} & {\mathcal O}_{y}}\] is commutative. We denote by $\Hom(X,Y)$ the set of morphisms of ringed spaces between two ringed spaces $X$ and $Y$. \begin{ejemplo} Let $\{*\}$ be the topological space with one element. We denote by $(*,R)$ the finite ringed space whose underlying topological space is $\{*\}$ and whose sheaf of rings is the ring ${\mathcal O}_* =R$. For any ringed space $(X,{\mathcal O})$ there is a natural morphism of ringed spaces $(X,{\mathcal O}) \to (*,{\mathcal O}(X))$. \end{ejemplo} Let $(X,{\mathcal O})$ be a finite ringed space. A sheaf ${\mathcal M}$ of ${\mathcal O}$-modules (or ${\mathcal O}$-module) is equivalent to these data: an ${\mathcal O}_x$-module ${\mathcal M}_x$ for each $x \in X$, and a morphism of ${\mathcal O}_x$-modules $r_{xy} \colon {\mathcal M}_x \to {\mathcal M}_y$ for each $x \leq y$, such that $r_{xx}=\Id$ and $r_{xz}=r_{yz} \circ r_{xy}$ for any $x \leq y \leq z$. Again, one has that $$ {\mathcal M}_x = \text{stalk of } {\mathcal M} \text{ at } x = {\mathcal M}(U_x) $$ and $r_{xy}$ is the restriction morphism ${\mathcal M}(U_x) \to {\mathcal M}(U_y)$. For each $x \leq y$ the morphism $r_{xy}$ induces a morphism of ${\mathcal O}_y$-modules $$ \widetilde{r_{xy}} \colon {\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_y \to {\mathcal M}_y.$$ An ${\mathcal O}$-module ${\mathcal M}$ is said to be quasi-coherent if for any $x\in X$ there exist an open neighbourhood $U$ of $x$ and an exact sequence of ${\mathcal O}_{\vert U}$-modules \[ {\mathcal O}_{\vert U}^I \to {\mathcal O}_{\vert U}^J\to{\mathcal M}_{\vert U}\to 0.\] \begin{teorema}[(\cite{SanchoHomotopy} 3.6)\,] \label{qc} Let $(X,{\mathcal O})$ be a finite ringed space. An ${\mathcal O}$-module ${\mathcal M}$ is quasi-coherent if and only if for any $x\leq y$ the morphism \[\widetilde{r_{xy}} \colon {\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_y\to{\mathcal M}_y\] is an isomorphism. \end{teorema} \begin{proof}[Proof] $\Rightarrow)$ Let $U$ be an open neighbourhood of $x$ such that there exists an exact sequence \[ {\mathcal O}_{\vert U}^I \to {\mathcal O}_{\vert U}^J\to{\mathcal M}_{\vert U}\to 0.\] We can suppose $X=U$. Obviously, $({\mathcal O}^I)_x\otimes_{{\mathcal O}_x}{\mathcal O}_y=({\mathcal O}_x)^I \otimes_{{\mathcal O}_x}{\mathcal O}_y=({\mathcal O}_x \otimes_{{\mathcal O}_x}{\mathcal O}_y)^I=({\mathcal O}_y)^I=({\mathcal O}^I)_y$ and $({\mathcal O}^J)_x\otimes_{{\mathcal O}_x}{\mathcal O}_y= ({\mathcal O}^J)_y$, then ${\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_y={\mathcal M}_y$. $\Leftarrow)$ Given $x\in X$, consider an exact sequence of ${\mathcal O}_x$-modules ${\mathcal O}_{x}^I \to {\mathcal O}_{x}^J\to{\mathcal M}_{x}\to 0$. Tensoring by $\otimes_{{\mathcal O}_x}{\mathcal O}_y$, for any $x\leq y$, one has the exact sequence ${\mathcal O}_{y}^I \to {\mathcal O}_{y}^J\to{\mathcal M}_{y}\to 0$. Then, one has a sequence of morphisms $${\mathcal O}_{|U_x}^I \to {\mathcal O}_{|U_x}^J\to{\mathcal M}_{|U_x}\to 0$$ which is exact since it is exact on stalks at $y$, for any $y\in U_x$. Therefore, ${\mathcal M}$ is quasi-coherent.
\end{proof} \medskip We shall denote by $\text{\bf Mod}_X$ the category of ${\mathcal O}$-modules on a ringed space $(X,{\mathcal O})$ and by ${\bf Qc\text{-}Mod}_X$ the subcategory of quasi-coherent ${\mathcal O}$-modules. Also, for any ring $R$, we shall denote by $\text{\bf Mod}_R$ the category of $R$-modules. \begin{observaciones} \noindent a) If $f\colon X \to Y$ is a morphism of ringed spaces and ${\mathcal N}$ is a quasi-coherent ${\mathcal O}_Y$-module, then $f^* {\mathcal N}:=f^{-1}{\mathcal N}\otimes_{f^{-1}{\mathcal O}_Y}{\mathcal O}_X$ is a quasi-coherent ${\mathcal O}_X$-module. In particular, this is true for morphisms between finite ringed spaces. \noindent b) If $f\colon {\mathcal M} \to {\mathcal N}$ is a morphism of ${\mathcal O}_X$-modules where ${\mathcal M}$ and ${\mathcal N}$ are quasi-coherent, then $\Coker f$ is quasi-coherent. However, it is not always true that $\Ker f$ is quasi-coherent. \end{observaciones} \begin{ejemplo} Let $(X,{\mathcal O})$ be a finite ringed space and $\pi\colon (X,{\mathcal O}) \to (*,{\mathcal O}(X))$ the natural morphism of ringed spaces. If $M$ is an ${\mathcal O}(X)$-module, then $\pi^* M$ is a quasi-coherent ${\mathcal O}_X$-module, which we denote $\tilde M$. We say that $\tilde M$ is the quasi-coherent module associated with $M$ and we have a functor $\pi^*\colon \text{\bf Mod}_{{\mathcal O}(X)} \to {\bf Qc\text{-}Mod}_X$, $M \mapsto \tilde M$. Note that $\tilde M_x=M\otimes_{{\mathcal O}(X)} {\mathcal O}_x$, for each $x \in X$. \end{ejemplo} \begin{definicion} A finite ringed space $(X, {\mathcal O})$ is a \textit{finite flat-restriction space} (or \textit{finite fr-space}) if the restriction morphisms $r_{xy} \colon {\mathcal O}_x\to {\mathcal O}_y$ are flat, for any $x\leq y$. \end{definicion} \begin{proposicion} \label{fr} Let $(X,{\mathcal O})$ be a finite ringed space. $(X,{\mathcal O})$ is a finite fr-space $\Leftrightarrow$ For any open subset $U$ (resp. $U_x$) of $X$ and any morphism $f \colon {\mathcal M} \to {\mathcal N}$ of quasi-coherent ${\mathcal O}_U$-modules (resp. ${\mathcal O}_{U_x}$-modules), $\Ker f$ is quasi-coherent. \end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ Let $f \colon {\mathcal M} \to {\mathcal N}$ be a morphism of quasi-coherent ${\mathcal O}_U$-modules. We have to prove that, for each $x \leq y \in U$, the morphism $$ \tilde r_{xy} \colon (\Ker f)_x \otimes_{{\mathcal O}_x} {\mathcal O}_y \to (\Ker f)_y$$ is an isomorphism. This follows from the following commutative diagram with exact rows $$\xymatrix{0 \ar[r] & (\Ker f)_x \otimes_{{\mathcal O}_x} {\mathcal O}_y \ar[d]^{\tilde r_{xy}} \ar[r] & {\mathcal M}_x\otimes_{{\mathcal O}_x} {\mathcal O}_y \ar[r]^-{f_x} \ar[d]^{\tilde r_{xy}} & {\mathcal N}_x \otimes_{{\mathcal O}_x} {\mathcal O}_y \ar[d]^-{\tilde r_{xy}} \\ 0 \ar[r] & (\Ker f)_y \ar[r] & {\mathcal M}_y \ar[r]_-{f_y} & {\mathcal N}_y}$$ in which the first row is exact because ${\mathcal O}_x \to {\mathcal O}_y$ is a flat morphism and the second and the third vertical morphisms are isomorphisms because ${\mathcal M}$ and ${\mathcal N}$ are quasi-coherent ${\mathcal O}_U$-modules. \noindent $\Leftarrow)$ Let $x \leq y \in X$. Let $f_x \colon M_x \hookrightarrow N_x$ be an injective morphism of ${\mathcal O}_x$-modules. We have to prove that $f_x \otimes 1 \colon M_x \otimes_{{\mathcal O}_x} {\mathcal O}_y \to N_x \otimes_{{\mathcal O}_x} {\mathcal O}_y $ is still injective.
Consider the open subset $U_x$ of $X$ and the functor $\text{\bf Mod}_{{\mathcal O}_x} \to {\bf Qc\text{-}Mod}_{U_x}, M_x \mapsto \widetilde{M_x}$. Then, the morphism $f_x$ gives us a morphism $\widetilde{f_x} \colon \widetilde{M_x} \to \widetilde{N_x}$ of quasi-coherent ${\mathcal O}_{U_x}$-modules. Note that $(\widetilde{f_x})_y = f_x \otimes 1$. Since by hypothesis $\Ker \widetilde{f_x}$ is quasi-coherent, we have that: $$\Ker (f_x \otimes 1)= (\Ker \widetilde{f_x})_y = \Ker f_x \otimes_{{\mathcal O}_x} {\mathcal O}_y = 0 \otimes_{{\mathcal O}_x} {\mathcal O}_y =0,$$ so we conclude that $f_x \otimes 1$ is injective. \end{proof} \begin{observacion} Let $(X,{\mathcal O})$ be a finite ringed space. The proposition above says that ${\bf Qc\text{-}Mod}_U$ is an abelian category for each open subset $U$ of $X$ if and only if $(X,{\mathcal O})$ is a finite flat-restriction space. It is also true that in this case ${\bf Qc\text{-}Mod}_U$ is a Grothendieck category (see [\ref{Enochs}]). \end{observacion} \section{Affine finite spaces} \begin{notacion} Let $(X,{\mathcal O})$ be a finite ringed space. For each $x,y\in X$, let us denote $U_{xy}=U_x\cap U_y$ and ${\mathcal O}_{xy}={\mathcal O}(U_{xy})$. If ${\mathcal M}$ is an ${\mathcal O}$-module, we denote ${\mathcal M}_{xy}= {\mathcal M}(U_{xy})$. \end{notacion} \begin{definicion} \label{9} A finite ringed space $(X,{\mathcal O})$ is called an \textit{affine (schematic) finite space} if it satisfies the following conditions: \begin{enumerate} \item ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat. \item ${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_y={\mathcal O}_{xy}$, for any $x,y\in X$. \item ${\mathcal O}_{xy}\to \prod_{z\in U_{xy}} {\mathcal O}_z$ is faithfully flat, for any $x,y\in X$.\end{enumerate} \end{definicion} \begin{proposicion} If $(X,{\mathcal O})$ is an affine finite space, then it is a finite fr-space. \end{proposicion} \begin{proof}[Proof] By condition 3. of the definition above, ${\mathcal O}_{xx}={\mathcal O}_x \to \prod_{z\in U_x} {\mathcal O}_z$ is faithfully flat. Therefore, ${\mathcal O}_x \to {\mathcal O}_z$ is flat, for any $z\geq x$. \end{proof} \begin{proposicion}[(\cite{KS} 4.12)\,] \label{carafin} Let $(X,{\mathcal O})$ be a ringed finite space. $X$ is affine iff \begin{enumerate} \item The morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat. \item The morphism ${\mathcal O}_{y}\otimes_{{\mathcal O}(X)}{\mathcal O}_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}_z$ is faithfully flat, for any $ y,y'\in X$.\end{enumerate}\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ It follows immediately from the definition. $\Leftarrow)$ First, note that for any $x\leq u,u'$, the morphism ${\mathcal O}_{u}\otimes_{{\mathcal O}(X)} {\mathcal O}_{u'}\to {\mathcal O}_{u}\otimes_{{\mathcal O}_x}{\mathcal O}_{u'}$ is an epimorphism and the composite morphism $${\mathcal O}_{u}\otimes_{{\mathcal O}(X)} {\mathcal O}_{u'}\to {\mathcal O}_{u}\otimes_{{\mathcal O}_x}{\mathcal O}_{u'}\to \prod_{z\in U_{uu'}} {\mathcal O}_z$$ is injective, because it is faithfully flat. Therefore, ${\mathcal O}_{u}\otimes_{{\mathcal O}(X)} {\mathcal O}_{u'}={\mathcal O}_{u}\otimes_{{\mathcal O}_x}{\mathcal O}_{u'}$. We only have to prove that $$ {\mathcal O}_y \otimes_{{\mathcal O}(X)} {\mathcal O}_{y'}= {\mathcal O}_{yy'}.$$ Let us prove it by contradiction.
Let $y,y'\in X$ be maximal such that the morphism $ {\mathcal O}_y \otimes_{{\mathcal O}(X)} {\mathcal O}_{y'}\to {\mathcal O}_{yy'}$ is not an isomorphism. First, if $y \leq y'$, then $U_{yy'}=U_{y'}$ and the epimorphism $ {\mathcal O}_y \otimes_{{\mathcal O}(X)} {\mathcal O}_{y'} \to {\mathcal O}_{y'}$ is faithfully flat, by 2. Therefore, ${\mathcal O}_y \otimes_{{\mathcal O}(X)} {\mathcal O}_{y'}={\mathcal O}_{y'} ={\mathcal O}_{yy'}$. So, neither $y \leq y'$, nor $y' \leq y$. The morphism $B:= {\mathcal O}_{y}\otimes_{{\mathcal O}(X)}{\mathcal O}_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}_z=:C$ is faithfully flat. Thus, the sequence of morphisms $$(*)\qquad B\to C{\dosflechasa[]{}{}} C\otimes_BC$$ is exact. By the maximality of $y$ and $y'$, given $z,z'\in U_{yy'}$, ${\mathcal O}_z\otimes_{{\mathcal O}(X)}{\mathcal O}_{z'}={\mathcal O}_{z}\otimes_{{\mathcal O}_y}{\mathcal O}_{z'}={\mathcal O}_{zz'}$. The natural morphism ${\mathcal O}_z\otimes_{{\mathcal O}(X)}{\mathcal O}_{z'}\to {\mathcal O}_z\otimes_B{\mathcal O}_{z'}$ is surjective. The composite morphism ${\mathcal O}_z\otimes_{{\mathcal O}(X)}{\mathcal O}_{z'}\to {\mathcal O}_z\otimes_B{\mathcal O}_{z'}\to {\mathcal O}_{zz'}$ is an isomorphism, then ${\mathcal O}_z\otimes_B{\mathcal O}_{z'}={\mathcal O}_{zz'}$. Therefore, $ C\otimes_BC=\prod_{z,z'\in U_{yy'}}{\mathcal O}_{zz'}$. Then, from the diagram $(*)$, $B={\mathcal O}_{yy'}$ and we arrive at a contradiction. \end{proof} \begin{corolario} \label{minimal affine} A finite ringed space $(U_x,{\mathcal O})$ is affine iff the morphism ${\mathcal O}_{y}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}_z$ is faithfully flat, for any $ y,y'\geq x$. \end{corolario} \begin{proof}[Proof] It follows easily from the proposition above. \end{proof} \begin{proposicion} \label{afinfp} Let $X$ be an affine finite space and $U\subset X$ an open set. Then, $U$ is affine iff ${\mathcal O}(U)\to \prod_{q\in U} {\mathcal O}_q$ is a faithfully flat morphism.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ ${\mathcal O}(U)\to \prod_{q\in U} {\mathcal O}_q$ is a faithfully flat morphism by definition of affine finite space. $\Leftarrow)$ We have to check that $U$ satisfies the conditions 2. and 3. of Definition \ref{9}. Condition 3. is clear, because $X$ is affine. Now, let us check 2.: for each $x,y \in U$, the morphism ${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_{y}\to {\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_y$ is surjective and the composite morphism $${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_{y}\to {\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_{y}\to {\mathcal O}_{xy}$$ is an isomorphism, thus ${\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_{y}\simeq {\mathcal O}_{xy}$.\end{proof} \begin{corolario} \label{Uxy} If $X$ is an affine finite space, then $U_{xy}$ is affine for every $x,y\in X$.\end{corolario} \begin{proof}[Proof] It follows from condition 3. of Definition \ref{9} and the proposition above.\end{proof} \begin{proposicion} \label{11} Let $X$ be an affine finite space and $\mathcal M$ a quasi-coherent ${\mathcal O}_X$-module. The natural morphism $${\mathcal M}(V) \otimes_{{\mathcal O}(X)}{\mathcal O}(U)\to {\mathcal M}(U\cap V)$$ is an isomorphism, for any open set $V\subseteq X$ and any affine open set $U\subseteq X$. \end{proposicion} \begin{proof}[Proof] 1. The morphism ${\mathcal O}(X)\to \prod_{x\in X}{\mathcal O}_x=:B$ is faithfully flat.
The sequence of morphisms $${\mathcal O}(X)\to B= \prod_{x\in X}{\mathcal O}_x{\dosflechasa[]{}{}} \ B\otimes_{{\mathcal O}(X)} B=\prod_{x,y\in X} {\mathcal O}_{xy}$$ is a split sequence of morphisms under a faithfully flat base change (${\mathcal O}(X)\to B$), hence this sequence of morphisms is universally exact, i.e., if we tensor the sequence of morphisms by $M\otimes_C-$ (where $C$ is a commutative ring, $M$ is a $C$-module and ${\mathcal O}(X)$ a $C$-algebra) then we obtain an exact sequence of morphisms. In particular, ${\mathcal O}_{xy}\hookrightarrow \prod_{z\in U_{xy}} {\mathcal O}_z$ is universally injective and the sequence of morphisms $$(*)\qquad {\mathcal O}(X)\to \prod_{x\in X}{\mathcal O}_x{\dosflechasa[]{}{}} \prod_{x,y\in X, z\in U_{xy}} {\mathcal O}_z$$ is universally exact. 2. Let $W\subset U_x$ be an affine open set. Consider the universally exact sequence of morphisms $${\mathcal O}(W)\to \prod_{z\in W} {\mathcal O}_z{\dosflechasa[]{}{}} \prod_{z,z'\in W, z''\in U_{zz'}} {\mathcal O}_{z''}.$$ Tensoring by ${\mathcal M}_x\otimes_{{\mathcal O}_x}-$, we obtain the exact sequence of morphisms $${\mathcal M}_x\otimes_{{\mathcal O}_x} {\mathcal O}(W)\to \prod_{z\in W} {\mathcal M}_z{\dosflechasa[]{}{}} \prod_{z,z'\in W, z''\in U_{zz'}} {\mathcal M}_{z''},$$ which shows that ${\mathcal M}_x\otimes_{{\mathcal O}_x} {\mathcal O}(W)={\mathcal M}(W) $. Therefore (using Corollary \ref{Uxy}), $$\mathcal M_x\otimes_{{\mathcal O}(X)}{\mathcal O}_y={\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_y= {\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_{xy} ={\mathcal M}_{xy}.$$ 3. Consider the exact sequence of morphisms $${\mathcal M}(V)\to \prod_{y\in V} {\mathcal M}_y{\dosflechasa[]{}{}} \prod_{y,y'\in V, z\in U_{yy'}} {\mathcal M}_z.$$ Tensoring by $\otimes_{{\mathcal O}(X)}{\mathcal O}_x,$ we obtain the exact sequence of morphisms $${\mathcal M}(V)\otimes_{{\mathcal O}(X)}{\mathcal O}_x\to \prod_{y\in V} {\mathcal M}_{xy}{\dosflechasa[]{}{}} \prod_{y,y'\in V, z\in U_{yy'}} {\mathcal M}_{xz},$$ which shows that $\mathcal M(V)\otimes_{{\mathcal O}(X)} {\mathcal O}_x={\mathcal M}(V\cap U_x)$. 4. Consider the universally exact sequence $(*)$, where $X=U$. Tensoring by ${\mathcal M}(V)\otimes_{{\mathcal O}(X)}$, we obtain the exact sequence of morphisms $${\mathcal M}(V)\otimes_{{\mathcal O}(X)}{\mathcal O}(U)\to \prod_{x\in U}{\mathcal M}(V\cap U_x){\dosflechasa[]{}{}} \prod_{x,y\in U, z\in U_{xy}} {\mathcal M}(V\cap U_z),$$ which shows that $\mathcal M(V)\otimes_{{\mathcal O}(X)} {\mathcal O}(U)={\mathcal M}(V\cap U)$. \end{proof} \begin{teorema}[(\cite{KS} 2.5,\,4.12)\,] Let $(X,{\mathcal O})$ be an affine finite space. Consider the canonical morphism $$\pi\colon (X,{\mathcal O})\to (*,{\mathcal O}(X)), \,\,\pi(x)=*,\text{ for any }x\in X.$$ The functors $$\xymatrix @R=8pt { \text{\bf Qc-Mod}_X \ar[r]^-{\pi_*} & { \text{\bf Mod}}_{{\mathcal O}(X)}, & \mathcal M \ar@{|->}[r] & \pi_*{\mathcal M}={\mathcal M}(X) \\ {\text{\bf Mod}}_{{\mathcal O}(X)} \ar[r]^-{\pi^*} & \text{\bf Qc-Mod}_X, & M \ar@{|->}[r] & \pi^* M=\tilde M} $$ establish an equivalence between the category of quasi-coherent ${\mathcal O}_X$-modules and the category of ${\mathcal O}(X)$-modules.
\end{teorema} \begin{proof}[Proof] The natural morphism $\pi^*\pi_*{\mathcal M}\to{\mathcal M}$ is an isomorphism because this morphism on stalks at $x$ is the morphism ${\mathcal M}(X)\otimes_{{\mathcal O}(X)}{\mathcal O}_x\to {\mathcal M}_x$, which is an isomorphism by Proposition \ref{11}. The natural morphism $M\to \pi_*\pi^*M=(\pi^*M)(X)$ is an isomorphism: Tensoring the exact sequence of morphisms $(*)$, in the proof of Proposition \ref{11}, by $M\otimes_{{\mathcal O}(X)} -$ we obtain the exact sequence of morphisms $$M\otimes_{{\mathcal O}(X)}{\mathcal O}(X)\to \prod_{x\in X}(\pi^*M)_x{\dosflechasa[]{}{}} \prod_{x,y\in X, z\in U_{xy}} (\pi^*M)_z,$$ which shows that $M=M\otimes_{{\mathcal O}(X)}{\mathcal O}(X)=(\pi^*M)(X)$. \end{proof} \begin{lemma} \label{prodaf1} Let $A\to B$ and $A'\to B'$ be flat (resp. faithfully flat) morphisms of commutative $C$-algebras. Then, $A\otimes_CA'\to B\otimes_C B'$ is a flat morphism (resp. faithfully flat).\end{lemma} \begin{proof}[Proof] It follows from the equality $M\otimes_{A\otimes_CA'} (B\otimes_CB')=(M\otimes_AB)\otimes_{A'}B'$.\end{proof} \begin{proposicion} \label{inter} The intersection of two affine open sets of an affine finite space is affine.\end{proposicion} \begin{proof}[Proof] Let $U$ and $U'$ be two affine open sets of the affine finite space $X$. Consider the faithfully flat morphisms ${\mathcal O}(U)\to \prod_{x\in U}{\mathcal O}_x$, ${\mathcal O}(U')\to \prod_{x'\in U'}{\mathcal O}_{x'}$. The composition of the faithfully flat morphisms (recall Lemma \ref{prodaf1}) $${\mathcal O}(U'\cap U)\overset{\text{\ref{11}}}={\mathcal O}(U)\otimes_{{\mathcal O}(X)}{\mathcal O}(U')\to \prod_{(x,x')\in U\times U'}{\mathcal O}_{xx'}\to \prod_{(x,x')\in U\times U', z\in U_{xx'}} {\mathcal O}_z$$ is faithfully flat, hence ${\mathcal O}(U'\cap U)\to \prod_{z\in U\cap U'}{\mathcal O}_z$ is faithfully flat. By Proposition \ref{afinfp}, $U\cap U'$ is affine. \end{proof} Let $R$ be a commutative ring with a unit. A \textit{finite $R$-ringed space} is a finite ringed space $(X,{\mathcal O})$ such that ${\mathcal O}$ is a sheaf of $R$-algebras; that is, for any $x \in X$, ${\mathcal O}_x$ is an $R$-algebra and for any $x \leq x'$, $r_{xx'} \colon {\mathcal O}_x \to {\mathcal O}_{x'}$ is a morphism of $R$-algebras. Let $X$ and $Y$ be two finite $R$-ringed spaces. The \textit{direct product} $X \times_R Y$ is the finite $R$-ringed space $(X \times Y, {\mathcal O}_{X \times Y})$, where $({\mathcal O}_{X \times Y})_{(x,y)}:= {\mathcal O}_x \otimes_{R} {\mathcal O}_y$, for each $(x,y) \in X \times Y$ and the morphisms of restriction are the obvious ones. \begin{proposicion}[(\cite{KG} 5.27)\,] \label{ProdAf} Let $X$ and $Y$ be affine finite $R$-ringed spaces. Then, $X\times_R Y$ is an affine finite space and ${\mathcal O}(X\times_R Y)={\mathcal O}(X)\otimes_R {\mathcal O}(Y)$.\end{proposicion} \begin{proof}[Proof] Consider the universally exact sequence ${\mathcal O}(X)\to \proda{x\in X} {\mathcal O}_x{\dosflechasa[]{}{}} \proda{x,x';z\in U_{xx'}} {\mathcal O}_z$. Tensoring by $\otimes_{R}{\mathcal O}_y$ we obtain the exact sequence $${\mathcal O}(X)\otimes_{R}{\mathcal O}_y\to \proda{x\in X} {\mathcal O}(U_x\times_R U_y){\dosflechasa[]{}{}} \proda{x,x';z\in U_{xx'}} {\mathcal O}(U_z\times_R U_y).$$ Hence, ${\mathcal O}(X)\otimes_{R}{\mathcal O}_y={\mathcal O}(X\times_R U_y)$. Consider the universally exact sequence ${\mathcal O}(Y)\to \proda{y\in Y} {\mathcal O}_y{\dosflechasa[]{}{}} \proda{y,y';z\in U_{yy'}} {\mathcal O}_z$.
Tensoring by ${\mathcal O}(X)\otimes_{R}$ we obtain the exact sequence $${\mathcal O}(X)\otimes_{R}{\mathcal O}(Y)\to \proda{y\in Y} {\mathcal O}(X\times_R U_y){\dosflechasa[]{}{}} \proda{y,y';z\in U_{yy'}} {\mathcal O}(X\times_R U_z).$$ Hence, ${\mathcal O}(X)\otimes_{R}{\mathcal O}(Y)={\mathcal O}(X\times_R Y)$. In particular, ${\mathcal O}_{xx'}\otimes_R{\mathcal O}_{yy'}={\mathcal O}_{(x,y)(x',y')}$, for any $x,x'\in X$ and $y,y'\in Y$. By Lemma \ref{prodaf1}, the morphism $${\mathcal O}(X\times_R Y)={\mathcal O}(X)\otimes_R {\mathcal O}(Y) \to \proda{x\in X}{\mathcal O}_x\otimes_R \proda{y\in Y} {\mathcal O}_y = \proda{(x,y)\in X\times_R Y} {\mathcal O}_{(x,y)}$$ is faithfully flat. By Lemma \ref{prodaf1}, the morphism $${\mathcal O}_{(x,y)(x',y')}={\mathcal O}_{xx'}\otimes_R {\mathcal O}_{yy'} \to \proda{z\in U_{xx'}}{\mathcal O}_z\otimes_R \proda{z'\in U_{yy'}} {\mathcal O}_{z'} = \proda{(z,z')\in U_{xx'}\times_R U_{yy'}} {\mathcal O}_{(z,z')}= \proda{(z,z')\in U_{(x,y)(x',y')}} {\mathcal O}_{(z,z')}$$ is faithfully flat. Therefore, $X\times_R Y$ is affine. \end{proof} \subsection{Some commutative algebra results.} If $X$ is an affine finite space then, for each $x \leq y \in X$, the morphism ${\mathcal O}_x\to {\mathcal O}_y$ is flat and ${\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_y={\mathcal O}_{yy}={\mathcal O}_y$. In this subsection we study this kind of morphism. In this paper, we use well-known properties of flat morphisms and faithfully flat morphisms, which can be found in \cite{Matsumura}. \begin{notacion}\label{Notation} Given ${\mathfrak p}\in\Spec R$ and an $R$-module $M$, we denote $M_{\mathfrak p}:=(R\backslash {\mathfrak p})^{-1}\cdot M$.\end{notacion} \begin{proposicion} \label{last} Let $f\colon A\to B$ be a morphism of rings and $f^*\colon \Spec B\to \Spec A$ the induced morphism. The following conditions are equivalent: \begin{enumerate} \item $A\to B$ is a flat morphism and $B\otimes_A B=B$. \item $A_{f^*({\mathfrak p})}=B_{f^*({\mathfrak p})}$, for all ${\mathfrak p}\in\Spec B$. \item The morphism $f^*\colon \Spec B\to \Spec A$ is injective and $A_{f^*({\mathfrak p})}=B_{\mathfrak p}$, for any ${\mathfrak p}\in\Spec B$. \end{enumerate} \end{proposicion} \begin{proof}[Proof] $1. \Rightarrow \,2.$ The morphism $A_{f^*({\mathfrak p})}\to B_{f^*({\mathfrak p})}$ is faithfully flat, for any ${\mathfrak p}$. Besides, $B_{f^*({\mathfrak p})}=A_{f^*({\mathfrak p})}\otimes_A B= A_{f^*({\mathfrak p})}\otimes_A(B\otimes_A B)=B_{f^*({\mathfrak p})}\otimes_{A_{f^*({\mathfrak p})}}B_{f^*({\mathfrak p})}$, then $A_{f^*({\mathfrak p})}=B_{f^*({\mathfrak p})}$. $2. \Rightarrow \,3.$ If $A_{f^*({\mathfrak p})}=B_{f^*({\mathfrak p})}$ then $B_{f^*({\mathfrak p})}=B_{\mathfrak p}$ and $f^{*-1}({f^*({\mathfrak p})})=\{{\mathfrak p}\}$, then $f^*$ is injective. $3. \Rightarrow \,1.$ The morphism $A\to B$ is flat: Given an injective morphism $N\hookrightarrow M$ of $A$-modules, $N_{f^*({\mathfrak p})}\to M_{f^*({\mathfrak p})}$ is injective, for any ${\mathfrak p}$. Then, $N\otimes_A B_{\mathfrak p}\to M\otimes_A B_{\mathfrak p}$ is injective, for any ${\mathfrak p}$, and $N\otimes_A B\to M\otimes_A B$ is injective. $\Spec B_{{\mathfrak p}}\subseteq \Spec B_{f^*({\mathfrak p})}\subseteq \Spec A_{f^*({\mathfrak p})}$ and $\Spec B_{{\mathfrak p}}=\Spec A_{f^*({\mathfrak p})}$. Hence, $\Spec B_{{\mathfrak p}}=\Spec B_{f^*({\mathfrak p})}$ and $B_{f^*({\mathfrak p})}=B_{{\mathfrak p}}$.
Then, $(B\otimes_A B)_{\mathfrak p}=(B\otimes_A B)\otimes_B B_{\mathfrak p} =(B\otimes_AB)\otimes_B B_{f^*({\mathfrak p})}= (B\otimes_AB)\otimes_A A_{f^*({\mathfrak p})}= B_{f^*({\mathfrak p})} \otimes_{A_{f^*({\mathfrak p})}} B_{f^*({\mathfrak p})}=B_{\mathfrak p}$, for any ${\mathfrak p}\in\Spec B$. Therefore, $B\otimes_A B=B$. \end{proof} \begin{notacion} \label{N3.9} Given a morphism $f\colon A\to B$ and an ideal $I\subseteq B$ denote $A\cap I:=f^{-1}(I)$. Denote $(I)_0=\{{\mathfrak p}\in\Spec B\colon I\subseteq {\mathfrak p}\}$. \end{notacion} \begin{proposicion} \label{sudor} Let $A\to B$ be a flat morphism of rings such that $B\otimes_A B=B$. Then, \begin{enumerate} \item $(I\cap A)\cdot B=I$, for any ideal $I\subseteq B$. \item $\Spec B$ is a topological subspace of $\Spec A$, with their Zariski topologies. \item Let ${\mathfrak q}\in \Spec A$. \begin{enumerate} \item If ${\mathfrak q}\notin \Spec B$, then ${\mathfrak q}\cdot B=B$. \item If ${\mathfrak q}\in\Spec B$, then ${\mathfrak q}\cdot B\subset B$ is a prime ideal and $({\mathfrak q}\cdot B) \cap A={\mathfrak q}$.\end{enumerate} \item $\Spec B=\capa{\Spec B \subseteq\, \text{open set $U\subseteq \Spec A$}} U$.\end{enumerate} \end{proposicion} \begin{proof}[Proof] 1. Let ${\mathfrak p}\in \Spec B$, ${\mathfrak q}:=A\cap {\mathfrak p}$ and $M$ a $B$-module. By Proposition \ref{last}, $M_{\mathfrak p}=M\otimes_BB_{\mathfrak p}=M\otimes_BB_{\mathfrak q}=M_{\mathfrak q}$. Then, $$[(I\cap A)\cdot B]_{\mathfrak p}=[(I\cap A)\cdot B]_{\mathfrak q}=(I_{\mathfrak q}\cap A_{\mathfrak q})\cdot B_{\mathfrak q}= I_{\mathfrak q}=I_{\mathfrak p}.$$ Hence, $(I\cap A)\cdot B=I$. 2. By Proposition \ref{last}, we can think of $\Spec B$ as a subset of $\Spec A$. Given an ideal $I\subseteq B$, observe that $(I)_0=((I\cap A)\cdot B)_0=(I\cap A)_0\cap \Spec B$. 3. (a) Suppose that there exists a prime ideal ${\mathfrak p}\subset B$ that contains ${\mathfrak q}\cdot B$. Denote ${\mathfrak p}'={\mathfrak p}\cap A$. Then, ${\mathfrak q}\in \Spec A_{{\mathfrak p}'}=\Spec B_{\mathfrak p}\subseteq \Spec B$, which is contradictory. (b) Let ${\mathfrak p}\in \Spec B$ be a prime ideal such that ${\mathfrak p}\cap A={\mathfrak q}$. Then, ${\mathfrak p}=({\mathfrak p}\cap A)\cdot B={\mathfrak q}\cdot B$. 4. If ${\mathfrak q}\in \Spec A\backslash \Spec B$, then $({\mathfrak q})_0\cap \Spec B= ({\mathfrak q}\cdot B)_0=(B)_0= \emptyset$. Then, $\Spec B$ is equal to the intersection of the open sets $U\subseteq \Spec A$ such that $\Spec B\subseteq U$. \end{proof} \section{Schematic finite spaces} \subsection{Definition, examples and first characterizations} \begin{definicion} \label{defesq} We say that a finite ringed space $(X,{\mathcal O})$ is a \textit{schematic finite space} if it is locally affine; i.e. if there exists an open covering $\{U_i\}_{i\in I}$ on $X$, such that $U_i$ is an affine finite space, for each $i \in I$. \end{definicion} \begin{proposicion} Let $X$ be a finite ringed space. $X$ is a schematic finite space iff the open subsets $U_x$ are affine finite spaces for all $x \in X$. \end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ Let $\{U_i\}_{i\in I}$ be an affine open covering of $X$. For each $x \in X$, $U_x$ is an open subset of one of the affine finite spaces $U_i$. So, it follows from Corollary \ref{Uxy} that $U_x$ is also affine. $\Leftarrow)$ It is clear, since $\{U_x\}_{x \in X}$ is an open covering of $X$. \end{proof} \begin{observaciones} \item 1. All schematic finite spaces are finite fr-spaces. \item 2.
Affine finite spaces are schematic. \item 3. If $X=U_x$, then $X$ is schematic if and only if it is affine. \end{observaciones} \medskip The finite ringed space associated with a minimal affine open covering ${\mathcal U}$ of a quasi-compact and quasi-separated scheme is a schematic finite space, by Paragraph \ref{Parr}. \begin{ejemplos} \label{Eejemplos} Let us give some examples of schematic finite spaces (below we indicate the ringed space constructed in Paragraph \ref{Pa2}): $$\xymatrix@C=6pt @R=8pt{ k[x] \ar[dr] & & & k[x,y] \ar[r]\ar[rdd] & k[x,y,1/y] \ar[rd] & \\ & k[x,1/x] & & k[1/x,y/x] \ar[r] \ar[ru] & k[1/x,y/x,x/y] \ar[r] & k[x,y,1/x,1/y] \\ k[1/x] \ar[ru] & & & k[1/y,x/y] \ar[r] \ar[ru] & k[y,1/y,x/y] \ar[ru] & \\\text{1. Projective} & \!\! \!\!\!\!\!\!\!\!\text{line} \quad& & & \text{2. Projective plane} & } $$ $$\xymatrix @C=-4pt @R=8pt{ k[x] \ar[dr] & & & k[x] \ar[dr] & \\ & k[x,1/x] & && k(x)\\ k[x] \ar[ru] &&& k[x] \ar[ru]&\\ \text{3. Affine line} & \text{with a double point} & & \quad\text{ 4. Two lines} & \text{glued at the generic point}}$$ It can be proved that the first three examples are finite models of the schemes we indicate, but the fourth is not the model of any scheme. Also note that none of these examples are affine finite spaces. \end{ejemplos} If $X$ and $Y$ are schematic finite $R$-ringed spaces, then $X\times_R Y$ is a schematic finite space, by Proposition \ref{ProdAf}. \begin{proposicion}[(\cite{KS} 4.11)\,] \label{first characterization schematic} Let $X$ be a finite ringed space. $X$ is a schematic finite space if and only if it satisfies the following two conditions: \begin{enumerate} \item The natural morphism ${\mathcal O}_{y}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'} \to {\mathcal O}_{yy'}$ is an isomorphism for any $y,y' \geq x$. \item The natural morphism ${\mathcal O}_{yy'}\to \prod_{z\in U_{yy'}}{\mathcal O}_z$ is faithfully flat, for any $y,y' \in X$ for which there is an element $x \in X$ such that $y \geq x$ and $y' \geq x$. \end{enumerate} \end{proposicion} \begin{proof}[Proof] It follows easily from the definition of affine finite space that the open subsets $\{U_x\}_{x \in X}$ are affine finite spaces iff the conditions 1. and 2. above are satisfied. \end{proof} \begin{proposicion}[(\cite{KS} 4.11)\,] \label{4pg20} Let $(X,{\mathcal O}_X)$ be a ringed finite space. $X$ is a schematic finite space iff the morphism $${\mathcal O}_{y}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}_z$$ is faithfully flat, for any $x\leq y,y'\in X$.\end{proposicion} \begin{proof}[Proof] It follows directly from Corollary \ref{minimal affine}. \end{proof} \subsection{More characterizations of schematic finite spaces} In this subsection, we see that schematic finite spaces can be characterized by the good behavior of their quasi-coherent modules. \begin{proposicion} \label{C4.9} Let $X$ be a schematic finite space, $U\overset{i}\subseteq X$ an open subset and ${\mathcal N}$ a quasi-coherent ${\mathcal O}_U$-module. Then, $i_* {\mathcal N}$ is a quasi-coherent ${\mathcal O}_X$-module. \end{proposicion} \begin{proof}[Proof] Let $x \leq y \in X$. We have to see that the morphism $(i_* {\mathcal N})_x \otimes_{{\mathcal O}_x} {\mathcal O}_y \to (i_* {\mathcal N})_y$ is an isomorphism. This morphism is equal to the morphism $$ {\mathcal N}(U \cap U_x) \otimes_{{\mathcal O}(U_x)} {\mathcal O}(U_y) \to {\mathcal N}(U \cap U_y ),$$ which is an isomorphism by Proposition \ref{11}.
\end{proof} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm}\begin{teorema} \label{T1} Let $X$ be a finite ringed space. $X$ is a schematic finite space if and only if it satisfies the following two conditions: \begin{enumerate} \item $\Ker f$ is quasi-coherent, for any morphism $f \colon {\mathcal M} \to {\mathcal N}$ of quasi-coherent ${\mathcal O}_{X}$-modules. \item For any open subset $i\colon U_x\hookrightarrow X$ and any quasi-coherent ${\mathcal O}_{U_x}$-module ${\mathcal M}$, the ${\mathcal O}_X$-module $i_*{\mathcal M}$ is quasi-coherent. \end{enumerate} \end{teorema}\end{minipage}} \begin{proof}[Proof] $\Rightarrow)$ We know that schematic finite spaces are finite fr-spaces. By Proposition \ref{fr}, $\Ker f$ is quasi-coherent. The second condition follows from the proposition above. $\Leftarrow)$ First, let us prove that $X$ is an fr-space. Let $i\colon U_x\hookrightarrow X$ be an open subset and ${\mathcal M}\to {\mathcal N}$ a morphism of quasi-coherent ${\mathcal O}_{U_x}$-modules. $\Ker[{\mathcal M}\to {\mathcal N}]=\Ker[i_*{\mathcal M}\to i_*{\mathcal N}]_{|U_x}$, then it is quasi-coherent. By Proposition \ref{fr}, $X$ is an fr-space. If $X$ is an fr-space and satisfies condition 2., then it is schematic: Consider $x \leq x'$, let $j\colon U_{x'}\hookrightarrow U_x$ be the inclusion morphism and $\mathcal N$ a quasi-coherent ${\mathcal O}_{U_{x'}}$-module. Since condition 2. is satisfied, the ${\mathcal O}_{U_x}$-module $j_*{\mathcal N}=i^*((i\circ j)_*{\mathcal N})$ is quasi-coherent. It follows from this result that we can suppose $X=U_x$ (because being a finite fr-space and being schematic are local conditions). Now, by Corollary \ref{minimal affine}, we only have to prove that, for each $y,y'\geq x$, the morphism $${\mathcal O}_y \otimes_{{\mathcal O}_x}{\mathcal O}_{y'} \to \prod_{z\in U_{yy'}} {\mathcal O}_{z}$$ is faithfully flat. Consider the open subset $i\colon U_y\hookrightarrow X=U_x$. Since $i_*{\mathcal O}_{U_y}$ is quasi-coherent, $${\mathcal O}_{yy'}=(i_*{\mathcal O}_{U_y})(U_{y'})=(i_*{\mathcal O}_{U_y})(U_{x})\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}={\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}.$$ In particular, ${\mathcal O}_y={\mathcal O}_{yy}={\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y}$. The morphism ${\mathcal O}_y\to {\mathcal O}_z$ is flat, for any $z\geq y,y'$, then the morphism ${\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}\to {\mathcal O}_z\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}={\mathcal O}_{zy'}={\mathcal O}_z$ is flat. If the morphism ${\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'} \to \prod_{z\in U_{yy'}} {\mathcal O}_{z}$ is not faithfully flat, there exists an ideal $I\underset\neq\subset {\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}$ such that $I\cdot \prod_{z\in U_{yy'}} {\mathcal O}_{z}=\prod_{z\in U_{yy'}} {\mathcal O}_{z}$. Observe that the morphism ${\mathcal O}_y\to {\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'} $ is flat since ${\mathcal O}_x\to {\mathcal O}_{y'}$ is flat. Besides, $$({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})\otimes_{{\mathcal O}_y}({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})={\mathcal O}_{y}\otimes_{{\mathcal O}_x}({\mathcal O}_{y'}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})={\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}. $$ By Proposition \ref{sudor}, there exists an ideal $J\subset {\mathcal O}_y$ such that $J\cdot ({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})=I$.
Let $\mathcal M$ be the quasi-coherent ${\mathcal O}_{U_y}$-module associated with the ${\mathcal O}_y$-module ${\mathcal O}_y/J$. Then, $i_*\mathcal M$ is the quasi-coherent ${\mathcal O}_X$-module associated with the ${\mathcal O}_x$-module ${\mathcal O}_y/J$ and $${\mathcal M}(U_{yy'})=(i_*{\mathcal M})(U_{y'}) =({\mathcal O}_y/J)\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}=({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})/J\cdot ({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})=({\mathcal O}_y\otimes_{{\mathcal O}_x}{\mathcal O}_{y'})/I\neq 0.$$ However, ${\mathcal M}_{|U_{yy'}}=0$, since $\mathcal M_z=({\mathcal O}_y/J)\otimes_{{\mathcal O}_y}{\mathcal O}_z={\mathcal O}_z/J\cdot {\mathcal O}_z={\mathcal O}_z/I\cdot {\mathcal O}_z=0$, for any $z\in U_{yy'}$. So we have a contradiction; therefore, the morphism ${\mathcal O}_{y}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'} \to \prod_{z\in U_{yy'}} {\mathcal O}_{z}$ is faithfully flat. \end{proof} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{Cdelta} Let $(X,{\mathcal O})$ be a finite ringed space. Let $\delta\colon X\to X\times X$, $\delta(x)=(x,x)$ be the diagonal morphism. Then, $X$ is schematic iff it satisfies these two conditions: \begin{enumerate} \item $\Ker f$ is quasi-coherent, for any morphism $f \colon {\mathcal M} \to {\mathcal N}$ of quasi-coherent ${\mathcal O}_{X}$-modules. \item $\delta_{*}{\mathcal N}$ is a quasi-coherent ${\mathcal O}_{X\times X}$-module for any quasi-coherent ${\mathcal O}_X$-module ${\mathcal N}$. \end{enumerate} \end{teorema}\end{minipage}} \begin{proof}[Proof] $\Rightarrow)$ For any $(x,y)\leq (x',y')$, we have $$\aligned (\delta_*{\mathcal N})_{(x,y)}\otimes_{{\mathcal O}_{(x,y)}}{\mathcal O}_{(x',y')} & = {\mathcal N}_{xy} \otimes_{{\mathcal O}_{x} \otimes {\mathcal O}_y}({\mathcal O}_{x'} \otimes {\mathcal O}_{y'}) = {\mathcal N}_{xy}\otimes_{{\mathcal O}_x}{\mathcal O}_{x'}\otimes_{{\mathcal O}_{y}}{\mathcal O}_{y'} \overset{\text{\ref{11}}}={\mathcal N}_{x'y}\otimes_{{\mathcal O}_{y}}{\mathcal O}_{y'} \overset{\text{\ref{11}}}={\mathcal N}_{x'y'}\\ & =(\delta_*{\mathcal N})_{(x',y')}.\endaligned$$ $\Leftarrow)$ First, note that for any $x \in X$ and any $x'\leq x''$, $${\mathcal O}_{xx'}\otimes_{{\mathcal O}_{x'}}{\mathcal O}_{x''}= (\delta_* {\mathcal O})_{(x,x')} \otimes_{{\mathcal O}_x} {\mathcal O}_x \otimes_{{\mathcal O}_{x'}} {\mathcal O}_{x''}= (\delta_* {\mathcal O})_{(x,x')} \otimes_{{\mathcal O}_{(x,x')}} {\mathcal O}_{(x,x'')}= (\delta_* {\mathcal O})_{(x,x'')}= {\mathcal O}_{xx''}.$$ In consequence, for any open subset $i\colon U_x\hookrightarrow X$ and any quasi-coherent ${\mathcal O}_{U_x}$-module ${\mathcal M}$ there exists a quasi-coherent ${\mathcal O}_X$-module ${\mathcal N}$ such that ${\mathcal N}_{|U_x}\simeq {\mathcal M}$: define ${\mathcal N}_{x'}:={\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_{xx'}$, for any $x'\in X$. ${\mathcal N}$ is quasi-coherent since for any $x'\leq x''$, $${\mathcal N}_{x'}\otimes_{{\mathcal O}_{x'}}{\mathcal O}_{x''}={\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_{xx'}\otimes_{{\mathcal O}_{x'}}{\mathcal O}_{x''}={\mathcal M}_x\otimes_{{\mathcal O}_x}{\mathcal O}_{xx''}={\mathcal N}_{x''}.$$ Besides, ${\mathcal N}_x={\mathcal M}_x$, so ${\mathcal N}_{|U_x}={\mathcal M}$. By Theorem \ref{T1}, we have to prove that $i_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_X$-module. 
That is, we have to prove that $(i_*{\mathcal M})_{y'}=(i_*{\mathcal M})_y \otimes_{{\mathcal O}_y}{\mathcal O}_{y'}$, for any $y\leq y'\in X$: $$\aligned (i_*{\mathcal M})_{y'}={\mathcal M}(U_{y'x})={\mathcal N}(U_{y'x}) & =(\delta_*{\mathcal N})(U_{y'}\times U_{x})= (\delta_*{\mathcal N})(U_y\times U_{x})\otimes_{{\mathcal O}_y\otimes {\mathcal O}_x}{\mathcal O}_{y'}\otimes {\mathcal O}_{x}\\ & ={\mathcal N}(U_{yx})\otimes_{{\mathcal O}_y}{\mathcal O}_{y'}={\mathcal M}(U_{yx}) \otimes_{{\mathcal O}_y}{\mathcal O}_{y'}=(i_*{\mathcal M})_y \otimes_{{\mathcal O}_y}{\mathcal O}_{y'}.\endaligned$$ \end{proof} \begin{observacion} The theorem above can be restated by saying that a finite ringed space $X$ is schematic iff it is a finite fr-space and for any quasi-coherent module ${\mathcal N}$, any $x \in X$ and any $x'\leq x'' \in X$, ${\mathcal N}_{xx'}\otimes_{{\mathcal O}_{x'}}{\mathcal O}_{x''}={\mathcal N}_{xx''}$. \end{observacion} \begin{proposicion} \label{K2?} Let $X$ be a finite fr-space and ${\mathcal N}$ a quasi-coherent ${\mathcal O}_X$-module. Then, ${\mathcal N}_{pq}\otimes_{{\mathcal O}_p}{\mathcal O}_{p'}={\mathcal N}_{p'q}$, for any $p\leq p' \in X$ and for any $q \in X$ iff ${\mathcal N}_{p'}\otimes_{{\mathcal O}_p}{\mathcal O}_{p''}={\mathcal N}_{p'p''}$, for any $p\leq p' ,p'' \in X$. \end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ ${\mathcal N}_{p'}\otimes_{{\mathcal O}_p}{\mathcal O}_{p''}={\mathcal N}_{pp'}\otimes_{{\mathcal O}_p}{\mathcal O}_{p''}={\mathcal N}_{p'p''}$. $\Leftarrow)$ Let $U\subseteq U_p$ be an open set and $p\leq p'$. Consider the exact sequence of morphisms $${\mathcal N}(U)\to \prod_{x\in U} {\mathcal N}_x{\dosflechasa[]{}{}} \prod_{z\geq x\in U} {\mathcal N}_z.$$ Tensoring by $\otimes_{{\mathcal O}_p} {\mathcal O}_{p'}$ (a flat morphism) we obtain the exact sequence of morphisms $${\mathcal N}(U)\otimes_{{\mathcal O}_p} {\mathcal O}_{p'}\to \prod_{x\in U} {\mathcal N}_{p'x}{\dosflechasa[]{}{}} \prod_{z\geq x\in U} {\mathcal N}_{p'z},$$ which shows that ${\mathcal N}(U)\otimes_{{\mathcal O}_p} {\mathcal O}_{p'}={\mathcal N}(U\cap U_{p'})$. In particular, $${\mathcal N}_{pq}\otimes_{{\mathcal O}_p}{\mathcal O}_{p'}={\mathcal N}_{p'q}.$$\end{proof} \begin{corolario} \label{ultima caracterizacion schem} Let $X$ be a finite ringed space. $X$ is schematic iff it is a finite fr-space and for any quasi-coherent ${\mathcal O}_X$-module ${\mathcal N}$ and any $x\leq x' ,x'' \in X$, ${\mathcal N}_{x'}\otimes_{{\mathcal O}_x}{\mathcal O}_{x''}={\mathcal N}_{x'x''}$. \end{corolario} \begin{proof}[Proof] It follows directly from Theorem \ref{Cdelta} and the proposition above. \end{proof} \section{Affine morphisms} \begin{definicion} Let $X$ and $Y$ be schematic finite spaces. A morphism $f\colon X\to Y$ of ringed spaces is said to be an affine morphism if $f_*{\mathcal O}_X$ is a quasi-coherent ${\mathcal O}_Y$-module and the preimage of any affine open subspace of $Y$ is an affine open subspace of $X$.\end{definicion} \begin{ejemplos} \label{Kolmogorov} 1. A schematic finite space $X$ is affine iff $(X,{\mathcal O})\to (*,{\mathcal O}(X))$ is an affine morphism. 2. If $X$ is an affine finite space and $U\subseteq X$ an affine open subset, the inclusion morphism $i\colon U\hookrightarrow X$ is an affine morphism: $i_*{\mathcal O}_U$ is quasi-coherent by Proposition \ref{11}, and for any affine open subset $V\subseteq X$, $i^{-1}(V)=V\cap U$ is affine by Proposition \ref{inter}. 3. Let $X$ be a schematic finite space. Given $x,x'\in X$, we shall say that $x\sim x'$ if $x\leq x'$ and $x'\leq x$. Let $\bar X:=X/\sim$ be the Kolmogorov quotient of $X$ and define ${\mathcal O}_{[x]}:={\mathcal O}_x$, for any $[x]\in\bar X$. Then, $\bar X$ is a schematic finite space, the quotient morphism $\pi\colon X\to \bar X$, $\pi(x):=[x]$, is affine and $\pi_*{\mathcal O}_X={\mathcal O}_{\bar X}$. \end{ejemplos}
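A trivial illustration of Example \ref{Kolmogorov}.3: if $X=\{a,b\}$ with $a\leq b$ and $b\leq a$, then $U_a=U_b=X$, so ${\mathcal O}_a={\mathcal O}_b={\mathcal O}(X)$; the Kolmogorov quotient is the punctual ringed space $\bar X=(*,{\mathcal O}(X))$ and the quotient morphism is just $X\to (*,{\mathcal O}(X))$.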
\begin{proposicion} Let $X$ and $Y$ be affine finite spaces and $f\colon X\to Y$ an affine morphism. Let $M$ be an ${\mathcal O}(X)$-module (therefore, an ${\mathcal O}(Y)$-module). Then, $$f_*\tilde M=\tilde M$$ (on the left-hand side, $\tilde M$ denotes the quasi-coherent ${\mathcal O}_X$-module associated with $M$; on the right-hand side, the quasi-coherent ${\mathcal O}_Y$-module associated with $M$). \end{proposicion} \begin{proof}[Proof] For any open set $U_y$, $$\aligned (f_*\tilde M)(U_y) & = \tilde M(f^{-1}(U_y))\overset{\text{\ref{11}}}= \tilde M(X)\otimes_{{\mathcal O}(X)} {\mathcal O}_X(f^{-1}(U_y)) = M\otimes_{{\mathcal O}(X)} {\mathcal O}_X(f^{-1}(U_y))\\ & = M\otimes_{{\mathcal O}(X)} (f_*{\mathcal O}_X)(U_y)\overset{\text{\ref{11}}} = M\otimes_{{\mathcal O}(X)} {{\mathcal O}(X)} \otimes_{{\mathcal O}(Y)} {\mathcal O}_Y(U_y)=M\otimes_{{\mathcal O}(Y)} {\mathcal O}_Y(U_y)\\ & =\tilde M(Y)\otimes_{{\mathcal O}(Y)} {\mathcal O}_Y(U_y) \overset{\text{\ref{11}}} =\tilde M(U_y).\endaligned$$ \end{proof} \begin{proposicion} \label{qctoqc} Let $f\colon X\to Y$ be an affine morphism and $\mathcal M$ a quasi-coherent ${\mathcal O}_X$-module. Then, $f_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module.\end{proposicion} \begin{proof}[Proof] Whether $f_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module is a local question on $Y$, so we can suppose that $Y$ is affine; then $X=f^{-1}(Y)$ is affine, and the previous proposition completes the proof. \end{proof} \begin{proposicion} \label{PObv} The composition of affine morphisms is affine.\end{proposicion} \begin{proof}[Proof] It is obvious.\end{proof} \begin{proposicion} \label{hmm} Let $X$ and $Y$ be schematic finite spaces. A morphism of ringed spaces $f\colon X\to Y$ is affine iff $f_*{\mathcal O}_X$ is quasi-coherent and $f^{-1}(U_y)$ is affine for any $y\in Y$.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ It is obvious. $\Leftarrow)$ Let us proceed by induction on $\# Y$. If $\# Y=1$, it is obvious. We can suppose that $Y$ is affine and we only have to prove that $X$ is affine. The morphism ${\mathcal O}(Y)\to \prod_{y\in Y}{\mathcal O}_{y}$ is faithfully flat, then the morphism $${\mathcal O}(X)\to \prod_{y\in Y} {\mathcal O}(X)\otimes_{{\mathcal O}(Y)} {\mathcal O}_{y}=\prod_{y\in Y} (f_*{\mathcal O}_X)(Y)\otimes_{{\mathcal O}(Y)} {\mathcal O}_{y} \overset{\text{\ref{11}}}=\prod_{y\in Y} {\mathcal O}(f^{-1}(U_y))$$ is faithfully flat. Since $f^{-1}(U_{y})$ is affine, the morphism ${\mathcal O}(f^{-1}(U_y))\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_x$ is faithfully flat. The composition of faithfully flat morphisms is faithfully flat, then ${\mathcal O}(X)\to \prod_{y\in Y, x\in f^{-1}(U_y)} {\mathcal O}_x$ is faithfully flat. Therefore, the morphism $${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$$ is faithfully flat. Let $x,x'\in X$. Given an open set $V\subseteq Y$ denote $\bar V:=f^{-1}(V)$. $\bar U_{f(x)f(x')}$ is an affine open subset of $\bar U_{f(x)}$ (by the induction hypothesis), then $\bar U_{f(x)f(x')}\cap U_x$ is affine, and it is included in $\bar U_{f(x')}$, hence $\bar U_{f(x)f(x')}\cap U_x\cap U_{x'}$ is affine. Then, $$U_{xx'}=\bar U_{f(x)f(x')}\cap U_x\cap U_{x'}$$ is affine and the morphism ${\mathcal O}_{xx'}\to \prod_{z\in U_{xx'}} {\mathcal O}_z$ is faithfully flat.
Since $f_*{\mathcal O}_X$ is quasi-coherent (and by Proposition \ref{11}), $$\aligned {\mathcal O}(\bar U_{f(x)}) & =f_*{\mathcal O}_X(U_{f(x)}) ={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x)}, \quad {\mathcal O}(\bar U_{f(x')})={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x')}. \\ {\mathcal O}(\bar U_{f(x)f(x')}) & =f_*{\mathcal O}_X(U_{f(x)f(x')}) ={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x)f(x')}={\mathcal O}(X)\otimes_{{\mathcal O}(Y)} {\mathcal O}_{f(x)}\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x')}\\ & =({\mathcal O}(X)\otimes_{{\mathcal O}(Y)} {\mathcal O}_{f(x)})\otimes_{{\mathcal O}(X)}({\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x')})={\mathcal O}(\bar U_{f(x)})\otimes_{{\mathcal O}(X)}{\mathcal O}(\bar U_{f(x')}).\endaligned \qquad (*)$$ Now it is easy to prove that $${\mathcal O}_{xx'}={\mathcal O}(\bar U_{f(x)f(x')}\cap U_x\cap U_{x'})\overset{\text{\ref{11}}}=({\mathcal O}(\bar U_{f(x)f(x')})\otimes_{{\mathcal O}(\bar U_{f(x)})} {\mathcal O}_x)\otimes_{{\mathcal O}(\bar U_{f(x')})} {\mathcal O}_{x'}\overset{(*)}={\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_{x'}.$$ Therefore, $X$ is affine. \end{proof} \begin{corolario} \label{afloc} Let $X$ and $Y$ be schematic finite spaces and let $f\colon X\to Y$ be a morphism of ringed spaces. Then, being $f$ affine is a local property on $Y$. \end{corolario} \begin{ejemplo} \label{E5.8} Let $(X,{\mathcal O})$ be a schematic finite space and ${\mathcal O}\to {\mathcal O}'$ a morphism of sheaves of rings, such that ${\mathcal O}'$ is a quasi-coherent ${\mathcal O}$-module. $(X,{\mathcal O}')$ is a schematic finite space: Given $x\leq y,y'$, the morphism ${\mathcal O}_y\otimes_{{\mathcal O}_x} {\mathcal O}_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}_z$ is faithfully flat, by Proposition \ref{4pg20}. Tensoring by $\otimes_{{\mathcal O}_x}{\mathcal O}'_{x}$ we obtain the faithfully flat morphism ${\mathcal O}'_y\otimes_{{\mathcal O}'_x} {\mathcal O}'_{y'}\to \prod_{z\in U_{yy'}} {\mathcal O}'_z$. Hence, $(X,{\mathcal O}')$ is a schematic finite space by Proposition \ref{4pg20}. The obvious morphism $\Id\colon (X,{\mathcal O}')\to (X,{\mathcal O})$ is affine. \end{ejemplo} \section{Schematic morphisms} \begin{definicion} Let $X,Y$ be schematic finite spaces. A morphism of ringed spaces $f\colon X\to Y$ is said to be a schematic morphism if for any $x\in X$ the morphism $f_x\colon U_x\to U_{f(x)}$, $f_x(x'):=f(x')$ is affine.\end{definicion} \begin{ejemplo} If $X$ is a schematic finite space, $X\to (*,{\mathcal O}(X))$ is a schematic morphism.\end{ejemplo} \begin{ejemplo} If $U$ is an open subspace of a schematic finite space $X$, then the inclusion morphism $U\hookrightarrow X$ is schematic.\end{ejemplo} \begin{observacion} Let $X$ and $Y$ be schematic finite spaces and $f\colon X\to Y$ be a morphism of ringed spaces. Then, being $f$ schematic is a local property on $Y$ and on $X$. \end{observacion} \begin{proposicion} The composition of schematic morphisms is schematic.\end{proposicion} \begin{proof}[Proof] It is a consequence of Proposition \ref{PObv}.\end{proof} \begin{proposicion} \label{afinesqu} Affine morphisms between schematic finite spaces are schematic morphisms. \end{proposicion} \begin{proof}[Proof] Let $f\colon X\to Y$ be an affine morphism. Then, $f^{-1}(U_{f(x)})$ is affine, $U_x\hookrightarrow f^{-1}(U_{f(x)})$ is an affine morphism and $f^{-1}(U_{f(x)})\to U_{f(x)}$ is affine, by Corollary \ref{afloc}.
The composition $U_x\hookrightarrow f^{-1}(U_{f(x)})\to U_{f(x)}$ is affine, by Proposition \ref{PObv}. Hence, $f$ is a schematic morphism. \end{proof} \begin{proposicion} \label{K40} Let $f\colon X\to Y$ be a schematic morphism and ${\mathcal M}$ a quasi-coherent ${\mathcal O}_X$-module. Then, $f_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module. \end{proposicion} \begin{proof}[Proof] We can suppose that $Y$ is affine. Consider an open subset $U_x\overset{i}\hookrightarrow X$ and denote ${\mathcal M}_{U_x}=i_*{{\mathcal M}}_{|U_x}$. Observe that $f_*{\mathcal M}_{U_x}=(f\circ i)_*{{\mathcal M}}_{|U_x}$ is a quasi-coherent ${\mathcal O}_Y$-module, because the composite morphism $f\circ i\colon U_x\to U_{f(x)}\hookrightarrow Y$ is affine and by Proposition \ref{qctoqc}. Let $\{U_{x_1},\ldots, U_{x_n}\}$ be an open covering of $X$ and $\{U_{x_{ijk}}\}_k$ an open covering of $U_{x_i}\cap U_{x_j}$, for each $i,j$. Consider the exact sequence of morphisms $${\mathcal M}\to \prod_i {\mathcal M}_{U_{x_i}} {\dosflechasa[]{}{}} \prod_{i,j,k} {\mathcal M}_{U_{x_{ijk}}}.$$ Taking $f_*$, we obtain an exact sequence of morphisms, then $f_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module. \end{proof} \begin{corolario} \label{C4.10} Let $X$ be a schematic finite space and $U \overset i \hookrightarrow X$ an open subset. Given a quasi-coherent ${\mathcal O}_U$-module ${\mathcal N}$, there exists a quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$, such that ${\mathcal M}_{|U}\simeq {\mathcal N}$. \end{corolario} \begin{proof}[Proof] Define ${\mathcal M}:=i_*{\mathcal N}$. \end{proof} \begin{lemma} Let $X$ be an affine finite space and $U\subset X$ an open set. Then, $U$ is affine iff $U\cap U_x$ is affine, for any $x\in X$.\end{lemma} \begin{proof}[Proof] If $U$ is affine, then $U\cap U_x$ is affine, for any $x\in X$, by Proposition \ref{inter}. Let us prove the converse implication. The inclusion morphism $i\colon U\hookrightarrow X$ is an affine morphism, by Proposition \ref{hmm}. Hence, $U=i^{-1}(X)$ is affine. \end{proof} \begin{proposicion} A morphism of ringed spaces $f\colon X\to Y$ between affine finite spaces is affine iff it is a schematic morphism.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ This is Proposition \ref{afinesqu}. $\Leftarrow)$ By Proposition \ref{K40}, we only have to prove that $f^{-1}(U)$ is affine, for any affine open subset $U\subseteq Y$. By the previous lemma, we only have to prove that $f^{-1}(U)\cap U_x$ is affine. The composition of affine morphisms is affine, then $U_x\to U_{f(x)}\hookrightarrow Y$ is affine. Hence, $f^{-1}(U)\cap U_x$ is affine. \end{proof} \begin{corolario} \label{C9.10} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is affine iff there exists an affine open covering of $Y$, $\{U_i\}$, such that $f^{-1}(U_i)$ is affine, for any $i$.\end{corolario} \begin{proof}[Proof] Recall that being $f$ affine is a local property on $Y$. \end{proof} In \cite{KG} 5.6, it is proved that a morphism of ringed spaces $f\colon X\to Y$ is schematic iff $R^if_*{\mathcal M}$ is quasi-coherent for any quasi-coherent module ${\mathcal M}$ and any $i$. \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{Tcohsch} Let $X$ and $Y$ be schematic finite spaces and $f\colon X\to Y$ a morphism of ringed spaces. Then, $f$ is a schematic morphism iff $f_*\mathcal M$ is quasi-coherent, for any quasi-coherent ${\mathcal O}_X$-module $\mathcal M$.
\end{teorema}\end{minipage}} \begin{proof}[Proof] $\Rightarrow)$ Recall Proposition \ref{K40}. $\Leftarrow)$ We can suppose that $X$ and $Y$ are affine. We only have to prove that $f^{-1}(U_y)$ is affine, for any $y\in Y$, by Proposition \ref{hmm}. By Proposition \ref{afinfp}, we only have to prove that the morphism ${\mathcal O}_X(f^{-1}(U_y))\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_{x}$ is faithfully flat. Observe that $${\mathcal O}_X(f^{-1}(U_y))=(f_*{\mathcal O}_X)(U_y)=(f_*{\mathcal O}_X)(Y)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y={\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y$$ and for any $x\in f^{-1}(U_y)$ $${\mathcal O}_{x}\otimes_{{\mathcal O}_Y(Y)} {\mathcal O}_y={\mathcal O}_{x}\otimes_{{\mathcal O}_y} {\mathcal O}_y\otimes_{{\mathcal O}_Y(Y)} {\mathcal O}_y={\mathcal O}_{x}\otimes_{{\mathcal O}_y} {\mathcal O}_y={\mathcal O}_x.$$ The morphism ${\mathcal O}_X(X)\to {\mathcal O}_x$ is flat, then tensoring by $\otimes_{{\mathcal O}_Y(Y)} {\mathcal O}_y$ the morphism ${\mathcal O}_X(f^{-1}(U_y))\to {\mathcal O}_x$ is flat, for any $x\in f^{-1}(U_y)$. If the morphism ${\mathcal O}_X(f^{-1}(U_y))\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_{x}$ is not faithfully flat, there exists an ideal $I\underset\neq\subset {\mathcal O}_X(f^{-1}(U_y))$ such that $I\cdot \prod_{x\in f^{-1}(U_y)} {\mathcal O}_{x}=\prod_{x\in f^{-1}(U_y)} {\mathcal O}_{x}$. Observe that the morphism ${\mathcal O}_X(X)\to {\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y={\mathcal O}_X(f^{-1}(U_y))$ is flat since ${\mathcal O}_Y(Y)\to {\mathcal O}_y$ is flat. Besides, $$\aligned {\mathcal O}_X(f^{-1}(U_y)) & \otimes_{{\mathcal O}_X(X)} {\mathcal O}_X(f^{-1}(U_y)) = {\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y \otimes_{{\mathcal O}_X(X)} {\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y\\ & ={\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)} {\mathcal O}_y\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y={\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)} {\mathcal O}_y={\mathcal O}_X(f^{-1}(U_y)).\endaligned$$ By Proposition \ref{sudor}, there exists an ideal $J\subset {\mathcal O}_X(X)$ such that $J\cdot {\mathcal O}_X(f^{-1}(U_y))=I$. Let $\mathcal M$ be the quasi-coherent ${\mathcal O}_X$-module associated with the ${\mathcal O}_X(X)$-module ${\mathcal O}_X(X)/J$. Then, $f_*\mathcal M$ is the quasi-coherent ${\mathcal O}_Y$-module associated with the ${\mathcal O}_Y(Y)$-module ${\mathcal O}_X(X)/J$ and $$\aligned \mathcal M(f^{-1}(U_y)) & =f_*{\mathcal M}(U_y)=f_*{\mathcal M}(Y)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y=({\mathcal O}_X(X)/J)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y\\ & =({\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y)/J\cdot ({\mathcal O}_X(X)\otimes_{{\mathcal O}_Y(Y)}{\mathcal O}_y)={\mathcal O}_X(f^{-1}(U_y))/I\neq 0.\endaligned$$ However, ${\mathcal M}_{|f^{-1}(U_y)}=0$ since $\mathcal M_x={\mathcal O}_x/J\cdot {\mathcal O}_x={\mathcal O}_x/I\cdot {\mathcal O}_x=0$, for any $x\in f^{-1}(U_y)$. This is a contradiction; hence the morphism ${\mathcal O}_X(f^{-1}(U_y))\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_{x}$ is faithfully flat. \end{proof} \begin{notacion} Let $f\colon X\to Y$ be a morphism of ringed spaces between ringed finite spaces, $x\in X$ and $y\in Y$.
We shall denote $U_{xy}:=U_x\cap f^{-1}(U_y)$ and ${\mathcal O}_{xy}:={\mathcal O}(U_x\cap f^{-1}(U_y))$.\end{notacion} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{proposicion} \label{K6} A morphism of ringed spaces $f\colon X\to Y$ between schematic finite spaces is schematic iff for any $x\in X$ and $y\geq f(x)$ \begin{enumerate} \item $U_{xy}$ is affine. \item ${\mathcal O}_{xy}={\mathcal O}_{x}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}$.\end{enumerate} \end{proposicion}\end{minipage}} \begin{proof}[Proof] Consider the morphism $f_x\colon U_x\to U_{f(x)}$. Then, $f_{x*}{\mathcal O}_{U_x}$ is a quasi-coherent module iff condition 2. is satisfied. By Proposition \ref{hmm}, $f_x$ is affine iff conditions 1. and 2. are satisfied. Then, $f$ is schematic iff conditions 1. and 2. are satisfied. \end{proof} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{K7} Let $f\colon X\to Y$ be a morphism of ringed spaces between schematic finite spaces. Then, $f$ is schematic iff the induced morphism on spectra by the morphism of rings ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_y\to \prod_{z\in U_{xy}} {\mathcal O}_{z}$ is surjective, for any $x$ and $y\geq f(x)$.\end{teorema}\end{minipage}} \begin{proof}[Proof] $\Rightarrow)$ By Proposition \ref{K6}, $U_{xy}$ is affine and ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_y= {\mathcal O}_{xy}$. Therefore, the morphism $${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_y={\mathcal O}_{xy}\to \prod_{z\in U_{xy}} {\mathcal O}_{z}$$ is faithfully flat, and the induced morphism on spectra is surjective. $\Leftarrow)$ Let $z\in U_{xy}$. Since ${\mathcal O}_x\to {\mathcal O}_z$ is a flat morphism, the morphism $${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y\to {\mathcal O}_z\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y={\mathcal O}_z\otimes_{{\mathcal O}_{f(z)}}{\mathcal O}_{f(z)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y={\mathcal O}_z\otimes_{{\mathcal O}_{f(z)}} {\mathcal O}_{f(z)}={\mathcal O}_z$$ is flat. Denote $B={\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_y$ and $C=\prod_{z\in U_{xy}} {\mathcal O}_{z}$. The morphism $B\to C$ is flat and, by hypothesis, the induced morphism on spectra is surjective; hence it is faithfully flat. Let $z,z'\in U_{xy}$. The morphism ${\mathcal O}_z\otimes_{{\mathcal O}_x}{\mathcal O}_{z'}\to {\mathcal O}_z\otimes_{B}{\mathcal O}_{z'}$ is surjective and the composite morphism $${\mathcal O}_z\otimes_{{\mathcal O}_x}{\mathcal O}_{z'}\to {\mathcal O}_z\otimes_{B}{\mathcal O}_{z'}\to {\mathcal O}_{zz'}$$ is an isomorphism. Therefore, ${\mathcal O}_z\otimes_{B}{\mathcal O}_{z'}= {\mathcal O}_{zz'}$. The exact sequence of morphisms $$B\to C =\prod_{z\in U_{xy}} {\mathcal O}_{z} {\dosflechasa[]{}{}} C\otimes_BC=\prod_{z,z'\in U_{xy}} {\mathcal O}_{zz'}$$ shows that $B={\mathcal O}_{xy}$. Therefore, the morphism ${\mathcal O}_{xy}\to \prod_{z\in U_{xy}} {\mathcal O}_{z}$ is faithfully flat. By Proposition \ref{afinfp}, $U_{xy}$ is affine. By Proposition \ref{K6}, $f$ is schematic. \end{proof} \section{Removable points. Minimal schematic space} \begin{proposicion} \label{22} Let $X$ be a schematic finite space. Let $p\in X$ be a point such that the morphism ${\mathcal O}_p\to \prod_{q>p}{\mathcal O}_q$ is faithfully flat\footnote{If $I:=\{q\in X\colon q>p\}=\emptyset$, define $\prod_{q\in I}{\mathcal O}_q:=\{0\}$.}. Consider the ringed subspace $Y=X-\{p\}$ of $X$ (${\mathcal O}_{Y,y}:={\mathcal O}_{X,y}$, for any $y\in Y$).
Then, $Y$ is a schematic finite space, the inclusion map $i\colon Y\hookrightarrow X$ is an affine morphism and $i_*{\mathcal O}_Y={\mathcal O}_X$.\end{proposicion} \begin{proof}[Proof] $Y$ is a schematic finite space by Proposition \ref{4pg20}, because $X$ is a schematic finite space. Let us prove that $i\colon Y\hookrightarrow X$ is affine and $i_*{\mathcal O}_Y={\mathcal O}_X$: Consider $U_x\subset X$. If $x\neq p$, then $i^{-1}(U_x)=U_x$ and $(i_*{\mathcal O}_Y)(U_x)={\mathcal O}_{Y,x}={\mathcal O}_{X,x}={\mathcal O}_X(U_x)$. If $x=p$, denote $U:=U_p-\{p\}$. Observe that ${{\mathcal O}_X}_{|U}={{\mathcal O}_Y}_{|U}$. The morphism $A={\mathcal O}_p\to \prod_{x\in U}{\mathcal O}_x=B$ is faithfully flat. The exact sequence of morphisms $$A\to B{\dosflechasa[]{}{}} B\otimes_AB$$ and the equality $B\otimes_A B=\prod_{x,y\in U} {\mathcal O}_{xy}$ show that $A={\mathcal O}_X(U)={\mathcal O}_Y(U)$. By Proposition \ref{afinfp}, $U$ is affine. Then, $i^{-1}(U_p)=U$ is affine and $(i_*{\mathcal O}_Y)(U_p)={\mathcal O}_Y(U)=A={\mathcal O}_X(U_p)$. \end{proof} \begin{observacion} If $U$ is affine, then the morphism ${\mathcal O}(U)\to \prod_{q\in U} {\mathcal O}_q$ is faithfully flat. We have proved that ${\mathcal O}_p\to \prod_{q>p}{\mathcal O}_q$ is faithfully flat iff $U:=U_p-\{p\}$ is affine and ${\mathcal O}_p={\mathcal O}(U)$. \end{observacion} \begin{lemma} \label{flat} Let $X$ be an affine finite space and $U\subseteq X$ an affine open subset. Then, the restriction morphism ${\mathcal O}(X)\to {\mathcal O}(U)$ is flat. \end{lemma} \begin{proof}[Proof] The morphism ${\mathcal O}(U)\to \prod_{x\in U} {\mathcal O}_x$ is faithfully flat and the composite morphism ${\mathcal O}(X)\to {\mathcal O}(U)\to \prod_{x\in U} {\mathcal O}_x$ is flat. Then, the morphism ${\mathcal O}(X)\to {\mathcal O}(U)$ is flat. \end{proof} \begin{proposicion} \label{bla} Let $X$ be a schematic finite space. Let $p\in X$ be a point such that the morphism ${\mathcal O}_p\to \prod_{q>p}{\mathcal O}_q$ is faithfully flat and let $Y:=X-\{p\}$. An open set $V\subseteq X$ is affine iff $V\cap Y$ is affine.\end{proposicion} \begin{proof}[Proof] We only have to prove the converse implication. We can suppose that $V=Y$ and we have to prove that $X$ is affine. By Lemma \ref{flat}, the morphism ${\mathcal O}_X(X)={\mathcal O}_Y(Y) \to {\mathcal O}_Y(U_x\cap Y)={\mathcal O}_X(U_x)={\mathcal O}_x$ is flat, for any $x\in X$. Then, the morphism ${\mathcal O}(X)={\mathcal O}(Y)\to \prod_{y\in Y} {\mathcal O}_{Y,y}=\prod_{y\in Y} {\mathcal O}_{X,y}$ is faithfully flat. Therefore, the morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_{X,x}$ is faithfully flat. Likewise, given $U_x,U_{x'}\subseteq X$, the morphism ${\mathcal O}_X(U_x\cap U_{x'})\to \prod_{x''\in U_x\cap U_{x'}} {\mathcal O}_{X,x''}$ is faithfully flat. Besides, $${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_{x'}={\mathcal O}_Y(U_x\cap Y)\otimes_{{\mathcal O}(Y)}{\mathcal O}_Y(U_{x'}\cap Y)\overset{\text{\ref{11}}}={\mathcal O}_Y( U_x\cap Y\cap U_{x'}\cap Y)={\mathcal O}_X(U_x\cap U_{x'}).$$ Therefore, $X$ is affine.\end{proof} \begin{definicion} Let $X$ be a schematic finite space. We shall say that $x\in X$ is removable if ${\mathcal O}_x\to \prod_{x'>x} {\mathcal O}_{x'}$ is faithfully flat.\end{definicion} If ${\mathcal O}_x=0$, then $x$ is obviously a removable point.
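For instance, if $X=\{p,q\}$ with $p<q$ and the restriction morphism ${\mathcal O}_p\to {\mathcal O}_q$ is an isomorphism, then ${\mathcal O}_p\to \prod_{q'>p}{\mathcal O}_{q'}={\mathcal O}_q$ is faithfully flat, so $p$ is removable: $X-\{p\}$ is the punctual ringed space $(\{q\},{\mathcal O}_q)$ and, in agreement with the observation above, ${\mathcal O}_p={\mathcal O}(U_p-\{p\})$.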
\begin{proposicion} Let $X$ be a schematic finite space and let $p,p'\in X$ be two points. Then, $p,p'$ are removable points of $X$ iff $p$ is a removable point of $X$ and $p'$ is a removable point of $X-\{p\}$.\end{proposicion} \begin{proof}[Proof] It is immediate.\end{proof} \begin{proposicion} Let $U'\subseteq U$ be affine open subsets of a schematic finite space $X$ and suppose that the morphism ${\mathcal O}(U)\to {\mathcal O}(U')$ is faithfully flat. Then, $U-U'$ is a set of removable points of $X$.\end{proposicion} \begin{proof}[Proof] ${\mathcal O}(U)={\mathcal O}(U')$ because the morphism ${\mathcal O}(U)\to {\mathcal O}(U')$ is faithfully flat and $${\mathcal O}(U)\otimes_{{\mathcal O}(U)}{\mathcal O}(U')={\mathcal O}(U') \overset{\text{\ref{11}}}={\mathcal O}(U')\otimes_{{\mathcal O}(U)}{\mathcal O}(U').$$ Let $x\in U-U'$, then ${\mathcal O}_x={\mathcal O}(U)\otimes_{{\mathcal O}(U)}{\mathcal O}_x={\mathcal O}(U')\otimes_{{\mathcal O}(U)}{\mathcal O}_x \overset{\text{\ref{11}}}={\mathcal O}(U'\cap U_x)$. By Proposition \ref{inter}, $U'\cap U_x$ is affine, then the morphism ${\mathcal O}_x={\mathcal O}(U'\cap U_x)\to \prod_{x'\in U'\cap U_x}{\mathcal O}_{x'}$ is faithfully flat. Hence, ${\mathcal O}_x\to \prod_{y>x} {\mathcal O}_y$ is faithfully flat, and $x$ is a removable point.\end{proof} \begin{observacion} In addition, we have proved that ${\mathcal O}(U)={\mathcal O}(U')$.\end{observacion} \begin{definicion} A schematic finite space $X$ is said to be minimal if there are no removable points in $X$ and it is $T_0$. Let $\tilde X$ be the Kolmogorov quotient of $X$ and let $P$ be the set of all the removable points of $\tilde X$; we shall denote $X_M:=\tilde X\backslash P$. \end{definicion} By Proposition \ref{22}, $X_M$ is a schematic finite space, the natural morphism $X_M\overset{i}\hookrightarrow \tilde X$ is affine and ${\mathcal O}_{\tilde X}=i_*{\mathcal O}_{X_M}$. \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. If $x\in X$ is not a removable point, then $f(x)\in Y$ is not a removable point. Then, we have the commutative diagram $$\xymatrix{ X \ar[d]^-f \ar[r] & \tilde X \ar[d]^-{\tilde f}& \,\, X_M \ar@{_{(}->}[l] \ar[d]^-{f_{M}} \\ Y \ar[r] & \tilde Y & \,\, Y_M \ar@{_{(}->}[l] }$$ where $\tilde X$ and $\tilde Y$ are the Kolmogorov spaces of $X$ and $Y$ respectively, and $\tilde f$ and $f_M$ are the induced morphisms. \end{proposicion} \begin{proof}[Proof] Let us prove the contrapositive. Consider the affine morphism $f_x\colon U_x\to U_{f(x)},\, f_x(x'):=f(x')$. Since $f_{x*}{\mathcal O}_{U_x}$ is a quasi-coherent ${\mathcal O}_{U_{f(x)}}$-module, ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)} }{\mathcal O}_y= {\mathcal O}(f_x^{-1}(U_y))$, for any $y\in U_{f(x)}$. If $f(x)$ is a removable point, then ${\mathcal O}_{f(x)}\to \prod_{y>f(x)} {\mathcal O}_y$ is faithfully flat. Tensoring by ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}$, one has the faithfully flat morphism ${\mathcal O}_x\to \prod_{y>f(x)} {\mathcal O}(f_x^{-1}(U_y))$. The open sets $f_x^{-1}(U_y)$ are affine, because $f_x\colon U_x\to U_{f(x)}$ is affine. Then, the morphisms ${\mathcal O}(f_x^{-1}(U_y))\to \prod_{x'\in f_x^{-1}(U_y)} {\mathcal O}_{x'}$ are faithfully flat. Hence, the morphism ${\mathcal O}_x\to \prod_{x'\in f_x^{-1}(U_y), y>f(x)}{\mathcal O}_{x'}$ is faithfully flat. Therefore, ${\mathcal O}_x\to \prod_{x'>x}{\mathcal O}_{x'}$ is faithfully flat and $x$ is a removable point. \end{proof} \begin{proposicion} \label{P6.11} Let $p\in Y$ be a removable point, $i\colon Y-\{p\}\to Y$ be the inclusion morphism and $f\colon X\to Y$ a schematic morphism.
If $f(X)\subseteq Y-\{p\}$ and $g\colon X\to Y-\{p\}$ is the morphism of ringed spaces such that $f=i\circ g$, then $g$ is a schematic morphism. Therefore, $f_{_M}\colon X_M\to Y_M$ is a schematic morphism. \end{proposicion} \begin{proof}[Proof] It is an immediate consequence of Theorem \ref{K7}. \end{proof} \begin{proposicion} Let $X$ be a schematic finite space and $U\subset X$ an affine open subspace. Let $X':=X\coprod \{u\}$ be the ringed finite space defined by \begin{enumerate} \item The preorder on $X\subset X'$ is the pre-established preorder. Given $x\in X$: $u< x$ iff $x\in U$, and $x<u$ iff $x\leq x'$ for every $x'\in U$. \item ${\mathcal O}_{X',x}:={\mathcal O}_{X,x}$ for any $x\in X$, and ${\mathcal O}_{X',u}:={\mathcal O}_X(U)$. The restriction morphisms are the obvious morphisms. \end{enumerate} Then, $X'$ is a schematic finite space and $u$ is a removable point of $X'$.\end{proposicion} \begin{proof}[Proof] Let us denote $U_u:=U\subset X$ and $\tilde U_{x'}:=\{y\in X'\colon y\geq x'\}$, for any $x'\in X'$. By Proposition \ref{inter}, the morphism $${\mathcal O}_{X',y}\otimes_{{\mathcal O}_{X',x'}}{\mathcal O}_{X',y'}\overset{\text{\ref{11}}} = {\mathcal O}_X(U_y\cap U_{y'})\to \prod_{x\in U_y\cap U_{y'}} {\mathcal O}_{X,x}= \prod_{x\in U_y\cap U_{y'}} {\mathcal O}_{X',x}$$ is faithfully flat, for any $y,y'\geq x'$. If $U\subseteq U_y\cap U_{y'}$, the morphism ${\mathcal O}_X(U_y\cap U_{y'})\to {\mathcal O}_X(U)$ is flat, by Lemma \ref{flat}. Hence, the morphism ${\mathcal O}_{X',y}\otimes_{{\mathcal O}_{X',x'}}{\mathcal O}_{X',y'}\to \prod_{x\in \tilde U_y\cap \tilde U_{y'}} {\mathcal O}_{X',x}$ is faithfully flat. By Proposition \ref{4pg20}, $X'$ is a schematic finite space. The morphism $${\mathcal O}_{X',u}={\mathcal O}_X(U)\to \prod_{x\in U} {\mathcal O}_{X,x} =\prod_{x'>u} {\mathcal O}_{X',x'}$$ is faithfully flat, because $U$ is affine. Hence, $u$ is removable. \end{proof} \section{Serre Theorem} Let $X$ be a finite topological space and $F$ a sheaf of abelian groups on $X$. \begin{proposicion}\label{aciclicity} If $X$ is a finite topological space with a minimum, then $H^i(X,F)=0$ for any sheaf $F$ and any $i>0$. In particular, for any finite topological space one has \[ H^i(U_p,F)=0\] for any $p\in X$, any sheaf $F$ and any $i>0$. \end{proposicion} \begin{proof}[Proof] Let $p$ be the minimum of $X$. Then $U_p=X$ and, for any sheaf $F$, one has $\Gamma(X,F)=F_p$; thus, taking global sections is the same as taking the stalk at $p$, which is an exact functor. \end{proof} Let $f\colon X\to Y$ be a continuous map between finite topological spaces and $F$ a sheaf on $X$. The $i$-th higher direct image $R^if_*F$ is the sheaf on $Y$ given by: $$ [R^if_*F]_y=H^i(f^{-1}(U_y),F).$$ Let $F$ be a sheaf on a finite topological space $X$. We define $C^nF$ as the sheaf on $X$ whose sections on an open subset $U$ are $$ (C^nF)(U)=\proda{U \ni x_0<\cdots <x_n } F_{x_n}$$and whose restriction morphisms $(C^nF)(U)\to (C^nF)(V)$ for any $V\subseteq U$ are the natural projections. One has morphisms $d\colon C^nF \to C^{n+1}F$, $a=(a_{x_0<\cdots < x_{n}})\mapsto d(a)=(d(a)_{x_0<\cdots < x_{n+1}})$ defined in each open subset $U$ by the formula $$ (\di a) _{x_0<\cdots < x_{n+1}}= \suma{0\leq i\leq n} (-1)^i a_{x_0<\cdots \widehat{x_i}\cdots <x_{n+1}} + (-1)^{n+1} \bar a _{x_0<\cdots <x_n} $$ where $\bar a _{x_0<\cdots <x_n}$ denotes the image of $ a _{x_0<\cdots <x_n}$ under the morphism $F_{x_n}\to F_{x_{n+1}}$. There is also a natural morphism $\di\colon F\to C^0F$. One easily checks that $\di^2=0$.
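Before proving that $C^{\displaystyle \cdot} F$ is a resolution in general, let us illustrate the construction in the smallest non-trivial case: take $X=\{p,q\}$ with $p<q$ and let $F$ be any sheaf on $X$. The only chains are $p$, $q$ and $p<q$, so $$(C^0F)(X)=F_p\times F_q,\qquad (C^1F)(X)=F_q,\qquad \di(a_p,a_q)=a_q-\bar a_p,$$ and $F(X)=F_p$ because $p$ is the minimum of $X$. The augmented complex $0\to F(X)\to (C^0F)(X)\to (C^1F)(X)\to 0$ is exact: the kernel of $\di$ in degree $0$ is $\{(a_p,a_q)\colon a_q=\bar a_p\}\simeq F_p$ and $\di$ is surjective in degree $0$, in agreement with Proposition \ref{aciclicity}.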
\begin{teorema}[(\cite{KG} 2.15)\,] $C^{\displaystyle \cdot} F$ is a finite and flasque resolution of $F$ (in fact, it is the Godement resolution of $F$). \end{teorema} \begin{proof}[Proof] By definition, $C^nF=0$ for $n>\dim X$. It is also clear that the $C^nF$ are flasque. Let us see that \[ 0\to F\to C^0F \to \cdots\to C^{\dim X}F\to 0\] is an exact sequence. We have to prove that $(C^{\displaystyle \cdot} F)(U_p)$ is a resolution of $F(U_p)$. One has a decomposition \[ (C^nF)(U_p)= \proda{p=x_0<\cdots <x_n } F_{x_n}\times \proda{p<x_0<\cdots <x_n } F_{x_n} = (C^{n-1}F)(U^*_p)\times (C^nF)(U^*_p)\] with $U_p^*:=U_p-\{ p\}$; via this decomposition, the differential $\di \colon (C^nF)(U_p) \to (C^{n+1}F)(U_p)$ becomes: \[ \di(a,b)=(b-\di^*a,\di^*b)\] with $\di^*$ the differential of $(C^{\displaystyle \cdot} F)(U_p^*)$. If $\di(a,b)=0$, then $b=\di^*a$ and $\di(0,a)=(a,b)$. It is immediate now that every cycle is a boundary. \end{proof} This theorem, together with De Rham's theorem (\cite{Godement}, Thm. 4.7.1), yields that the cohomology groups of a sheaf can be computed with the standard resolution, i.e., $H^i(U,F)=H^i\Gamma(U,C^{\displaystyle \cdot} F)$, for any open subset $U$ of $X$ and any sheaf $F$ of abelian groups on $X$. \begin{teorema}[(\cite{KS} 4.3,\,4.12)\,] \label{afin} Every quasi-coherent module on an affine finite space is acyclic. \end{teorema} \begin{proof}[Proof] Let $X$ be an affine finite space. We proceed by induction on the order of $X$. The open sets $U_{xy}$ are affine by Corollary \ref{Uxy}. If $U_{xy}=X$, then $X=U_x$ and every sheaf is acyclic; hence, by the induction hypothesis, we can suppose that every quasi-coherent module on each $U_{xy}$ is acyclic. Let us prove that any quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$ is acyclic. We have to prove that the sequence of morphisms $${\mathcal M}(X)\to \prod_{x_1\in X} {\mathcal M}_{x_1}\to \prod_{x_1<x_2} {\mathcal M}_{x_2}\to \cdots$$ is exact. Since the morphism ${\mathcal O}(X)\to \prod_{z\in X}{\mathcal O}_z$ is faithfully flat, it is sufficient to check that, tensoring the previous sequence by $\otimes_{{\mathcal O}(X)}{\mathcal O}_z$ for any $z\in X$, the resulting sequence of morphisms $$\xymatrix @C18pt{ {\mathcal M}_z\ar[r] & \proda{z\leq x_1} {\mathcal M}_{x_1} \times \proda{z\not\leq x_1} {\mathcal M}_{x_1z} \ar[r] & \proda{z\leq x_1<x_2} {\mathcal M}_{x_2}\,\, \times\! \proda{\scriptsize \begin{array}{l} \,\,\, \,\, \, \, x_1<x_2 \\z\not\leq x_1, z\leq x_2\end{array}} {\mathcal M}_{x_2}\,\, \times\! \proda{\scriptsize \begin{array}{l} x_1<x_2\\ \, \,\, z\not\leq x_2\end{array}} {\mathcal M}_{x_2z} \, \ar[r] &\,\, \cdots}$$ is exact. That is, we have to prove that the sequence of morphisms $(S)$ $$\xymatrix @C18pt @R8pt{ {\mathcal M}(U_z) \, \ar[r] & \, C^0(U_z,{\mathcal M}) \times \proda{z\not\leq x_1} {\mathcal M}(U_{x_1z}) \ar[r] & \, C^1(U_z,{\mathcal M})\times \proda{z\not\leq x_1} C^0(U_{x_1z},{\mathcal M})\,\,\times \proda{\scriptsize \begin{array}{l} x_1<x_2\\ \,\,\, z\not\leq x_2 \end{array}} \! {\mathcal M}(U_{x_2z}) \\ \qquad \,\, \ar[r] & \quad \cdots \qquad \qquad \qquad \qquad & }$$ is exact. Let $D^{\displaystyle \cdot}_r:=\oplus_{x_1<\cdots<x_r,z\not\leq x_r}( {\mathcal M}(U_{x_rz})\oplus C^{\displaystyle \cdot} ({U_{x_rz}},{\mathcal M}))[-r]$ and let $d_r$ be the differential whose restriction to each direct summand $ ({\mathcal M}(U_{x_rz})\oplus C^{\displaystyle \cdot} ({U_{x_rz}},{\mathcal M}))[-r]$ is the differential of ${\mathcal M}(U_{x_rz})\oplus C^{\displaystyle \cdot} ({U_{x_rz}},{\mathcal M})$ multiplied by $(-1)^r$.
$H^i(D^{\displaystyle \cdot}_r)=0$ for any $i\geq 0$: up to shift and a sign in the differential, $D^{\displaystyle \cdot}_r$ is a direct sum of augmented complexes ${\mathcal M}(U_{x_rz})\to C^{\displaystyle \cdot}(U_{x_rz},{\mathcal M})$, which are exact because ${\mathcal M}_{|U_{x_rz}}$ is acyclic by the induction hypothesis. The sequence of morphisms $(S)$ is equal to the differential complex $D^{\displaystyle \cdot}:=D^{\displaystyle \cdot}_0\oplus D^{\displaystyle \cdot}_1\oplus\cdots \oplus D^{\displaystyle \cdot}_n$ with the differential $$d=\pamatrix{d_0 & 0 & 0 & \cdots & 0 & 0\\ - & d_1 & 0 & \cdots & 0& 0\\ \vdots & \vdots & \ddots & & \vdots & \vdots \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ - & - & - & \cdots & d_{n-1} & 0 \\ - & - & - & \cdots & - & d_n}$$ Let $D_{>0}^{\displaystyle \cdot} =\oplus_{i>0} D^{\displaystyle \cdot}_i$. Consider the exact sequence of morphisms of complexes $$0\to D_{>0}^{\displaystyle \cdot}\to D^{\displaystyle \cdot} \to D^{\displaystyle \cdot}_0\to 0.$$ Then, $H^i(D^{\displaystyle \cdot})=H^i(D_{>0}^{\displaystyle \cdot})$, for any $i\geq 0$. Let $D_{>1}^{\displaystyle \cdot} =\oplus_{i>1} D^{\displaystyle \cdot}_i$. Consider the exact sequence of morphisms $0\to D_{>1}^{\displaystyle \cdot} \to D_{>0}^{\displaystyle \cdot} \to D^{\displaystyle \cdot}_1\to 0$. Then, $H^i(D^{\displaystyle \cdot})=H^i(D_{>0}^{\displaystyle \cdot})=H^i(D_{>1}^{\displaystyle \cdot})$. Recursively, $H^i(D^{\displaystyle \cdot})=H^i(D_{>n}^{\displaystyle \cdot})=0$ for any $i\geq 0$ (note that $D_{>n}^{\displaystyle \cdot}=0$), and the sequence of morphisms $(S)$ is exact. \end{proof} Let $R$ and $R'$ be commutative rings and $R\to R'$ a flat morphism of rings. Let $(X,{\mathcal O})$ be an $R$-ringed finite space. Let ${\mathcal O}\otimes_R R'$ be the sheaf of rings on $X$ defined by $({\mathcal O}\otimes_R R')(U):={\mathcal O}(U)\otimes_R R'$. Consider the obvious morphism $\pi\colon (X,{\mathcal O}\otimes_R R')\to (X,{\mathcal O})$, $\pi(x)=x$. Let ${\mathcal M}$ be a sheaf of ${\mathcal O}$-modules. Then, $$H^i(X,\pi^*{\mathcal M})=H^i(X,{\mathcal M})\otimes_R R'.$$ If ${\mathcal N}$ is a quasi-coherent ${\mathcal O}\otimes_R R'$-module, then $\pi_*{\mathcal N}={\mathcal N}$ is a quasi-coherent ${\mathcal O}$-module. Let $S\subset R$ be a multiplicative system, $R'=S^{-1}\cdot R$ and ${\mathcal N}$ a quasi-coherent ${\mathcal O}\otimes_R R'$-module. Then, $\pi_*{\mathcal N}$ is a quasi-coherent ${\mathcal O}$-module, ${\mathcal N}=\pi^*\pi_*{\mathcal N}$ and $H^i(X,{\mathcal N})=H^i(X,\pi_*{\mathcal N}).$ \begin{naida}[Serre Theorem (\cite{KS} 5.11)\,] \label{Serre} Let $X$ be a schematic finite space. $X$ is affine iff every quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$ is acyclic (equivalently, $H^1(X,{\mathcal M})=0$ for any quasi-coherent ${\mathcal M}$).\end{naida} \begin{proof}[Proof] $\Leftarrow)$ Let $R:={\mathcal O}(X)$. Recall Notation \ref{Notation}. Given ${\mathfrak p}\in \Spec R$, consider the sheaf of rings on $X$, ${\mathcal O}\otimes_RR_{\mathfrak p}$. Obviously, $(X,{\mathcal O}\otimes_RR_{\mathfrak p})$ is a schematic finite space and $(X,{\mathcal O})$ is affine iff $(X,{\mathcal O}\otimes_RR_{\mathfrak p})$ is affine for any ${\mathfrak p}$. Hence, we can suppose $R$ is a local ring. We can suppose that $X$ is minimal. Let $X'$ be the set of the closed points of $X$. Let $x'\in X'$. The morphism ${\mathcal O}_{x'}\to \prod_{x> x'} {\mathcal O}_{x}$ is flat but, since $X$ is minimal, it is not faithfully flat; hence there exists a prime ideal $I_{x'}\subset {\mathcal O}_{x'}$ such that $I_{x'}\cdot \prod_{x>x'} {\mathcal O}_{x}= \prod_{x>x'} {\mathcal O}_{x}$. Let ${\mathfrak p}$ be the quasi-coherent ideal defined by ${\mathfrak p}_{x'}:=I_{x'}$ if $x'\in X'$ and ${\mathfrak p}_x:={\mathcal O}_x$ if $x\notin X'$.
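(Observe that ${\mathfrak p}$ is indeed quasi-coherent: for $x'\in X'$ and $x>x'$, projecting the equality $I_{x'}\cdot \prod_{y>x'}{\mathcal O}_{y}=\prod_{y>x'}{\mathcal O}_{y}$ onto the factor ${\mathcal O}_x$ gives $I_{x'}\cdot {\mathcal O}_x={\mathcal O}_x$, so that, by the flatness of ${\mathcal O}_{x'}\to {\mathcal O}_x$, $${\mathfrak p}_{x'}\otimes_{{\mathcal O}_{x'}}{\mathcal O}_x=I_{x'}\cdot {\mathcal O}_x={\mathcal O}_x={\mathfrak p}_x;$$ for $x\notin X'$ the condition is trivial.)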
Observe that $({\mathcal O}_X/{\mathfrak p})_x=0$, for any $x\in X\backslash X'$, then $({\mathcal O}_{X}/{\mathfrak p})(X)=\prod_{x'\in X'} {\mathcal O}_{x'}/I_{x'}$. Consider the exact sequence of morphisms $$0\to {\mathfrak p} \to {\mathcal O}\to {\mathcal O}_{X}/{\mathfrak p}\to 0.$$ The morphism $R={\mathcal O}(X)\to ({\mathcal O}_{X}/{\mathfrak p})(X)=\prod_{x'\in X'} {\mathcal O}_{x'}/I_{x'}$ is surjective, because $H^1(X,{\mathfrak p})=0$. $R$ is a local ring, then $\prod_{x'\in X'} {\mathcal O}_{x'}/I_{x'}$ is a local ring (it is a quotient of $R$), hence $X'=\{x'\}$ and $x'$ is the minimum of $X$. Therefore, $X=U_{x'}$, which is affine. \end{proof} This theorem yields the usual Serre criterion for algebraic varieties (see \cite{KG} 4.13 and \cite{Serree}). \begin{corolario} \label{corotonto} A schematic finite space $X$ is affine iff the functor $$\Gamma\colon {\bf Qc\text{-}Mod}_X\to {\bf Mod}_{{\mathcal O}(X)},\,\, {\mathcal M}\mapsto \Gamma(X,{\mathcal M})$$ is exact.\end{corolario} \begin{proof}[Proof] $\Rightarrow)$ By the Serre Theorem $H^1(X,{\mathcal M})=0$, for any quasi-coherent module ${\mathcal M}$. Hence $\Gamma$ is exact. $\Leftarrow)$ It was proved in the proof of the Serre Theorem: the exactness of $\Gamma$ yields the surjectivity of ${\mathcal O}(X)\to ({\mathcal O}_X/{\mathfrak p})(X)$, which is all that was used there. \end{proof} \begin{corolario} A schematic finite space $X$ is affine iff $H^1(X,{\mathcal I})=0$ for any quasi-coherent ideal ${\mathcal I}\subseteq {\mathcal O}$.\end{corolario} \begin{proof}[Proof] $\Rightarrow)$ It follows from Theorem \ref{afin}. $\Leftarrow)$ Let $R={\mathcal O}(X)$. We have just proved this implication when $R$ is a local ring, in the proof of the Serre Theorem. Let ${\mathfrak p}\in\Spec R$ and let ${\mathcal I}^{\mathfrak p}\subset {\mathcal O}\otimes_RR_{\mathfrak p}$ be a quasi-coherent ideal. Consider the obvious morphism ${\mathcal O}\to {\mathcal O}\otimes_RR_{\mathfrak p}$. ${\mathcal J}:={\mathcal O}\times_{{\mathcal O}\otimes_R R_{\mathfrak p}}{\mathcal I}^{\mathfrak p}$ is a quasi-coherent ideal of ${\mathcal O}$ and $\pi^*{\mathcal J}={\mathcal I}^{\mathfrak p}$ (where $\pi\colon (X,{\mathcal O}\otimes_R R_{\mathfrak p})\to (X,{\mathcal O})$ is defined by $\pi(x):=x$). Hence, $H^1(X,{\mathcal I}^{\mathfrak p})=H^1(X,{\mathcal J})\otimes_R R_{\mathfrak p}=0$, so $(X,{\mathcal O}\otimes_RR_{\mathfrak p})$ is affine, for any ${\mathfrak p}$. Therefore, $X$ is affine. \end{proof} \begin{corolario} Let $X$ be a minimal affine finite space and suppose that ${\mathcal O}(X) $ is a local ring. Then, there exists a point $p\in X$ such that $X=U_p$.\end{corolario} \begin{teorema} \label{R1} Let $X$ and $Y$ be schematic finite spaces. A ringed space morphism $f\colon X\to Y$ is affine iff $f_*{\mathcal M}$ is quasi-coherent and $R^1f_*{\mathcal M}=0$, for any quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$.\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ $f_*{\mathcal M}$ is quasi-coherent by Proposition \ref{qctoqc}, and $(R^1f_*{\mathcal M})_y=H^1(f^{-1}(U_y),{\mathcal M})=0$ because $f^{-1}(U_y)$ is affine (Theorem \ref{afin}). $\Leftarrow)$ Let $U\subseteq Y$ be an affine open subspace. By Corollary \ref{C4.10}, any quasi-coherent module ${\mathcal M}$ on $f^{-1}(U)$ is the restriction of a quasi-coherent module on $X$. $H^1(f^{-1}(U),{\mathcal M})=H^1(U,f_*{\mathcal M})=0$, for any quasi-coherent ${\mathcal O}_{X}$-module ${\mathcal M}$. By the Serre Theorem \ref{Serre}, $f^{-1}(U)$ is affine. Hence, $f$ is affine. \end{proof} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{Tafex} Let $f\colon X\to Y$ be a schematic morphism. The functor $$f_*\colon \,{\bf Qc\text{-}Mod}_X\, \to \,{\bf Qc\text{-}Mod}_Y,\quad \mathcal M\functor f_*\mathcal M$$ is exact iff $f$ is affine.\end{teorema}\end{minipage}} \begin{proof}[Proof] $\Rightarrow)$ 1.
Let $U\subseteq Y$ be an open subset, $V=f^{-1}(U)$ and $f_{|V}\colon V\to U$, $f_{|V}(x):=f(x)$. Then $f_{|V*}$ is exact: Any short exact sequence of quasi-coherent modules $\mathcal N^\bullet$ on $V$ is a restriction of a short exact sequence of quasi-coherent modules $\mathcal M^\bullet$ on $X$; and $f_{|V*}\mathcal N^\bullet=(f_*\mathcal M^\bullet)_{|U}$. 2. By 1., we can suppose that $Y$ is affine; then we can suppose that $Y=(*,A)$ and, finally, that $A={\mathcal O}_X(X)$. 3. We have to prove that $X$ is affine. The functor $$\,{\bf Qc\text{-}Mod}_X\, \to \,{\bf Mod}_{{\mathcal O}(X)},\quad \mathcal M\functor f_*\mathcal M=\Gamma(X,{\mathcal M})$$ is exact. By Corollary \ref{corotonto}, $X$ is affine. $\Leftarrow)$ By Theorem \ref{R1}, $R^1f_*{\mathcal M}=0$, for any quasi-coherent module ${\mathcal M}$. Then, $f_*$ is exact. \end{proof} \section{Cohomological characterization of schematic finite spaces} We say that an $R$-ringed space $(X,{\mathcal O}_X)$ is a flat $R$-ringed space if the morphism $R\to {\mathcal O}_x$ is flat, for any $x\in X$. \begin{teorema} \label{2} Let $(X,{\mathcal O})$ be a flat $R$-ringed finite space and $M$ an $R$-module. If $H^i(X,{\mathcal O})$ is a flat $R$-module, for any $i>0$, then ${\mathcal O}(X)$ is a flat $R$-module and $$H^i(X,\tilde M)=H^i(X,{\mathcal O})\otimes_{R} M,\quad \forall i.$$ \end{teorema} \begin{proof}[Proof] Let $C^i:=\Ker[C^i(X,{\mathcal O})\to C^{i+1}(X,{\mathcal O})]$ and $B^i:=\Ima[C^{i-1}(X,{\mathcal O})\to C^{i}(X,{\mathcal O})]$. Let $n=\dim X$. $C^n=C^n(X,{\mathcal O})$ is $R$-flat. The sequence $0\to B^n\to C^n\to H^n(X,{\mathcal O})\to 0$ is exact, then $B^n$ is flat. The sequence $0\to C^{n-1}\to C^{n-1}(X,{\mathcal O})\to B^n\to 0$ is exact, then $C^{n-1}$ is $R$-flat. Recursively, $C^i$ and $B^i$ are $R$-flat for any $i$. Hence, ${\mathcal O}(X)=C^0$ is $R$-flat and, since $C^{\displaystyle \cdot}\tilde M=C^{\displaystyle \cdot}{\mathcal O}\otimes_R M$ (the products involved are finite), $$H^i(X,\tilde M)=H^i(X,C^{\displaystyle \cdot}{\mathcal O}\otimes_R M)=H^i(X,C^{\displaystyle \cdot}{\mathcal O})\otimes_R M=H^i(X,{\mathcal O})\otimes_R M.$$ \end{proof} \begin{corolario} \label{coro2} Let $X$ be a flat ${\mathcal O}(X)$-ringed finite space. Assume $H^i(X,{\mathcal O})$ is a flat ${\mathcal O}(X)$-module, for any $i>0$. Then, the morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat.\end{corolario} \begin{proof}[Proof] Let $M$ be an ${\mathcal O}(X)$-module. By Theorem \ref{2}, $\tilde M(X)=M$. Then, the morphism $$M=\tilde M(X)\hookrightarrow \prod_{x\in X} \tilde M_x= M\otimes_{{\mathcal O}(X)} \prod_{x\in X} {\mathcal O}_x$$ is injective. Therefore, the flat morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat.\end{proof} \begin{teorema} \label{afin'} Let $X$ be a flat ${\mathcal O}(X)$-ringed finite space. Then, $X$ is affine iff \begin{enumerate} \item[1'.] $X$ is acyclic. \item[2'.] ${\mathcal O}_x\otimes_{{\mathcal O}(X)} {\mathcal O}_y={\mathcal O}_{xy}$, for any $x,y$. \item[3'.] $U_{xy}$ is acyclic, for any $x,y$.\end{enumerate} \end{teorema} \begin{proof}[Proof] $\Rightarrow)$ $U_{xy}$ is affine by Corollary \ref{Uxy}. By Theorem \ref{afin}, $X$ and $U_{xy}$ are acyclic. $\Leftarrow)$ Let $z\in U_{xy}$.
The morphism $${\mathcal O}_{xy}\to {\mathcal O}_z\otimes_{{\mathcal O}(X)} {\mathcal O}_{xy}= {\mathcal O}_z\otimes_{{\mathcal O}(X)} {\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_y={\mathcal O}_{zx}\otimes_{{\mathcal O}(X)}{\mathcal O}_y={\mathcal O}_z\otimes_{{\mathcal O}(X)}{\mathcal O}_y={\mathcal O}_{zy}={\mathcal O}_z$$ is flat, since the morphism ${\mathcal O}(X)\to {\mathcal O}_z$ is flat. By Corollary \ref{coro2}, the morphisms ${\mathcal O}(X)\to \prod_{x\in X}{\mathcal O}_x$ and ${\mathcal O}(U_{xy})\to \prod_{z\in U_{xy}}{\mathcal O}_z$ are faithfully flat. Hence, $X$ is affine. \end{proof} \begin{teorema} \label{T5.9} A finite fr-space $X$ is schematic iff for any $x\leq y,y'$, \begin{enumerate} \item ${\mathcal O}_{y}\otimes_{{\mathcal O}_x}{\mathcal O}_{y'}={\mathcal O}_{yy'}$. \item $U_{yy'}$ is acyclic.\end{enumerate}\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ $U_x$ is an affine finite space. By Theorem \ref{afin'}, we are done. $\Leftarrow)$ $U_x$ is an affine finite space by Theorem \ref{afin'}, then $X$ is schematic.\end{proof} \begin{proposicion} \label{12} Let $X$ be an affine finite space. An open subset $U\subseteq X$ is affine iff it is acyclic.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ It follows from Theorem \ref{afin}. $\Leftarrow)$ $U$ satisfies conditions 1' and 3' of Theorem \ref{afin'}. The composite morphism of the epimorphism ${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_{y}\to {\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_y$ and the morphism $ {\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_{y}\to {\mathcal O}_{xy}$ is an isomorphism ($X$ is affine), then ${\mathcal O}_x\otimes_{{\mathcal O}(U)} {\mathcal O}_{y}\to {\mathcal O}_{xy}$ is an isomorphism. Besides, the morphism ${\mathcal O}(U)={\mathcal O}(X)\otimes_{{\mathcal O}(X)}{\mathcal O}(U)\to {\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}(U)\overset{\text{\ref{11}}}={\mathcal O}_x$ is flat, for any $x\in U$. Hence, $U$ is affine by Theorem \ref{afin'}. \end{proof} \begin{proposicion} \label{K4} Let $f\colon X\to Y$ be a schematic morphism and ${\mathcal M}$ a quasi-coherent ${\mathcal O}_X$-module. Then, $R^if_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module, for any $i\geq 0$. If $Y$ is affine, $R^if_*{\mathcal M}=\widetilde{H^i(X,{\mathcal M})}$. \end{proposicion} \begin{proof}[Proof] We can suppose that $Y$ is affine. Given $y\in Y$, the inclusion morphism $f^{-1}(U_y)\overset{j}\hookrightarrow X$ is affine: Let $U\subset X$ be an affine subset and $i$ be the composite morphism $U\hookrightarrow X\overset f\to Y$, which is an affine morphism. Then, $j^{-1}(U)=f^{-1}(U_y)\cap U=i^{-1}(U_y)$ is affine. Observe that $R^nf_*(j_*{\mathcal N})=R^n(f\circ j)_*{\mathcal N}=0$ for any $n>0$ and any quasi-coherent module ${\mathcal N}$, since $j$ and $f\circ j$ are affine morphisms. Likewise, $H^n(X,j_*{\mathcal N})=H^n(f^{-1}(U_y),{\mathcal N})=H^n(Y, (f\circ j)_*{\mathcal N})=0$, for any $n>0$. Denote ${\mathcal M}_{f^{-1}(U_y)}=j_*{\mathcal M}_{|f^{-1}(U_y)}$ and consider the obvious exact sequence of morphisms $$0\to {\mathcal M}\to \oplus_{y\in Y} {\mathcal M}_{f^{-1}(U_y)}\overset\pi\to {\mathcal M}'\to 0.$$ Then, $H^1(X,{\mathcal M})=\Coker \pi_X$ and $H^{n-1}(X,{\mathcal M}')=H^n(X,{\mathcal M})$ for any $n>1$. Besides, $R^1f_*{\mathcal M}=\Coker \pi$, which is quasi-coherent, and $R^nf_*{\mathcal M}=R^{n-1}f_*{\mathcal M}'$ for any $n>1$. Therefore, $R^1f_*{\mathcal M}=\widetilde{H^1(X,{\mathcal M})}$ since $Y$ is affine. Hence, $R^1f_*{\mathcal M}'=\widetilde{H^1(X,{\mathcal M}')}$, since ${\mathcal M}'$ is quasi-coherent.
By induction on $n$, $$R^nf_*{\mathcal M}=R^{n-1}f_*{\mathcal M}'=\widetilde{H^{n-1}(X,{\mathcal M}')}= \widetilde{H^{n}(X,{\mathcal M})}$$ for any $n>1$. \end{proof} This proposition, Theorem \ref{Tcohsch} and \cite{KG} Theorem 5.6 show that the definitions of schematic morphism given in this paper and in \cite{KG} are equivalent. \begin{lemma} \label{K1} Let $X$ be an $fr$-space and $\delta\colon X\to X\times X$, $\delta(x):=(x,x)$ be the diagonal morphism. Let ${\mathcal M}$ be an ${\mathcal O}_X$-module. $R^i\delta_*{\mathcal M}$ is quasi-coherent iff $$H^i(U_{pq},{\mathcal M})\otimes_{{\mathcal O}_p}{\mathcal O}_{p'}=H^i(U_{p'q},{\mathcal M}),$$ for any $p\leq p'$ and for any $q$.\end{lemma} \begin{proof}[Proof] $(R^i\delta_*{\mathcal M})_{(q,q')}=H^i(U_{qq'},{\mathcal M})$ and $$\aligned (R^i\delta_*{\mathcal M})_{(p,p')}\otimes_{{\mathcal O}_{(p,p')}} {\mathcal O}_{(q,q')} & =H^i(U_{pp'},{\mathcal M})\otimes_{{\mathcal O}_p\otimes {\mathcal O}_{p'}} ({\mathcal O}_{q}\otimes {\mathcal O}_{q'})\\ & = (H^i(U_{pp'},{\mathcal M})\otimes_{{\mathcal O}_p}{\mathcal O}_q)\otimes_{{\mathcal O}_{p'}}{\mathcal O}_{q'},\endaligned$$ for any $(p,p')\leq (q,q')$. Now, the proof is easily completed. \end{proof} \begin{proposicion} Let $X$ be an $fr$-space and $\delta\colon X\to X\times X$ the diagonal morphism. Let ${\mathcal M}$ be an ${\mathcal O}_X$-module. Then, $\delta_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_{X\times X}$-module iff ${\mathcal M}_{p'}\otimes_{{\mathcal O}_p}{\mathcal O}_{p''}={\mathcal M}_{p'p''}$, for any $p\leq p',p''$.\end{proposicion} \begin{proof}[Proof] It is a consequence of Lemma \ref{K1} and Proposition \ref{K2?}. \end{proof} \begin{vacio}[Cohomological characterization of schematic finite spaces (\cite{KS} 4.7,\,4.4\,):] \label{ccefe} Let $X$ be an $fr$-space and $\delta\colon X\to X\times X$ the diagonal morphism. $X$ is a schematic finite space iff $R^i\delta_*{\mathcal O}_X$ is a quasi-coherent module, for any $i\geq 0$.\end{vacio} \begin{proof}[Proof] $\Leftarrow)$ We have to prove that $U_p$ is affine. $U_p$ is acyclic, and satisfies property 2' of Theorem \ref{afin'}, by the previous proposition. We only need to prove that $U_{qq'}$ is acyclic, for any $q,q'\in U_p$: for any $i>0$, $$0=H^i(U_{q},{\mathcal O})\otimes_{{\mathcal O}_p}{\mathcal O}_{q'}=H^i(U_{pq},{\mathcal O})\otimes_{{\mathcal O}_p}{\mathcal O}_{q'}\overset{\text{\ref{K1}}}=H^i(U_{q'q},{\mathcal O})$$ (the first equality holds because $U_q$ has a minimum, by Proposition \ref{aciclicity}). $\Rightarrow)$ The diagonal morphism $\delta$ is schematic by Theorem \ref{Cdelta} and Theorem \ref{Tcohsch}. By Proposition \ref{K4}, we are done. \end{proof} \begin{corolario}[(\cite{KG} 4.5)\,] An $fr$-space $X$ is schematic iff for any open set $j\colon U_q\hookrightarrow X$, $R^ij_*{\mathcal O}_{U_q}$ is a quasi-coherent ${\mathcal O}_X$-module, for any $i$.\end{corolario} \begin{proof}[Proof] Let $\delta\colon X\to X\times X$ be the diagonal morphism. Then, $X$ is a schematic finite space iff $R^i\delta_*{\mathcal O}_X$ is a quasi-coherent module, for any $i$, which is equivalent to saying that $H^i(U_{pq},{\mathcal O})\otimes_{{\mathcal O}_p}{\mathcal O}_{p'}=H^i(U_{p'q},{\mathcal O})$, for any $p\leq p'$, and any $q$, that is to say, $R^ij_*{\mathcal O}_{U_q}$ is a quasi-coherent ${\mathcal O}_X$-module, for any $i$ and any open set $j\colon U_q\hookrightarrow X$. \end{proof} A scheme is said to be a semiseparated scheme if the intersection of two affine open sets is affine. For example, the line with a double point is a semiseparated scheme (but it is not separated). The plane with a double point is not semiseparated, but it is quasi-separated.
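These phenomena can be mirrored by schematic finite spaces. As a sketch (over a field $k$): let $X=\{p_1,p_2,q\}$ with $p_1,p_2<q$, ${\mathcal O}_{p_1}={\mathcal O}_{p_2}=k[t]$ and ${\mathcal O}_q=k[t,t^{-1}]$, the restriction morphisms being the localization morphisms. The conditions of Proposition \ref{4pg20} reduce to the faithful flatness of $k[t]\to k[t]\times k[t,t^{-1}]$ and the equality $k[t,t^{-1}]\otimes_{k[t]}k[t,t^{-1}]=k[t,t^{-1}]$, so $X$ is a schematic finite space: a finite model of the line with a double point. Every $U_{xy}$ has a minimum, hence it is acyclic, and $X$ is semiseparated in the sense of the definition below; one can check that $X$ is not affine, although $U_{p_1}$, $U_{p_2}$ and $U_{p_1}\cap U_{p_2}=\{q\}$ are.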
\begin{definicion} A ringed finite space $X$ is said to be semiseparated if the open sets $U_{pq}$ are acyclic, for any $p,q\in X$.\end{definicion} \begin{proposicion} Let $X$ be a ringed finite space and let $\delta\colon X\to X\times X$ be the diagonal morphism. $X$ is semiseparated iff $R^i\delta_*{\mathcal O}_X=0$, for any $i>0$.\end{proposicion} \begin{proof}[Proof] $(R^i\delta_*{\mathcal O}_X)_{(p,q)}=H^i(U_{pq},{\mathcal O})$, and $R^i\delta_*{\mathcal O}_X=0$ iff $(R^i\delta_*{\mathcal O}_X)_{(p,q)}=0$ for any $p,q\in X$. Hence, $X$ is semiseparated iff $R^i\delta_*{\mathcal O}_X=0$, for any $i>0$. \end{proof} \begin{teorema} Let $X$ be an $fr$-space and $\delta\colon X\to X\times X$ be the diagonal morphism. $X$ is a semiseparated schematic finite space iff $R^i\delta_*{\mathcal O}_X=0$, for any $i>0$ and $\delta_*{\mathcal O}_X$ is a quasi-coherent module.\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ By \ref{ccefe}, $\delta_*{\mathcal O}_X$ is quasi-coherent, and $R^i\delta_*{\mathcal O}_X=0$ by the previous proposition. $\Leftarrow)$ $X$ is a schematic finite space, by \ref{ccefe}, and it is semiseparated by the previous proposition. \end{proof} \begin{proposicion} A schematic finite space is semiseparated iff it satisfies any of the following equivalent conditions: \begin{enumerate} \item The intersection of any two affine open subspaces is affine. \item There exists an affine open covering of $X$, ${\mathcal U}=\{U_1,\ldots, U_n\}$ such that $U_i\cap U_j$ is affine for any $i,j$. \end{enumerate} \end{proposicion} \begin{proof}[Proof] Assume $X$ is a semiseparated schematic finite space. Let $\delta\colon X\to X\times X,$ $\delta(x)=(x,x)$ be the diagonal morphism and $U$, $U'$ two affine open subspaces. Since $R^i\delta_*{\mathcal O}_X=0$, for any $i>0$, $H^i(U\cap U',{\mathcal O})=H^i(U\times U',\delta_*{\mathcal O})=0$. By Proposition \ref{12} (applied to the affine finite space $U$), $U\cap U'$ is affine. Assume that there exists an affine open covering of $X$, ${\mathcal U}=\{U_1,\ldots, U_n\}$ such that $U_i\cap U_j$ is affine for any $i,j$. $R^i\delta_*{\mathcal O}_X$ is quasi-coherent, by \ref{ccefe}. $R^i\delta_*{\mathcal O}_X(U_i\times U_j)=H^i(U_i\cap U_j,{\mathcal O}_X)=0$, then $R^i\delta_*{\mathcal O}_X=0$ and $X$ is semiseparated. \end{proof} All the examples in Examples \ref{Eejemplos} are semiseparated finite spaces. Finally, let us give some cohomological characterizations of schematic morphisms. \begin{proposicion} \label{ri=0} Let $X$ be an affine finite space and $Y$ a schematic finite space. A morphism of ringed spaces $f\colon X\to Y$ is affine iff $f_*{\mathcal O}_X$ is quasi-coherent and $R^if_*{\mathcal O}_X=0$, for any $i>0$. \end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ Affine finite spaces are acyclic, then $(R^if_*{\mathcal O}_X)_y=H^i(f^{-1}(U_y),{\mathcal O}_X)=0$, for any $i>0$ and any $y\in Y$. Hence, $R^if_*{\mathcal O}_X=0$, for any $i>0$. $\Leftarrow)$ Let $U\subseteq Y$ be an affine open subspace. $H^i(f^{-1}(U),{\mathcal O}_X)= H^i(U,f_*{\mathcal O}_X)\overset{\text{\ref{2}}}=0$, for any $i>0$, then $f^{-1}(U)$ is acyclic, therefore it is affine by Proposition \ref{12}.\end{proof} \begin{proposicion} A morphism of ringed spaces $f\colon X\to Y$ between schematic finite spaces is affine iff $f_*{\mathcal O}_X$ is quasi-coherent, $R^if_*{\mathcal O}_X=0$ for any $i>0$ and there exists an open covering $\{U_i\}$ of $Y$ such that $f^{-1}(U_i)$ is affine, for any $i$.
\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ $(R^if_*{\mathcal O}_X)_y=H^i(f^{-1}(U_y),{\mathcal O}_X)=0$, for any $i>0$ and any $y\in Y$. Hence, $R^if_*{\mathcal O}_X=0$, for any $i>0$. $\Leftarrow)$ The morphisms $f^{-1}(U_i)\to U_i$ are affine, by the previous proposition. Then, $f$ is affine. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a morphism of ringed spaces between schematic finite spaces. Then, $f$ is schematic iff $\,\Gamma_f\colon X\to X\times_{\mathbb Z} Y$, $\Gamma_f(x)=(x,f(x))$ is schematic.\end{proposicion} \begin{proof}[Proof] $\Leftarrow)$ It is easy to check that $\pi_2\colon X\times_{\mathbb Z} Y\to Y$, $\pi_2(x,y)=y$ is schematic. Then, $f$ is schematic because $f= \pi_2\circ \Gamma_f$ and $\pi_2$ and $\Gamma_f$ are schematic. $\Rightarrow)$ Let $x\in X$ and $(x',y)\in X\times_{\mathbb Z} Y$ (where $(x,f(x))\leq (x',y)$). $U_{x(x',y)}=U_{x'}\cap U_{xy}$ is affine because it is the intersection of two affine open subsets of the affine finite space $U_x$. Observe that $$\aligned {\mathcal O}_{x}\otimes_{{\mathcal O}_{(x,f(x))}} {\mathcal O}_{(x',y)} & ={\mathcal O}_{x}\otimes_{{\mathcal O}_{x}\otimes_{\mathbb Z} {\mathcal O}_{f(x)}} {\mathcal O}_{x'}\otimes_{\mathbb Z} {\mathcal O}_{y}={\mathcal O}_{x'}\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_{y} \\ & \overset{\text{\ref{K6}}}={\mathcal O}_{x'y} ={\mathcal O}_{x(x',y)}.\endaligned$$ Then, $\Gamma_f$ is schematic, by Proposition \ref{K6}. \end{proof} \begin{teorema} A morphism of ringed spaces $f\colon X\to Y$ between schematic finite spaces is schematic iff $R^i{\Gamma_f}_*{\mathcal O}_X$ is a quasi-coherent ${\mathcal O}_{X\times Y}$-module, for any $i\geq 0$.\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ By Proposition \ref{K4}, $R^i{\Gamma_f}_*{\mathcal O}_X$ is a quasi-coherent ${\mathcal O}_{X\times Y}$-module, for any $i\geq 0$. $\Leftarrow)$ $R^i{\Gamma_f}_*{\mathcal O}_X$ is a quasi-coherent ${\mathcal O}_{X\times Y}$-module, for any $i\geq 0$. Then, $$H^0(U_x,{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y=H^0(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}=H^0(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{(x,f(x))}}{\mathcal O}_{(x,y)}=H^0(U_{xy},{\mathcal O}_X),$$ for any $x$ and $y\geq f(x)$. Therefore, ${\mathcal O}_{x}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}={\mathcal O}_{xy}$. Besides, $$0=H^i(U_{x},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}=H^i(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}=H^i(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{(x,f(x))}}{\mathcal O}_{(x,y)}=H^i(U_{xy},{\mathcal O}_X),$$ for any $i>0$. Then, the open subsets $U_{xy}$ are acyclic, hence $f$ is schematic by Proposition \ref{K6}. \end{proof} \begin{teorema} Let $f\colon X\to Y$ be a morphism of ringed spaces between schematic finite spaces. Let $x\in X$ and let $f_{U_x}$ be the composite morphism $U_x\hookrightarrow X\to Y$. Then, $f$ is schematic iff $R^i{f_{U_x}}_*{\mathcal O}_{U_x}$ is a quasi-coherent ${\mathcal O}_{Y}$-module, for any $i\geq 0$ and any $x\in X$.\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ If $f$ is schematic, $f_{U_x}$ is schematic and $R^i{f_{U_x}}_*{\mathcal O}_{U_x}$ is a quasi-coherent ${\mathcal O}_{Y}$-module, for any $i\geq 0$, by Proposition \ref{K4}. $\Leftarrow)$ $R^i{f_{U_x}}_*{\mathcal O}_{U_x}$ is a quasi-coherent ${\mathcal O}_{Y}$-module. Then, $$H^0(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{{\mathcal O}_{y}}=H^0(U_{xy},{\mathcal O}_X),$$ for any $x$ and $y\geq f(x)$.
Therefore, ${\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y={\mathcal O}_{xy}$. Besides, $$0=H^i(U_{x},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}=H^i(U_{xf(x)},{\mathcal O}_X)\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{y}=H^i(U_{xy},{\mathcal O}_X),\,\text{for any } i>0.$$ Then, the open sets $U_{xy}$ are acyclic. By Proposition \ref{K6}, $f$ is schematic. \end{proof} \section{Quasi-isomorphisms} \begin{definicion} A schematic morphism $f\colon X\to Y$ is said to be a quasi-isomorphism if \begin{enumerate} \item f is affine. \item $f_*{\mathcal O}_X={\mathcal O}_Y$.\end{enumerate} \end{definicion} If $f\colon X\to Y$ is a quasi-isomorphism we shall say that $X$ is quasi-isomorphic to $Y$. \begin{ejemplos} \label{E8.2} \begin{enumerate} \item If $X$ is an affine finite space, the morphism $X\to (*,{\mathcal O}(X))$ is a quasi-isomorphism. \item Let $X$ be a schematic finite space and let $\tilde X$ be the Kolmogorov quotient of $X$. The quotient morphism $\pi\colon X\to \tilde X$ is a quasi-isomorphism (see Example \ref{Kolmogorov}.3.). \item If $X$ is a schematic finite $T_0$-topological space, $X_M \hookrightarrow X$ is a quasi-isomorphism. \item Let $f\colon X\to Y$ be a schematic morphism. $(Y,f_*{\mathcal O}_X)$ is a schematic finite space by Example \ref{E5.8}. Let us prove that the obvious ringed morphism $f'\colon X\to (Y,f_*{\mathcal O}_X)$, $f'(x)=f(x)$ is schematic. By Theorem \ref{K7}, the morphism $${\mathcal O}_x\otimes_{(f_*{\mathcal O}_X)_{f(x)}} (f_*{\mathcal O}_X)_y= {\mathcal O}_x\otimes_{(f_*{\mathcal O}_X)_{f(x)}} (f_*{\mathcal O}_X)_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y={\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_y \to \prod_{z\in U_{xy}}{\mathcal O}_z$$ is surjective on spectra. Again by Theorem \ref{K7}, $f'$ is schematic. We have the obvious commutative diagram $$\xymatrix{ X \ar[rr]^-f \ar[rd]^-{f'} & & Y \\ & (Y,f_*{\mathcal O}_X) \ar[ru]^-{\Id} &}$$ $\Id$ is affine. If $f$ is affine, then $f'$ is a quasi-isomorphism. \end{enumerate} \end{ejemplos} \begin{definicion} Let $f\colon X\to Y$ be a schematic morphism. We shall say that $f$ is flat if the morphism ${\mathcal O}_{Y,f(x)}\to {\mathcal O}_{X,x}$ is flat, for any $x\in X$. We shall say that $f$ is faithfully flat if the morphism ${\mathcal O}_{Y,y}\to \prod_{x\in f^{-1}(U_y)}{\mathcal O}_{X,x}$ is faithfully flat, for any $y\in Y$. \end{definicion} If $\{U_i\}$ is an open covering of $X$, the natural morphism $\coprod_i U_i\to X$ is faithfully flat. \begin{observacion} \label{O12.3} Quasi-isomorphisms are faithfully flat morphisms: Given $y\in Y$, $f^{-1}(U_y)$ is affine, then the morphism $${\mathcal O}_y=(f_*{\mathcal O}_X)_y={\mathcal O}_X(f^{-1}(U_y))\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_x$$ is faithfully flat. \end{observacion} \begin{proposicion} Let $X$ be a schematic finite space, ${\mathcal U}=\{U_1,\ldots,U_n\}$ a minimal affine open covering of $X$ and $Y$ the ringed finite space associated with ${\mathcal U}$. Then, $Y$ is a schematic finite space and the quotient morphism $\pi\colon X\to Y$ is a quasi-isomorphism.\end{proposicion} \begin{proof}[Proof] $Y=\{y_1,\ldots,y_n\}$, where $\pi^{-1}(U_{y_i})=U_i$. Recall that $\pi_*{\mathcal O}_X={\mathcal O}_Y$. Let $y_1\leq y_2$, then $${\mathcal O}_{y_1}={\mathcal O}_X(U_1)\to {\mathcal O}_X(U_2)={\mathcal O}_{y_2}$$ is a flat morphism by Lemma \ref{flat}. 
Let $y_i,y_j\geq y_k$, then $$\aligned {\mathcal O}_{y_i}\otimes_{{\mathcal O}_{y_k}}{\mathcal O}_{y_j} & ={\mathcal O}_X(U_i)\otimes_{{\mathcal O}_X(U_k)}{\mathcal O}_X(U_j)\overset{\text{\ref{11}}}= {\mathcal O}_X(U_i\cap U_j)={\mathcal O}_X(\pi^{-1}(U_{y_i}\cap U_{y_j}))\\ &={\mathcal O}_Y(U_{y_i}\cap U_{y_j}) ={\mathcal O}_{y_iy_j}.\endaligned$$ $U_i\cap U_j$ is an affine finite space by Proposition \ref{inter}. $U_i\cap U_j=\cupa{U_k\subset U_i\cap U_j} U_z$. The morphisms ${\mathcal O}(U_k)\to \proda{x\in U_k}{\mathcal O}_x$, ${\mathcal O}(U_i\cap U_j)\to \proda{U_k\subset U_i\cap U_j,x\in U_k} {\mathcal O}_x$ are faithfully flat. Then, the morphism ${\mathcal O}(U_i\cap U_j)\to \proda{U_k\subset U_i\cap U_j} {\mathcal O}(U_k)$ is faithfully flat. Therefore, the morphism ${\mathcal O}_{y_iy_j}\to \proda{y_k\in U_{y_iy_j}} {\mathcal O}_{y_k}$ is faithfully flat. Then, $Y$ is a schematic finite space. Finally, $\pi$ is affine by Proposition \ref{hmm}. \end{proof} \begin{proposicion} \label{P9.6} The composition of quasi-isomorphisms is a quasi-isomorphism. \end{proposicion} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{T12.7} Let $f\colon X\to Y$ be a schematic morphism. The functors $$f_*\colon\,{\bf Qc\text{-}Mod}_X\,\to\,{\bf Qc\text{-}Mod}_Y\,\text{ and }\, f^*\colon\,{\bf Qc\text{-}Mod}_Y\,\to\,{\bf Qc\text{-}Mod}_X\,$$ are mutually inverse (i.e., the natural morphisms $\mathcal M\to f_*f^*{\mathcal M}$, $f^*f_*{\mathcal N}\to {\mathcal N}$ are isomorphisms) iff $f$ is a quasi-isomorphism.\end{teorema}\end{minipage}} \begin{proof}[Proof] $\Leftarrow)$ The morphism $f^*f_*{\mathcal M}\to {\mathcal M}$ is an isomorphism: It is a local property on $Y$. We can suppose that $Y$ is an affine finite space (then $X$ is affine). Consider a free presentation of ${\mathcal M}$, $\oplus_I {\mathcal O}_X\to \oplus_J{\mathcal O}_X\to {\mathcal M}\to 0$. Taking $f_*$, which is an exact functor because $R^1f_*=0$, one has the exact sequence of morphisms $\oplus_I {\mathcal O}_Y\to \oplus_J{\mathcal O}_Y\to f_*{\mathcal M}\to 0$. Taking $f^*$, one has the exact sequence of morphisms $\oplus_I {\mathcal O}_X\to \oplus_J{\mathcal O}_X\to f^*f_*{\mathcal M}\to 0$, then $f^*f_*{\mathcal M}={\mathcal M}$. Likewise, $f_*f^*{\mathcal N}={\mathcal N}$. $\Rightarrow)$ ${\mathcal O}_Y=f_*f^*{\mathcal O}_Y=f_*{\mathcal O}_X$. Obviously, $f_*$ is an exact functor. By Theorem \ref{Tafex}, $f$ is affine. \end{proof} \begin{corolario} \label{superafin} Let $f\colon X\to Y$ be a quasi-isomorphism. $Y$ is affine iff $X$ is affine.\end{corolario} \begin{proof}[Proof] $\Leftarrow)$ For any quasi-coherent ${\mathcal O}_Y$-module ${\mathcal N}$, ${\mathcal N}=f_*f^*{\mathcal N}$, then $$H^1(Y,{\mathcal N})=H^1(X,f^*{\mathcal N})=0.$$ By the Serre Theorem $Y$ is affine.\end{proof} \begin{corolario} \label{qc-iso} Let $X\overset f\to Y\overset g\to Z$ be schematic morphisms and assume $g\circ f$ is a quasi-isomorphism. Then, \begin{enumerate} \item If $g$ is a quasi-isomorphism, then $f$ is a quasi-isomorphism. \item If $f$ is a quasi-isomorphism, then $g$ is a quasi-isomorphism. \end{enumerate}\end{corolario} \begin{proof}[Proof] 1. Considering the diagram $$\xymatrix{\,{\bf Qc\text{-}Mod}_X\, \ar[r]^-{f_*} \ar@/^{5mm}/[rr]^-{(g\circ f)_*} &\,{\bf Qc\text{-}Mod}_Y\, \ar[r]^-{g_*} \ar@<1ex>[l]^-{f^*} &\,{\bf Qc\text{-}Mod}_Z\, \ar@<1ex>[l]^-{g^*} \ar@/^{9mm}/[ll]^-{(g\circ f)^*}}$$ it is easy to prove that $f_*$ and $f^*$ are mutually inverse functors. 2. Proceed likewise. 
\end{proof} \begin{corolario} Let $X$ and $Y$ be affine finite spaces. Then, a schematic morphism $f\colon X\to Y$ is a quasi-isomorphism iff ${\mathcal O}_Y(Y)={\mathcal O}_X(X)$.\end{corolario} \begin{proof}[Proof] $\Leftarrow)$ Observe that the diagram $$\xymatrix{X \ar[r]^-f \ar[d] & Y\ar[d]\\ (*,{\mathcal O}_X(X)) \ar[r] & (*,{\mathcal O}_Y(Y))}$$ is commutative, $(X,{\mathcal O}_X)$ is quasi-isomorphic to $(*,{\mathcal O}_X(X))$ and $(Y,{\mathcal O}_Y)$ is quasi-isomorphic to $(*,{\mathcal O}_Y(Y))$. \end{proof} \begin{corolario} \label{proposiciontonta} Let $f\colon X\to Y$ be a schematic morphism and let $f_{M}\colon X_M\to Y_M$ be the induced morphism. Then, $f$ is a quasi-isomorphism iff $f_{M}$ is a quasi-isomorphism. \end{corolario} \begin{proof}[Proof] It is an immediate consequence of Corollary \ref{qc-iso}.\end{proof} \begin{corolario} \label{C11.12} Let $f\colon X\to X'$ be a quasi-isomorphism , $X''$ a schematic finite space and $g\colon X'\to X''$ a morphism of ringed spaces. Then, $g$ is schematic (resp. affine) iff $g\circ f$ is schematic (resp. affine) \end{corolario} \begin{proof}[Proof] Recall Theorem \ref{Tcohsch} (resp. Theorem \ref{Tafex}). \end{proof} \begin{proposicion} \label{P11.13} Let $f\colon X\to X'$ be an affine morphism of schematic spaces. Assume that $X'$ is $T_0$. Let ${\mathcal U}:=\{f^{-1}(U_{x'})\}_{x'\in X'}$, let $X/\!\sim$ be the schematic space associated with the open covering ${\mathcal U}$, and $\pi\colon X\to X/\!\sim$ the quotient morphism. The morphism $f'\colon X/\!\sim\,\to X'$, $f'([x])=f(x)$, induced by $f$, is affine and $Y$ is homeomorphic to $\Ima f$. \end{proposicion} \begin{proof}[Proof] By Corollary \ref{C11.12}, we only have to prove that $Y$ is homeomorphic to $\Ima f$. The morphism $f'\colon X/\!\sim\,\to \Ima f$ is clearly bijective and continuous. Given $[x],[x']\in X/\!\sim$, if $f'([x])\leq f'([x'])$, then $f(x)\leq f(x')$ and $U_{f(x')}\subseteq U_{f(x)}$. Hence, $f^{-1}(U_{f(x')})\subseteq f^{-1}(U_{f(x)})$ and $U_{[x']}\subseteq U_{[x]}$. Therefore, $[x]\leq [x']$. That is, $f'$ is a homeomorphism. \end{proof} \begin{lemma} \label{lematonto} Let $h\colon X\to Y$ be a quasi-isomorphism. Then, $Y\backslash h(X)$ is a set of removable points of $Y$.\end{lemma} \begin{proof}[Proof] Let $y\in Y\backslash h(X)$. Since $h^{-1}(U_y)$ is affine, the morphism $${\mathcal O}_Y(U_y)={\mathcal O}_{X}(h^{-1}(U_y))\to \prod_{x'\in h^{-1}(U_y)} {\mathcal O}_{X,x'}$$ is faithfully flat. This morphism factors through the morphism $${\mathcal O}_Y(U_y)\to \prod_{h(x')\in U_y} {\mathcal O}_{Y,h(x')},$$ then this last morphism is faithfully flat. Hence, $y$ is a removable point of $Y$. \end{proof} \hskip-0.65cm\colorbox{white}{\,\begin{minipage}{15.15cm} \begin{teorema} \label{comosonqc} Let $f\colon X\to Y$ be a quasi-isomorphism. Assume that $Y$ is $T_0$. Consider the affine open covering of $X$, $\{f^{-1}(U_y)\}_{y\in Y}$ and let $X/\!\sim$ be the associated schematic finite space. Then, $f$ is the composition of the quotient morphism $\pi\colon X\to X/\!\sim$ and an isomorphism $f'\colon X/\!\sim\,\to Y\backslash P$, $f'([x])=f(x)$, where $P$ is a set of removable points of $Y$. Therefore, if $f\colon X\to Y$ is a quasi-isomorphism and $Y$ is minimal, $f$ is the composition of the quotient morphism $X\to X/\!\sim$ and an isomorphism $X/\!\sim\,\simeq Y$. \end{teorema}\end{minipage}} \begin{proof}[Proof] By Lemma \ref{lematonto}, we can suppose that $f$ is surjective. 
By Proposition \ref{P11.13}, $f'\colon Y'\to Y$ is a homeomorphism and it is affine. Finally, ${\mathcal O}_{Y,f'([x])}=(f_*{\mathcal O}_{X})_{f(x)}={\mathcal O}_{Y',[x]}$, for any $[x]$. \end{proof} \section{Change of base and flat schematic morphisms} \begin{proposicion}[(\cite{KG} 5.27)\,] \label{XxY} Let $X$, $X'$ and $Y$ schematic finite spaces and $f\colon X\to Y$ and $f'\colon X'\to Y$ schematic morphisms. Then, \begin{enumerate} \item $X\times_Y X'$ is a schematic finite space. \item If $X$, $X'$ and $Y$ are affine, then ${\mathcal O}(X\times_YX')={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(X')$ and $X\times_YX'$ is affine. \item Given a commutative diagram of schematic morphisms $$\xymatrix@R=8pt{ U \ar[rd]^-g \ar[rr]^-h & & X \ar[rd]^-f & \\ & V \ar[rr] & & Y\\ U' \ar[ru]_-{g'} \ar[rr]_-{h'}& & X' \ar[ru]_-{f'}} $$ the morphism $h\times h'\colon U\times_VU'\to X\times_Y X'$, $h\times h'(u,u'):=(h(u),h'(u'))$ is schematic. \item $\pi\colon X\times_Y X'\to X$, $\pi(x,x')=x$, is schematic. \item If $h\colon X\to X'$ is a schematic $Y$-morphism, then $\Gamma_h\colon X\to X\times_Y X'$, $\Gamma_h(x):=(x,h(x))$ is schematic. \item The diagonal morphism $\delta\colon X\to X\times_Y X$, $\delta(x)=(x,x)$ is schematic. \end{enumerate} \end{proposicion} \begin{proof}[Proof] 1. We only need to prove 2. 2. $X\times_{{\mathcal O}(Y)} X'$ is an affine schematic space and ${\mathcal O}(X\times_{{\mathcal O}(Y)} X')={\mathcal O}(X)\otimes_{{\mathcal O}(Y)} {\mathcal O}(X')$, by Proposition \ref{ProdAf}. Let $(x,x')\in X\times_{{\mathcal O}(Y)} X'$. If $f(x)=f'(x')$, then $${\mathcal O}_x\otimes_{{\mathcal O}(Y)}{\mathcal O}_{x'}= {\mathcal O}_{x}\otimes_{{\mathcal O}_{f(x)}}({\mathcal O}_{f(x)}\otimes_{{\mathcal O}(Y)}{\mathcal O}_{f(x)})\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'}\overset{\text{\ref{11}}}= {\mathcal O}_{x}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'}={\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'}$$ If $f(x)\neq f'(x')$, then $(x,x')\in X\times_{{\mathcal O}(Y)}X'$ is a removable point: Consider the morphism $f_x\colon U_x \to Y,$ $f_x(z):=f(z)$. Then, ${\mathcal O}_{xy}=(f_{x*}{\mathcal O}_{U_x})_{y}=(f_{x*}{\mathcal O}_{U_x})(Y)\otimes_{{\mathcal O}(Y)} {\mathcal O}_{y}={\mathcal O}_x\otimes_{{\mathcal O}(Y)} {\mathcal O}_{y}$. Observe that $U_{x'f(x)}\underset\neq\subset X'$ and it is affine since $f'_{x'}\colon U_{x'} \to Y,$ $f'_{x'}(z):=f'(z)$ is affine. The morphism $${\mathcal O}_x\otimes_{{\mathcal O}(Y)} {\mathcal O}_{x'} ={\mathcal O}_{xf(x)} \otimes_{{\mathcal O}(Y)} {\mathcal O}_{x'}={\mathcal O}_x\otimes_{{\mathcal O}(Y)} {\mathcal O}_{f(x)} \otimes_{{\mathcal O}(Y)} {\mathcal O}_{x'}={\mathcal O}_x\otimes_{{\mathcal O}(Y)} {\mathcal O}_{x'f(x)}\to \proda{z\in U_{x'f(x)} } {\mathcal O}_x \otimes_{{\mathcal O}(Y)} {\mathcal O}_z $$ is faithfully flat. Therefore, $(x,x')$ is removable. In conclusion, $X\times_Y X'=(X\times_{{\mathcal O}(Y)} X')\backslash \{$A set of removable points$\}$, then $X\times_Y X'$ is affine and ${\mathcal O}(X\times_Y X')={\mathcal O}(X\times_{{\mathcal O}(Y)} X')={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(X')$. 3. 
By Proposition \ref{K6}, given $(u,u')\in U\times_VU'$, $(x,x')\in X\times_Y X'$ (where $(x,x')\geq (h(u), h'(u'))$), we have to prove that $U_{(u,u')(x,x')}$ is acyclic and ${\mathcal O}_{(u,u')(h(u),h(u'))}\otimes_{{\mathcal O}_{(h(u),h(u'))}} {\mathcal O}_{(x,x')}={\mathcal O}_{(u,u')( x,x')}$: $U_{(u,u')(x,x')} =U_{ux}\times_{U_{g(u)f(x)}} U_{u'x'}$, which is affine (then acyclic) and $$\aligned & {\mathcal O}_{(u,u')} \otimes_{{\mathcal O}_{(h(u),h(u'))}} {\mathcal O}_{(x,x')} =({\mathcal O}_{u}\otimes_{{\mathcal O}_{g(u)}} {\mathcal O}_{u'})\otimes_{{\mathcal O}_{h(u)}\otimes_{{\mathcal O}_{f(h(u))}}{\mathcal O}_{h'(u')}} ({\mathcal O}_{x}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'}) \\ & = ({\mathcal O}_u\otimes_{{\mathcal O}_{h(u)}}{\mathcal O}_x)\otimes_{{\mathcal O}_{g(u)}\otimes_{{\mathcal O}_{f(h(u))}}{\mathcal O}_{f(x)}} ({\mathcal O}_{u'}\otimes_{{\mathcal O}_{h'(u')}}{\mathcal O}_{x'}) \overset{\text{\ref{K6}}}={\mathcal O}_{u x}\otimes_{{\mathcal O}_{g(u)f(x)}} {\mathcal O}_{u' x'} ={\mathcal O}_{(u,u')(x,x')}. \endaligned$$ 4.,5. and 6. are particular cases of 3. \end{proof} Obviously, $$\Hom_Y(Z,X\times_Y X')=\Hom_Y(Z,X)\times \Hom_Y(Z,X')$$ for any schematic finite $Y$-spaces $Z,X,X'$. \begin{proposicion} \label{qc1} Affine morphisms and quasi-isomorphisms are stable by base change. \end{proposicion} \begin{proof}[Proof] Let $f\colon X\to Y$ be an affine morphism and $Y'\to Y$ a schematic morphism. In order to prove that the schematic morphism $X\times_YY'\to Y'$ is affine, it is sufficient to prove that $X\times_YU_{y'}$ is affine, for any $y'\in Y'$. Observe that $X\times_YU_{y'}=f^{-1}(U_{f(y')})\times_{U_{f(y')} }U_{y'}$, which is affine because $f^{-1}(U_{f(y')})$ is affine. Let $f$ be a quasi-isomorphism. We only have to prove that ${\mathcal O}(X\times_YU_{y'})={\mathcal O}_{y'}$: $${\mathcal O}(X\times_YU_{y'})={\mathcal O}(f^{-1}(U_{f(y')}))\times_{U_{f(y')}}U_{y'})= {\mathcal O}(f^{-1}(U_{f(y')}))\otimes_{{\mathcal O}_{f(y')}} {\mathcal O}_{y'} ={{\mathcal O}_{f(y')}}\otimes_{{\mathcal O}_{f(y')}} {\mathcal O}_{y'}={\mathcal O}_{y'}.$$ \end{proof} \begin{lemma} \label{L11.3} Let $f\colon X\to Y$ and $g\colon Y'\to Y$ be schematic morphisms and let $g'\colon X\times_Y Y'\to X$ be defined by $g'(x,y'):=x$. Let ${\mathcal M}$ be a quasi-coherent ${\mathcal O}_X$-module. If $X,Y$ and $Y'$ are affine, then $$\Gamma(X\times_Y Y',g'^*{\mathcal M})=\Gamma(X,{\mathcal M})\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y').$$ \end{lemma} \begin{proof}[Proof] Consider an exact sequence of ${\mathcal O}_X$-modules $\oplusa{I}\, {\mathcal O}_X\to \oplusa{J}\,{\mathcal O}_X\to {\mathcal M}\to 0$. 1. Taking $g'^*$, $\oplusa{I}\, {\mathcal O}_{X\times_Y Y'}\to \oplusa{J}\,{\mathcal O}_{X\times_Y Y'}\to g'^*{\mathcal M}\to 0$ is exact. Taking sections, the sequence $\oplusa{I} \,{\mathcal O}(X\times_Y Y')\to \oplusa{J}\,{\mathcal O}(X\times_Y Y')\to g'^*{\mathcal M}(X\times_Y Y')\to 0$ is exact. By Proposition \ref{XxY}, ${\mathcal O}(X\times_Y Y')={\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')$. Hence, the sequence of morphisms $$\oplusa{I}\, {\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')\to \oplusa{J}\,{\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')\to g'^*{\mathcal M}(X\times_Y Y')\to 0$$ is exact. 2. The sequence of ${\mathcal O}(X)$-modules $\oplusa{I} \,{\mathcal O}(X)\to \oplusa{J}\,{\mathcal O}(X)\to {\mathcal M}(X)\to 0$ is exact. 
Hence, the sequence $\oplusa{I} \,{\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')\to \oplusa{J} \,{\mathcal O}(X)\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')\to {\mathcal M}(X)\otimes_{{\mathcal O}(Y)} {\mathcal O}(Y')\to 0$ is exact. 3. Therefore, the natural morphism $\Gamma(X,{\mathcal M})\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')\to \Gamma(X\times_Y Y',g'^*{\mathcal M})$ is an isomorphism. \end{proof} \begin{teorema} \label{Tcbp} Let $f\colon X\to Y$ be a schematic morphism and $g\colon Y'\to Y$ a flat schematic morphism. Denote $f'\colon X\times_Y Y'\to Y'$, $f'(x,y')=y'$, $g'\colon X\times_Y Y'\to X$, $g'(x,y')=x$ the induced morphisms. Then, the natural morphism $$g^*R^if_*\mathcal M\to R^if'_*(g'^*\mathcal M)$$ is an isomorphism.\end{teorema} \begin{proof}[Proof] We have to prove that the morphism is an isomorphism on stalks at $z$, for any $z\in Y'$. That is to say, we have to prove that the morphism $$H^i(f^{-1}(U_{g(z)}), {\mathcal M})\otimes_{{\mathcal O}_{g(z)}} {\mathcal O}_z\to H^i(f^{-1}(U_{g(z)})\times_{U_{g(z)}} U_z, g'^*{\mathcal M})$$ is an isomorphism. We can suppose that $Y'=U_z$, $Y=U_{g(z)}$ and $X=f^{-1}(U_{g(z)})$. Then, $g'$ is an affine morphism and $H^i(X\times_Y Y', g'^*{\mathcal M})=H^i(X,g'_*g'^*{\mathcal M})$. By Lemma \ref{L11.3}, $$(g'_*g'^*{\mathcal M})_x=\Gamma(U_x\times_{Y}Y',g'^*{\mathcal M})= {\mathcal M}_x\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y').$$ Hence, $C^{\displaystyle \cdot}(g'_*g'^*{\mathcal M})=(C^{\displaystyle \cdot}{\mathcal M})\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')$ and $H^i(X,g'_*g'^*{\mathcal M})=H^i(X,{\mathcal M})\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y')$. That is, $$H^i(X\times_Y Y', g'^*{\mathcal M})=H^i(X,g'_*g'^*{\mathcal M})=H^i(X,{\mathcal M})\otimes_{{\mathcal O}(Y)}{\mathcal O}(Y').$$ \end{proof} \begin{proposicion} \label{Pff} Let $X$ and $Y$ be schematic finite spaces and $f\colon X\to Y$ a schematic morphism. Then, $f$ is flat (resp. faithfully flat) iff the functor $$f^*\colon\,{\bf Qc\text{-}Mod}_Y\,\to\,{\bf Qc\text{-}Mod}_X, \,\,\mathcal M \functor f^*\mathcal M$$ is exact (resp. faithfully exact). \end{proposicion} \begin{proof}[Proof] Obviously, if $f$ is flat then $f^*$ is exact. If $f$ is faithfully flat then $f^*$ is faithfully exact: We only have to check that ${\mathcal M}=0$ if $f^*{\mathcal M}=0$. Given $y\in Y$, the morphism ${\mathcal O}_y\hookrightarrow \prod_{x\in f^{-1}(U_y)} {\mathcal O}_x$ is faithfully flat. Tensoring by ${\mathcal M}_y\otimes_{{\mathcal O}_y}$ one has the injective morphism $$\aligned {\mathcal M}_y\hookrightarrow \prod_{x\in f^{-1}(U_y)} {\mathcal M}_y\otimes_{{\mathcal O}_y}{\mathcal O}_x & =\prod_{x\in f^{-1}(U_y)} {\mathcal M}_y\otimes_{{\mathcal O}_y}{\mathcal O}_{f(x)} \otimes_{{\mathcal O}_{f(x)} }{\mathcal O}_x=\prod_{x\in f^{-1}(U_y)} {\mathcal M}_{f(x)} \otimes_{{\mathcal O}_{f(x)} }{\mathcal O}_x\\ & =\prod_{x\in f^{-1}(U_y)} (f^*{\mathcal M})_x=0.\endaligned$$ Hence, ${\mathcal M}=0$. If $f^*$ is exact $f$ is flat: Let $f(x)\in Y$. Consider an ideal $I\subseteq {\mathcal O}_{f(x)}$ and let $\tilde I\subseteq {\mathcal O}_{U_{f(x)}}$ be the quasi-coherent ${\mathcal O}_{U_{f(x)}}$-module associated with $I$. Consider the inclusion morphism $i\colon U_{f(x)}\hookrightarrow Y$. The morphism $i_*\tilde I\hookrightarrow i_*{\mathcal O}_{U_{f(x)}}$ is injective, then $f^*i_*\tilde I\hookrightarrow f^* i_*{\mathcal O}_{U_{f(x)}}$ is injective. 
Hence, $I\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_x=(f^*i_*\tilde I)_x\hookrightarrow (f^* i_*{\mathcal O}_{U_{f(x)}})_x={\mathcal O}_x$ is injective. Therefore, the morphism ${\mathcal O}_{f(x)}\to {\mathcal O}_x$ is flat. Finally, if $f^*$ is faithfully exact $f$ is faithfully flat: Let $y\in Y$ be a maximal point of $Y$, if it exists, such that the flat morphism ${\mathcal O}_y\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_x$ is not faifhfully flat. Then, there exists an ideal $I\underset\neq\subset {\mathcal O}_y$ such that $I\cdot \prod_{x\in f^{-1}(U_y)} {\mathcal O}_x=\prod_{x\in f^{-1}(U_y)} {\mathcal O}_x$. Let ${\mathcal M}$ be the quasi-coherent ${\mathcal O}_{U_y}$-module associated with ${\mathcal O}_y/I$ and let $f'\colon f^{-1}(U_y)\to U_y$ be the morphism defined by $f'(x):=f(x)$. Obviously, $f'^*{\mathcal M}=0$. Let $i\colon U_y\hookrightarrow Y$ and $i'\colon f^{-1}(U_y)\hookrightarrow X$ be the inclusion morphisms. By Theorem \ref{Tcbp}, $0=i'_*f'^*{\mathcal M}=f^*i_*{\mathcal M}$, then $i_*{\mathcal M}=0$. Hence, $0=i^*i_*{\mathcal M}={\mathcal M}$ which is contradictory. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is a quasi-isomorphism iff it is faithfully flat and the natural morphism $f^*f_*{\mathcal M}\to {\mathcal M}$ is an isomorphism, for any quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$. \end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ It is Remark \ref{O12.3} and Theorem \ref{T12.7}. $\Leftarrow)$ $f_*$ is an exact functor since $f^*$ is faithfully exact and $\Id=f^*f_*$. By Theorem \ref{Tafex}, $f$ is affine. Finally, the morphism ${\mathcal O}_Y\to f_*{\mathcal O}_X$ is an isomorphism, since $f^*{\mathcal O}_Y={\mathcal O}_X\to f^*f_*{\mathcal O}_X$ is an isomorphism. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is faithfully flat iff $f_M\colon X_M\to Y_M$ is surjective and flat. \end{proposicion} \begin{proof}[Proof] Let $g\colon X'\to X$ be a quasi-isomorphism. Then, $f$ is faithfully flat iff $f\circ g$ is faithfully flat, since $f^*$ is faithfully exact iff $g^*\circ f^*$ is faithfully exact. Let $g\colon Y\to Y'$ be a quasi-isomorphism. Likewise, $f$ is faithfully flat iff $g\circ f$ is faithfully flat. Therefore, we have to prove that $f_M$ is a faithfully flat iff is surjective and flat. The converse implication is obvious. Let us prove the direct implication. If $f$ is not surjective, let $y\in Y_M$ be maximal satisfying $f^{-1}(y)=\emptyset$. Consider the commutative diagram of obvious morphisms $$\xymatrix{{\mathcal O}_y \ar[r]^-{i_1} \ar[d]^-{i_2} & \prod_{f_M(x)\geq y} {\mathcal O}_x \ar@{=}[r] & \prod_{f_M(x)> y} {\mathcal O}_x \ar[dl]^-{i_4}\\ \prod_{y'\geq y} {\mathcal O}_{y'} \ar[r]^-{i_3} & \prod_{y'\geq y} \prod_{f_M(x)\geq y'} {\mathcal O}_x & }$$ The morphisms $i_1$ and $i_3$ are faithfully flat since $f_M$ is faithfully flat, $i_4$ is obviously faithfully flat, hence $i_2$ is faithfully flat and $y$ is a removable point, which is contradictory. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism and $g\colon Y'\to Y$ a faithfully flat schematic morphism. Let $f'\colon X\times_Y Y'\to Y'$ be the morphism defined by $f'(x,y')=y'$. Then, \begin{enumerate} \item $f$ is affine iff $f'$ is affine. \item $f$ is a quasi-isomorphism iff $f'$ is a quasi-isomorphism.\end{enumerate} \end{proposicion} \begin{proof}[Proof] We can suppose that $X$,$Y$ and $Y'$ are minimal schematic spaces. 
The morphism $g'\colon X\times_Y Y'\to X$, $g'(x,y'):=x$ is faithfully flat since it is flat and surjective. 1. $\Leftarrow)$ The functor $g^*f_*=f'_*g'^*$ is exact since $f'_*$ and $g'^*$ are exact. Hence, $f_*$ is exact since $g^*$ is faithfully exact and $f$ is affine. 2. $\Leftarrow)$ We only have to prove that the morphism ${\mathcal O}_Y\to f_*{\mathcal O}_X$ is an isomorphism. Taking $g^*$, we obtain the isomorphism ${\mathcal O}_X\to g^*f_*{\mathcal O}_X=f'_*g'^*{\mathcal O}_X=f'_*{\mathcal O}_{X\times_Y Y'}={\mathcal O}_X$. Hence, ${\mathcal O}_Y\to f_*{\mathcal O}_X$ is an isomorphism. \end{proof} \section{Quasi-open immersions} \begin{definicion} We shall say that a schematic morphism $f\colon X\to Y$ is a quasi-open immersion if it is flat and the diagonal morphism $X\to X\times_YX$ is a quasi-isomorphism.\end{definicion} \begin{ejemplo} If $X$ is a schematic finite space and $U\subseteq X$ an open subset, then the inclusion morphism $U\hookrightarrow X$ is a quasi-open immersion. \end{ejemplo} \begin{proposicion} \label{P3.13} If $f\colon X\to Y$ is a quasi-isomorphism, then it is a quasi-open immersion\end{proposicion} \begin{proof}[Proof] Quasi-isomorphisms are faithfully flat morphisms, by Observation \ref{O12.3}. The morphism $X\times_Y X\to X$ is a quasi-isomorphism by Proposition \ref{qc1}. The composite morphism $X\to X\times_Y X\to X$ is the identity morphism, then $X\to X\times_Y X$ is a quasi-isomorphism, by Corollary \ref{qc-iso}. \end{proof} \begin{proposicion} \label{P13.4} If $f\colon X\to Y$ is a quasi-open immersion and $Y'\to Y$ a schematic morphism, then $X\times_YY'\to Y'$ is a quasi-open immersion.\end{proposicion} \begin{proof}[Proof] The morphism $X\to Y$ is flat. Taking $\times_YY'$, the morphism $X\times_YY'\to Y'$ is flat. The morphism $X\to X\times_Y X$ is a quasi-isomorphism. Taking $\times_YY'$, the morphism $X\times_YY'\to (X\times_Y Y')\times_{Y'} (X\times_YY')$ is a quasi-isomorphism, by Proposition \ref{qc1}. Hence, $X\times_YY'\to Y'$ is a quasi-open immersion. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. Let $Y'\to Y$ be a faithfully flat schematic morphism and $f'\colon X\times_Y Y'\to Y'$, $f'(x,y'):=f(x)$ the induced morphism. Then $f$ is a quasi-open immersion iff $f'$ is a quasi-open immersion.\end{proposicion} \begin{proposicion} The composition of two quasi-open immersions is a quasi-open immersion.\end{proposicion} \begin{proof}[Proof] The composition of two flat morphisms is flat. Let $f\colon X\to Y$, $g\colon Y\to Z$ be quasi-open immersions. Consider the commutative diagram $$\xymatrix{X \ar[r]^-{\delta_X} & X\times_YX \ar[r]^-{\Id\times \Id} \ar[d] & X\times_ZX\ar[d]^-{f\times f}\\ & Y \ar[r]_-{\delta_Y} & Y\times_ZY}$$ Observe that $X\times_Y X=Y\times_{Y\times_ZY} (X\times_Z X)$ and $\delta_Y$ is a quasi-isomorphism. Then, $\Id\times \Id$ is a quasi-isomorphism by Proposition \ref{P13.4}. The morphism $(\Id\times \Id)\circ \delta_X$ is a quasi-isomorphism by Proposition \ref{P9.6}, hence $g\circ f$ is a quasi-open immersion. \end{proof} \begin{proposicion} Let $f\colon X\to Y$, $g\colon Y\to Z$ be schematic morphisms and suppose $g\circ f$ is a quasi-open immersion. \begin{enumerate} \item If $g$ is a quasi-open immersion, then $f$ is a quasi-open immersion. \item If $f$ is a quasi-isomorphism, then $g$ is a quasi-open immersion. \end{enumerate} \end{proposicion} \begin{proof}[Proof] 1. 
Consider the commutative diagram $$\xymatrix{X \ar[r]^-{\delta_X} & X\times_YX \ar[r]^-{\Id\times \Id} \ar[d] & X\times_ZX\ar[d]^-{f\times f}\\ & Y \ar[r]_-{\delta_Y} & Y\times_ZY}$$ $\Id\times \Id$ is a quasi-isomorphism since $\delta_Y$ is a quasi-isomorphism. $(\Id\times \Id)\circ \delta_X$ is a quasi-isomorphism, since $g\circ f$ is a quasi-open immersion. Hence, $\delta_X$ is a quasi-isomorphism, by Corollary \ref{qc-iso}, that is, $f$ is a quasi-open immersion. 2. The obvious morphism $X\times_ZX\to Y\times_ZY$ is a quasi-isomorphism, since is the composition of the quasi-isomorphisms $X\times_ZX\to X\times_ZY$, $X\times_ZY\to Y\times_ZY$. Consider the commutative diagram $$\xymatrix{X \ar[r]^-{\delta_X} \ar[d]_-f & X\times_ZX\ar[d]^-{f\times f}\\ Y \ar[r]_-{\delta_Y} & Y\times_ZY}$$ Then, $\delta_Y$ is a quasi-isomorphism since $f,\delta_X$ and $f\times f$ are quasi-isomorphisms. That is, $g$ is a quasi-open immersion. \end{proof} \begin{definicion} Let $X$ and $Y$ be ringed finite spaces and $f\colon X\to Y$ a morphism of ringed spaces. $C(f):=X\coprod Y$ is a finite ringed space as follows: the order relation on $X$ and on $Y$ is the pre-stablished order relation, and given $x\in X$ and $y\in Y$ we shall say that $x>y$ if $f(x)\geq y$; ${\mathcal O}_{C(f),x}:={\mathcal O}_{X,x}$ for any $x\in X$, ${\mathcal O}_{C(f),y}:={\mathcal O}_{Y,y}$ for any $y\in Y$; the morphisms between the stalks of ${\mathcal O}_{C(f)}$ are defined in the obvious way. \end{definicion} Observe that $X$ is an open subset of $C(f)$ and $F\colon C(f)\to Y$, $F(x):= f(x)$, for any $x\in X$ and $F(y):= y$, for any $y\in Y$ is a morphism of ringed spaces. $F_*{\mathcal O}_{C(f)}={\mathcal O}_Y$ because $$(F_*{\mathcal O}_{C(f)})_y={\mathcal O}_{C(f),y}={\mathcal O}_{Y,y}$$ Besides, $R^iF_*{\mathcal O}_{C(f)}=0$ for any $i>0$, because $$(R^iF_*{\mathcal O}_{C(f)})_y=H^i(U_y,{\mathcal O}_{C(f)})=0.$$ \begin{teorema} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is a quasi-open immersion iff $C(f)=X\coprod Y$ is a schematic finite space. If $f$ is a quasi-open immersion, then it is the composition of the open inclusion $X\hookrightarrow C(f)$ and the quasi-isomorphism $F\colon C(f)\to Y$. \end{teorema} \begin{proof}[Proof] $\Rightarrow)$ Given, $x\geq x'\in X\subset C(f)$, the morphism $${\mathcal O}_{C(f), x'}={\mathcal O}_{X,x'}\to {\mathcal O}_{X,x}={\mathcal O}_{C(f),x}$$ is flat. Given, $y\leq y'\in Y\subset C(f)$, the morphism $${\mathcal O}_{C(f), y}={\mathcal O}_{Y,y}\to {\mathcal O}_{Y,y'}={\mathcal O}_{C(f),y'}$$ is flat. Given $x\in X\subset C(f)$ and $f(x)\geq y \in Y\subset C(f)$, the morphism $${\mathcal O}_{C(f),y}\to {\mathcal O}_{C(f),f(x)}={\mathcal O}_{Y,f(x)}\to {\mathcal O}_{X,x}={\mathcal O}_{C(f),x}$$ is flat. Given $c\in C(f)$, we shall denote $\tilde U_c:=\{z\in C(f)\colon z\geq c\}$. We have to prove that $\tilde U_c$ is affine. Recall Theorem \ref{afin'}. a. If $c=x\in X$, then $\tilde U_c=U_x\subseteq X$ is affine. If $c=y\in Y$, $\tilde U_y$ is acyclic. b. Given $x,x'\in \tilde U_y\cap X$, $\tilde U_x\cap \tilde U_{x'}=U_x\cap U_{x'}$ which is quasi-isomorphic to $U_x\times_{U_y} U_{x'}$, then $\tilde U_x\cap \tilde U_{x'}$ is acyclic and $${\mathcal O}_{C(f),xx'}={\mathcal O}_{xx'}={\mathcal O}_{x}\otimes_{{\mathcal O}_y}{\mathcal O}_{x'}={\mathcal O}_{C(f),x}\otimes_{{\mathcal O}_{C(f),y}} {\mathcal O}_{C(f),x'}$$ c. 
Given, $y',y''\in \tilde U_{y}\cap Y$, $\tilde U_{y'}\cap \tilde U_{y''}=F^{-1}(U_{y'}\cap U_{y''})$, which is acyclic because $U_{y'}\cap U_{y''}$ is acyclic, and $${\mathcal O}_{C(f),y'y''}={\mathcal O}_{y'y''}={\mathcal O}_{y'}\otimes_{{\mathcal O}_y}{\mathcal O}_{y''}={\mathcal O}_{C(f),y'}\otimes_{{\mathcal O}_{C(f),y}} {\mathcal O}_{C(f),y''}$$ d. Given $x,y'\in \tilde U_y$, where $x\in X$ and $y'\in Y$. Observe that $\tilde U_{x}\cap \tilde U_{y'}=U_{xy'}$ and $U_{xy'}=f_x^{-1}(U_{f(x)}\cap U_{y'})$, which is affine since $f_x\colon U_x\to U_{f(x)}$ is affine. and $U_{f(x)}\cap U_{y'}\subset U_y$ is affine. Finally, $${\mathcal O}_{C(f),xy'}={\mathcal O}_{xy'}\overset*={\mathcal O}_{x}\otimes_{{\mathcal O}_y}{\mathcal O}_{y''}={\mathcal O}_{C(f),x}\otimes_{{\mathcal O}_{C(f),y}} {\mathcal O}_{C(f),y'}$$ ($*$ observe that $U_x\times_{U_y}U_{y'}=U_{xy'})$. Therefore $C(f)$ is schematic. If ${\mathcal M}$ is a ${\mathcal O}_{C(f)}$-quasi-coherent module, $F_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_Y$-module since $$(F_*{\mathcal M})_y\otimes_{{\mathcal O}_{Y,y}}{\mathcal O}_{Y,y'}= {\mathcal M}_y\otimes_{{\mathcal O}_{C(f),y}}{\mathcal O}_{C(f),y'}={\mathcal M}_{y'}=(F_*{\mathcal M})_{y'}.$$ By Theorem \ref{Tcohsch}, $F$ is schematic. $F$ is a quasi-isomorphism since $F_*{\mathcal O}_{C(f)}={\mathcal O}_Y$ and $F^{-1}(U_y)=\tilde U_y$, for any $y\in Y$. $\Leftarrow)$ The morphism $f$ is the composition of the open immersion $X\hookrightarrow C(f)$ and the quasi-isomorphism $F\colon C(f)\to Y$, hence $f$ is a quasi-open immersion. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is a quasi-isomorphism iff it is a faithfully flat quasi-open immersion.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ It is Remark \ref{O12.3} and Proposition \ref{P3.13}. $\Leftarrow$) If $y\in Y$, then $y$ is a removable point of $C(f)$, since the morphism $${\mathcal O}_{C(f),y}={\mathcal O}_{Y,y}\to \prod_{x\in f^{-1}(U_y)} {\mathcal O}_{X,x}=\prod_{x>y,x\in X} {\mathcal O}_{C(f),x}$$ is faithfully flat. The morphism $X\to C(f)$ is a quasi-isomorphism, since $X=C(f)-Y$ and $C(f)-Y$ is quasi-isomorphic to $C(f)$. Finally, $X$ is quasi-isomorphic to $Y$, since $C(f)$ is quasi-isomorphic to $Y$. \end{proof} \begin{teorema} Let $f\colon X\to Y$ be a schematic morphism. Then, $f$ is a quasi-open immersion iff $f$ is flat and the morphism $f^*f_*\mathcal M\to \mathcal M$ is an isomorphism for any quasi-coherent ${\mathcal O}_X$-module.\end{teorema} \begin{proof}[Proof] $\Rightarrow)$ The diagonal morphism $\delta\colon X\to X\times_YX$ is a quasi-isomorphism. Then, $\delta$ is affine. Consider the projections, $\pi_1,\pi_2\colon X\times_Y X\to X$. The morphism $f$ is flat, then $f^*f_*\mathcal M=\pi_{2*}\pi_1^*\mathcal M$, by Theorem \ref{Tcbp}. Observe that $$\aligned (\pi_{2*} \pi_1^*\mathcal M)_x & =\Gamma(X\times_Y U_x, \pi_1^*\mathcal M)= \Gamma(X\times_Y U_x, \delta_*\delta^*\pi_1^*\mathcal M) =\Gamma(X\times_Y U_x, \delta_*\mathcal M)\\ & =\Gamma(U_x, \mathcal M) =\mathcal M_x,\endaligned$$ for any $x\in X$. Therefore, the morphism $f^*f_*\mathcal M\to \mathcal M$ is an isomorphism. $\Leftarrow)$ Let $i\colon U_x\to X$ be the obvious inclusion and denote $i_*{\mathcal M}_{|U_x}={\mathcal M}_{U_x}$. The natural morphism $f^*f_*{\mathcal M}_{U_x}\to {\mathcal M}_{U_x}$ is an isomorphism. 
Then, $$\aligned (\pi_1^*{\mathcal M})_{(x,x')} & ={\mathcal M}_x\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'}=(f_*{\mathcal M}_{U_x})_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_{x'} =(f^*f_*{\mathcal M}_{U_x})_{x'}=({\mathcal M}_{U_x})_{x'}\\ &={\mathcal M}(U_{xx'})=(\delta_*{\mathcal M})_{(x,x')}.\endaligned$$ Hence, $\delta_*$ is an exact functor, since $\pi_1$ is flat. By Theorem \ref{Tafex}, $\delta$ is an affine morphism. Besides, ${\mathcal O}_{X\times_Y X}=\pi_1^*{\mathcal O}_X=\delta_*{\mathcal O}_X$. Hence, $\delta$ is a quasi-isomorphism and $f$ is a quasi-open immersion. \end{proof} \begin{lemma} Let $f\colon X\to Y$ be a schematic morphism and suppose that $X$ is affine and $f_*{\mathcal O}_X={\mathcal O}_Y$. Then, $f$ is a quasi-open immersion.\end{lemma} \begin{proof}[Proof] We have to prove that $C(f)$ is a schematic finite space. By Proposition \ref{4pg20}, we have to prove that ${\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2} \to \prod_{w\in U_{z_1z_2}} {\mathcal O}_w$ is faithfully flat, for any $z\leq z_1,z_2\in C(f)$. Suppose that $z_1,z_2\in X$. The epimorphism ${\mathcal O}_{z_1}\otimes_{{\mathcal O}(X)}{\mathcal O}_{z_2}\to {\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}$ is an isomorphism, since the composite morphism ${\mathcal O}_{z_1}\otimes_{{\mathcal O}(X)}{\mathcal O}_{z_2}\to {\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}\to {\mathcal O}_{z_1z_2}$ is an isomorphism. Besides, the morphism ${\mathcal O}_{z_1z_2}\to \prod_{w\in U_{z_1z_2}} {\mathcal O}_w$ is faithfully flat, since $X$ is affine. Suppose that $z_1\in X$ and that $z_2 \in Y$ (then $z\in Y$). Observe that $U_{z_1}\cap f^{-1}({U_{z_2}})=U_{z_1z_2}=\{c\in C(f)\colon c\geq z_1,z_2\}$. Then, $${\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}={\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_z\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2} ={\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}({\mathcal O}_z\otimes_{{\mathcal O}_{f(z)} }{\mathcal O}_z)\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}={\mathcal O}_{z_1}\otimes_{{\mathcal O}_{f(z)}}{\mathcal O}_{z_2}={\mathcal O}_{z_1z_2}\to \prod_{w\in U_{z_1z_2}} {\mathcal O}_w$$ is faithfully flat, since $U_{z_1z_2}\subset X$ is affine, by Proposition \ref{K6}. Suppose that $z_1,z_2\in Y$. Observe that $U_{C(f),z_1z_2}= U_{Y,z_1z_2}\coprod f^{-1}(U_{Y,z_1z_2})$ and ${\mathcal O}_Y(U_{Y,z_1z_2})={\mathcal O}_X(f^{-1}(U_{Y,z_1z_2}))$. The morphism ${\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}={\mathcal O}_{Y,z_1z_2}\to \prod_{w\in U_{Y,z_1z_2}} {\mathcal O}_w$ is faithfully flat, since $U_{Y,z_1z_2}$ is affine. For any open subset $V\subset X$ and any $x\in V$, the morphism ${\mathcal O}_X(V)={\mathcal O}(X)\otimes_{{\mathcal O}(X)} {\mathcal O}(V)\to {\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}(V)\overset{\text{\ref{11}}}={\mathcal O}_x$ is flat. Hence, the morphism ${\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}={\mathcal O}_{Y,z_1z_2}= {\mathcal O}_X(f^{-1}(U_{Y,z_1z_2}))\to \prod_{x\in f^{-1}(U_{Y,z_1z_2})} {\mathcal O}_x$ is flat. Therefore, the morphism $${\mathcal O}_{z_1}\otimes_{{\mathcal O}_z}{\mathcal O}_{z_2}\to \prod_{w\in U_{C(f),z_1z_2}} {\mathcal O}_w=\prod_{w\in U_{Y,z_1z_2}} {\mathcal O}_w\times \prod_{x\in f^{-1}(U_{Y,z_1z_2})} {\mathcal O}_x$$ is faithfully flat. \end{proof} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism and suppose that $X$ is affine. 
Then, there exist an open inmersion $i\colon X\to Z$ such that $i_*X={\mathcal O}_Z$ and an affine morphism $g\colon Z\to Y$ such that $f=g\circ i$. \end{proposicion} \begin{proof}[Proof] The obvious morphism $f'\colon X\to (Y,f_*{\mathcal O}_X)$, $f'(x)=f(x)$ is a quasi-open inmersion by the lemma above and Example \ref{E8.2}. Let $i\colon X\to C(f')$, $\pi\colon C(f')\to (Y,f_*{\mathcal O}_X)$ and let $\Id\colon (Y,f_*{\mathcal O}_X)\to Y$ be the obvious morphism. Observe that $i$ is an open immersion, $g:=\Id\circ \pi$ is affine, since $\pi$ is a quasi-isomorphism and $\Id$ is affine, and $$f=\Id \circ f'=\Id\circ \pi\circ i=g\circ i.$$ \end{proof} \section{Quasi-closed immersions} Let $\mathcal I\subset {\mathcal O}_X$ be a quasi-coherent ideal and $(\mathcal I)_0:=\{x\in X\colon ({\mathcal O}_X/\mathcal I)_x\neq 0\}$, which is a closed subspace of $X$. Consider the schematic space $(X,{\mathcal O}_X/\mathcal I)$ and observe that $x\in X\backslash Y$ iff $({\mathcal O}_X/\mathcal I)_x=0$. Hence, $X\backslash Y$ is a set of removable points of $(X,{\mathcal O}_X/\mathcal I)$ and the obvious morphism $((\mathcal I)_0,{\mathcal O}_X/\mathcal I)\to (X,{\mathcal O}_X/\mathcal I)$ is a quasi-isomorphism. We shall say that the composition of the affine morphisms $$((\mathcal I)_0,{\mathcal O}_X/\mathcal I)\to (X,{\mathcal O}_X/\mathcal I)\to (X,{\mathcal O}_X)$$ is a closed immersion. Let $f\colon X'\to X$ be a schematic morphism. Let $\mathcal I=\Ker[{\mathcal O}_{X}\to f_*{\mathcal O}_{X'}]$. The obvious morphism $f'\colon X'\to (X,{\mathcal O}_X/I)$, $f'(x)=f(x)$ is schematic since $f'_*{\mathcal M}=f_*{\mathcal M}$ is a quasi-coherent ${\mathcal O}_X/I$-module, for any quasi-coherent ${\mathcal O}_{X'}$-module ${\mathcal M}$, because it is a quasi-coherent ${\mathcal O}_X$-module. Obviously, $f$ is the composition of the morphisms $X'\to (X,{\mathcal O}_X/I) \to (X,{\mathcal O}_X)$. Assume that ${\mathcal O}_{X',x'}\neq 0$ for any $x'\in X'$ (recall that if ${\mathcal O}_{X'.x'}=0$ then $x'$ is a removable point). The closure of $f(X')$ in $X$ is $ (\mathcal I)_0$: $x\in X\backslash (\mathcal I)_0$ iff $({\mathcal O}_X/\mathcal I)_x= 0$, which is equivalent to saying that $(f_*{\mathcal O}_{X'})_x= 0$ since the morphism of sheaves (of rings) ${\mathcal O}_{X}/\mathcal I\to f_*{\mathcal O}_{X'}$ is inyective. $(f_*{\mathcal O}_{X'})_x={\mathcal O}_{X'}(f^{-1}(U_x))= 0$ iff $f^{-1}(U_x)=\emptyset$. The morphism $X'\to ((\mathcal I)_0,{\mathcal O}_X/\mathcal I)$, $x\mapsto f(x)$ is schematic and $f'$ is the composition of the morphisms $X'\to ((I)_0,{\mathcal O}_X/I) \to (X,{\mathcal O}_X/I)$ (see Proposition \ref{P6.11}). \begin{definicion} Let $f\colon X'\to X$ be a schematic morphism. We shall say that $f$ is a quasi-closed immersion if it is affine and the morphism ${\mathcal O}_X\to f_*{\mathcal O}_{X'}$ is an epimorphism.\end{definicion} Suppose that ${\mathcal O}_{X',x'}\neq 0$, for any $x'\in X'$, let $f\colon X'\to X$ be a quasi-closed immersión and $\mathcal I=\Ker[{\mathcal O}_X\to f_*{\mathcal O}_{X'}]$. Then $ f$ is the composition of a quasi-isomorphism $X'\to ((I)_0,{\mathcal O}_X/I)$ and a closed immersion $ ((I)_0,{\mathcal O}_X/I)\to (X,{\mathcal O}_X)$. \section{{\rm Spec}\hspace{0.1cm}{\sl O}$_X$} Let $\{X_i, f_{ij}\}_{i,j\in I}$ (where $\#I<\infty$) be a direct system of morphisms of ringed spaces. 
Let $\ilim{i} X_i$ be the direct limit of the topological spaces $X_i$: $\ilim{i} X_i=\coproda{i} X_i/\!\sim$, where $\sim$ is the equivalence relation generated by the relation $x_i\sim f_{ij}(x_i)$, and $U\subseteq \ilim{i} X_i$ is an open subset iff $f_j^{-1}(U)$ is an open subset for any $j\in I$, where $f_j\colon X_j\to \ilim{i} X_i$ is the natural map. Define ${\mathcal O}_{\ilim{i} X_i}(V):=\plim{i\in I} {\mathcal O}_{X_i}(f_i^{-1}(V))$, for any open set $V\subseteq \ilim{i} X_i$. It is well known that $(\ilim{i} X_i, {\mathcal O}_{\ilim{i} X_i})$ is the direct limit of the direct system of morphisms $\{X_i,f_{ij}\}$ in the category of ringed spaces. \begin{definicion} Given a schematic finite space $X$ we shall denote $$\Spec {\mathcal O}_X:=\ilim{x\in X} \Spec {\mathcal O}_x.$$ (the sheaf of rings considered on $\Spec O_x$ is the sheaf of localizations of ${\mathcal O}_x$, $\tilde {\mathcal O}_x$.) \end{definicion} Observe that ${\mathcal O}_{\Spec {\mathcal O}_X}(\Spec {\mathcal O}_X)=\plim{x\in X} {\mathcal O}_x={\mathcal O}_X(X)$. \begin{ejemplo} Obviously, if $X=U_x$, then $\Spec {\mathcal O}_X=\Spec {\mathcal O}_x$ and ${\mathcal O}_{\Spec{\mathcal O}_X}=\tilde {\mathcal O}_x$. \end{ejemplo} Consider the following relation on $\coprod \Spec {\mathcal O}_{x_i}$: Given ${\mathfrak p}\in \Spec{\mathcal O}_x$ and ${\mathfrak q}\in\Spec {\mathcal O}_y$ we shall say that ${\mathfrak p}\equiv {\mathfrak q}$ if there exist $u\geq x,y$ and ${\mathfrak r} \in \Spec {\mathcal O}_u$ such that the given morphisms $\Spec {\mathcal O}_u\hookrightarrow \Spec {\mathcal O}_x$ and $\Spec {\mathcal O}_u\hookrightarrow \Spec {\mathcal O}_y$ map ${\mathfrak r}$ to ${\mathfrak p}$ and ${\mathfrak r} $ to ${\mathfrak q}$, respectively (recall Proposition \ref{last}). Let us prove that $\equiv$ is an equivalence relation: Let ${\mathfrak p}\equiv {\mathfrak q}$, (${\mathfrak r}\mapsto {\mathfrak p},{\mathfrak q}$) and ${\mathfrak q}\equiv {\mathfrak q}'$ (${\mathfrak q}'\in \Spec {\mathcal O}_z$, there exist $u'\geq y,z$ and ${\mathfrak r}'\in\Spec{\mathcal O}_{u'}$ such that ${\mathfrak r}'\mapsto {\mathfrak q},{\mathfrak q}'$). Recall that ${\mathcal O}_{uu'}={\mathcal O}_{u}\otimes_{{\mathcal O}_y}{\mathcal O}_{u'}$ and that ${\mathcal O}_{uu'}\to \prod_{w\in U_{uu'}} {\mathcal O}_{w}$ is faithfully flat. Then, $$\Spec {\mathcal O}_u\cap \Spec {\mathcal O}_{u'}=\Spec {\mathcal O}_{uu'}=\cup _{w\in U_{uu'}} \Spec {\mathcal O}_{w}$$ Since, ${\mathfrak q}={\mathfrak r}={\mathfrak r}' \in \Spec {\mathcal O}_u\cap \Spec {\mathcal O}_{u'}$ there exists $v\in U_{uu'}$ and ${\mathfrak r}''\in \Spec {\mathcal O}_{v}$ such that ${\mathfrak r}''\mapsto {\mathfrak r},{\mathfrak r}'$. Then, ${\mathfrak r}''\mapsto {\mathfrak p},{\mathfrak q}'$ and ${\mathfrak p}\equiv {\mathfrak q}'$. Observe that $\Spec {\mathcal O}_X=\coprod_{x\in X} \Spec {\mathcal O}_{x}/\equiv$ as topological spaces. Besides, the morphisms $\Spec {\mathcal O}_{x}\to \Spec {\mathcal O}_X$ are injective, $\Spec {\mathcal O}_X=\cup_{x\in X} \Spec{\mathcal O}_x$ as topological spaces ($U\subseteq \Spec {\mathcal O}_X$ is an open set iff $U\cap \Spec{\mathcal O}_x$ is an open set, for any $x\in X$). \begin{lemma} \label{radical} Let $A\to B$ be a flat morphism and assume $B\otimes_A B=B$. If $I\subseteq A$ is a radical ideal, then $I\cdot B$ is a radical ideal of $B$. \end{lemma} \begin{proof}[Proof] Let ${\mathfrak p}\in\Spec B\subset\Spec A$ and recall Notation \ref{Notation}. 
Then, $$(\rad (I\cdot B))_{\mathfrak p}=\rad (I\cdot B_{\mathfrak p})\overset{\text{\ref{last}}}=\rad (I\cdot A_{\mathfrak p})=\rad(I)\cdot A_{\mathfrak p}=I\cdot A_{\mathfrak p}\overset{\text{\ref{last}}}=I\cdot B_{\mathfrak p}=(I\cdot B)_{\mathfrak p}$$ Therefore, $\rad (I\cdot B)=I\cdot B$. \end{proof} \begin{proposicion} Let $X$ be a schematic finite space. Let $\mathcal I\subseteq {\mathcal O}_X$ be a quasi-coherent ideal. The ideal $\rad\mathcal I\subset {\mathcal O}_X$, defined by $(\rad\mathcal I)_x:=\rad\mathcal I_{x}$, for any $x\in X$, is a quasi-coherent ideal of ${\mathcal O}_X$.\end{proposicion} \begin{proof}[Proof] We only have to prove that given a flat morphism $A\to B$ such that $B\otimes_AB=B$ and an ideal $I\subseteq A$, then $(\rad I)\cdot B=\rad (I\cdot B)$. This is a consequence of Lemma \ref{radical}. \end{proof} \begin{observacion} $(\rad\mathcal I)(U)=\rad (\mathcal I(U))$, for any open subset $U\subset X$: $$(\rad\mathcal I)(U)=\plim{x\in U} (\rad\mathcal I)_x= \plim{x\in U} (\rad\mathcal I_x)=\rad\plim{x\in U} \mathcal I_x=\rad\mathcal I(U).$$ \end{observacion} \begin{definicion} Let $X$ be a schematic finite space. We shall say that a quasi-coherent ideal $\mathcal I\subseteq {\mathcal O}_X$ is radical if $\mathcal I=\rad(\mathcal I)$. \end{definicion} \begin{notacion} Let $X$ be a schematic finite space. Given a quasi-coherent ideal $\mathcal I\subset {\mathcal O}_X$, we shall denote $$(\mathcal I)_0:=\cupa{x\in X} \{{\mathfrak p}\in \Spec {\mathcal O}_x\colon \mathcal I_x\subseteq {\mathfrak p}\}\subseteq\Spec{\mathcal O}_X$$ Given a closet subset $C\subset \Spec {\mathcal O}_X$, let $\mathcal I_C\subset {\mathcal O}_X$ be the radical quasi-coherent ideal defined by $\mathcal I_{C,x}:=\capa{{\mathfrak p}'\in C\cap \Spec {\mathcal O}_x} {\mathfrak p}'\subset {\mathcal O}_x$,\footnote{If $C\cap \Spec {\mathcal O}_x=\emptyset$, then $\capa{{\mathfrak p}'\in C\cap \Spec {\mathcal O}_x} {\mathfrak p}':={\mathcal O}_X$.} for any $x\in X$. \end{notacion} \begin{proposicion} The maps $$\{\text{Closed subspaces of $\Spec {\mathcal O}_X$}\} \longleftrightarrow \{\text{Radical quasi-coherent ideals of ${\mathcal O}_X$}\}$$ $$\xymatrix @R8pt{ C \ar@{|->}[r] \quad & \quad \qquad \mathcal I_C \\ (\mathcal I)_0 \quad & \quad \quad \qquad \ar@{|->}[l] \mathcal I \quad }$$ are mutually inverse. \end{proposicion} \begin{notacion} Given a ring $B$ and $b\in B$ we denote $B_b=\{1,b^{-1},b^{-2},\cdots\}\cdot B$.\end{notacion} \begin{proposicion} \label{P16.2} If $X$ is an affine finite space, then $\Spec {\mathcal O}_X=\Spec {\mathcal O}(X)$. \end{proposicion} \begin{proof}[Proof] The morphism ${\mathcal O}(X)\to {\mathcal O}_x$ is flat and ${\mathcal O}_x\otimes_{{\mathcal O}(X)}{\mathcal O}_x\overset{\text{\ref{11}}}={\mathcal O}_x$, then $\Spec {\mathcal O}_x\hookrightarrow \Spec {\mathcal O}(X)$ is a subspace. The morphism ${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x$ is faithfully flat, then the induced morphism $\coprod_{x\in X} \Spec {\mathcal O}_x\to \Spec {\mathcal O}(X)$ is surjective. The sequence of morphisms $${\mathcal O}(X)\to \prod_{x\in X} {\mathcal O}_x{\dosflechasa[]{}{}} \prod_ {x\leq x'\in X} {\mathcal O}_{x'}$$ is exact. Then, the natural morphism $f\colon\Spec {\mathcal O}_X\to \Spec {\mathcal O}(X)$ is continuous and bijective. Given a closed set $C\subset\Spec {\mathcal O}_X$, let $\mathcal I_C$ be the radical quasi-coherent ideal of ${\mathcal O}_X$ associated with $C$. $\mathcal I_C=\tilde{\mathcal I_C(X)}$ since $X$ is affine. 
Recall Notation \ref{N3.9}. Then, $C\cap \Spec {\mathcal O}_x=(\mathcal I_{C,x})_0=(\mathcal I_C(X)\cdot {\mathcal O}_x)_0$. Hence $f(C)=(\mathcal I_C(X))_0$ and $f$ is a homeomorphism. Also observe that $(\plim{x} {\mathcal O}_x)_{(a_x)}= \plim{x\in X} {\mathcal O}_{x,a_x}$, for any $(a_x)\in \plim{x\in X} {\mathcal O}_x\subseteq \prod_{x\in X} {\mathcal O}_x$, hence ${\mathcal O}_{\ilim{x\in X} \Spec {\mathcal O}_x}=\widetilde{{\mathcal O}(X)}$. \end{proof} \begin{definicion} Let $f\colon X\to Y$ be a schematic morphism. Consider the morphisms ${\mathcal O}_{f(x)}\to {\mathcal O}_x$, which induce the scheme morphisms $\Spec {\mathcal O}_x\to \Spec {\mathcal O}_{f(x)}$, which induce a morphism of ringed spaces $$\tilde f\colon \Spec {\mathcal O}_X\to \Spec {\mathcal O}_Y.$$ We shall say that $\tilde f$ is the morphism induced by $f$. \end{definicion} \begin{proposicion} \label{P16.4} Let $f\colon X\to Y$ be a quasi-isomorphism. Then, the morphism induced by $f$, $\tilde f\colon \Spec {\mathcal O}_X\to \Spec {\mathcal O}_Y$, is an isomorphism. \end{proposicion} \begin{proof}[Proof] Observe that $$\aligned \Spec {\mathcal O}_X & =\ilim{x\in X} \Spec {\mathcal O}_x= \ilim{y\in Y}\ilim{x\in f^{-1}(U_y)} \Spec {\mathcal O}_x\overset{\text{\ref{P16.2}}} =\ilim{y\in Y} \Spec {\mathcal O}_X(f^{-1}(U_y))\\ &= \ilim{y\in Y} \Spec {\mathcal O}_Y(U_y) = \Spec {\mathcal O}_Y.\endaligned$$ \end{proof} \begin{proposicion} Let $X$ be a schematic finite space, $U\overset i\subset X$ an open subset and $\mathcal I\subseteq {\mathcal O}_U$ a quasi-coherent ideal. Then, there exists a quasi-coherent ideal $\mathcal J\subseteq {\mathcal O}_X$ such that $\mathcal J_{|U}=\mathcal I$. \end{proposicion} \begin{proof}[Proof] $\mathcal J:=\Ker[{\mathcal O}_X\to i_*({\mathcal O}_U/\mathcal I)]$ holds $\mathcal J_{|U}=\mathcal I$. \end{proof} \begin{notacion} Given a schematic finite space $X$ we shall denote $\tilde X=\Spec{\mathcal O}_X$. \end{notacion} \begin{proposicion} \label{P16.12} Let $X$ be a schematic finite space and $U\subset X$ an open subset. Then, \begin{enumerate} \item $\tilde U$ is a topological subspace of $\tilde X.$ \item $\tilde U=\capa{\tilde U\subseteq \text{ open subset }\bar V\subseteq \tilde X } \bar V.$ \end{enumerate} \end{proposicion} \begin{proof}[Proof] 1. Given a closed set $C\subset \tilde U$, let $\mathcal I_C\subseteq {\mathcal O}_U$ be the radical quasi-coherent ideal associated. Let $\mathcal J\subseteq {\mathcal O}_X$ be the quasi-coherent ideal such that $\mathcal J_{|U}=\mathcal I$. Then, the closed subset $D=(\mathcal J)_0=(\rad\mathcal J)_0$ of $\tilde X$ holds that $D\cap \tilde U=C$. 2. Let ${\mathfrak p}\in \tilde X-\tilde U$. Let $\mathcal P\subset {\mathcal O}_X$ be the sheaf of ideals defined by $\mathcal P_x={\mathfrak p}\subset {\mathcal O}_x$ if ${\mathfrak p}\in \Spec {\mathcal O}_x$ and $\mathcal P_x={\mathcal O}_x$ if ${\mathfrak p}\notin \Spec {\mathcal O}_x$. By Proposition \ref{sudor}, $\mathcal P$ is quasi-coherent. $(\mathcal P)_0\subset \Spec {\mathcal O}_X$ is the closure of ${\mathfrak p}$ and $(\mathcal P)_0\cap \tilde U_x=\emptyset$, for any $\tilde U_x\subset \tilde U$, hence $(\mathcal P)_0\cap \tilde U=\emptyset$. Then, $\tilde U$ is equal to the intersection of the open subsets $\bar V\subseteq \tilde X$, such that $\tilde U\subseteq \bar V$. \end{proof} \begin{definicion} Let $X$ be a schematic finite space. 
We shall say that a quasi-coherent ${\mathcal O}_X$-module $\mathcal M$ is finitely generated if ${\mathcal M}_x$ is a finitely generated ${\mathcal O}_{x}$-module, for any $x\in X$.\end{definicion} \begin{proposicion} Let $X$ be an affine finite space and ${\mathcal M}$ a a quasi-coherent ${\mathcal O}_X$-module. Then, ${\mathcal M}$ is finitely generated iff ${\mathcal M}(X)$ is a finitely generated ${\mathcal O}(X)$-module.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ Given $x\in X$, ${\mathcal M}_x={\mathcal M}(X)\otimes_{{\mathcal O}(X)}{\mathcal O}_x$. Let $N^x\subset {\mathcal M}(X)$ be a finitely generated ${\mathcal O}(X)$-submodule such that $N^x\otimes_{{\mathcal O}(X)}{\mathcal O}_x={\mathcal M}_x$ and $N:=\sum_{x\in X} N^x$. Then $N={\mathcal M}(X)$, since $N\otimes_{{\mathcal O}(X)}{\mathcal O}_x=M_x$ for any $x\in X$. $\Leftarrow)$ ${\mathcal M}_x={\mathcal M}(X)\otimes_{{\mathcal O}(X)}{\mathcal O}_x$ is a finitely generated ${\mathcal O}_x$-module, for any $x\in X$. \end{proof} \begin{proposicion} Let $X$ be a schematic finite space. Any quasi-coherent ${\mathcal O}_X$-module is the direct limit of its finitely generated submodules.\end{proposicion} \begin{proof}[Proof] Let ${\mathcal M}$ be a quasi-coherent ${\mathcal O}_X$-module. Let us fix $x_1\in X$ and a finitely generated submodule $N_1\subset {\mathcal M}_{x_1}$. Consider the inclusion morphism $i_1\colon U_{x_1}\hookrightarrow X$ and let ${\mathcal M}_1:=\Ker[{\mathcal M}\to i_{1*} ({\mathcal M}_{|U_{x_1}}/\tilde N_1)]$. Observe that ${\mathcal M}_1\subset{\mathcal M}$ and ${\mathcal M}_{1|U_1}=\tilde N_1$. Given $x_2\in X$, let $N_{2}\subset {\mathcal M}_{1,x_2}$ be a finitely generated submodule such that $N_{2}\otimes_{{\mathcal O}_{x_2}}{\mathcal O}_y ={\mathcal M}_{1,y}$, for any $y\in U_ {x_1}\cap U_ {x_2}$. Let $U_2=U_{x_1}\cup U_{x_2}$ and let ${\mathcal N}_2\subset {\mathcal M}_{1|U_2}$ be the finitely generated ${\mathcal O}_{U_2}$-module such that ${\mathcal N}_{2|U_{x_1}}=\tilde N_1$ and ${\mathcal N}_{2|U_{x_2}}=\tilde N_2$. Consider the inclusion morphism $i_2\colon U_2\hookrightarrow X$ and let ${\mathcal M}_2:=\Ker[{\mathcal M}_1\to i_{2*} ({\mathcal M}_{1|U_2}/{\mathcal N}_{2})]$. Observe that ${\mathcal M}_{2|U_2}={\mathcal N}_2$. Given $x_3\in X$, let $N_{3}\subset {\mathcal M}_{2,x_3}$ be a finitely generated submodule such that $N_{3}\otimes_{{\mathcal O}_{x_3}}{\mathcal O}_y={\mathcal M}_{2,y}$ for any $y\in U_{x_3}\cap U_2$. Let $U_3:=U_{2}\cup U_{x_3}$ and let ${\mathcal N}_3\subset {\mathcal M}_{2|U_3}$ be the finitely generated ${\mathcal O}_{U_3}$-module such that ${\mathcal N}_{3|U_{2}}={\mathcal N}_2$ and ${\mathcal N}_{3|U_{x_3}}=\tilde N_3$. Consider the inclusion morphism $i_{3}\colon U_{3}\hookrightarrow X$ and let ${\mathcal M}_3:=\Ker[{\mathcal M}_2\to i_{3*} ({\mathcal M}_{2|U_{3}}/{\mathcal N}_{3})]$. Observe that ${\mathcal M}_{3|U_3}={\mathcal N}_3$. So on we shall get a finitely quasi-coherent ${\mathcal O}_X$-submodule ${\mathcal M}_n\subset{\mathcal M}$ such that ${\mathcal M}_{n,x_1}=N_1$. Now it is easy to prove this proposition. \end{proof} \begin{corolario} \label{C15.23} Let $X$ be a schematic finite space. Any quasi-coherent ideal $\mathcal I \subset {\mathcal O}_X$ is the direct limit of its finitely generated ideals $\mathcal I_i\subset \mathcal I$. \end{corolario} \begin{lemma} \label{L15.20} Let $X$ be a schematic finite space, $\bar U\subset \tilde X$ an open subset and $C=\tilde X-\bar U $. 
Then, $\bar U$ is quasi-compact iff there exists a finitely generated ideal $\mathcal I\subset {\mathcal O}_X$ such that $({\mathcal I})_0=C$. \end{lemma} \begin{proof}[Proof] $\Rightarrow)$ Consider the quasi-coherent ideal $\mathcal I_C\subset {\mathcal O}_X$. Let $J=\{\mathcal I_j\}_{j\in J}$ the set of finitely generated ideals of ${\mathcal O}_X$ contained in $\mathcal I_C$. By Corollary \ref{C15.23}, $\mathcal I_C=\ilim{j\in J} \mathcal I_j$. Then, $C=(\mathcal I_C)_0=(\ilim{j\in J} \mathcal I_j)_0=\cap_{j\in J} (\mathcal I_j)_0$ and $\bar U=\cup_{j\in J} (\tilde X-(\mathcal I_j)_0)$. There exists $j\in J$ such that $\bar U=\tilde X-(\mathcal I_j)_0$, since $\bar U$ is quasi-compact. Hence, $C=(\mathcal I_j)_0$. $\Leftarrow$) Let $x\in X$, then $\mathcal I_x=(a_1,\ldots,a_n)\subset {\mathcal O}_{x}$ is finitely generated. $C\cap \tilde U_{x}=(\mathcal I_x)_0= \cap_{i} (a_i)_0$, then $\bar U\cap \tilde U_{x}=\cup_i \Spec {\mathcal O}_{x,a_i}$ is quasi-compact. Therefore, $\bar U=\cup_{x} (\bar U\cap \tilde U_{x})$ is quasi-compact. \end{proof} \begin{proposicion} \label{P16.16} Let $X$ be a schematic finite space. Then, \begin{enumerate} \item The intersection of two quasi-compact open subsets of $\tilde X$ is quasi-compact. \item The family of quasi-compact open subsets of $\tilde X$ is a basis for the topology of $\tilde X$. \item If $\bar V\subseteq \tilde X$ is a quasi-compact open subset then $\bar V\cap \tilde U$ is quasi-compact, for any open subset $U\subset X$. \end{enumerate} \end{proposicion} \begin{proof}[Proof] 1. Let $\bar U_1,\bar U_2\subset \tilde X$ be two quasi-compact open subsets, $C_1:=\tilde X-\bar U_1$, $C_2:=\tilde X-\bar U_2$, and $\mathcal I_1,\mathcal I_2\subset {\mathcal O}_X$ two finitely generated ideals such that $C_1=(\mathcal I_1)_0$ and $C_2=(\mathcal I_1)_0$. Then, $C_1\cup C_2=(\mathcal I_1)_0\cup (\mathcal I_2)_0=(\mathcal I_1\cdot\mathcal I_2)_0$ and $\bar U_1\cap\bar U_2=\tilde X-(\mathcal I_1\cdot \mathcal I_2)_0$. By Lemma \ref{L15.20}, $\bar U_1\cap\bar U_2$ is quasi-compact. 2. Let $\bar U\subset \tilde X$ be an open subset and $C=\tilde X-\bar U$. $\mathcal I_C=\ilim{j\in J} \mathcal I_j$, where $\{\mathcal I_j\}_{j\in J}$ is the set of the finitely generated of ${\mathcal O}_X$ contained in $\mathcal I_C$. Then, $C=(\mathcal I_C)_0= (\ilim{j\in J} \mathcal I_j)_0=\cap_{j\in J} ( \mathcal I_j)_0$ and $\bar U=\cup_{j\in J} (\tilde X-(\mathcal I_j)_0)$, where the open subsets $\tilde X-(\mathcal I_j)_0$ are quasi-compact by Lemma \ref{L15.20}. 3. Let $C=\tilde X-\bar V$· and let $\mathcal I\subset {\mathcal O}_X$ be a finitely generated ideal such that $C=(\mathcal I)_0$. Then, $C\cap \tilde U=(\mathcal I_{|U})_0$ and $\bar V\cap\tilde U=\tilde U-(\mathcal I_{|U})_0$. By Lemma \ref{L15.20}, $\bar V\cap\tilde U$ is quasi-compact. \end{proof} \begin{corolario} Let $X$ be a schematic finite space, $U\subseteq X$ an open subset and $\bar U\subset \tilde U$ a quasi-compact open subset. Then, \begin{enumerate} \item There exists a quasi-compact open subset $\bar W\subset \tilde X$, such that, $\bar W\cap \tilde U=\bar U$. \item $\bar U$ is equal to the intersection of the quasi-compact open subsets of $\tilde X$ which contain it.\end{enumerate} \end{corolario} \begin{proof}[Proof] 1. By Proposition \ref{P16.12}, there exists an open subset $\bar W'\subseteq \tilde X$ such that $\bar W'\cap \tilde U=\bar U$. 
Given ${\mathfrak p}\in \bar U$, there exists a quasi-compact open subset $\bar W_{{\mathfrak p}}\subset \bar W'$ such that ${\mathfrak p}\in \bar W_{{\mathfrak p}}$. There exist ${\mathfrak p}_1,\ldots,{\mathfrak p}_n\in \bar U$ such that $\bar U\subset \cup_{i=1}^n \bar W_{{\mathfrak p}_i}\subset \bar W'$. Hence, $\bar W:= \cup_{i=1}^n \bar W_{{\mathfrak p}_i}$ satisfies $\bar W\cap \tilde U=\bar U$. 2. Given an open subset $\bar V\subset \tilde X$ such that $\bar U\subset \bar V$, there exists a quasi-compact open subset $\bar V'\subset \tilde X$ such that $\bar U\subset \bar V'\subset \bar V$. By Proposition \ref{P16.12}, we are done. \end{proof} \begin{lemma} \label{L16.18} Let $X$ be a schematic finite space, $U_1,U_2\subset X$ open subsets, $\bar V_1\subset \tilde U_1$ and $\bar V_2\subset \tilde U_2$ quasi-compact open subsets and $\bar W\subset \tilde X$ an open subset such that $\bar V_1\cap \bar V_2\subset \bar W$. Then, there exist open subsets $\bar W_1,\bar W_2\subset \tilde X$ such that $\bar V_1\subset \bar W_1$, $\bar V_2\subset \bar W_2$ and $\bar W_1\cap \bar W_2\subset \bar W$. \end{lemma} \begin{proof}[Proof] By the quasi-compactness of $\bar V_1$ and $\bar V_2$, to prove this lemma we can easily reduce ourselves to the case in which $\bar V_1=\Spec {\mathcal O}_{x_1,a_1}\subset \tilde U_{x_1}$ ($a_1\in {\mathcal O}_{x_1}$) and $\bar V_2=\Spec {\mathcal O}_{x_2,a_2}\subset \tilde U_{x_2}$ ($a_2\in {\mathcal O}_{x_2}$). 1. Suppose that $\bar V_1\cap \bar V_2=\emptyset$. Let ${\mathcal O}_{U_{x_1,a_1}}$ be the quasi-coherent ${\mathcal O}_{U_{x_1}}$-module defined by ${\mathcal O}_{U_{x_1,a_1}}(U_z)={\mathcal O}_{z,a_1}$, for any $z\in U_{x_1}$. Let $i_{x_1}\colon U_{x_1}\subset X$ be the inclusion morphism. Let $\mathcal I_1$ be the kernel of the natural morphism ${\mathcal O}_X\to i_{x_1*}{\mathcal O}_{U_{x_1,a_1}}$. Likewise, define ${\mathcal O}_{U_{x_2,a_2}}$, $i_{x_2}$ and $\mathcal I_2$. Observe that $i_{x_1*}{\mathcal O}_{U_{x_1,a_1}}(U_z)={\mathcal O}_{x_1z, a_1}$ for any $z\in X$, and the natural morphism $$i_{x_1*}{\mathcal O}_{U_{x_1,a_1}}(U_z)\to \prod_{y\in U_{x_1z}} {\mathcal O}_{y,a_1}$$ is injective. Then, the sequence of morphisms $$(**)\qquad 0\to \mathcal I_{1,z}\to {\mathcal O}_z\to \prod_{y\in U_{x_1z}} {\mathcal O}_{y,a_1}$$ is exact. Observe that $\bar V_1\cap \tilde U_z=\cup_{y\in U_{x_1z}} \Spec{\mathcal O}_{y,a_1}$. Then, $(\mathcal I_1)_0$ is equal to the closure $Cl(\bar V_1)$ of $\bar V_1$ in $\tilde X$. Setting $z=x_2$ and tensoring $(**)$ by $\otimes_{{\mathcal O}_{x_2}} {\mathcal O}_{x_2,a_2}$, we obtain the exact sequence $$0\to \mathcal I_{1,x_2}\otimes_{{\mathcal O}_{x_2}} {\mathcal O}_{x_2,a_2}\to {\mathcal O}_{x_2,a_2}\to \prod_{y\in U_{x_1x_2}} {\mathcal O}_{y,a_1}\otimes_{{\mathcal O}_{x_2}} {\mathcal O}_{x_2,a_2}.$$ Now, ${\mathcal O}_{y,a_1}\otimes_{{\mathcal O}_{x_2}} {\mathcal O}_{x_2,a_2}=0$, since $\Spec {\mathcal O}_{y,a_1}\cap \Spec {\mathcal O}_{x_2,a_2}\subset \bar V_1\cap\bar V_2=\emptyset$. Then, $\mathcal I_{1,x_2}\otimes_{{\mathcal O}_{x_2}} {\mathcal O}_{x_2,a_2}= {\mathcal O}_{x_2,a_2}$, hence $\mathcal I_{1,x_2}\cdot {\mathcal O}_{x_2,a_2}= {\mathcal O}_{x_2,a_2}$. Therefore, $(\mathcal I_1)_0\cap \bar V_2=\emptyset$, that is, $Cl(\bar V_1)\cap \bar V_2\overset{*}=\emptyset$. Let $\mathcal J_1\subset\mathcal I_1$ be a finitely generated ideal such that $\mathcal J_{1,x_2} \cdot {\mathcal O}_{x_2,a_2}= {\mathcal O}_{x_2,a_2}$ (it exists by Corollary \ref{C15.23}). Again, $(\mathcal J_1)_0\cap \bar V_2=\emptyset$ and $\bar V_1\subset (\mathcal J_1)_0$.
Likewise, define a finitely generated ideal $\mathcal J_2$ such that $(\mathcal J_2)_0\cap \bar V_1=\emptyset$ and $\bar V_2\subset (\mathcal J_2)_0$. Given a subset $Y\subset \tilde X$ denote $Y^c:=\tilde X-Y$. Let $\bar W_1:=(Cl((\mathcal J_1\cdot \mathcal J_2)_0^c)\cup (\mathcal J_2)_0)^c$ and $\bar W_2:=(Cl((\mathcal J_1\cdot \mathcal J_2)_0^c)\cup (\mathcal J_1)_0)^c$. Obviously, $\bar W_1\subset ((\mathcal J_1\cdot \mathcal J_2)_0^c\cup (\mathcal J_2)_0)^c=(\mathcal J_1)_0-(\mathcal J_2)_0$ and $\bar W_2\subset (\mathcal J_2)_0-(\mathcal J_1)_0$. Then, $\bar W_1\cap \bar W_2=\emptyset$. We only have to prove that $\bar V_1\subset \bar W_1$ (and, likewise, $\bar V_2\subset \bar W_2$). We know that $\bar V_1\cap (\mathcal J_2)_0=\emptyset$; it remains to prove that $\bar V_1\cap Cl((\mathcal J_1\cdot \mathcal J_2)_0^c)=\emptyset$. $\bar V_1\cap (\mathcal J_1\cdot \mathcal J_2)_0^c=\emptyset$ and $(\mathcal J_1\cdot \mathcal J_2)_0^c$ is the union of a finite set of subsets $\Spec {\mathcal O}_{y,b}\subset \tilde U_y\subset \tilde X$ (with $b\in {\mathcal O}_y$ and $y\in X$). As we have proved above ($\overset*=$), $\bar V_1\cap Cl(\Spec {\mathcal O}_{y,b})=\emptyset$ since $\bar V_1\cap \Spec {\mathcal O}_{y,b}=\emptyset$. Then, $\bar V_1\cap Cl((\mathcal J_1\cdot \mathcal J_2)_0^c)=\emptyset$. 2. Suppose that $\bar V_1\cap \bar V_2\neq \emptyset$. Let $\mathcal I:=\mathcal I_{\tilde X-\bar W}\subset {\mathcal O}_X$. $\tilde X-\bar W\subset \tilde X$ is equal to $\tilde Y:=\Spec\, {\mathcal O}_X/\mathcal I$. $\bar V_1\cap \tilde Y=\Spec\, ({\mathcal O}_X/\mathcal I)_{x_1,[a_1]}$ and $\bar V_2\cap \tilde Y=\Spec ({\mathcal O}_X/\mathcal I)_{x_2,[a_2]}$. By 1., there exist open subsets $\bar W'_1,\bar W'_2\subset \tilde Y$ such that $\bar W'_1\cap \bar W'_2=\emptyset$, $\bar V_1\cap \tilde Y\subset \bar W'_1$ and $\bar V_2\cap \tilde Y\subset \bar W'_2$. Then, $\bar W_1=\bar W\cup \bar W'_1$ and $\bar W_2=\bar W\cup \bar W'_2$ are the desired open subsets. \end{proof} \begin{lemma} \label{L15.25} Let $X$ be a schematic finite space and $B$ the family of quasi-compact open subsets of $\tilde X$. Let $\mathcal F'$ be a presheaf on $\tilde X$ and $\mathcal F$ the sheafification of $\mathcal F'$. If for any $\bar V\in B$ and any finite open covering $\{\bar V_i\in B\}$ of $\bar V$ the sequence of morphisms $$\mathcal F'(\bar V)\to \prod_{i} \mathcal F'(\bar V_i) {\dosflechasa[]{}{}} \prod_{ij} \mathcal F'(\bar V_i\cap \bar V_j)$$ is exact, then $\mathcal F'(\bar V)=\mathcal F(\bar V)$. \end{lemma} \begin{proof}[Proof] It is well known. \end{proof} \begin{corolario} Let $X$ be a schematic finite space and let $\{F_i\}_i$ be a direct system of sheaves of abelian groups on $\tilde X$. Then, $$H^n(\tilde X,\ilim{i\in I} F_i)=\ilim{i\in I} H^n(\tilde X,F_i),$$ for any $n\geq 0$.\end{corolario} \begin{corolario} \label{coro25} Let $X$ be a schematic finite space, $U\subset X$ an open subset, $\bar V\subset \tilde U$ a quasi-compact open subset and $\mathcal F$ a sheaf of abelian groups on $\tilde X$. Then, $$\mathcal F_{|\tilde U}(\bar V)=\ilim{ \bar V\subset\bar W} \mathcal F(\bar W).$$ Therefore, $H^n(\bar V,\mathcal F_{|\tilde U})=\ilim{ \bar V\subset\bar W} H^n(\bar W,\mathcal F)$, for any $n\geq 0$. \end{corolario} \begin{proof}[Proof] Let $\mathcal G$ be the presheaf on $\tilde U$ defined by $\mathcal G(\bar V):=\ilim{ \bar V\subset \bar W } \mathcal F(\bar W)$; then $\mathcal F_{|\tilde U}$ is the sheafification of $\mathcal G$. Let $\{\bar V_i\}$ be a finite quasi-compact open covering of $\bar V$.
Let $I_i$ be the family of open subsets $\bar W_i\subset \tilde X$ such that $\bar V_i\subset\bar W_i$, and let $I=\prod_i I_i$. The sequence of morphisms $$\mathcal F(\cup_i\bar W_i)\to \prod_{i} \mathcal F(\bar W_i) {\dosflechasa[]{}{}} \prod_{i,j} \mathcal F(\bar W_i\cap \bar W_j)$$ is exact. Taking direct limits, we obtain the exact sequence $$\ilim{(\bar W_i)\in I} \mathcal F(\cup_i\bar W_i)\to \ilim{(\bar W_i)\in I} \prod_{i} \mathcal F(\bar W_i) {\dosflechasa[]{}{}} \ilim{(\bar W_i)\in I} \prod_{ij} \mathcal F(\bar W_i\cap \bar W_j).$$ Observe that $\ilim{(\bar W_i)\in I} \mathcal F(\cup_i\bar W_i)=\mathcal G(\cup_i \bar V_i)$, $\ilim{(\bar W_i)\in I} \prod_{i} \mathcal F(\bar W_i)= \prod_{i} \mathcal G(\bar V_i)$, and, by Lemma \ref{L16.18}, $\ilim{(\bar W_i)\in I} \prod_{ij} \mathcal F(\bar W_i\cap \bar W_j)=\prod_{ij} \mathcal G(\bar V_i\cap \bar V_j)$. Hence, $$\mathcal G(\cup_i\bar V_i)\to \prod_{i} \mathcal G(\bar V_i) {\dosflechasa[]{}{}} \prod_{i,j} \mathcal G(\bar V_i\cap \bar V_j)$$ is exact. By Lemma \ref{L15.25}, $\mathcal G(\bar V)=\mathcal F_{|\tilde U}(\bar V)$. Finally, let $\mathcal F\to C^{\displaystyle \cdot}\mathcal F$ be the Godement resolution; then $$H^n(\bar V,\mathcal F_{|\tilde U})=H^n\Gamma(\bar V, (C^{\displaystyle \cdot}\mathcal F)_{|\tilde U})= \ilim{ \bar V\subset\bar W}H^n\Gamma(\bar W, C^{\displaystyle \cdot}\mathcal F)=\ilim{ \bar V\subset\bar W} H^n(\bar W,\mathcal F).$$ \end{proof} \begin{teorema} \label{T16.22} Let $X$ be a schematic finite space and $U\subset X$ an open subset. Then, $${{\mathcal O}_{\tilde X}}_{|\tilde U}={\mathcal O}_{\tilde U}.$$ Let $x\in X$ and ${\mathfrak p}\in \tilde U_x\subseteq \tilde X$. Then, ${\mathcal O}_{\tilde X,{\mathfrak p}}={\mathcal O}_{x,{\mathfrak p}}.$ \end{teorema} \begin{proof}[Proof] Let $i\colon U\hookrightarrow X$ be the inclusion morphism and $\tilde i\colon \tilde U\hookrightarrow \tilde X$ the induced morphism. The natural morphism ${\mathcal O}_{\tilde X}\to \tilde i_*{\mathcal O}_{\tilde U}$ defines by adjunction the morphism ${{\mathcal O}_{\tilde X}}_{|\tilde U}\to {\mathcal O}_{\tilde U}$, and we have to prove that the morphism ${\mathcal O}_{\tilde X,{\mathfrak p}}={{\mathcal O}_{\tilde X}}_{|\tilde U,{\mathfrak p}} \to {\mathcal O}_{\tilde U,{\mathfrak p}}$ is an isomorphism, for any ${\mathfrak p}\in\tilde U$. Let $I:=\{(\bar W,\bar V)\colon \bar V$ is a quasi-compact open subset of $\tilde U$ such that ${\mathfrak p}\in \bar V$, and $\bar W$ is an open subset of $\tilde X$ such that $\bar V\subset \bar W\}$. Then, $${\mathcal O}_{\tilde X,{\mathfrak p}}=\ilim{(\bar W,\bar V)\in I} {\mathcal O}_{\tilde X}(\bar W)= \ilim{{\mathfrak p}\in \bar V}\ilim{\bar V\subset \bar W} {\mathcal O}_{\tilde X}(\bar W) =\ilim{{\mathfrak p}\in \bar V}{\mathcal O}_{\tilde U}(\bar V)={\mathcal O}_{\tilde U,{\mathfrak p}}.$$ Finally, ${\mathcal O}_{\tilde X,{\mathfrak p}}={\mathcal O}_{\tilde U_x,{\mathfrak p}}={\mathcal O}_{x,{\mathfrak p}}$. \end{proof} \section{$H^n(X,{\mathcal M})=H^n(\tilde X,\tilde{{\mathcal M}})$} \begin{notacion} Given an affine scheme $\Spec R$ and an $R$-module $M$, we shall denote by $\tilde M$ the sheaf of localizations of the $R$-module $M$. \end{notacion} \begin{definicion} Let $X$ be a schematic finite space and $\tilde X=\Spec{\mathcal O}_X$.
We shall say that an ${\mathcal O}_{\tilde X}$-module $\bar {\mathcal M}$ is quasi-coherent if $\bar {\mathcal M}_{|\tilde U_x}$ is a quasi-coherent ${\mathcal O}_{\tilde U_x}$-module for any $x\in X$.\end{definicion} I warn the reader that this definition is not the usual definition of a quasi-coherent module. Let $X$ be a schematic finite space and $\bar {\mathcal M}$ a quasi-coherent ${\mathcal O}_{\tilde X}$-module. Let ${\mathcal M}$ be the ${\mathcal O}_X$-module defined by ${\mathcal M}_x=\bar {\mathcal M}_{|\tilde U_x}(\tilde U_x)$; then it is easy to check that ${\mathcal M}$ is a quasi-coherent ${\mathcal O}_X$-module (see \cite{Hartshorne}, II.5.1(d) and II.5.2(c)). Let ${\mathcal M}$ be a quasi-coherent ${\mathcal O}_X$-module. Define $\tilde{\mathcal M}:=\plim{x\in X} \tilde i_{x*}\widetilde{{\mathcal M}_x}$, where $\tilde i_x\colon \tilde U_x\to \tilde X$ is the morphism induced by the inclusion morphism $i_x\colon U_x\hookrightarrow X$. Observe that $\tilde {\mathcal M}(\tilde X)=\plim{x\in X} {\mathcal M}_x={\mathcal M}(X)$. \begin{proposicion} Let $X$ be an affine finite space and ${\mathcal M}$ a quasi-coherent ${\mathcal O}_X$-module. Then, $\tilde{\mathcal M}=\widetilde{{\mathcal M}(X)}$.\end{proposicion} \begin{proof}[Proof] Observe that $\tilde i_{x*}\widetilde{{\mathcal M}_x}=\widetilde{{\mathcal M}_x}$; then $$\tilde{\mathcal M}=\plim{x\in X} \tilde i_{x*}\widetilde{{\mathcal M}_x}=\plim{x\in X} \widetilde{{\mathcal M}_x}=\widetilde{\plim{x\in X} {\mathcal M}_x}=\widetilde{{\mathcal M}(X)}.$$ \end{proof} \begin{proposicion} \label{P16.17} Let $X$ be a schematic finite space, $U\subset X$ an open subset and ${\mathcal M}$ a quasi-coherent ${\mathcal O}_X$-module. Then, $\tilde{\mathcal M}_{|\tilde U}=\widetilde{{\mathcal M}_{|U}}.$ In particular, $\tilde {\mathcal M}$ is a quasi-coherent ${\mathcal O}_{\tilde X}$-module. \end{proposicion} \begin{proof}[Proof] Proceed as in the proof of Theorem \ref{T16.22}. \end{proof} Let ${\mathcal M}$ and ${\mathcal M}'$ be quasi-coherent ${\mathcal O}_X$-modules. Any morphism of ${\mathcal O}_X$-modules $ {\mathcal M}\to {\mathcal M}'$ induces a natural morphism $\tilde {\mathcal M} = \plim{x\in X} \tilde i_{x*}\widetilde{{\mathcal M}_x}\to \plim{x\in X} \tilde i_{x*}\widetilde{{\mathcal M}'_x}=\tilde {\mathcal M}'$. Let $\bar {\mathcal M}$ and $\bar {\mathcal N}$ be quasi-coherent ${\mathcal O}_{\tilde X}$-modules. Any morphism of ${\mathcal O}_{\tilde X}$-modules $\bar {\mathcal M}\to \bar {\mathcal N}$ induces natural morphisms ${\mathcal M}_x:=\bar {\mathcal M}_{|\tilde U_x}(\tilde U_x)\to \bar {\mathcal N}_{|\tilde U_x}(\tilde U_x)=:{\mathcal N}_x$ and then a morphism ${\mathcal M}\to {\mathcal N}$. \medskip \begin{teorema} Let $X$ be a schematic finite space. The category of quasi-coherent ${\mathcal O}_X$-modules is equivalent to the category of quasi-coherent ${\mathcal O}_{\tilde X}$-modules. \end{teorema} \begin{proof}[Proof] The functors $\bar {\mathcal M} \functor \{\bar {\mathcal M}_{|\tilde U_x}(\tilde U_x)\}_{x\in X}$ and $\mathcal M \functor \tilde{\mathcal M} $ are mutually inverse.\end{proof} \begin{proposicion} \label{P15.6} Let $f\colon X \to Y$ be a schematic morphism and $\tilde f\colon \tilde X\to \tilde Y$ the induced morphism. Let ${\mathcal M}$ be a quasi-coherent ${\mathcal O}_X$-module and ${\mathcal N}$ a quasi-coherent ${\mathcal O}_Y$-module. Then, \begin{enumerate} \item $\tilde f_*\tilde {\mathcal M}=\widetilde{f_*{\mathcal M}}$. \item $\tilde f^*\tilde {\mathcal N}=\widetilde{f^*{\mathcal N}}$.
\end{enumerate} \end{proposicion} \begin{proof}[Proof] Consider the obvious commutative diagram $$\xymatrix{\tilde X \ar[r]^-{\tilde f} & \tilde Y & & \\ \tilde U_x \ar@{^{(}->}[u]^-{\tilde i_{x}} \ar[r]_-{\tilde f_{xy}} & \tilde U_y \ar@{^{(}->}[u]_-{\tilde i_{y}} & & (f(x)\geq y)}$$ 1. Observe that $(f_*{\mathcal M})(U_y)={\mathcal M}(f^{-1}(U_y))=\plim{x\in f^{-1}(U_y)} {\mathcal M}_x$; then $$\aligned \widetilde{f_*{\mathcal M}} & =\plim{y\in Y} \tilde i_{y*}(\plim{x\in f^{-1}(U_y)} \widetilde{ {\mathcal M}_x})= \plim{y\in Y} \tilde i_{y*}(\plim{x\in f^{-1}(U_y)} \tilde f_{xy*}\widetilde{{\mathcal M}_x})= \plim{y\in Y} \plim{x\in f^{-1}(U_y)} \tilde i_{y*}\tilde f_{xy*}\widetilde{{\mathcal M}_x} \\ &=\plim{y\in Y}\plim{x\in f^{-1}(U_y)} \tilde f_*\tilde i_{x*}\widetilde{{\mathcal M}_x}=\tilde f_*\plim{y\in Y}\plim{x\in f^{-1}(U_y)} \tilde i_{x*}\widetilde{{\mathcal M}_x}=\tilde f_*\plim{x\in X} \tilde i_{x*}\widetilde{{\mathcal M}_x}= \tilde f_*\tilde{\mathcal M}.\endaligned $$ 2. Observe that $(f^*{\mathcal N})_x={\mathcal N}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_x$; then $\widetilde{(f^*{\mathcal N})}_{|\tilde U_x}\overset{\text{\ref{P16.17}}}=\widetilde{({f^*{\mathcal N}})_{|U_x}}=\widetilde{{\mathcal N}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_x}$. On the other hand, $(\tilde f^*\tilde{\mathcal N})_{|\tilde U_x}=\tilde f_{xf(x)}^*(\tilde{\mathcal N}_{|\tilde U_{f(x)}})=\tilde f_{xf(x)}^*(\widetilde{{\mathcal N}_{f(x)}})= \widetilde{{\mathcal N}_{f(x)}\otimes_{{\mathcal O}_{f(x)}}{\mathcal O}_x}$. \end{proof} \begin{lemma} \label{L17.7} Let $X$ be a schematic finite space, let $U,V\subset X$ be two open subsets and let $\mathcal F$ be a sheaf of abelian groups on $\tilde U$. Consider the obvious commutative diagram $$\xymatrix{\tilde U \ar@{^{(}->}[r]^-i & \tilde X\\ \widetilde{U\cap V}=\tilde U\cap \tilde V \ar@{^{(}->}[r]_-{\bar i} \ar@{^{(}->}[u]_-{\bar j}& \tilde V \ar@{^{(}->}[u]^-{j}} $$ Then, $j^*(R^ni_*\mathcal F)=R^n\bar i_*(\bar j^*\mathcal F)$, for any $n\geq 0$. \end{lemma} \begin{proof}[Proof] Let ${\mathfrak p}\in \tilde V$. Then, $$\aligned (j^*(R^ni_*\mathcal F))_{\mathfrak p} & = (R^ni_*\mathcal F)_{\mathfrak p}=\ilim{{\mathfrak p}\in \bar W\subset \tilde X} H^n(\bar W\cap \tilde U, \mathcal F)\overset{\text{\ref{coro25}}}=\ilim{{\mathfrak p}\in \bar W'\subset \tilde V} H^n(\bar W'\cap \tilde U\cap \tilde V, \mathcal F_{|\tilde U\cap \tilde V})\\ & =(R^n\bar i_*(\bar j^*\mathcal F))_{\mathfrak p}.\endaligned$$ \end{proof} \begin{teorema} Let $X$ be a semiseparated schematic finite space and $\mathcal M$ a quasi-coherent ${\mathcal O}_X$-module. Then, $$H^n(X,\mathcal M)=H^n(\tilde X,\tilde{{\mathcal M}}),$$ for any $n\geq 0$. \end{teorema} \begin{proof}[Proof] Let $\tilde i\colon \tilde U_x\hookrightarrow \tilde X$ be the inclusion morphism. The morphism $\bar i\colon \tilde U_y\cap\tilde U_x\hookrightarrow \tilde U_y,$ ${\mathfrak p}\mapsto i({\mathfrak p})$, is an affine morphism of schemes, since $\tilde U_x\cap \tilde U_y$ is an affine scheme because $X$ is semiseparated. Let $\tilde{\mathcal N}$ be a quasi-coherent ${\mathcal O}_{\tilde U_x}$-module and ${\mathfrak p}\in \tilde U_y$. By Lemma \ref{L17.7}, $(R^n\tilde i_{*}\tilde{\mathcal N})_{\mathfrak p}=(R^n\bar i_{*}\tilde{\mathcal N}_{|\tilde U_y\cap \tilde U_x})_{\mathfrak p}=0$, for any $n>0$. Hence, $R^n\tilde i_{*}\tilde{\mathcal N}=0$ and $H^n( \tilde X,\tilde i_*\tilde{\mathcal N})=H^n(\tilde U_x,\tilde{\mathcal N})=0$, for any $n>0$. That is, $\tilde i_*\tilde{\mathcal N}$ is acyclic.
Given a quasi-coherent ${\mathcal O}_X$-module ${\mathcal M}$, denote $\tilde{{\mathcal M}}_{\tilde U_x}=\tilde i_*\tilde i\,^*\tilde{\mathcal M}$. Observe that $\tilde{{\mathcal M}}_{\tilde U_x}$ is acyclic and $\tilde{{\mathcal M}}_{\tilde U_x}(\tilde X)=\tilde {\mathcal M}_{|\tilde U_x}(\tilde U_x)=\widetilde{{\mathcal M}_{|U_x}} (\tilde U_x)={\mathcal M}(U_x)={\mathcal M}_x$. The obvious sequence of morphisms $$\tilde{\mathcal M} \to \prod_{x\in X} \tilde {\mathcal M}_{\tilde U_x} \to \prod_{x_1<x_2} \tilde {\mathcal M}_{\tilde U_{x_2}} \to \prod_{x_1<x_2<x_3} \tilde {\mathcal M}_{\tilde U_{x_3}}\to \cdots$$ is exact. Denote this resolution $\tilde{\mathcal M}\to \tilde C^{\displaystyle \cdot}\tilde {\mathcal M}$ and let ${\mathcal M}\to C^{\displaystyle \cdot} {\mathcal M}$ be the standard resolution of ${\mathcal M}$. Then, $$H^n(\tilde X,\tilde{\mathcal M})=H^n(\Gamma(\tilde X,\tilde C^{\displaystyle \cdot} \tilde {\mathcal M}))= H^n(\Gamma(X,C^{\displaystyle \cdot} {\mathcal M}))=H^n(X,{\mathcal M}).$$ \end{proof} \section{\rm Hom$_{sch}(\tilde X,\tilde Y)$ = Hom$_{[sch]}(X,Y)$} \begin{proposicion} Let $f\colon X\to Y$ be a schematic morphism. Then, the induced morphism $\tilde f\colon \tilde X\to \tilde Y$ is quasi-compact, that is, $\tilde f^{-1}(\bar V)$ is quasi-compact for any quasi-compact open subset $\bar V\subset \tilde Y$. \end{proposicion} \begin{proof}[Proof] Any morphism of affine schemes is quasi-compact. Given $x\in X$, denote by $\tilde f_x \colon \tilde U_x\to \tilde U_{f(x)}$ the morphism induced by $f_x\colon U_x\to U_{f(x)}$, $f_x(x'):=f(x')$. By Proposition \ref{P16.16}.3., $\bar V\cap \tilde U_{f(x)}$ is quasi-compact. Then, $$\tilde f^{-1}(\bar V)=\cup_{x\in X} \tilde f^{-1}(\bar V)\cap \tilde U_x= \cup_{x\in X} \tilde f_x^{-1}(\bar V\cap \tilde U_{f(x)})$$ is quasi-compact. \end{proof} \begin{proposicion} \label{L18.8} Let $f\colon X\to Y$ be a schematic morphism and $\tilde f\colon \tilde X\to\tilde Y$ the induced morphism. Then, $\tilde f\,^{-1}(\tilde U)=\widetilde{f^{-1}(U)}$, for any open subset $U\subset Y$. \end{proposicion} \begin{proof}[Proof] Given two open subsets $V,V'$ of a schematic finite space, observe that $\tilde V\cap \tilde V'= \widetilde{V\cap V'}$ and $\tilde V\cup \tilde V'=\widetilde{V\cup V'}$. Given $y\geq f(x)$, the morphism $\Spec \prod_{z\in U_{xy}} {\mathcal O}_z\to \Spec ({\mathcal O}_x\otimes_{{\mathcal O}_{f(x)}} {\mathcal O}_y)=\Spec {\mathcal O}_x\times_{\Spec {\mathcal O}_{f(x)}} \Spec {\mathcal O}_y$ is surjective, by Theorem \ref{K7}. Hence, $\tilde U_x\cap \tilde f^{-1}(\tilde U_y)=\cup_{z\in U_{xy}} \tilde U_z=\widetilde{U_{xy}}=\widetilde{U_x\cap f^{-1}(U_y)}$. Obviously, $ \widetilde{f^{-1}(U)}\subseteq \tilde f\,^{-1}(\tilde U)$. Let ${\mathfrak p}\in \tilde f\,^{-1}(\tilde U)$ and let $x\in X$ be such that ${\mathfrak p}\in \tilde U_x$. Then, $\tilde f({\mathfrak p})\in \tilde U_{f(x)}\cap \tilde U$. Let $y\in U_{f(x)}\cap U$ be such that $\tilde f({\mathfrak p})\in \tilde U_y$. Then, ${\mathfrak p}\in \tilde U_x\cap \tilde f\,^{-1}(\tilde U_y)=\widetilde{U_x\cap f\,^{-1}(U_y)}\subset \widetilde{f^{-1}(U)}$. Therefore, $\tilde f\,^{-1}(\tilde U)\subseteq \widetilde{f^{-1}(U)}$. \end{proof} \begin{definicion} Let $X$ and $Y$ be schematic finite spaces. We shall say that a morphism of ringed spaces $f'\colon \tilde X\to \tilde Y$ is a schematic morphism if $f'_*\tilde {\mathcal M}$ is a quasi-coherent ${\mathcal O}_{\tilde Y}$-module for any quasi-coherent ${\mathcal O}_{\tilde X}$-module $\tilde{\mathcal M}$.
\end{definicion} \begin{ejemplo} If $f\colon X\to Y$ is a schematic morphism, then $\tilde f\colon \tilde X\to \tilde Y$ is a schematic morphism, by Proposition \ref{P15.6}. \end{ejemplo} \begin{proposicion} \label{P16.21} Let $(f,f')$ be a morphism of ringed spaces, where $f\colon \Spec B\to \Spec A$ and $f'\colon \tilde A\to f_*\tilde B$. If $f_*\tilde M$ is a quasi-coherent $\tilde A$-module for any $B$-module $M$, then $(f,f')$ is a morphism of schemes.\end{proposicion} \begin{proof}[Proof] Let $f''\colon \Spec B\to \Spec A$ be the morphism defined on spectra by $f'_{\Spec A}\colon A\to B$. We only have to prove that $f=f''$. Let ${\mathfrak p}\in \Spec B$, $a\in A$ and $U_a=\Spec A\backslash (a)_0$. By the hypothesis, $f_*\widetilde{B/{\mathfrak p}}$ is a quasi-coherent $\tilde A$-module. Then, $$(B/{\mathfrak p})_a=((f_*\widetilde{B/{\mathfrak p}})(\Spec A))_a=(f_*\widetilde{B/{\mathfrak p}})(U_a)=\widetilde{B/{\mathfrak p}}(f^{-1}(U_a)).$$ Then, $$\aligned f'_{\Spec A}(a)\in{\mathfrak p} & \iff (B/{\mathfrak p})_a=0 \iff \widetilde{B/{\mathfrak p}}(f^{-1}(U_a))=0\iff f^{-1}(U_a)\cap ({\mathfrak p})_0=\emptyset\\ & \iff {\mathfrak p}\notin f^{-1}(U_a) \iff f({\mathfrak p})\notin U_a \iff a\in f({\mathfrak p}).\endaligned$$ Therefore, ${f'_{\Spec A}}^{-1}({\mathfrak p})=f({\mathfrak p})$, that is to say, $f''=f$. \end{proof} \begin{proposicion} Any schematic morphism $f'\colon \tilde X\to \tilde Y$ is a morphism of locally ringed spaces.\end{proposicion} \begin{proof}[Proof] Let ${\mathfrak p}\in\tilde U_x\subset \tilde X$. Let $g$ be the composition of the schematic morphisms $$\Spec {\mathcal O}_{x,{\mathfrak p}} \hookrightarrow \Spec {\mathcal O}_x\hookrightarrow \tilde X\to \tilde Y.$$ Let $y\in Y$ be a point such that $f'({\mathfrak p})\in \tilde U_y$; then $g^{-1}(\tilde U_y)=\Spec {\mathcal O}_{x,{\mathfrak p}}$. Consider the continuous map $h\colon \Spec {\mathcal O}_{x,{\mathfrak p}}\to \tilde U_y$, ${\mathfrak q}\mapsto g({\mathfrak q})$, and let $\tilde i_y\colon \tilde U_y \hookrightarrow \tilde Y$ be the inclusion morphism. Consider the morphism ${\mathcal O}_{\tilde Y}\to g_*\tilde {\mathcal O}_{x,{\mathfrak p}}$. Taking $\tilde i_y^*$, we obtain a morphism $\phi\colon {\mathcal O}_{\tilde U_y}\to h_*\tilde{\mathcal O}_{x,{\mathfrak p}}$. The morphism of ringed spaces $(h,\phi)$ is schematic, since $h_*{\mathcal M}=\tilde i_y^*\tilde i_{y*}h_*{\mathcal M}=\tilde i_y^*g_*{\mathcal M}$ is quasi-coherent, for any quasi-coherent module ${\mathcal M}$ on $\Spec {\mathcal O}_{x,{\mathfrak p}}$. By Proposition \ref{P16.21}, $h$ is a morphism of locally ringed spaces. We are done. \end{proof} \begin{lemma} \label{L17.11} Let ${\mathcal M}$ be a finitely generated ${\mathcal O}_{X}$-module. For any ${\mathfrak p}\in \tilde X$, there exist an open neighbourhood $\bar U$ of ${\mathfrak p}$ and an epimorphism of sheaves ${\mathcal O}^n_{\bar U} \to \tilde{\mathcal M}_{|\bar U}$. \end{lemma} \begin{proof}[Proof] $\bar V:=\{{\mathfrak q}\in \tilde X\colon \tilde {\mathcal M}_{{\mathfrak q}}=0\}$ is an open subset of $\tilde X$, since $\bar V\cap \tilde U_x=\{{\mathfrak q}\in \tilde U_x\colon {{\mathcal M}_x}_{{\mathfrak q}}=0\}$ is an open subset of $\tilde U_x$.
Hence, given another quasi-coherent module $\tilde {\mathcal M}'$ and a morphism of ${\mathcal O}_{\tilde X}$-modules $\tilde{\mathcal M}'\to \tilde{\mathcal M}$, if the morphism on stalks at ${\mathfrak p}$, $ \tilde {\mathcal M}'_{\mathfrak p}\to \tilde{\mathcal M}_{\mathfrak p}$, is an epimorphism, then there exists an open neighbourhood $\bar V$ of ${\mathfrak p}$ such that the morphism of sheaves $\tilde{\mathcal M}'_{|\bar V}\to \tilde{\mathcal M}_{|\bar V}$ is an epimorphism. Let $m_1,\ldots,m_r$ be a system of generators of the ${\mathcal O}_{\mathfrak p}$-module $\tilde {\mathcal M}_{\mathfrak p}$. Let $\bar W\subset \tilde X$ be an open neighbourhood of ${\mathfrak p}$ such that there exist $n_1,\ldots,n_r\in \tilde{\mathcal M}(\bar W)$ satisfying $n_{i,{\mathfrak p}}=m_i$. The morphism of ${\mathcal O}_{\bar W}$-modules ${\mathcal O}_{\bar W}^r \to \tilde{\mathcal M}_{|\bar W}$, $(a_i)\mapsto \sum_i a_i\cdot n_i$, is an epimorphism on stalks at ${\mathfrak p}$. Hence, it is an epimorphism in an open neighbourhood $\bar U$ of ${\mathfrak p}$. \end{proof} \begin{proposicion} Any schematic morphism $f'\colon \tilde X\to \tilde Y$ is quasi-compact. \end{proposicion} \begin{proof}[Proof] Let $\bar V\subset \tilde Y$ be a quasi-compact open subset; we have to prove that $f'^{-1}(\bar V)$ is quasi-compact. It suffices to prove that $\tilde U_x\cap f'^{-1}(\bar V)$ is quasi-compact, for any $x\in X$. Hence, we can suppose that $\tilde X=\tilde U_x$. Let $C:=\tilde Y-\bar V$. By Lemma \ref{L15.20}, there exists a finitely generated ideal $\mathcal I\subset {\mathcal O}_Y$ such that $(\mathcal I)_0=C$. Consider the exact sequence of morphisms $0\to \mathcal I \to {\mathcal O}_{\tilde Y}\to {\mathcal O}_{\tilde Y}/\mathcal I\to 0$. By Lemma \ref{L17.11}, there exist an open covering $\{\bar U_i\}$ of $\tilde Y$ and epimorphisms ${\mathcal O}_{\bar U_i}^{n_i}\to \mathcal I_{|\bar U_i}$. Taking $f'^*$, one has an exact sequence of morphisms $$f'^*\mathcal I\to \tilde{\mathcal O}_x \to f'^*({\mathcal O}_{\tilde Y}/\mathcal I)\to 0$$ and $\mathcal J:=\Ima[f'^*\mathcal I\to \tilde{\mathcal O}_x ]$ is a finitely generated quasi-coherent ideal of $\tilde {\mathcal O}_x$, since it is so locally (over $f'^{-1}(\bar U_i)$). $(\mathcal J)_0=f'^{-1}(C)$, therefore $f'^{-1}(\bar V)=\tilde X-(\mathcal J)_0$ is a quasi-compact open subset. \end{proof} Let ${\mathcal C}_{sch}$ be the category of schematic finite spaces and $W$ the family of quasi-isomorphisms. Let us construct the localization of ${\mathcal C}_{sch}$ by $W$, ${\mathcal C}_{sch}[W^{-1}]$. \begin{definicion} A schematic pair of morphisms from $X$ to $Y$, $X\,\overset{f}{-->} Y$, is a pair of schematic morphisms $(\phi',f')$ $$\xymatrix{ & X' \ar[ld]_-{\phi'} \ar[rd]^-{f'} & \\ X & & Y}$$ where $\phi'$ is a quasi-isomorphism. \end{definicion} \begin{ejemplo} A schematic morphism $f\colon X\to Y$ can be considered as a schematic pair of morphisms: consider the pair of morphisms $(Id_X,f)$ $$\xymatrix{ & X \ar[ld]_-{Id_X} \ar[rd]^-{f} & \\ X & & Y}$$ \end{ejemplo} \begin{definicion} Let $f=(\phi',f')\colon X --> Y$ and $g=(\varphi',g')\colon Y --> Z$ be two schematic pairs of morphisms, where $\phi'\colon X'\to X$ and $\varphi'\colon Y'\to Y$ are quasi-isomorphisms and $f'\colon X'\to Y$ and $g'\colon Y'\to Z$ are schematic morphisms. Let $\pi_1,\pi_2\colon X'\times_Y Y'\to X',Y'$ be the two obvious projection maps (observe that $\pi_1$ is a quasi-isomorphism).
We define $g\circ f:=(\phi'\circ\pi_1,g'\circ\pi_2)\colon X --> Z$ $$\xymatrix{ && X'\times_Y Y' \ar[ld]_-{\pi_1} \ar[rd]^-{\pi_2}&& \\& X' \ar[ld]_-{\phi'} \ar[rd]^-{f'} & & Y' \ar[ld]_-{\varphi'} \ar[rd]^-{g'} &\\ X \ar@{-->}[rr]_-{f} & & Y\ar@{-->}[rr]_-{g} & & Z}$$ \end{definicion} Let $f\colon X\to Y$ and $g\colon Y\to Z$ be schematic morphisms. Then, $$(Id_Y,g)\circ (Id_X,f)=(Id_X,g\circ f).$$ \begin{definicion} \label{d5} Two schematic pairs of morphisms $(\phi',f'),(\phi'',f'')\colon X-->Y$ $$\xymatrix{ & X' \ar[ld]_-{\phi'} \ar[rd]^-{f'} & & & X'' \ar[ld]_-{\phi''} \ar[rd]^-{f''} &\\ X & & Y & X & & Y}$$ are said to be equivalent, $(\phi',f')\equiv (\phi'',f'')$, if there exist a schematic space $T$ and two quasi-isomorphisms $\pi'\colon T\to X'$, $\pi''\colon T\to X''$ such that the diagram $$\xymatrix{ & & T \ar[dl]_-{\pi'} \ar[dr]^-{\pi''} &&\\ & X' \ar[dl]_-{\phi'} \ar[drrr]^-{f'} & & X'' \ar[dlll]_-{\phi''} \ar[rd]^-{f''} & \\ X & & & & Y }$$ is commutative. \end{definicion} In order to prove associativity, the reader should consider the following commutative diagram (where the double arrows are quasi-isomorphisms) $$\xymatrix{ & & & T\times_{X''} T' \ar@<0.2ex>[rd] \ar[rd] \ar@<0.2ex>[ld]\ar[ld] &&& \\ && T \ar@<0.2ex>[rd]\ar[rd] \ar@<0.2ex>[ld]\ar[ld] & & T' \ar@<0.2ex>[rd]\ar[rd] \ar@<0.2ex>[ld]\ar[ld] & & \\ & X' \ar@<0.2ex>[ld]\ar[ld] \ar[rrrrrd]& & X'' \ar[rrrd] \ar@<0.2ex>[llld]\ar[llld] & & X''' \ar[rd] \ar@<0.2ex>[llllld]\ar[llllld] & \\ X & & & & & & Y }$$ \begin{naida}[Definitions and notations] Let $f=(\phi,f')\colon X --> Y$ be a schematic pair of morphisms. The equivalence class of $f$ (resp. $(\phi,f')$) will be denoted $[f]$ (resp. $[\phi,f']$). We shall say that $[f]$ (or $[f]\colon X\to Y$) is a [schematic] morphism from $X$ to $Y$. Let $f\colon X\to Y$ be a schematic morphism. The equivalence class of $(Id,f)$ will be denoted $[f]$. \end{naida} \begin{proposicion} \label{Prop6} Let $(\phi',f')\colon X --> Y$ be a schematic pair of morphisms, where $\phi'\colon X'\to X$ is a quasi-isomorphism and $f'\colon X'\to Y$ is a schematic morphism. Let $\varphi\colon X''\to X'$ be a quasi-isomorphism. Then, $[\phi',f']=[\phi'\circ \varphi,f'\circ \varphi]$.\end{proposicion} \begin{proof}[Proof] Consider the commutative diagram $$\xymatrix{ & & X'' \ar[dl]_-{\varphi} \ar[dr]^-{Id} &&\\ & X' \ar[dl]_-{\phi'} \ar[drrr]_-{f'} & & X'' \ar[dlll]^-{\phi'\circ \varphi} \ar[rd]^-{f'\circ \varphi} & \\ X & & & & Y }$$ \end{proof} \begin{teorema} Let $f,F\colon X --> Y$ and $g,G\colon Y --> Z$ be schematic pairs of morphisms. If $[f]=[F]$ and $[g]=[G]$, then $[g\circ f]=[G\circ F]$. \end{teorema} \begin{proof}[Proof] 1. Let us prove that $[g\circ f]= [g\circ F]$. Write $f=(\phi,f')$, $\phi\colon X'\to X$, $f'\colon X'\to Y$. Let $\varphi\colon X''\to X'$ be a quasi-isomorphism and let $F':=(\phi\circ \varphi, f'\circ \varphi)$. By Proposition \ref{Prop6}, $[f]=[F']$. Consider the commutative diagram $$\xymatrix{ X'' \ar@<0.2ex>[d]\ar[d]_\varphi & X''\times_Y Y' \ar@<0.2ex>[d]\ar[d] \ar@<0.2ex>[l]\ar[l]& & \\ X' \ar[rrd]_{f'} \ar@<0.2ex>[d]\ar[d]_\phi & X'\times_Y Y' \ar@<0.2ex>[l]\ar[l] \ar[r] & Y' \ar@<0.2ex>[d]\ar[d] \ar[rd] & \\ X \ar@{-->}[rr]_{f,F'} & & Y\ar@{-->}[r]_g & Z}$$ By Proposition \ref{Prop6}, $[g\circ f]=[g\circ F']$. Finally, since $[f]=[F]$, there exists such an $F'$ refining $F$ as well, so that $[g\circ f]=[g\circ F']=[g\circ F]$. 2. Likewise, $[g\circ f]=[G\circ f]$. 3. The theorem is a consequence of 1. and 2.
\end{proof} Let $f\colon X--> Y$ and $g\colon Y--> Z$ be two schematic pairs of morphisms. We define $[g]\circ [f]:=[g\circ f].$ \begin{proposicion} \label{inv} Let $(\phi,f')\colon X --> Y$ be a schematic pair of morphisms. Then, \begin{enumerate} \item If $f'$ is a quasi-isomorphism, then $[\phi,f']^{-1}=[f',\phi]$. \item $[\phi,f']=[f']\circ [\phi]^{-1}$. \end{enumerate} \end{proposicion} \begin{proof}[Proof] We have the quasi-isomorphism $\phi\colon X'\to X$ and the morphism $f'\colon X' \to Y$. Consider the projections $$\pi_1,\pi_2\colon X'\times_XX'\to X', \, \pi_1(x_1',x'_2):=x'_1,\, \pi_2(x_1',x'_2):=x'_2,$$ which are quasi-isomorphisms. 1. Let $\delta\colon X'\to X'\times_X X'$ be the diagonal morphism, which is a quasi-isomorphism because $\pi_1$ and $\pi_1\circ \delta=Id$ are quasi-isomorphisms. Then, $$[\phi,f']\circ [f',\phi]=[f'\circ \pi_1,f'\circ \pi_2]\overset{\text{\ref{Prop6}}}=[f'\circ \pi_1\circ\delta,f'\circ \pi_1\circ\delta]=[f',f']\overset{\text{\ref{Prop6}}}=[Id_Y,Id_Y]=[Id_Y].$$ Likewise, $[f',\phi]\circ [\phi,f']=[Id_X]$. 2. It is easy to check that $[f']\circ [\phi]^{-1}=[Id_{X'}, f']\circ [\phi,Id_{X'}]=[\phi,f'].$ \end{proof} \begin{proposicion} \label{paque} Let $X$ be a minimal schematic finite space and let $f,g\colon X\to Y$ be two schematic morphisms. Then, $[f]=[g]$ iff $f=g$.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ There exists a (surjective) quasi-isomorphism $\pi\colon T\to X$ such that $f\circ \pi=g\circ \pi$. Then $f$ and $g$ are equal as continuous maps. Finally, the morphism ${\mathcal O}_Y\to f_*\pi_*{\mathcal O}_T=f_*{\mathcal O}_X$ coincides with the morphism ${\mathcal O}_Y\to g_*\pi_*{\mathcal O}_T=g_*{\mathcal O}_X$. \end{proof} \begin{proposicion} \label{paque+} Let $X$ be a minimal schematic finite space, let $f,g\colon X\to Y$ be two schematic morphisms and let $\pi\colon Y\to Y'$ be a quasi-isomorphism. Then, $\pi\circ f=\pi\circ g$ iff $f=g$.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ Observe that $[f]=[g]$, since $[\pi]\circ [f]=[\pi\circ f]=[\pi\circ g]=[\pi]\circ [g]$ and $[\pi]$ is invertible. By Proposition \ref{paque}, $f=g$.\end{proof} \begin{proposicion} \label{P14.11} A [schematic] morphism $[\phi,f]\colon X \to Y$ is an isomorphism iff $f$ is a quasi-isomorphism.\end{proposicion} \begin{proof}[Proof] $\Leftarrow)$ $[\phi,f]^{-1}=[f,\phi]$, by Proposition \ref{inv}. $\Rightarrow)$ $[\phi,f]=[f]\circ [\phi]^{-1}$ is invertible, hence $[f]$ is invertible. Write $f\colon Z\to Y$ and let $[\varphi,g]\colon Y \to Z$ be the inverse morphism of $[f]$, where $\varphi \colon T\to Y$ is a quasi-isomorphism (and we can assume that $T$ is minimal) and $g\colon T\to Z$ is a schematic morphism. Then, $[\Id_Y]=[f]\circ [\varphi,g]=[f]\circ [g]\circ [\varphi]^{-1}$ and $[\varphi]=[f\circ g]$. Hence, $\varphi=f\circ g$, by Proposition \ref{paque}. Besides, $[\Id_Z]=[\varphi,g]\circ [f]$. If we consider the commutative diagram $$\xymatrix{Z\times_YT \ar[r]^-{\pi_2} \ar[d]_-{\pi_1} & T \ar[d]^-{\varphi} \ar[rd]^-g & \\ Z \ar[r]_-f & Y & Z}$$ then $[\Id_Z]=[\varphi,g]\circ [f]=[\pi_1,g\circ \pi_2]$. Let $i\colon (Z\times_YT)_M\subseteq Z\times_Y T$ be the natural inclusion, $\pi_1':=\pi_1\circ i$ and $\pi_2':=\pi_2\circ i$. Then, $[\pi'_1,g\circ \pi'_2]=[\Id_Z]$ and $[\pi'_1]=[g\circ \pi'_2]$. By Proposition \ref{paque}, $\pi'_1=g\circ \pi'_2$. Let $\tilde g\colon T \to (Z\times_YT)_M$, $\tilde g(t)=(g(t),t)$. Observe that $$\pi'_1\circ \tilde g \circ \pi'_2=g\circ \pi'_2=\pi'_1.$$ By Proposition \ref{paque+}, $\tilde g\circ \pi_2'=\Id_{(Z\times_YT)_M}$.
Obviously, $\pi_2'\circ \tilde g=\Id_T$. Hence, $\pi_2'$ is an isomorphism and $\pi_2$ a quasi-isomorphism. Finally, $f$ is a quasi-isomorphism since $\pi_1$, $\varphi$ and $\pi_2$ are quasi-isomorphisms and $f\circ \pi_1=\varphi\circ \pi_2$. \end{proof} Given a [schematic] morphism $g=[\phi,f]\colon X \to Y$, consider the functors $$\begin{array}{l} g_*\colon\, {\bf Qc\text{-}Mod}_X \,\to \,{\bf Qc\text{-}Mod}_Y,\,\, g_*\mathcal M:=f_*\phi^*\mathcal M\\ g^*\colon \,{\bf Qc\text{-}Mod}_Y\, \to \,{\bf Qc\text{-}Mod}_X,\,\, g^*\mathcal M:=\phi_*f^*\mathcal M\end{array}$$ Recall that $[\phi,f]=[\phi\circ\varphi,f\circ \varphi]$, where $\varphi$ is a quasi-isomorphism. Then we have canonical isomorphisms $f_*\phi^*\mathcal M=f_*\varphi_*\varphi^*\phi^*\mathcal M=(f\circ \varphi)_*(\phi\circ \varphi)^*\mathcal M$ and $\phi_*f^*\mathcal M=\phi_*\varphi_*\varphi^*f^*\mathcal M=(\phi\circ \varphi)_*(f\circ \varphi)^*\mathcal M$, so these functors do not depend on the chosen representative. \begin{proposicion} Let $g=[\phi,f]\colon X\to Y$ be a [schematic] morphism. The functors $g_*$ and $g^*$ are mutually inverse iff $g$ is a [schematic] isomorphism.\end{proposicion} \begin{proof}[Proof] $\Rightarrow)$ If $\Id=g_*g^*$ and $\Id=g^*g_*$, then $\Id=f_*\phi^*\phi_*f^*=f_*f^*$ and $\Id=\phi_*f^*f_*\phi^*$, hence $f^*f_*=\phi^*\phi_*=\Id$. By Theorem \ref{T12.7}, $f$ is a quasi-isomorphism and $g$ is invertible. $\Leftarrow)$ By Proposition \ref{P14.11}, $f$ is a quasi-isomorphism, then $g_*g^*=f_*\phi^*\phi_*f^*=f_*f^*=\Id$ and $g^*g_*=\phi_*f^*f_*\phi^*=\phi_*\phi^*=\Id$. \end{proof} \begin{notacion} Let $X$ and $Y$ be two schematic finite spaces. $\Hom_{[sch]}(X,Y)$ will denote the family of [schematic] morphisms from $X$ to $Y$. $\Hom_{sch}(\tilde X,\tilde Y)$ will denote the set of schematic morphisms from $\tilde X$ to $\tilde Y$. \end{notacion} \begin{lemma} \label{L17.2} Let $g\colon Y'\to Y$ be a [schematic] isomorphism and $X$ a schematic finite space. Then, the maps $$\begin{array}{l} \Hom_{[sch]}(X,Y')\to \Hom_{[sch]}(X,Y), \,\,[f]\mapsto [g]\circ [f]\\ \Hom_{[sch]}(Y,X)\to \Hom_{[sch]}(Y',X), \,\,[f]\mapsto [f]\circ [g]\end{array}$$ are bijective. \end{lemma} \begin{proposicion} Let $X$ be a schematic finite space and $Y$ an affine finite space. Then, $$\Hom_{[sch]}(X,Y)=\Hom_{rings}({\mathcal O}(Y), {\mathcal O}(X)).$$ \end{proposicion} \begin{proof}[Proof] For any schematic finite space $T$, $\Hom_{sch}(T,(*,A))=\Hom_{rings}(A,{\mathcal O}(T)).$ Consider the natural morphism $\pi\colon Y\to (*,{\mathcal O}(Y))$, which is a quasi-isomorphism. Then, $$\Hom_{[sch]}(X,Y)=\Hom_{[sch]}(X,(*,{\mathcal O}(Y)))=\Hom_{rings}({\mathcal O}(Y),{\mathcal O}(X)).$$\end{proof} Let $[\phi,f]\colon X\to Y$ be a [schematic] morphism, where $\phi\colon X'\to X$ is a quasi-isomorphism and $f\colon X'\to Y$ a schematic morphism. Consider the morphisms $$\xymatrix{\tilde{X'} \ar@{=}[d]^-{\tilde \phi} \ar[rd]^-{\tilde f}& \\ \tilde X & \tilde Y}$$ where $\tilde\phi$ is an isomorphism, by Proposition \ref{P16.4}. The map $$\Hom_{[sch]}(X,Y) \to \Hom_{sch}(\tilde X,\tilde Y),\,\, [\phi,f]\mapsto \tilde f\circ \tilde\phi^{-1}$$ is well defined. \begin{lemma} \label{L17.4} Let $X$ be a minimal schematic space, $Y$ a schematic $T_0$-space, $f\colon X\to Y$ a schematic morphism and $\tilde f\colon \tilde X\to\tilde Y$ the induced morphism. Given $x\in X$, $y=f(x)$ iff $y$ is the greatest element of $Y$ such that $\tilde f(\tilde U_x)\subseteq \tilde U_{y}$. \end{lemma} \begin{proof}[Proof] Obviously, $f(U_x)\subset U_{f(x)}$ and $\tilde f(\tilde U_x)\subset \tilde U_{f(x)}$.
If $\tilde f(\tilde U_x)\subseteq \tilde U_{y'}$, then $\tilde U_x\subset \tilde f^{-1}(\tilde U_{y'})\overset{\text{\ref{L18.8}}}=\widetilde{f^{-1}(U_{y'})}$. Hence, $\tilde U_x=\tilde U_x\cap \widetilde{f^{-1}(U_{y'})}=\widetilde{U_x\cap f^{-1}(U_{y'})}$. Therefore, $x\in U_x\cap f^{-1}(U_{y'})$, since $x$ is not a removable point. That is, $f(x)\in U_{y'}$ and $f(x)\geq y'$. \end{proof} \begin{proposicion} \label{P17.5} Let $X$ and $Y$ be schematic finite spaces. The natural map $$\Hom_{[sch]}(X,Y)\to \Hom_{sch}(\tilde X,\tilde Y),\,[\phi,f]\mapsto \tilde f\circ \tilde\phi^{-1}$$ is injective. \end{proposicion} \begin{proof}[Proof] Let $[\phi,f],[\phi',f']\colon X\to Y$ be [schematic] morphisms such that $\tilde f\circ \tilde\phi^{-1}=\tilde f'\circ \tilde\phi'^{-1}$. Passing to a common refinement of the two quasi-isomorphisms, we can suppose that $\phi=\phi'$; then $\tilde f =\tilde f'$. Say that $f$ and $f'$ are morphisms from $X'$ to $Y$. By Proposition \ref{P16.4} and Lemma \ref{L17.2}, we can suppose that $X'$ and $Y$ are minimal schematic spaces. By Lemma \ref{L17.4}, the map $f$ is determined by $\tilde f$. The morphism of rings ${\mathcal O}_{f(x')}\to {\mathcal O}_{x'}$ is determined by the morphism of schemes $\Spec {\mathcal O}_{x'}\to \Spec {\mathcal O}_{f(x')}$. Therefore, $f=f'$. \end{proof} \begin{definicion} Let $U\overset{i_1}\to U_1$ and $U\overset{i_2} \to U_2$ be quasi-open immersions. We denote $U_1\cupa{U} U_2:= C(i_1)\coproda{U} C(i_2)$. \end{definicion} \begin{vacio} \label{ll} Observe that $C(i_1)$ and $C(i_2)$ are open subsets of $U_1\cupa{U} U_2$, $C(i_1)\cup C(i_2)= U_1\cupa{U} U_2$, $C(i_1)\cap C(i_2)= U$ and the natural morphisms $C(i_j)\to U_j$ are quasi-isomorphisms, for $j=1,2$. \end{vacio} Let $U_1,U_2\subset X$ be open subsets. Then, the natural morphism $U_1\cupa{U_1\cap U_2} U_2\to U_1\cup U_2$ is a quasi-isomorphism. Let $$\xymatrix @R=8pt { & V_1 \ar[rr]^-{\tilde{}}& & U_1\\ V \ar[ur] \ar[dr] \ar[rr]^-{\tilde{}} & & U \ar[ru] \ar[rd] & \\ & V_2 \ar[rr]^-{\tilde{}}& & U_2} $$ be a commutative diagram of quasi-open immersions, where the arrows $\tilde\longrightarrow$ are quasi-isomorphisms. Then, the natural morphism $V_1\cupa{V} V_2\to U_1\cupa{U} U_2$ is a quasi-isomorphism. \begin{teorema} \label{T18.14} Let $U\overset{i_1}\to U_1$ and $U\overset{i_2}\to U_2$ be quasi-open immersions. Then, $$\Hom_{[sch]}(U_1\cupa{U} U_2,Y)=\Hom_{[sch]}(U_1,Y)\times_{\Hom_{[sch]}(U,Y)} \Hom_{[sch]}(U_2,Y).$$ In other words (by \ref{ll}), let $U_1,U_2\subset X$ be open subsets. Then, $$\Hom_{[sch]}(U_1\cup U_2,Y)=\Hom_{[sch]}(U_1,Y)\times_{\Hom_{[sch]}(U_1\cap U_2,Y)} \Hom_{[sch]}(U_2,Y).$$ \end{teorema} \begin{proof}[Proof] Let $U_1\overset{j_1}\hookrightarrow U_1\cup U_2$ and $U_2\overset{j_2}\hookrightarrow U_1\cup U_2$ be the obvious inclusion morphisms. We have to prove that $$\begin{array}{rcl} \Hom_{[sch]}(U_1\cup U_2,Y) & \to & \Hom_{[sch]}(U_1,Y)\times_{\Hom_{[sch]}(U_1\cap U_2,Y)}\Hom_{[sch]}(U_2,Y) \\ \left[f\right] & \mapsto & ([f]\circ [j_1],[f]\circ [j_2])\end{array}$$ is bijective. Let $[f],[g]$ be such that $([f]\circ [j_1],[f]\circ [j_2])=([g]\circ [j_1],[g]\circ [j_2])$. There exist a minimal schematic finite space $W$, a quasi-isomorphism $\phi \colon W\to U_1\cup U_2$ and morphisms $f',g'\colon W\to Y$ such that $[f]=[\phi ,f']$ and $[g]=[\phi ,g']$. Then, $$[f]\circ [j_1]=[\phi_{|\phi^{-1}(U_1)},f'_{|\phi^{-1}(U_1)}] \text{ and } [g]\circ [j_1]=[\phi_{|\phi^{-1}(U_1)},g'_{|\phi^{-1}(U_1)}].$$ By Proposition \ref{paque}, $f'_{|\phi^{-1}(U_1)}=g'_{|\phi^{-1}(U_1)}$.
Likewise, $f'_{|\phi^{-1}(U_2)}=g'_{|\phi^{-1}(U_2)}$. That is, $f'=g'$ and $[f]=[g]$. Let $([f_1],[f_2])\in \Hom_{[sch]}(U_1,Y)\times_{\Hom_{[sch]}(U,Y)}\Hom_{[sch]}(U_2,Y)$. Write $[f_1]=[\phi_1,g_1]$ and $[f_2]=[\phi_2,g_2]$. Since $[f_1]\circ [i_1]=[f_2]\circ [i_2]$, there exist a schematic finite space $V$ and a commutative diagram $$\xymatrix @R=10pt @C=32pt{ & V_1\times_{U_1} U_1\cap U_2 \ar@{.>}[dd] \ar[r] & V_1 \ar[rdd]^-{g_1} \ar@{.>}[d]_-{\phi_1} & \\ & & U_1 & \\ V \ar@{.>}[ruu] \ar@{.>}[rdd] & U_1\cap U_2 \ar[ru]^-{i_1} \ar[rd]_-{i_2} & & Y \\ & & U_2 & \\& V_2\times_{U_2} U_1\cap U_2 \ar@{.>}[uu] \ar[r] & V_2 \ar@{.>}[u]^-{\phi_2} \ar[ruu]_-{g_2} & }$$ (where the dotted arrows are quasi-isomorphisms). Then, we have a schematic morphism $g\colon V_1\cupa{V} V_2\to Y$ and the composition $\phi$ of the quasi-isomorphisms $V_1\cupa{V} V_2\to U_1\cupa{U_1\cap U_2} U_2\to U_1\cup U_2$. The reader can check that $[\phi,g]\mapsto ([f_1],[f_2])$. \end{proof} \begin{teorema} Let $X$ and $Y$ be schematic finite spaces and suppose that $Y$ is semiseparated. Then, the map $$\Hom_{[sch]}(X,Y)\to \Hom_{sch}(\tilde X,\tilde Y),\,[\phi,f]\mapsto \tilde f\circ \tilde\phi^{-1}$$ is bijective. \end{teorema} \begin{proof}[Proof] By Proposition \ref{P17.5}, it is injective. Let $f'\in \Hom_{sch}(\tilde X,\tilde Y)$. 1. Assume that $\tilde X=\Spec A$ is affine. We can suppose that $Y$ is a $T_0$-space. Observe that $f'_*\tilde A$ is a quasi-coherent ${\mathcal O}_{\tilde Y}$-module, hence $f'_*\tilde A=\tilde {\mathcal A}$, where $\mathcal A$ is a quasi-coherent ${\mathcal O}_Y$-module and an ${\mathcal O}_Y$-algebra. Let us prove that $(Y,{\mathcal A})$ is affine. We only have to prove that $(Y,{\mathcal A}_{\mathfrak p})$ is affine for any ${\mathfrak p}\in\Spec A$. Given an open subset $U\subset Y$, let $I_U$ be the set of quasi-compact open subsets $\bar V\subset \tilde Y$ such that $\tilde U\subset \bar V$. Recall that $$\mathcal A(U)=\mathcal A_{|U}(U)=\widetilde{{\mathcal A}_{|U}}(\tilde U)\overset{\text{\ref{P16.17}}}=\tilde {\mathcal A}_{|\tilde U}(\tilde U)\overset{\text{\ref{coro25}}}=\ilim{\bar V\in I_U} \tilde{\mathcal A} (\bar V)=\ilim{\bar V\in I_U} \tilde A(f'^{-1}(\bar V)). $$ Denote by $f'_{\mathfrak p}$ the composition of the morphisms $\Spec A_{\mathfrak p}\hookrightarrow \Spec A\overset{f'}\to \tilde Y$. Observe that $${\mathcal A}_{{\mathfrak p}}(U):=\mathcal A(U)_{\mathfrak p}=\ilim{\bar V\in I_U} \tilde A(f'^{-1}(\bar V))_{\mathfrak p}=\ilim{\bar V\in I_U} \tilde A_{\mathfrak p}({f'_{\mathfrak p}}^{-1}(\bar V)).$$ Then, we can suppose that $A=A_{\mathfrak p}$. The obvious morphism $\Id\colon (Y,{\mathcal A})\to (Y,{\mathcal O}_Y)$ is affine and $(U_{yy'},{{\mathcal O}_Y}_{|U_{yy'}})$ is affine, for any $y,y'\in Y$, since $Y$ is semiseparated. Hence, $(U_{yy'},{\mathcal A}_{|U_{yy'}})$ is affine. Then, the morphism ${\mathcal A}_{yy'}\to \prod_{k\in U_{yy'}} {\mathcal A}_k$ is faithfully flat. Let $y$ be the greatest point of $Y$ such that $f'({\mathfrak p})\in \tilde U_y$ and let $y'\in Y$ be another point. Observe that $${\mathcal A}_{y'}=\ilim{\bar V\in I_{U_{y'}}} \tilde{\mathcal A} (\bar V) \overset*=\ilim{\bar V\in I_{U_{y'}},\bar W\in I_{U_{y}} } \tilde{\mathcal A} (\bar V\cap \bar W)\overset{\text{\ref{L16.18}}}=\ilim{\bar W\in I_{U_{yy'}}} \tilde{\mathcal A} (\bar W)\overset{\text{\ref{coro25}}}=\tilde{\mathcal A}_{|\tilde U_{yy'}}(\tilde U_{yy'})= {\mathcal A}_{yy'}$$ ($\overset*=$ since $f'^{-1}(\bar V)=f'^{-1}(\bar V\cap \bar W)$).
Then, ${\mathcal A}_{y'}={\mathcal A}_{yy'} \to \prod_{k\in U_{yy'}} {\mathcal A}_{k}$ is faithfully flat. Hence, $y'$ is a removable point of $(Y,{\mathcal A})$ if $y\not\leq y'$. Therefore, $(Y,{\mathcal A})$ is quasi-isomorphic to $(U_y,{\mathcal A}_{|U_y})$, hence it is affine. Finally, $\Spec {\mathcal A}=\Spec {\mathcal A}(Y)=\Spec A=\tilde X$, and the morphism $\tilde X=\Spec {\mathcal A} \to \Spec {\mathcal O}_Y=\tilde Y$ induced by the obvious morphism $\Id\colon (Y,{\mathcal A})\to (Y,{\mathcal O}_Y)$ is $f'$. Therefore, $\Hom_{[sch]}(X,Y)=\Hom_{sch}(\tilde X,\tilde Y)$. 2. Now, the general case: $$\aligned \Hom_{[sch]}(X,Y) & =\Hom_{[sch]}(\ilim{x\in X} U_x,Y) \overset{\text{\ref{T18.14}}}= \plim{x\in X}\Hom_{[sch]}(U_x,Y)=\plim{x\in X}\Hom_{sch}(\tilde U_x,\tilde Y)\\ & \overset +=\Hom_{sch}(\ilim{x\in X} \tilde U_x,\tilde Y)= \Hom_{sch}(\tilde X,\tilde Y)\endaligned$$ ($\overset +=$: given $(f_x)\in \plim{x\in X}\Hom_{sch}(\tilde U_x,\tilde Y)$, the induced morphism of ringed spaces $f\colon \ilim{x\in X} \tilde U_x\to \tilde Y$ is schematic, since for any quasi-coherent ${\mathcal O}_{\tilde X}$-module $\tilde {\mathcal M}=\plim{x\in X}\tilde i_{x*}\widetilde{\mathcal M}_x$, the ${\mathcal O}_{\tilde Y}$-module $f_*\tilde {\mathcal M}=\plim{x\in X}f_*\tilde i_{x*}\widetilde{\mathcal M}_x= \plim{x\in X}f_{x*}\widetilde{\mathcal M}_x$ is quasi-coherent). \end{proof}
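As a plausibility check of this theorem (a remark added here, which assumes that affine finite spaces are semiseparated, as the arguments above suggest): take $X=(*,A)$ a punctual space and $Y$ an affine finite space. Then $\tilde X=\Spec A$ and, since the natural quasi-isomorphism $\pi\colon Y\to (*,{\mathcal O}(Y))$ induces an isomorphism $\tilde Y\simeq \Spec {\mathcal O}(Y)$, the bijection of the theorem reduces to the classical affine correspondence $$\Hom_{rings}({\mathcal O}(Y),A)=\Hom_{sch}(\Spec A,\Spec {\mathcal O}(Y)),$$ in agreement with the equality $\Hom_{[sch]}(X,Y)=\Hom_{rings}({\mathcal O}(Y),{\mathcal O}(X))$ proved above.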
\section{Introduction} Parametric amplifiers having quantum-limited noise properties play a crucial role in a variety of quantum information technologies. In optical-domain systems, they are a key resource for preparing both discrete-variable and continuous-variable entangled states \cite{WeedbrookRMP2012}. For superconducting microwave circuits, quantum parametric amplifiers harnessing Josephson nonlinearities serve as a workhorse for fast, high-fidelity qubit measurements. Given the crucial role they play, there has been an enormous amount of activity (especially in the microwave domain) looking at alternative amplifier designs that provide advantages in terms of bandwidth, noise and isolation (i.e.~in-built non-reciprocity) (see e.g.~Refs.~\cite{Castellanos-Beltran2008,Malnou2018,Bergeal2010, Bergeal2010a, Abdo2011, Abdo2013, Chien2019,Macklin2015,Sivak2019, Abdo2013b,Sliwa2015, Lecocq2017, Kamal2017, Mercier2019, Lecocq2019,Metelmann2014,Roy2015,Metelmann2015,Ockeloen2016, Metelmann2017,Zhong2019}). Almost all these strategies ultimately use parametric processes that induce dynamical instability: in the absence of external dissipation or additional nonlinearities, the internal intra-cavity system dynamics would lead to unbounded exponential growth. This instability is cut off by dissipation (i.e.~the coupling to input and output transmission lines), and the underlying instability physics is used to generate the desired amplification. This basic paradigm is at the heart of standard degenerate and non-degenerate parametric amplifier designs (see \cite{RevModPhys,Roy2016} for pedagogical reviews), as well as of more complex amplifier architectures. While undeniably powerful, the strategy of harnessing an instability has some inherent drawbacks. As we discuss, it necessarily leads to a fundamental gain-bandwidth trade-off: amplification is achieved by tuning pump parameters closer and closer to instability, which correspondingly increases the system's response time. This is analogous to the phenomenon of critical slowing down that occurs when one approaches a second-order phase transition. The net result is a system bandwidth that scales inversely with the square root of the amplifier's power gain \cite{RevModPhys,Roy2016}. \begin{figure} \centering\includegraphics[width=\columnwidth]{Figure1.pdf} \caption{Photons entering the Bogoliubov amplifier at the input port are converted into Bogoliubov quasiparticles via the squeezing transformation $\hat S_{\rm in}$. When leaving the amplifier, they are converted back to photons via the inverse squeezing transformation $\hat S_{\rm out}$. Crucially, $\hat S_{\rm out} \neq \hat S_{\rm in}^{-1}$, which results in the enhancement of the photon amplitude by a factor $\sqrt{\mathcal G}>1$.} \label{Figure1} \end{figure} Here, we present an extremely general way to achieve ideal quantum-limited amplification that does not involve proximity to a dynamical instability. The basic idea is to exploit what at first glance seems a very sub-optimal situation: use parametric processes that are effectively detuned from resonance. Such systems are described by mean-field Hamiltonians that are fully stable even without any external dissipation. They can nonetheless generate remarkably ideal quantum-limited amplification: in particular, even though they use at most two photonic cavity modes, they are {\it fundamentally not subject to any gain-bandwidth constraints}. As we discuss, this is a direct consequence of the eigenstates of these systems being {\it squeezed photons}.
Such squeezed photons are described by so-called Bogoliubov modes, bosonic annihilation operators that are generated by squeezing transformations acting on standard photonic creation and destruction operators. The dynamically stable amplifiers we introduce here all share the feature of possessing a set of Bogoliubov modes whose number operators are conserved. The heuristic operation of our class of amplifiers is sketched in Fig.~\ref{Figure1}. Photons enter the input port and are effectively converted to Bogoliubov quasiparticles; this involves a squeezing transformation $\hat {S}_{\rm in}$. These quasiparticles then have simple, number-conserving dynamics. To leave the amplifier, the Bogoliubov quasiparticles are converted back to photons via a second squeezing transformation $\hat {S}_{\rm out}$. Amplification is achieved by the simple fact that $ \hat {S}_{\rm out} \neq \hat {S}_{\rm in}^{-1} $: there is a net squeezing-amplification transformation implemented on photons scattered by the amplifier. Our schemes have the further advantage that, by a simple parameter tuning, they can directly implement the enhanced-bandwidth strategy of Ref.~\cite{Roy2015}, where the frequency-dependent gain is extremely flat near resonance. While Ref.~\cite{Roy2015} achieved this via the introduction of a secondary, optimized external impedance, in our systems this bandwidth enhancement is in-built. Yet another advantage of these designs is an intrinsic resilience against pump-depletion effects, something that is a direct consequence of not operating in proximity to an instability. We show that our strategy is extremely general, and discuss a variety of implementations. The simplest corresponds to a new way to operate a standard single-cavity DPA such that there is no gain-bandwidth limitation; this is analyzed in Sec.~\ref{SecOptDetunedDPA}. We focus most of our attention however on an even more novel setup: a two-mode, two-port amplifier that achieves a perfect DPA squeezing operation {\it in transmission}. Despite the presence of an extra mode compared to a standard DPA, this system nonetheless has quantum-limited performance (i.e.~one quadrature is amplified noiselessly). It also has no gain-bandwidth limitation, and overcomes a key limitation of standard DPAs: they operate in reflection, meaning that there is no intrinsic separation of amplifier input and output. A detailed analysis of this setup is presented in Sec.~\ref{SecOptImbalnecedDPA}. We also show that this setup is compatible with current superconducting circuit technology: Sec.~\ref{SecExpResults} describes results of an experimental implementation of this novel two-mode amplifier, showing a bandwidth which is enhanced by over a factor of $6$ compared to standard setups. Note that previous work has explored the use of detuned parametric driving to realize quantum non-demolition (QND) dynamics, which conserves one or more photonic quadratures \cite{Szorkovszky2013,Szorkovszky2014,Szorkovszky2014b}. Such QND interactions are distinct from the ideas we present here; in particular, such QND systems are on the cusp of instability (i.e.~they cannot be diagonalized), whereas our systems are fully stable (i.e.~described by diagonalizable Hamiltonians). \section{The basics: single mode Bogoliubov-mode amplifier} \subsection{Recap of a standard DPA} \label{SecRecapDPA} We first recall the basics of a degenerate parametric amplifier (DPA) in the stiff-pump limit.
The amplifier consists of a principal cavity with a weak nonlinear coupling to an auxiliary pump mode. By driving this mode strongly with an external pump at an appropriate frequency, one realizes a mean-field Hamiltonian of the form: \begin{align}\label{eq:DPA} \hH = \Delta \hat a^\dagger \hat a + \frac{\nu}{2} \left( \hat a^\dagger \hat a^\dagger + \hat a \hat a \right) . \end{align} We are working in a rotating frame set by the pump frequency. $\hat a$ is the photon lowering operator for the signal mode, and $\Delta$ is the detuning of the pump from the cavity resonance frequency. $\nu$ is the effective parametric drive amplitude (determined by both the nonlinearity and the pump amplitude). We assume without loss of generality that both $\Delta$ and $\nu$ are real and positive. Note that for $\Delta < \nu$, the DPA Hamiltonian given in Eq.~(\ref{eq:DPA}) is unstable: it cannot be diagonalized, and in the absence of dissipation it generates unbounded exponential growth, corresponding to a dynamical instability. Coupling the system to an input-output waveguide with coupling rate $\kappa$ makes the system dynamically stable as long as $\nu \leq \kappa/2$. Gain is generated by approaching the point of instability, i.e.~by increasing $\nu$ so it approaches $\kappa/2$ from below. Signals incident on the waveguide will be reflected with gain. The frequency-dependent gain is obtained via input-output theory \cite{Gardiner1985} and yields, for zero detuning $(\Delta = 0)$, \begin{align}\label{eq:Gain_Normal} \mathcal G[\omega] = \frac{\left(\frac{2\omega}{D}\right)^2+ \mathcal G_0}{\left(\frac{2\omega}{D}\right)^2+1}, \hspace{0.5cm} \sqrt{\mathcal G_0} = \frac{\frac{\kappa}{2}+\nu}{\frac{\kappa}{2}- \nu} \end{align} where $D = 2 \kappa/( \sqrt{\mathcal G_0}+1)$ serves as the effective bandwidth of the amplifier, defined here as the full width at half of the maximum gain. Only signals contained within a frequency range of approximately $D$ around resonance will be significantly amplified. In the relevant case where the zero-frequency power gain is very large, $ \mathcal G_0 \gg 1$, we have \begin{align} D \approx \frac{2\kappa}{ \sqrt{\mathcal G_0} } \implies D \sqrt{ \mathcal G_0} \approx 2 \kappa . \end{align} This encapsulates the fundamental gain-bandwidth product that limits conventional parametric amplifiers. Any increase of the peak gain is necessarily accompanied by a reduction in the operating bandwidth of the amplifier. This is a generic feature of any amplifier that generates gain by operating closer to a point of dynamical instability. \begin{figure} \centering\includegraphics[width=0.45\textwidth]{Figure2.pdf} \caption{ Plot of the gain as a function of injected signal frequency for a zero-frequency gain of 20 dB. The optimally detuned Bogoliubov amplifier (ODBA) has a broad range of frequencies over which amplification occurs (solid black line). Conventional parametric amplifiers only amplify in a narrow range of frequencies (dotted teal line). } \label{Figure2} \end{figure} \subsection{Optimally-detuned DPA: The ODBA}\label{SecOptDetunedDPA} We now consider our DPA system in a case where the Hamiltonian is stable and diagonalizable. This requires $\Delta > \nu$.
\subsection{Optimally-detuned DPA: The ODBA}\label{SecOptDetunedDPA} We now consider our DPA system in a case where the Hamiltonian is stable and diagonalizable. This requires $\Delta > \nu$. Here, the Hamiltonian can be diagonalized as \begin{equation} \hH = \Lambda \; \hat {\beta}^\dagger \hat {\beta}, \,\,\,\,\,\, \hat {\beta} = \cosh(r) \hat {a} + \sinh(r) \hat {a}^\dag , \end{equation} with $\Lambda = \sqrt{\Delta^2 - \nu^2}$ and where $\hat {\beta}$ is a canonical bosonic lowering operator in the Bogoliubov basis. The eigenstates of the Bogoliubov mode number operator $\hat n_{r} = \hat\beta^{\dag}\hat \beta $ are squeezed Fock states $|n_{r}\rangle = \hat S(r) |n\rangle$ with $\hat S(r) = \exp[(r/2)(\hat a \hat a - \hat a^{\dag} \hat{a}^\dag)]$. The squeezing parameter $r$ is given by $\tanh 2 r = \nu/\Delta $. The dynamics of this Hamiltonian in the Bogoliubov basis are obviously stable and trivial. An input signal $\hat \beta_{\textup{in}}$ injected into the waveguide scatters off as \begin{align} \hat \beta_{\rm out}[\omega] = e^{i \phi[\omega]} \hat \beta_{\rm in}[\omega] , \; \: \phi[\omega] = 2 \mathrm{arg} \left[ i(\omega - \Lambda ) + \frac{\kappa}{2} \right], \end{align} hence, the input in the Bogoliubov basis is just reflected with a phase $\phi[\omega]$. This phase is crucial to obtain any net amplification, which becomes obvious when considering the transformation back into the original frame \begin{align} \label{Eq.DBAoutputCavBas} \left( \begin{array}{c} \hat a_{\textup{out}}[\omega] \\ \hat a_{\textup{out}}^{\dag}[\omega] \end{array} \right) = \mathcal S^{-1}(r) \left[ \begin{array}{cc} e^{ + i \phi[\omega] } & 0 \\ 0 & e^{- i \phi[\omega] } \end{array} \right] \mathcal S(r) \left( \begin{array}{c} \hat a_{\textup{in}} [\omega] \\ \hat a_{\textup{in}}^{\dag} [\omega] \end{array} \right) , \end{align} with the single mode squeezing transformation $\mathcal S(r)$ \begin{align} \left( \begin{array}{c} \hat \beta \\ \hat \beta^{\dag} \end{array} \right) = \; \mathcal S(r) \; \left( \begin{array}{c} \hat a \\ \hat a^{\dag} \end{array} \right), \; \mathcal S(r) = \left( \begin{array}{cc} \cosh r & \sinh r \\ \sinh r & \cosh r \end{array} \right), \end{align} with $\mathcal S^{-1}(r) \mathcal S(r) = \mathbb{1}$. Crucially, the phase $\phi[\omega]$ ensures that the squeezing transformations do not cancel each other out. This lack of cancellation results in net amplification of input signals. To specify the parameters required for a single-mode Bogoliubov amplifier, we take the following operational approach. We imagine having a fixed decay rate $\kappa$, while being able to freely adjust the detuning $\Delta$ and drive $\nu$. Tuning both parameters is feasible in nearly all experimental platforms. We choose both such that the energy of the Bogoliubov mode matches the photonic loss rate \begin{align}\label{eq.MatchingODPA} \sqrt{\Delta^2-\nu^2} = \frac{\kappa}{2} . \end{align} The corresponding frequency dependent gain now reads \begin{align}\label{eq:Gain_Optimal} \mathcal G[\omega] = \frac{(\frac{2\omega}{D'})^4+ \mathcal G_0}{(\frac{2\omega}{D'})^4+1}, \end{align} where $D' = \sqrt{2}\kappa $ is the effective bandwidth of the optimally detuned Bogoliubov amplifier (ODBA). While superficially similar to the gain profile in Eq.~(\ref{eq:Gain_Normal}) of a conventional DPA, the ODBA offers two distinct advantages. The first is that there is no gain-bandwidth limitation: the gain can be as large as desired without sacrificing bandwidth (see Fig.~\ref{Figure2}). The fact that the gain and bandwidth are independent is of enormous utility in experiments.
The other distinct advantage of the ODBA is the relative flatness of the gain profile around zero frequency. In a standard DPA, the gain is only flat over an extremely narrow range of frequencies $\omega$ satisfying $\omega \ll D$. In contrast, the gain of an ODBA is nearly constant for frequencies $\omega$ near zero, with a small leading-order correction of order $(\omega/D')^4$ (see Fig.~\ref{Figure2}). The ODBA still maintains one of the main attractive features of a conventional DPA: it can be used as a quantum-limited amplifier without any added noise. Such amplifiers are required for several tasks related to quantum computation and communication. Our scheme is relevant in several experimental platforms, such as optical, microwave and mechanical setups. It is especially well suited for Josephson amplifiers used to read out superconducting quantum circuits. Finally, we stress that the ODBA is simple to implement experimentally. It does not require additional hardware, but relies instead on a careful choice of detuning and drive strength, both of which can be easily tuned in experiments.
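The scattering description above is easily checked numerically. The Python sketch below (again ours, with $\kappa=1$ and an arbitrary illustrative drive strength) solves the frequency-domain Langevin equations of the detuned DPA at the matching point of Eq.~(\ref{eq.MatchingODPA}) and compares the exact amplified-quadrature gain with the quartic profile of Eq.~(\ref{eq:Gain_Optimal}):
\begin{verbatim}
# Sketch: exact input-output gain of the detuned DPA at the ODBA
# matching point, compared with the quartic profile. kappa = 1.
import numpy as np

kappa, nu = 1.0, 2.0
Delta = np.sqrt(nu**2 + (kappa/2)**2)   # matching: sqrt(D^2-nu^2)=kappa/2
G0 = (2*(Delta + nu)/kappa)**2          # zero-frequency power gain (= e^{4r})

def gain(w):
    # frequency-domain Langevin matrix for the vector (a[w], a^dag[w])
    M = np.array([[kappa/2 + 1j*(Delta - w), 1j*nu],
                  [-1j*nu, kappa/2 - 1j*(Delta + w)]])
    s = np.eye(2) - kappa*np.linalg.inv(M)           # scattering matrix
    return np.linalg.svd(s, compute_uv=False)[0]**2  # amplified quadrature

for w in [0.0, 0.3, 0.6, 1.0]:
    quartic = (4*w**4/kappa**4 + G0)/(4*w**4/kappa**4 + 1)
    print(f"w={w:4.2f}  exact={gain(w):8.2f}  quartic={quartic:8.2f}")
# Near resonance the exact gain closely follows the flat quartic profile:
# the gain stays high over a frequency range ~ sqrt(2)*kappa.
\end{verbatim}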
\section{An ideal two-port squeezer: the two-mode Bogoliubov amplifier} The previous section showed how exploiting dynamics that conserve the number of squeezed photons (Bogoliubov excitations) in a simple parametrically driven cavity is enough to realize a quantum amplifier with exceptional properties. Here, we show how this basic idea becomes even more powerful in the setting of a two-cavity system. \subsection{The Optimally Imbalanced Parametric Amplifier: The OIBA}\label{SecOptImbalnecedDPA} Parametric modulation of the coupling between two coupled cavity modes results in two basic interactions at the mean-field level: frequency conversion when modulating at the frequency difference of the mode pair, or parametric amplification when driving at the sum of their frequencies. Our two-mode Bogoliubov amplifier utilizes resonant versions of these interactions simultaneously. We thus start with the mean-field Hamiltonian (rotating at the respective mode resonance frequencies) \begin{align} \label{Eq.:HamiltonianDbasis} \hH =& G_{1} \hat d_1^{\dag}\hat d_2^{\dag} + G_{1}^{\ast} \hat d_1^{\phantom{\dag}} \hat d_2^{\phantom{\dag}} + G_{2} \hat d_1^{\dag}\hat d_2 + G_{2}^{\ast} \hat d_1^{\phantom{\dag}} \hat d_2^{\dag} . \end{align} Here $\hat {d}_{n}$ $(n = 1,2)$ denotes the annihilation operator of mode $n$, and the coupling coefficients $G_n$ contain the amplitude and the phase of two external modulation tones. With the latter we can control which quadratures of the modes are involved in the interaction. Unlike the amplifier of Sec.~\ref{SecOptDetunedDPA}, there is no explicit detuning term here. Nonetheless, our system can be made dynamically stable (even without any external dissipation) by constraining $G_1$ to be smaller than $G_2$. In this regime, the intra-cavity dynamics of our system is yet again simply understood in terms of Bogoliubov modes whose total excitation number is conserved. Defining the squeezing parameter $r$ via $\tanh 2 r = G_{1}/G_2$, and introducing canonical Bogoliubov-mode lowering operators $\hat \beta_n = \hat d_{n}\cosh r + \hat d_{n}^{\dag} \sinh r $, Eq.~(\ref{Eq.:HamiltonianDbasis}) takes the form \begin{align}\label{Eq.BogHam} \hH_{\textup{hop}} =& \; \widetilde G \; \hat \beta_1^{\dag} \hat \beta_2 + h.c. , \hspace{0.3cm} \widetilde G = \sqrt{G_{2}^2 - G_{1}^2}. \end{align} Eq.~(\ref{Eq.BogHam}) describes a simple hopping interaction that converts excitations between the local Bogoliubov modes at a frequency $\widetilde{G}$. At first glance, this might seem to be of zero utility for amplification. This however misses the fact that when we couple modes $1$ and $2$ to input-output transmission lines or waveguides, excitations enter and leave the system as photons, not as Bogoliubov excitations. This provides a simple heuristic picture for our amplification mechanism, as illustrated in Fig.~\ref{Figure3}: input photons on mode 1 are converted to $\beta_1$-mode excitations; this involves a single-mode squeezing transformation $\hat S_{\rm in}$. The hopping interaction in Eq.~(\ref{Eq.BogHam}) then converts the excitations to $\beta_2$ with a phase shift. The excitations in $\beta_2$ are then converted to photons in the transmission line coupled to mode 2; this involves a second single-mode squeezing transformation $\hat S_{\rm out}$. \begin{figure} \centering\includegraphics[width=0.5\textwidth]{Figure3.pdf} \caption{ Illustration of the operation of the OIBA. An input signal $\hat a_{\textup{in}}$ on mode 1 gets squeezed $(\hat S_{\rm in})$, is converted to a mode-2 Bogoliubov quasiparticle, and undergoes a second squeezing transformation before leaving mode 2. Crucially, due to the conversion process between the modes, the squeezing transformations do not cancel each other out and the output signal is amplified. } \label{Figure3} \end{figure} The above heuristic picture is borne out by an explicit calculation of the system's scattering matrix. Without loss of generality we take the coefficients $G_1, G_2$ to be real, and couple both cavity modes to external waveguides with symmetric coupling strengths $\kappa_{n} = \kappa$ (see Appendix \ref{AppAsymmetries} for the asymmetric case). From the Langevin equations of the system we find the eigenvalues of the dynamics \begin{align}\label{Eq.Eigenvalues} \epsilon_{1,2} = - \frac{\kappa}{2} \pm i \widetilde G , \hspace{0.5cm} \widetilde G \equiv \sqrt{ G_{2}^{2} - G_{1}^2}. \end{align} We see that if we choose $\widetilde{G} = \kappa/2$, i.e., optimally imbalanced interaction strengths $G_{1,2}$ in Eq.~(\ref{Eq.:HamiltonianDbasis}), the splitting of the system's normal modes is exactly equal to their width. As in Sec.~\ref{SecOptDetunedDPA}, this matched-splitting operation leads to a number of exceptional properties, including an extremely flat gain versus frequency profile. We thus focus on this special tuning point in what follows. For $\widetilde{G} = \kappa/2$, the output in the Bogoliubov basis reads \begin{align} \label{Eq.ScattIBAgeneral} & \left[ \begin{array}{c} \hat \beta_{1,\textup{out}}[\omega] \\ \hat \beta_{2,\textup{out}}[\omega] \end{array} \right] = e^{i\phi[\omega]} \mathcal P^{-1} \left[ \begin{array}{cc} \cos \theta[\omega] & \sin \theta[\omega] \\ - \sin \theta[\omega] & \cos \theta[\omega] \end{array} \right] \mathcal P \left[ \begin{array}{c} \hat \beta_{1,\textup{in}}[\omega] \\ \hat \beta_{2,\textup{in}}[\omega] \end{array} \right]. \end{align} The matrix operations acting on the Bogoliubov input modes have a simple interpretation: they correspond to the input excitations passing through a frequency-dependent beam-splitter sandwiched between phase shifters.
The corresponding frequency-dependent beam-splitter angle and phase-shifter matrix are \begin{align} \theta[\omega] = \textup{arcsin} \left[ \frac{4\omega^4}{\kappa^4}+ 1 \right]^{-\frac{1}{2} }, \mathcal P = \left( \begin{array}{cc} - i & 0 \\ 0 & 1 \end{array} \right) . \end{align} In addition we have an overall frequency-dependent phase shift \begin{align} \phi[\omega] = \textup{atan} \left[ \frac{2 \frac{\omega}{\kappa} }{1 - \frac{2\omega^2}{\kappa^2} } \right] . \end{align} The output of mode $i=1,2$ now contains contributions from both modes, in contrast to the single-mode ODBA, where the input signal is simply reflected with a phase. That phase was the key ingredient that prevented the cancellation of the squeezing transformations, cf.~Eq.~(\ref{Eq.DBAoutputCavBas}). Here the situation is slightly different: to prevent the squeezing operations from canceling (hence generating amplification), the beam-splitter operation is now crucial. To see this, we consider the scattering behavior in the original mode basis and work in the basis of the orthogonal quadratures $ \hat X_{n} = (\hat d_{n}^{\phantom{\dag}} + \hat d_{n}^{\dag} )/\sqrt{2}$ and $\hat P_{n} = - i (\hat d_{n}^{\phantom{\dag}} - \hat d_{n}^{\dag})/\sqrt{2}$. The scattering matrix relating input and output fields becomes $(\textbf{X}_{\textup{out}}[\omega] = \textbf{s}[\omega] \textbf{X}_{\textup{in}}[\omega] )$ \begin{align} \label{ScatteringOIBA} \textbf{s}[\omega] =& e^{ i\phi[\omega]} \; \mathcal S_{Q}^{-1}(r) \; \mathcal B(\theta[\omega]) \; \mathcal S_{Q}(r) \nonumber \\ =& \frac{e^{ i\phi[\omega]} } {\sqrt{\frac{4\omega^4}{\kappa^4}+ 1 } } \left( \begin{array}{cccc} \cot \theta[\omega] & 0 & 0 & -e^{-2 r} \\ 0 & \cot \theta[\omega] & e^{2 r} & 0 \\ 0 & -e^{-2 r} & \cot \theta[\omega] & 0 \\ e^{2 r} & 0 & 0 & \cot \theta[\omega] \\ \end{array} \right), \end{align} in the basis $\textbf{X} =[\hat X_{1}, \hat P_{1}, \hat X_{2}, \hat P_{2}]^T$, and with the beam-splitter matrix \begin{align} \mathcal B(\theta[\omega]) =& \left[ \begin{array}{cccc} \cos \theta[\omega] & 0 & \sin \theta[\omega] & 0 \\ 0 & \cos \theta[\omega] & 0 & \sin \theta[\omega] \\ - \sin \theta[\omega] & 0 & \cos \theta[\omega] & 0 \\ 0 & - \sin \theta[\omega] & 0 & \cos \theta[\omega] \end{array} \right] , \end{align} and the squeezing transformation for the quadrature basis (absorbing the phase shift $\mathcal P$) \begin{align} \mathcal S_{Q}(r) = \frac{1}{\sqrt{2}} \left[ \begin{array}{cccc} - i e^{+r} & e^{-r} & 0 & 0 \\ i e^{+r} & e^{-r} & 0 & 0 \\ 0 & 0 & e^{+r} & i e^{-r} \\ 0 & 0 & e^{+r} & - i e^{-r} \end{array} \right], \end{align} which describes local squeezing transformations. From the scattering matrix given in Eq.~(\ref{ScatteringOIBA}) it becomes clear that the beam-splitter prevents the cancellation of the squeezing transformations, with the net result being phase-sensitive amplification involving a frequency conversion process. We already see the remarkable features of the amplifier: the reflections (diagonal elements) are zero on resonance $(\theta[0] = \pi/2)$ and thus the system is perfectly impedance matched to its input ports. Moreover, the output of cavity 1 (2) contains the amplified $P$-quadrature and squeezed $X$-quadrature of cavity 2 (1), allowing for a separation of input and output ports for the signal.
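As a consistency check, the following Python sketch (illustrative only; $\kappa=1$ and $r$ is an arbitrary value) assembles the scattering matrix of Eq.~(\ref{ScatteringOIBA}) from its three factors and verifies the on-resonance properties just described:
\begin{verbatim}
# Sketch: build the OIBA scattering matrix from its three factors and
# check the on-resonance properties. Basis: [X1, P1, X2, P2]; kappa = 1.
import numpy as np

r, kappa = 1.0, 1.0

def smatrix(w):
    th = np.arcsin((4*w**4/kappa**4 + 1)**-0.5)        # beam-splitter angle
    phi = np.arctan2(2*w/kappa, 1 - 2*w**2/kappa**2)   # overall phase
    c, s = np.cos(th), np.sin(th)
    B = np.array([[c,0,s,0],[0,c,0,s],[-s,0,c,0],[0,-s,0,c]])
    SQ = np.array([[-1j*np.exp(r), np.exp(-r), 0, 0],
                   [ 1j*np.exp(r), np.exp(-r), 0, 0],
                   [0, 0, np.exp(r),  1j*np.exp(-r)],
                   [0, 0, np.exp(r), -1j*np.exp(-r)]])/np.sqrt(2)
    return np.exp(1j*phi) * np.linalg.inv(SQ) @ B @ SQ

s0 = smatrix(0.0)
print(np.round(np.abs(s0), 3))
# On resonance the diagonal (reflections) vanishes -> impedance matching;
# |s0[3,0]| = e^{2r} is the amplified X1 -> P2 transmission, while
# |s0[0,3]| = e^{-2r} is the corresponding squeezed transmission.
\end{verbatim}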
In contrast to single-mode squeezing, e.g.~as realized via the interaction $\mathcal H_{s} = \lambda \hat d^{\dag} \hat d^{\dag} + h.c.$, the squeezing/amplification here is also accompanied by a frequency conversion. Despite the fact that we use two degrees of freedom here (which might seem extraneous), we find that the amplification process is quantum-limited: it reaches the quantum limit of zero added noise for phase-sensitive amplifiers \cite{Caves1982}. Another crucial aspect of the optimally imbalanced Bogoliubov amplifier (OIBA) is the off-resonant gain behavior. Defining $ \mathcal G_0 = e^{4r}$ as the zero-frequency power gain, we find for the gain as a function of frequency \begin{align} \mathcal G[\omega] \equiv |s_{23} [\omega] |^2 = \frac{ \mathcal G_0 }{ 1 + \left[ \frac{2 \omega } { \mathcal D }\right]^4 } , \hspace{0.2cm} \mathcal D = \sqrt{2} \kappa. \end{align} The amplitude gain thus scales linearly with the (tunable) coupling strengths $G_{1,2}$, and the bandwidth over which amplification is possible is not affected by the amount of gain, i.e., this amplifier is not limited by a fixed gain-bandwidth product. The resulting bandwidth is $\mathcal D = \sqrt{2} \kappa $, see Fig.~\ref{Figure4}. \begin{figure} \centering\includegraphics[width=0.9\columnwidth]{Figure4.pdf} \caption{ Bandwidths of the OIBA as a function of the gain $\mathcal G_{0}$. The black solid line depicts the amplification bandwidth $\mathcal D = \sqrt{2} \kappa$, which is independent of the gain. This behavior does not translate to the squeezing bandwidth $\mathcal D_{sq}$ (emerald solid line), which decreases with increasing gain. However, $\mathcal D_{sq}$ is enhanced compared to the squeezing bandwidth $ D \simeq 2\kappa/ \sqrt{\mathcal G_0}$ of the standard single-mode setup (dotted grey line). \label{Figure4} } \end{figure} An additional important question is whether this remarkable feature of a gain-independent bandwidth also manifests itself in the output squeezing. As discussed in Sec.~\ref{SecRecapDPA}, a conventional single-mode squeezer has an amplification bandwidth scaling as $ D \simeq 2\kappa/ \sqrt{\mathcal G_0}$, which coincides with the squeezing bandwidth. The squeezing bandwidth is defined as the frequency range over which the amount of squeezing is within $3$~dB of the maximal on-resonance value. To determine the squeezing bandwidth of the Bogoliubov amplifier we consider the symmetrized output noise spectrum of the $X_1$-quadrature \begin{align} \frac{ \bar S_{X_1 X_1} [\omega]}{\bar S_{\textup{SN}}} =& \frac{ 1 }{ 1 + \left[ \frac{2 \omega }{ \mathcal D}\right]^4 } \left( \left[ \frac{2 \omega }{ \mathcal D}\right]^4 + \frac{1}{\mathcal G_0} \right) , \end{align} with $ \bar S_{\textup{SN}} = 1/2$ the shot-noise value. The first term describes vacuum fluctuations driving cavity 1, while the second term describes the squeezed cavity-2 noise. As discussed above, on resonance, i.e., $\omega = 0$, and for vanishing thermal occupation ($\bar n_2^T = 0 $), the amount of squeezing scales inversely with the gain, just as in a single-mode setup. However, the noise contribution from mode 1 becomes relevant at finite frequency; we find for the squeezing bandwidth $\mathcal D_{sq} \simeq \sqrt{2} \kappa ( \mathcal G_0)^{-1/4}$. Hence the gain-independence of the amplification bandwidth does not translate to the squeezing bandwidth. It is notable, however, that the squeezing bandwidth is enhanced compared to a single-mode setup, i.e., for the same gain value $\mathcal G_{0} = \mathcal G_{s} \equiv \mathcal G$ we find $\mathcal D_{sq}/D \simeq \mathcal G^{1/4}/\sqrt{2}$. Note that without the cavity-1 noise contribution in the spectrum $ \bar S_{X_1 X_1} [\omega]$, the squeezing bandwidth would be independent of the gain.
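The scaling of the squeezing bandwidth can be verified directly from the noise spectrum above; the short Python sketch below (ours; $\kappa=1$ and vacuum inputs are assumed) locates the $3$~dB point numerically and compares it with $\mathcal D_{sq} \simeq \sqrt{2}\kappa\,\mathcal G_0^{-1/4}$:
\begin{verbatim}
# Sketch: X1 output-noise spectrum and the resulting 3 dB squeezing
# bandwidth of the OIBA. kappa = 1; vacuum inputs assumed.
import numpy as np
from scipy.optimize import brentq

kappa = 1.0
D = np.sqrt(2)*kappa                      # amplification bandwidth

def S_X1(w, G0):                          # noise spectrum / shot noise
    u = (2*w/D)**4
    return (u + 1/G0)/(u + 1)

for G0 in [1e2, 1e3, 1e4]:
    # frequency where squeezing has degraded by 3 dB from its w=0 value
    w3dB = brentq(lambda w: S_X1(w, G0) - 2*S_X1(0.0, G0), 1e-6, 10*kappa)
    print(f"G0={G0:8.0f}  D_sq={2*w3dB:6.3f}"
          f"  sqrt(2)*G0^(-1/4)={np.sqrt(2)*G0**-0.25:6.3f}")
# The squeezing bandwidth shrinks only as G0^{-1/4}, much more slowly than
# the ~ G0^{-1/2} bandwidth of a conventional single-mode squeezer.
\end{verbatim}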
\subsection{OIBA: Experimental Results} \label{SecExpResults} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure5.pdf} \caption[Signal and idler modes of a JPC]{(a) Signal (red) and idler (blue) modes of a JPC, connected to differential excitations of the central JRM. (b) The common mode of the JPC and JRM connects to both external modes and is typically used for pumping rather than signal input and output.} \label{Figure5} \end{figure} We demonstrate the power and functionality of these theoretical predictions using a shunted Josephson Parametric Converter (JPC), whose properties and fabrication are described in Ref.~\cite{Chien2019}. The core of this amplifier is a shunted Josephson Ring Modulator (JRM): a ring of four nominally identical Josephson junctions, shunted with linear inductors, as shown in Fig.~\ref{Figure5}. The ring is threaded with an external magnetic flux which creates three-wave mixing among the device's three modes. The Hamiltonian of this device written in terms of the normal modes is \begin{align} \begin{split} H_{JRM} = & -4E_J [ \cos(\dfrac{\varphi_X}{2})\cos(\dfrac{\varphi_Y}{2})\cos(\varphi_Z)\cos(\dfrac{\varphi_{ext}}{4}) \\ & + \sin(\dfrac{\varphi_X}{2})\sin(\dfrac{\varphi_Y}{2})\sin(\varphi_Z)\sin(\dfrac{\varphi_{ext}}{4})] \\ & +\dfrac{E_L}{4}(\varphi_X^2 +\varphi_Y^2 +2\varphi_Z^2), \end{split} \label{HRJM_full} \end{align} where $E_L= \dfrac{\Phi_0^2}{L}$. We refer to these modes as the signal, idler, and common mode of the JRM. These mode excitations are noted in Fig.~\ref{Figure5}. The four Josephson junctions on the outer arms of the JRM provide nonlinear couplings between the eigenmodes of the circuit \cite{Abdo2011}. Assuming that the ground state of the circuit is at $\varphi_X = \varphi_Y =\varphi_Z = 0$ and remains stable as we tune the external magnetic flux bias, we can expand the nonlinear coupling terms around this ground state and make appropriate substitutions to rewrite the Hamiltonian in terms of the raising and lowering operators, up to 3rd order, as in \cite{Schackert2013, chenxu2019}: \begin{align}\label{3bodyHamiltonian} \hH_{JPC} = \hH_{0} + g_3(\hat a + \hat a^\dag)(\hat b+ \hat b^\dag)(\hat c+ \hat c^\dag), \end{align} where $\hH_{0}$ denotes the Hamiltonian of the three uncoupled modes and the $g_3$ term represents the strength of the 3-wave mixing that gives rise to the gain and conversion processes in the JRM \cite{Sliwa2015}. These 3-wave processes are then driven with a far off-resonance stiff tone to create effective two-body interactions. Note that the expansion of the cosine potentials also results in higher-order terms that have been neglected, as they have significantly smaller magnitude. These terms will nonetheless hinder performance compared to the ideal 3-wave mixing Hamiltonian. To achieve the Optimally Imbalanced Bogoliubov Amplifier (OIBA) experimentally, we start with two pump tones (one each at the sum and difference frequencies of the signal and idler modes) which drive the conversion and gain processes of the JPC with identical coupling rates.
From here, the conversion tone amplitude is increased so that a large dip in signal reflection becomes visible. Then, we switch the read-out scheme to transmission, using a mixer to undo the amplifier's frequency conjugation so that we can compare same-frequency inputs and outputs on a vector network analyzer. To increase the gain ${\mathcal G}$, both pumps are then individually increased slowly at the same rate. As long as the dip is still present in reflection, the correct ratio of the pumps is approximately maintained. If we compare the response of the OIBA to a more standard non-degenerate amplification process using a single pump tone, we can see that both scattering parameters of the OIBA are superior. The $\approx10$~dB dip in Fig.~\ref{Figure6}(b) means that the amplifier is still approximately matched in reflection. The OIBA is still bi-directional, meaning that one can amplify from the signal to the idler mode, but also in reverse. A fully directional amplifier is still necessary to achieve ideal practical operation, because stray photons can still end up traveling backwards down the amplification chain to the qubit \cite{Lecocq2017, Li2019}. Thus, OIBA measurements should still be performed with external isolators to protect the device being measured. While the broad gain peak in transmission is not an ideal 20~dB, it has all the qualitative features predicted by theory, including the much larger bandwidth, flat peak, and phase-sensitive mode of amplification. At this bias point and gain, the single gain pump achieves only about $5$~MHz of bandwidth, which is typical for a standard Josephson parametric amplifier. The OIBA, on the other hand, has a bandwidth of $\approx 33$~MHz. Theory predicts an even broader bandwidth for the OIBA, but that rests upon having $\kappa_{sig} = \kappa_{idl}$, a condition that was slightly violated here ($\kappa_{sig} = 25$~MHz, $\kappa_{idl}= 20$~MHz). In addition, it was experimentally difficult to reach 20~dB of gain in this mode of amplification on this amplifier due to unwanted higher-order terms. In the future, this difficulty could be alleviated by using a more ideal mixing element, such as an array of SNAILs, with suppressed higher-order terms. Despite these caveats, we stress that this experimental device still serves to prove the validity of the basic concept. The crucial observation is the ratio of bandwidths between the two kinds of amplifier modes: the new OIBA scheme always yields a far larger bandwidth than the typical single-pump amplification setup (for the same external flux conditions). Further, the OIBA approach does not suffer from the gain-bandwidth product limit that constrains the standard single-pump approach to amplification. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figure6.pdf} \caption{(a) OIBA-pumped amplifier (teal) transmission gain versus frequency, compared to a more standard amplification setup using a single pump tone (black); in both cases we have 10~dB of gain in reflection. The OIBA scheme exhibits a dramatic improvement in bandwidth for the same amount of gain. (b) The reflection scattering parameter of the OIBA amplifier versus frequency, showing that there is almost perfect impedance matching on resonance (i.e.~almost vanishing reflection).} \label{Figure6} \end{figure} \subsection{OIBA: Pump-Depletion} We now show that the OIBA scheme has another strong advantage over more conventional approaches: it is more robust against pump depletion effects.
In the previous section we saw that the OIBA can be realized in a superconducting-circuit setup which uses a JPC as the mixing element, and which is driven via two external microwave tones. The relevant 3-wave mixing process is described by the Hamiltonian \begin{align} \hH_{3W} =& \; g_1 \hat a_{1} \hat d_{1}^{\dag} \hat d_{2}^{\dag} +g_2 \hat a_{2} \hat d_{1}^{\dag} \hat d_{2} + h.c. , \end{align} where $\hat a_{n}$ denote the pump modes, which are driven strongly at frequencies $\omega_{P,1} = \omega_{1} + \omega_{2}$ and $\omega_{P,2} = \omega_{1} - \omega_{2}$ (and detuned from their resonance). Performing a displacement transformation $\hat a_{n} \rightarrow a_{n} e^{-i\omega_{P,n} t - i \varphi_{n}} + \delta \hat a_{n}$, we can decompose the pump modes into their classical amplitudes $a_{n}$ and the corresponding fluctuations $ \delta \hat a_{n}$. The pump phases $\varphi_{n}$ determine which quadrature-quadrature coupling we obtain. In general one assumes stiff pump modes, i.e., one sets $\hat a_{n} = \ev{\hat a_{n}}$ and neglects $\delta \hat a_{n}$; the resonant interaction then simplifies to Eq.~(\ref{Eq.:HamiltonianDbasis}) with $G_{n} = g_n \ev{\hat a_n} e^{-i \varphi_n}$. The stiff pump approximation breaks down for larger input signal power. Here backaction effects on the pump become relevant and limit the dynamic range of the parametric amplifier. To go beyond the stiff pump approximation we derive the equations of motion for the pump modes' expectation values \begin{align} \frac{d}{dt} \ev{\hat a_{1}} =& - \frac{\gamma_1}{2} \ev{\hat a_{1}} - \sqrt{\gamma_{1}} a_{1,\textup{in}} - i g_1 \ev{ \hat d_{1} \hat d_{2}} , \nonumber \\ \frac{d}{dt} \ev{\hat a_{2}} =& - \frac{\gamma_2}{2} \ev{\hat a_{2}} - \sqrt{\gamma_{2}} a_{2,\textup{in}} - i g_2 \ev{ \hat d_{1} \hat d_{2}^{\dag} } , \end{align} where $\gamma_{1,2}$ denote the damping rates of the pump modes and $a_{1,2,\textup{in}}$ correspond to the amplitudes of the respective pump tones. The coupling to the correlators $\langle\hat d_{1}\hat d_{2}^{(\dag)}\rangle$ in the equations for the pump modes describes the backaction effects; the latter would be neglected under a stiff pump approximation. We evaluate the correlators on a mean-field level \cite{Abdo2013} and define \begin{align} \Sigma_{1,\textup{eff}} \equiv & \; i \frac{g_1}{\ev{a_1}} \ev{ \hat d_{1} \hat d_{2} } = \frac{\gamma_{1,\textup{eff}}}{2} + i \Omega_{1,\textup{eff}}, \nonumber \\ \Sigma_{2,\textup{eff}} \equiv& \; i \frac{g_2}{\ev{a_2}} \ev{ \hat d_{1} \hat d_{2}^{\dag} } = \frac{\gamma_{2,\textup{eff}}}{2} + i \Omega_{2,\textup{eff}}, \end{align} with the effective damping rates \begin{align} \label{Eq.EffectiveDecay} \gamma_{1,\textup{eff}} =& + \frac{\gamma_1}{2} \frac{ \sqrt{\mathcal C_1} }{\left[1 +\mathcal C_{+} \mathcal C_{-} \right]^2} \bigg\{ \frac{ X_{n,\textup{in}}^2}{\bar n_{1,\textup{in}}} \mathcal C_{+} - \frac{P_{n,\textup{in}}^2}{\bar n_{1,\textup{in}}} \mathcal C_{-} \bigg\} + \gamma_{1,\textup{eff}}^{\textup{vac}}, \nonumber \\ \gamma_{2,\textup{eff}} =& \mp \frac{\gamma_2}{2} \frac{ \sqrt{\mathcal C_2} }{\left[1 + \mathcal C_{+} \mathcal C_{-}\right]^2} \bigg\{ \frac{ X_{n,\textup{in}}^2}{\bar n_{2,\textup{in}}} \mathcal C_{+} + \frac{P_{n,\textup{in}}^2}{\bar n_{2,\textup{in}}} \mathcal C_{-} \bigg\} , \end{align} with $\mathcal C_{\pm} = \sqrt{\mathcal C_{2}} \pm \sqrt{\mathcal C_{1}}$, the cooperativities $\mathcal C_{n} = 4 G_n^2/\kappa^2 $, and where the minus sign in $\gamma_{2,\textup{eff}} $ refers to an input in cavity 1.
The damping rate associated with vacuum fluctuations driving mode 1 reads \begin{align} \gamma_{1,\textup{eff}}^{\textup{vac}} = \frac{ 2 g_1^2}{\kappa} \frac{1 }{1 + \mathcal C_{+} \mathcal C_{-} }, \end{align} which is negligible, as it scales with neither the input signal nor the gain. In addition, the frequency shifts read \begin{align} \Omega_{n^{\prime},\textup{eff}} =& (-1)^{n^{\prime}+1} \; \frac{\gamma_{n^{\prime}}}{2} \frac{ \sqrt{\mathcal C_1 \mathcal C_2} }{\left[1 + \mathcal C_{+} \mathcal C_{-}\right]^2} \; \frac{ X_{n,\textup{in}} P_{n,\textup{in}} }{\bar n_{n^{\prime},\textup{in}}} , \end{align} which only become relevant if we have an input signal simultaneously in both quadratures, i.e., in $ X_{n,\textup{in}}$ and $ P_{n,\textup{in}}$. Note that determining all backaction effects requires a self-consistent calculation, i.e., the cooperativities appearing in the expressions for the effective decay rates $\gamma_{n, \textup{eff}}$ and the frequency shifts $\Omega_{n,\textup{eff}}$ depend on the pump amplitudes and vice versa. Including the backaction on the pump modes, the pump amplitudes now also depend on the phase of the input signal, a situation which deviates from that of a standard single-tone (phase-insensitive) parametric amplifier. The latter case is recovered by setting $\mathcal C_{2 } = 0$ in the above expressions. Note that the OIBA requires a fine tuning of the pump amplitudes: for optimal performance the matching condition $\tilde G = \kappa/2$ has to be achieved. Pump depletion thus poses a potential problem, as increases in the input signal strength could cause one to violate the matching condition. To quantify this effect we define \begin{align} \mathcal C_{1,2} (\gamma_{1,\textup{eff}} ,\gamma_{2,\textup{eff}} ) \equiv \mathcal C_{n, \textup{eff}} = \frac{ \mathcal C_{n} }{ ( 1 + \bar \gamma_{n,\textup{eff}})^2 } \equiv \mathcal C_{n} \chi_{n} \end{align} as the effective cooperativities, with $\bar\gamma_{n,\textup{eff}} = \gamma_{n,\textup{eff}} /\gamma_n $ and $\mathcal C_{n}$ denoting the undisturbed cooperativities, i.e., $\mathcal C_{n} = \mathcal C_{n,\textup{eff}} $ for $\chi_{n} = 1$. The alteration of the cooperativities affects the matching condition for the OIBA amplifier, and we define the deviation from optimal matching as $\delta\mathcal C = \mathcal C_{2,\textup{eff}} - \mathcal C_{1,\textup{eff}} - 1$. Assuming $\chi_n\approx \chi$, the deviation can be approximated as $\delta\mathcal C \approx \chi -1 $ and can thus be assumed to be small. The effect of this mismatch is a minor back-reflection of the input signal; e.g.~for the example in Fig.~\ref{Figure7}, around $0.04\%$ of the signal is reflected at the $1$~dB compression point of the amplifier. \begin{figure}[t] \centering\includegraphics[width=0.5\textwidth]{Figure7.pdf} \caption{ Gain saturation due to pump-mode backaction effects in the OIBA amplifier, as a function of the input signal strength $X_{2,\textup{in}}^2/\bar n_{\textup{in}}$, where $\bar n_{\textup{in}}$ denotes the average number of pump photons required to obtain $20$~dB of gain. All numerical results were obtained self-consistently. For comparison, the effective damping and the depleted gain for a standard parametric amplifier are plotted as well [grey dashed/orange dotted lines].
Parameters are $\gamma_1/\kappa = \gamma_2/\kappa = 12$ and $g_1/\kappa = g_2/\kappa = 0.014$. Taking typical JPC parameters from Ref.~\cite{Abdo2013}, the $1$~dB compression point for the Bogoliubov amplifier is shifted by $32$~dBm in comparison to the standard paramp [explicit values denoted in graph]. } \label{Figure7} \end{figure} To analyze the backaction effects further we consider the situation where an input is injected into the $X$-quadrature of the second cavity. The effective decay rates in Eq.~(\ref{Eq.EffectiveDecay}) can then be approximated as \begin{align}\label{Eq.EffectiveDecayApprox} \gamma_{n,\textup{eff}} \approx \frac{\gamma_n}{16} \mathcal G_{\textup{eff}} \frac{ X_{2,\textup{in}}^2}{\bar n_{n,\textup{in}}} , \end{align} i.e., the effective decay rates scale linearly with the effective power gain $\mathcal G_{\textup{eff}}$ and the input signal. This coincides with the scaling for the standard parametric amplifier (for $\mathcal C_{2 } = 0$) \cite{Abdo2013}, leading to gain saturation if the input signal strength is increased at constant gain. In a standard parametric amplifier the effective gain saturation scales as \begin{align} \sqrt{ \frac{ \mathcal G_{PA,\textup{eff}} }{\mathcal G_{PA}} } =& \frac{1}{\sqrt{\mathcal G_{PA}}}\frac{ \chi^{-1} + \mathcal C }{ \chi^{-1} - \mathcal C } \approx 1 - \sqrt{\mathcal G_{PA}} \bar \gamma_{ \textup{eff}} , \end{align} with $\mathcal G_{PA} = (1 + \mathcal C)^2/(1 - \mathcal C)^2 $; the approximation in the second step holds for small effective decay rates. Crucially, the reduction of the gain is enhanced by the amplitude gain in this standard parametric amplifier. Given that the OIBA effective decay rates in Eq.~(\ref{Eq.EffectiveDecayApprox}) scale linearly with the effective gain, one might at first sight expect similar saturation effects in the OIBA. The effective decay rates are even enhanced compared to the standard case, see Fig.~\ref{Figure7}. Importantly, the enhanced decay rates do not mean that the gain saturates earlier. On the contrary, the effective decay rates are enhanced because the effective gain is larger, see Fig.~\ref{Figure7}. This originates in the modified scaling of the gain with the pump amplitude. The OIBA's effective gain saturation scales as \begin{align} \sqrt{ \frac{ \mathcal G_{0,\textup{eff}} }{\mathcal G_{0}} } =& \frac{ 2 \sqrt{ \chi } }{ \chi + 1 } \approx 1 - \frac{ \bar\gamma_{n,\textup{eff}}^2 }{2} , \end{align} where we approximated $\chi_{n} \approx \chi$ and expanded the resulting expression for small effective decay rates. Clearly, this is a much more favorable scaling, as the reduction of the gain is not enhanced by a gain factor. This explains why the saturation of the OIBA sets in at much higher input signal strengths, cf.~Fig.~\ref{Figure7}. This robustness should hold true for similar amplifiers without a gain-bandwidth product, where the gain scales directly with the pump amplitude.
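The two compression laws above are compared in the following minimal Python sketch (ours; both amplifiers are tuned to the same illustrative $20$~dB small-signal gain, and $\bar\gamma_{n,\textup{eff}}$ is treated as an independent variable rather than solved for self-consistently):
\begin{verbatim}
# Sketch: gain compression of a conventional paramp vs. the OIBA, using
# the two scaling laws above, both tuned to 20 dB small-signal gain.
import numpy as np

G = 100.0                                  # small-signal power gain (20 dB)
C = (np.sqrt(G) - 1)/(np.sqrt(G) + 1)      # paramp cooperativity for this gain

for g_eff in [0.0, 0.01, 0.05, 0.1]:       # normalized effective pump damping
    chi = 1.0/(1.0 + g_eff)**2
    pa   = ((1/chi + C)/(1/chi - C))/np.sqrt(G)  # paramp amplitude compression
    oiba = 2*np.sqrt(chi)/(chi + 1)              # OIBA amplitude compression
    print(f"g_eff={g_eff:5.2f}  paramp={pa:6.3f}  OIBA={oiba:6.4f}")
# The paramp amplitude gain drops roughly as 1 - sqrt(G)*g_eff, while the
# OIBA drops only as 1 - g_eff^2/2: saturation sets in at far larger inputs.
\end{verbatim}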
\section{The class of Bogoliubov amplifiers} The previous sections have established a general principle for realizing amplification without instability, by using stable Hamiltonians in the Bogoliubov basis. We now show that these ideas can be further generalized to a wide class of multi-mode systems. We consider $N$ Bogoliubov modes which obey the stable dynamics \begin{align}\label{eq.GenBogDef} \hH = \sum_{i,j = 1}^{N} \; \lambda_{i,j} \; \hat \beta^{\dag}_{i} \hat \beta_{j} , \end{align} with coupling strengths $\lambda_{i,j} $. The Bogoliubov modes $\hat \beta_{n} $ are obtained via the general squeezing transformation \begin{align} \hat S_n = e^{ R_{n,m}\left( \hat a_{n}\hat a_{m} - \hat a_{n}^{\dag} \hat a_{m}^{\dag} \right)}, \; \; R_{n,m} = \frac{r_{n}}{1 +\delta_{n,m}} , \end{align} acting on the cavity mode operators $\hat a_{n,m} $ in the unsqueezed basis, i.e., $\hat \beta_{n} = \hat S^{\dag}_n \hat a_{n} \hat S_{n}$. Here $\delta_{n,m} $ denotes the Kronecker delta. The squeezing transformation corresponds to either single-mode squeezing ($n=m$) or two-mode squeezing ($n \neq m$). The squeezing parameter $r_n$ depends on parameters in the respective unsqueezed cavity basis and is specified case by case. We now consider two different classes, each containing a phase-sensitive and a phase-insensitive version: the class of detuned Bogoliubov amplifiers and the class of imbalanced Bogoliubov amplifiers. \subsubsection{Detuned Bogoliubov amplifiers} We start with the detuned Bogoliubov amplifiers; they are obtained by setting $\lambda_{ij} = \delta_{ij} \lambda$ in Eq.~(\ref{eq.GenBogDef}), so that the Hamiltonian in the Bogoliubov basis reduces to \begin{align} \hH = \lambda \; \sum_{i=1}^{N} \; \hat \beta_{i}^{\dag} \hat \beta_{i} . \end{align} For $n=m$, i.e., single-mode squeezing, and $N =1$ we have only a single Bogoliubov mode and recover the ODBA discussed in Sec.~\ref{SecOptDetunedDPA}. This means that by setting $\lambda = \sqrt{\Delta^2 - \nu^2}$ we obtain the phase-sensitive amplifier described by the Hamiltonian in Eq.~(\ref{eq:DPA}) in the unsqueezed cavity basis. However, it is also possible to design a phase-insensitive version via a two-mode squeezing transformation, i.e., for $n \neq m$ and the two Bogoliubov modes $\hat \beta_{n} = \cosh r \; \hat a_{n} + \sinh r \; \hat a_{m}^{\dag}$ with $n,m = 1,2$. The squeezing parameter is then given by $\tanh 2r = G/\Delta $, while the Bogoliubov mode energy becomes $\lambda = \sqrt{\Delta^2 - G^2}$. Here the detuning $\Delta$ and the two-mode squeezing strength $G$ are defined in the original basis via \begin{align} \hH = \Delta \left( \hat a_{1}^{\dag} \hat a_{1} + \hat a_{2}^{\dag} \hat a_{2} \right) + G \left[ \hat a_{1}^{\dag} \hat a_{2}^{\dag} + \hat a_{1} \hat a_{2} \right] . \end{align} Thus we simply have a detuned two-mode squeezing interaction between two cavity modes, in analogy to the ODBA, which involves a detuned single-mode squeezing interaction. Note that both kinds of detuned Bogoliubov amplifiers realize amplification without instability when the energy of the Bogoliubov mode matches the photonic loss rate, i.e., $\lambda = \kappa/2$, as in Eq.~(\ref{eq.MatchingODPA}). Thus, by simply detuning the standard single- or two-mode squeezing interactions, the amplification process is stabilized and the resulting bandwidth is independent of the gain.
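A quick numerical check of the detuned two-mode case is given below (an illustrative Python sketch with arbitrary values of $\Delta$ and $G$): the Heisenberg dynamics of $(\hat a_1, \hat a_2^{\dag})$ has real eigenvalues $\pm\lambda = \pm\sqrt{\Delta^2 - G^2}$ whenever $\Delta > G$, and the Bogoliubov transformation with $\tanh 2r = G/\Delta$ diagonalizes it:
\begin{verbatim}
# Sketch: the detuned two-mode squeezing Hamiltonian has stable Bogoliubov
# eigenmodes with energy lambda = sqrt(Delta^2 - G^2) for Delta > G.
import numpy as np

Delta, G = 1.0, 0.8
lam = np.sqrt(Delta**2 - G**2)

# Heisenberg dynamics of the vector (a1, a2^dag): i d/dt v = K v
K = np.array([[Delta, G],
              [-G, -Delta]])
print(np.sort(np.linalg.eigvals(K).real), "expected:", [-lam, lam])

# Bogoliubov transformation beta_1 = cosh(r) a1 + sinh(r) a2^dag
r = 0.5*np.arctanh(G/Delta)
T = np.array([[np.cosh(r), np.sinh(r)],
              [np.sinh(r), np.cosh(r)]])
# In the Bogoliubov basis the dynamics is diagonal: T K T^{-1} = diag(lam,-lam)
print(np.round(T @ K @ np.linalg.inv(T), 10))
\end{verbatim}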
\subsubsection{Imbalanced Bogoliubov amplifiers} A second approach to achieving amplification using stable dynamics is to have a Hamiltonian that describes hopping interactions between localized Bogoliubov modes. Considering the simplest case of two modes and $i \neq j$ in Eq.~(\ref{eq.GenBogDef}), the Hamiltonian in the Bogoliubov basis becomes \begin{align} \hH = \lambda \; \left( \hat \beta^{\dag}_{1} \hat \beta_{2} + \hat \beta_{1} \hat \beta_{2}^{\dag} \right), \end{align} which corresponds to a swapping of excitations between the modes, i.e., the number of Bogoliubov quasiparticles is conserved and they coherently oscillate back and forth between the modes $\beta_{1}$ and $\beta_{2}$. Based on this stable dynamics in the Bogoliubov basis we can repeat our protocol to obtain phase-sensitive ($n=m$) and phase-insensitive amplification ($n \neq m$) without instability. Figure~\ref{Figure8}(c,d) depicts sketches of the required configurations of the imbalanced Bogoliubov amplifiers in the unsqueezed basis. We find that in addition to single- or two-mode squeezing interactions with strength $G_{1}$, we require a hopping interaction between the cavity modes $1$ and $2$ with strength $G_{2}$ in this original basis. The sweet spot of operating without an instability is obtained for $\lambda = \sqrt{G_{2}^2 - G_{1}^2} = \kappa/2 $. Hence we refer to this class as the imbalanced Bogoliubov amplifiers, as the interaction strengths of the involved processes have to be imbalanced to match this condition. Note that the bosonic Kitaev chain amplifier introduced in \cite{McDonaldPRX2018} can be viewed as a multi-mode realization of this kind of amplifier. \begin{figure} \centering\includegraphics[width=1.0\columnwidth]{Figure8.pdf} \caption{ The class of one- and two-mode Bogoliubov amplifiers in the unsqueezed cavity basis. Configurations (a,c) realize phase-sensitive amplification without an instability, while configurations (b,d) correspond to the phase-insensitive counterparts. In sketches (c,d), $G_{1}$ denotes single- or two-mode squeezing in (c) and (d), respectively, while $G_{2}$ corresponds to a hopping interaction. } \label{Figure8} \end{figure} \section{Conclusion} We have presented a novel class of quantum-limited amplifiers which operate far away from any instability. This mode of operation brings the remarkable features of no gain-bandwidth limitation and a very flat frequency-gain profile. We showed that the removal of the instability is best understood in the basis of Bogoliubov modes undergoing stable dynamics. Crucially, the transformations of an input signal into and out of the Bogoliubov basis are distinct and do not cancel each other out, leading to net amplification of the input signal. A theoretical analysis at the level of a mean-field ansatz shows that such Bogoliubov amplifiers are potentially more robust to the detrimental backaction effects induced by large input signals. We introduced in detail the optimally imbalanced Bogoliubov amplifier (OIBA), which is based on an imbalanced combination of frequency conversion and parametric amplification processes. The OIBA, for which we presented proof-of-principle experimental results, is an amplifier operating in transmission that is perfectly impedance matched to its input ports. These features make it an interesting candidate for a cascaded amplifier architecture, and for further applications in quantum signal processing. \section{Acknowledgements} \label{sec_thanks} AM acknowledges funding by the Deutsche Forschungsgemeinschaft through the Emmy Noether program (Grant No.~ME 4863/1-1) and the project CRC 910. AAC and AM acknowledge support from the Air Force Office of Scientific Research under Award No.~FA9550-19-1-0362, and the Army Research Office under Grant No.~W911NF-19-1-0328. OL, TZC, and MJH acknowledge support from the Army Research Office under Grant No.~W911NF-18-1-0144 and the National Science Foundation under Grant No.~PIRE-1743717. \newpage
\section{ABSTRACT} We discuss the synchrotron emission of fast cooling electrons in shocks. The fast cooling electrons behind the shocks can generate a position-dependent, inhomogeneous electron distribution if they do not have enough time to mix homogeneously. Due to synchrotron absorption, this would lead to a synchrotron spectrum in the low frequency bands that is very different from that of the homogeneous case. In this paper, we calculate the synchrotron spectrum in this inhomogeneous case in a gamma-ray burst (GRB). Both the forward shock and the reverse shock are considered. We find that in the reverse shock dominated case, we expect a ``reverse shock bump'' in the low frequency spectrum. The spectral bump is due to the combined synchrotron absorption in both the forward and reverse shock regions. The forward shock spectrum at low frequencies has two unconventional segments, with spectral slopes of $\lesssim1$ and $11/8$. The slope of $11/8$ has been found by some authors, while the slope of $\lesssim1$ is new and is due to the approximately constant electron temperature in the optically thick region. In the future, simultaneous observations in multiple bands (especially in the low frequency bands) in the GRB early afterglow or prompt emission phases will possibly reveal these spectral characteristics and enable us to identify the reverse shock component and distinguish between the forward and reverse shock emissions. This may also serve as a method to diagnose the electron distribution status (homogeneous or inhomogeneous) after fast cooling in relativistic shock regions. \section{INTRODUCTION} Gamma-ray bursts (GRBs) are the most powerful explosions in the universe. The standard fireball and internal-external shock models have succeeded in explaining many observations, especially the late GRB afterglow data. The internal shock model is one of the dominant models explaining the GRB prompt emission (Paczynski \& Xu 1994; Rees \& M\'{e}sz\'{a}ros 1994), although it suffers several crucial drawbacks (see Zhang \& Yan 2011 for a summary). The traditional internal shock model involves an unsteady relativistic wind (or multiple shells) driven by the GRB central engine. A collision between two shells with different speeds would generate a forward shock and a reverse shock, in which the electrons are accelerated and produce the prompt emission. After a series of collisions, the merged shell runs into the circumburst medium and will again generate a forward shock and a reverse shock (external shocks). The forward shock (blast wave) model (Sari et al. 1998) is well consistent with the GRB afterglow data (e.g. Galama et al. 1998). However, the expected bright reverse shock emission in the early afterglow is not observed in the majority of bursts (Roming et al. 2006). Only a handful of bursts appear to be consistent with the reverse shock model (e.g. GRB 990123, Sari \& Piran 1999, however, see Meszaros \& Rees 1999 and Wei 2007, or recent bursts, GRBs 130427A and 160509A, Laskar et al. 2013, 2016). This can be because the shells from the central engine are magnetized and the reverse shock is not as strong as expected (e.g., Zhang \& Kobayashi 2005; Fan, Wei \& Wang 2004). The complexity of the reverse shock emission (e.g. Kobayashi 2000 and Wu et al. 2003) and its superposition with the forward shock emission also make the identification of the reverse shock signature from the light curve difficult.
In GRB prompt and early afterglow phases, the magnetic field in the emission regions is strong enough that the energetic electrons accelerated in the shocks cool by synchrotron or inverse Compton radiation within a time scale much shorter than the dynamic time\footnote{Here the dynamic time is the time in which the shock crosses through the shell.}, which is called fast cooling. The spectrum in the fast cooling regime for a homogeneous electron distribution has been studied in detail by some authors (e.g., Sari et al. 1998). The spectral slope below the synchrotron self-absorption (SSA) frequency is 2 or 5/2. However, because electrons in the shocked shell are accelerated instantaneously and then cool before the shock crosses through the shell, the electrons with different (equivalent) temperatures may not have enough time to diffuse throughout the shell, and thus the electron distribution is position-dependent, i.e., the electron distribution in the shell can be inhomogeneous. The inhomogeneity will considerably affect the spectrum below the synchrotron absorption frequency. Granot et al. (2000) found that the spectrum has an unconventional segment with a slope of 11/8 in this case. We will show in the later sections that their result is for the forward shock. As we know, the forward shock and the reverse shock are produced in pairs. If the forward shock is advancing toward us, the reverse shock emission must cross through the forward shock region before it reaches us. This can affect the reverse shock spectrum significantly if the forward shock is optically thick to the reverse shock emission at low frequencies. Thus both shocks need to be taken into account when the emission from the two regions is considered. Here we revisit the inhomogeneity problem and consider both the reverse shock and the forward shock. We find some unprecedented spectral characteristics. This paper is organized as follows. In Section 3, we derive the electron distribution in the inhomogeneous case and describe the calculation of the synchrotron emission. Section 4 presents the resulting spectra. In the last section, we discuss the possible applications of our results. \section{MODELS} When a shock is crossing a shell, the electrons in a very thin layer (fluid element) behind the shock will be accelerated instantaneously into a power-law distribution. As the shock crosses the shell, more fluid elements become hot and then cool down rapidly by synchrotron and inverse Compton (IC) radiation. These fluid elements have different electron equivalent temperatures because their electrons were accelerated at different times. This generates a temperature gradient: the electrons near the shock front have not had enough time to cool down and remain hot, while those far downstream are cooler. \subsection{ELECTRON DISTRIBUTION STRUCTURE} Fig.~\ref{fig1} is a schematic of the electron status in the forward and reverse shock regions. The equivalent temperatures in the low frequency regions (the red region and the purple red regions in Fig.~\ref{fig1}) in the two shocks can be different due to different electron number densities. \begin{figure}[!htb] \centering \includegraphics[width=3.2in]{shock.eps} \caption{Schematic of the equivalent temperature and the electron distribution in shocked shells. The color bar from blue to red in the upper part indicates the equivalent temperatures of the shells from hot to cool.
Positions 1 and 2 have different distances from the shock front and hence different electron distributions; the corresponding distributions are shown in the lower part of the figure. $x_m$ is the position where all the electrons have cooled down to $\sim\gamma_m$. $x_a$ is the position where the synchrotron peak frequency of the electrons is equal to $\hat\nu_a$. See the text.} \label{fig1} \end{figure} We consider the moment when the reverse shock has just crossed the shell and a photon is emitted from the reverse shock front. At this time, the reverse shock emission reaches its peak. As this photon travels through the shocked shell toward the observer, more photons are emitted along its path in the shell and encounter absorption during their propagation. We can define several critical positions behind the shock front in each shocked region. The first is $x_m$: all the electrons within the fluid element at $x_m$ have cooled down to $\sim\gamma_m$ due to radiation losses. The radiation power is $P_{r}=\frac{4}{3}\sigma_Tc(\gamma_e^2-1)U_B(1+Y)$, including the cyclo-synchrotron and IC radiation, where $Y$ is the Compton $Y$ factor and $U_B=B^2/8\pi$ is the comoving magnetic field energy density. $c$ and $\sigma_T$ are the light speed and the Thomson cross section, respectively. Below $x_m$, the electrons in the fluid elements are hot and approximately retain a power-law distribution with a high-energy cutoff at some energy $\hat\gamma_c$, while beyond it, most electrons have cooled (below $\gamma_m$) and approximately follow a monoenergetic distribution with energy $\hat\gamma_c$. Here $\hat\gamma_c$ is the energy to which the electrons cool down in the time since the shock front passed a given position $x$ behind the shock front: \begin{eqnarray} \hat\gamma_c(x)=\frac{1+e^{-2bt(x)}}{1-e^{-2bt(x)}} \end{eqnarray} where $b=4\sigma_TU_{B}(1+Y)/3m_ec$ is a factor and $m_e$ is the electron mass. $t$ is the elapsed time since the electrons at $x$ were accelerated. This form of $\hat\gamma_c$ applies to both the ultrarelativistic and non-relativistic cases. When $2bt\ll1$, which corresponds to the ultrarelativistic case, we recover the familiar form $\hat\gamma_c=6\pi m_ec/\sigma_TB^2(1+Y)t$. The time $t$ can be given by \begin{eqnarray} t(x)=\left\{\begin{array}{ll} 4x/c& \textrm{reverse shock region} \\ (6\Delta-2x)/c & \textrm{forward shock region}, \end{array} \right. \end{eqnarray} where we take into account the relativistic shock speed of $c/3$ in the shocked shell frame. In GRBs, the internal shocks are mildly relativistic for typical parameters, and thus the shock speed in the shocked shell would be $\lesssim c/3$, which may somewhat affect the final result. $\Delta$ is the comoving width of the shell. The Compton $Y$ factor is defined by the ratio of the synchrotron photon energy density, including the contributions from the reverse shock region ($U_{syn,rs}$) and the forward shock region ($U_{syn,fs}$), to the magnetic field energy density, i.e., $Y\equiv (U_{syn,rs}+U_{syn,fs})/U_{B}=[-1+\sqrt{1+4(U_{e,rs}+U_{e,fs})/U_B}]/2$ (Sari \& Esin 2001). $U_{e,rs}$ and $U_{e,fs}$ are the electron energy densities in the two shocked regions. The corresponding electron number densities are $n_{0,rs}$ and $n_{0,fs}$. We neglect the second IC scattering, since it should occur in the Klein-Nishina limit. Note that in the shock case, though the electron distribution can be inhomogeneous due to the insufficient diffusion time, the photon energy density is approximately homogeneous in the shocked shell because of the transparency of the shell to the spectral peak energy ($\nu_m$, the synchrotron peak frequency corresponding to $\gamma_m$). Thus we have $(U_{e,rs}+U_{e,fs})/U_{B}\simeq [(p-1)/(p-2)]m_ec^2(n_{0,rs}+n_{0,fs})\gamma_m/U_B$. Here we assume the heated electrons in the shocked regions have a distribution $dN/d\gamma_e\propto\gamma_e^{-p}$, where $p$ is the electron power-law index.
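Since $(1+e^{-2bt})/(1-e^{-2bt}) = \coth(bt)$, the cooling profile is straightforward to evaluate. The following Python sketch (ours, for illustration only; $B$, $\gamma_m$ and $\Delta$ are representative values from the parameter ranges used below, and we set $Y=0$ for brevity) computes $\hat\gamma_c(x)$ in the reverse shock region and the position $x_m$:
\begin{verbatim}
# Sketch: cooled electron energy gamma_c(x) behind the reverse shock,
# using the expressions above, and the position x_m where it reaches
# gamma_m. Illustrative cgs values only; Y set to 0 for brevity.
import numpy as np

c, sigma_T, m_e = 2.998e10, 6.652e-25, 9.109e-28   # cgs constants
B, Y, gamma_m, Delta = 500.0, 0.0, 1e3, 1e12
U_B = B**2/(8*np.pi)
b = 4*sigma_T*U_B*(1 + Y)/(3*m_e*c)                # cooling factor [1/s]

def gamma_c_rs(x):               # reverse-shock region: t(x) = 4x/c
    return 1.0/np.tanh(b*4*x/c)  # coth(b t): relativistic to cold regimes

# x_m: electrons at x_m have just cooled to ~gamma_m
x_m = c*np.arctanh(1.0/gamma_m)/(4*b)
print(f"x_m = {x_m:.3e} cm  (shell width Delta = {Delta:.1e} cm)")
print("gamma_c at 10*x_m:", gamma_c_rs(10*x_m))   # cooler further downstream
\end{verbatim}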
The second critical position is $x_a$. As mentioned above, in the cool part of the shocked shell, the electrons in each fluid element have approximately monoenergetic distributions. There is a position where the synchrotron peak frequency of the electrons equals the frequency $\hat\nu_a$ at which the optical depth is equal to 1. $x_a$ can be obtained by solving \begin{eqnarray} \label{SSAfre} \int_{0}^{x_a}\alpha_\nu(x,\hat\nu_a) dx=1, \end{eqnarray} where $\alpha_\nu$ is the absorption coefficient, to be given in the next section, $\hat\nu_a=0.45q_e B\hat\gamma_a^2/2\pi m_ec$ is used, and $\hat\gamma_a$ is the electron energy corresponding to $\hat\nu_a$. Beyond $x_a$, the electrons begin to be thermalized with a peak at $\hat\gamma_a$, and the peak energy remains approximately constant. An alternative derivation of $\hat\nu_a$ and $\hat\gamma_a$ is by balancing the cooling and the SSA heating, which should give similar results for $p<3$ (Ghisellini \& Svensson 1989). We calculate $x_{a,rs}$ ($\hat\gamma_{a,rs}$) and $x_{a,fs}$ ($\hat\gamma_{a,fs}$) independently in the reverse and forward shock regions, respectively, under the assumption that the radiation in one region does not affect the electron equivalent temperature in the other region. We can now give the approximate electron distribution at each fluid element in the shocked shell: \begin{eqnarray} \frac{dN(x)}{d\gamma_e}\simeq\left\{\begin{array}{ll} \frac{n_0(p-1)}{\gamma_m}(\frac{\gamma_e}{\gamma_m})^{-p}(1-b\gamma_et)^{p-2} & \gamma_m<\gamma_e<\hat\gamma_c(x)~ ~ \textrm{or} ~ ~0<x<x_m\\ n_0\delta[\gamma_e-\hat\gamma_c(x)] & \hat\gamma_a<\gamma_e\leq\gamma_m ~ ~ ~ ~ ~~ \textrm{or} ~ ~~ ~x_m\leq x\leq x_a\\ n_0\delta[\gamma_e-\hat\gamma_a] & \gamma_e\leq\hat\gamma_a ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~\textrm{or}~ ~ x_a<x\leq \Delta, \end{array}\right. \end{eqnarray} where $n_0$ is the electron number density. The first equation describes the evolution of the instantaneously injected electrons with a power-law distribution (Kardashev 1962). The electron distribution in the region beyond $x_m$ is described by a $\delta$-function. In the region beyond $x_a$, radiative cooling of the electrons is ineffective due to the balance between the SSA heating and the synchrotron+IC cooling, and thus the electron peak energy remains roughly constant until the electrons cool by adiabatic expansion. Here we neglect the escape of electrons, since the electrons should be confined by the magnetic field on a dynamic time scale.
\subsection{SYNCHROTRON SPECTRUM} Using the above electron distribution, we can derive the synchrotron intensity $I_\nu$. The emission and absorption coefficients at each fluid element can be given by \begin{eqnarray} j_\nu&=& \left\{\begin{array}{ll}\frac{1}{4\pi}\int d\gamma_eNP_\nu & \hat\gamma_c\ge\gamma_m\\ \frac{N_0}{4\pi}P_\nu(\hat\gamma_c) & 2<\hat\gamma_c<\gamma_m \\ \frac{N_0}{4\pi}P_{\nu,cyc}& \hat\gamma_c<2 \end{array} \right. \\ \alpha_\nu&=& \left\{\begin{array}{ll}-\frac{1}{8\pi m_e\nu^2}\int d\gamma_e\gamma_ep_eP_\nu \frac{\partial}{\partial \gamma_e}\Big(\frac{N}{\gamma_ep_e}\Big) & \hat\gamma_c\ge\gamma_m\\ \frac{N_0}{8\pi m_e\nu^2\hat\gamma_c^2}[\frac{d}{d\gamma_e}(\gamma_e^2P_\nu)]_{\gamma_e=\hat\gamma_c} & 2<\hat\gamma_c<\gamma_m \\ \frac{N_0}{8\pi m_e\nu^2\hat\gamma_cp_c}[\frac{d}{d\gamma_e}(\gamma_ep_eP_{\nu,cyc})]_{\gamma_e=\hat\gamma_c} & \hat\gamma_c\leq2, \end{array} \right. \label{av} \end{eqnarray} where $P_\nu$ is the synchrotron spectral power and $P_{\nu,cyc}=\frac{4\sigma_Tc}{3\pi}\frac{p_c^2U_{B}}{\nu_L}\frac{2}{1+3p_c^2}e^{-\frac{2(1-\nu/\nu_L)}{1+3p_c^2}}$ is the approximate cyclo-synchrotron power spectrum, which applies to electron energies $\hat\gamma_c<2$ (Ghisellini et al. 1998). $p_c=\sqrt{\hat\gamma_c^2-1}$ and $p_e=\sqrt{\gamma_e^2-1}$ are the electron momenta, and $\nu_L$ is the Larmor frequency. From the second formula above, we find that the absorption coefficient decreases as a power law $\nu^{-5/3}$ below the synchrotron peak frequency $\nu(\hat\gamma_c)$, while it decreases exponentially above it. Thus, the synchrotron absorption optical depth of a region with a given electron energy decreases considerably with increasing photon frequency once the photon frequency exceeds the synchrotron peak frequency at which the electrons emit. The radiation intensity follows from the radiative transfer equation: \begin{eqnarray} I_\nu=\int_{0}^{\Delta} j_{\nu,rs}\, e^{-\int^{\Delta}_{x}\alpha_{\nu,rs} ds-\int_{\Delta}^{3\Delta}\alpha_{\nu,fs} ds}dx+\int_{\Delta}^{3\Delta}j_{\nu,fs}\, e^{-\int_{x}^{3\Delta}\alpha_{\nu,fs} ds}dx \label{intensity} \end{eqnarray} Here the upper limit $\Delta$ corresponds to the peak time of the reverse shock emission in the observer frame. The subscripts $fs$ and $rs$ denote the forward shock and the reverse shock, respectively. The right-hand side of the equation is composed of two parts, the contributions of the reverse and forward shock regions; the exponent of the first term takes into account the fact that the reverse shock emission crosses through both the reverse and forward shock regions before it reaches the observer. The upper limit of $3\Delta$ is due to the fact that when the reverse shock emission catches up with the forward shock front, it has traveled $3\Delta$ in the shocked shell (or medium) frame. We focus on the \emph{relativistic} reverse shock in this paper, which has stronger emission than the Newtonian reverse shock, and we can neglect the shell spreading.
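As an illustration of how Eq.~(\ref{intensity}) can be evaluated in practice, the Python sketch below discretizes the two-zone transfer integral along the photon path; the emission and absorption profiles here are toy placeholders (they stand in for the full coefficients of Eq.~(\ref{av})), chosen only to mimic a hot layer behind each shock front:
\begin{verbatim}
# Sketch: discretized two-zone radiative transfer along the photon path,
# with toy (placeholder) emission/absorption profiles. It illustrates the
# numerical scheme only, not the full synchrotron coefficients.
import numpy as np

Delta, N = 1.0, 3000
s = np.linspace(0.0, 3*Delta, N)           # path coordinate through shell
ds = s[1] - s[0]
rs = s < Delta                             # reverse-shock region [0, Delta)

def intensity(j_rs, a_rs, j_fs, a_fs):
    j = np.where(rs, j_rs(s), j_fs(s))     # emission coefficient j_nu(s)
    a = np.where(rs, a_rs(s), a_fs(s))     # absorption coefficient alpha_nu(s)
    # optical depth from each point s out to the boundary at 3*Delta
    tau = np.cumsum(a[::-1])[::-1]*ds
    return np.sum(j*np.exp(-tau))*ds

# toy position-dependent coefficients mimicking a hot layer at each front
I = intensity(lambda x: 1.0 + 5*np.exp(-x/0.1), lambda x: 2.0 + 0*x,
              lambda x: 0.5 + 0*x,              lambda x: 0.2 + 0*x)
print("I_nu (toy units):", I)
\end{verbatim}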
This is because the photons from the reverse shock region suffer absorption in both the forward and reverse shock regions. The synchrotron absorption in the forward shock region leads to a sharp cutoff toward low frequencies, due to the exponential increase of the optical depth as the frequency decreases. At low frequencies below the cutoff, the spectrum is dominated by the forward shock emission. As the photon frequency increases, the forward shock region gradually becomes optically thin to the reverse shock emission, and the reverse shock emission produces a bump due to the piling-up of electrons at around $\hat\gamma_{a,rs}$. When the photon frequency is high enough that the whole shell is transparent, the spectrum recovers the optically thin shape. \begin{figure} [!htb] \centering{ \includegraphics[width=4.2in]{spec_Bm_50000_delta_12.eps}} \caption{The spectra (upper panel) and the corresponding spectral indices (lower panel) for a magnetic field of $B=5\times 10^{4}$ G. Different colors represent different parameters, as marked in the figure. Other parameters are $\gamma_m=10^3$, $\gamma_{Max}=10^7$, $p=2.3$, and $\Delta=10^{12}$ cm. In the upper panel, the solid lines represent the spectra from the shocked regions. The dotted lines and the dashed lines represent the contributions from the reverse shock and the forward shock, respectively. The spectral slopes are shown in the lower panel. } \label{fig2} \end{figure} \begin{figure} [!htb] \centering{ \includegraphics[width=4.2in]{spec_Bm_500_delta_12.eps} } \caption{Same as in Fig. \ref{fig2} but for a magnetic field of $B=500$ G.} \label{fig3} \end{figure} \begin{figure} [!htb] \centering{ \includegraphics[width=4.2in]{spec_Bm_50_delta_13.eps}} \caption{Same as in Fig. \ref{fig2} but for a magnetic field of $B=50$ G and a shell width of $\Delta=10^{13}$ cm.} \label{fig4} \end{figure} When the electron density of the forward shock region is much larger than that in the reverse shock region, the forward shock emission dominates. The forward shock spectrum is quite different from the conventional homogeneous case. There is an unconventional spectral segment in the low frequency bands, with a slope of $11/8$, first found by Granot et al. (2000). This segment results from the equivalent temperature gradient in the shocked shell. The lower the photon frequency, the closer the emitting position lies to the forward shock front. When the emitting position is close enough to the shock front, the electrons are uncooled (with an average energy of $\sim\gamma_m$), and the spectral slope becomes 2 because the absorption is dominated by the electrons of energy $\gamma_m$. Another, new segment has a spectral index of $\sim1$. This segment arises because, with increasing frequency, the electrons dominating the emission lie in the SSA region (beyond $x_{a,fs}$), and the number of such electrons increases approximately linearly with frequency, which leads to a slope of $\sim1$. It is worth noting that the segment of $\sim1$ also appears in the reverse shock spectrum around the bump, due to the contribution of electrons in the reverse shock SSA region. \subsection{APPLICATION TO THE EARLY AFTERGLOW} In this section we present an application of our model to the early afterglow. The circumburst environment in GRBs is usually believed to be a stellar wind (Chevalier \& Li 2000) or the interstellar medium (ISM).
Here we consider the wind case (Wu et al. 2003). Using a group of plausible parameters for GRB afterglows, we calculate the spectrum at the time when the reverse shock just crosses the shell. The number densities in the reverse and forward shock regions are $n_{0,rs}=2.1\times10^{9}$cm$^{-3} (\frac{A}{3\times10^{35}})^{5/4}E^{-1/4}_{52} (\frac{\Gamma}{300})^{3/4}\Delta^{-7/4}_{14}$ and $n_{0,fs}=6.9\times10^{7}$cm$^{-3}(\frac{A}{3\times10^{35}})^{7/4}E^{-3/4}_{52} (\frac{\Gamma}{300})^{5/4}\Delta^{-5/4}_{14}$, respectively. Here the circumburst number density profile is $n=A r^{-2}$, where $A=\dot M/4\pi m_pv_w=3\times10^{35}$cm$^{-1}(\dot M/10^{-5}M_{\odot}$yr$^{-1})(v_w/10^{3}$km s$^{-1})^{-1}$. $E$ and $\Gamma$ are the energy and the initial Lorentz factor of the shell, respectively. We use the usual notation $Q=10^mQ_m$ throughout the paper. Assuming the magnetic field energy fraction of the internal energy, $\epsilon_B$, is the same in the two regions, the magnetic fields in the two regions are also the same, given by $B=\sqrt{8\pi e_{fs}\epsilon_B}=1883.1$ G$ (\frac{A}{3\times10^{35}})^{3/4}E^{-1/4}_{52}(\frac{\Gamma}{300})^{3/4}\Delta^{-3/4}_{14}\epsilon_{B,-2}^{1/2}$, where $e_{fs}=1.4\times10^{7}$ erg cm$^{-3} (\frac{A}{3\times10^{35}})^{3/2}E^{-1/2}_{52}(\frac{\Gamma}{300})^{3/2}\Delta^{-3/2}_{14}$ is the internal energy density in the two shocked regions. The Lorentz factor of the shocked shell and medium is $\gamma_{sh}=33.9(\frac{A}{3\times10^{35}})^{-1/4}E^{1/4}_{52}(\frac{\Gamma}{300})^{1/4}\Delta^{-1/4}_{14}$. The minimum Lorentz factors in the two regions are $\gamma_{m,rs}=\epsilon_e(\bar\gamma_{rs}-1)[(p-2)/(p-1)]m_p/m_e=144.8$, where $\bar\gamma_{rs}=4.4(\frac{A}{3\times10^{35}})^{1/4}E^{-1/4}_{52}(\frac{\Gamma}{300})^{3/4} \Delta^{1/4}_{14}$ is the Lorentz factor of the shocked shell relative to the unshocked shell and $\epsilon_e=0.1$ is the electron energy fraction of the internal energy, and $\gamma_{m,fs}=\epsilon_e\gamma_{sh}[(p-2)/(p-1)]m_p/m_e=1392.6$. The resulting spectrum is shown in Fig. \ref{case}. We find that the reverse shock flux begins to exceed the forward shock flux at around $4\times 10^{14}$ Hz. At the V band ($\sim 5.4\times 10^{14}$ Hz), the reverse shock flux is roughly one order of magnitude higher than the forward shock flux, i.e., the reverse shock emission is brighter than the forward shock emission by $\sim2.5$ mag at the V band. The reverse shock spectral bump and the forward shock spectral segments of $11/8$ and $\sim1$ clearly appear. The bump lies between the UV and soft X-ray bands and thus may not be straightforward to observe. However, these are canonical parameters; in reality, GRB parameters can span wide ranges, so the reverse shock bump can also lie in a wide frequency range, and for some bursts it may move into observable bands, such as the optical. \begin{figure} [!htb] \centering{ \includegraphics[width=4.2in]{case_spec.eps}} \caption{The synchrotron spectrum for typical parameters at the early afterglow phase in the wind environment. We use the frequency in the observer frame, taking into account the relativistic motion of the shocked material. The adopted parameters are $E=10^{52}$ erg, $A=3{\times} 10^{35}$ cm$^{-1}$, $\Gamma=300$, $\Delta=10^{14}$ cm, $p=2.3$, $\epsilon_B=0.01$ and $\epsilon_e=0.1$. In the upper panel, the solid line represents the spectrum from the shocked regions. The dotted line and the dashed line represent the contributions from the reverse shock and the forward shock, respectively.
The spectral slope is shown in the lower panel. } \label{case} \end{figure} \section{DISCUSSION} In this paper, we consider the case in which the electron distributions in shocks during the GRB and its early afterglow phases are inhomogeneous due to the fast cooling of the electrons. We calculate the spectra in this case by treating the radiative transfer, and find that the spectrum in the optically thick part is quite different from that of the usually considered homogeneous case. For the reverse shock dominated case, the spectrum has a bump due to the combined absorption of the reverse and forward shock regions. For the forward shock dominated case, there is an unconventional slope of $11/8$ in the spectrum, consistent with Granot et al. (2000) and Granot \& Sari (2002). There is also a new spectral segment of slope $\sim1$ following the $11/8$ segment toward higher frequencies. Thus the spectral slopes of the forward shock emission for fast cooling are (2, 11/8, 1, $-1/2$, $-p/2$) from low to high frequency. We also present an application to the GRB early afterglow phase, i.e., the early forward and reverse shock phase of the external shock, with typical parameters. We find that the above spectral characteristics, such as the reverse shock bump and the forward shock spectral segments of $11/8$ and $\sim1$, are all present. Such spectral characteristics may be observed in the future. It should be noted that such a spectral shape may also be present in other objects with physical conditions similar to GRBs, such as active galactic nuclei (AGNs). To date, the reverse shock signature of the external shock has not been clearly confirmed by the data; even the widely believed reverse shock cases, such as the optical flashes of GRB 990123 and GRB 041219a, are still under debate. M\'{e}sz\'{a}ros \& Rees (1999) and Wei (2007) considered the applicability of the internal shock model. As shown in this paper, when the reverse shock emission dominates over the forward shock emission, it has very distinct features, appearing as an easily recognized bump above the forward shock emission. This therefore provides a method to identify the reverse shock emission from the spectrum. If, in the future, we observe a continuous spectrum over wide bands at the early afterglow phase, especially at low frequencies, or even several discrete points in the low frequency bands, we could identify the reverse shock emission. If instead the slopes of $11/8$ and/or $\sim1$ are observed, the spectrum should be from the forward shock. If such spectra are detected in the prompt emission phase, this suggests that the prompt emission comes from a shock, most probably the internal shock, since the shock model can naturally generate the inhomogeneity we discuss in this paper. This may be used to distinguish the internal shock model from other dissipation models, such as the magnetic reconnection model (e.g., Zhang \& Yan 2011), in which the electron distribution is close to the homogeneous case because the magnetic reconnection occurs randomly. Such spectral observations would in turn indicate that the electron distributions in shocks are indeed inhomogeneous, which may also give us new insight into the diffusion of fast-cooling electrons behind a relativistic shock. The diffusion process of electrons is not well understood, depending on unknown factors such as the magnetic field strength and structure in the emission region, and requires further MHD simulations of shocks.
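As a practical aside, the diagnostic slopes discussed above (2, 11/8, $\sim1$, 5/2, etc.) can be read off any tabulated spectrum by differencing in log-log space; a minimal sketch (Python; the arrays \texttt{nu} and \texttt{Fnu} are hypothetical stand-ins for an observed or simulated spectrum):

\begin{verbatim}
import numpy as np

def spectral_index(nu, Fnu):
    # Local slope d(log F_nu)/d(log nu) of a tabulated spectrum.
    return np.gradient(np.log(Fnu), np.log(nu))

# Hypothetical usage:
#   s = spectral_index(nu, Fnu)
# Fast-cooling inhomogeneous forward shock: segments near
# 2, 11/8, 1, -1/2, -p/2; the homogeneous (standard) case shows
# the optically thick slopes 2 and 5/2 instead.
\end{verbatim}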
Conversely, if the spectrum is characterized by slopes of 2 and/or 5/2 at low frequencies, this may suggest that the electron distribution in the emission region is homogeneous. Spectral observations will thus serve as a probe of the degree of homogeneity of the electron distribution in the emission region. Observationally, wide-band detections at early times, thanks to the rapid localization of GRBs by \emph{Swift}, have given us unprecedented insight into GRB early afterglow physics (e.g., Zhang et al. 2006), though the band coverage is not yet sufficient to provide detailed spectral information. However, follow-up projects, especially at low energies, are increasing, and spectral detection of GRBs and their early afterglows over wider bands than is currently possible is promising. SVOM (Paul et al. 2011; Wei et al. 2016), NGRG (Grossan et al. 2014) and UFFO (Grossan et al. 2012; Park et al. 2013) will allow simultaneous or rapid follow-up observations of GRBs in wide bands. GWAC (the Ground-based Wide Angle Camera) will possibly observe the optical emission, and the GFTs (the Ground Follow-up Telescopes) will observe the near-infrared emission during, after, or even before GRBs (Paul et al. 2011). Early observations at longer wavelengths, such as the millimeter and submillimeter bands, can be carried out by the EVLA (the Expanded Very Large Array \footnote{http://www.aoc.nrao.edu/evla/}) or ALMA (the Atacama Large Millimeter/submillimeter Array \footnote{http://www.almaobservatory.org}). SKA (the Square Kilometer Array, e.g., Carilli et al. 2003) and FAST (the Five-hundred-meter Aperture Spherical radio Telescope, Nan et al. 2011) will possibly contribute to the radio observations. All these instruments will provide us with more spectral information at the early afterglow phase or even during the GRB, allowing us to diagnose the reverse shock and forward shock signatures. \begin{acknowledgements} We thank Z. Li and X. F. Wu for useful suggestions and comments. We also thank an anonymous referee for valuable comments that improved the paper. We acknowledge partial support by the Chinese Natural Science Foundation (No. 11203067, 11133006, 11433004), the Yunnan Natural Science Foundation (2011FB115 and 2014FB188) (X.H.Z.) and the Key Research Program of the CAS (Grant NO. KJZD-EW-M06) (J.M.B.). \end{acknowledgements}
\section{Introduction} Milankovitch cycles, or orbitally-induced climate variations, are thought to influence, if not control, Earth's ice ages \citep{hays1976,imbrie1980,raymo1997,lisiecki2007}. This mechanism has also been proposed as an important player in the habitability of exoplanets, which may have orbital evolution very different from that of Earth \citep{spiegel2010,brasser2014,armstrong2014}. In \cite{deitrick2018} (hereafter, Paper I), we discussed much of the work that has been done to understand Milankovitch cycles, both for Earth and for exoplanets. Briefly, we review here the subset of the literature most concerned with the modeling of climate. Milutin Milankovi\'{c} and Wladimir K\"oppen supplied a plausible explanation for the orbital forcing of Earth's ice ages: small variations in summer-time insolation at high latitudes control whether ice sheets on the continent grow or retreat. This idea is generally accepted as at least part of the story \citep{hays1976, roe2006, huybers2008, lisiecki2010}, though the reality is somewhat more complicated because of geography, ice shelf calving, atmospheric circulation, and changes in greenhouse gases \citep{clark1998,abeouchi2013}, and some studies have challenged the role of orbital forcing entirely \citep{wunsch2004,maslin2016}. Much of the controversy surrounding Milankovitch theory stems from the fact that Earth's orbital and obliquity variations are rather small---Earth's obliquity varies by $\sim 2.5^{\circ}$ and its eccentricity by $\sim0.05$ \citep{laskar1993}. For exoplanets, the role of orbital forcing may be more compelling---many exoplanets have variations that are much larger than Earth's, and there is evidence that primordial obliquities (\emph{i.e.}, the obliquity after the formation stage) can be very different from Earth's present value \citep{miguel2010}. In this study, we are interested in how planetary habitability is affected by obliquity, eccentricity, and variations of these parameters. For example, it was proposed that, at zero obliquity, the lack of insolation at the poles of an Earth-like planet would cause the ice caps to grow uncontrollably and trigger a snowball state \citep{laskar1993}; however, climate models demonstrated that this is not the case \citep{williamskasting97}. In fact, the models indicate that Earth's climate can remain stable (and warm) at any obliquity \citep{williamskasting97,williams2003,spiegel2009} at its current solar flux. For obliquities larger than Earth's, the seasonality of the planet is intensified \citep{williamskasting97,williams2003,spiegel2009}, \emph{i.e.}, mid- and high-latitudes experience extremely warm summers and extremely cold, dark winters. At obliquity $\gtrsim55^{\circ}$, the poles begin to receive more insolation over an orbit than the equator \citep{vanwoerkom1953,williams1975,williams1993,lissauer2012,rose2017}. In such conditions, it is possible that ice sheets form at the equator (``ice-belts''), rather than at the poles \citep{williams2003,rose2017}, but this phenomenon appears to be sensitive to the atmospheric properties and the details of the model \citep{ferreira2014,rose2017}. The other important development is that high obliquity ($\gtrsim55^{\circ}$) tends to increase the distance (from the host star) of the outer edge of the habitable zone (HZ), because the insolation distribution is more even across the surface than at low obliquity \citep{spiegel2009,rose2017}.
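This $\sim55^{\circ}$ crossover is straightforward to verify numerically. The sketch below (Python) assumes a circular orbit and a solar-like constant of 1361 W m$^{-2}$, and averages the standard diurnal-mean insolation (the same three-case formula we write down in Section \ref{sec:climatemodel}) over one orbit:

\begin{verbatim}
import numpy as np

def daily_mean_insol(phi, delta, S0=1361.0):
    # Diurnal-mean flux at latitude phi for stellar declination delta
    # (circular orbit, rho = 1): polar night, polar day, or a normal
    # day/night cycle.
    cosH0 = -np.tan(phi) * np.tan(delta)
    if cosH0 >= 1.0:      # no sunrise
        return 0.0
    if cosH0 <= -1.0:     # no sunset
        return S0 * np.sin(phi) * np.sin(delta)
    H0 = np.arccos(cosH0)
    return S0 / np.pi * (H0 * np.sin(phi) * np.sin(delta)
                         + np.cos(phi) * np.cos(delta) * np.sin(H0))

def annual_mean(phi, eps, n=720):
    # Average over true longitude theta (uniform in time for e = 0).
    thetas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    deltas = np.arcsin(np.sin(eps) * np.sin(thetas))
    return np.mean([daily_mean_insol(phi, d) for d in deltas])

for eps_deg in (23.5, 54.7, 85.0):
    eps = np.radians(eps_deg)
    print(eps_deg, annual_mean(0.0, eps),
          annual_mean(np.radians(89.0), eps))
# The near-polar mean exceeds the equatorial mean above ~55 degrees.
\end{verbatim}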
The habitable zone, as we discuss it here, is the range of stellar flux at which a planet with an Earth-like atmosphere can maintain liquid water on its surface \citep[see][]{kasting1993,selsis2007,kopparapu2013}. The effect of a planet's eccentricity, $e$, on the orbitally-averaged stellar flux, $\langle S \rangle$, can be directly calculated \citep{laskar1993}, which results in a dependence of the form: \begin{equation} \langle S \rangle \propto (1-e^2)^{-1/2}. \label{eqn:annualinsol} \end{equation} Thus, the insolation increases as the eccentricity increases, and some studies have indeed shown that the outer edge of the habitable zone can move outward as a result \citep{williams2002,dressing2010}. This relationship is complicated by the fact that eccentricity can introduce a global ``seasonality''---a result of the varying distance between the planet and host star over an orbit. Because of Kepler's second law, the planet spends much of its orbit near apoastron, and if the orbital period is sufficiently long, snowball states can be triggered at these times \citep{bolmont2016}. Thus an increase in eccentricity does not warm an Earth-like planet in all cases. How orbital and obliquity \emph{variations} (exo-Milankovitch cycles) affect habitability is only beginning to be understood. Some studies have found that increases in eccentricity can rescue a planet from a snowball state \citep{dressing2010,spiegel2010}. Others have shown that strong variations can affect the boundaries of the habitable zone \citep{armstrong2014,way2017}. There may be some threat to the planet in the form of water loss if the planet is near the inner edge, because of the proximity of periastron to the host star during high-eccentricity epochs \citep{way2017}. Exo-Milankovitch cycles may also push the outer edge of the habitable zone outward or inward, as suggested in \cite{armstrong2014}. \cite{forgan2016} showed that Milankovitch cycles can be very rapid for circumbinary planets, though that study did not find them a threat to planetary habitability in the cases considered. Though the effects of different eccentricity and obliquity values and their variations have been studied by the previously discussed works, there remains no complete synthesis of orbital evolution, obliquity evolution, and climate, including the effects of ice sheets and oceans. The majority of the aforementioned works examined only static orbits and obliquities \citep{williamskasting97,williams2002,williams2003,spiegel2009,dressing2010,ferreira2014,bolmont2016,rose2017}. The studies that did model climate under varying orbital conditions were limited in various ways. \cite{spiegel2010} and \cite{way2017} allowed eccentricity to vary, but did not include obliquity variations. \cite{armstrong2014} included obliquity variations in addition to orbital variations. Unfortunately, that paper contained a sign error in the obliquity equations (though the code was correct) that was propagated to \cite{forgan2016}. The climate models used by \cite{spiegel2010} and \cite{forgan2016} did not include ice sheets and the thermal inertia associated with them, and so produced climates that are potentially too warm and too stable against the snowball instability. The climate model used in \cite{armstrong2014} included ice sheets, but the outgoing longwave radiation prescription and the lack of latitudinal heat diffusion make that model excessively stable against snowball states, and that model did not include oceans (see Section \ref{sec:armstrongcomp}).
\cite{spiegel2010} and \cite{forgan2016} included oceans only in a limited capacity: the albedo and heat capacities used are the average of land and ocean properties. This mutes the seasonal response of land and the thermal inertia of water. \cite{way2017} used a 3D GCM, easily the most robust model of the lot, but because that model is so computationally expensive, only a handful of simulations were run. Here, we present the first fully coupled model of orbits, obliquities, and climates of Earth-like exoplanets. This model treats land and ocean as separate components and includes ice sheet growth and decay on land. Because the model is computationally inexpensive, thousands of coupled orbit-obliquity-climate simulations can be run in a reasonable time frame. This facilitates the exploration of broad regions of parameter space and will help in the prioritization of planet targets for characterization studies. The purpose of this study is to examine the effect of obliquity and orbital evolution on potentially habitable planets. In Paper I, we modeled the orbit and obliquity of an Earth-mass planet, in the habitable zone of a G dwarf star, with an eccentric gas giant companion. This ``dynamically hot'' scenario represents an end-member case, in which the orbital evolution has a large impact on the climate of the planet, without catastrophic destruction of the planetary system. In this paper, we couple the climate model described in Section \ref{sec:climatemodel} to the orbit and obliquity model and analyze the ultimate climate state of the planet. In a number of interesting scenarios, we apply a fully-analytic climate model \citep{rose2017} to gain some deeper understanding of the results. Finally, we revisit the G dwarf systems from \cite{armstrong2014} with this new climate model to update the results in that paper. \section{Methods} We use a combination of a secular orbital model (\texttt{DISTORB}), an N-Body model (\texttt{HNBody} \citep{rauch2002}), a secular obliquity model (\texttt{DISTROT}), and a one-dimensional (1D) latitudinal energy balance model (EBM) with ice sheets. For a more detailed description of \texttt{DISTORB} and \texttt{DISTROT}, and a description of how we employ the N-Body model, see Paper I. We describe the EBM and ice-sheet model below. \subsection{Climate model} \label{sec:climatemodel} The climate model, \texttt{POISE} (Planetary Orbit-Influenced Simple EBM), is a one-dimensional EBM \citep{budyko1969, sellers1969} based on \cite{northcoakley1979}, with a number of modifications, foremost of which is the inclusion of a model of ice sheet growth, melting, and flow. The model is one-dimensional in $x = \sin{\phi}$, where $\phi$ is the latitude. In this fashion, latitude cells of size $dx$ will not have equal width in latitude, but will be equal in area. The general energy balance equation is: \begin{align} &\begin{aligned} C(x) \frac{\partial T}{\partial t}(x,t) -& D(x,t) \nabla^2 T(x,t) + I(x,T,t) = S(x,t) (1-\alpha(x,T,t)), \label{eqn:ebmeq} \\ \end{aligned} \end{align} where $C(x)$ is the heat capacity of the surface at location $x$, $T$ is the surface temperature, $t$ is time, $D$ is the coefficient of heat diffusion between latitudes (due to atmospheric circulation), $I(x,T,t)$ is the outgoing long-wave radiation (OLR) to space (i.e., the thermal infrared flux), $S(x,t)$ is the incident insolation (stellar flux), and $\alpha$ is the planetary albedo, which represents the fraction of the insolation that is reflected back into space.
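To make the discretization concrete, the following minimal sketch (Python) advances Equation (\ref{eqn:ebmeq}) by one explicit Euler step on a grid uniform in $x=\sin\phi$. This is for illustration only---\texttt{POISE} itself uses the implicit scheme of Equation (\ref{eqn:matrixform}) below---and the insolation and albedo arrays are placeholder inputs, though $A$, $B$, $D$, and the land heat capacity follow Table \ref{tab:ebm_tab}:

\begin{verbatim}
import numpy as np

nx = 60
x = np.linspace(-1.0 + 1.0/nx, 1.0 - 1.0/nx, nx)  # equal-area cell centers
dx = 2.0 / nx
A, B, D, C_surf = 203.3, 2.09, 0.58, 1.55e7       # Table 1 values

def step(T, S, albedo, dt):
    # One explicit Euler step of
    #   C dT/dt = d/dx[(1 - x^2) D dT/dx] - (A + B T) + S (1 - albedo).
    # dt must respect the explicit stability limit for diffusion.
    xe = 0.5 * (x[1:] + x[:-1])                    # interior cell edges
    flux = (1.0 - xe**2) * D * np.diff(T) / dx     # diffusive flux at edges
    div = np.zeros_like(T)
    div[1:-1] = np.diff(flux) / dx
    div[0], div[-1] = flux[0] / dx, -flux[-1] / dx # no flux through poles
    return T + dt / C_surf * (div - (A + B * T) + S * (1.0 - albedo))

T = 20.0 * (1.0 - 2.0 * x**2)                      # warm-start profile [C]
T = step(T, S=340.0 * np.ones(nx), albedo=0.32 * np.ones(nx), dt=8.64e4)
\end{verbatim}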
Though the model lacks a true longitudinal dimension, each latitude is divided into a land portion and a water portion. The land and water have distinct heat capacities and albedos, and heat is allowed to flow between the two regions. The energy balance equation can then be separated into two equations, one equation for the water component and one for the land component: \begin{align} &\begin{aligned} C_L \frac{\partial T_L}{\partial t} - D \frac{\partial}{\partial x} (1-x^2) \frac{\partial T_L}{\partial x} +& \frac{\nu}{f_L} (T_L-T_W) + I(x,T_L,t) \\& = S(x,t) (1-\alpha(x,T_L,t)),\label{eqn:ebland}\\ \end{aligned}\\ &\begin{aligned} C_W^{eff} \frac{\partial T_W}{\partial t} - D \frac{\partial}{\partial x} (1-x^2) \frac{\partial T_W}{\partial x} + &\frac{\nu}{f_W} (T_W-T_L) + I(x,T_W,t)\\& = S(x,t) (1-\alpha(x,T_W,t)),\label{eqn:ebwater}\\ \end{aligned} \end{align} where we have employed the co-latitudinal component of the spherical Laplacian, $\nabla^2$ (the radial and longitudinal/azimuthal components vanish). The effective heat capacity of the ocean is $C_W^{eff} = m_d C_W$, where $m_d$ is an adjustable parameter representing the mixing depth of the ocean. The parameter $\nu$ is used to adjust the land-ocean heat transfer to reasonable values, and $f_L$ and $f_W$ are the fractions of each latitude cell that are land and ocean, respectively. The insolation (or solar/stellar flux) received as a function of latitude, $\phi$, and declination of the host star, $\delta$, is calculated using the formulae of \cite{berger1978}. Declination, $\delta$, varies over the course of the planet's orbit for nonzero obliquity. For Earth, for example, $\delta \approx 23.5^{\circ}$ at the northern summer solstice, $\delta = 0^{\circ}$ at the equinoxes, and $\delta \approx -23.5^{\circ}$ at the northern winter solstice. Because $\delta$ is a function of time (or, equivalently, orbital position), the insolation varies, and gives rise to the seasons (again, assuming the obliquity is nonzero). For latitudes and times where there is no sunrise (e.g., polar darkness during winter): \begin{equation} S(\phi,\delta) = 0, \end{equation} while for latitudes and times where there is no sunset: \begin{equation} S(\phi,\delta) = \frac{S_{\star}}{\rho^2} \sin{\phi} \sin{\delta}, \end{equation} and for latitudes with a normal day/night cycle: \begin{equation} S(\phi,\delta) = \frac{S_{\star}}{\pi \rho^2} (H_0 \sin{\phi} \sin{\delta} + \cos{\phi} \cos{\delta} \sin{H_0}). \end{equation} Here, $S_{\star}$ is the solar/stellar constant (in W m$^{-2}$), $\rho$ is the distance between the planet and host star normalized by the semi-major axis (\emph{i.e.} $\rho = r/a$), and $H_0$ is the hour angle of the star at sunrise and sunset, defined as: \begin{equation} \cos{H_0} = - \tan{\phi}\tan{\delta}. \end{equation} The declination of the star with respect to the planet's celestial equator is a simple function of its obliquity $\varepsilon$ and its true longitude $\theta$: \begin{equation} \sin{\delta} = \sin{\varepsilon} \sin{\theta}. \label{eqn:decl} \end{equation} See also \cite{laskar1993} for a comprehensive derivation. For these formulas to apply, the true longitude should be defined as $\theta = f + \Delta^*$, where $f$ is the true anomaly (the angular position of the planet with respect to its periastron) and $\Delta^*$ is the angle between periastron and the planet's position at its northern spring equinox, given by \begin{equation} \Delta^* = \varpi + \psi + 180^{\circ}.
\end{equation} Above, $\varpi$ is the longitude of periastron, and $\psi$ is the precession angle. Note that we add $180^{\circ}$ because of the convention of defining $\psi$ based on the vernal point, $\vernal$, which is the position of the \emph{sun} at the time of the northern spring equinox. For exoplanets, there is likely a more sensible definition; however, we adhere to the Earth conventions for the sake of consistency with past literature. A point of clarification is in order: EBMs (at least, the models employed in this study) can be either \emph{seasonal} or \emph{annual}. The EBM component of \texttt{POISE} is a seasonal model---the variations in the insolation throughout the year/orbit are resolved and the temperature of the surface at each latitude varies in response, according to the leading terms in Equations (\ref{eqn:ebland}) and (\ref{eqn:ebwater}). In an annual model (we utilize one in this study to understand ice sheet stability; see Section \ref{sec:rosemodel}), the insolation at each latitude is averaged over the year, and the energy balance equation (Eq. \ref{eqn:ebmeq}) is forced into ``steady state'' by setting $\partial T/\partial t$ equal to zero (this can be done numerically or analytically). By ``steady state'', we mean that the surface conditions (temperature and albedo) come to final values and remain there. Seasonal EBMs, on the other hand, can be in ``equilibrium'', in that the orbitally averaged surface conditions remain the same from year to year, but the surface conditions vary \emph{throughout} the year. The planetary albedo is a function of surface type (land or water), temperature, and zenith angle. For land grid cells, the albedo is: \begin{equation} \alpha = \left\{ \begin{array}{cc} \alpha_L + 0.08 P_2(\sin{Z}) & \begin{array}{c}\hspace{1mm} \text{if } M_{\text{ice}} = 0 \text{ and } T > -2^{\circ} \text{ C} \end{array}\\ \alpha_i & \begin{array}{c} \hspace{1mm} \text{if }M_{\text{ice}} > 0\text{ or } T \leq -2^{\circ} \text{ C}, \end{array}\\ \end{array} \right. \label{eqn:albland} \end{equation} while for water grid cells it is: \begin{equation} \alpha = \left\{ \begin{array}{cc} \alpha_W + 0.08 P_2(\sin{Z}) & \hspace{1mm} \text{if } T > -2^{\circ} \text{ C}\\ \alpha_i &\hspace{4mm} \text{if } T \leq -2^{\circ} \text{ C},\\ \end{array} \right. \label{eqn:albwater} \end{equation} where $Z$ is the zenith angle of the sun at noon and $P_2(x) = 1/2 (3 x^2-1)$ (the second Legendre polynomial). This last quantity is used to approximate the additional reflectivity seen at shallow incidence angles, \emph{e.g.} at high latitudes on Earth. The zenith angle at each latitude is given by \begin{equation} Z = | \phi - \delta |. \end{equation} The albedos, $\alpha_L$, $\alpha_W$ (see Table \ref{tab:ebm_tab}), not accounting for zenith angle effects, are chosen to match Earth data \citep{northcoakley1979} and account, over the large scale, for clouds, various surface types, and water waves. Additionally, the factor of $0.08$ in Equations (\ref{eqn:albland}) and (\ref{eqn:albwater}) is chosen to reproduce the albedo distribution in \cite{northcoakley1979}. The functional form of Equations \ref{eqn:albland} and \ref{eqn:albwater} is also given by \cite{northcoakley1979}---those authors fit Earth measurements using Fourier-Legendre series, finding that the dominant albedo term is the second-order Legendre polynomial.
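In code, the albedo rules of Equations (\ref{eqn:albland}) and (\ref{eqn:albwater}) reduce to a few conditionals; a minimal sketch (Python, with the surface albedos of Table \ref{tab:ebm_tab}):

\begin{verbatim}
import numpy as np

def P2(y):
    # Second Legendre polynomial.
    return 0.5 * (3.0 * y**2 - 1.0)

def albedo(T, Z, ice_mass=0.0, land=True,
           alb_L=0.363, alb_W=0.263, alb_ice=0.6):
    # Ice albedo when land ice is present or T <= -2 C; otherwise the
    # bare-surface value plus the zenith-angle term 0.08 P2(sin Z).
    if (land and ice_mass > 0.0) or T <= -2.0:
        return alb_ice
    return (alb_L if land else alb_W) + 0.08 * P2(np.sin(Z))
\end{verbatim}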
The ice albedo, $\alpha_i$, is a single value that does not depend on zenith angle, because ice tends to occur at high zenith angle, so that the zenith angle is essentially already accounted for in the choice of $\alpha_i$. Equation (\ref{eqn:albland}) indicates that when there is ice on land ($M_{\text{ice}}>0$), or the temperature is below freezing, the land takes on the albedo of ice. Though there are multiple conditionals governing the albedo of the land, in practice the temperature condition is only used when ice sheets are turned off in the model, since ice begins to accumulate at $T = 0^{\circ}$ C, and so is always present when $T < -2^{\circ}$ C. Equation (\ref{eqn:albwater}) indicates a simpler relationship for the albedo over the oceans: when it is above freezing, the albedo is that of water (accounting also for zenith angle effects); when it is below freezing, the albedo is that of ice. We take the land fraction and water fraction to be constant across all latitudes. This is roughly like having a single continent that extends from pole to pole. The effect of geography on the climate is beyond the scope of this work, which is to isolate the orbitally-induced climate variations. Like \cite{budyko1969} and subsequent studies, including \cite{northcoakley1979}, we utilize a linearization of the OLR with temperature: \begin{equation} I = A + BT, \label{eqn:olrnc} \end{equation} where we adopt the values for Earth as determined by \cite{northcoakley1979}: $A = 203.3$ W m$^{-2}$ and $B = 2.09$ W m$^{-2}$ $^{\circ}$C$^{-1}$, and $T$ is the surface temperature in $^{\circ}$C. The purpose of this linearization is that it allows the coupled set of equations to be formulated as a matrix problem that can be solved using an implicit Euler scheme \citep{press1987} with the following form: \begin{equation} \mathscr{M}\cdot T_{n+1} = \frac{C T_n}{\Delta t} - A + S (1-\alpha),\label{eqn:matrixform} \end{equation} where $T_n$ is a vector containing the current surface temperatures, $T_{n+1}$ is a vector representing the temperatures to be calculated, and $C$, $A$, $S$, and $\alpha$ are vectors containing the heat capacities, OLR offsets (Equation \ref{eqn:olrnc}), insolation at each latitude, and albedos, respectively. The matrix $\mathscr{M}$ contains all of the information on the left-hand sides of Equations \ref{eqn:ebland} and \ref{eqn:ebwater} related to temperature. The time-step, $\Delta t$, is chosen so that conditions do not change significantly between steps, resulting in typically 60 to 80 time-steps per orbit. The new temperature values can then be calculated by taking the dot-product of $\mathscr{M}^{-1}$ with the right-hand side of Equation \ref{eqn:matrixform}. The large time step allowed by this integration scheme greatly speeds up the climate model, allowing us to run thousands of simulations for millions of years. The ice sheet model consists of three components: mass balance (that is, local ice accumulation and ablation), latitudinal flow across the surface, and isostatic rebound of the bedrock. Latitudinal flow ensures that the ice sheets maintain a realistic size and shape (for example, they do not grow to unrealistic heights at the poles), while bedrock rebound is necessary to accurately model ice flow. We model ice accumulation and ablation in a similar fashion to \cite{armstrong2014}. Ice accumulates on land at a constant rate, $r_{\text{snow}}$, when temperatures are below 0$^{\circ}$ C.
Melting/ablation occurs when ice is present and temperatures are above 0$^{\circ}$ C, according to the formula: \begin{equation} \frac{dM_{\text{ice}}}{dt} = \frac{2.3 \sigma (T_{\text{freeze}}^4 - (T+T_{\text{freeze}})^4)}{L_h}, \label{eqn:ablation} \end{equation} where $M_{\text{ice}}$ is the surface mass density of ice, $\sigma = 5.67 \times 10^{-8}$ W m$^{-2}$ K$^{-4}$ is the Stefan-Boltzmann constant, $L_h = 3.34 \times 10^5$ J kg$^{-1}$ is the latent heat of fusion of ice, and $T_{\text{freeze}} = 273.15$ K. The factor of 2.3 that appears here, though not in \cite{armstrong2014}, is added to scale the melt rate to roughly Earth values of 3 mm $^{\circ}$C$^{-1}$ day$^{-1}$ \citep[see][]{braithwaitezhang2000,Lefebre2002,huybers2008}. The ice sheets flow across the surface via deformation and sliding at the base. We use the formulation from \cite{huybers2008} to model the changes in ice height due to these effects. Bedrock depression is moderately important in this model (despite the fact that we have only one atmospheric layer and thus do not resolve elevation-based effects), because the flow rate is affected. This ultimately affects the ice sheet height---without the bedrock component, the ice sheets grow to be $\sim10\%$ taller, but less massive (see Section \ref{sec:repmilank}). The ice flow \citep[via][]{huybers2008} is: \begin{align} &\begin{aligned} \frac{\partial h}{\partial t} = \frac{\partial}{\partial y} &\left[ \frac{2A_{\text{ice}}(\rho_i g)^n}{n+2} \left | \left ( \frac{\partial (h+H)}{\partial y} \right )^{n-1} \right | \right. \left. \cdot \frac{\partial (h+H)}{\partial y}~ (h+H)^{n+2} + u_b h \right], \label{eqn:iceflow1}\\ \end{aligned} \end{align} where $h$ is the height of the ice, $H$ is the height of the bedrock (always negative or zero, in this case), $A_{\text{ice}}$ represents the deformability of the ice, $\rho_i$ is the density of ice, $g$ is the acceleration due to gravity, and $n$ is the exponent in Glen's flow law \citep{glen1958}, with $n=3$. The ice height and ice surface mass density, $M_{\text{ice}}$, are simply related via $M_{\text{ice}} = \rho_i h$. The first term inside the derivative represents the ice deformation; the second term is the sliding of the ice at the base. The latitudinal coordinate, $y$, is related to the radius of the planet and the latitude, $y = R \phi$, thus $\Delta y = R \Delta x (1-x^2)^{-1/2}$. Finally, $u_b$, the ice velocity across the sediment, is: \begin{align} &\begin{aligned} u_b = &\frac{2 D_0 a_{\text{sed}}}{(m+1)b_{\text{sed}}} \left ( \frac{ |a_{\text{sed}}|}{2D_0 \mu_0} \right )^m \cdot \left ( 1 - \left [ 1- \frac{b_{\text{sed}}}{|a_{\text{sed}}|} \min \left ( h_s,\frac{|a_{\text{sed}}|}{b_{\text{sed}}} \right ) \right ]^{m+1} \right ), \label{eqn:basalvel} \end{aligned} \end{align} as described by \cite{jenson1996}. The constant $D_0$ represents a reference deformation rate for the sediment, $\mu_0$ is the reference viscosity of the sediment, $h_s$ is the depth of the sediment, and $m=1.25$. The shear stress from the ice on the sediment is: \begin{equation} a_{\text{sed}} = \rho_i g h \frac{\partial (h+H)}{\partial y}, \label{eqn:shear} \end{equation} and the rate of increase of shear strength with depth is: \begin{equation} b_{\text{sed}} = (\rho_s-\rho_w)g \tan{\phi_s}, \end{equation} where $\rho_s$ and $\rho_w$ are the density of the sediment and water, respectively, and $\phi_s$ is the internal deformation angle of the sediment.
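For concreteness, the basal sliding velocity of Equations (\ref{eqn:basalvel}) and (\ref{eqn:shear}) can be evaluated for a single ice column as in the sketch below (Python; the sediment parameters follow Table \ref{tab:ice_tab}, while the ice thickness and surface slope are arbitrary illustrative inputs):

\begin{verbatim}
import numpy as np

RHO_I, RHO_S, RHO_W = 916.7, 2390.0, 1000.0  # densities [kg m^-3]
G, D0, MU0 = 9.8, 7.9e-7, 3.0e9              # [m s^-2], [s^-1], [Pa s]
M, H_S, PHI_S = 1.25, 10.0, np.radians(22.0)

def u_basal(h, dsdy):
    # h: ice thickness [m]; dsdy: surface slope d(h+H)/dy.
    a = RHO_I * G * h * dsdy                  # shear stress, Eq. (shear)
    b = (RHO_S - RHO_W) * G * np.tan(PHI_S)   # strength gradient b_sed
    if a == 0.0:
        return 0.0
    zs = min(H_S, abs(a) / b)                 # deforming sediment depth
    return (2.0 * D0 * a / ((M + 1.0) * b)
            * (abs(a) / (2.0 * D0 * MU0))**M
            * (1.0 - (1.0 - b * zs / abs(a))**(M + 1.0)))

print(u_basal(h=1500.0, dsdy=-2.0e-3))        # [m/s]; sign follows slope
\end{verbatim}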
We adopt the same numerical values as \cite{huybers2008} for all parameters related to ice and sediment (see Table \ref{tab:ice_tab}), with a few exceptions. We use a value of $A_{\text{ice}}$ (ice deformability) that is consistent with ice at 270 K \citep{paterson1994}, and a value of $r_{\text{snow}}$ (the precipitation rate) that best allows us to reproduce Milankovitch cycles on Earth (see Section \ref{sec:valid}). Note also that the value of $D_0$ in Table A2 of \cite{huybers2008} appears to be improperly converted for the units listed (the correct value, from \cite{jenson1996}, is listed in the text, however). With Equations (\ref{eqn:basalvel}) and (\ref{eqn:shear}), Equation (\ref{eqn:iceflow1}) can be treated numerically as a diffusion equation, with the form: \begin{equation} \frac{\partial h}{\partial t} = D_{\text{ice}} \frac{\partial^2 (h+H)}{\partial y^2}, \end{equation} where \begin{align} &\begin{aligned} D_{\text{ice}} & = \frac{2A_{\text{ice}}(\rho_i g)^n}{n+2} \left | \left ( \frac{\partial (h+H)}{\partial y} \right )^{n-1} \right | ~ (h+H)^{n+2}\\ &+\frac{2 D_0 \rho_i g h^2}{(m+1)b_{\text{sed}}} \left ( \frac{ |a_{\text{sed}}|}{2D_0 \mu_0} \right )^m \cdot \left ( 1 - \left [ 1- \frac{b_{\text{sed}}}{|a_{\text{sed}}|} \min \left ( h_s,\frac{|a_{\text{sed}}|}{b_{\text{sed}}} \right ) \right ]^{m+1} \right ), \end{aligned} \end{align} and $D_{\text{ice}}$ is evaluated at each time-step, at every cell boundary, to ensure mass continuity. We solve the diffusion equation numerically using a Crank-Nicolson scheme \citep{crank1947}. The bedrock depresses and rebounds locally in response to the changing weight of ice above, always seeking isostatic equilibrium. The equation governing the bedrock height, $H$, is \citep{clark1998,huybers2008}: \begin{equation} \frac{\partial H}{\partial t} = \frac{1}{T_b}\left( H_{eq} - H - \frac{\rho_i h}{\rho_b} \right), \label{eqn:brock} \end{equation} where $T_b$ is a characteristic relaxation time scale, $H_{eq} = 0$ is the ice-free equilibrium height, and $\rho_b$ is the bedrock density. We again adopt the values used by \cite{huybers2008} (see Table \ref{tab:ice_tab}). Because of the longer time-scales (years) associated with the ice sheets, the growth/melting and ice-flow equations are run asynchronously in \texttt{POISE}. First, the EBM (Equation \ref{eqn:ebmeq}) is run for 4-5 orbital periods, and ice accumulation and ablation are tracked over this time frame, but ice-flow (Equation \ref{eqn:iceflow1}) is ignored. The annually-averaged ice accumulation/ablation is then calculated from this time frame and passed to the ice-flow time-step, which can be much longer (years). The EBM is then re-run periodically to update accumulation and ablation and to ensure that conditions vary smoothly and continuously. To clarify, the hierarchy of models and their time-steps is as follows: \begin{enumerate} \item The EBM (shortest time-step): run for a duration of several orbital periods with time-steps on the order of days. The model is then rerun at the end of every orbital/obliquity time-step and at user-set intervals throughout the ice-flow model. \item The ice-flow model (middle time-step): run at the end of every orbital time-step (with time-steps of a few orbital periods), immediately after the EBM finishes. The duration of the model will follow one of two scenarios: \begin{enumerate}[label=\alph*] \item If the orbital/obliquity time-step is sufficiently long, the EBM is rerun at user-set intervals, then the ice-flow model continues.
The ice-flow model and the EBM thus alternate back-and-forth until the end of the orbit/obliquity time-step. \item If the orbital/obliquity time-step is shorter than the user-set interval, the ice-flow model simply runs until the end of the orbital time-step. \end{enumerate} \item The orbital/obliquity model (longest time-step). The time-steps are set by the fastest-changing variable (see Paper I) amongst those parameters. \end{enumerate} This approach is shown schematically in Figure \ref{fig:poisestruct}. The user-set interval discussed above must be considered carefully. The assumption is that annually-averaged climate conditions like surface temperature and albedo do not change much during the time span over which the ice-flow model runs. Hence, we choose a value that ensures that the ice-flow does not run so long that it dramatically changes the albedo without updating the temperature and ice balance (growth/ablation) via the EBM. The initial conditions for the EBM are as follows. The first time the EBM is run, the planet has zero ice mass on land, and the temperature on both land and water is set by the function \begin{equation} T_0 = 7.5^{\circ}\text{C} + (20^{\circ}\text{C})(1-2\sin^2{\phi}),\label{eqn:inittemp} \end{equation} where $\phi$ is the latitude. This gives the planet a mean temperature of $\sim14^{\circ}$ C, ranging from $\sim28^{\circ}$ C in the tropics to $\sim-13^{\circ}$ C at the poles. This is thus a ``warm start'' condition. The initial albedo of the surface is calculated from the initial temperatures. We then perform a ``spin-up'' phase, running the EBM iteratively until the mean temperature between iterations changes by $<0.1^{\circ}$ C, \emph{without} running the orbit, obliquity, or ice-flow models, to bring the seasonal EBM into equilibrium at the actual stellar flux the planet receives and its actual initial obliquity. Then, every time the EBM is rerun (at the user-set interval or the end of the orbit/obliquity time-step), the initial conditions are taken from the previous EBM run (temperature distribution) and the end of the ice-flow run (albedo, ice mass). \begin{figure} \includegraphics[width=\textwidth]{f1.pdf} \caption{\label{fig:poisestruct} Hierarchy of \texttt{POISE} and the orbit and obliquity models. The orbit and obliquity models (\texttt{DISTORB} and \texttt{DISTROT}) are run for $\sim$ hundreds of years (with an adaptive time step determined by the rates of change of the orbital/obliquity parameters). \texttt{POISE} is run at the end of each orbit/obliquity time step. First, the EBM is run for several orbits, with time steps of $\sim$ 5 days. Then the ice flow model is run with time steps of $\sim 3-5$ orbits.
The ice flow model runs until the next orbit/obliquity time step, or until a user-set time, at which point the EBM is rerun for several orbits.} \end{figure} \begin{table} \caption{\textbf{Parameters used in the EBM}} \centering \begin{tabular}{lllp{0.5\linewidth}} \hline\hline \\ [-1.5ex] Variable & Value & Units & Physical description \\ [0.5ex] \hline \\ [-1.5ex] $C_L$ & $1.55 \times 10^7$ & J m$^{-2}$ K$^{-1}$ & land heat capacity \\ $C_W$ & $4.428 \times 10^6$ & J m$^{-2}$ K$^{-1}$ m$^{-1}$ & ocean heat capacity per meter of depth \\ $m_d$ & 70 & m & ocean mixing depth \\ $D$ & 0.58 & W m$^{-2}$ K$^{-1}$ & meridional heat diffusion coefficient\\ $\nu$ & 0.8 & & coefficient of land-ocean heat flux\\ $A$ & 203.3 & W m$^{-2}$ & OLR parameter\\ $B$ & 2.09 & W m$^{-2}$ K$^{-1}$& OLR parameter\\ $\alpha_L$ & 0.363 & & albedo of land \\ $\alpha_W$& 0.263 & & albedo of water \\ $\alpha_i$& 0.6 & & albedo of ice\\ $f_L$ & 0.34 & & fraction of latitude cell occupied by land\\ $f_W$ & 0.66 & & fraction of latitude cell occupied by water\\ \hline \end{tabular} \label{tab:ebm_tab} \end{table} \begin{table} \caption{\textbf{Parameters used in the ice sheet model}} \centering \begin{tabular}{lllp{0.5\linewidth}} \hline\hline \\ [-1.5ex] Variable & Value & Units & Physical description \\ [0.5ex] \hline \\ [-1.5ex] $T_{\text{freeze}}$ & 273.15 & K & freezing point of water \\ $L_h$ & $3.34 \times 10^5$ & J kg$^{-1}$ & latent heat of fusion of water\\ $r_{\text{snow}}$ & $2.25 \times 10^{-5}$ & kg m$^{-2}$ s$^{-1}$ & snow/ice deposition rate \\ $A_{\text{ice}}$ & $2.3 \times 10^{-24}$ & Pa$^{-3}$ s$^{-1}$ & deformability of ice \\ $n$ & 3 & & exponent of Glen's flow law \\ $\rho_i$ & 916.7 & kg m$^{-3}$ & density of ice \\ $\rho_s$ & 2390 & kg m$^{-3}$ & density of saturated sediment \\ $\rho_w$ & 1000 & kg m$^{-3}$ & density of liquid water\\ $D_0$ & $7.9 \times 10^{-7}$ & s$^{-1}$ & reference sediment deformation rate\\ $\mu_0$ & $3 \times 10^9$ & Pa s & reference sediment viscosity \\ $m$ & 1.25 & & exponent in sediment stress-strain relation\\ $h_s$ & 10 & m & sediment depth \\ $\phi_s$ & 22 & degrees & internal deformation angle of sediment \\ $T_b$ & 5000 & years & bedrock depression/rebound timescale\\ $\rho_b$ & 3370 & kg m$^{-3}$ & bedrock density\\ \hline \end{tabular} \label{tab:ice_tab} \end{table} \subsection{Analytical solution for ice stability} \label{sec:rosemodel} To better understand the snowball instability, we compare our results to the analytical EBM from \cite{rose2017}. Their model is an \emph{annual} EBM and is analytic in that the solution is algebraic, rather than numerical. While this model does not capture seasonal variations or the thermal inertia associated with ice sheets, it is nonetheless instructive for understanding how the snowball state is triggered. We utilize the Python code\footnote{Available at https://github.com/brian-rose/ebm-analytical} developed by those authors for our results in Section \ref{sec:icestab}. According to the ``slope-stability theorem'' \citep{cahalan1979}, the ice edge is stable as long as \begin{equation} \frac{dq}{dx_s} > 0, \label{eqn:slopestable1} \end{equation} where $x_s = \sin{\phi_s}$, $\phi_s$ is the latitude of the ice edge (land and ocean are not separate components in the analytic model), and $q$ is the non-dimensional quantity \begin{equation} q = \frac{a_0 Q}{A+B T_{\text{ref}}}.
\end{equation} The quantity $q$ represents the absorbed solar/stellar radiation, divided by the planet's cooling function (or outgoing longwave radiation) at some temperature. Thus, it is analogous to the total heating that the planet receives, both from the host star and its own greenhouse effect. Here, $Q$ is the global average incoming flux ($4Q$ is the solar/stellar constant, $S_{\star}$) and $T_{\text{ref}}$ is the temperature threshold at which the planetary albedo switches from a value appropriate for ice free to ice covered ($T_{\text{ref}}$ is the freezing point, in other words). For ice free latitudes, the co-albedo, $a_0$, is a single value in the annual model. In our comparison using our seasonal model, we take this to be the average co-albedo of the unfrozen surfaces, $a_0 = f_L (1-\alpha_L)+f_W (1-\alpha_W)$, and we set $T_{\text{ref}} = -2^{\circ}$ C. Equation (\ref{eqn:slopestable1}) applies to low-obliquity planets. If the planet has high obliquity, ice will tend to form at the equator, and the stability condition is instead \begin{equation} \frac{dq}{dx_s} < 0. \label{eqn:slopestable2} \end{equation} In the annual model, there is a distinct boundary between ``low'' and ``high'' obliquity, and the transition occurs at \begin{equation} \varepsilon = \sin^{-1}\left (\sqrt{\frac{2}{3}} \right) \approx 54.74^{\circ}. \end{equation} See Equation (3b) of \cite{rose2017}. This angle is the obliquity at which the average annual insolation is the same at all latitudes. At a single value of $q$, there can be multiple equilibrium locations for the ice edge---but only some of these ``branches'' are stable (those with positive or zero slopes) according to the slope-stability theorem. At Earth's obliquity, the slope (Equation \ref{eqn:slopestable1}) is negative at high latitudes, which gives rise to the ``small ice cap instability'' (SICI), and near the equator, giving rise to the ``large ice cap instability'' (LICI). The slope is positive between $\sim 35^{\circ}$ and $\sim 80^{\circ}$---in other words, an ice cap extending to this range of latitudes is stable. As we will show, this stability concept is useful in understanding how the snowball states occur in many of our simulations. However, because the seasonal EBM (\texttt{POISE}) is not an equilibrium model, it does deviate from the annual model at times. Hence, the ice stability diagrams that we analyze in Section \ref{sec:icestab} do not always accurately predict the occurrence of snowball states. \subsection{Statistics and machine learning} \label{davealgo} To extend the predictive power and utility of the model, we calculate correlations between the orbital parameters and both snowball states and the area of ice coverage. We then employ a machine learning algorithm to determine how often we can correctly predict the climate state of the planet considered here, given a set of orbital properties. The properties that go into this analysis are shown in Table \ref{tab:davebot}. There are 10 model inputs (orbit/spin parameters) and 2 model outputs ($\delta_{\text{snow}}$ and $f_{\text{ice}}$). The fractional ice cover, $f_{\text{ice}}$, is the fractional area of the globe that is covered in ice year-round at the end of the simulation (the last orbital time-step). The other output parameter, $\delta_{\text{snow}}$, is 1 if the planet is in a snowball state at the end of the simulation and 0 if it is not.
Note that $\delta_{\text{snow}} = 1$ when the \emph{oceans} are frozen year-round; this means that there exist circumstances in which $\delta_{\text{snow}}=1$ but $f_{\text{ice}}\neq1$ (the land component can warm above freezing seasonally, even if the oceans are frozen). In practice, this only occurs when the ice sheet model is not used, as the ice significantly alters the thermal inertia of the land. It is usually the case that $\delta_{\text{snow}}=1$ when $f_{\text{ice}}=1$ and $\delta_{\text{snow}}=0$ when $f_{\text{ice}}<1$. \begin{table} \caption{\textbf{Parameters used in statistical analysis and machine-learning algorithm}} \centering \begin{tabular}{lp{0.45\linewidth}} \hline\hline \\ [-1.5ex] \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{Description} \\ [0.5ex] \hline \\ [-1.5ex] $S$ & Incident stellar flux (stellar constant) \\ $e_0$ & Initial eccentricity\\ $\Delta e$ & Maximum change in eccentricity\\ $\langle e \rangle$ & Mean eccentricity\\ $i_0$ & Initial inclination\\ $\Delta i$ & Maximum change in inclination\\ $\langle i \rangle$ & Mean inclination\\ $\varepsilon_0$ & Initial obliquity\\ $\Delta \varepsilon$ & Maximum change in obliquity\\ $\langle \varepsilon \rangle$ & Mean obliquity\\ \hline $\delta_{\text{snow}}$ & Equal to 1 in snowball state, 0 otherwise\\ $f_{\text{ice}}$ & Fractional area permanently (year-round) covered in ice\\ \hline \end{tabular} \label{tab:davebot} \end{table} We examine how the input features of our model (Table \ref{tab:davebot}) correlate with the final climate state ($\delta_{\text{snow}}$ and $f_{\text{ice}}$) to gain insight into how the underlying physical processes influence the outcomes of our simulations. For example, if the mean eccentricity correlates with the likelihood that the planet enters a snowball state, we can infer that orbital dynamical processes could influence the climate evolution. Note that we cannot and do not seek to show causal relationships in the correlation analysis, but rather identify features that may impact the climate evolution. The relationship between any feature of our model and the final state of the simulated planet climate likely has a non-linear correlation, given the inherent complexities of our coupled orbital dynamics and climate model. To characterize these correlations, we compute the simple Pearson correlation coefficient ($R$) and the maximal information coefficient \citep[MIC;][]{reshef2011}. Pearson's $R$ measures the linear relationship between two variables and ranges over $[-1,1]$, with 0 representing no linear correlation and 1 and -1 representing perfect positive and negative linear correlations, respectively. We also compute the $p$-values associated with each correlation, which are measures of statistical significance: the $p$-value is the probability that the null hypothesis would produce the observed correlation $R$. A $p<0.05$ is the traditional threshold of significance when testing a single hypothesis; however, since we are testing multiple hypotheses (10 in total for each climate parameter), we set the threshold for significance to $p<0.05/10$, or $p<0.005$ \citep[a Bonferroni correction;][]{dunn1959}. The MIC characterizes non-linear relationships between variables by estimating the maximum mutual information between two variables. Mutual information between two variables characterizes the reduction in uncertainty of one variable after observing the other \citep[see][]{reshef2011}.
For independent variables, the mutual information is 0, as observing one does not provide any insight into the other. The MIC ranges over $[0,1]$, where MIC $= 0$ represents no relationship, while MIC $= 1$ represents a noiseless functional relationship of any form. The MIC depends on the estimate of the joint distribution of the two variables when computing the maximum mutual information, and hence is sensitive to how the variables are binned. Following the suggestion of \cite{reshef2011}, we set the number of bins to be $N^{0.6}$ for $N$ simulations. We computed the MIC using the Python package \texttt{minepy} \citep{albanese2013} for each feature versus the final surface area of ice ($f_{\text{ice}}$) and the final climate state ($\delta_{\text{snow}}$). We also define a measurement of the non-linearity associated with each parameter: \begin{equation} \zeta_{NL} = \text{MIC} - R^2. \end{equation} By subtracting out a measure of the linearity of the relationship ($R^2$, in this case), $\zeta_{NL}$ captures the degree to which the measured correlation is non-linear. This quantity allows us to probe how the coupling between our models impacts a planet's final climate state, as opposed to direct climate scalings. As an alternative method to estimate the correlation between various features and simulation results, we turn to a machine learning (ML) approach akin to that of \cite{tamayo2016}. The purpose of this method is to look for correlations not found by either of the previous methods. Following the procedure of that study, we use an ML algorithm to predict the results of our simulations as a function of the features of our model (Table \ref{tab:davebot}). We use the \texttt{scikit-learn} \citep{pedregosa2011} implementation of the random forest algorithm \citep{brieman2001}. The random forest algorithm is a particularly powerful and flexible algorithm that fits an ensemble of decision trees on numerous randomized sub-samples of the data set and averages the predictions of the decision trees to produce an accurate, low-variance prediction. The random forest algorithm has a particular advantage for our purposes in that it can compute ``feature importances'' as a means to estimate how the algorithm weights various inputs when producing an output. An input with a high feature importance implies that the algorithm weights that feature more heavily when making a prediction. Feature importances, $\xi_i$, can hence be considered as a proxy for how much that feature correlates with the predicted variable (the simulation output). The feature importances are all normalized such that they sum to 1, \emph{i.e.}, $\sum \xi_i = 1$. We cast our ML problem in two forms. First, we consider the binary classification problem, in which we use a random forest classifier (RFC) to predict whether or not the simulation results in a snowball planet state, $\delta_{\text{snow}}$. Second, we consider the regression problem, in which we use a random forest regressor (RFR) to predict the area fraction of the planet covered in ice, $f_{\text{ice}}$, a continuous quantity that ranges from 0 to 1. In both cases, we fit the ML algorithms with the following procedure. We divide our data set, using 75\% of the data for our training set, in which we fit and calibrate our algorithms, and the remaining 25\% as the testing set used to estimate the performance of our fitted algorithms on unseen data.
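A minimal sketch of this split-train-test workflow (Python, with \texttt{scikit-learn}; the feature matrix and labels below are random placeholders with the shape of our problem---ten inputs and one binary output---rather than our actual simulation data):

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((4000, 10))            # stand-in for the orbit/spin inputs
y = rng.integers(0, 2, 4000)          # stand-in for delta_snow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
rfc = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rfc, X_tr, y_tr, cv=5).mean())  # k = 5 folds
rfc.fit(X_tr, y_tr)
print(rfc.score(X_te, y_te))          # held-out classification accuracy
print(rfc.feature_importances_)       # xi_i, normalized to sum to 1
\end{verbatim}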
We fit each algorithm, a process commonly referred to as ``training'', and use $k$-fold cross-validation with $k=5$ to tune the hyperparameters of our model using only the training set. After training the algorithms, we find that both the RFC and RFR algorithms generalize exceptionally well. For example, the RFC's predictions of $\delta_{\text{snow}}$ achieve a classification accuracy of $\sim97\%$ on the testing set. After training the models and verifying their accuracy, we extract the feature importances ($\xi_i$) for each algorithm as shown in Tables \ref{tab:daveresults1} and \ref{tab:daveresults2}. Note that in order to prevent the random forest regressor (RFR) from predicting negative values for $f_{\text{ice}}$, we instead use the value $\log_{10}{(f_{\text{ice}}+1)}$ as the model output.
\subsection{Initial orbital and obliquity conditions}
We model the climate of planet 2 in the dynamically evolving system, TSYS, from the previous study (Paper I), over a narrower range of rotational periods. This hypothetical system, which consists of a warm Neptune, an Earth-mass planet (planet 2), and a Jovian exterior to the HZ, allows us to test the effects of exo-Milankovitch cycles on habitability. This test system is chosen as an end-member scenario, i.e., the effect of orbital evolution on climate is maximized (without destabilizing the system). The initial orbital and spin properties are shown in Table \ref{tab:sys1table2nd}. As mentioned in Paper I, the warm Neptune has almost no dynamical effect on the rest of the system. To understand the effects of orbital evolution over a range of stellar fluxes, we leave the semi-major axis fixed at $a = 1.0031$ au and instead vary the luminosity of the star over the range $L_{\star} = 3.6 \times 10^{26}$ W to $L_{\star} = 3.95\times10^{26}$ W. This corresponds to an incident stellar flux range of $S = 1304.00$ W m$^{-2}$ to $S = 1395.88$ W m$^{-2}$.
\begin{table}[h]
\centering
\caption{{\bf Initial conditions for TSYS}}
\begin{tabular}{lccc}
\hline\hline \\ [-1.5ex]
Planet & 1 & 2 & 3 \\ [0.5ex]
\hline \\ [-1.5ex]
$m$ ($M_{\oplus}$) & 18.75 & 1 & 487.81 \\
$a$ (au) & 0.1292 & 1.0031 & 3.973 \\
$e$ & 0.237 & 0.001-0.4 & 0.313 \\
$i$ ($^{\circ}$) & 1.9894 & 0.001-35 & 0.02126 \\
$\varpi$ ($^{\circ}$) & 353.23 & 100.22 & 181.13 \\
$\Omega$ ($^{\circ}$) & 347.70 & 88.22 & 227.95 \\
$P_{rot}$ (days) & & 0.65,1,1.62 & \\
$\varepsilon$ ($^{\circ}$) & & 0-90 & \\
$\psi$ ($^{\circ}$) & & 281.78 & \\
\hline
\end{tabular}
\label{tab:sys1table2nd}
\end{table}
The planet Kepler-62 f, discussed in the previous study, requires some additional adjustments to the climate model because of its (cooler) location in the HZ and the different stellar spectrum. It is also interesting enough to warrant its own study, so we reserve a climate analysis of this planet for future work.
\section{Model Validation}
\label{sec:valid}
To validate the climate model, we adjust our input parameters to reproduce Earth-like values. We use the OLR parameters, $A$ and $B$, and heat diffusion coefficient $D$ from \cite{northcoakley1979}, and surface albedos for land, water, and ice that give good agreement with the data used in that paper (see Table \ref{tab:ebm_tab}).
\subsection{Comparison with Earth and LMDG}
\label{sec:validcomp}
Like \cite{spiegel2009}, we compare our vertical heat fluxes to the Earth Radiation Budget Experiment satellite data \citep{barkstrom1990}.
In Figure \ref{fig:monthf} we show the incoming flux (blue), the outgoing flux (red), and the difference, or net heating (orange), as a function of latitude for the Earth, using our climate model \texttt{POISE}. Our model compares well with the zonally- and monthly-averaged satellite data, though it is too simple to capture all of the variations. Our model also produces sharp jumps at high latitudes because of the sudden change in albedo at freezing temperatures. For the Earth, this sudden change is not seen because of a combination of geographic variations, darkening of snow and ice, clouds, etc., which are not captured in our model.
\begin{figure*}
\begin{centering}
\includegraphics[width=0.8\textwidth]{f2.pdf}
\caption{\label{fig:monthf} Monthly averaged vertical fluxes for the EBM (solid lines) and satellite data for Earth (dashed lines). Blue corresponds to incoming flux (equal to $(1-\alpha)S(\phi)$), red is the outgoing long-wave radiation (OLR), and orange is the difference (net heating). }
\end{centering}
\end{figure*}
Further, in Figures \ref{fig:lmdcompe1o1r1}-\ref{fig:lmdcompe1o1r2}, we compare \texttt{POISE} to the Generic LMD 3D Global Climate Model (LMDG) \citep{wordsworth2011,leconte2013,leconte2013b,charnay2013}, for rotation periods of 0.65 and 1.62 days, obliquities of 23.5$^{\circ}$ and 85$^{\circ}$, and eccentricities of $0.1$ and $0.3$ (eight GCM simulations in total). These initial orbital and rotational conditions sample a broad range of the conditions we explore further with the EBM. We use present Earth geography in the LMDG simulations, though in the EBM there is a fixed quantity of land at each latitude, so some difference in the models is attributable to geography. All LMDG simulations are started from an initial state corresponding to present-day Earth, with present-day topography, and run for 30 years (the typical timescale required for convergence).
\begin{figure}
\includegraphics[width=0.5\textwidth]{f3a.pdf}
\includegraphics[width=0.5\textwidth]{f3b.pdf}
\caption{\label{fig:lmdcompe1o1r1} Comparison between our EBM (solid lines) and the LMDG 3D GCM (dashed lines), for $\varepsilon = 23.5^{\circ}$, $P_{rot} = 0.65$ day, and $e = 0.1$ (left two columns) and $\varepsilon = 23.5^{\circ}$, $P_{rot} = 1.62$ day, and $e = 0.1$ (right two columns). The surface temperature, OLR, and albedo compare reasonably well to the zonally-averaged quantities from LMDG considering the differences in geography and missing physics (\emph{e.g.} clouds and Hadley cells). The meridional flux in the EBM peaks at $\sim 7$ PW, a bit higher than Earth's $\sim 6$ PW, while LMDG's peak is slightly lower, at $\sim 5$ PW. For $P_{rot} = 1.62$ day, despite the slower rotation, the meridional flux is very similar to that of the $P_{rot} = 0.65$ day rotator, which suggests that parameterizations of the heat flux with rotation rate $\Omega$ \citep[$D \propto \Omega^{-2}$; see][]{williamskasting97,spiegel2008} probably overestimate the latitudinal heat flow.}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{f4a.pdf}
\includegraphics[width=0.5\textwidth]{f4b.pdf}
\caption{\label{fig:lmdcompe1o1r2} Same as Figure \ref{fig:lmdcompe1o1r1}, but for $\varepsilon = 85^{\circ}$. The left two columns again correspond to $P_{rot} = 0.65$ days, the right two to $P_{rot}=1.62$ days. \texttt{POISE} compares worse with LMDG in these high obliquity cases.
\texttt{POISE} captures the general patterns but underestimates the surface temperature at mid-latitudes and overestimates the OLR at the equator and south pole. At high obliquity, the geography may play a larger role than at low obliquity, due to the extreme seasonality---land and ocean have different heat capacities and so will heat on different time-scales, possibly explaining the discrepancy between the models.}
\end{figure}
In Figures \ref{fig:lmdcompe1o1r1}-\ref{fig:lmdcompe1o1r2} we plot the annually-averaged surface temperature, OLR, albedo, and meridional flux as a function of latitude for the \texttt{POISE} and LMDG simulations. With a climate model as simple as an EBM, we cannot replicate all of the variations with latitude in these quantities found by LMDG. Still, \texttt{POISE} captures LMDG's general patterns in surface temperature and heat fluxes. It captures the surface temperature better in the low obliquity cases than in the high obliquity cases, though, oddly, the meridional flux in \texttt{POISE} matches LMDG more closely in the high obliquity cases. A primary source of error in the high obliquity cases is that the EBM simply does not capture all of the physical processes that occur during the planet's extreme summers. During the summer, nearly an entire hemisphere experiences sunlight for months on end, leading to extremely high temperatures and strong circulation. Ultimately, the simple parameterization of the OLR ($I = A + BT$) probably breaks down under such conditions, and convection should lead to cloud formation and a change in albedo, similar to the effect on synchronously rotating planets \citep{joshi2003,edson2011,edson2012,yang2013}.
\subsection{Reproducing Milankovitch Cycles}
\label{sec:repmilank}
For the purpose of this study, we tune the ice deposition rate so that the model can reproduce the Earth's ice age cycles at $\sim40,000$ years and $\sim100,000$ years over a 10 million year simulation. To reproduce the effect of Earth's moon on Earth's obliquity, we force the precession rate to be $50.290966''$ year$^{-1}$ \citep{laskar1993}. This choice does not perfectly match the dynamics of the Earth-moon-sun system, but it is close enough to replicate the physics of the ice age cycles. The results of this tuning are shown in Figure \ref{fig:huybers} (see \cite{huybers2008}, Figure 4, for comparison), for a 200,000 year window. The ice sheets in the northern hemisphere high latitude region grow and retreat as the obliquity, eccentricity (not shown), and climate-precession-parameter, or CPP ($e\sin{(\varpi+\psi)}$), vary. The ice deposition rate is less than that used by \cite{huybers2008}, and so the ice accumulation per year is slightly smaller. The ice ablation occurs primarily at the ice edge (around latitude $60^{\circ}$) and is slightly larger than in \cite{huybers2008}, but is qualitatively similar. There are a number of differences between our reproduction of Milankovitch cycles and those of \cite{huybers2008}. Most notably, our ice sheets tend to persist for longer periods of time, taking up to three obliquity cycles to fully retreat. We also require a lower ice deposition (snowing) rate than that of \cite{huybers2008} in order to ensure a response from the ice sheets to the orbital forcing. We attribute these differences primarily to the difference in energy balance models used for the atmosphere.
For example, our model has a single-layer atmosphere with a parameterization of the OLR tuned to Earth, while \cite{huybers2008} used a multi-layer atmosphere with a simple radiative transfer scheme. Further, while the model of \cite{huybers2008} contained only land, our model has both land and water, which cover a fixed fraction of the surface. The primary effect of having an ocean in this model is to change the effective heat capacity of the surface. This dampens the seasonal cycle and affects the ice sheet growth and retreat. Thus, our seasonal cycle is somewhat muted compared to theirs, and our ice sheets do not grow and retreat as dramatically on orbital time scales. Ultimately, our ice age cycles are more similar to the longer late-Pleistocene cycles than to the $\sim 40,000$ year cycles of the early Pleistocene. Even though we cannot perfectly match the results of \cite{huybers2008}, we are comfortable with these results for a number of reasons. First, both models make approximations to a number of physical processes and thus have numerous parameters that have to be tuned to reproduce the desired behavior. Second, both models are missing boundary conditions based on the continent distribution of the Earth---continental edges can limit the equator-ward advance of ice sheets or alter the speed of their flow through calving of ice shelves. Finally, because the purpose of this study is to understand the response of ice sheets and climate to orbital variations, it is enough to merely ensure that the ice sheets respond in a way qualitatively similar to the Earth's without being overly sensitive (\emph{i.e.}, resulting in ice-free or snowball conditions with an insolation value of the solar constant, $\sim 1370$ W m$^{-2}$, and an OLR prescription similar to Earth's). To investigate the importance of the bedrock depression/rebound component of the model, we compare this Earth case to one with $\partial H/\partial t$ (Eqn \ref{eqn:brock}) set to zero. Figure \ref{fig:icecomp} shows the ice sheet height, $h+H$, and surface mass density, $\Sigma_i$, with the bedrock component (upper panels), without it (middle panels), and the difference between the two (lower panels). The ice sheets reach higher altitude (by several hundred meters) without bedrock depression, but the ice mass is decreased by $\sim 10^5$ kg m$^{-2}$. The effect of isostasy is thus to confine the ice sheets while allowing them to grow larger. While this subtly increases the thermal inertia, it ultimately makes a minor difference in the prevalence of snowball states in our results (Section \ref{sec:results}).
\begin{figure*}
\begin{centering}
\includegraphics[width=0.8\textwidth]{f5.pdf}
\caption{\label{fig:huybers} Milankovitch cycles on Earth, in the northern hemisphere. The panels are arranged to compare with Figure 4 of \cite{huybers2008}. From top to bottom, we have: CPP $= e \sin{(\varpi+\psi)}$, obliquity, ice sheet height (m), annually averaged surface temperature ($^{\circ}$C), annual ice accumulation rate (m yr$^{-1}$), and annual ice ablation rate (m yr$^{-1}$). }
\end{centering}
\end{figure*}
\begin{figure*}
\begin{centering}
\includegraphics[width=\textwidth]{f6.pdf}
\caption{\label{fig:icecomp} Ice sheet evolution for Earth with (upper panels) and without (middle panels) isostatic depression and rebound of the bedrock. Also shown is the difference (lower panels). The left panels show the ice sheet height/altitude; the right panels show the surface density of the ice.
Without the bedrock model, the ice grows taller (in elevation), but there is less ice overall because the surface does not sink under the weight of the ice.}
\end{centering}
\end{figure*}
\section{Results}
\label{sec:results}
\subsection{Static cases}
First, we identify the regimes in which ice sheets are able to form. The presence and distribution of permanent ice on land will depend on the stellar flux received by the planet and the planet's obliquity. In Figure \ref{fig:trans} we show how the ice-covered fraction, $f_{\text{ice}}$, depends on incoming stellar flux at two obliquities ($\varepsilon = 23.5^{\circ}$ and $\varepsilon = 50^{\circ}$). Note that the initial ice coverage in each simulation is determined by the initial temperature distribution (Eqn. \ref{eqn:inittemp}), and is very different from the final result in most cases. The ice coverage includes both land and ocean grid-points. The stellar flux is normalized by Earth's value, $S_0 = 1367.5$ W m$^{-2}$. No orbital evolution occurs in these simulations; however, the spin axis is allowed to precess at a rate set by the stellar torque (see Paper I). Two quantities are displayed in these plots: the fractional area of the planet that is permanently ice covered (\emph{i.e.} ice covered year-round) and the total ice mass at the end of the simulation. At the lowest stellar flux values, the planet is globally ice covered ($f_{\text{ice}}=1$), but the ice sheet mass remains at zero. This is because, in our model, precipitation is shut off when the oceans are frozen over, and in these coldest cases, the oceans freeze over during the spin-up phase of the simulation; thus, no ice accumulates on land. In the $\varepsilon = 50^{\circ}$ simulations, the coldest cases are actually not ice covered year-round. Since the oceans have frozen before ice sheets can grow on land, and the thermal inertia of the land is low (compared to the oceans and the ice sheets), the temperature over land actually rises above freezing during the summer months. Thus, the fact that $f_{\text{ice}}< 1$ is probably a side effect of our modeling choices---these cases really are in a snowball state. At higher stellar flux values, it takes hundreds to thousands of years for the planet to cool into the snowball state; thus ice sheets are able to grow on land. Because it takes much more energy in the model to melt a thick layer of ice than to simply heat the land, these cases remain fully ice covered year-round. All points within the gray-shaded region entered a snowball state in $<200$ kyr, after which all ice sheets appear to be stable under static orbital/obliquity conditions. The light-blue region corresponds to our ``transition region'', wherein stable ice sheets form at some latitudes and persist year-round. In the dark-blue region, ice may form seasonally, but no permanent ice sheets appear. Note that in the $\varepsilon = 23.5^{\circ}$ cases, the ice covered area is not necessarily equal to zero because the oceans remain frozen at the poles year-round, even though no ice sheets grow from year to year.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{f7a.pdf}
\includegraphics[width=0.5\textwidth]{f7b.pdf}
\caption{\label{fig:trans} The fractional ice cover, $f_{\text{ice}}$, for static orbital/obliquity conditions as a function of stellar flux, $S/S_0$, where $S_0 = 1367.5$ W m$^{-2}$, for $\varepsilon = 23.5^{\circ}$ (left) and $\varepsilon = 50^{\circ}$ (right). The ice covered area includes both land and ocean grid-points.
The gray shaded area represents snowball states (the ocean surface is permanently and completely ice-covered), dark-blue represents ice-free (no year-round ice) states, and light-blue is the ``transition region'', where the ocean is not totally ice-covered and ice sheets form on land. For reference, the Antarctic ice sheet is estimated to have a volume of $27 \times 10^6$ km$^{3}$, corresponding to an ice mass on the order of $10^{19}$ kg \citep{fretwell2013}.}
\end{figure*}
The higher obliquity case remains clement (not in a snowball state) at lower stellar flux, and thus higher semi-major axis, than the low obliquity case, consistent with past results \citep{spiegel2009,armstrong2014}. The transition region is also narrower in this case, and the boundary between the transition region and the ice sheet free region (light- and dark-blue) is sharper, consistent with \cite{rose2017}, which demonstrated that ice (as represented by $T<0^{\circ}$ C on land or ocean) is less stable on higher obliquity planets. Interestingly, even though the obliquity is less than $55^{\circ}$ (the approximate value at which the annual insolation at the poles begins to exceed that of the equator), the ice sheets in the transition region form along the \emph{equator}, not the poles. This is a result of the temperature dependence of ice ablation---when the atmosphere is warmer, the ice melts faster (see Equation (\ref{eqn:ablation})). Even though the equatorial latitudes receive more sunlight over the course of an orbit, the summers are much more intense at the poles. High latitude summers are then much warmer than conditions ever get at the equator. So while the snowy season at the poles may be colder and longer, the intense summers are more than enough to melt the ice accumulated during winters, whereas at the equator the melting seasons are not hot enough or long enough to fully melt the ice.
\subsection{Dynamically evolving cases}
Next, we vary the initial eccentricity, inclination, rotation rate, and obliquity of planet 2 (Earth-mass) in our test system. Figures \ref{fig:lowoblmidmap}-\ref{fig:highoblmidfastmap} show the fractional area of the planet that is ice covered for several slices of this parameter space at an incident stellar flux of $S = 1332.27$ W m$^{-2}$, or $S/S_0 = 0.974$. This stellar flux puts the planet right at the boundary between the snowball state and the transition zone for a planet with low eccentricity and $23.5^{\circ}$ obliquity (Figure \ref{fig:trans}, left panel), and places the $\varepsilon_0 = 50^{\circ}$ simulations in the ice-free regime. The obliquity amplitude ($\Delta \varepsilon$) is shown in each panel as contours (see Paper I). The blue-white color scale in each figure shows the fraction, $f_{\text{ice}}$, of the total area of the planet that is permanently ice-covered, where ``permanent'' means covered year-round as in the previous section. Thus, some cases that have $f_{\text{ice}} = 0$ do have seasonal ice formation. The left panels show the climate conditions assuming a static orbit and obliquity fixed at the initial values. Here, inclination has no direct effect on the insolation or climate, so $f_{\text{ice}}$ depends only on the eccentricity ($S\propto(1-e^2)^{-1/2}$). The planet is in a snowball state ($f_{\text{ice}}=1$) at $e=0$, but as $e$ is increased, $f_{\text{ice}}$ decreases. The stellar torque on the equatorial bulge is included and results in a constant axial precession rate, but this has minimal impact on the total ice coverage.
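To give a sense of scale for this eccentricity dependence: since the orbit-averaged flux scales as $(1-e^2)^{-1/2}$, an eccentricity of $e=0.3$ raises the annual mean insolation by a factor of $(1-0.09)^{-1/2} \approx 1.05$ (about 5\%) relative to a circular orbit with the same semi-major axis, while $e=0.4$ raises it by about 9\%---consistent with the decline of $f_{\text{ice}}$ with increasing $e$ seen in the left panels.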
In the middle panels, the orbit and obliquity are also static, but they are fixed at the mean values from the 2 Myr simulation. The structure of this phase space is very different from that of the static initial conditions (left panels). For the cases with $\varepsilon_0 = 23.5^{\circ}$ (Figures \ref{fig:lowoblmidmap}, \ref{fig:lowoblmidslowmap}, and \ref{fig:lowoblmidfastmap}), using the mean properties tends to decrease the portion of phase space with $f_{\text{ice}}=1$; however, for the $\varepsilon_0 = 50^{\circ}$ cases (Figures \ref{fig:highoblmidmap}, \ref{fig:highoblmidslowmap}, and \ref{fig:highoblmidfastmap}), the mean properties produce snowball states where none existed before (at the initial values). Hence, using the mean orbital/obliquity properties in a climate simulation produces very different results from using the initial (or, perhaps, observed) properties. Finally, the right panel in each figure shows $f_{\text{ice}}$ for the full 2 Myr simulation with evolving orbits and obliquities. Now, the ice coverage increases almost universally, and snowball states are much more frequent than under static conditions. There are some configurations that had $f_{\text{ice}}=1$ under static conditions but are not completely ice covered under evolving conditions (at low inclination and low eccentricity, for example), but in general, the evolution tends to encourage the snowball instability, except at higher $e_0$. Interestingly, there are several blue ``islands'' (where $f_{\text{ice}} < 1$) that are completely surrounded by snowball states in the dynamically evolving cases. There is a complex interplay between the obliquity and eccentricity that we will discuss in more detail in Section \ref{sec:icestab}.
\begin{figure*}
\centering
\includegraphics[width=0.4\textwidth]{f8.pdf}
\caption{\label{fig:orbitmean} Mean eccentricity values as a function of initial inclination and eccentricity. These values are used as input to the climate model for the middle panels of Figures \ref{fig:lowoblmidmap}-\ref{fig:highoblmidfastmap}. There is a single simulation in the upper right corner for which the orbital model fails (the eccentricity exceeds $\sim 0.66$)---we model the system and climate up until the code halts, but this point does not factor heavily into our analysis.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.4\textwidth]{f9a.pdf}
\includegraphics[width=0.4\textwidth]{f9b.pdf}
\caption{\label{fig:oblmean} Mean obliquity values as a function of initial inclination and eccentricity for $P_{rot}=1$ day at $\varepsilon_0 = 23.5^{\circ}$ (left) and $\varepsilon_0=50^{\circ}$ (right). These values are used in the climate model for the middle panels of Figures \ref{fig:lowoblmidmap} and \ref{fig:highoblmidmap}. The high obliquity ``arc'' through the center of each panel is the result of a secular spin-orbit resonance (see Paper I). Corresponding plots for $P_{rot}=1.62$ days and $P_{rot}=0.65$ days (that is, the conditions used in the middle panels of Figures \ref{fig:lowoblmidslowmap}-\ref{fig:highoblmidfastmap}) appear very similar in structure.
The range of mean obliquity values is smaller ($15^{\circ}\lesssim\langle \varepsilon \rangle\lesssim50^{\circ}$) for $P_{rot}=1.62$ days, while it is slightly increased ($15^{\circ}\lesssim\langle \varepsilon \rangle\lesssim65^{\circ}$) for $P_{rot}=0.65$.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f10.pdf} \caption{\label{fig:lowoblmidmap} Climate states as a function of initial eccentricity and inclination, for $P_{rot} = 1$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$, with a stellar constant of $S = 1332.27$ W m$^{-2}$. Each panel shows the fraction of the surface area that is permanently ice-covered over the final orbit (blue color-scale) and contours of $\Delta \varepsilon$ (black lines), under three different conditions: left, static orbit and obliquity at the initial values; middle, static orbit and obliquity at the mean values from the simulation; right, dynamically evolving orbit and obliquity.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f11.pdf} \caption{\label{fig:highoblmidmap} Same as Figure \ref{fig:lowoblmidmap} but for $P_{rot} = 1$ day and initial obliquity $\varepsilon_0 = 50^{\circ}$. } \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f12.pdf} \caption{\label{fig:lowoblmidslowmap}Same as Figure \ref{fig:lowoblmidmap} but for $P_{rot} = 1.62$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f13.pdf} \caption{\label{fig:highoblmidslowmap}Same as Figure \ref{fig:lowoblmidmap} but for $P_{rot} = 1.62$ day and initial obliquity $\varepsilon_0 = 50^{\circ}$.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f14.pdf} \caption{\label{fig:lowoblmidfastmap}Same as Figure \ref{fig:lowoblmidmap} but for $P_{rot} = 0.65$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f15.pdf} \caption{\label{fig:highoblmidfastmap}Same as Figure \ref{fig:lowoblmidmap} but for $P_{rot} = 0.65$ day and initial obliquity $\varepsilon_0 = 50^{\circ}$.} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{f16.pdf} \caption{\label{fig:oblvsrotmap}Same as Figure \ref{fig:lowoblmidmap} but varying $P_{rot}$ and $\varepsilon$ with $e_0=0.2$ and $i_0 = 20^{\circ}$. } \end{figure*} Figure \ref{fig:oblvsrotmap} illustrates the effects of rotation rate and initial obliquity. The ice cover is shown in the same style as Figures \ref{fig:lowoblmidmap}-\ref{fig:highoblmidfastmap}, but with $e$ and $i$ fixed, and $\varepsilon_0$ and $P_{\text{rot}}$ varied instead. Under static initial eccentricity and obliquity (left), low obliquity cases form some permanent ice, while high obliquity cases form none. From $\varepsilon \sim 33^{\circ}-40^{\circ}$, the planet enters a snowball state, because the ice edge is unstable at these obliquities (see Section \ref{sec:icestab}), but these cases lack the warming effect that comes with even higher obliquity. The static mean conditions do not enter a snowball state anywhere in this parameter space. With a variable orbit and obliquity, snowball states occur throughout much of this space. Note also that the obliquity variation in some regions is extremely large in amplitude and sometimes chaotic (see Paper I). 
\begin{figure*}
\includegraphics[width=\textwidth]{f17.pdf}
\caption{\label{fig:caseLowSlow_1} Evolution of climate and orbit for a case at initial values: $S = 1332.27$ W m$^{-2}$, $e_0 = 0.16725$, $i_0 = 14.54^{\circ}$, $\varepsilon_0 = 23.5^{\circ}$, and $P_{rot} = 1.62$ day (inside the horizontal blue strip near the center of Figure \ref{fig:lowoblmidslowmap}, right panel). The climate-obliquity-precession-parameter is defined as COPP $= e \sin{\varepsilon} \sin{(\varpi+\psi)}$ and represents the asymmetry between the northern and southern hemispheres (see text).}
\end{figure*}
Figure \ref{fig:caseLowSlow_1} shows the climate and orbit evolution for a point in the parameter space of Figure \ref{fig:lowoblmidslowmap} ($\varepsilon = 23.5^{\circ}$ and $P_{\text{rot}} = 1.62$ day). In this figure we have the surface temperature, planetary albedo, ice sheet height, bedrock height, and insolation, all averaged over an orbit or ``year'', as a function of latitude and time. Also shown are the three parameters that affect the insolation: obliquity, eccentricity, and the ``climate-obliquity-precession-parameter'' (COPP), which is defined as:
\begin{equation}
\text{COPP} = e \sin{\varepsilon} \sin{(\varpi+\psi)},
\label{eqn:coppdef}
\end{equation}
where, again, $\varpi+\psi$ represents the instantaneous angle between periastron and the planet's position at its northern spring equinox. This is essentially the same as the commonly used ``climate precession parameter'' or CPP, but additionally takes into account the effect of obliquity variations (which are neglected in the CPP because Earth's are very small). COPP can be thought of as a measure of the asymmetry between the northern and southern hemispheres, and so varies with the angle $\varpi+\psi$, modulated by the eccentricity and obliquity. When COPP $>0$, the northern hemisphere receives more stellar flux than the southern; vice-versa for COPP $<0$. Although the climate in Figure \ref{fig:caseLowSlow_1} approaches very near to a snowball state, the planet remains clement throughout this 2 Myr evolution. Ice sheets grow and recede at both poles rather dramatically, from almost nothing to nearly 4 km in height (in some regions) and back. This oscillation is a result of a nearly 200 W m$^{-2}$ swing in the annual insolation over $\sim 50,000$ years, due to the combined effects of the obliquity and eccentricity variations. The envelope of the obliquity oscillation is imprinted on the latitude of the ice edge, though the primary driver of growth and retreat is the change in eccentricity. The ice edge progresses into the mid-latitudes during periods when the obliquity oscillation is lowest in amplitude.
\begin{figure*}
\includegraphics[width=\textwidth]{f18.pdf}
\caption{\label{fig:caseLowSlow_2} Same as Figure \ref{fig:caseLowSlow_1} but for $e_0 = 0.16725$, $i_0 = 16.04^{\circ}$, $\varepsilon_0 = 23.5^{\circ}$, and $P_{rot} = 1.62$ day (slightly higher inclination than the case in that figure). A snowball state occurs at $t\sim750,000$ years---the temperature drops globally, the albedo approaches that of ice everywhere, and ice sheets no longer grow (precipitation is shut off artificially) and instead just gradually flatten.}
\end{figure*}
In Figure \ref{fig:caseLowSlow_2}, we show the same evolution for a case immediately adjacent to that in Figure \ref{fig:caseLowSlow_1}.
The eccentricity and obliquity variations are very similar to those of the previous case; however, the obliquity peaks at a slightly higher value ($\sim 35^{\circ}$, compared to $\sim 30^{\circ}$ in the previous case). The ice sheets grow and retreat in a similar fashion until the obliquity approaches its highest value, at which point the planet abruptly enters a snowball state. The appearance of the large ice cap instability (LICI) is somewhat counter to expectation here---as we have shown before (and numerous other studies have found), high obliquity tends to grant a planet additional warmth at low stellar flux. The analytic solution to the annual EBM from \cite{rose2017} provides an explanation for how the instability occurs; see Section \ref{sec:icestab}. In addition to snowball states, we also observe some very high temperatures at high-obliquity, high-eccentricity times. For a case with $\varepsilon_0 = 23.5^{\circ}$, $P_{rot} = 1$ day, $e_0 = 0.3$, and $i_0 = 17.5^{\circ}$, which is inside the secular resonance in Figure \ref{fig:lowoblmidmap}, the obliquity reaches $\sim 80^{\circ}$ while the eccentricity is $\sim 0.4$. Figure \ref{fig:hotcasetemp} shows the orbital/obliquity evolution and the resulting average, minimum, and maximum surface temperatures (over an orbital period). At the highest obliquity times, the north pole of the planet reaches $140^{\circ}$ C. Such strong heating should probably result in strong convection, which would increase the albedo (due to cloud formation) and cause increased horizontal heat flow, but our simple EBM does not model such effects (see Section \ref{sec:validcomp}). Thus this temperature is improbable, except perhaps over dry continental interiors. It is beyond the scope of this study to comprehensively model this scenario with a GCM, but it is worth future investigation.
\begin{figure*}
\includegraphics[width=\textwidth]{f19.pdf}
\caption{\label{fig:hotcasetemp} Evolution of the orbit, obliquity and maximum surface temperature for a case with $P_{rot} =1$ day, $\varepsilon_0 = 23.5^{\circ}$, $e_0 = 0.3$, and $i_0 = 17.5^{\circ}$ over a 250 kyr period. The upper left panel is the maximum surface temperature over an orbit (averaged over land/ocean); lower left, obliquity; upper right, eccentricity; lower right, COPP. The obliquity reaches large values because of the secular spin-orbit resonance (see Paper I). The highest obliquity times correspond to high eccentricity times. As a result, the insolation at high latitudes is extremely high during summer and the surface temperature exceeds the boiling point of water. This effect depends also on the angle $\varpi+\psi$ (the angle between the equinox and pericenter) and is responsible for the additional variation in maximum temperature between these warm periods.}
\end{figure*}
\subsection{Examining ice stability}
\label{sec:icestab}
In the previous section, we saw that the ice caps often become unstable as a result of the orbital/obliquity evolution. Though we highlighted the snowball instability (or LICI), the small ice cap instability (SICI) can also be observed in the rapid retreat of the ice sheets. We can use the analytical solution from \cite{rose2017} (Section \ref{sec:rosemodel}) to plot the ice edge latitude as a function of the dimensionless parameter, $q$ (Figure \ref{fig:caseLowSlow_icestab}). As we discussed, the slope of this curve indicates whether the equilibrium ice line is stable or unstable.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{f20a.pdf}
\includegraphics[width=0.5\textwidth]{f20b.pdf}
\caption{\label{fig:caseLowSlow_icestab}Ice edge latitude as a function of the parameter $q$ (see Section \ref{sec:climatemodel}) from the analytical annual energy balance model \citep{rose2017}, for the cases shown in Figures \ref{fig:caseLowSlow_1} (left) and \ref{fig:caseLowSlow_2} (right). The solution is a function of obliquity: light blue corresponds to the minimum obliquity in the simulation, red to the maximum obliquity, and the gray-shaded area is the range explored by the planet. Vertical dashed lines indicate the value of $q$, which is a function of eccentricity, at the corresponding times. In the left panel, markers show the ice edge latitude for northern and southern land and ocean at the time of maximum obliquity, at the coeval value of $q$, which depends on the eccentricity. Triangles and circles represent land and ocean, respectively, while closed and open markers represent northern and southern hemispheres, respectively. The right panel also shows these ice edge latitudes and the analytical solution at 500 years before the planet becomes fully glaciated (dark blue).}
\end{figure*}
Figure \ref{fig:caseLowSlow_icestab} shows the ice edge latitude as a function of the parameter $q$, from the \cite{rose2017} solution, for the two cases discussed above (see Section \ref{sec:rosemodel}). The dimensionless parameter $q$ describes the combined effects of insolation and greenhouse warming. The panels in Figure \ref{fig:caseLowSlow_icestab} show the equilibrium ice edge latitude at different obliquities---the light blue line at each case's minimum obliquity, and the red line at its highest obliquity. The gray shaded area indicates the full range of solutions the simulation explores. When the slope of the line is positive or zero (as in the upper and lower branches), the ice edge is in a stable equilibrium (the annual solution is an equilibrium model). When the slope is negative or undefined, the ice edge is unstable, giving rise to the SICI at the highest latitudes and the LICI at mid to low latitudes. When the ice edge is at 90$^{\circ}$, there is no ice cap; when it is at 0$^{\circ}$, the planet is in a snowball state. The left-hand panel corresponds to the case that does not experience the LICI (Figure \ref{fig:caseLowSlow_1}). In this case, there is always a stable branch for the ice edge at all obliquities. The points shown in the plot are the actual ice edge locations from our full seasonal model, for both the land and ocean in each hemisphere, at the time of the highest obliquity. The vertical dashed lines indicate the average annual value of $q$ (which depends on the eccentricity) at each obliquity extreme. These points lag the analytic ice edge solution (which represents the climate in equilibrium) in time, and depend on the seasonality and the nature of the ice sheet model, and so do not fall directly on the analytical solution at most times. Nevertheless, the points stay very near to the analytical solution, and give a sense of why the instability is avoided. In this case, the instability never occurs because the ice edges (land and ocean in each hemisphere) remain on a stable branch of the analytical solution at mid-latitudes (or the ice retreats entirely).
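Before turning to the unstable case, we note that the slope criterion is easy to state algorithmically. The toy Python sketch below classifies the equilibria of a tabulated $q(x_s)$ curve by the sign of $dq/dx_s$; the curve here is a synthetic stand-in, not the actual \cite{rose2017} package used for our figures, and whether an unstable edge then advances or retreats depends on the sign of the insolation perturbation, which we quantify below.
\begin{verbatim}
import numpy as np

# Synthetic stand-in for the equilibrium curve q(x_s), where x_s is
# the ice edge position; a real analysis would tabulate the Rose et
# al. (2017) solution at the current obliquity.
x_s = np.linspace(0.01, 0.99, 99)
q = 1.0 - 0.5 * x_s + 0.3 * np.sin(3.0 * x_s)
dq_dxs = np.gradient(q, x_s)          # finite-difference slope

def classify_equilibria(q_true):
    """Find the ice edge equilibria q(x_s) = q_true and label each
    one stable (dq/dx_s >= 0) or unstable (dq/dx_s < 0)."""
    crossings = np.nonzero(np.diff(np.sign(q - q_true)))[0]
    return [(x_s[i], "stable" if dq_dxs[i] >= 0 else "unstable")
            for i in crossings]
\end{verbatim}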
In the right-hand panel, we see the same quantities plotted for the second case (Figure \ref{fig:caseLowSlow_2}), which experiences the LICI. We can see that at the highest obliquity (red curve), there is no stable ice edge between 0$^{\circ}$ and 90$^{\circ}$. We have additionally plotted the analytical solution $\sim 500$ years before the planet has fully entered the snowball state. We can see that the ice edges in each hemisphere are precariously perched upon a branch of the solution where the slope is becoming undefined. At this point, the ice must either retreat entirely or expand to the equator. Because this occurs near a minimum in global insolation (the eccentricity is low), and the ice sheets have high thermal inertia, the snowball state is more easily reached. This demonstrates the susceptibility of planets with large orbital/obliquity variations to the snowball instability. Essentially, if planets proceed to a high obliquity and low eccentricity state with ice sheets extending to mid-latitudes, the ice edge becomes unstable and the entire planet quickly freezes. For the climate parameters we use here, this instability occurs when the obliquity reaches $\sim 35^{\circ}$. These climate parameters ($a_0, A, B$, and $D$) are chosen to reproduce Earth's atmosphere; however, a planet with different atmospheric properties will respond differently to this obliquity oscillation. For some types of atmospheres, the instability will occur at a different obliquity; for others, the instability may not occur at all \citep[for a detailed exploration of the climate parameters, see][]{rose2017}.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{f21a.pdf}
\includegraphics[width=0.5\textwidth]{f21b.pdf}
\caption{\label{fig:icestability1} The quantities $dq/dx_s$ and $\Delta q$, which are related to the stability of ice caps in the annual EBM (see text), for a case with $P_{rot} =1.62$ day, $\varepsilon_0 = 23.5^{\circ}$, $e_0 = 0.167$, and $i_0 = 14.54^{\circ}$ (left) and $i_0 = 16.04^{\circ}$ (right). The quantities $dq/dx_s$ and $\Delta q$ are plotted as a function of time for the northern ice sheet on land (red), the southern ice sheet (orange), the northern sea ice (dark blue), and the southern sea ice (light blue). Negative values of $dq/dx_s$ indicate the ice cap is unstable in the annual model (but not necessarily in our seasonal model). Negative values of $\Delta q$ indicate that the average insolation is below that required to maintain the ice edge at its current latitude, suggesting that the ice should grow. In the left-hand case, the ice cap is stable over the entire simulation. In the right-hand case, $dq/dx_s$ periodically dips below zero for the ocean in both hemispheres, but the snowball instability is not triggered until $dq/dx_s<0$ for land.}
\end{figure*}
Figure \ref{fig:icestability1} shows two parameters that can be used to analyze the ice edge stability, $dq/dx_s$ and $\Delta q$, for a clement (\emph{i.e.} non-snowball) case with $P_{rot} = 1.62$ day and $\varepsilon_0 = 23.5^{\circ}$. Both quantities are calculated at the ice edge latitude for northern and southern land and ocean, for a total of four ice edges. The ``perturbation'', $\Delta q$, is
\begin{equation}
\Delta q = q_{\text{true}} - q_{\text{equil}},
\end{equation}
where $q_{\text{true}}$ is the ``true'' value of $q$, calculated from the stellar flux and the eccentricity at that instant in time, and $q_{\text{equil}}$ is calculated from the analytical solution, at each ice edge and the current obliquity.
Thus, it is when both $dq/dx_s$ and $\Delta q$ are negative that we would expect the snowball states to occur---this corresponds to the third quadrant in the right panel of the figure. Both $dq/dx_s$ and $q_{\text{equil}}$ are calculated with the Python package developed in \cite{rose2017}; see Section \ref{sec:rosemodel}. As described previously, the ice caps will become unstable any time $dq/dx_s < 0$. Whether the caps collapse to the poles or grow to the equator depends on the direction of the perturbation, $\Delta q$. Figure \ref{fig:icestability1} (left panels) shows a case in which the ice edges are truly stable (except in the earliest phase, when the ice sheets are growing): $dq/dx_s > 0$ over the entire simulation. The same quantities are shown in Figure \ref{fig:icestability1} (right panels) for an adjacent case which undergoes the snowball instability. In this case, $dq/dx_s$ becomes negative several times for the sea ice in both hemispheres, and $\Delta q$ is negative during some of these excursions. The ice edges do not grow immediately to the poles, however. This may be due to the fact that the model is not in equilibrium, but since the sea ice is treated as a thin veneer that melts instantly when $T>-2^{\circ}$ C, the response time of the oceans to changes in insolation should be relatively short. \cite{rose2017} shows that the seasonal model does deviate from the analytical solution; this is probably the reason the instability does not occur during those times. Careful inspection of the upper right panel in Figure \ref{fig:icestability1} shows that it is actually the northern ice sheet (red curve) that leads the way into the snowball state, not the sea ice in either hemisphere. It is interesting that this happens so quickly after $dq/dx_s$ becomes negative for this ice sheet, when the instability did not occur during previous excursions below zero. It is possibly a result of hysteresis: one may note that $\Delta q$ at the northern ice edge was fairly large and positive during the first three eccentricity cycles. During the fourth ($\sim 220,000$ years), however, $\Delta q$ barely exceeds zero before $dq/dx_s$ becomes negative. In other words, the ice sheet receives strong heating during all of the previous three eccentricity maxima, but very weak heating during the last, which leaves it poised, so to speak, to continue growing the next time $dq/dx_s< 0$. The analytical theory does not always provide a simple explanation, as it does for the case shown in Figure \ref{fig:icestability1}. Figure \ref{fig:icestability2} shows another nearby case that undergoes the snowball instability. For most of the simulation, whenever $dq/dx_s<0$, $\Delta q$ is positive. At these times the sea ice usually disappears entirely (gaps in the blue curves, left panels). The occurrence of a snowball state at $\sim 750$ kyr may be a result of hysteresis again---$\Delta q$ does undergo a negative period shortly prior to the snowball state, but this period does not appear significantly different from the cycles before it.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{f22.pdf}
\caption{\label{fig:icestability2} Same as Figure \ref{fig:icestability1}, but for $P_{rot} =1.62$ day, $\varepsilon_0 = 23.5^{\circ}$, $e_0 = 0.167$, and $i_0 = 18.96^{\circ}$. This case enters a snowball state at $\sim750,000$ years. The northern and southern sea ice caps melt completely numerous times prior to the instability at $\sim760$ kyr---shown as gaps in the blue curves.
Eccentricity and obliquity are high during these times.}
\end{figure*}
\subsection{Relative importance of obliquity, eccentricity and COPP}
With orbital and obliquity cycles as large as those of our test planet, the periodicity of the ice is plainly visible. Still, it is interesting to perform a periodogram analysis to understand the relative importance of the three insolation parameters: obliquity, eccentricity, and COPP. We calculate periodograms for each of these variables, for the ice sheet heights at $65^{\circ}$ north and south, and for the total global ice mass. These are calculated using the periodogram function in the \texttt{SciPy} package for Python, with a Bartlett window function to produce a clean power spectrum \citep{jones2001}. We first perform a periodogram analysis on a static, but eccentric, case. Under our ``static'' conditions, the orbit and obliquity do not change, but we can still allow the spin axis to precess according to Equation (12) in Paper I. This results in a sinusoidal variation in COPP. This parameter is typically the weakest of the three insolation parameters, so this example, which has no variation in $\varepsilon$ or $e$, allows us to see its effect more plainly. The ice sheets grow and decay in response to the planet's precession. The total ice volume's strongest peak is at half the period of COPP---this is because the northern and southern ice sheets grow and decay at opposing times.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{f23a.pdf}
\includegraphics[width=0.5\textwidth]{f23b.pdf}
\caption{\label{fig:fft_slowrot} Normalized power spectra showing the strength at different periods in the ice height (top panel), global ice volume (middle panel), and the insolation parameters (obliquity, eccentricity, and COPP; bottom panel). Vertical dashed lines in the top two panels indicate peaks in the insolation parameters. The left panel shows a case with $P_{rot} =1.62$ day, $\varepsilon_0 = 23.5^{\circ}$, $e_0 = 0.167$, and $i_0 = 11.67^{\circ}$ and the right shows a case with $P_{rot} =1.62$ day, $\varepsilon_0 = 23.5^{\circ}$, $e_0 = 0.25$, and $i_0 = 16.04^{\circ}$. The ice sheets are strongly coupled to the eccentricity and, to a lesser extent, the obliquity. The case on the right lies within the secular spin-orbit resonance, hence the obliquity and eccentricity have the same period of oscillation.}
\end{figure*}
Figure \ref{fig:fft_slowrot} shows the periodograms for two cases with $P_{\text{rot}} = 1.62$ day and $\varepsilon_0 = 23.5^{\circ}$ that are characteristic of the behavior we see over much of this parameter space. The left panel shows a case that is outside the secular resonance (see Figure \ref{fig:lowoblmidslowmap}) and the right shows a case that is \emph{inside} the resonance. Outside the resonance, the obliquity and eccentricity have distinct peaks, and both can be seen in the ice sheet growth and decay. In the secular resonance, the obliquity oscillates with almost exactly the same period as the eccentricity (a consequence of resonance), and the ice sheets follow this period. Interestingly, in all of the parameter space we explore, the ice mass is dominated by the eccentricity cycle, not the obliquity cycle, except in the secular resonance, when the frequencies are similar and thus difficult to disentangle. The periods associated with COPP cannot even be seen in the ice sheets on a linear scale. The ice sheets are mostly driven by the eccentricity, while the obliquity controls their stability (Section \ref{sec:icestab}).
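For reference, the sketch below shows how such a power spectrum can be produced with \texttt{SciPy}; the ice volume series is a synthetic placeholder, and the sampling interval is an assumed value rather than the output cadence of our simulations.
\begin{verbatim}
import numpy as np
from scipy.signal import periodogram

dt = 50.0                                  # sampling interval [yr] (assumed)
t = np.arange(0.0, 2.0e6, dt)
ice_volume = np.sin(2.0 * np.pi * t / 5.0e4)   # placeholder time series

# The Bartlett (triangular) window suppresses spectral leakage,
# yielding the cleaner power spectra shown in the figures.
freq, power = periodogram(ice_volume, fs=1.0 / dt, window="bartlett")
period = 1.0 / freq[1:]                    # skip the zero-frequency bin
power_norm = power[1:] / power[1:].max()   # normalized power
\end{verbatim}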
\subsection{Importance of ice sheets} The inclusion of the ice sheet model has important consequences. The snowball instability is triggered more easily (\emph{i.e.}, at higher $S_{\star}$), because of the extra energy required to melt the ice sheets (compared to the energy required simply to raise the surface temperature above freezing). Thus the climate with ice sheets is generally cooler at the same stellar flux than without. Indeed, without ice sheets, for our test planet at $\varepsilon = 23.5^{\circ}$, the snowball state is not reached until $S/S_0 \approx 0.95$, compared to $S/S_0 \approx 0.975$ with ice sheets (Figure \ref{fig:trans}). \begin{figure*} \includegraphics[width=\textwidth]{f24.pdf} \caption{\label{fig:areacovnoice} Fractional area of ice coverage at $\varepsilon_0 = 23.5^{\circ}$, $P_{rot}=1$ day, with ice sheets disabled. On the left are static conditions at the initial values; on the right, dynamic orbit and obliquity. Compare to Figure \ref{fig:lowoblmidmap}. The stellar flux here is lower than in the simulations from Figure \ref{fig:lowoblmidmap}, $S = 1304$ W m$^{-2}$. The ice coverage is very different from the cases with ice sheets at low inclinations---in the lower left, where the obliquity variations are relatively small.} \end{figure*} The response to orbital variations is altered as well. Figure \ref{fig:areacovnoice} shows the fractional area coverage for $\varepsilon_0 = 23.5^{\circ}$, $P_{rot}=1$ day, at $S = 1304$ W m$^{-2}$. Without perturbations, at this stellar flux, there are no snowball states. At $e \sim 0.25$, the area of ice coverage increases slightly, because of increased apoastron distances and time spent there, but the ice coverage drops to zero at the highest eccentricities. When perturbations are included, the area of ice coverage increases in most regions and snowball states are reached at $i_0 \gtrsim 12^{\circ}$ and $e_0 \lesssim 0.25$. The change in ice coverage between static and dynamic cases is more pronounced here than in the low obliquity cases with ice sheets (Figure \ref{fig:lowoblmidmap}). Further, the region of small obliquity variations (lower left) does not experience snowball states as often as the cases with ice sheets. \subsection{Comparison with \cite{armstrong2014}} \label{sec:armstrongcomp} Here, we revisit the 17 test systems from \cite{armstrong2014}. Refer to that paper for the physical details of these systems. We simulate the orbital evolution using \texttt{DISTORB} and \texttt{HNBody} and the obliquity evolution using \texttt{DISTROT}. In cases 1, 2, 5, 6, 7, 13, 14, and 17, the combined orbital/obliquity evolution resulting from the secular model (\texttt{DISTORB}) matches sufficiently well with \cite{armstrong2014}, and we couple these directly to the climate model, \texttt{POISE}. In the rest of the cases, the eccentricity and/or obliquity evolution (using \texttt{DISTORB}) diverges significantly from the \cite{armstrong2014} simulations or the semi-major axis evolution is large enough that we must use \texttt{HNBody} for the orbital evolution. Whether we ultimately use \texttt{DISTORB} or \texttt{HNBody}, we ensure that the obliquity/orbital evolution matches well with \cite{armstrong2014} before running the climate model. In all cases, we run the climate model with the same parameters and initial conditions as for our Earth comparison (Section \ref{sec:valid}) and the Earth-mass planet in our test system. 
For each system, we run three sets of \texttt{POISE} simulations: one set with the orbit and obliquity held constant at their initial values, one set with the orbit and obliquity held constant at their mean values (over 1 Myr), and one set with the full orbital and obliquity variations. We generate a comparison with \cite{armstrong2014} by varying the stellar luminosity and locating the value, $L_{\text{OHZ}}$, at which the transition between warm, clement conditions and the snowball state occurs. The semi-major axis at which the outer edge of the habitable zone (OHZ) occurs is then calculated from
\begin{equation}
a_{\text{OHZ}} = a_{\oplus}\sqrt{\frac{L_{\odot}}{L_{\text{OHZ}}}}.
\end{equation}
The purpose of this somewhat awkward definition is solely to compare directly with \cite{armstrong2014}. We do not vary the \emph{initial} semi-major axis of the planet ($a_0 = 1$ au in every case) because the eccentricity and obliquity evolution would be different at every location. Varying the stellar luminosity instead gives us a way of isolating the effects of the dynamical evolution. This definition of $a_{\text{OHZ}}$ is also not fully self-consistent because in several cases (systems 4, 10, and 11), the semi-major axis of the planet varies by $\sim 10 \%$, leading to a significant change in the stellar flux received by the planet. This ultimately leads to a significant decrease ($\sim6-8 \%$) in $a_{\text{OHZ}}$ for these three cases. In reality, it is probably more accurate to describe this result as an excursion beyond the habitable zone due to an increase in semi-major axis $a$, rather than a decrease in the distance at which the planet enters a snowball state. Such is the difficulty in reducing a concept as multi-faceted as orbital evolution to a single parameter, $a_{\text{OHZ}}$. The percent enhancement of the OHZ is then calculated for each system relative to system 1 and displayed in Figure \ref{fig:ohzcomp} for the static initial, static mean, and variable orbit and obliquity (compare to Figure 11 in \cite{armstrong2014}). Note also that system 1 has the same $a_{\text{OHZ}}$ for the static initial, static mean, and variable orbit/obliquity values, so the percent enhancement for each is zero. In most cases, the change in $a_{\text{OHZ}}$ from system 1 is $\lesssim 1 \%$. The OHZ is enhanced under static initial conditions for systems 3, 10, and 15 as a result of the high initial eccentricity of the planet. In systems 2, 3, 5, 6, 15, and 16, the enhancement under static mean conditions is a result of the planet's high mean obliquity. Variations enhance the OHZ relative to system 1 only in systems 3 and 15, which also saw warmer conditions due to the higher initial eccentricity. For the most part, the variations lead to a decrease in $a_{\text{OHZ}}$. Except in cases where there was no change to the OHZ, variations always lead to a decrease in $a_{\text{OHZ}}$ compared to static conditions in the same system. Ultimately, our results are significantly different from those of \cite{armstrong2014}. Compare our Figure \ref{fig:ohzcomp} with their Figure 11. We find that, in general, dynamical evolution of the eccentricity and obliquity of a HZ planet tends to make the planet more susceptible to snowball states than when it has static orbital conditions, while \cite{armstrong2014} found that dynamical variations tended to inhibit glaciation and snowball states. There are two fundamental reasons our results differ from that study. The first is related to the parameterization of the OLR.
The stability of the EBM is related to the strength of the longwave (LW) radiation feedback and the ice-albedo feedback. The LW radiation feedback is negative: a small positive perturbation to the surface temperature will cause the OLR to increase, generating more cooling and returning the surface to the unperturbed temperature. The process also works in the other direction: a small negative perturbation to the temperature will cause the OLR to decrease, creating additional heating and returning the temperature to its previous value. The ice-albedo feedback is positive: a small negative perturbation to the surface temperature will cause the ice to grow, reflecting more radiation to space and causing the surface to cool further. A positive perturbation will likewise generate runaway warming, if the ice-albedo feedback is the dominant feedback of the model. Of course, the real Earth and more sophisticated 3D models have a number of other feedback processes that work to alter the climate stability, but in a 1D EBM like ours and the model in \cite{armstrong2014}, stability is simply a competition between the LW radiation feedback and the ice-albedo feedback. In this simple formulation, the LW radiation feedback is contained within the parameter $B$. A large, positive value of $B$ will create a very stable climate, while a smaller value will create a less stable climate. For Earth, $B \approx 2.09$ W m$^{-2}$ K$^{-1}$ \citep{northcoakley1979}. A Taylor expansion of the OLR parameterization in \cite{spiegel2009}, for example, shows that their model 2 has $B \approx 2.28$ W m$^{-2}$ K$^{-1}$ at a surface temperature of 288 K, and so their model should be more stable against snowball states when using this formulation than with the OLR from \cite{northcoakley1979}. The OLR from \cite{armstrong2014} is found by combining their Equations (23) and (24) and comparing to the full energy balance equation (our Equation \ref{eqn:ebmeq}):
\begin{equation}
I(T) = \frac{\epsilon_s \sigma T_s^4}{1+\tau} - F_{\text{surf}},
\label{eqn:johnolr}
\end{equation}
where $\epsilon_s$ is the emissivity of the atmosphere, $\sigma$ is the Stefan-Boltzmann constant, $F_{\text{surf}}$ is a tunable constant, and $\tau$ is a tunable parameter used to approximate the greenhouse effect that was \emph{not} assumed to be a function of temperature. The authors found that setting $\epsilon_s = 1$ and $\tau = 0.095$ reproduced Earth and so fixed these values for the rest of the study. As stated before, a Taylor expansion of Equation \ref{eqn:johnolr} with respect to temperature gives the value of $B$:
\begin{equation}
B = \frac{dI}{dT} = \frac{4 \epsilon_s \sigma T_s^3}{1+\tau}.
\end{equation}
Plugging in their constants and a surface temperature of $T_s = 288$ K, one finds $B = 4.95$ W m$^{-2}$ K$^{-1}$. As far as EBMs go, this model is extremely stable against the snowball instability. The second reason our model differs from \cite{armstrong2014} is our inclusion of the horizontal heat transport (however crudely it is represented here). A comparison between our energy balance equation (\ref{eqn:ebmeq}) and that in \cite{armstrong2014} shows that $D = 0$ in the latter. It can be shown that when $D=0$, the ice-albedo feedback does not affect adjacent latitudes as it should. Conceptually, the ice-albedo feedback affects adjacent latitudes because, when the albedo (and thus temperature) changes in one model cell, the temperature gradient between adjacent cells is changed. This causes the heat flow between cells to change.
In words, the feedback works because cooling (or heating) in one cell alters heat flow to and from adjacent cells, cooling (or heating) those adjacent areas. Without that horizontal heat flow, there is no ice-albedo feedback, and no snowball \emph{instability}---that is, snowball states can still occur, but only when all latitudes in the model \emph{individually} come into radiative equilibrium at below-freezing temperatures. That occurs at a much lower stellar flux than that caused by the instability. \begin{figure*} \includegraphics[width=\textwidth]{f25.pdf} \caption{\label{fig:ohzcomp} Percent enhancement in the distance to the OHZ from the host star for the 17 systems in \cite{armstrong2014}. The percent enhancement for each system is measured relative to system 1, as in \cite{armstrong2014}. Black bars are for static orbits and obliquity at the initial values, blue bars are for static orbits and obliquity at the mean values, and red bars are for variable orbits and obliquity. In cases 4, 10, and 11, the semi-major axis of the Earth-mass planet varies by $\sim10\%$, leading to large changes in insolation and subsequent snowball states. In cases 2, 3, 5, 6, 15, and 16, the large mean obliquity leads to an extension of the habitable zone for static mean conditions. In most systems, variable eccentricity and obliquity leads to a decrease in the OHZ distance.} \end{figure*} \subsection{Predicting climate states with machine learning} Results from the statistical analysis and machine learning model are shown in Tables \ref{tab:daveresults1} and \ref{tab:daveresults2}. Correlations are strongest with stellar flux, $S$, and the eccentricity parameters. The MIC values are similar, $\sim 0.2-0.3$, across most of the parameters, except for $\varepsilon_0$'s relationship to $\delta_{\text{snow}}$. Interestingly, $\Delta i$ shows a stronger correlation, $R$, with $\delta_{\text{snow}}$ and $f_{\text{ice}}$ than the obliquity parameters, despite the fact that the inclination has no direct impact on climate. The linear relationships ($R$) between ($f_{\text{ice}}$, $i_0$), ($f_{\text{ice}}$, $\langle i \rangle$), ($\delta_{\text{snow}}$, $i_0$), and ($\delta_{\text{snow}}$, $\varepsilon_0$) are insignificant if a $p$-value of $<0.005$ is desired (see Section \ref{davealgo}). However, the MIC for these quantities shows a non-linear relationship about as strong as any other parameter's. One plausible explanation is that the inclination (especially the variation in inclination) affects both the evolution of the eccentricity and the evolution of the obliquity (see Equations 5, 6, 12, and 13 in Paper I), and thus is indirectly coupled to the climate through two variables. The stellar flux, $S_{\star}$ (defined here for a circular orbit), is unsurprisingly the most important parameter in determining the final climate parameters, $\delta_{\text{snow}}$ and $f_{\text{ice}}$. The mean eccentricity, $\langle e \rangle$, tends to be the next most important parameter, as expected (see Equation \ref{eqn:annualinsol}). The remaining variables tend to have similar, and relatively small, weighting. Using the stellar flux and the mean insolation alone, one could correctly predict the climate state of our test planet about half the time. However, including all variables, the ML model can predict $\delta_{\text{snow}}$ correctly 97\% of the time. For $f_{\text{ice}}$, which is predicted with the RF regressor, the accuracy metric is the $R^2$ score, which in this case is $R^2 = 0.93$ (the best possible score is 1).
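A minimal sketch of this pipeline is given below. We assume \texttt{scikit-learn} for the random forests (the text specifies RF models but not an implementation), \texttt{scipy} for the Pearson $R$, and \texttt{minepy} \citep{albanese2013} for the MIC; the file and array names are hypothetical, and the hyperparameters are library defaults rather than the tuned values behind the tables. Note that the tabulated $\zeta_{NL}$ equals $\mathrm{MIC}-R^2$, as can be verified against Tables \ref{tab:daveresults1} and \ref{tab:daveresults2}.

\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr
from minepy import MINE
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one row per simulation with columns
# [S, e0, de, <e>, i0, di, <i>, eps0, deps, <eps>]; outcomes from POISE.
X = np.load("features.npy")
snow = np.load("delta_snow.npy")      # 0/1 snowball flag
f_ice = np.load("f_ice.npy")          # fractional ice coverage

for j in range(X.shape[1]):
    r, p = pearsonr(X[:, j], snow)    # linear correlation and p-value
    mine = MINE()
    mine.compute_score(X[:, j], snow)
    mic = mine.mic()                  # maximal information coefficient
    print(j, r, p, mic, mic - r**2)   # last value is zeta_NL

Xtr, Xte, ytr, yte = train_test_split(X, snow, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))          # ~0.97 in the text
print("importances (xi_i):", clf.feature_importances_)

Xtr, Xte, ytr, yte = train_test_split(X, f_ice, test_size=0.2, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("R^2:", reg.score(Xte, yte))               # ~0.93 in the text
\end{verbatim}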
The similar weights of the remaining variables illustrate the complexity of the interplay between orbit and climate. Note that feature importances should be interpreted cautiously, as correlations between features can skew the importances: for example, in the case of two highly-correlated features, one feature can display a high importance ($\xi_i$), while the second displays a low importance. \begin{table} \caption{\textbf{Relative importance of input parameters on $\delta_{\text{snow}}$}} \centering \begin{tabular}{lrrrr} \hline\hline \\ [-1.5ex] \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{Pearson $R$ ($p$)} & \multicolumn{1}{c}{MIC} & \multicolumn{1}{c}{$\zeta_{NL}$} & \multicolumn{1}{c}{$\xi_i$} \\ [0.5ex] \hline \\ [-1.5ex] $S_{\star}$ & -0.517486 (0.0) & 0.259659 & -0.008133 & 0.367391 \\ $e_0$ & -0.469633 (0.0) & 0.191850 & -0.028705 & 0.088580 \\ $\Delta e$ & -0.281968 (0.0) & 0.181865 & 0.102360 & 0.014340 \\ $\langle e \rangle$ & -0.480688 (0.0) & 0.256887 & 0.025826 & 0.227943 \\ $i_0$ & 0.026494 (0.0132) & 0.256149 & 0.255448 & 0.022177 \\ $\Delta i$ & -0.318399 (0.0) & 0.216146 & 0.114768 & 0.024869 \\ $\langle i \rangle$ & 0.056757 ($1.08 \times 10^{-7}$) & 0.200756 & 0.197534 & 0.047204 \\ $\varepsilon_0$ & -0.026059 (0.01478) & 0.000490 & -0.000189 & 0.015797 \\ $\Delta \varepsilon$ & 0.084789 ($1.95 \times 10^{-15}$) & 0.097013 & 0.089824 & 0.094639 \\ $\langle \varepsilon \rangle$ & -0.031998 (0.00276) & 0.124936 & 0.123913 & 0.097059 \\ \hline \end{tabular} \label{tab:daveresults1} \end{table} \begin{table} \caption{\textbf{Relative importance of input parameters on $f_{\text{ice}}$}} \centering \begin{tabular}{lrrrr} \hline\hline \\ [-1.5ex] \multicolumn{1}{c}{Parameter} & \multicolumn{1}{c}{Pearson $R$ ($p$)} & \multicolumn{1}{c}{MIC} & \multicolumn{1}{c}{$\zeta_{NL}$} & \multicolumn{1}{c}{$\xi_i$} \\ [0.5ex] \hline \\ [-1.5ex] $S_{\star}$ & -0.502261 (0.0) & 0.260615 & 0.008349 & 0.396097 \\ $e_0$ & -0.498351 (0.0) & 0.268657 & 0.020303 & 0.085960 \\ $\Delta e$ & -0.322404 (0.0) & 0.218874 & 0.114929 & 0.012151 \\ $\langle e \rangle$ & -0.515085 (0.0) & 0.295807 & 0.030495 & 0.249936 \\ $i_0$ & -0.011158 (0.2967) & 0.255632 & 0.255508 & 0.016456 \\ $\Delta i$ & -0.361029 (0.0) & 0.216911 & 0.086569 & 0.021697 \\ $\langle i \rangle$ & 0.020870 (0.0509) & 0.199982 & 0.199546 & 0.036169 \\ $\varepsilon_0$ & -0.062202 ($5.77 \times 10^{-9}$) & 0.170839 & 0.166970 & 0.018088 \\ $\Delta \varepsilon$ & 0.059806 ($2.16 \times 10^{-8}$) & 0.148690 & 0.145113 & 0.079007 \\ $\langle \varepsilon \rangle$ & -0.092422 ($4.61 \times 10^{-18}$) & 0.242192 & 0.233650 & 0.084440 \\ \hline \end{tabular} \label{tab:daveresults2} \end{table} \begin{figure*} \includegraphics[width=\textwidth]{f26.pdf} \caption{\label{mlmodelpredict} Snowball states ($\delta_{\text{snow}}$) for $P_{rot} = 1$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$, with a stellar constant of $S = 1332.27$ W m$^{-2}$ from the full orbit/climate simulation (left) and the machine learning algorithm (RF classifier; middle). White regions are simulations that ended in a snowball state; dark blue are those that did not. In the ML case shown here, this slice ($P_{rot} = 1$ day and $\varepsilon_0 = 23.5^{\circ}$) of parameter space was excluded from the training set and the algorithm was trained on the remaining data.
The right panel shows the fractional ice coverage area for $P_{rot} = 1$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$, with a stellar constant of $S = 1332.27$ W m$^{-2}$, as predicted by the random forest regressor. Compare to the right panel in Figure \ref{fig:lowoblmidmap}. } \end{figure*} Figure \ref{mlmodelpredict} shows $\delta_{\text{snow}}$ for the full orbit$+$climate simulations (left), compared to the ML algorithm predictions (middle), for one slice of our parameter space. The ML algorithm captures the basic shape of the parameter space, though it does miss a few features such as the blue island at $e\approx0.15$ and $i_0\approx20^{\circ}$. In the case shown, this slice of parameter space ($P_{rot} = 1$ day, $\varepsilon_0 = 23.5^{\circ}$, and $S = 1332.27$ W m$^{-2}$) was excluded from the training set. In the right panel, we show the predicted ice area coverage for $P_{rot} = 1$ day and initial obliquity $\varepsilon_0 = 23.5^{\circ}$ at $S = 1332.27$ W m$^{-2}$. Again, this slice was excluded from the training set. Though the model does slightly better at predicting $\delta_{\text{snow}}$, the algorithm picks out the structure of the original map of $f_{\text{ice}}$ (Figure \ref{fig:lowoblmidmap}). We conclude that the ML algorithm does very well at predicting the ultimate climate state of this test planet. Though we trained the model on a fixed grid of initial conditions, future studies will probe training sets created with randomized initial conditions. Future analyses will be able to extend the model beyond what is computationally feasible via direct integration: when it becomes prohibitive to run a desired number of simulations, we may be able to make do with a fraction of that number by applying ML. \section{Discussion} We reiterate our primary conclusions here: \begin{enumerate} \item In predicting the climate state of a potentially habitable planet, it is not enough to simply run a climate model with the initial conditions (\emph{i.e.} the observed orbit), nor is it sufficient to use the averaged quantities. Variations in the orbit need to be considered, because of the instability brought on by coupled obliquity and eccentricity variations. In particular, we note the instability that occurs when the planet's obliquity reaches $\sim 35^{\circ}$ during an eccentricity minimum, if a large ice cap is present. At this obliquity, with the climate parameters we use here, there is no stable location for the ice edge; it must either retreat or grow uncontrollably. If the incoming stellar flux is decreased because the eccentricity is low, the ice will grow to the equator. If the eccentricity is sufficiently high at such times, the ice caps will collapse entirely. \item Coupled orbital and obliquity variations tend to trigger the snowball instability. The eccentricity oscillations cause the global flux to vary and as a result, the planet can go from completely ice free to having large ice caps in a few thousand years. If the obliquity remains low enough, the ice caps remain stable. When the obliquity is oscillating by a large amount, however, the ice latitude can become suddenly unstable. Many times, the ice caps are small enough that they disappear entirely (the small ice cap instability); other times, the ice caps are large enough to trigger the large ice cap instability and the planet becomes entirely ice covered. \item For eccentricity variations this large ($\Delta e \sim 0.1-0.3$), the ice ages are primarily controlled by the eccentricity, not the obliquity.
This is very different from the recent Earth, where the insolation variations are dominated by the obliquity cycle. Obliquity is important mainly in determining the \emph{stability} and \emph{location} of ice sheets. \item The thermal inertia of ice sheets plays an important role. The inclusion of ice sheets causes snowball states to be triggered at higher incident stellar flux than if a simple temperature-dependent albedo is used to mimic ice. Interestingly, the difference between static and dynamic orbital conditions seems to be reduced somewhat by the presence of ice sheets. The model is more susceptible to snowball states in general, but ice sheets somewhat diminish the response of the climate to orbital variations. \end{enumerate} In summary, planets undergoing strong orbital forcing are prone to the snowball or large ice cap instability, and surface habitability is therefore compromised. It should be noted, however, that Earth potentially went through several snowball states during the Proterozoic Eon ($\sim 2.5$ to $0.54$ billion years ago), and photosynthetic life persisted during these phases \citep{harland1964, kirschvink1992}. One explanation is that the surface was not actually completely frozen during such time periods---the Earth was in a ``soft'' snowball (or ``water-belt'') state, with some open ocean in the tropics \citep{chandler2000}. An alternative explanation is that meltwater ponds persisted on the surface of the ice, creating a refuge for photosynthetic life \citep{hoffman2017}. Unfortunately, the EBM does not capture all the necessary physics to distinguish a soft snowball state from a hard snowball state. Therefore, our results are probably pessimistic in regard to surface habitability. Modeling of Exo-Milankovitch cycles is difficult because of the timescales involved. 3D GCMs can take weeks to converge even for static orbital conditions and integrations spanning only decades of model time. We have approached the problem with a comparatively simple, computationally efficient EBM---however, such models lack important phenomena and thus must be treated cautiously. As much as possible, we attempt to validate our results against a more sophisticated model. In terms of average yearly behavior, the EBM does a decent job. The greatest discrepancies occur in simulations that reach high obliquity and have relatively high stellar flux. In these cases, the summer insolation at the poles can be intense enough (locally) to reach runaway greenhouse temperatures. Undoubtedly, there will also be cloud formation, which affects the albedo, as observed in GCM simulations of synchronous rotators \citep{joshi2003,edson2011,edson2012,yang2013}. The difference is that here, the planet is in a very different rotation state, which may inhibit the global-scale redistribution of heat seen in those studies. The carbon-silicate cycle on a planet like Earth is probably too slow to prevent orbitally induced snowball states. Earth's carbon-silicate cycle operates on a $\sim 0.5$ Myr time-scale \citep{kasting1993,haqqmisra2016}; the planet in this configuration can evolve from ice-free to completely ice-covered in thousands of years. If a planet has significantly higher outgassing and weathering rates than Earth, there may be some hope of preventing the instability through this negative feedback. Even with an Earth-like carbon-silicate cycle, however, the snowball states could eventually be escaped through the build-up of atmospheric carbon dioxide.
The planet may then become extremely warm for an extended period until carbon is weathered out of the atmosphere. And, of course, the obliquity and eccentricity will continue to vary in the same manner as before, perhaps leading to periods of intense polar heating. A long-term simulation of exo-Milankovitch cycles with a carbon cycle would certainly be interesting. In Paper I, we discussed possibilities for determining whether an exoplanet is undergoing Milankovitch cycles. As mentioned there, constraining this phenomenon will largely rely on two-dimensional mapping techniques \citep{palle2008,cowan2009,kawahara2010,fujii2012,cowan2013,kawahara2016,schwartz2016}. A 2-D map of the surface and/or atmosphere of an exoplanet will be difficult to generate and will most likely require a large telescope such as the \emph{Large UltraViolet Optical and InfraRed surveyor} \citep[\emph{LUVOIR};][]{bolcar2015,dalcanton2015}. Planets such as those we have investigated here, with large-amplitude obliquity and eccentricity cycles, would be ideal cases for constraining Milankovitch cycles. Referring to Figures \ref{fig:lowoblmidmap} - \ref{fig:highoblmidfastmap}, and comparing the left and right panels in each, we can see that there are regions of parameter space where we expect the planet to be in a snowball state under static obliquity/orbital conditions, but it is clement when these parameters are allowed to vary. We also see many regions where the planet is warm under static conditions, but enters a snowball state when variations are included. By comparing the climate state under static and dynamic scenarios with observed 2-D albedo maps, it might be possible to infer that the planet is undergoing Milankovitch cycles. This will, of course, depend heavily on one's trust in the climate models used and on the elimination of alternative explanations. For the nearer future, the more practical application of the type of modeling we present here is target prioritization. In scenarios where the orbital parameters of a potentially habitable planet and its companions are well constrained, modeling of dynamical effects on climate (such as Milankovitch cycles) may better inform the likelihood of surface habitability. If there appears to be a high probability of snowball states due to such variations, the target will be less favorable than another for detecting surface biosignatures. Conversely, if one is primarily interested in determining the presence of Milankovitch cycles, a target in a dynamically ``hot'' system will be preferable. Regardless of motivation, our understanding of the coupling of climate to obliquity and orbital variations will be important to the interpretation of \emph{LUVOIR} observations. \section{Conclusions} In Paper I, we showed that secular spin-orbit resonances can exist even in relatively simple planetary systems, and that they can cause very large obliquity oscillations. In this paper, we applied a climate model to one of these systems. We have modeled the climate evolution of a planet with an Earth-like atmosphere in response to extreme orbital forcing. The large changes in eccentricity and obliquity drive the growth and retreat of ice caps, which can extend from the poles to $\sim 30^{\circ}$ latitude. These exo-Milankovitch cycles often lead to the snowball instability, in which the planet's oceans become completely ice covered, as well as the small ice cap instability, in which the ice completely disappears.
We reiterate that planetary systems are extremely complex, and in cases like that shown here, the presence of companions can affect an Earth-like planet's habitability. It is particularly important to understand the eccentricity and obliquity evolution in combination, because the stability of ice sheets is intimately coupled to the obliquity, while the eccentricity affects the amount of intercepted stellar energy. At a single stellar flux, a planet can be either clement and habitable or completely ice-covered, depending on the orbital parameters and the planet's recent climate history. This further complicates the concept of a static habitable zone based on the stellar flux. We have shown that orbital and obliquity evolution, and the long time scales of ice evolution, should be considered when assessing a planet's potential habitability. \section{Acknowledgements} This work was supported by the NASA Astrobiology Institute's Virtual Planetary Laboratory under Cooperative Agreement number NNA13AA93A. This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington. The results reported herein benefited from the authors' affiliation with NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. Thank you to David Crisp, Andrew Lincowski, Tony Del Genio, Ravi Kopparapu, Jacob Haqq-Misra, and Natasha Batalha for helpful discussions, and to the anonymous referee, whose feedback resulted in a greatly improved manuscript. \clearpage \software{Scipy \citep{jones2001}, minepy \citep{albanese2013}, ebm-analytical \citep{rose2017}} \bibliographystyle{aasjournal}
\section{Introduction} When galaxies fall into clusters and move at high speed with respect to the hot intracluster medium (ICM), the ram pressure (RP) exerted by the ICM upon the galaxy interstellar medium (ISM) can remove the galaxy gas, wherever the RP exceeds the gravitational pull (Gunn \& Gott 1972, Jaff\'e et al. 2018). The hot halo gas is affected first; the disk gas is then removed starting from the outer disk regions, and stripping works its way inward, until the galaxy is eventually devoid of gas. \begin{figure}[b] \begin{center} \includegraphics[width=5.5in]{Fig1new.pdf} \caption{The jellyfish galaxy JO206, a massive ($9 \times 10^{10} M_{\odot}$) galaxy in a low mass cluster (velocity dispersion $\sim 500 \rm km \, s^{-1}$). Left. White-light MUSE image. Right. MUSE map of $\rm H\alpha$ emission. Note the tentacles of stripped ionized gas extending for $\sim 80$ kpc from the disk, to the west. Clearly visible in this tail are $\rm H\alpha$-emitting, star-forming clumps. From Poggianti et al. (2017a).} \label{fig1} \end{center} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=3.2in]{fig2.pdf} \caption{Global SFR-M relation for undisturbed galaxies (black points, grey line fit) versus RP stripped galaxies (colored points, light blue line fit). RP galaxies show a modest but significant enhancement in their SF activity. From Vulcani et al. (2018).} \label{fig2} \end{center} \end{figure} \begin{figure}[b] \begin{center} \vspace*{-5.0cm} \includegraphics[width=5.0in]{fig3new.pdf} \vspace*{-4.3 cm} \caption{Characteristics of a sample of PSB galaxies in z=0.3-0.4 clusters. Top. Time since quenching. Bottom. Fraction of mass assembled during the last 1.5 Gyr, used to roughly estimate the mass fraction involved in the burst prior to truncation. Boxes represent the 25th and 75th percentiles, the median is the horizontal line and the whiskers are the 15th and 85th percentiles. From Werle et al. (2022).} \label{fig3} \end{center} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=5.6in]{fig7.pdf} \caption{MUSE gas metallicity map of the galaxy JW100 (top left). Three main tails can be identified (top right). The metallicity strongly decreases along each tail (bottom panels). Assuming a typical ICM metallicity of 0.3 times solar, the fraction of gas provided by ICM mixing can be estimated (right Y-axes in bottom panels). From Franchetto et al. (2021).} \label{fig4} \end{center} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=3.4in]{fig9.pdf} \caption{Global SFR - HI mass relation for two GASP jellyfish galaxies (JO201 and JO206), compared to a sample of normal spirals of similar masses (grey distribution). In jellyfish galaxies there is a clear excess of SF for their HI mass. From Ramatsoku et al. (2020), see also Deb et al. (2022).} \label{fig5} \end{center} \end{figure} \begin{figure}[b] \begin{center} \centerline{\includegraphics[width=2.7in]{fig12.pdf}\includegraphics[width=2.7in]{fig11.pdf}} \caption{ALMA molecular gas (CO(1-0)) emission in the jellyfish galaxy JW100 (red in the left panel, grey in the right panel). Left. There is a large amount of extraplanar CO, in clouds with molecular gas masses ranging from $10^6$ to $10^9 M_{\odot}$. Right. Blue contours are the $\rm H\alpha$ emission. While molecular gas close to the disk may be partly stripped, the clouds far out in the tail must have formed in-situ. From Moretti et al.
(2020b).} \label{fig6} \end{center} \end{figure} \begin{figure}[b] \vspace*{-0.5 cm} \begin{center} \includegraphics[width=3.8in]{fig5.pdf} \includegraphics[width=4.3in]{fig6.pdf} \caption{The molecular gas content of 4 GASP jellyfish galaxies obtained with ALMA (colored symbols) is compared with samples of normal galaxies (grey points and lines). Top. Ratio of molecular gas mass and galaxy stellar mass as a function of stellar mass. Bottom left. Molecular to neutral gas mass ratio versus stellar mass. Bottom right. Total (molecular+neutral) gas mass over stellar mass as a function of stellar mass. From Moretti et al. (2020b).} \label{fig7} \end{center} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=4.0in]{fig8.pdf} \caption{Example of unwinding arms in a RP stripped galaxy (JO200). The observed $\rm H\alpha$ velocity map (left) is compared with RP simulations (right). From Bellhouse et al. (2021).} \label{fig8} \end{center} \end{figure} Neutral gas studies provided the first direct evidence for gas stripping in clusters, revealing HI tails, truncated HI disks, HI disturbed morphologies and a progressively increasing HI deficiency towards the central regions of clusters (see Cortese et al. 2021 for a recent review). By now, observations of extraplanar tails of stripped material have been obtained with several different methods (HI line, integral-field spectroscopy, $\rm H\alpha$ imaging, X-ray emission, radio continuum, UV and optical imaging, see Boselli et al. 2022), showing tails of gas in different phases and even tails of stars formed in the stripped gas. It is important to realize that each of the methods for identifying galaxies affected by ram pressure provides only a partial view of this phenomenon. The GASP (GAs Stripping Phenomena in galaxies) project investigates the physical mechanisms that remove gas from galaxies and their effects on the star formation activity and galaxy structure. This programme has been studying galaxies in low redshift clusters, groups, filaments and isolated galaxies, but in these proceedings I will focus on cluster galaxies only. Readers are referred to Vulcani et al. (2021) for an overview of non-cluster galaxies. GASP provides the only large sample of confirmed ram-pressure stripped galaxies with integral-field (IF) spectroscopy. It consists of 64 ram-pressure stripped galaxies with a wide range of stellar masses ($10^9$-$10^{11.5} M_\odot$), that were selected for showing unilateral debris in B-band images. They are hosted in 39 low redshift clusters (z=0.04-0.07) whose velocity dispersions range from $\sim 500$ to $\sim 1400 \rm km \, s^{-1}$.\footnote{Cases of GASP ram-pressure stripped galaxies in groups and even in filaments have been presented in Vulcani et al. (2021) and references therein.} These galaxies are at various stages and degrees of stripping, from weak or initial signs of stripping, to very strong stripping (``jellyfish galaxies'', with long gas tails, Fig.~1), to the final stages of stripping (truncated disks with gas only left in the galaxy center). Their properties can be contrasted with the GASP control sample of 30 undisturbed galaxies. Recently, MUSE data of distant galaxy clusters have started to allow the first detailed studies of ram-pressure ionized gas tails also at higher redshifts (2 galaxies at z=0.7, Boselli et al. 2019; 13 galaxies at z=0.3-0.4, Moretti et al. 2022, Bellhouse et al. 2022).
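As a quantitative aside, the classic criterion of Gunn \& Gott (1972) quoted in the Introduction can be evaluated in a few lines: gas at radius $R$ is stripped wherever $\rho_{\rm ICM} v^2 > 2\pi G \Sigma_*(R)\,\Sigma_{\rm gas}(R)$. The sketch below assumes exponential disks and purely illustrative numbers; it is not a fit to any GASP galaxy.

\begin{verbatim}
import numpy as np

G = 6.674e-11                            # SI units throughout
MSUN, PC, KPC = 1.989e30, 3.086e16, 3.086e19

# Gunn & Gott (1972): gas at radius R is stripped where
# rho_icm * v^2 > 2*pi*G * Sigma_star(R) * Sigma_gas(R).
rho_icm = 1e-3 * 1e6 * 1.67e-27          # n ~ 1e-3 cm^-3 -> kg m^-3
v = 1.5e6                                # 1500 km/s infall speed
ram = rho_icm * v**2                     # ram pressure, Pa

Sig_star0 = 500.0 * MSUN / PC**2         # central surface densities and
Sig_gas0 = 20.0 * MSUN / PC**2           # scale lengths: illustrative only
h_star, h_gas = 3.0 * KPC, 5.0 * KPC

R = np.linspace(0.1, 30.0, 500) * KPC
restoring = (2.0 * np.pi * G * Sig_star0 * np.exp(-R / h_star)
             * Sig_gas0 * np.exp(-R / h_gas))

stripped = ram > restoring               # outside-in stripping
print("gas stripped beyond ~%.1f kpc" % (R[stripped][0] / KPC))
\end{verbatim}

Because the restoring force falls off exponentially while the ram pressure is fixed, stripping naturally proceeds outside-in, as described above.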
The GASP survey is based on a MUSE ESO Large Program (spatial resolution $\sim 1$ kpc, galaxy coverage out to 7 effective radii on average), complemented by multi-wavelength follow-up programs with ALMA, APEX, JVLA, MeerKAT, ATCA, UVIT@ASTROSAT, LOFAR and Chandra. Thus, the GASP multi-wavelength coverage allows us to investigate all the main processes related to star formation, including gas in different phases, the stellar content and non-thermal processes. \section{Global and spatially resolved star formation rate-mass relation in the disks of galaxies} The existence of a relation between the integrated star formation rate (SFR) and the total stellar mass of galaxies (hereafter, ``global'' SFR-M relation) has been known for many years and has been observed from z=0 to beyond z=2. A correlation between the stellar mass surface density and the SFR surface density has been found in all integral-field spectroscopy surveys, even at higher redshifts, suggesting that the global relation originates from a perhaps more fundamental relation on small (1 kpc) scales (Wuyts et al. 2013, Sanchez et al. 2013). The scatter in this relation is related to Hubble type (Gonzalez-Delgado et al. 2016) and variations in star formation efficiency (Ellison et al. 2020). Reaching larger galactocentric radii than any other large IF survey, GASP has shown that the resolved SFR-M correlation of undisturbed galaxies is broad, and that the scatter mainly arises from bright off-center star-forming knots. Moreover, each galaxy has a distinct resolved relation and the global relation appears to be driven by the existence of the size-mass relation (Vulcani et al. 2019). Ram-pressure stripped galaxies show a moderate enhancement of SFR in the disk for their stellar mass, lying on average above the global SFR-M relation of undisturbed galaxies by 0.2 dex (Fig.~2, Vulcani et al. 2018, Roberts \& Parker 2020). Thus, during the initial and strongest stages of stripping, ram pressure enhances SF before leading to quenching. On spatially resolved scales, we find again an enhancement of SFR density at a given mass density at all galactocentric distances, consistent with being induced by RP compression waves (Vulcani et al. 2020b). \section{From stripping to quenching: post-starburst galaxies and AGN} RP stripping of the disk, and consequent quenching, proceeds mostly outside-in, or side-to-side if the galaxy is plunging edge-on into the ICM. It is therefore common to observe RP stripped galaxies that still host ionized gas and intense star formation in their central regions, as well as long tails of ionized gas, while their outer disk regions are already devoid of ionized gas and recently quenched (Gullieuszik et al. 2017, Poggianti et al. 2017, 2019b, Bellhouse et al. 2017, 2019, Werle et al. 2022). These outer regions present typical post-starburst (PSB) spectra (Dressler \& Gunn 1983, Poggianti et al. 1999), with no emission lines and strong Balmer lines in absorption, indicative of a sudden local truncation of the star formation activity at some point during the past $\sim 0.5$ Gyr. It has been suggested that the strong decline of star formation is responsible for the excess of global radio continuum emission, relative to the ongoing SFR, observed by LOFAR in RP stripped galaxies (Ignesti et al. 2022a,b), and that the high radio-to-$\rm H\alpha$ ratio in tails compared to disks is due to relativistic electrons advected from the disks.
If gas is totally stripped, the end product of this evolution is a PSB galaxy, with no ionized gas and no SF left anywhere in the disk. The GASP samples, both at low-z and in distant clusters, include by construction several PSB galaxies whose quenching history can be studied in detail by applying spectrophotometric codes (Fritz et al. 2017) to the MUSE spectroscopic datacube (Vulcani et al. 2020, Werle et al. 2022). The stellar history maps indeed confirm mostly outside-in and side-to-side quenching, with characteristic signatures indicating that most cluster PSBs originate from RP stripping (Vulcani et al. 2020, Werle et al. 2022). The MUSE spectra also reveal that, generally, a strong local starburst takes place before the SF truncation, accounting for significant fractions of the mass formed (Fig.~3). Moreover, the quenching timescales and the total time it takes to quench all spaxels can be derived (Werle et al. 2022). Interestingly, a small fraction of the PSB galaxies in distant clusters present inside-out quenching, associated with AGN episodes which are still detectable from the MUSE emission lines in the centers (Werle et al. 2022). The quenching history in these cases appears to be dominated by AGN feedback, though this may be indirectly induced by RP. In fact, an unusually high incidence of AGN has been found among strongly RP stripped galaxies (Poggianti et al. 2017), though not all studies find an AGN excess (Roman-Oliveira et al. 2019). In some cases these AGNs show outflows of ionized gas (Radovich et al. 2019) and strong AGN feedback sweeping a large region around the galaxy center (George et al. 2019). Recently, based on a combination of GASP galaxies and all available literature data for RP stripped galaxies, contrasted with a MANGA mass-matched undisturbed sample, Peluso et al. (2022) has confirmed that the incidence of AGN is higher in RP stripped galaxies. This phenomenon seems to occur preferentially during the strongest phase of stripping, when the gas tails are longest. Hydrodynamical simulations reach similar conclusions (Ricarte et al. 2020, Farber et al. 2022). Simulations have previously shown that loss of angular momentum due to the interaction of the rotating ISM with the non-rotating ICM can potentially draw gas onto lower orbits, and recent work has clarified the respective roles of mixing and torques from pressure gradients (Akerman et al. 2022 submitted). \section{Star formation in the tails of stripped gas and gas mixing with the intracluster medium} The fact that new stars can form in-situ in the stripped gas, outside of the galaxy disk and even far out in the stripped tails, has been shown by many studies using several different SF indicators (e.g. Yagi et al. 2007, Smith et al. 2010, Sun et al. 2010, Merluzzi et al. 2013, Fumagalli et al. 2014, Fossati et al. 2016, Consolandi et al. 2017, Boselli et al. 2018, Abramson et al. 2011, Kenney et al. 2014, George et al. 2018, Cramer et al. 2019). In the extraplanar $\rm H\alpha$ emitting tails of GASP stripped galaxies the dominant ionization mechanism is indeed photoionization by young massive stars, as shown by MUSE BPT diagrams (Poggianti et al. 2019a). This SF takes place in $\rm H\alpha$-bright, dynamically cold star-forming clumps formed in-situ in the tails with luminosities similar to giant and super-giant HII regions (Fig.~1, Poggianti et al. 2019a).
The star formation occurring in the tail can produce from a negligible fraction to up to $\sim 20\%$ of the total SFR of the system (disk+tail) (Poggianti et al. 2019a), and the fraction of star formation in the tails roughly follows the fraction of gas that is stripped according to the traditional analytical formulation (Gullieuszik et al. 2020): the SFR in the tail can thus be roughly predicted, in a statistical sense, knowing four main observable quantities: galaxy mass, cluster mass, galaxy line-of-sight velocity within the cluster and projected clustercentric distance. With HST, using broad band filters spanning from UV to I-band and a narrow-band filter covering $\rm H\alpha$, we can now study at higher spatial resolution (70 pc) the star-forming clumps, in the disks, in the vicinity of the disk but ``extraplanar'', and in the tails. For a large sample of both $\rm H\alpha$-selected (over 2400 clumps) and UV-selected (over 3700) clumps in GASP jellyfish galaxies we have been able to study sizes, luminosities, hierarchical structure as well as stellar masses, star formation histories and stellar ages (Giunchi et al. submitted, Werle et al. in prep.). $\rm H\alpha$ and UV luminosity and size distribution functions, together with luminosity-size relations, can be found in Giunchi et al. (2022, submitted). The emerging picture is that basic star-forming clump properties such as luminosity and size do not depend strongly on whether the clumps are sitting in a galaxy disk or far out in the tail without an underlying disk. These characteristics of GASP clumps resemble more closely those in highly turbulent, $\rm H\alpha$-bright galaxies (Fisher et al. 2017) than those in normal low-z spiral galaxies (see Giunchi et al. 2022 for details). Interestingly, $\rm H\alpha$ and UV-selected clumps can be embedded in larger star-forming complexes emitting at optical wavelengths ($V$-band), whose total masses range typically from $10^4$ to $10^7 M_{\odot}$ (Werle et al. in prep.). Thanks to these data, it is now possible to investigate the fate of these clumps, whether they are likely to remain isolated intracluster entities (Globular clusters? Ultracompact dwarf galaxies? Ultradiffuse galaxies?), or get dispersed and go on to contribute to the general intracluster light. Stripped material and intracluster material can exchange baryons in both directions. Evidence for mixing between the stripped interstellar medium (ISM) and the intracluster medium (ICM) has accumulated over the past few years, based e.g. on X-ray data (Poggianti et al. 2019b, Campitiello et al. 2020, Sun et al. 2022, Bartolini et al. 2022) and on the analysis of the diffuse ionized gas (Tomicic et al. 2021a,b). The most direct evidence for ICM-ISM mixing has come from the gas metallicity in the stripped gas, which decreases along the tails, going away from the disk (Fig.~4, Franchetto et al. 2021), as also predicted by hydrodynamical simulations (Tonnesen \& Bryan 2021). Overall, the data support a scenario in which the stripped gas gets mixed with the ICM, accreting a significant fraction of mass, and still manages to cool and collapse to form new stars.
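The ICM fractions on the right-hand axes of the bottom panels of Fig.~4 follow from a simple two-component mixing argument: if the observed tail metallicity is a linear combination of the disk (stripped ISM) value and an assumed ICM value of 0.3 solar, the ICM mass fraction follows immediately. A minimal sketch, with illustrative numbers:

\begin{verbatim}
# Two-component mixing: observed tail metallicity as a linear combination
# of the disk (stripped ISM) metallicity and an assumed ICM metallicity
# of 0.3 solar.  Numbers are illustrative, not measurements.
Z_DISK, Z_ICM = 1.0, 0.3                 # solar units

def icm_fraction(Z_obs, Z_disk=Z_DISK, Z_icm=Z_ICM):
    """ICM mass fraction implied by linear mixing of the two components."""
    return (Z_disk - Z_obs) / (Z_disk - Z_icm)

print(icm_fraction(0.65))                # 0.5: half the gas accreted from ICM
\end{verbatim}

Under this assumption, the observed metallicity decline along the tails translates directly into a growing ICM mass fraction with distance from the disk.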
Observations of the magnetic field aligned with the tail in the stripping direction in the JO206 GASP jellyfish galaxy favor a ``magnetic draping'' scenario (the galaxy, moving within the magnetized ICM, sweeps up the surrounding magnetic field; Dursi \& Pfrommer 2008) and point out that the magnetic field may play a role in favoring the SF in the tails by reducing the thermal conduction between ICM and ISM, hence the ISM evaporation (Mueller et al. 2020). In order to complete the picture, there is another crucial stage in the star formation process: the formation of molecular clouds from the neutral gas, as discussed below. \section{Neutral gas, molecular gas and star formation} As mentioned above, no selection method can provide a complete census of ram pressure stripped galaxies. It is thus of fundamental importance to obtain multi-wavelength information for samples selected in different ways. Only then will we be able to understand under what conditions multiphase tails are formed, and what is the evolutionary sequence among the tails observed at different wavelengths. There are still relatively few RPS galaxies for which detailed multi-wavelength coverage is available (e.g. ESO137-001 and JW100). For a subset of the GASP jellyfish galaxies, both HI and CO observations have been collected, in addition to MUSE. These GASP jellyfish galaxies are slightly HI deficient compared to similar non-stripped galaxies but have a significant excess of star formation for their HI content (Fig.~5, Ramatsoku et al. 2019, 2020, Deb et al. 2020, 2022, see also Luber et al. 2022 for an HI comparison of jellyfish and other cluster galaxies). These same galaxies have very large amounts of molecular gas, as estimated from their CO(2-1) and CO(1-0) emission (Fig.~6, Jachym et al. 2017, 2019, Moretti et al. 2018b, 2020a,b). Their molecular gas mass at a given stellar mass is 4-5 times higher than in undisturbed galaxies (Fig.~7). The molecular to neutral gas ratio in their disks is between 4 and 100 times higher than normal. Surprisingly, overall, the total (molecular plus neutral) gas mass is similar to that in normal galaxies. These results, shown in Moretti et al. 2020b, are robust irrespective of the conversion factors used for CO to $H_2$, and they suggest a very efficient conversion of neutral gas into molecular gas in jellyfish galaxies. As far as the star formation efficiency is concerned (the ratio between star formation rate and molecular gas mass), this appears to be significantly lower in the tails than in normal spirals, with long depletion times of up to $10^{10}$ yr (Moretti et al. 2020b, and in prep.). MeerKAT is soon to provide HI-selected samples for which additional multi-wavelength information, including molecular gas estimates, has been secured. \vspace{-0.7cm} \section{Conclusions} Ram-pressure stripped galaxies are unique laboratories to study the star formation process and the baryonic cycle under unusual conditions. Gas clouds in stripped tails are embedded within the hot ICM and do not experience the influence of an underlying stellar disk, yet stars are commonly formed in stripped tails of multi-phase gas. Also in the galaxy disks, SF appears to be slightly enhanced globally, and strongly enhanced locally at any given time, where gas is still left. We witness an exceptionally efficient conversion of neutral to molecular gas in these galaxies, while the star formation efficiency ranges from roughly normal in the disks to much lower than normal in the tails.
PSB galaxies are the natural end-product of the stripping and subsequent quenching, and AGN activity possibly triggered by RP itself sometimes contributes in an inside-out manner to the quenching. I have presented only some of the highlights of the GASP project concerning the star formation activity and the baryonic cycle in cluster galaxies. I have not presented the consequences of the RP-induced SF on the galaxy structure: as an example, RP stripping can cause the unwinding of spiral arms without the contribution of tidal interactions (Fig.~8, Bellhouse et al. 2021). Interested readers can find the full list of GASP publications at https://web.oapd.inaf.it/gasp/. \acknowledgements Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere under ESO programme 196.B-0578. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 833824). \vspace{-0.5cm}
\section*{Glossary} {\bf ESR/EPR:} Electron spin/paramagnetic resonance \\ {\bf ENDOR:} Electron nuclear double resonance \\ {\bf NMR:} Nuclear magnetic resonance \\ {\bf QIP:} Quantum information processing \\ {\bf 2DEG:} Two-dimensional electron gas \\ {\bf MOS:} Metal oxide semiconductor \\ {\bf FET:} Field effect transistor\\ {\bf SET:} Single-electron transistor \\ {\bf QPC:} Quantum point contact: a narrow constriction between two conducting regions whose conductance is extremely sensitive to nearby charge.\\ {\bf Qubit:} A quantum bit, i.e. a two-level quantum system that is the basic building block of a quantum computer.\\ {\bf Quantum computer:} A register of interacting qubits on which unitary operations (logic gates) are performed to execute quantum algorithms. \\ {\bf QD:} Quantum dot: a semiconductor structure in which electrons can be confined, in all three dimensions, on a nanometer length scale. \\ {\bf Entanglement:} exists in two or more qubits that cannot be described as a product of individual qubit states. Leads to non-classical correlations in qubit measurements.\\ {\bf Magneto-optical Kerr Effect (MOKE):} gives a material a polarization-dependent refractive index, and is used to measure single spins through the rotation of linearly polarized light. \\ {\bf ISC:} Intersystem crossing: an electronic transition between a singlet and a triplet. It is forbidden in lowest order. \\ {\bf Trion:} A composite particle consisting of two electrons and one hole or two holes and one electron.\\ {\bf Decoherence:} The loss of phase information in a quantum state or qubit, characterised by the decay time constant $T_2$.\\ {\bf Hole:} A quasiparticle generated when an electron is removed from the valence band. Heavy holes are those close to the band edge with large effective mass.\\ \section{Introduction} \label{sec:intro} If the full potential of quantum information processing can be realized, its implications will be far reaching across a range of disciplines and technologies. The extent of these implications is certainly not yet fully known, but the revolutionary effect that quantum physics would have on cryptography, communication, simulation and metrology is well established~\cite{nielsen00, kok10}. Even though building a quantum computer is very challenging, research focussed on achieving this goal can already claim to have produced some of the most high impact scientific results of the last decade. One of the reasons for this is that the language of quantum information has shown how to bring together and find new links between work on a staggering variety of physical systems. The past fifteen years have seen candidate embodiments of quantum bits tested to their limits of coherence time, and in some cases control over small numbers of such systems has become refined enough to permit the demonstration of basic quantum logic gates. However, there has been an increasing awareness that the challenge of faithfully storing, processing and measuring quantum information within physical systems is sufficiently great so as to discourage relying on one quantum degree of freedom alone. Furthermore, classical information processors use different physical systems to encode information at different stages of their operation: for example, charge states in semiconductors for processing information, the orientation of magnetic domains in hard disks for longer term storage, and optical photons for transmitting data.
For a quantum computer to benefit from such an optimised use of resources, it must be able to transfer quantum information coherently between different degrees of freedom. Within the solid state, the electron spin exhibits a number of interactions that could be harnessed for this hybridising of quantum information: with nuclear spins, which benefit from long coherence times, or with charge or optical states, either of which could be used to measure, manipulate, or even entangle electron spins (see Fig.~\ref{summary1}). In this review we will examine the ways in which electron spin qubits may couple to different degrees of freedom in the solid state and how this is being used to hybridise quantum information. We will distinguish between two regimes of coupling: \begin{enumerate} \item A weak interaction that provides an opportunity to transfer some (small) amount of classical information between the electron spin and another degree of freedom. This kind of interaction can be very important in few-to-single spin measurement, for example a small change in conductivity or photoluminescence depending on the state of an electron spin. \item A stronger interaction capable of providing the coherent transfer of a quantum state between two degrees of freedom, with sufficient fidelity to permit the storage of quantum information or the performance of entangling operations. \end{enumerate} \begin{figure}[t] {\includegraphics[width=\columnwidth]{./Figs/summary.pdf}} \caption{Electron spins in the solid state interact with several other degrees of freedom, including nuclear spins, optical photons, charge and, potentially, superconducting qubits via circuit quantum electrodynamics. In each case, example systems are illustrated. Such interactions can be harnessed to enhance the processing, coherent storage and measurement of quantum information. [Figure includes extracts from Refs~\cite{kok10} and \cite{wallraff04} with permission from Macmillan Publishers Ltd; \cite{barthel09}, Copyright (2009) by the American Physical Society http://prl.abstract.org/PRL/v103/i16/e160503; and \cite{koiller05a} with permission from Anais da Academia Brasileira de Ci\^encias.]} \label{summary1} \end{figure} \section{Electron spin quantum bits} \label{espins} In order to understand how to use electron spins linked with other degrees of freedom in hybrid quantum processor architectures, we must first understand the different forms that electron spin qubits can take. In this section, we examine some important implementations of electron spin qubits, with a particular focus on those in the solid state. \subsection{Quantum dots: artificial atoms} \label{subsec:qds} Quantum dots (QDs) are artificial structures that are fabricated so that electronic states are confined, on a nanometer length scale, in all three spatial dimensions. They are typically divided into two broad classes. First, there are lithographically defined structures consisting of a quantum well semiconductor heterostructure that confines a two-dimensional electron gas (2DEG). The other dimensions are defined by lithographically deposited electrical top gates (for an example configuration, see Fig.~\ref{summary1}). This allows one, two or more dots to be defined side-by-side; gate electrodes are used to control the number of charges within the structure, and allow a single electron spin to be isolated. The second class are self-assembled nanostructures where confinement is naturally provided in all three dimensions.
They are typically fabricated by molecular beam epitaxy: a large band-gap semiconductor forms the substrate and a smaller band-gap material is deposited on top (see Fig.~\ref{elecstruc}c). Under the right conditions, nanoscale islands form and subsequent overgrowth of the original material leads to three-dimensional confinement. The resultant discrete energy level structure of both conduction and valence bands is shown in Fig.~\ref{elecstruc}d and allows the physics of small numbers of electrons and holes to be investigated. The spin properties of both types of carrier are essentially determined by the corresponding bulk properties. In group IV or III-V semiconductors, valence states have $p$-like orbitals, and can have total spin $J=3/2$ or $1/2$, whereas conduction states have $s$ orbital symmetry and have $J=1/2$~\cite{harrison00}. The confinement splits the six possible hole bands into three discrete doublets. The heavy holes ($J=3/2, J_z=3/2$ for growth direction $z$) generally have the largest mass of the valence states and under confinement form the highest-lying doublet. Analogous to bulk semiconductors, optical QDs can either be intrinsic (i.e. have full valence and empty conduction states), or be doped to generate single electron or single hole ground states. Both are promising qubits and have been investigated experimentally~\cite{hanson08,ramsay08, gerardot08}, and theoretically~\cite{calarco03, gauger08}, though most work has focussed on the electron spin~\cite{mikkelsen07, xu07}. Inter-band (or, more correctly, inter-state) transitions typically lie in the optical or near infra-red region and can have significant transition dipole matrix elements~\cite{basu97, kok10}. These optically active transitions are the essential ingredient for an electron-photon interface as discussed in \Sec{sec:photons}. \begin{figure}[t] \begin{psfrags} {\scriptsize \psfrag{u}{$\sigma^+$} \psfrag{v}{$\sigma^-$} \psfrag{a}{$\ket{\mbox{$\frac12$},\mbox{$\frac12$}}$} \psfrag{b}{$\ket{\mbox{$\frac12$},-\mbox{$\frac12$}}$} \psfrag{c}{$\ket{\mbox{$\frac32$},\mbox{$\frac32$}}$} \psfrag{d}{$\ket{\mbox{$\frac32$},-\mbox{$\frac32$}}$} \psfrag{h}{$\ket{\uparrow} \leftrightarrow \ket{T_h^\downarrow}$} \psfrag{e}{(a)} \psfrag{f}{(b)} \psfrag{j}{(c)} \psfrag{y}{(d)} \psfrag{z}{(e)} \psfrag{g}{20 nm} \psfrag{r}{100 nm} {\hspace{-2cm}\includegraphics[width=1\columnwidth]{Figs/selectionrules.pdf}}} \end{psfrags} \caption{Crystal structure (a) and electronic energy level diagram (b) of the NV$^-$ centre. (c) AFM micrograph showing the spontaneous formation of QDs when a smaller bandgap material (light brown) is deposited on a larger bandgap substrate (dark brown). (d) Confinement potential in a cut through one such QD, showing discrete bound states with angular momentum $\ket{J, J_z}$. Here the QD is doped with a single electron, and excitation with $\sigma^+$ light leads to trion formation only for an initial $\ket{\uparrow}$ spin state. (e) Spin selection rules for both kinds of polarized light. [Figure partly adapted and reprinted with permission from \cite{kok10}.]} \label{elecstruc} \end{figure} \subsection{Impurities in solids} The confinement achieved in quantum dots is naturally found in impurity or defect states within certain materials, some of which provide ideal hosts for an associated electron spin. Desirable properties of the host material include a low natural abundance of nuclear spins (such as C, Si, Ge and certain II-VI-based materials) and weak spin-orbit coupling.
In the semiconductor industry, silicon is doped with phosphorus in order to increase the bulk electron concentration, however at low temperatures the electron associated with the donor becomes bound to the P impurity. In his influential proposal~\cite{kane98}, Kane suggested using arrays of P donors in silicon (Si:P) as nuclear spin qubits, whose interactions could be tuned by electrical top-gates which change the wavefunction of the bound electron. This, and related proposals for Si:P quantum computing~\cite{skinner03, morton:clustersi}, are well supported by a number of findings and achievements made over the past decade, amongst others: a) control of P-donor placement in silicon with atomic precision using scanning probe methods~\cite{ruess04}; b) manipulation of donor wavefunctions through electrical gates~\cite{calderon06, lansbergen08, bradbury06}; c) single spin detection in silicon~\cite{morello10}; and d) measurement of very long electron and nuclear coherence times, 0.6 and 3~seconds respectively, within a $^{28}$Si isotopically-enriched environment~\cite{morton:qmemory}; (A.M. Tyryshkin and S. A. Lyon, unpublished observation). Other donors in silicon possess properties that make them of interest as electron spin qubits~\cite{stoneham03}. For example bismuth has a large nuclear spin ($I=9/2$) offering a large Hilbert space for storing quantum information~\cite{george10,morley10}, and its large hyperfine coupling ($A=1.475$~GHz) gives a zero-field splitting that may be useful for coupling to superconducting resonators~\cite{schuster10,kubo10}. The paramagnetic nitrogen-vacancy NV$^-$ centre in diamond has an $S=1$ ground state, with a zero field splitting between the $m_s=0$ and $m_s=\pm1$ states of $\sim2.88$~GHz (see Figure~\ref{elecstruc} a and b). It exhibits coupling to surrounding $^{13}$C~nuclei as well as the local nitrogen nuclear spin~\cite{jelezko04} and possesses a highly advantageous optical transition which enables initialisation and single-spin measurement~\cite{jelezko03} as discussed in detail in \Sec{sec:photons}. In addition, NV$^-$ centres offer the benefit of long coherence times at room temperature --- 1.8~ms in $^{12}$C-enriched diamond~\cite{balasubramanian09} --- which permits the measurement of coupling between NV$^-$ centres separated by distances as long as 100~\AA~\cite{neumann10} (see also Box on Single-Spin Electron Spin Resonance). There are other impurity spins with an associated optical transition which are at earlier stages of investigation, such as fluorine donors in II-VI semiconductors~\cite{sanaka09}, while rare-earth impurities in glasses are being investigated as optical memories for quantum repeaters~\cite{guillot08}. \subsection{Molecular electron spin} Molecules offer highly reproducible components which can host electron spin and can be coupled together using the techniques of organic and inorganic chemistry. Simple organic radicals, such as those based on nitroxide radicals, are used extensively in the field of spin-labelling for distance measurements in biological molecules~\cite{jeschke07}. Their electron spin coherence times are limited to 1--10~$\rm{\mu}$s~in typical environments, rich in nuclear spins from hydrogen nuclei, and can be extended to $\sim$~100 $\rm{\mu}$s~for dilute spins in deuterated environments at 40~K~\cite{lindgren97}. Fullerenes, such as C$_{60}$~and C$_{82}$, act as a molecular-scale trap for atoms or ions, shielding the electron spin of the caged species from the environment.
Such \emph{endohedral fullerenes}\footnote{The notation $M$@C$_{xx}$ is used to indicate that the species $M$ is held within the C$_{xx}$ fullerene cage.} based on group III ions such as Sc-, Y- and La@C$_{82}$~possess $T_2$~times in excess of 200~$\rm{\mu}$s~under optimised conditions~\cite{brown10}. In the case of the remarkable N@C$_{60}$~molecule, atomic nitrogen occupies a high-symmetry position at the centre of the cage, leading to an $S=3/2$ electron spin with the longest coherence times of any molecular electron spin: 80~$\rm{\mu}$s~at room temperature rising to 500~$\rm{\mu}$s~at temperatures below 100~K~\cite{mortoncs2, mortonprb}. The organic radical created by X-ray irradiation of malonic acid crystals has been a standard in EPR/ENDOR spectroscopy for many decades~\cite{mcconnell60}, and has also been used to explore methods of controlling coupled electron and nuclear spin qubits with a strong anisotropic hyperfine interaction~\cite{mehring03, mitrikas10}. Offering a large set of non-degenerate transitions, the high-spin ground state of many single-molecule magnets (SMMs) is capable in principle of hosting quantum algorithms such as Grover's search~\cite{leuenberger01}. Electron spin coherence ($T_2$) times up to a few microseconds have been measured~\cite{ardavan07}, permitting Rabi oscillations in the electron spin to be observed~\cite{mitrikas08}. Despite their relatively short coherence times, these tuneable systems may provide useful testbeds to explore quantum control of multi-level electron spin systems. \subsection{Spins of free electrons} Finally, it is possible to use free electrons as spin qubits. For example, using a piezoelectric transducer over a semiconductor heterostructure, surface acoustic waves (SAWs) can be launched into a 2DEG, such that each SAW minimum contains a single electron~\cite{barnes00}. For more extreme isolation, electrons can be made to float above the surface of liquid helium, bound by their image charge. They can be directed around the surface by electrical gates beneath with very high efficiency~\cite{sabouret08}, for controlled interactions and measurement, while their spin is expected to couple very weakly to the varying electrical potentials~\cite{lyon06}. \section{Electron spin - nuclear spin coupling} \label{sec:nucspins} \subsection{Electron spins as a resource for nuclear spin qubits} \label{sec:efornucs} Since the beginning of experimental studies into quantum information processing, nuclear spins and nuclear magnetic resonance (NMR) have provided a testbed for quantum control and logic~\cite{cory97, gershenfeld97}. NMR can still claim to have hosted the most complex quantum algorithm to date through the 7-qubit implementation of Shor's factoring algorithm~\cite{vandersypen01}. However, the weak thermal nuclear spin polarisation at experimentally accessible temperatures and magnetic fields has limited the scalability of this approach, which relies on manipulating the density matrix to create states which are pseudo-pure~\cite{cory97, gershenfeld97} and thus provably separable~\cite{braunstein99}. A notable exception (albeit of limited scalability) is the use of a chemical reaction on parahydrogen to generate a two-nuclear-spin state with a purity of $\sim0.92$~\cite{anwar04}.
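The weakness of thermal nuclear polarisation is easy to quantify: an ensemble of isolated spin-1/2 particles in equilibrium has polarisation $P=\tanh(\mu B/k_BT)$. The sketch below compares an electron with a $^1$H nucleus at an assumed 9.4~T and room temperature; the field and temperature are chosen for illustration only, not taken from any cited experiment.

\begin{verbatim}
import numpy as np

# Equilibrium polarization of an ensemble of isolated spin-1/2 particles:
# P = tanh(mu * B / (kB * T)).  B and T are assumed illustrative values.
kB, muB, muN = 1.3807e-23, 9.2740e-24, 5.0508e-27   # SI
mu_e, mu_p = muB, 2.7928 * muN   # electron (|g| ~ 2) and 1H moments
B, T = 9.4, 300.0                # tesla, kelvin

for label, mu in (("electron", mu_e), ("1H nucleus", mu_p)):
    print(label, np.tanh(mu * B / (kB * T)))
# electron ~ 2e-2, proton ~ 3e-5: the ~660-fold larger electron moment is
# what transfer schemes such as dynamic nuclear polarization exploit.
\end{verbatim}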
The magnetic moment of the electron spin is about two thousand times greater than that of typical nuclei, bringing several key benefits to nuclear spin qubits: a) enhanced spin polarisation; b) faster gate manipulation (ten nanoseconds for a typical electron spin single-qubit rotation, rather than ten microseconds for the nuclear spin); and c) more sensitive detection, either via bulk magnetic resonance or via the electrical and optical methods described in this review. The general spin Hamiltonian for an electron spin $S$ coupled to one or more nuclear spins $I_i$ in a magnetic field $B$ is, in angular frequency units: \begin{equation} \mathcal{H}=\frac{g_e\mu_B}{\hbar}\vec{S}\cdot\vec{B}+\sum_i\left(\gamma_{n,i}\vec{I}_i \cdot\vec{B}+\vec{S}{\bf A}_i \vec{I}_i \right) \end{equation} where $g_e$ is the electron g-factor, $\mu_B$ the Bohr magneton, $\gamma_{n,i}$ the nuclear gyromagnetic ratios, and ${\bf A}_i$ the hyperfine coupling tensor between the electron and nuclear spin. Additional terms, such as a zero-field splitting term $\vec{S}{\bf D}\vec{S}$, may appear in higher-spin systems, such as NV$^-$ centres in diamond. For an electron and nuclear spin pair, this leads to four levels, separately addressable through resonant pulses, typically in the microwave regime (10--100~GHz) for the electron spin and in the radiofrequency regime (1--100~MHz) for the nuclear spin (see \Fig{fig:qmemory}B). By controlling the phase, power and duration of such a pulse, qubit rotations about an axis perpendicular to the applied field can be performed. Couplings that are stronger than the bandwidth of a typical pulse can be exploited to perform controlled-NOT (CNOT), or similar, operations through a selective microwave or rf pulse. Weaker couplings can be used to perform conditional logic through a combination of pulses and delays, exploiting the difference in the time evolution of one spin depending on the state of the other. Electron spin polarisation can be indirectly (and incoherently) transferred to surrounding nuclear spins through a family of processes termed dynamic nuclear polarisation, reviewed extensively elsewhere~\cite{maly08,barnes08}. For strongly coupled nuclear spins, electron spin polarisation can be transferred directly through the use of selective microwave and rf pulses --- such a sequence forms the basis of the Davies electron nuclear double resonance (ENDOR) spectroscopic technique~\cite{davies74, mendor}. A complementary approach is to apply algorithmic cooling, which exploits an electron spin as a fast-relaxing heat bath to pump entropy out of the nuclear spin system~\cite{ryan08}. The use of an optically excited electron spin (such as a triplet) can be advantageous as i) it offers potentially large spin polarisations at elevated temperatures, and ii) the electron spin, a potential source of decoherence, is not permanently present~\cite{kagawa09}. Given an isotropic electron-nuclear spin coupling of sufficient strength, it is possible to perform phase gates ($z$-rotations) on nuclear spin qubits on the timescale of an electron spin $2\pi$ microwave pulse, which is typically $\sim 50$~ns. The pulse must be selective on an electron spin transition in one particular nuclear spin manifold; hence, a weaker hyperfine coupling will necessitate a longer, more selective, microwave pulse.
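To make this concrete, the following minimal sketch builds the Hamiltonian above for an $S=1/2$, $I=1/2$ pair. The numerical values are illustrative Si:P-like parameters and the sign conventions are assumptions for the purpose of the example, not a prescription; the sketch checks that the two electron transitions are split by $\approx A$, and that a $2\pi$ rotation of the electron multiplies the addressed manifold by $-1$:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# Electron (first factor) and nuclear (second factor) operators
Sx, Sy, Sz = (np.kron(s, id2) for s in (sx, sy, sz))
Ix, Iy, Iz = (np.kron(id2, s) for s in (sx, sy, sz))

# Illustrative Si:P-like parameters, in angular frequency units
w_e = 2 * np.pi * 9.8e9     # electron Zeeman frequency at ~0.35 T
w_n = 2 * np.pi * 6.0e6     # 31P nuclear Zeeman frequency
A   = 2 * np.pi * 117.5e6   # isotropic hyperfine coupling

H = w_e * Sz + w_n * Iz + A * (Sx @ Ix + Sy @ Iy + Sz @ Iz)

# The two ESR transitions (electron flip, nucleus fixed) differ by
# ~A, so a weak microwave pulse addresses one nuclear manifold only.
E = np.sort(np.linalg.eigvalsh(H)) / (2 * np.pi)
print("ESR splitting (MHz):",
      round(((E[3] - E[0]) - (E[2] - E[1])) / 1e6, 1))

# A selective 2*pi rotation of the electron acts as -1 on the
# addressed manifold: a pi phase (z-rotation) on the nuclear qubit.
print(np.round(expm(-1j * 2 * np.pi * sx).real, 3))  # = -identity
\end{verbatim}

The final line is precisely the conditional $\pi$ phase discussed next: a $2\pi$ rotation of a spin-1/2 returns it to its initial state, but with an overall sign of $-1$.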
This kind of geometric Aharonov-Anandan phase gate~\cite{aharonov87} was experimentally demonstrated using N@C$_{60}$~and Si:P donor systems, and used to dynamically decouple nuclear spins from strong driving fields~\cite{morton:bangbang, tyryshkin06}. Given an anisotropic hyperfine coupling, the nuclear spin gate can be generalised to an arbitrary single-qubit rotation using a combination of microwave pulses and delays~\cite{mitrikas10, khaneja07}. For multiple coupled nuclear spins and a correspondingly large Hilbert space, more elaborate control is needed~\cite{hodges08}, for example using gradient ascent pulse engineering~\cite{khaneja05}. These methods exploit an effect termed electron spin echo envelope modulation (ESEEM) in the ESR community~\cite{hahn1952, dikanov92}. The weak and `always-on' coupling between two nuclear spins is another limitation of a nuclear spin-only NMR approach. Nuclear spin interactions can be decoupled through refocussing techniques -- for example using the ultrafast nuclear spin manipulations described above -- however, methods for gating such interactions have also been explored. An example is the proposal of exploiting the mutual coupling of two nuclear spins to an intermediate electron spin which is optically excited~\cite{schaffry10}. The mediator is diamagnetic in its ground state, such that the interaction between the two nuclei is effectively off. However, an optical pulse can excite a triplet state ($S=1$) in the mediator, which can be manipulated using microwave pulses to produce gates of maximal entangling power between the two coupled nuclear spins. Preliminary ENDOR experiments on candidate functionalised fullerene molecules indicate that the key parameters of triplet polarisation, relaxation rate and hyperfine coupling are in the correct regime to permit this kind of gate~\cite{schaffry10}. Given the above, it is clear that quantum logic between an electron and nuclear spin can also be performed. Entangling operations between an electron and a nuclear spin have been demonstrated in irradiated malonic acid crystals~\cite{mehring03}, and in the N@C$_{60}$~molecule~\cite{mehring04}; however, in both cases the spins were in a highly mixed initial state and so only pseudo-entanglement was generated --- the states were fully separable. Nevertheless, Ref.~\cite{mehring03} demonstrates an elegant way to perform density matrix tomography through the application of varying phase gates to the electron and nuclear spin, and a procedure which, if applied with high fidelity to spins at higher magnetic fields and lower temperatures, would lead to electron-nuclear spin entanglement. \subsection{Nuclear spin quantum memory} We may reverse the question, and ask how coupling to nuclear spins may offer advantages to electron spin qubits. One key advantage of the weak magnetic moment of nuclear spins is their correspondingly longer relaxation times $T_1$~(typically seconds to hours)~and $T_2$~(typically seconds), motivating the use of nuclear spins as quantum memories to coherently store the states of electron spin qubits. This has been achieved using a) NV$^-$ centres and neighbouring $^{13}$C~nuclei in diamond, exploiting the near-degeneracy of the nuclear spin levels in the $m_S=0$ manifold~\cite{dutt07}; and b) P-donor nuclear spins in isotopically purified $^{28}$Si, where the nuclear spin is directly excited using a radiofrequency pulse~\cite{morton:qmemory}. Both experiments are summarised in \Fig{fig:qmemory}.
The lifetime of the quantum memory is determined by the nuclear spin decoherence time $T_{\rm 2n}$, which was found to be $\gg20$~ms in the case of $^{13}$C~in the NV$^-$ diamond system at room temperature~\cite{dutt07}, and over 2 seconds for the $^{31}$P~donor nuclear spin. This approach has since been applied to other electron-nuclear spin systems such as the N@C$_{60}$~molecule ($T_{\rm 2n} = 140$~ms at 10~K) (R. M. Brown, A. M. Tyryshkin, K. Porfyrakis, E. M. Gauger, B. W. Lovett, et al., unpublished observation) and the substitutional nitrogen P1 centre in diamond ($T_{\rm 2n} = 4$~ms at room temperature). The large $I=9/2$ nuclear spin of the $^{209}$Bi~donor in silicon offers a large Hilbert space to store multiple electron spin coherences. Despite the wide range of nuclear transition frequencies (between 200 and 1300~MHz at X-band), it is possible to implement the same coherence transfer sequence applied previously to P-donors~\cite{george10}. A further strength of this system is the ability to optically hyperpolarise the $^{209}$Bi~nuclear spin~\cite{sekiguchi10,morley10}, as was previously shown for the P-donor~\cite{mccamey09, yang09}. An important limitation of this kind of nuclear spin quantum memory is that the nuclear spin coherence time is bounded by the electron spin relaxation time: $T_{\rm 2n} \leq 2T_{\rm 1e}$~\cite{morton:qmemory}. In many systems $T_{\rm 1e}$~can be made very long, for example by operating at low temperatures; however, the ability to remove the electron spin, for example through optical or electrical means, would remove this limit altogether~\cite{schaffry10, morton:clustersi}. \begin{figure}[t] {\includegraphics[width=\columnwidth]{Figs/qmem2.pdf}} \caption{Two methods for coherently storing an electron spin state in a local, coupled nuclear spin. In both cases a SWAP operation is performed through two controlled-NOT (C-NOT) gates, which are achieved through a selective spin flip. (A) Using the NV$^-$ centre in diamond, it is possible to achieve the nuclear spin flip by exploiting the natural precession of the nuclear spin in the $m_S=0$ electron spin manifold. A weak ($\sim 20$~G) magnetic field is applied perpendicular to the quantisation axis, which is defined by the electron spin and the orientation of the defect, causing an evolution ($\omega/2\pi \sim 0.3$~MHz) of the nuclear spin between $\ket{\uparrow}$ and $\ket{\downarrow}$. (B) The alternative, shown using P-donors in $^{28}$Si, is to drive a nuclear spin flip directly with a resonant radiofrequency pulse. In each case, electron spin coherence is generated, stored in the nuclear spin, and then retrieved at a later time considerably longer than the electron spin $T_2$. Observation of the recovered electron spin coherence can be achieved through Rabi oscillations, or by directly observing a spin echo, depending on the nature of the spin measurement used. (Panel A adapted from Reference~\cite{dutt07} with permission from AAAS; Panel B adapted from Reference~\cite{morton:qmemory}, with permission from Macmillan Publishers Ltd.) } \label{fig:qmemory} \end{figure} \section{Electron spin - optical photon coupling} \label{sec:photons} \subsection{Mechanisms and candidate systems} There are various methods by which spin can couple to an optical transition in certain materials, exploiting some kind of spin-selective interaction with light.
This interaction enables the initialisation, manipulation and measurement of single spins using optical techniques, which we shall review in this section. NV$^-$ centres (see \Fig{elecstruc}a,b) possess an optically active level structure that has a number of fortuitous properties. In particular, there is an intersystem crossing (ISC) which can take place between the excited $^3$E state and a metastable singlet $^1$A state, and the rate of ISC is three orders of magnitude faster for the $m_s=\pm1$ excited states than for $m_s=0$. Crucially, optical cycles between the ground and excited triplet states are essentially spin-conserving, and relaxation from the $^1$A state back to the triplet ground state $^3$A occurs with greatest probability to the $m_s=0$ state~\cite{manson06}. Spin selectivity in QDs is illustrated in \Fig{elecstruc}d,e. A left (right) circularly polarized light pulse propagating in the $z$ direction carries an angular momentum of $+\hbar$ $(-\hbar)$ along the $z$ axis, and if it is resonant with the heavy-hole to electron transition will only excite the transition that has a net angular momentum loss (gain) of $\hbar$. If the QD is doped with a single electron, then this leads to a spin-dependent optical transition for a given circular polarization. For example, if $\sigma_+$ light is incident on the sample propagating in the $z$ direction, then a state consisting of two electrons and one hole (known as a trion) is only created from the $J_z = +1/2$ level. This kind of `Pauli blocking' forms the basis of methods for optical initialization, readout and manipulation of spins in QDs. Optical coupling to spin qubits has been explored in other systems, such as donors in silicon~\cite{yang09}, but we will focus here on NV$^-$ and QD systems to illustrate the techniques and opportunities for coupling electron spins and photons in the solid state. \subsection{Electron spin initialisation} Electron spins can become highly polarized in strong magnetic fields and at low temperatures (e.g.\ 90\% polarization at 2~K and 4~T). However, it is possible to use optical cooling to achieve similar polarizations at much lower magnetic fields and more accessible temperatures. The initialisation of NV$^-$ centre spins at room temperature is possible because cycling the 637~nm optical transition largely preserves the electron spin state. As described above, the $m_s=0$ state has a very low probability of undergoing ISC when in the excited state $^3$E, while the $m_s=\pm1$ states have some chance of crossing to $^1$A, which will relax to $m_s=0$. A few cycles are enough to generate a large spin polarisation ($\sim90\%$) in the $m_s=0$ state~\cite{wrachtrup06,neumann10}. Unfortunately, no method to increase this polarisation closer to 100$\%$ has yet been identified; the difficulty is that the optical transitions are not perfectly spin-conserving and so there is a finite chance of a spin-flip on each optical cycle~\cite{manson06}. Atat\"ure {\it et al.}~\cite{atature06} demonstrated laser-induced spin polarisation in an InAs/GaAs QD~\cite{xu07}. They used a $\sigma^-$ laser to depopulate the $|\downarrow\rangle$ spin level, promoting population to the trion above, see Fig.~\ref{elecstruc}e. The trion decays primarily back to the original state, but there is a small probability that it goes to the other low-lying spin level ($|\uparrow\rangle$) via a spin-flip Raman process that arises due to light-heavy hole mixing.
However, any population in $|\uparrow\rangle$ will remain, since the pump has no effect on it. In this way, polarization builds up as the $|\downarrow\rangle$ state is eventually completely emptied, so long as there is no direct spin-flip mechanism with a rate comparable to the forbidden decay rate. In zero magnetic field, interaction with the nuclear spin ensemble does lead to spin flips, and only in an applied magnetic field can such flips be sufficiently suppressed. The measurement of polarization is made by observing the change in transmission of the probe laser: once the spin is polarized, no more absorption can occur. However, a sufficiently sensitive measurement can only be obtained by exploiting a differential transmission technique~\cite{alen03, hogele05}. Other methods of spin initialization rely on the generation of a polarized exciton in a diode-like structure that permits the preferential tunneling of either an electron or a hole. Both electrons~\cite{kroutvar04} and holes~\cite{ramsay08} can be prepared in this way. \subsection{Electron spin measurement} The spin-selective ISC in the excited $^3$E state enables the measurement of the spin state of a single NV$^-$ centre. The lifetime of the dark $^1$A state ($\sim250$~ns) is over an order of magnitude longer than that of $^3$E, such that the fluorescence intensity of the centre is reduced when ISC can occur~\cite{manson06}. The act of measurement (cycling the $^3$A-$^3$E optical transition to observe fluorescence) itself serves to re-initialise the spin in the $m_s=0$ state, so the signature of the spin measurement is a $\sim20\%$ difference in fluorescence intensity in the first 0.5~$\rm{\mu}$s~or so of optical excitation. This means that each measurement must be repeated many times in order to build up good contrast between the different spin states. Experiments showing single spin measurement are therefore a time-ensemble average --- in contrast to the spin-ensemble average typical of ESR experiments --- and refocusing techniques must be employed to remove any inhomogeneity~\cite{jelezko03}. Methods to improve the efficiency of the measurement are being actively explored, for example using $^{13}$C~nuclear spins~\cite{jiang09} or the $^{14}$N~nuclear spin~\cite{steiner10} as ancillae for repeated measurement. The electron spin state is copied to ancilla(e) nuclear spin(s), and then measured\footnote{This is not a cloning of the electron spin state (which is forbidden), but rather a C-NOT operation such that $\alpha\ket{0_e0_n}+\beta\ket{1_e0_n}\rightarrow \alpha\ket{0_e0_n}+\beta\ket{1_e1_n}$}. After some time ($<1$~$\rm{\mu}$s), any useful information on the electron spin state ceases to be present in the fluorescence and the electron spin is back in the $m_s=0$ state; the ancilla state can then be mapped back to the electron spin and the measurement repeated. Crucially, the coherent state of the coupled nuclear spin is not affected by the optical cycling that forms part of the electron spin measurement~\cite{dutt07, jiang08}. A range of techniques for single spin measurement have been explored for QDs. Several of these rely on the modification of refractive index that occurs close to (but not precisely at) optical resonance. By detuning a probe laser from resonance, absorption is suppressed and this dispersive regime can be exploited. Imagine, for example, a single electron spin polarized in the $\ket{\uparrow}$ state.
It will only interact with $\sigma^+$ light, and on passing through the quantum dot region, light of this polarization will experience a slightly different optical path length to an unaffected $\sigma^-$ beam. In order to enhance the difference in path lengths experienced, an optical microcavity can be exploited so that on average the light beam passes many times through the dot region. Linearly polarized light is composed of equal amounts of $\sigma^+$ and $\sigma^-$, and will therefore {\it rotate} on passing through the sample. The degree of rotation will be directly dependent on the magnitude and direction of the confined spin. The sensitivity of a typical measurement is improved by recording the degree of rotation as a function of detuning from resonance, and fitting the resulting curve with the initial spin polarization as a variable fitting parameter. Berezovsky {\it et al.}~\cite{berezovsky06} demonstrated this spin-dependent polarization rotation using a reflection geometry (in this case the effect is called the {\it magneto-optical Kerr effect}) to non-destructively read out a single spin following initialization in the $|\uparrow\rangle$ or $|\downarrow\rangle$ state. Atat\"ure {\it et al.}~\cite{atature07} performed similar measurements (see \Fig{atature1fig}), but using differential transmission to detect the rotation of polarization, which in this configuration is called Faraday rotation. State readout can also be achieved using a photocurrent technique, in which a trion that has been spin-selectively created is allowed to tunnel from a QD in a diode structure~\cite{kroutvar04, ramsay08}. \begin{figure}[t] \scriptsize \begin{psfrags} \psfrag{u}{$\sigma^-$} \psfrag{v}{$\sigma^+$} \psfrag{G}{Gate voltage (mV)} \psfrag{D}{Dispersive signal (a.u.)} {\includegraphics[width=0.75\columnwidth]{./Figs/atature1fig.pdf}} \end{psfrags} \caption{Experimental results of a preparation-measurement sequence for a spin in a QD. A preparation pulse is first applied with circular polarization, which results in spin pumping on resonance. Both panels show dispersive Faraday rotation (FR) measurements as a function of an applied gate voltage, which is used to change the detuning of the preparation laser from resonance through the Stark shift it generates. The upper (lower) panel shows results using a $\sigma^-$ ($\sigma^+$) preparation pulse. The black dots in each case represent the signal for a far-detuned preparation laser, with no spin cooling. The red (blue) dots show a preparation laser that is at resonance for a gate voltage of 415~mV, for the transitions shown on the right. A peak or dip in the FR signal is seen at resonance, which is a measurement of the prepared spin state. [Reprinted from Ref.~\cite{atature07}, with permission from Macmillan Publishers Ltd.]} \label{atature1fig} \end{figure} \subsection{Electron spin manipulation} Once initialization and measurement are established, the stage is set for the observation of controlled single spin dynamics. In the case of electron spins which are sufficiently long-lived, this can be performed by coupling the spin to a resonant microwave field, such as that used to observe coherent oscillations in a single NV$^-$ electron spin between the $m_s=0$ and the $m_s=\pm1$ levels~\cite{jelezko03}.
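These driven oscillations follow the textbook Rabi formula; the short sketch below (assuming resonant driving in the rotating-wave approximation, with a 10~MHz Rabi frequency chosen purely for illustration) reproduces them:

\begin{verbatim}
import numpy as np

# Rabi oscillations of a resonantly driven two-level system in the
# rotating frame (rotating-wave approximation): H = (Omega/2) sigma_x.
Omega = 2 * np.pi * 10e6          # assumed Rabi frequency, 10 MHz
t = np.linspace(0, 500e-9, 1000)  # 500 ns of driving

# Starting in |0>, the excited-state population is sin^2(Omega*t/2);
# a pi pulse (t = pi/Omega, here 50 ns) inverts the spin.
p1 = np.sin(Omega * t / 2) ** 2
print("population after a pi pulse:",
      round(float(np.interp(np.pi / Omega, t, p1)), 3))
\end{verbatim}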
Alternatively, by applying a magnetic field perpendicular to the direction of the initialization and measurement basis, it is possible to observe single spin precession as a function of time between the initialization `pump' and Kerr rotation `probe'~\cite{mikkelsen07}. For controlled rotation around a second axis, an optical `tipping' pulse can be applied~\cite{berezovsky08}. This pulse is circularly polarized, and cycles round one of the spin-selective transitions shown in Fig.~\ref{elecstruc}e. If the pulse is resonant, and completes a cycle from spin to trion and back, a relative $\pi$ phase shift is accumulated between the two spin states\footnote{This is analogous to the nuclear spin phase gate described in \Sec{sec:efornucs}, performed by driving the electron spin around a complete cycle. Similar phase gates can also be achieved in a photodiode system~\cite{ramsay08}.}. This corresponds to a rotation around the measurement basis axis, perpendicular to the applied field. Controlled rotation around two axes is sufficient for arbitrary single-qubit operations~\cite{nielsen00}, though these experiments do not demonstrate how to stop the magnetic field precession. Greilich {\it et al.}~\cite{greilich09} take the first step toward overcoming this problem, first by demonstrating that an arbitrary-angle phase gate can be achieved by detuning the cycling laser, and then by timing two $\pi/2$ phase gates such that a controlled field precession occurs in between them. Experimental progress on more than one dot has naturally been slower, but theoretical work has shown that optical manipulation of strongly interacting quantum dot molecules can lead to spin entanglement~\cite{calarco03} with relatively simple pulses~\cite{nazir04}. A potential drawback of using trion states to manipulate single spins is that the charge configuration of the system can change significantly during a control pulse. This can lead to a strong phonon coupling~\cite{ramsay10} and to decoherence. However, by using chirped pulses or slow amplitude modulation, this decoherence channel can be strongly suppressed~\cite{lovett05, gauger08, gauger08b, roszak05}. \subsection{Single shot measurement} The measurements so far discussed have relied on weak dispersive effects in the case of QDs, or small changes to the fluorescence over a limited time window in the case of NV$^-$ centres. A full measurement of a single spin is a rather slow process, and each of the experiments we have described prepares and measures the spin many times over to build up enough statistics to prove the correlation between the initialized state and the measured state. For most quantum computing applications, this will not be enough. Rather, high-fidelity single-shot determination of an unknown spin is required, and for this a much stronger spin-photon coupling must be engineered and exploited. The first step towards this ideal in QDs was a demonstration that spins can be measured at resonance. Under this condition, the effect of a QD on a photon is greater than it can be in a detuned, dispersive set-up. If a two-level system is driven resonantly, and the coupling between photon and exciton is stronger than the exciton decoherence rate, then the eigenstates of the system become polaritons (states with both photon and exciton character).
Moreover, the emission spectrum changes from a single Lorentzian to a `Mollow' triplet~\cite{mollow69}, with two sidebands split from the central peak by the QD-photon coupling.\footnote{The triplet arises from a splitting of both upper and lower levels into doublets; a triplet results since two of the transitions are degenerate. If the transition to a third level from one of the two doublets is probed, two distinct lines are indeed observed~\cite{xu07a, boyle09}.} It is a significant experimental challenge to observe the Mollow triplet, since the sidebands can be obscured by Rayleigh scattering of the incident light beam. However, Flagg {\it et al.}~\cite{flagg09} showed that it is possible, by sandwiching the QD between two distributed Bragg reflector mirrors. Laser light is coupled into the layer between the mirrors using an optical fibre, and confined there by total internal reflection. The emission is then observed in a perpendicular direction to reduce any interference from the excitation beam. This work was soon followed by the observation of a `quintuplet' by another group~\cite{vamivakas09}, in which the sidebands are spin-split in an applied magnetic field. However, this kind of measurement is not immune to spin-flip Raman processes, which still occur at a faster rate than the readout can be performed. In a recent demonstration of single-shot optical measurement~\cite{atature10}, two QDs are used so that the initial spin and readout exciton are spatially located in different dots. The readout process then proceeds through a trion consisting of two electrons with parallel spins and a heavy hole, and from this state electron spin flips through Raman processes are not possible. In NV$^-$ centres, the emphasis has been on increasing the efficiency of optical coupling, for example using microcavities based on silicon microspheres~\cite{larsson09} or on-chip designs using GaP~\cite{barclay09}. Other approaches include the use of nanostructured diamond itself for optical confinement, for example diamond nanowires~\cite{babinec10}, or using photonic crystal cavities which can be deterministically positioned to maximise coupling with the NV$^-$ centre~\cite{englund10}. \subsection{Single-photon coupling and entanglement} A more ambitious goal is to achieve single-shot measurement by detecting a single photon. If this could be achieved, it would permit the creation of spin entanglement between two matter-based electron spins placed at distant locations~\cite{benjamin09}. We will describe how this could be done using QDs (see Fig.~\ref{mbqc}); similar schemes have been proposed for NV$^-$ centres~\cite{barrett05a}. Imagine two QDs in different places with the kind of spin selectivity discussed above, and that polarized light is used to drive just one of the two transitions in both QDs. Each QD is first initialised in the state $(\ket{\uparrow}+\ket{\downarrow})/\sqrt{2}$ so that following optical excitation we have the state \begin{equation} \ket{\Psi} = \frac{1}{2}\left(\ket{\downarrow \downarrow} + \ket{\downarrow T_h^\uparrow} + \ket{T_h^\uparrow \downarrow} + \ket{T_h^\uparrow T_h^\uparrow}\right). \end{equation} This will eventually decay back to the ground state, with the emission of either no photons (corresponding to the first term on the RHS), one photon (second and third terms), or two photons (last term). The QDs are placed inside cavities such that the emission is almost always in the directions shown.
If the detectors can discriminate between different photon numbers, but they register just one photon between them, then the system is projected into the state corresponding to the decay products of the second and third terms. Importantly, we cannot tell which one, since each photon passes through a 50:50 beamsplitter before being detected, and if the QDs are identical this erases any information about which path the photon takes before the beamsplitter. We are therefore projected into an entangled state of the form $(\ket{\uparrow\downarrow} + \exp(i\phi)\ket{\downarrow\uparrow})/\sqrt{2}$, with the phase factor determined by which detector fires. Several theoretical ideas for entanglement creation along these lines have been put forward over the last few years~\cite{cabrillo99, bose99, barrett05a, lim05, kolli06} and aspects have been demonstrated in ion traps and atomic ensembles~\cite{moehring07, laurat07}. The most important feature of this kind of entanglement creation is that it is heralded: we know when our operation works and when it fails simply by counting detected photons. By performing successful entangling measurements on adjacent pairs of spins, a large entangled resource can be built up. NV$^-$ centres possess a local, coupled nuclear spin which is immune to the optical excitation performed while attempting to create electron spin entanglement --- this allows a broker-client approach to efficiently building up larger entangled states~\cite{benjamin06}. Any quantum algorithm can be performed using such a resource simply by making single-qubit measurements~\cite{raussendorf01, kok10}. \begin{figure}[t] \footnotesize \begin{psfrags} \psfrag{a}{(a)} \psfrag{b}{(b)} \psfrag{p}{$D_1$} \psfrag{q}{$D_2$} \psfrag{0}{$\ket{\downarrow}$} \psfrag{1}{$\ket{\uparrow}$} \psfrag{f}{$\ket{T_h^\uparrow}$} \psfrag{e}{$\ket{T_h^\downarrow}$} \psfrag{x}{$\omega$} {\includegraphics[width=\columnwidth]{./Figs/mbqc.pdf}} \end{psfrags} \caption{(a) Schematic diagram showing the experimental setup needed for the creation of entanglement by measurement. Two QDs are embedded in cavities to enhance emission into the modes shown. Two detectors $D_1$ and $D_2$ are placed downstream of a beamsplitter; detection of one photon heralds an entangled spin state. (b) Required level structure of the QD. [Figure adapted and reprinted with permission from \cite{kok10}.]} \label{mbqc} \end{figure} Controlled entanglement of a single spin and a single photon would represent the ultimate interface of the two degrees of freedom~\cite{yao05}. Such static-to-flying qubit interconversion would allow for secure quantum communication~\cite{divincenzo00} and the construction of truly quantum networks~\cite{cirac97}. Certain key aspects, such as strong dot-cavity coupling~\cite{yoshie04, reithmaier04,hennessy07}, have been experimentally demonstrated, and recently (non-deterministic) entanglement between a single spin and a photon was detected in the NV$^-$ system~\cite{togan10}. \section{Electron spin - charge coupling} \label{sec:charge} \subsection{EDMR} The relationship between electrical conductivity and the spin states of conduction and localised electrons within a material has been exploited for some time in the technique of electrically detected magnetic resonance (EDMR)~\cite{schmidt66,lepine72, brandt04}.
Although the effect is often very weak --- only a small fraction of a percent change in conductivity --- it offers an opportunity to detect and measure small numbers of electron spins ($<100$) by measuring the conductivity of nanostructured devices~\cite{mccamey06}. Three primary mechanisms for EDMR have been used to measure electron spins in semiconductors: spin-dependent scattering~\cite{ghosh92, lo07}, spin-dependent recombination~\cite{mccamey06} and spin-dependent tunnelling~\cite{ryan09}. Spin-dependent scattering is observed in MOSFET structures, where the scattering cross-section of a 2DEG electron and a bound donor electron depends on their relative spin orientation (i.e.\ singlet vs.\ triplet)~\cite{ghosh92}. Spin-dependent recombination involves first optically generating charge carriers in the material, which then recombine via charge traps. Commonly, this technique involves \emph{two} species of charge trap, such as a dangling-bond P$_{\rm b0}$ centre in silicon coupled, for example, to a P-donor~\cite{stegner06}. If the two trapped charges are in the singlet (vs the triplet) state, the P-donor will transfer its electron to the P$_{\rm b0}$ centre, and subsequently capture an electron from the conduction band. Recombination then takes place between the P$_{\rm b0}^-$ state and a hole, such that the overall process reduces the carrier concentration and thus the conductivity of the device. An alternative is to use just one charge trap, such as the P-donor itself, which ionises to the $D^-$ state when trapping a conduction electron~\cite{morley08}. Trapping is only possible when the conduction spin and trap spin are in a singlet state, which leads to a measurable change in the recombination rate if both spins are highly polarised, requiring high magnetic fields and low temperatures (e.g.\ $>3$~T and $<4$~K). This has the advantage of being able to measure donor spins that need not be coupled to interface defects, which otherwise lead to shorter relaxation times~\cite{paik10}. Various approaches have been proposed and explored to extend these ideas to the single-spin level --- in most cases this involves scaling down the devices used so that the behaviour of a single donor has a non-negligible impact on the conductivity of the device~\cite{beveren08, lo09}. It is also possible to use the donor nuclear spin as an ancilla for repetitive measurement to enhance the sensitivity~\cite{sarovar08}. Although these methods could ultimately be used to measure a single spin state, they do so using a large number of electronic charges --- analogous to the way spins can be measured through an optical fluorescence or polarisation change detected using a large number of photons. We now examine how the state of a single charge can be coupled to an electron spin, offering a powerful electrical method for initializing, measuring and manipulating single spins. \subsection{Single charge experiments} As discussed in Section~\ref{subsec:qds}, the charge state of lithographically defined quantum dots can be controlled by applying voltages to gate electrodes, and the isolation of just a single electron in a QD was achieved many years ago (see Ref.~\cite{kouwenhoven01} for a detailed review of achievements in QD electron control in the 1990s). In the last decade, experiments have shown the conversion of information carried by single spins into measurable single-charge effects, so-called \emph{spin-to-charge conversion}.
Several excellent reviews of this topic already exist~\cite{hanson07, hanson08}, and so here we will touch on a few key early results, and then focus on some more recent achievements. An early measurement of single spin dynamics that used a form of spin-to-charge conversion was achieved by Fujisawa {\it et al.} in 2002~\cite{fujisawa02}. They demonstrated that a single QD could be filled with one or two electrons by applying a particular voltage to the QD gate electrode, states which they termed artificial hydrogen and helium atoms. In the two-electron case, the authors demonstrated that the number of charges escaping from the QD can be made proportional to the triplet population, and in this way triplet-to-singlet relaxation times were probed. Single-shot readout of a single spin was achieved two years later~\cite{elzerman04}, using a single gate-defined QD in a magnetic field, tunnel-coupled to a reservoir at one side and with a quantum point contact (QPC) at the other side. A QPC is a one-dimensional channel for charge transport, whose conductance rises in discrete steps as a function of local voltage~\cite{wharam88, wees88}. By tuning the QPC to the edge of one of these steps, a conductance measurement can act as a sensitive charge detector. The QD is first loaded with a single electron by raising the potential of the plunger gate, which transfers a random spin from the reservoir. After a controlled waiting time, the voltage is lowered to a level such that the two Zeeman-split spin levels straddle the Fermi surface of the reservoir. It is therefore only possible for the spin-up state to tunnel out, an event that can be picked up by the QPC. Such spin-dependent tunnelling lies at the heart of almost all spin-to-charge conversion measurements in these kinds of systems. By repeating the procedure many times, and for different waiting times, it is possible to measure the electron spin relaxation ($T_1$) time. Coherence times ($T_2$) of electrons can be probed using a similar single-shot read-out technique; Petta {\it et al.}~\cite{petta05} did this using a double QD loaded with two electrons. Depending on the relative potentials of the two dots, the electrons can either be on different dots (a configuration labelled (1,1)) or on the same dot (2,0). A nearby QPC is sensitive to charge on one of the two dots and so can distinguish these two situations. Importantly, for (2,0) the electrons must be in the singlet state, due to Pauli's exclusion principle. This provides a means of initializing and reading out (1,1) states: a triplet (1,1) will not tunnel to (2,0) when the dot detuning is changed to make (2,0) the ground state, but a singlet will. In this way, the dynamics of a qubit defined by the singlet and the triplet state with zero spin projection can be probed. Since these two states differ only by a phase, this leads to a measurement of the dephasing $T_2$ time, which is found to be primarily dependent on the interactions with the nuclear spins of the Ga and As atoms. Though it is possible to remove the effects of static nuclear spins through rephasing techniques, nuclear spin fluctuations limit $T_2$ to the microsecond regime. More recent achievements have focussed on the manipulation of electron spins using ESR, permitting the observation of Rabi oscillations. For example, in~\cite{koppens06}, an oscillating magnetic field is applied to a two-QD system by applying a radio-frequency (RF) signal to an on-chip coplanar stripline.
The resulting Rabi oscillations can be picked up by measuring the current through the device following the RF pulse; Pauli's exclusion principle means that the current is maximized for antiparallel spins. In an alternative approach, a magnetic field gradient was applied across a two-QD device using a ferromagnetic strip, so that the field strength depends on the equilibrium position of the electron charge~\cite{pioroladriere08}. The application of an oscillating electric field induces an oscillation of the electron charge --- the electron spin then `feels' a periodic modulation of the magnetic field. If the modulation is at a frequency corresponding to the spin resonance condition, then Rabi oscillations occur. The last two or three years have seen a remarkable improvement in our understanding and control of the primary electron spin decoherence mechanism: interactions with the GaAs nuclear spin bath. For example, the nuclear spin bath can be polarized by transferring angular momentum from the electron spins~\cite{reilly08}, resulting in a narrower field distribution. The nuclear spin field can also adjust itself so that the condition for electron spin resonance is satisfied for a range of applied fields~\cite{vink09}; this may also result in a reduction in electron spin dephasing by narrowing the nuclear field distribution. Remarkably, fast single-shot readout of an electron spin has permitted the static nuclear field to be measured and tracked through the real-time impact it has on the singlet-triplet qubit discussed above~\cite{barthel09}. Removing nuclear spins altogether is also possible, for example using QDs in silicon- or carbon-based devices such as those fabricated using graphene or carbon nanotubes~\cite{biercuk05}. Single spin-to-charge conversion has now been demonstrated in silicon~\cite{morello10} (as described in \Fig{morello}), while the demonstration of the coherent exchange of electrons between two donors~\cite{verduijn10} may provide a route to generating spin entanglement. \begin{figure}[t] {\includegraphics[width=\columnwidth]{./Figs/morello.pdf}} \caption{Single-shot single spin to charge conversion in silicon using a MOS single electron transistor (SET). A) P-donors are implanted within the vicinity of the SET island, underneath a plunger gate $V_{pl}$ which controls their potential with respect to the SET. Statistically, one expects $\sim3$ donors to be sufficiently tunnel-coupled to the SET. B) An electron can be loaded onto the donor by making the chemical potential $\mu_{\rm donor}$ much lower than that of the SET island $\mu_{\rm SET}$. Placing $\mu_{\rm SET}$ in between the spin-up and spin-down levels of the donor enables spin-dependent tunnelling of the donor electron back onto the SET island. C) The electron spin relaxation time $T_1$~is measured by loading an electron (with a random spin state) onto the donor, followed by a measurement of the spin some variable time later. The $B^5$ magnetic field dependence, and the absolute values of the $T_1$~measured, are consistent with a P-donor electron spin (S. Simmons, R. M. Brown, H. Riemann, N. V. Abrosimov, P. Becker, et al., unpublished observation).
(Reprinted from Reference~\cite{morello10} with permission from Macmillan Publishers Ltd.)} \label{morello} \end{figure} \section{Electron spin and superconducting qubits} Qubits based on charge-, flux-, or phase-states in superconducting circuits can be readily fabricated using standard lithographic techniques and afford an impressive degree of quantum control~\cite{clarke08}. Multiple superconducting qubits can be coupled together through their mutual interaction with on-chip microwave stripline resonators~\cite{niskanen07}. These methods have been used to demonstrate the violation of Bell's inequalities~\cite{ansmann09} and to create three-qubit entangled states~\cite{neeley10,dicarlo10}. The weakness of superconducting qubits remains their short decoherence times, typically limited to a few microseconds depending on the species of qubit~\cite{clarke08}; for example, coherence times in the three-qubit device of Ref.~\cite{dicarlo10} were limited to less than a microsecond. Electron spins, in contrast, can have coherence times up to 0.6~s in the solid state, and typically operate at microwave frequencies similar to those of superconducting qubits. The possibility of transferring quantum information between superconducting qubits and electron spins is therefore attractive. Although it is possible to convert a superconducting qubit state into a cavity microwave photon~\cite{wallraff04}, achieving a useful strong coupling between such a photon and a single electron spin appears beyond the capabilities of current technology\footnote{A superconducting cavity could be used for single-spin ESR (see Box: Single-spin electron spin resonance); however, this application places fewer demands on the coupling and cavity/spin linewidths than a quantum memory}. Instead, the spin-cavity coupling (which scales as $\sqrt{N}$ for $N$ spins) can be enhanced by placing a larger ensemble of spins within the mode volume of the cavity, thus ensuring that a microwave photon in the cavity is absorbed into a collective excited state of the ensemble. The apparent wastefulness of resources (one photon stored in many spins) can be overcome by using holographic techniques to store multiple excitations in orthogonal states within the ensemble~\cite{wesenberg09}, as has been explored extensively in optical quantum memories~\cite{reim10}. A key step in demonstrating a solid-state microwave photon quantum memory based on electron spins is to demonstrate strong coupling between an electron spin ensemble and single microwave photons in a superconducting resonator. Typically, electron spins are placed in a magnetic field to achieve transition frequencies in the GHz regime; however, such fields can adversely affect the performance of superconducting resonators. One solution is to apply the magnetic field strictly parallel to the plane of the thin-film superconductor --- this has been used to observe coupling to a) the organic radical DPPH deposited over the surface of the resonator; b) paramagnetic Cr$^{3+}$ centres within the ruby substrate on which the resonator is fabricated; and c) substitutional nitrogen defects within a sample of diamond glued onto the top of the niobium resonator (see \Fig{fig:superc}a)~\cite{schuster10}.
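The collective $\sqrt{N}$ enhancement invoked above can be checked with a minimal numerical sketch (an idealised, lossless Tavis-Cummings model restricted to the single-excitation subspace; the single-spin coupling $g$ is an arbitrary unit, not a measured value):

\begin{verbatim}
import numpy as np

# N spins resonantly coupled to one cavity mode, single-excitation
# subspace. Basis: {photon excited, spin 1 excited, ..., spin N}.
def rabi_splitting(N, g=1.0):
    H = np.zeros((N + 1, N + 1))
    H[0, 1:] = g   # photon <-> each spin, coupling g
    H[1:, 0] = g
    E = np.linalg.eigvalsh(H)
    return E.max() - E.min()   # normal-mode splitting = 2*g*sqrt(N)

for N in (1, 4, 16, 64):
    print(N, rabi_splitting(N) / 2)   # -> 1, 2, 4, 8: grows as sqrt(N)
\end{verbatim}

Only the symmetric superposition of spin excitations couples to the cavity, and it does so with strength $g\sqrt{N}$; the remaining $N-1$ dark states are decoupled.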
Another solution is to seek out electron spins which possess microwave transition frequencies in the absence of an applied magnetic field --- indeed this appears the most attractive option, because the superconducting qubits themselves (which are generally made of aluminium) are unlikely to survive even modest magnetic fields. Thanks to their zero-field splitting of $\sim3$~GHz, NV$^-$ centres in diamond have been used to observe strong coupling to a frequency-tuneable superconducting resonator in the absence of an appreciable magnetic field ($B<3$~mT)~\cite{kubo10} (see \Fig{fig:superc}b). The ability to rapidly tune the resonator into resonance with the spin ensemble should enable the observation of coherent oscillations between the two, and ultimately the storage of a single microwave photon in the spin ensemble. \begin{figure}[t] {\includegraphics[width=\columnwidth]{./Figs/superccoupling.pdf}} \caption{(A) The transmission spectrum of a superconducting stripline microwave resonator, fabricated on a ruby substrate, as a function of magnetic field. A coupling strength of 38~MHz is observed between the resonator and $\sim10^{12}$ Cr$^{3+}$ spins ($S=3/2$) located in the substrate, within the mode volume of the resonator. (B) The transmission $|S_{21}|$ of a resonator with a diamond on top shows two anti-crossings arising from the two $\Delta m_s=1$ transitions of the NV$^-$ centre, whose frequencies shift in opposite directions under a weak applied magnetic field $B_{\rm NV}$. The resonator itself incorporates a SQUID array, allowing frequency tuning through a locally applied magnetic flux $\Phi$. Panels (A) and (B) reprinted with permission from References~\cite{schuster10} and \cite{kubo10}, respectively, Copyright (2010) by the American Physical Society.} \label{fig:superc} \end{figure} An alternative approach is to couple an electron spin, or spins, directly to the superconducting qubit, as proposed by Marcos {\it et al.}~\cite{marcos10}, or similarly to boost the coupling of a superconducting resonator to a single spin through the use of a persistent current loop~\cite{twamley10}. Having stored the superconducting qubit state within a collective state of electron spins, it is then possible to transfer the state into a collective state of nuclear spins~\cite{wu09}, using the coherence transfer methods described in \Sec{sec:nucspins}, thus benefitting from the even longer nuclear coherence times.\\ \\ \begin{tabular}{|p{13cm}|} \hline {\bf Single-spin electron spin resonance}\\ Some of the approaches described here could be extended to enable general single-spin ESR.\\ {\bf Cavities} - The greatly reduced mode volume and high Q-factor of nanoscale superconducting resonators should lead to sufficient coupling between a single spin and the microwave field to permit single-spin ESR. Typical values of $Q \sim10^5$ and a cavity of volume 1~fL~(1~$\mu$m$^3$) and frequency 1~GHz would suggest a single spin will emit a scattered microwave photon every 10~ms~\cite{schuster10}. \\ {\bf Optics} - Single NV$^-$ electron spins can be measured close to the diamond surface and are sensitive to long-range dipolar coupling~\cite{neumann10}, enabling the indirect detection of other electron spins~\cite{maze08,balasubramanian08}.
A nanocrystal of diamond could be mounted on a probe tip which scans over a surface, or the spins of interest could be deposited on a suitable diamond substrate.\\ {\bf Charge} - Transport measurements of a carbon nanotube-based double QD device have shown the signature of coupling between a QD and a nearby electron spin~\cite{chorley10}. Carbon nanotubes can be controllably activated and functionalised with other molecular species~\cite{rodriguez09}. Combining such structures with magnetic resonance techniques could permit single-spin detection of an attached electron spin~\cite{wabnig09, giavaras10}.\\ \hline \end{tabular} \section{Summary and outlook} In this review, we have attempted to look systematically at the coupling of electron spin to other degrees of freedom capable of hosting quantum information: nuclear spin, charge, optical photons, and superconducting qubits. We have not been able to cover all possibilities. For example, we have not mentioned electron spins coupled to mechanical states, which are explored in a recent proposal~\cite{rabl10}. In many cases, substantial benefit arises from bringing together more than one of the hybridising schemes discussed in this article. For example, in NV$^-$ centres, one can combine three degrees of freedom: the optical electronic transition, the electron spin, and the nuclear spin of a nearby $^{13}$C. Alternatively, a hybrid electrical and optical method for measuring single electron or nuclear spin states of P-donors in silicon has been demonstrated, which exploits the ability to selectively optically excite a bound exciton state of the P-donor, together with a nearby QPC~\cite{sleiter10}. Nevertheless, it is clear that electron spins play an essential role in a wide range of proposals for hybridising quantum information in the solid state. This can be attributed to their balance of portability, long-lived coherent behaviour, and versatility in coupling to many varied degrees of freedom. \subsection{Summary Points} \begin{enumerate} \item A versatile qubit can be represented in the electron spin of quantum dots, impurities in solids, organic molecules, and free electrons in the solid state. \item A coupled electron spin can provide many advantages to a nuclear spin qubit, including high-fidelity initialisation, manipulation on the nanosecond timescale, and more sensitive measurement. \item The state of an electron spin can be coherently stored in a coupled nuclear spin, offering coherence times exceeding seconds ({\bf \Fig{fig:qmemory}}). \item The interaction of an electron spin with many optical photons can be used for single spin measurement ({\bf \Fig{atature1fig}}), and manipulation on the picosecond timescale, while coupling to a single photon could be used to generate entanglement between two macroscopically separated electron spins ({\bf \Fig{mbqc}}). \item The electron spin of lithographically defined quantum dots can be measured through spin-dependent tunnelling between and off dots, and the use of a local quantum point contact. \item A similar technique can be applied to the electron spin of a donor in silicon, the important difference being that the electron tunnels off the donor directly onto a single-electron transistor island, offering high-fidelity single-shot measurement ({\bf \Fig{morello}}). \item Electron spin could be used to store the state of a superconducting qubit.
An important step towards achieving this has been demonstrated in the observation of strong coupling between an electron spin ensemble and a superconducting microwave stripline resonator ({\bf \Fig{fig:superc}}). \end{enumerate}
\section{Conclusion} \label{sec:conclusion} We have presented two approaches to conditional density estimation that are fully nonparametric, scalable, make no independence assumptions, and leverage the underlying topological structure of the variables. Our first approach, Multiscale Nets, effectively morphs density estimation into hierarchical classification, and it is designed to be a maximally flexible model for situations with a favorable ratio of the number of samples to the feature count. Our second approach, CDE trend filtering (TF), couples a multinomial model (like option 1 above) with a trend filtering penalty to smooth the underlying density estimate via an additional regularization term. This second approach works best in situations where data is sparser as a function of the number of features. In each case, the features are mapped to raw logits---binomial in the multiscale case, multinomial in the CDE TF case---via an appropriate neural network. We presented extensive evidence that each of these two methods is superior to Gaussian mixture models (the current state of the art) in its appropriate domain. \section{Experiments} \label{sec:experiments} \subsection{Setup} \label{sec:setup} We evaluate our two approaches against the most common CDE architectures for neural networks: plain multinomial classifiers and mixture density networks (MDNs) \citep{bishop:mdn:1994}. The latter approach corresponds to having a neural network output the parameters of a Gaussian mixture model (GMM). Despite being more than two decades old, MDNs are still often the best-performing conditional density estimator \citep{sugiyama2010conditional} and have had recent success with deep architectures \citep{zen2014deep}. Most work on MDNs either assumes independence of the variables \citep[as in, e.g.][]{zen2014deep} or deals only with univariate densities. Modeling the joint density over variables with MDNs is in general much more difficult, as it requires outputting a positive semi-definite covariance matrix. We implement such a model by having it output the lower triangular entries in the Cholesky decomposition of the covariance matrix for each mixture component \citep[see][for more details on this approach]{lopes2011cholesky}. In part due to the difficulty of constructing multi-dimensional MDNs, even recent work that requires deep, multidimensional conditional density estimation \citep[e.g.][]{zhang2016colorful} resorts to using simple multinomial grids. We therefore compare against both methods as reasonable baselines; we also provide a point estimate model for RMSE comparisons. In all of our experiments, we focus on predicting discrete densities. As such, all target variables are discretized on an evenly-spaced grid spanning their empirical range. For the one-dimensional targets, we use a grid of $32$ bins for the synthetic experiment and $128$ bins for the S-class dataset; for the two-dimensional targets, we use a $32\times32$ lattice. Performance is measured in terms of both log-probability of the test set and root mean squared error (RMSE) of each model's point estimate. For all experiments, we select the best trend filtering model via a grid search over $(\lambda, k)$ pairs, chosen by log-probability on the validation set; we conduct a similar search for the number of GMM mixture components in the real-world datasets.
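To make the Cholesky parameterization above concrete, the following sketch (illustrative only; the shapes and the choice of an exponential transform on the diagonal are our assumptions, not necessarily the exact implementation) maps a vector of unconstrained network outputs to a valid covariance matrix:

\begin{verbatim}
import numpy as np

def outputs_to_covariance(raw, d):
    # Map d*(d+1)/2 unconstrained network outputs to a valid covariance
    # matrix via its Cholesky factor L, with Sigma = L @ L.T.
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = raw
    # Exponentiate the diagonal so it is strictly positive, which makes
    # the factorization unique and Sigma positive definite.
    L[np.diag_indices(d)] = np.exp(L[np.diag_indices(d)])
    return L @ L.T

raw = np.random.randn(3)   # d = 2 target -> 3 lower-triangular entries
Sigma = outputs_to_covariance(raw, 2)
print(np.linalg.eigvalsh(Sigma))   # all eigenvalues positive
\end{verbatim}

Each mixture component contributes one such factor, alongside its mean vector and mixture weight.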
All real-world datasets are randomly split into 80\%/10\%/10\% train/validation/test samples, with results averaged over five, five, and ten independent trials for the Parkinson's, rental rates, and Mercedes datasets, respectively. \subsection{Synthetic experiment: MNIST Distributions} \label{sec:mnist} The MNIST dataset is a well-known benchmark classification task of mapping a $28\times28$ grayscale handwritten digit to its corresponding digit class. We modify this dataset by mapping each digit class to a randomly-generated, discretized, three-component Gaussian mixture model. The digit labels are then replaced with a random draw from this density. From an investigatory perspective, this dataset is ideal for demonstrating sample efficiency, since convolutional neural networks are known to perform with over $99\%$ accuracy in the classification setting. Figure \ref{fig:mnist_tv} shows how the performance of each model improves with the number of samples. Note that at 500 samples, the model is seeing just over 1 example per bin per class, on average. In these scenarios, making some sort of assumption about the underlying conditional distribution is necessary. The adaptive piecewise-polynomial assumption made by the trend filtering method is clearly effective at fitting such multi-modal mixtures when the sample size is small. Interestingly, the trend filtering method also strongly outperforms the GMM model, despite the GMM being parameterized with the same number of components as the underlying ground-truth mixture. One possible reason for this, as we see in the real-world experiments, is the difficulty of finding a good fit for a GMM without overfitting in the small-sample case. \begin{figure} \includegraphics[width=0.9\textwidth]{figures/tv-mnist.pdf} \caption{\label{fig:mnist_tv} Performance of each CDE model on the synthetic MNIST-Distributions dataset, as a function of the number of training samples. The trend filtering method strongly outperforms the other models in the low-sample regimes.} \end{figure} \begin{table} \begin{center} \begin{tabular}{|l|rr|rr|rr|} \hline & \multicolumn{2}{|c|}{Parkinson's Scores} & \multicolumn{2}{|c|}{Mercedes Prices} & \multicolumn{2}{|c|}{Rental Rates} \\ Model & Log(Prob) & RMSE & Log(Prob) & RMSE & Log(Prob) & RMSE \\\hline Point estimate & N/A & 9.92 & N/A & 3.60 & N/A & 4.76 \\ Multinomial & -7.00 & 9.93 & -2.35 & 3.87 & -3.87 & 4.71 \\ GMM (MDN) & -6.50 & 9.68 & -2.30 & 3.64 & -4.63 & 4.84 \\ Multiscale & -6.62 & \textbf{9.41} & \textbf{-2.21} & \textbf{3.46} & \textbf{-3.82} & \textbf{4.61} \\ Trend Filtering & \textbf{-6.16} & 9.48 & -2.34 & 3.90 & -3.90 & 4.79 \\\hline \end{tabular} \caption{\label{tab:results} Performance of each CDE model on the three real-world datasets. For each dataset, we provide both the log-probability of the hold-out data and the RMSE of the point estimate. The latter shows that, surprisingly, estimating the full conditional density is either free or even beneficial compared to a point estimate model.} \end{center} \end{table} \subsection{Parkinson's Disease Telemonitoring} \label{sec:parkinsons} The Parkinson's Telemonitoring dataset \citep{dataset:parkinsons} consists of biomedical voice measurements from 42 people with early-stage Parkinson's disease. The goal is to predict the motor and total Unified Parkinson's Disease Rating Scale (UPDRS) scores, which are highly correlated for each patient, but can exhibit stark discontinuities and multimodalities.
Each patient appears in the dataset approximately 200 times; each appearance corresponds to an example with an indicator variable specifying which patient is speaking, as well as 18 other real-valued features. After discretizing the two scores, the resulting problem is thus very similar to the low-sample scenario from Section~\ref{sec:mnist}. The first column in Table \ref{tab:results} confirms that the situation is similar, with the trend filtering model substantially outperforming the other methods. We also note that both the multiscale and trend filtering methods have RMSE scores that strongly outperform a baseline point estimate model. Thus, even in the case of modeling point predictions, it is actually beneficial to model the entire joint density. This result is both surprising and promising, as we are effectively seeing a free lunch: improved point prediction that comes with predicted error bars. \subsection{Mercedes S-class Sale Prices} \label{sec:sclass} The Mercedes dataset consists of sale prices for approximately 30K used Mercedes S-class sedans and fourteen features relating to the car. In contrast to the two previous experiments, this dataset is in a much higher sample-to-feature regime. Column 2 of Table \ref{tab:results} indicates that under this regime, the trend filtering smoothing is unnecessary, as it performs nearly identically to the multinomial model. Instead, the multiscale method is now the clear best choice, outperforming all other methods in both categories. \subsection{Real Estate Rentals} \label{sec:realestate} The final benchmark dataset covers approximately 8K real estate rentals, with 14 features per property. The goal is to estimate the joint conditional density of rental price and occupancy rate. The results of the CDE models are in the third column of Table \ref{tab:results}. Similarly to the Mercedes dataset, the rental rates dataset contains few features relative to its sample size, and thus the multiscale method performs well. We note also that this is the first dataset that is both difficult to overfit and multidimensional. In this scenario, the mixture density network substantially underfits, likely due to the difficulty of accurately estimating the covariance matrices in a mixture of multivariate normals. \section{Introduction} \label{sec:introduction} In the last decade, deep neural networks have been at the core of many state-of-the-art machine learning systems due to their exceptional ability to learn complicated, non-linear functions of large dimension. When employed to solve real- and ordinal-valued regression problems, almost invariably such networks are trained to produce a point estimate. But often an interval estimate (i.e.~a prediction interval) is necessary. One na\"ive approach is to simply base a predictive error bar on the root mean-squared error of the network. But this is rarely sensible in practice: the \textit{conditional} predictive uncertainty of the network is likely to depend strongly on the features used to train the model. In the statistics literature, this is referred to as \textit{conditional heteroskedasticity}: the variance of the model residuals is itself a function of the features. There is a pressing need for methods which produce sensible interval predictions from deep nets. If a user wishes instead to infer a conditional density rather than a point estimate, their options are typically one of the following.
\begin{enumerate} \item Discretize the variable and model it using a multinomial classifier. While this is fast, it destroys the underlying topological structure of the variable's underlying space by making each bin independent. It therefore leads to ``lumpy'' density estimates and reduces sample efficiency due to high variance. \item Make a parametric assumption about the form of the conditional density, such as a fixed-size Gaussian mixture model \citep[also called Mixture Density Networks, see][]{bishop:mdn:1994} or Gaussian-Pareto mixtures \citep{carreau2009hybrid}, and build a model for the conditional parameters of that parametric distribution. As the dimensionality of the target variable increases, this may require making independence assumptions in order to keep the covariance estimation tractable. \item Add dropout at inference time \citep{gal2015dropout}. This works well for measuring uncertainty about one's point estimate. But sampling uncertainty about a maximum likelihood point estimate is not the same as modeling the distribution of outcomes; the latter is typically much wider. \item Use a Bayesian deep learning framework. While much work in this area is just emerging \citep[e.g.][]{pu2015deep}, many of the existing architectures, such as LSTMs, do not yet have a Bayesian interpretation. Furthermore, posterior inference on such models can be prohibitively expensive in the case where billions of evaluations must be performed, as in reinforcement learning contexts, for example. \end{enumerate} Thus all four of the above options are lacking in some crucial way that prevents them from being used in practice. In this paper we seek to overcome these issues by presenting two approaches to conditional density estimation that are nonparametric, scalable, make no independence assumptions, and leverage the underlying topological structure of the variables. Our first approach, Multiscale Nets, decomposes the density into a series of half-spaces via a dyadic decomposition. This is the more flexible of our two models; it essentially turns density estimation into hierarchical classification, and it is designed to be a maximally flexible model for situations with a favorable ratio of the number of samples to the feature-set size. Our second approach, CDE trend filtering (TF), couples a multinomial model (like option 1 above) with a trend-filtering penalty to introduce smoothness in the underlying density estimate. Because this incorporates additional regularization compared to the multiscale case, we envision it as a better approach in situations where data is sparser as a function of the number of features. In each case, the features are mapped to raw logits---binomial in the multiscale case, multinomial in the CDE TF case---via an appropriate neural network. This paper presents extensive evidence that each of these two methods is superior to Gaussian mixture models (the current state of the art) in its appropriate domain. \section{Multiscale nets} \label{sec:multiscale} \begin{figure} \includegraphics[width=0.9\textwidth]{figures/multiscale-decomposition.pdf} \caption{\label{fig:multiscale-decomposition} The multiscale decomposition visualized. Every dimension in the response variable is iteratively divided into half-spaces and every split becomes an output node in the network.
A given example then has $\log_2(p)$ labels for a discrete density with $p$ bins.} \end{figure} \subsection{Dyadic partitions} We use $F$ to denote a probability measure on $B$, $f$ the corresponding density function, and $F(A) = \int_A dF$ the probability of set $A \subset B$. Our approach to conditional density estimation relies upon constructing a recursive dyadic partition of $B$. The level-$k$ partition, denoted $\Pi^{(k)}$, is indexed via a bijection between $\Pi^{(k)}$ and all length-$k$ binary sequences $\gamma \in \{0,1\}^k$, and is constructed as follows. Define the level-$1$ partition as $\Pi^{(1)} = \{B_0, B_1\}$ where $B_0 \cup B_1 = B$ and $B_0 \cap B_1 = \emptyset$. Given the partition at level $k$, the level $k+1$ partition is constructed by specifying, for all $\gamma \in \{0,1\}^k$, a pair $(B_{\gamma 0}, B_{\gamma 1})$ such that $B_{\gamma 0} \cup B_{\gamma 1} = B_{\gamma}$ and $B_{\gamma 0} \cap B_{\gamma 1} = \emptyset$. Here $\gamma0$ (or $\gamma1$) is the new binary sequence defined by appending a 0 (or 1) to the end of $\gamma$. If $\gamma$ is an empty string, then $B_{\gamma}$ is the root node, i.e.~$B$. For example, if $B$ is the unit interval (i.e.~the level-0 partition), the level-$1$ partition could be $\{[0,0.5], (0.5, 1]\}$; the level-$2$ partition could be $$ \Pi^{(2)} = \{[0, 0.25], (0.25, 0.5], (0.5, 0.75], (0.75, 1]\} \, ; $$ and so on. We refer to $B_{\gamma}$ as a parent node, to $B_{\gamma0}$ as the left child, and to $B_{\gamma1}$ as the right child. Suppose that $Y \sim F$ is a draw from $F$. We characterize the probability measure $F$ via the conditional ``splitting'' probabilities $$ w_{\gamma} = P(Y \in B_{\gamma0} \mid Y \in B_{\gamma}) \, ; $$ that is, the probability that $Y$ will fall in the left-child set, given that it falls in the parent set. Because $B_{\gamma0} \subset B_{\gamma}$ and therefore $Y \in B_{\gamma0} \implies Y \in B_{\gamma}$, we have the following representation for $w_{\gamma}$: \begin{eqnarray} P(Y \in B_{\gamma0}) &=& P(Y \in B_{\gamma0}, Y \in B_{\gamma}) \nonumber \\ &=& P(Y \in B_{\gamma0} \mid Y \in B_{\gamma}) \cdot P(Y \in B_{\gamma}) \nonumber \\ &=& w_{\gamma} \cdot P(Y \in B_{\gamma}) \label{eqn:tree_recursion_probability} \, . \end{eqnarray} Thus $w_{\gamma}$ is given by the ratio of probabilities $$ w_{\gamma} \equiv P(Y \in B_{\gamma0}) / P(Y \in B_{\gamma}) = F(B_{\gamma0}) / F(B_{\gamma}) \, . $$ Moreover, suppose we apply Equation (\ref{eqn:tree_recursion_probability}) recursively to itself, i.e.~to $P(Y \in B_{\gamma})$ on the right-hand side, and proceed up the tree until arriving at the root node $B$ (for which $P(Y \in B) = 1$). This allows us to express the probabilities at the terminal nodes of the tree, which form a discrete approximation to the probability density function, as the product of splitting probabilities $w_{\gamma}$ as one traverses up the tree to the root node. \subsection{Incorporating features} We incorporate features as follows. Let $\mathcal{X}$ denote a feature space, and let $F_{x}$ for $x \in \mathcal{X}$ denote a probability measure over $B$ specific to $x$. (We assume that all $F_x$ have the same support.) Our approach to multiscale conditional density estimation is to allow the conditional splitting probabilities in the dyadic partition to depend upon $x$ via the logistic transform of some function $r_\gamma$. Specifically, we let $$ w_{\gamma}(x) = P(Y \in B_{\gamma0} \mid Y \in B_{\gamma}, x) = \frac{ \exp\{r_\gamma(x)\}} {1 + \exp\{r_\gamma(x) \} } \, .
$$ This turns the problem of density estimation into a set of independent classification problems: for every $\gamma$, we learn a function $r_{\gamma}(x)$ that predicts how likely an outcome $y$ falling in the parent node $B_{\gamma}$ is to also fall in the left-child node $B_{\gamma 0}$. \subsection{Related work} There is a significant body of work in statistics on conditional density estimation. Most frequentist work on this subject is based on kernel methods \citep[see, e.g.][and the references contained therein]{bash:hynd:2001}. But traditional kernel methods do poorly at estimating densities which contain both spiky and smooth features, and which require adaptivity to large jumps. Moreover, conditional density estimation using kernel methods requires the estimation of a potentially high-dimensional joint density $p(y,x)$ as a precursor to estimating $p(y \mid x)$. We avoid the difficult task of estimating $p(y,x)$, focusing on $p(y \mid x)$ directly. Multiscale nets essentially aim to treat conditional density estimation as a hierarchical classification problem. A similar approach for one-dimensional CDE has been proposed by \citet{stone:etal:2003} via boosting machines in a manner similar to ordinal regression. However, their approach requires heuristics to deal with a monotonicity requirement in their decomposition bins. In the neural network literature, a very similar technique has been used in neural language models \citep{morin2005hierarchical}. Our dyadic decomposition could equally be made data adaptive, in the case where the depth of the tree is limited, by simply choosing splits via percentiles of the distribution. This device for exploiting the conditional-independence properties of a tree is also used to define a P\'olya-tree prior and other kinds of multiscale methods in nonparametric Bayesian inference \citep{mauldin:sudderth:williams:1992,ma:2014}. Here, a random probability measure $F$ is constructed by assuming that the conditional probability $w_{\gamma} = F(B_{\gamma0}) / F(B_{\gamma})$ is a different beta random variable for each node $\gamma$ in an infinitely deep tree. The parameters of each beta random variable are determined by a concentration parameter $\alpha$ and a base measure $F_0$. It is also similar to multiscale models for Poisson intensity estimation \citep{fryzlewicz:nason:2004,jansen:2006,willett:nowak:2007}. Our approach differs in that we incorporate covariates into the splitting probabilities, and in that we do not work explicitly within the Bayesian formalism (i.e.~by placing a prior over the space of probability measures). \section{CDE Trend Filtering} In this section we define a ``flat'' (i.e.~non-hierarchical) version of a conditional density estimator via neural nets. To do so, we generalize recent advances in trend filtering, a nonparametric method for regression and smoothing over graphs. Graph trend filtering \citep{wang2014trend} minimizes the following objective: \begin{equation} \label{eqn:graph-trend-filtering} \underset{\beta \in \mathbb{R}^n}{\text{minimize}} \quad l(\beta) + \lambda \vnorm{\Delta \beta}_1 \, , \end{equation} where $l$ is a smooth, convex loss function. Here $\Delta$ is the $k^{\text{th}}$-order trend filtering penalty matrix, where the $k=0$ base matrix is the oriented edge matrix encoding the relationship between the elements of $\beta$. The resulting $\ell_1$ regularization term aims to drive the $k^{\text{th}}$-order differences between the $\beta$'s to zero.
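As a concrete illustration of this penalty, the following sketch (assuming \texttt{numpy}) constructs the penalty matrix for a one-dimensional grid of $D$ bins; the alternating-transpose recursion follows the description in Figure~\ref{fig:trend-filtering} below, and conventions for indexing $k$ differ across the literature, so this is one reasonable choice rather than a canonical one.
\begin{verbatim}
import numpy as np

def oriented_edge_matrix(D):
    # k = 0 base matrix: one row per edge of the chain graph
    # on D bins, with entries (-1, +1) at the edge's endpoints.
    Delta0 = np.zeros((D - 1, D))
    for e in range(D - 1):
        Delta0[e, e], Delta0[e, e + 1] = -1.0, 1.0
    return Delta0

def trend_filter_matrix(D, k):
    # k-th order penalty: repeatedly multiply by the base
    # matrix, alternating with its transpose so that the
    # dimensions compose at every step.
    Delta = oriented_edge_matrix(D)
    Delta0 = Delta.copy()
    for step in range(k):
        Delta = (Delta0.T if step % 2 == 0 else Delta0) @ Delta
    return Delta
\end{verbatim}
For a two-dimensional $32\times32$ lattice the same recursion applies, with the base matrix taken to be the oriented edge matrix of the grid graph.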
We define conditional density estimation trend filtering (CDE-TF) as follows. Let $\Pi = (I_1, \ldots, I_D)$ be a set of (possibly multivariate) histogram bins, i.e.~a flat partition of $B$, the support of the underlying probability measure. We use $c_i$ as a bin indicator for the response variable $y_i$: that is, $c_i = j$ if $y_i \in I_j$. In CDE-TF, we model the $c_i$'s directly as categorical random variables, where the probabilities $P(c_i = j \mid x_i)$ depend on features via the softmax function: \begin{equation} \label{eqn:categorical_model} (c_i \mid x_i) \sim \mbox{Categorical}(\eta(x)) \, , \quad \mbox{where} \quad \eta_j(x) = \frac{\exp[\psi_j(x)]} {\sum_{l=1}^D \exp[\psi_l(x)]} \, . \end{equation} To parametrize and regularize the $\psi_j(x)$'s, we combine two approaches: \begin{enumerate} \item We set the $\psi_j(x)$'s to be the output of an appropriate neural network. \item We apply a graph trend-filtering penalty directly to these outputs, by penalizing the quantity $\Vert \Delta \psi(x) \Vert_1$ where $\psi(x)$ is the stacked vector of outputs $\psi_j(x)$ from the network and $\Delta$ is the trend-filtering penalty matrix. Here the graph used to construct $\Delta$ is determined by the adjacency structure of the bins $I_1, \ldots, I_D$. In the vast majority of cases, this graph will simply be a $K$-dimensional grid graph, where $K$ is the dimension of the response vector $y$. \end{enumerate} Thus the objective we are minimizing is \begin{equation} \label{eqn:cde-trend-filtering} {\text{minimize}} \quad \sum_{i} l_i(\psi(x_i)) + \lambda \vnorm{\Delta \psi(x) }_1 \, , \end{equation} where $l_i(\psi(x_i))$ is the contribution to the loss function from Model (\ref{eqn:categorical_model}) for the $i^{\text{th}}$ response, $\psi(x)$ is the network output that returns raw logits, and $x_i$ is the $i^{\text{th}}$ training example. Throughout this paper, we assume that $\psi$ is a neural network, but any differentiable function is acceptable (so that the domain of minimization will be context-dependent). Conceptually, the goal of CDE trend filtering is to bring the representational power of neural networks to the task of density estimation, while simultaneously regularizing the model output to ensure smooth estimated densities, borrowing information across adjacent bins and effecting a favorable bias--variance trade-off. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{figures/trend-filtering.pdf} \caption{\label{fig:trend-filtering} The graph trend filtering penalty visualized. On the left, each bin in the discrete multidimensional density has an edge to its closest neighbor, forming a lattice, which is encoded as the oriented edge matrix on the right. The $k^{\text{th}}$-order trend filtering penalty is then the matrix resulting from multiplying the matrix by itself (or its transpose if on an odd step) $k$ times.} \end{center} \end{figure}
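A minimal sketch of the objective (\ref{eqn:cde-trend-filtering}), again assuming \texttt{numpy} and a penalty matrix \texttt{Delta} of the kind constructed above; in practice the gradient would be handled by whatever framework implements the network $\psi$.
\begin{verbatim}
import numpy as np

def cde_tf_loss(psi, c, Delta, lam):
    # psi: (N, D) raw logits from the network, one row per example;
    # c:   (N,)  integer bin indices of the discretized responses;
    # lam: weight of the trend filtering penalty.
    psi = psi - psi.max(axis=1, keepdims=True)   # stabilize softmax
    log_eta = psi - np.log(np.exp(psi).sum(axis=1, keepdims=True))
    nll = -log_eta[np.arange(len(c)), c].sum()   # categorical loss
    penalty = np.abs(psi @ Delta.T).sum()        # ||Delta psi(x)||_1
    return nll + lam * penalty
\end{verbatim}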
\section{Introduction} \vspace{-3mm} From deep inelastic scattering (DIS) experiments it is known, see e.g. \cite{Devenish}, that the nucleon is not a fundamental particle but consists of so-called partons as its constituents. The contribution of these partons to the nucleon momentum is described by a parton distribution function (PDF) $f_p(x)$, which is the probability to find a parton $p$ with a momentum fraction $x$. The first moment of the PDF $ \langle x \rangle_p = \int x f_p(x) dx$ is the fraction of the total nucleon momentum carried by the parton. This then implies the energy-momentum sum rule $\sum_p \langle x \rangle_p =1$. The partons were eventually identified as the quarks and the gluons as the fundamental building blocks of hadrons. Thus, the energy-momentum sum rule of partons translates directly to a sum rule involving all quarks and the gluons, \begin{align} \sum_q \langle x \rangle_q + \langle x \rangle_g = 1\; . \label{eq:sumrule} \end{align} Further experimental input suggests that the contribution coming only from up- and down quarks does not add up to one \cite{Blumlein:2006be}. Since it is expected that the heavier quarks will not significantly contribute to the average nucleon momentum \cite{Martin:2009iq}, this implies that the gluons carry a large amount of the nucleon momentum, such that the sum rule of eq.~(\ref{eq:sumrule}) is satisfied. Therefore, the computation of the gluon moment is necessary to fully understand the structure of the nucleon. However, at the moment, despite the fact that there are many results for the quark structure of the nucleon, see e.g. refs.~\cite{Alexandrou:2013joa, Alexandrou:2012gz}, there are just a few results for $\langle x \rangle_g$ which are, moreover, only obtained from quenched computations \cite{Horsley:2012pz, Liu:2012nz}. The work presented here aims at giving a first result from a computation on gauge configurations generated with light, strange and charm sea quarks. We can access the gluon moment of a hadron via the matrix elements of the gluon operator: \begin{align} O_{\mu\nu} = -\mbox{tr}_c G_{\mu\rho}G_{\nu\rho}\; . \label{eq:operator} \end{align} The matrix elements of this operator can be computed with a ratio of a three-point and a two-point function, where the sink time $t$ and the operator time $\tau$ are chosen properly. \begin{align} \frac{\langle h(p,t)\mathcal O(\tau)h(p,0)\rangle}{\langle h(p,t)h(p,0) \rangle} \stackrel{0\ll \tau\ll t}= (\mathcal O)_{h(p)h(p)} \label{eq:me} \end{align} where $h(p,t)$ denotes a hadron with momentum $p$ at sink time $t$. The general matrix element of eq.~(\ref{eq:me}) can be decomposed into several terms proportional to appropriate form factors, see e.g. the discussion in ref.~\cite{Alexandrou:2013joa}. The relevant form factor for our purpose is $A_{20}$, which can be related to the gluon moment. In order to proceed, we need to consider certain representations of the operator in eq.~(\ref{eq:operator}). Here we choose two of them \begin{align} \mathcal A_{i}= \mathcal O_{i4},\;\;\mathcal B= \mathcal O_{44} - \frac{1}{3}\mathcal O_{jj}. \end{align} The matrix elements of these operators can be written in terms of the gluon moment as \begin{align} \label{EQN_intro_2} (\mathcal A_i)_{N(p)N(p)} = -i p_i\langle x \rangle_g,\;\;(\mathcal B)_{N(p)N(p)} = (m_N + \frac{2}{3E_N} {\vec{p}}\:^2)\langle x \rangle_g\; . \end{align} Both operators have certain drawbacks. The operator $\mathcal A_i$ can only be used when a non-zero momentum is injected.
It is known that the computation of momentum dependent operator matrix elements results in a larger noise-to-signal ratio than a momentum-zero computation, which is possible for operator $\mathcal B$. In the case of the operator $\mathcal B$ there is a subtraction of two terms which are similar in magnitude. This can be understood from the lattice version of the operator, expressed in terms of plaquettes: \begin{align} \label{EQN_intro_1} \mathcal B(t)= \frac{4}{9}\frac{\beta}{a}\sum_x\left(\sum_i\mbox{tr}_c[U_{i4}(x,t)]-\sum_{i<j}\mbox{tr}_c[U_{ij}(x,t)]\right)\; . \end{align} Here, one sees that the spatial and the temporal part of the plaquette, which are very similar in size, need to be subtracted, again potentially leading to a bad signal-to-noise behavior of the corresponding matrix element. The choice we made for the following discussion is nonetheless the operator $\mathcal B$ since it is directly accessible to us. \section{Feynman-Hellman theorem} \vspace{-3mm} One approach to extract the matrix elements of the gluon operator that was applied in \cite{Horsley:2012pz} uses the Euclidean form of the Feynman-Hellman theorem. If one introduces some operator $\lambda \mathcal O$ into the action of the system, the operator's matrix elements can be derived from the derivative of the energy of the state with respect to $\lambda$: \begin{align} \label{EQN_Feynman_1} \frac{\partial E_N(\lambda)}{\partial\lambda} = ( :\frac{\partial \hat{S}(\lambda)}{\partial \lambda}:)_{N(p)N(p),\lambda}\; . \end{align} Here $:...:$ means that the vacuum expectation value of the operator has to be subtracted. For the purpose of calculating the three-point function for the gluon operator we modify the Wilson gauge action as \begin{align} S(\lambda) = \frac{1}{3}\beta(1+\lambda)\sum_i\mbox{tr}_c[1-U_{i4}]+\frac{1}{3}\beta(1-\lambda)\sum_{i<j}\mbox{tr}_c[1-U_{ij}]\; . \end{align} Note that $\lambda=0$ corresponds to the standard Wilson plaquette gauge action. When applying eqs.~(\ref{EQN_intro_2}), (\ref{EQN_intro_1}) and (\ref{EQN_Feynman_1}) one can relate the derivative of the nucleon energy to $\langle x \rangle_g$: \begin{align} \frac{\partial E_N}{\partial\lambda} {\big |_{\lambda = 0}} = - \frac{3}{2}\left(m_N+\frac{2}{3E_N}\vec{p}\:^2\right)\langle x \rangle_g\; . \end{align} There is no subtraction of the vacuum expectation value here because, utilizing lattice rotational symmetry, it can be shown that the expectation value of the operator in eq.~(\ref{EQN_intro_1}) is zero. When computing the nucleon mass at zero momentum, the relation can be simplified as: \begin{align} \langle x \rangle_g = \frac{2}{3m_N}\frac{\partial m_N}{\partial \lambda}{\big |_{\lambda=0}}\; . \end{align} In order to compute the nucleon mass for different, non-zero $\lambda$ values, new gauge ensembles had to be generated. In addition, due to the change of the gauge action, the hopping parameter $\kappa$ had to be re-tuned to its critical value for each ensemble, in order to regain the $\mathcal{O}(a)$ improvement. We have performed preliminary tests on small lattices with heavy quark masses to keep the computational effort affordable. The simulations were carried out with $24^3 \times 48$ lattices and $N_f=2+1+1$ flavors of maximally twisted mass fermions. We employed $\beta=1.95$ which corresponds to a lattice spacing of $a\approx 0.078$~fm and a twisted mass parameter $\mu=0.085$ which leads to a pion mass of $m_{PS} \approx 490$~MeV.
As gauge action we used the Iwasaki action; however, the Feynman-Hellman theorem was only applied to the Wilson part, i.e. the pure plaquette part, of the action. Our results for three different $\lambda$ values on $\sim 200$ gauge configurations and the nucleon mass at $\lambda=0$ can be seen in Fig.~\ref{FIG_FH_1}. \begin{figure}[htb] \centering \includegraphics[scale=1]{Plots/FHt/gm_fh.eps} \caption{\label{FIG_FH_1}Dependence of the nucleon mass on the change of the gauge action (different $\lambda$ values). The slope of the fit can be related to the gluon moment.} \end{figure} We performed a linear fit in $\lambda$ to the data of the nucleon mass. The fact that the data shows a $\lambda$ dependence suggests that we can obtain a non-zero signal for the gluon moment. However, the error of the slope is rather large (about 30\%). The systematic error is probably even larger, because it is not known in which $\lambda$ region a linear fit is really justified. To study this systematic effect one would need to compute the nucleon mass with a smaller error for more $\lambda$ points than used here. \section{Direct method} \vspace{-3mm} An alternative, more straightforward method of computing the matrix element of eq.~(\ref{eq:me}) is a direct approach, where, through performing the relevant Wick contractions, the three-point function can be expressed by a suitable combination of propagators and gauge links. For the gluon three-point function this is actually a trivial task, because there are no quark fields in the gluon operator. Consequently, there are no possible contractions between the gluon operator and the interpolating fields of the nucleon. The three-point function can, in fact, be written as a product of nucleon two-point functions and the gluon operator. For the zero momentum computation we get \begin{align} \frac{\left\langle [N(t)N(0)]_{p=0} \mathcal B(\tau)\right\rangle}{\langle N(t)N(0)_{p=0} \rangle} \stackrel{0\ll \tau \ll t}=m_N \langle x \rangle_g\; . \end{align} The advantage of this method is that we can reuse existing two-point functions and only have to compute the gluon operator on the very same configurations, which requires little computational effort. The following results were computed on a $32^3 \times 64$ lattice with $N_f=2+1+1$ flavors of maximally twisted mass fermions. We set $\beta=1.95$, which corresponds to a lattice spacing of $a\approx 0.078$\;fm, and the twisted mass parameter $\mu = 0.055$, which corresponds to a pion mass of $m_{PS} \approx 393$\;MeV. For the two-point function we used 16 different source positions on each gauge configuration, which corresponds to 32 measurements, because we considered proton and neutron fields. The first results for a local gluon operator can be seen in the left panel of Fig.~\ref{FIG_DIR_1}. \begin{figure}[htb] \centering \includegraphics[scale=0.7]{Plots/std_1/gm_ns.eps}\;\;\includegraphics[scale=0.7]{Plots/HYP/gm_ns.eps} \caption{\label{FIG_DIR_1}{\bf left:} Nucleon matrix element for a local gluon operator for a source-sink separation of 11 and different operator insertion times $\tau$. {\bf right:} Relative error of the nucleon matrix element for different HYP-smearing steps of the gluon operator.} \end{figure} Obviously, it is not possible to extract a signal, due to a large noise-to-signal ratio. A possible solution for this problem can be found in \cite{Meyer:2007tm}, where it is suggested to use HYP smearing \cite{Hasenfratz:2001hp} for the links in the gluon operator.
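In practice, the direct method thus reduces to correlating the stored two-point functions with the (smeared) gluon operator, configuration by configuration. The following is a minimal sketch of this estimator, assuming \texttt{numpy}; the array names and shapes are hypothetical, and a simple delete-one jackknife is used for the error.
\begin{verbatim}
import numpy as np

# Hypothetical inputs (not part of any released code):
# C2[cfg, t]  : zero-momentum nucleon two-point functions,
# B[cfg, tau] : the (smeared) gluon operator summed over space.
def ratio(C2, B, t, tau):
    # <C2(t) B(tau)> / <C2(t)>, averaged over configurations;
    # this plateaus at m_N <x>_g for 0 << tau << t (see text,
    # and recall that <B> vanishes by lattice symmetry).
    return np.mean(C2[:, t] * B[:, tau]) / np.mean(C2[:, t])

def jackknife_error(C2, B, t, tau):
    n = C2.shape[0]
    idx = np.arange(n)
    est = np.array([ratio(C2[idx != i], B[idx != i], t, tau)
                    for i in range(n)])
    return np.sqrt((n - 1) * np.var(est))
\end{verbatim}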
We applied several steps of HYP smearing with parameters from \cite{Hasenfratz:2001hp} and present the relative error (noise-to-signal ratio) for the observable in the right panel of Fig.~\ref{FIG_DIR_1}. We found a significant reduction of the noise-to-signal ratio with an increasing number of HYP smearing steps. Thus, we subsequently applied five steps of HYP smearing. \begin{figure}[htb] \centering \includegraphics[scale=0.7]{Plots/std_HYP/gm_HYP.eps}\;\;\includegraphics[scale=0.7]{Plots/std_HYP_T/gm_HYP.eps} \caption{\label{FIG_DIR_2}{\bf left:} Nucleon matrix element for a HYP-smeared gluon operator for a source-sink separation of 11 and different operator insertion times $\tau$. {\bf right:} The same matrix element for three different source-sink separations. $\mathcal B=\frac{4}{9}\beta\chi\tilde{\mathcal B}$} \end{figure} On the left panel of Fig.~\ref{FIG_DIR_2} one can see the signal we got from a single source-sink separation, where $\mathcal B=\frac{4}{9}\beta\chi\tilde{\mathcal B}$ and $\chi$ is a normalization factor arising from the HYP smearing. We clearly got a non-zero value with a reasonable error of about 10\%. However, this signal could still be contaminated by excited-state effects. This can be checked by computing the matrix element for different source-sink separations. On the right panel one can see that there are no strong excited-state effects, because the plateau value seems to be stable for different sink time positions. \section{Conclusion and outlook} \vspace{-3mm} We presented two methods which can potentially be used to extract $\langle x \rangle_g$ on the lattice: The first method makes use of the Feynman-Hellman theorem and has the advantage of yielding a statistically significant signal for rather moderate statistics. However, the calculation needs dedicated simulations with different values of $\lambda$ to establish unambiguously the linear dependence of the results on $\lambda$. Furthermore, each simulation has to be tuned to the critical value of $\kappa$, in order to ensure automatic $\mathcal O(a)$ improvement. Therefore, overall, the computational cost associated with this method is large. The second method directly computes the three-point function from which $\langle x \rangle_g$ can be extracted. In order to obtain a non-zero signal, one has to apply smearing on the gauge links entering the operator. Although one needs large statistics, one can use nucleon two-point functions computed for other observables and therefore the overall cost is small. Our study therefore suggests that the direct method may be the method of choice to calculate this particular observable. Still, the Feynman-Hellman theorem could be used as a cross-check on ensembles where it is feasible to apply it. Another issue regarding the computation of the physical value of $ \langle x \rangle_g$ is that the lattice matrix element needs to be renormalized. Since the gluon operator is a singlet operator it will mix with the quark momentum fraction $ \langle x \rangle_q$. The relation between the renormalized and the bare values of both quantities is given by a $2\times2$ mixing matrix. \begin{align} \binom{\langle x \rangle_g^{\overline{MS}}}{\sum_q\langle x \rangle_q^{\overline{MS}}} = Z_{2\times2}\binom{\langle x \rangle_g^{bare}}{\sum_q\langle x \rangle_q^{bare}} .
\end{align} For $ \langle x \rangle_g$ the relevant matrix elements are called $Z_{gg}$ and $Z_{qq}$ and the relation is \begin{align} \langle x \rangle_g ^{\overline{MS}} = Z_{bare\:gg}^{\overline{MS}}\langle x \rangle_g ^{bare} + [1-Z_{bare\:qq}^{\overline{MS}}]\sum_q \langle x \rangle_q ^{bare} . \end{align} As a first step we will compute these factors perturbatively. This will provide us with a first estimate of the factors and we will gain insight into the renormalization process of this quantity. If we know the renormalization conditions, a non-perturbative renormalization can follow. Since the smearing of the operator should be included in the renormalization process, we will also try to use other smearing techniques for the lattice computation, i.e. HEX or stout smearing, which can be employed more easily in a perturbative computation. Once the renormalization is complete we will be able to give the first physical result for $ \langle x \rangle_g$ with fully active sea quarks. The next step should be the computation of $ \langle x \rangle_g$ at physical pion mass using the recently generated ensembles with $N_f=2$ twisted-mass-clover fermions \cite{ETMC}. For heavier quark masses the continuum limit for this quantity can be studied. Furthermore, the result can be used for the determination of the gluon contribution to the nucleon spin, which at the moment is not known from the lattice. Moreover, it could also be possible to compute the gluon moment of other hadrons, e.g. the pion (cf.~\cite{Meyer:2007tm}). \section*{Acknowledgments} \vspace{-3mm} We thank Carsten Urbach for discussions and help with the tmLQCD code \cite{Jansen:2009xp}, which has been used for the computations. The latest developments for this code were also presented at this conference \cite{tmLQCD1,tmLQCD2}. This work has been supported in part by the DFG Sonderforschungsbereich/Transregio SFB/TR9-03. B.K. is supported by the National Research Fund, Luxembourg.
\section{Introduction} The Volkov solution~\cite{Volkov:1935zz} for an electron in a plane wave background is one of the key theoretical building blocks underpinning our understanding of how matter interacts with a laser. As quantum effects become significant, strong field techniques from quantum electrodynamics, QED, are required. Understanding potential new physics in this high intensity regime is of clear importance and, in turn, should influence plans for future facilities and experiments. The Volkov solution has been extensively studied over the years and applied to a wide class of problems in both linearly and circularly polarised backgrounds, see for example~\cite{Reiss:1966A, Brown:1964zz, Nikishov:1964zza, Neville:1971uc, Dittrich:1973rn, Dittrich:1973rm, Kibble:1975vz, Mitter:1974yg, Ritus:1985review, Ilderton:2012qe, Lavelle:2013wx, Lavelle:2014mka, Lavelle:2015jxa, King:2014wfa}. Working in the full elliptic class of polarisations allows for a much clearer description of these systems and helps clarify some of their physical content~\cite{Lavelle:2017dzx}. In particular, this more general approach shows that the laser induced mass shift is actually independent of the eccentricity of the background. Loop corrections in a laser background have been looked at several times, as for example in~\cite{Brouder:2002fz, Heinzl:2011ur, DiPiazza:2011tq, Meuren:2011hv, Podszus:2018hnz, Ilderton:2019kqp}. Unitarity arguments are often used to directly link loop corrections to effective cross-sections. It has been argued, see for example~\cite{Baier:1975ys}, that the laser background has no impact on the renormalisation of the theory. To have confidence in this result, it is important to probe the loop structures and associated renormalisation of the theory in a variety of ways. In this paper we will do this by taking a weak field perspective which has the advantage that standard perturbative techniques can be directly applied. The propagation of an electron in a laser background is often denoted by a double line. This notation represents the inclusion of multiple interactions with the laser. A physical way to think of this is that the double line incorporates all degenerate processes, i.e., the emission and absorption of photons indistinguishable from the background. This is reminiscent of the Lee-Nauenberg approach to the infrared problem~\cite{Lee:1964is}, see also \cite{Lavelle:2005bt}. We take the double line to mean the two point function for the Volkov field in the plane wave background, see Fig.~\ref{fig:double_line}. \begin{figure}[htb] \[ \raisebox{-0.5cm}{\includegraphics{double_line.jpg}}\ =\sum_{\substack{\mathrm{laser}\\ \mathrm{interactions}}}\raisebox{-0.8cm}{\includegraphics{many.jpg}} \] \caption{Double line representation of an electron propagating in a background at tree level.} \label{fig:double_line} \end{figure} We will, in this paper, make this link precise in terms of emission into and absorption from the laser. Throughout this paper, we will distinguish between absorption (dashed lines coming in from the left) and emission (dashed lines going out to the right). The incorporation of loops in the weak field limit will then follow using standard perturbative methods. This will then allow us to better understand the way in which loops are added to the double line, see Fig.~\ref{fig:double_line_loop}.
As we shall see, clarifying the links between these descriptions of matter propagating in a laser will reveal important points about the renormalisation of such charges. \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{double_line_loop.jpg}}\ =\sum_{\substack{\mathrm{laser}\\ \mathrm{interactions}\\ \mathrm{and\; loops}}}\raisebox{-0.8cm}{\includegraphics{many_loop.jpg}} \] \caption{One loop correction to the double line representation.} \label{fig:double_line_loop} \end{figure} In this paper we will study the renormalisation of the theory describing an electron propagating in a plane wave background. This analysis will start in the weak intensity regime and there we will calculate the ultraviolet divergences that arise at one loop. Our loop calculations will be in the Feynman gauge. Polarisation effects will be clarified through working with the full elliptic class at all times. As well as the naively expected ultraviolet structures, which are independent of the background, we will identify an additional correction to the laser induced mass shift. We show through explicit calculations that higher order laser interactions are renormalised by the same additional correction, and conjecture that this is universal for this class of backgrounds. Renormalisation is most easily studied within a momentum space description of the theory, so we conclude with a discussion on how a consistent momentum space language can be applied to this system where translation invariance has been broken by the background field, and conjecture all-orders results. \section{The perturbative set up} An electron propagating through a laser can absorb multiple photons from the background. Additionally, such an electron can emit photons which are degenerate with, and indistinguishable from, the background. Both types of interactions are, as we shall discuss, required for the double line description. If, however, the electron emits a photon that is distinguishable from the background then this corresponds to non-linear Compton scattering rather than propagation. We will argue that summing in a suitable way over all such degeneracies leads to the Volkov description~\cite{Volkov:1935zz} of an electron propagating through such a background. What is more, this will allow for a direct route to the incorporation of loop corrections in such processes and hence the renormalisation of the theory. The momentum of an electron in a plane wave background can be decomposed into some initial momentum $p$, along with multiples of the null momentum $k$ characterising the background. We denote by $\Prop{n}$ the resulting propagator after $n$ net laser absorptions: \begin{equation}\label{eq:Prop} \Prop{n}=\frac{i}{\slashed{p}+n\slashed{k}-m+i\epsilon}\,. \end{equation} Note that in terms of the overall momentum for the electron, we view an emission as a negative absorption from the laser. So if there were two absorptions and one emission, say, then $n=1$. This compact notation for the propagators will provide the building blocks for our description of both tree and loop corrected propagation. For example, an additional absorption by the electron is described by the incoming interaction shown in Fig.~\ref{Fig:Ab}, where the absorption factor $\mathrm{A}$ between the propagators is given by \begin{equation}\label{eq:Ais} \mathrm{A}=-i\lasersl\,. \end{equation} Here $\mathcal{A}_\mu$ is essentially the coupling, $e$, times the Fourier component of the classical potential for an elliptically polarised plane wave.
We will expand on this terminology later, but see also~\cite{Lavelle:2017dzx} for more details on this formalism and the connection to the Stokes' parameter description of the background. \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{in.jpg}} =\Prop{n+1}\mathrm{A}\Prop{n} \] \caption{Single absorption from the background.} \label{Fig:Ab} \end{figure} The associated emission of a photon degenerate with the background is described by the outgoing process given in Fig.~\ref{Fig:Em}, where the emission factor $\mathrm{E}$ is given by \begin{equation}\label{eq:Eis} \mathrm{E}=-i\slashed{\laser^*}\,. \end{equation} \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{out.jpg}} =\Prop{n}\mathrm{E}\Prop{n+1} \] \caption{Single emission into the background.} \label{Fig:Em} \end{figure} It is helpful here to clarify the notation being used. By $\slashed{\laser^*}$ we mean the slashed version of the conjugated field, so $\lasersl^*=\laser^{*}_\mu\gamma^\mu$. This is a useful shorthand for the unambiguous expression for the dual field \begin{equation}\label{eq:duality} \overbar{\lasersl}:=\gamma_0\lasersl^\dagger\gamma_0\equiv\laser^{*}_\mu\gamma^\mu\,. \end{equation} Note that acting on the propagators we have the duality relation $\PropB{n}= -\Prop{n}$ and on the absorption term $\overbar{\mathrm{A}}=-\mathrm{E}$. The duality transformation needs to respect the time-ordering implicit in the $i\epsilon$ prescription. This means that formally we should take $\overbar{\epsilon}=-\epsilon$. Overall, the processes in Figs.~\ref{Fig:Ab} and~\ref{Fig:Em} are (anti) dual to each other in the sense that \begin{equation}\label{eq:dualitytree} \overbar{\Prop{n+1}\mathrm{A}\Prop{n}}=-\Prop{n}\mathrm{E}\Prop{n+1}\,. \end{equation} \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{loop_in.jpg}} \ +\ \raisebox{-0.6cm}{\includegraphics{loop_over_in.jpg}} \ +\ \raisebox{-0.6cm}{\includegraphics{in_loop.jpg}} \] \caption{Single absorption with a loop correction.} \label{Fig:In} \end{figure} We now turn to the one loop corrections to the basic interactions between the matter and its background. For the absorption process in Fig.~\ref{Fig:Ab} we have, at one loop, the three diagrams in Fig.~\ref{Fig:In}, and, for the emission process of Fig.~\ref{Fig:Em}, we get the contributions in Fig.~\ref{Fig:out}. Note that the central term for each row here has the structure of a vertex correction, while the other terms are self-energies for the external legs. So it is not immediately clear that grouping them together in this way leads to a multiplicative renormalisation of the tree level processes in Figs.~\ref{Fig:Ab} and~\ref{Fig:Em}. \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{out_loop.jpg}} \ +\ \raisebox{-0.6cm}{\includegraphics{loop_over_out.jpg}}\ + \ \raisebox{-0.6cm}{\includegraphics{loop_out.jpg}} \] \caption{Single emission with a loop correction.} \label{Fig:out} \end{figure} To clarify how renormalisation works in this context, we first need to recall how sideband structures emerge from the tree level diagrams in Figs.~\ref{Fig:Ab} and~\ref{Fig:Em}. To that end, we note that the absorption process of~Fig.~\ref{Fig:Ab} can be written as the difference of two propagators: \begin{equation}\label{eq:Sb1} \Prop{n+1}\mathrm{A}\Prop{n}=\mathrm{I}\Prop{n}-\Prop{n+1}\mathrm{I} \end{equation} where we define the \lq In\rq\ term as \begin{equation}\label{eq:In} \mathrm{I}=\frac{2p{\cdot}\laser+\slashed{k}\lasersl}{2p{\cdot}k}\,. 
\end{equation} The simple identity~(\ref{eq:Sb1}) lies at the heart of the sideband description of this propagation that was introduced in~\cite{Reiss:1966A}. In essence, it is simply a partial fraction expansion which relates the products of propagators to their sums. The corresponding emission version of this sideband identity can be easily deduced by using the duality transformation (\ref{eq:dualitytree}) and (\ref{eq:Sb1}) to give \begin{equation}\label{eq:Sb2} \Prop{n}\mathrm{E}\Prop{n+1}=-\overline{\Prop{n+1}\mathrm{A}\Prop{n}}=\Prop{n}\mathrm{O}-\mathrm{O}\Prop{n+1}\,, \end{equation} where the \lq Out\rq\ insertion is given by \begin{equation}\label{eq:Out} \mathrm{O}:=\overline{\mathrm{I}}=\frac{2p{\cdot}\lasers-\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}\,. \end{equation} The matrix nature of the $\mathrm{I}$ and $\mathrm{O}$ terms means that we must be careful with the ordering in (\ref{eq:Sb1}) and (\ref{eq:Sb2}). However, due to the null nature of $\slashed{k}$ and the fact that it anticommutes with both $\lasersl$ and $\slashed{\laser^*}$, we find that the In and Out terms commute: \begin{equation}\label{eq:OI_com} [\mathrm{I},\mathrm{O}]=0\,. \end{equation} Having clarified the tree level structures in Figs.~\ref{Fig:Ab} and~\ref{Fig:Em}, we can now analyse in much the same way the loop corrections of Figs.~\ref{Fig:In} and~\ref{Fig:out}. The ultraviolet poles related to the self energy contributions of Fig.~\ref{Fig:In} can be readily calculated by using standard results from QED, see for example chapter~18 of~\cite{Schwartz:2013pla}. Working in the Feynman gauge, and using dimensional regularisation in $D=4-2\varepsilon$ dimensions, we have for incoming momentum~$p+nk$ the contribution described in Fig.~\ref{Fig:Z2}. \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{se.jpg}} = \Prop{n}\Sigma_n\Prop{n} \] \caption{One loop self energy correction to the propagator. } \label{Fig:Z2} \end{figure} After some simplifications of the gamma matrices, we have for the ultraviolet divergent structure \begin{equation} \Sigma_n=-e^2\mu^{2\varepsilon} \int_{_\mathrm{UV}}\frac{d^{D}s}{(2\pi)^D}\,\frac{(2-D)\slashed{s}+Dm}{(s-(p+nk))^2(s^2-m^2)}\,, \end{equation} where $s$ is the four-momentum of the electron in the loop so that the photon in the loop has four-momentum $p+nk-s$. Retaining only the ultraviolet pole gives \begin{equation}\label{eq:Sigma_n} \Sigma_n = (3im+\PropI{n})\delta_{_{^\mathrm{UV}}} \,. \end{equation} The notation here is that, from (\ref{eq:Prop}), $\PropI{n}=(-i)(\slashed{p}+n\slashed{k}-m)$ while the ultraviolet pole is given by \begin{equation}\label{eq:deltaUV} \delta_{_{^\mathrm{UV}}}=-\frac{e^2}{(4\pi)^2}\frac1{\varepsilon}\,. \end{equation} Substituting the self-energy expression~(\ref{eq:Sigma_n}) into Fig.~\ref{Fig:Z2} gives the familiar double pole mass term and a single pole. So the first and third diagrams in Fig.~\ref{Fig:In} become \begin{align} \Prop{n+1}\Sigma_{n+1}&\Prop{n+1}\mathrm{A}\Prop{n}+\Prop{n+1}\mathrm{A}\Prop{n}\Sigma_{n}\Prop{n}\\\nonumber &=\mathrm{I}\Prop{n}\Sigma_{n}\Prop{n}-\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\mathrm{I}+\Prop{n+1}\Sigma_{n+1}\mathrm{I}\Prop{n}- \Prop{n+1}\mathrm{I}\Sigma_{n}\Prop{n}\,. \end{align} The first two terms on the right hand side here are the sideband structures but the final two include a mixture of momenta.
\begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{loop_over_in.jpg}} = \Prop{n+1} \Sigma_{\mathrm{in}}\Prop{n} \] \caption{The one loop absorption vertex correction to the propagator.} \label{Fig:Z2a} \end{figure} The vertex correction term in Fig.~\ref{Fig:In} is still to be included. The corresponding Feynman rule for this is given in Fig.~\ref{Fig:Z2a}. From this we have \begin{equation} \Sigma_{\mathrm{in}}=-e^2\mu^{2\varepsilon} \int_{_\mathrm{UV}}\frac{d^{D}s}{(2\pi)^D}\,\frac{\gamma^\rho (\slashed{s}+\slashed{k}+m)\lasersl(\slashed{s}+m)\gamma^\tau}{((s+k)^2-m^2)(s^2-m^2)}\frac{g_{\rho\tau}}{(s-(p+nk))^2}\,. \end{equation} Retaining only the ultraviolet divergent structures, which can easily be recognised by power counting, we find \begin{equation}\label{eq:inraw} \Sigma_{\mathrm{in}}= i \lasersl\,\delta_{_{^\mathrm{UV}}}=-\mathrm{A}\delta_{_{^\mathrm{UV}}}\,. \end{equation} Here we recognise the tree level absorption factor of Fig.~\ref{Fig:Ab} multiplied by the above ultraviolet pole. We emphasise that, in the last step, the $e^2$ factor from the loop is in the $\delta_{_{^\mathrm{UV}}}$ term, while the background coupling factor of $e$ has been absorbed into the definitions of~$\lasersl$ and~$\mathrm{A}$. This simple relation for the ultraviolet structure of this vertex term means that we can exploit the sideband relation~(\ref{eq:Sb1}) to rewrite \begin{align} \Sigma_{\mathrm{in}}&=\mathrm{I}(\delta_{_{^\mathrm{UV}}}\Prop{n}^{-1})-(\delta_{_{^\mathrm{UV}}}\Prop{n+1}^{-1})\mathrm{I}\,,\\\nonumber &=\mathrm{I}\Sigma_n-\Sigma_{n+1}\mathrm{I}\,. \end{align} Note that the scalar mass terms cancelled in the last step. The second diagram in Fig.~\ref{Fig:In} can thus be written as \begin{equation}\label{eq:into2} \Prop{n+1}\Sigma_{\mathrm{in}}\Prop{n}=\Prop{n+1}\mathrm{I} \Sigma_n\Prop{n}-\Prop{n+1} \Sigma_{n+1}\mathrm{I}\Prop{n}\,. \end{equation} We can now write the sum of the three diagrams in Fig.~\ref{Fig:In} as \begin{align} \begin{split} \Prop{n+1}\Sigma_{n+1}&\Prop{n+1}\mathrm{A}\Prop{n}+\Prop{n+1}\Sigma_{\mathrm{in}}\Prop{n} + \Prop{n+1}\mathrm{A}\Prop{n}\Sigma_{n}\Prop{n}\\ &=\mathrm{I}\Prop{n}\Sigma_n\Prop{n}-\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\mathrm{I}\,. \end{split} \end{align} Note that all of the non-sideband structures cancel. What remains has exactly the same structure as the sideband description of the tree level result (\ref{eq:Sb1}), but with the expected self-energy corrections to the sideband propagators. We thus see the attractive result that the loop corrections to (\ref{eq:Sb1}) generate the normal one-loop propagator corrections to the tree level propagators in the sidebands: \begin{equation}\label{eq:ren_in} \mathrm{I}\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)- \Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\Big)\mathrm{I}\,. \end{equation} The interpretation of this result is then direct: it will lead to the sidebands requiring the standard mass and wave function renormalisations. The emission process of Fig.~\ref{Fig:Em}, and its loop corrections in Fig.~\ref{Fig:out}, then lead to the sidebands described in (\ref{eq:Sb2}) being renormalised in a similar way. The key out-going vertex identity, dual to (\ref{eq:into2}), is that \begin{equation}\label{eq:outto2} \Prop{n}\Sigma_{\mathrm{out}}\Prop{n+1}=\Prop{n}\Sigma_n\mathrm{O}\Prop{n+1}-\Prop{n}\mathrm{O} \Sigma_{n+1}\Prop{n+1}\,, \end{equation} where we have used $\overbar{\Sigma}_{\mathrm{in}}=-\Sigma_{\mathrm{out}}$ and $\overbar{\Sigma}_{n}=-\Sigma_{n}$. 
This then results in the loop corrections to the sidebands (\ref{eq:Sb2}) being given by \begin{equation}\label{eq:ren_out} \Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)\mathrm{O}-\mathrm{O} \Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\Big)\,. \end{equation} This we see is precisely (minus) the dual of (\ref{eq:ren_in}) as we would naively expect from the tree level relation (\ref{eq:dualitytree}). Again, the standard mass and wave-function renormalisations will suffice. \section{Higher order background corrections} Having understood the structure of the loop correction to a single absorption or emission of a laser photon by the electron, we now want to calculate the ultraviolet divergences when multiple photons are absorbed or emitted. We shall consider the case of both absorption and emission in the following section. \begin{figure}[htb] \[ \raisebox{-0.6cm}{\includegraphics{many_in_loop.jpg}} \] \caption{Loop spanning multiple laser interactions.} \label{Fig:many_in_loop} \end{figure} The first thing to note is that loops spanning more than one laser absorption or emission, as depicted in Fig.~\ref{Fig:many_in_loop}, are all finite in the ultraviolet regime by simple power counting. \begin{figure}[htb] \[ \raisebox{0cm}{\includegraphics{2in.jpg}} =\Prop{n+2}\mathrm{A}\Prop{n+1}\mathrm{A}\Prop{n} \] \caption{Tree level double absorption process.} \label{Fig:inandin} \end{figure} This means that when, for example, we consider the tree level double absorption process, where the incoming propagator $\Prop{n}$ absorbs two additional laser photons, as in Fig.~\ref{Fig:inandin}, then we need only to consider the loop corrections straddling no more than one background vertex, as shown in Fig.~\ref{Fig:ininloop}. In this we again see a mixture of self-energy and single vertex corrections. \begin{figure}[htb] \[ \raisebox{-1cm}{\includegraphics{2in_a_loop.jpg}}\quad \raisebox{-1cm}{\includegraphics{2in_b_loop.jpg}}\quad \raisebox{-1cm}{\includegraphics{2in_c_loop.jpg}}\quad \raisebox{-1cm}{\includegraphics{2in_d_loop.jpg}}\quad \raisebox{-1cm}{\includegraphics{2in_e_loop.jpg}} \] \caption{Double absorption process with a loop correction.} \label{Fig:ininloop} \end{figure} In order to understand and interpret these corrections, we need to first identify the sideband structures in the tree level term shown in Fig.~\ref{Fig:inandin}. To that end, we write this as \begin{equation} \Prop{n+2}\mathrm{A}\Prop{n+1}\mathrm{A}\Prop{n}=\Prop{n+2}\mathrm{A}\Prop{n+1}\PropI{n+1}\Prop{n+1}\mathrm{A}\Prop{n}\,. \end{equation} This allows us to use the absorption relation (\ref{eq:Sb1}) twice, resulting in four terms: \begin{equation}\label{eq:2instep} \Prop{n+2}\mathrm{A}\Prop{n+1}\mathrm{A}\Prop{n}=\Prop{n+2}\mathrm{I}^2-\mathrm{I}\Prop{n+1}\mathrm{I}+\mathrm{I}^2\Prop{n}-\Prop{n+2}\mathrm{I}\PropI{n+1}\mathrm{I}\Prop{n}\,. \end{equation} A key identity needed here, which is straightforward to show, is that \begin{equation}\label{eq:IPI} \mathrm{I}\PropI{n+1}\mathrm{I}=\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)\PropI{n}+\PropI{n+2}\big(\tfrac12\mathrm{I}^2-\tfrac12v\big) \end{equation} where \begin{equation}\label{eq:vdef} v:=\frac{\laser{\cdot}\laser}{2p{\cdot}k}\,. 
\end{equation} Using this identity in (\ref{eq:2instep}), we see that the sidebands for the double absorption process depicted in Fig.~\ref{Fig:inandin} are given by \begin{equation}\label{eq:ininsb} \Prop{n+2}\mathrm{A}\Prop{n+1}\mathrm{A}\Prop{n}=\big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\Prop{n}-\mathrm{I}\Prop{n+1}\mathrm{I}+\Prop{n+2}\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)\,. \end{equation} We will now show that the one loop diagrams in Fig.~\ref{Fig:ininloop} generate the expected, ultraviolet one-loop corrections to these three sidebands. The loop correction can be evaluated by recognising in the diagrams of Fig.~\ref{Fig:ininloop} a connection to the earlier loop processes evaluated in the previous section. The first three diagrams represent an initial absorption process followed by the loop corrections of Fig.~\ref{Fig:In}, with shifted initial momentum. In a similar way, the final three diagrams in Fig.~\ref{Fig:ininloop} can be interpreted as the loop corrections of Fig~\ref{Fig:In}, followed immediately by an absorption process. These two simplifications double count the middle process, Fig.~\ref{Fig:ininloop}c, so this needs to be subtracted from the combined sum. Following this reduction prescription, the diagrams in Fig.~\ref{Fig:ininloop} can then be evaluated using the loop results (\ref{eq:ren_in}) and the sideband identity (\ref{eq:Sb1}). This results in terms containing combinations of the form $\mathrm{I}\Sigma_{n}\mathrm{I}$ which, from the self-energy extension to (\ref{eq:IPI}), can be evaluated by using the identity \begin{equation}\label{eq:ISigmaI} \mathrm{I}\Sigma_{n+1}\mathrm{I}=\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)\Sigma_{n}+\Sigma_{n+2}\big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\,. \end{equation} From this we rapidly arrive at the sideband structure of the one loop corrections of Fig.~\ref{Fig:ininloop} to the double absorption process shown in Fig.~\ref{Fig:inandin}. Combined with the tree-level result, this yields \begin{align}\label{eq:ren_inin} \begin{split} \big(\tfrac12\mathrm{I}^2-\tfrac12 v\big)&\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big)-\mathrm{I}\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\Big)\mathrm{I}\\&+\Big(\Prop{n+2}+\Prop{n+2}\Sigma_{n+2}\Prop{n+2} \Big)\big(\tfrac12\mathrm{I}^2+\tfrac12 v\big)\,. \end{split} \end{align} Again we see that the sidebands pick up the expected loop corrections. \begin{figure}[htb] \[ \raisebox{-0.2cm}{\includegraphics{2out.jpg}}=\Prop{n-2}\mathrm{E}\Prop{n-1}\mathrm{E}\Prop{n} \] \caption{Tree level double emission process.} \label{Fig:outandout} \end{figure} The double emission process can be evaluated in a similar fashion, or more directly by taking the dual of the double absorption process. The result is that the loop corrections to the double emission processes described in Fig.~\ref{Fig:outandout} are given by the sideband terms: \begin{align}\label{eq:ren_outout} \begin{split} \Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big)&\big(\tfrac12\mathrm{O}^2-\tfrac12 v^*\big)-\mathrm{O}\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1}\Big)\mathrm{O}\\&+\big(\tfrac12\mathrm{O}^2+\tfrac12 v^*\big)\Big(\Prop{n+2}+\Prop{n+2}\Sigma_{n+2}\Prop{n+2} \Big)\,, \end{split} \end{align} where now \begin{equation}\label{eq:vsdef} v^*:=\frac{\lasers\!{\cdot}\lasers}{2p{\cdot}k}\,. \end{equation} An important point to note here is that the terms $v$ and $v^*$ induced by the background do not acquire loop corrections and hence are not renormalised at one loop. 
We also note that both $v$ and $v^*$ are polarisation dependent and vanish for a circularly polarised laser, see~\cite{Lavelle:2017dzx}. \section{Absorption and emission from the background} It is well known that the laser induced mass shift is only generated by processes where there is both emission and absorption from the laser. This is understood at all orders in the background field and is known to be polarisation independent, see~\cite{Lavelle:2017dzx} and references therein. Here we will calculate the one-loop corrections to this important process and see the necessity for a new renormalisation. \begin{figure}[htb] \[ \raisebox{-0.1cm}{\includegraphics{in_out.jpg}} + \raisebox{-0.1cm}{\includegraphics{out_in.jpg}} \] \caption{Tree level absorption and emission corrections.} \label{Fig:inout} \end{figure} There are two contributions to the mixed absorption and emission process at the lowest order in the background interactions, as summarised in Fig.~\ref{Fig:inout}. We expect from~\cite{Lavelle:2015jxa} that these diagrams will generate three sidebands and the central one will involve a double pole corresponding to the laser induced mass shift. If the incoming momentum is again $p+nk$, then these diagrams are given by \begin{equation}\label{eq:inout} \Prop{n}\mathrm{E}\Prop{n+1}\mathrm{A}\Prop{n}+\Prop{n}\mathrm{A}\Prop{n-1}\mathrm{E}\Prop{n}\,. \end{equation} Note that each of these contributions is unchanged (up to a sign) by the duality transformations introduced earlier. The terms in (\ref{eq:inout}) can be evaluated by again inserting appropriate inverse propagators so that both the absorption and emission identities, (\ref{eq:Sb1}) and (\ref{eq:Sb2}), can be used. From this we quickly find that \begin{align} \Prop{n}\mathrm{E}\Prop{n+1}\mathrm{A}\Prop{n}+\Prop{n}\mathrm{A}\Prop{n-1}\mathrm{E}\Prop{n}&=\mathrm{I}\Prop{n-1}\mathrm{O}-2\mathrm{O}\mathrm{I}\Prop{n}-\Prop{n}2\mathrm{O}\mathrm{I}+\mathrm{O}\Prop{n+1}\mathrm{I}\\\nonumber &\qquad +\Prop{n}(\mathrm{O}\PropI{n+1}\mathrm{I}+\mathrm{I}\PropI{n-1}\mathrm{O})\Prop{n}\,. \end{align} The first four terms in this involve the expected sidebands for these processes, but the coefficients are not as expected. The final term needs more work to be interpreted, but should correct these coefficients. The structure in the brackets in the last equation is analogous to the double absorption contribution seen earlier in (\ref{eq:IPI}). The key identity now is that \begin{equation}\label{eq:massid} \mathrm{O}\PropI{n+1}\mathrm{I}+\mathrm{I}\PropI{n-1}\mathrm{O}=\mathrm{O}\mathrm{I}\PropI{n}+\PropI{n}\mathrm{O}\mathrm{I}-i\slashed{\mstar}\,, \end{equation} where we define the important quantity \begin{equation}\label{eq:vector_mass} \mathscr{M}_\mu: =-\frac{\lasers\!{\cdot}\laser }{p{\cdot}k}k_\mu\,. \end{equation} Note that $\slashed{\mstar}=\overbar{\slashed{\mstar}}$. Using (\ref{eq:massid}) we find that the sidebands for this process are given at this order by: \begin{equation}\label{eq:central_sidebands} \Prop{n}\mathrm{E}\Prop{n+1}\mathrm{A}\Prop{n}+\Prop{n}\mathrm{A}\Prop{n-1}\mathrm{E}\Prop{n}= \mathrm{I}\Prop{n-1}\mathrm{O}-\mathrm{O}\mathrm{I}\Prop{n}-\Prop{n}\mathrm{O}\mathrm{I}-\Prop{n} i\slashed{\mstar}\Prop{n}+\mathrm{O}\Prop{n+1}\mathrm{I}\,. \end{equation} Here we recognise the expected three sidebands, $\Prop{n}$, $\Prop{n\pm1 }$, and the double pole. These terms must be interpreted as corrections, induced by the laser, to the free propagator in the central sideband, $\Prop{n}$. 
Since $\slashed{\mstar}$ is in the double pole term for the central sideband, we can relate it to the more familiar polarisation independent effective mass, $m_{*}$, induced by the background. Following the discussion in~\cite{Lavelle:2017dzx}, we can write, at this order in the laser background, $\Prop{n}-\Prop{n}i\slashed{\mstar}\Prop{n}$ as \begin{align}\label{eq:propmM} \begin{split} \frac{i}{\slashed{p}+n\slashed{k}-m+i\epsilon}&+\frac{1}{\slashed{p}+n\slashed{k}-m+i\epsilon}\slashed{\mstar}\frac{i}{\slashed{p}+n\slashed{k}-m+i\epsilon}\\ &\approx\frac{i}{\slashed{p}+n\slashed{k}-(m+\slashed{\mstar})+i\epsilon}\,,\\ &=\frac{i(\slashed{p}+n\slashed{k}+m-\slashed{\mstar})}{(p+nk)^2-m^2_{*}+i\epsilon}\,, \end{split} \end{align} where \begin{equation}\label{eq:mstar} m^2_{*}=m^2+\slashed{p}\slashed{\mstar}+\slashed{\mstar}\slashed{p}=m^2-2 \lasers\!{\cdot}\laser \,. \end{equation} Note that the last result is often rewritten as $m^2_{*}=m^2-e^2a^2$ where $-a^2>0$ is the amplitude squared of the background. \begin{figure}[htb] \[ \includegraphics{in_out_a_loop.jpg}\quad \includegraphics{in_out_b_loop.jpg}\quad \includegraphics{in_out_c_loop.jpg}\quad \includegraphics{in_out_d_loop.jpg}\quad \includegraphics{in_out_e_loop.jpg} \] \caption{Absorption then emission with a loop correction.} \label{Fig:inoutloop} \end{figure} \begin{figure}[htb] \[ \includegraphics{out_in_a_loop.jpg}\quad \includegraphics{out_in_b_loop.jpg}\quad \includegraphics{out_in_c_loop.jpg}\quad \includegraphics{out_in_d_loop.jpg}\quad \includegraphics{out_in_e_loop.jpg} \] \caption{Emission then absorption with a loop correction.} \label{Fig:outinloop} \end{figure} We now want to calculate the ultraviolet loop corrections to these processes. They are given by the five diagrams in Fig.~\ref{Fig:inoutloop} and the corresponding terms in Fig.~\ref{Fig:outinloop}. Again we stress that since we are only calculating the ultraviolet divergences, loops straddling two laser lines may be ignored. The strategy for evaluating these diagrams mirrors that seen before: we can identify sub-terms that have already been evaluated, then use the loop generalisation of the identity (\ref{eq:massid}), which is \begin{equation} \mathrm{O}\Sigma_{n+1}\mathrm{I}+\mathrm{I}\Sigma_{n-1}\mathrm{O}=\mathrm{O}\mathrm{I}\Sigma_{n}+\Sigma_{n}\mathrm{O}\mathrm{I}+i\slashed{\Sigma}_{_{\!\!\mstar}}\,, \end{equation} where we have defined the loop correction to the final term in (\ref{eq:massid}) \begin{equation}\label{eq:massUV} \slashed{\Sigma}_{_{\!\!\mstar}}=\frac{e^2}{(4\pi)^2}\frac1\varepsilon \slashed{\mstar} \,. \end{equation} From this we find that the loop corrections to the central sidebands (\ref{eq:central_sidebands}) are given by (ignoring higher-order terms in the coupling) \begin{align}\label{eq:ren_inout} \begin{split} &\mathrm{I}\Big(\Prop{n-1}+\Prop{n-1}\Sigma_{n-1}\Prop{n-1} \Big)\mathrm{O}\\ &\qquad-\mathrm{O}\mathrm{I}\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big) -\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big)\mathrm{O}\mathrm{I}\\ &\qquad\qquad\qquad-\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)i\big(\slashed{\mstar}+\slashed{\Sigma}_{_{\!\!\mstar}}\big)\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)\\ &\qquad\qquad+\mathrm{O}\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1} \Big)\mathrm{I}\,. \end{split} \end{align} We have written it in this way to bring out the multiplicative structure of these corrections at this order.
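The algebra behind (\ref{eq:mstar}) is simply the anticommutator identity $\slashed{a}\slashed{b}+\slashed{b}\slashed{a}=2\,a{\cdot}b$, with the $p{\cdot}k$ factors in (\ref{eq:vector_mass}) cancelling. A minimal numerical sketch of this step (an illustration only; the Dirac representation, the metric signature $(+,-,-,-)$ and all kinematic numbers below are assumptions made here, not inputs of the calculation above):
\begin{verbatim}
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
g += [np.block([[Z2, s], [-s, Z2]]).astype(complex) for s in sig]
slash = lambda a: a[0]*g[0] - a[1]*g[1] - a[2]*g[2] - a[3]*g[3]
dot = lambda a, b: a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

k = np.array([1.0, 0.0, 0.0, 1.0])    # null laser wave vector
p = np.array([2.0, 0.3, -0.4, 0.5])   # made-up electron momentum
AsA = 0.7                             # stands in for the real scalar A*.A
Mvec = -(AsA / dot(p, k)) * k         # the vector of (eq:vector_mass)

# {pslash, Mslash} = 2 p.M times the identity, and p.M = -A*.A
anti = slash(p) @ slash(Mvec) + slash(Mvec) @ slash(p)
assert np.allclose(anti, 2 * dot(p, Mvec) * np.eye(4))
assert np.isclose(dot(p, Mvec), -AsA)  # hence m*^2 = m^2 - 2 A*.A
\end{verbatim}
Since $k$ is null, the $n\slashed{k}$ piece of the propagator contributes $2n\,k{\cdot}\mathscr{M}=0$ to this anticommutator, which is why the same $m_*^2$ appears in every sideband.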
Terms without a $\Sigma$ are tree level, terms with one $\Sigma$ are our one loop results, while terms with products of two or more $\Sigma$ factors remain to be verified in further work, as the calculations reported here are only to one loop. Written in this way, we see a new structure in the third line of the loop corrections: $\Prop{n}(-i\slashed{\Sigma}_{_{\!\!\mstar}})\Prop{n}$. This, as we will discuss in more detail later, corresponds to a new renormalisation being needed in this theory. This will be a renormalisation of the laser induced mass shift, $\slashed{\mstar}$. This last result is unexpected and requires further testing. To do this we now consider a process that is higher order in the background interaction and that also generates a laser induced mass shift at tree level. To be concrete, we will consider two absorptions and one emission. \begin{figure}[htb] \[ \raisebox{-.1cm}{\includegraphics{in_in_out.jpg}} + \raisebox{-.1cm}{\includegraphics{in_out_in.jpg}} + \raisebox{-.1cm}{\includegraphics{out_in_in.jpg}} \] \caption{Two absorptions and one emission at tree level.} \label{Fig:ininout} \end{figure} This is an interesting process as the mixture of absorptions and an emission will induce both $\lasers\!{\cdot}\laser $ and $\laser{\cdot}\laser$ terms, and it is not a priori clear if there will be interference between loop corrections. The tree level process of interest in this respect is thus given by the three processes in Fig.~\ref{Fig:ininout}. We now expect four sidebands with propagators $\Prop{n+2}$, $\Prop{n+1}$, $\Prop{n}$ and $\Prop{n-1}$. There will also be a mixture of the $v$ term seen in Fig.~\ref{Fig:inandin} and the mass term found in the central band of Fig.~\ref{Fig:inout}. The ultraviolet loop corrections will now generate twenty-one graphs: for each of the three orderings in Fig.~\ref{Fig:ininout} there is a self-energy on one of the four propagators or a loop straddling one of the three background vertices. The strategy for evaluating these is again to group terms so that we get a mixture of previously evaluated sub-terms and absorption or emission factors analogous to (\ref{eq:ISigmaI}) and (\ref{eq:massid}). The end result of this gives the loop corrections summarised within the full (tree level and loop) sideband structures: \begin{align}\label{eq:ren_ininout} \begin{split} & \big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\Big(\Prop{n-1}+\Prop{n-1}\Sigma_{n-1}\Prop{n-1} \Big)\mathrm{O}\\ & -\mathrm{O}\big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big) -\mathrm{I}\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n} \Big)\mathrm{O}\mathrm{I}\\ &\qquad-\mathrm{I}\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)i\big(\slashed{\mstar}+\slashed{\Sigma}_{_{\!\!\mstar}}\big)\Big(\Prop{n}+\Prop{n}\Sigma_{n}\Prop{n}\Big)\\ &+\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1} \Big)\mathrm{O}\big(\tfrac12\mathrm{I}^2+\tfrac12v\big) +\mathrm{O}\mathrm{I}\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1} \Big)\mathrm{I}\\ &\qquad+\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1} \Big)i\big(\slashed{\mstar}+\slashed{\Sigma}_{_{\!\!\mstar}}\big)\Big(\Prop{n+1}+\Prop{n+1}\Sigma_{n+1}\Prop{n+1} \Big)\mathrm{I}\\ &-\mathrm{O}\Big(\Prop{n+2}+\Prop{n+2}\Sigma_{n+2}\Prop{n+2} \Big)\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)\,. \end{split} \end{align} Here we see clearly the same structures in these loop corrections as encountered earlier in (\ref{eq:ren_in}), (\ref{eq:ren_out}), (\ref{eq:ren_inin}), (\ref{eq:ren_outout}) and (\ref{eq:ren_inout}). From this result we can immediately deduce the corresponding dual process involving two emissions and just one absorption from the background.
We see that there is no interference between the mass terms and the $v^*$ insertions. To summarise, these detailed perturbative investigations show that the loop corrections to the propagation of an electron in a plane wave background preserve the sideband structures and, through that, induce the expected one loop corrections to the normalisation of the propagators, including the vacuum mass shift. Unexpectedly, we have seen that the laser induced mass also has an ultraviolet correction. Having exposed and isolated these loop structures, we now address the (minimal) renormalisation needed for the extraction of finite, physical results. \section{Renormalisation} We have seen, through multiple examples, that the sideband structure of the theory is preserved when loop corrections are included. To understand the renormalisation of the theory, let us consider the $\ell^{\,\mathrm{th}}$ sideband. For this sideband we have seen that the loop corrections induce a replacement \begin{equation}\label{eq:loop_prop} \Prop{\ell}\to \Prop{\ell}+\Prop{\ell}\Sigma_\ell\Prop{\ell}\,, \end{equation} together with an additional correction to the background induced mass shift \begin{equation}\label{eq:loop_lmass} -i \slashed{\mstar}\to-i (\slashed{\mstar}+\slashed{\Sigma}_{_{\!\!\mstar}})\,. \end{equation} The ultraviolet divergences in $\Sigma_\ell$ and $\slashed{\Sigma}_{_{\!\!\mstar}}$, see (\ref{eq:Sigma_n}) and (\ref{eq:massUV}), signal the need for renormalisation. This we now introduce by shifting from bare to renormalised quantities. It is useful here to refine our notation, introduced in equation~(\ref{eq:Prop}), for the sideband propagator to include both normal and induced mass terms, as in (\ref{eq:propmM}). We thus define \begin{equation} \Prop{\ell}(m,\mathscr{M}):=\frac{i}{\slashed{p}+\ell\slashed{k}-(m+\slashed{\mstar})+i\epsilon}\,. \end{equation} Then the loop corrections, (\ref{eq:loop_prop}) and (\ref{eq:loop_lmass}), can be written more succinctly as \begin{equation}\label{eq:loop_prop_ren} \Prop{\ell}(m,\mathscr{M})\to \Prop{\ell}(m,\mathscr{M})+\Prop{\ell}(m,\mathscr{M})\big(\Sigma_\ell-i\slashed{\Sigma}_{_{\!\!\mstar}}\big)\Prop{\ell}(m,\mathscr{M})\,. \end{equation} To now renormalise this sector of the theory, we follow the usual prescription whereby we first interpret these results as arising from working with the bare Volkov fields and masses: $\psiV^{^{_\mathrm{B}}}$, $m^{^{_\mathrm{B}}}$ and $\mstar_\mu^{^{_\mathrm{B}}}$. Then we define the physical, renormalised quantities $\psiV$, $m$ and $\mstar_\mu$ by \begin{equation}\label{eq:Vwf_ren} \psiV^{^{_\mathrm{B}}}:=\mu^{-\varepsilon}\sqrt{Z_2}\,\psiV=\mu^{-\varepsilon}\sqrt{1+\delta_2}\,\psiV\,, \end{equation} \begin{equation}\label{eq:m_ren} m^{^{_\mathrm{B}}}:=Z_m\,m=(1+\delta_m)\,m \end{equation} and \begin{equation}\label{eq:Vm_ren} \mstar_\mu^{^{_\mathrm{B}}}:=Z_{_{\!\!\mathscr{M}}}\,\mstar_\mu=(1+\delta_{_{^{\!\!\mstar}}})\,\mstar_\mu\,. \end{equation} These counterterms are then determined by the requirement that when we work with renormalised quantities, we obtain finite results. Note that the mass scale $\mu^{-\varepsilon}$ in the wave function renormalisation factor can be neglected in the leading order analysis presented here.
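The expansion of the bare quantities that determines these counterterms is carried out next; as a toy cross-check one can strip away the Dirac structure and treat everything as commuting symbols. The following sketch (a one-dimensional caricature made up here for illustration, with $A$ standing in for $\slashed{p}+\ell\slashed{k}$ and $M$ for $\slashed{\mstar}$) confirms the first-order counterterm pattern:
\begin{verbatim}
import sympy as sp

A, m, M = sp.symbols('A m M')          # A plays the role of pslash + l kslash
d2, dm, dM = sp.symbols('delta_2 delta_m delta_M')
eps = sp.Symbol('epsilon')             # counterterm bookkeeping parameter

X = A - m - M                          # inverse propagator, up to a factor -i
bare = (1 - d2*eps) * sp.I / (A - (1 + dm*eps)*m - (1 + dM*eps)*M)
expanded = sp.series(bare, eps, 0, 2).removeO().subs(eps, 1)

prop = sp.I / X
target = prop + prop*(-(X/sp.I)*d2 - sp.I*(m*dm + M*dM))*prop
assert sp.simplify(expanded - target) == 0
\end{verbatim}
which is exactly the combination $-\PropI{\ell}\delta_2-i(m\delta_m+\slashed{\mstar}\delta_{_{^{\!\!\mstar}}})$ appearing below.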
The full, renormalised sideband propagator at this order is then, from (\ref{eq:loop_prop_ren}), \begin{equation}\label{eq:ren_prop1} Z_2^{-1} \Prop{\ell}(m^{^{_\mathrm{B}}},\mstar^{^{_\mathrm{B}}})+\Prop{\ell}(m,\mathscr{M})\big(\Sigma_\ell-i\slashed{\Sigma}_{_{\!\!\mstar}}\big)\Prop{\ell}(m,\mathscr{M})\,. \end{equation} In the second term of this expression, which is already of one-loop order, bare quantities can be replaced by renormalised ones at the order we are working. In the first term, though, we are still explicitly working with the bare fields. These bare quantities can be expanded to give \begin{align} Z_2^{-1} \Prop{\ell}(m^{^{_\mathrm{B}}},\mstar^{^{_\mathrm{B}}})&=(1-\delta_2)\Prop{\ell}\big((1+\delta_m)\,m\,,(1+\delta_{_{^{\!\!\mstar}}})\,\mathscr{M}\big)\nonumber\\ &= \Prop{\ell}(m,\mathscr{M})+\Prop{\ell}(m,\mathscr{M})\big(-\PropI{\ell}\delta_2-i(m\delta_m+\slashed{\mstar}\delta_{_{^{\!\!\mstar}}})\big) \Prop{\ell}(m,\mathscr{M})\,. \end{align} Thus the renormalised sideband propagator, (\ref{eq:ren_prop1}), becomes \begin{equation}\label{eq:ren_prop2} \Prop{\ell}(m,\mathscr{M})+\Prop{\ell}(m,\mathscr{M})\Sigma_\ell^{\mathrm{R}}\Prop{\ell}(m,\mathscr{M})\,, \end{equation} where \begin{equation} \Sigma_\ell^{\mathrm{R}}=\Sigma_\ell-\PropI{\ell}\delta_2-i(m\delta_m+\slashed{\Sigma}_{_{\!\!\mstar}}+\slashed{\mstar}\delta_{_{^{\!\!\mstar}}})\,. \end{equation} From this, and equations (\ref{eq:Sigma_n}) and (\ref{eq:massUV}), we see that, independent of the sideband being considered, the minimal renormalisation prescription corresponds to the familiar results that \begin{equation}\label{eq:ren1} \delta_2=\delta_{_{^\mathrm{UV}}}\qquad\mathrm{and}\qquad \delta_m=3\delta_{_{^\mathrm{UV}}}\,, \end{equation} along with the additional requirement that \begin{equation}\label{eq:ren2} \delta_{_{^{\!\!\mstar}}}=\delta_{_{^\mathrm{UV}}}\,, \end{equation} where $\delta_{_{^\mathrm{UV}}}$ was defined in equation~(\ref{eq:deltaUV}). Our higher-order calculations, in terms of absorptions and emissions, support the expectation that the renormalisation prescriptions (\ref{eq:ren1}) and (\ref{eq:ren2}) hold also in the strong field sector. We will now recall how the sideband formulation can be extended to all such orders and, through this, conjecture the form of the one-loop corrections to the full Volkov description of an electron propagating through a plane wave, laser background. \section{The full Volkov description at one loop} The importance of the Volkov solution for the tree level results is that the sideband description, discussed above in the perturbative framework, is known to all orders in the background interaction for this wide class of polarisations, see~\cite{Lavelle:2017dzx}. We now want to develop the link between the perturbative loop structures presented here and that all-orders formalism. In doing so we shall see that the perturbative results actually motivate a significant simplification to the all-orders description. Armed with that result, we shall be able to conjecture a compact expression for the leading one-loop corrections at all orders in the intensity of the background.
The exact tree level solution for the two point function describing an electron propagating in an elliptically polarised background can be written (see equation (44) in~\cite{Lavelle:2017dzx} and discussions therein) as the usual momentum space integration factors times the double sum over $r$ and $s$ of the following sideband structures: \begin{align}\label{eq:two_pnt_volkov} \begin{split} \mathrm{e}^{ir\kx} \Big(J^{\ecc}_{s+r}(p)&+\frac{\slashed{k}\lasersl}{2p{\cdot}k}J^{\ecc}_{s+r+1}(p)+\frac{\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}J^{\ecc}_{s+r-1}(p)\Big)\\&\qquad\quad\times \Prop{s}(m,\mathscr{M})\Big(\Js_{s}(p)-\frac{\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}\Js_{s+1}(p)-\frac{\slashed{k}\lasersl}{2p{\cdot}k}\Js_{s-1}(p)\Big)\,. \end{split} \end{align} Unpicking (\ref{eq:two_pnt_volkov}) we see that, as we sum over $s$, the sideband propagator $\Prop{s}(m,\mathscr{M})$ is sandwiched between factors built out of (generalised) Bessel functions, $J^{\ecc}_\ell(p)$, where the parameter $\ell$ can be various combinations of the summation parameters $r$ and $s$. These Bessel functions are also labelled by the eccentricity parameter $\tau$ characterising the polarisation of the background in the elliptic class. The precise definition of these Bessel functions is that \begin{equation}\label{eq:genBess} J^{\ecc}_\ell(p):= J_\ell(\omega_1,v,\omega_2)=\frac1{2\pi}\int_{-\pi}^\pi \!\!d\theta \, \mathrm{e}^{i(\omega_1\sin\theta+v\sin2\theta+\omega_2\cos\theta)}\mathrm{e}^{-i\ell\theta}\,, \end{equation} where the eccentricity information is now encoded in the real parameters $\omega_1$, $v$ and $\omega_2$. The connection with the complex vector parameters $\mathcal{A}$ and $\laser^{*}$, introduced in (\ref{eq:Ais}) and (\ref{eq:Eis}), is seen in equation (\ref{eq:vdef}) for $v$, and the definitions \begin{equation} \omega_1=-\left(\frac{p{\cdot}\laser}{p{\cdot}k}+\frac{p{\cdot}\lasers}{p{\cdot}k}\right) \qquad \mathrm{and}\qquad \omega_2=-i\left(\frac{p{\cdot}\laser}{p{\cdot}k}-\frac{p{\cdot}\lasers}{p{\cdot}k}\right)\,. \end{equation} The fact that $v$ is real is perhaps surprising and seems at odds with our effort, as in (\ref{eq:vsdef}), to distinguish typographically between $v$ and $v^*$. However, this was a useful bookkeeping device to keep track of the duality structures seen earlier, and one that we will return to below. From the perturbative perspective, one of the most striking and immediate things to note about the all-orders result (\ref{eq:two_pnt_volkov}) is the absence of the variables that were the building blocks in the description developed in this paper. In particular, the In and Out terms, (\ref{eq:In}) and (\ref{eq:Out}), seem to be absent. Given the central role played by these terms in our perturbative analysis, it seems logical to try to rewrite the all-orders result in terms of them. To this end, we make the change of variables $\omega_1\to\Omega_1$ and $\omega_2\to\Omega_2$, with \begin{equation}\label{eq:Omeg1IO} \Omega_1=\omega_1-\frac{\slashed{k}\lasersl-\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}=-(\mathrm{I}+\mathrm{O}) \end{equation} and \begin{equation}\label{eq:Omeg2IO} \Omega_2=\omega_2-i\frac{\slashed{k}\lasersl+\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}=-i(\mathrm{I}-\mathrm{O})\,. \end{equation} Note that the reality requirements on $\omega_1$ and $\omega_2$ are now replaced by the duality result that $\overbar{\Omega}_1=\Omega_1$ and $\overbar{\Omega}_2=\Omega_2$.
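Two quick numerical checks may be helpful at this point; both are illustrative sketches with made-up numbers, not part of the derivation. The first confirms that the quadrature definition (\ref{eq:genBess}) reduces to the ordinary Bessel function when $v=\omega_2=0$; the second confirms that the matrix parts of $\Omega_1$ and $\Omega_2$ square to zero because $k$ is null and orthogonal to $\mathcal{A}$, which is what underlies the duality and commutativity properties used next:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J(ell, w1, v, w2):
    # the generalised Bessel function of (eq:genBess), by quadrature
    ph = lambda th: np.exp(1j*(w1*np.sin(th) + v*np.sin(2*th)
                               + w2*np.cos(th) - ell*th))
    re = quad(lambda th: ph(th).real, -np.pi, np.pi)[0]
    im = quad(lambda th: ph(th).imag, -np.pi, np.pi)[0]
    return (re + 1j*im) / (2*np.pi)

for ell in range(-2, 3):               # v = w2 = 0: ordinary Bessel integral
    assert np.isclose(J(ell, 1.3, 0.0, 0.0), jv(ell, 1.3))

# gamma matrices as in the earlier sketch (Dirac rep., metric (+,-,-,-))
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
g += [np.block([[Z2, s], [-s, Z2]]).astype(complex) for s in sig]
slash = lambda a: a[0]*g[0] - a[1]*g[1] - a[2]*g[2] - a[3]*g[3]
dot = lambda a, b: a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

k = np.array([1.0, 0.0, 0.0, 1.0])            # null: k.k = 0
A = np.array([0, 0.3 + 0.2j, 0.1 - 0.5j, 0])  # transverse: k.A = 0 (made up)
p = np.array([2.0, 0.3, -0.4, 0.5])
Ac = np.conj(A)

w1 = -(dot(p, A) + dot(p, Ac)) / dot(p, k)
w2 = -1j*(dot(p, A) - dot(p, Ac)) / dot(p, k)
O1 = w1*np.eye(4) - slash(k) @ (slash(A) - slash(Ac)) / (2*dot(p, k))
O2 = w2*np.eye(4) - 1j*slash(k) @ (slash(A) + slash(Ac)) / (2*dot(p, k))

N1 = O1 - w1*np.eye(4)                 # the matrix part is nilpotent,
assert np.allclose(N1 @ N1, 0)         # so the exponentials truncate linearly
assert np.allclose(O1 @ O2, O2 @ O1)   # and Omega_1, Omega_2 commute
\end{verbatim}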
The trivial commutativity of $\omega_1$ and $\omega_2$ is now the non-trivial matrix result that $\Omega_1\Omega_2=\Omega_2\Omega_1$, which is ensured by the null properties of the background field. These commutativity and duality relations enable us to extend the domain of the Bessel functions defined in~(\ref{eq:genBess}) so that we can unambiguously write \begin{equation}\label{eq:genBessOmega} J_\ell(\Omega_1,v,\Omega_2):=\frac1{2\pi}\int_{-\pi}^\pi \!\!d\theta \, \mathrm{e}^{i(\Omega_1\sin\theta+v\sin2\theta+\Omega_2\cos\theta)}\mathrm{e}^{-i\ell\theta}\,. \end{equation} To understand the connection between these extended functions and the complicated pre- and post-factors in the two point function (\ref{eq:two_pnt_volkov}), we note that the null property of the vector $k$ means that \begin{equation} \mathrm{e}^{i\Omega_1\sin\theta}=\mathrm{e}^{i\omega_1\sin\theta}\left(1-i\frac{\slashed{k}\lasersl-\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}\sin\theta\right) \end{equation} and \begin{equation} \mathrm{e}^{i\Omega_2\cos\theta}=\mathrm{e}^{i\omega_2\cos\theta}\left(1+\frac{\slashed{k}\lasersl+\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}\cos\theta\right)\,. \end{equation} Hence we quickly see that \begin{equation} J_\ell(\Omega_1,v,\Omega_2)=J_\ell(\omega_1,v,\omega_2)+J_{\ell+1}(\omega_1,v,\omega_2)\frac{\slashed{k}\lasersl}{2p{\cdot}k}+J_{\ell-1}(\omega_1,v,\omega_2)\frac{\slashed{k}\slashed{\laser^*}}{2p{\cdot}k} \end{equation} and \begin{equation} \overbar{J}_\ell(\Omega_1,v,\Omega_2)=J_\ell^*(\omega_1,v,\omega_2)-J_{\ell+1}^*(\omega_1,v,\omega_2)\frac{\slashed{k}\slashed{\laser^*}}{2p{\cdot}k}-J_{\ell-1}^*(\omega_1,v,\omega_2)\frac{\slashed{k}\lasersl}{2p{\cdot}k}\,. \end{equation} We can thus write the two point function (\ref{eq:two_pnt_volkov}) in a much more compact way as the sum of all terms of the form \begin{equation}\label{eq:propOmega} \mathrm{e}^{ir\kx} J_{s+r}(\Omega_1,v,\Omega_2)\Prop{s}(m,\mathscr{M})\overbar{J}_{s}(\Omega_1,v,\Omega_2)\,. \end{equation} To link this with our perturbative results, it is instructive to consider the $r=-1$ terms in this double sum with $s$ ranging from $-1$ to $2$. Expanding the Bessel functions (\ref{eq:genBessOmega}) in terms of the In and Out representations, (\ref{eq:Omeg1IO}) and (\ref{eq:Omeg2IO}), gives for this part of (\ref{eq:propOmega}) the explicit sum: \begin{align}\label{eq:rminusone} \begin{split} \mathrm{e}^{-i\kx}\bigg(&\big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\Prop{-1}(m,\mathscr{M})\mathrm{O}\\ &-\mathrm{O}\big(\tfrac12\mathrm{I}^2-\tfrac12v\big)\Prop{0}(m,\mathscr{M})+\mathrm{I} \Prop{0}(m,\mathscr{M})\big(1-\mathrm{I}\mathrm{O}\big)\\ &\quad+\Prop{1}(m,\mathscr{M})\mathrm{O}\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)-\big(1-\mathrm{I}\mathrm{O}\big)\Prop{1}(m,\mathscr{M})\mathrm{I}\\ &\qquad-\mathrm{O}\Prop{2}(m,\mathscr{M})\big(\tfrac12\mathrm{I}^2+\tfrac12v\big)\bigg)\,. \end{split} \end{align} These terms are precisely the sum of the sidebands derived in (\ref{eq:Sb1}) and the tree level part of (\ref{eq:ren_ininout}), both with $n=0$. This identification of the momentum dependence in (\ref{eq:rminusone}) with perturbative structures is gratifying and hints at the underlying logic of how to group the perturbative terms together. In the perturbative formulation developed here we have not yet incorporated the fact that the laser background breaks translational invariance. This means that our momentum space description, where we have presented a direct way to calculate loop corrections for the sidebands, requires modification.
From the exact solution (\ref{eq:two_pnt_volkov}) we see that the modification is very simple. In addition to the standard momentum space factor $e^{-ip\cdot(x-y)}$ which multiplies~(\ref{eq:two_pnt_volkov}), we see an $e^{irk{\cdot}x}$ factor which explicitly violates translation invariance. This, though, can be exploited to organise the perturbative discussion and will allow us to group terms consistently. The key observation to note is that all the terms in (\ref{eq:rminusone}) share a common homogeneity in the absorption and emission fields. Indeed, they all include either an absorption and no emissions, or two absorptions and one emission. From this simple observation it follows that if we multiply each absorption term by a factor of $e^{-ik{\cdot}x}$ and each emission term by $e^{ik{\cdot}x}$, then we obtain the overall phase factor seen in~(\ref{eq:rminusone}). In a similar way, the terms in Fig.~\ref{Fig:inandin}, say, would be accompanied by a factor of $e^{-i2k{\cdot}x}$, while Fig.~\ref{Fig:outandout} would pick up a factor of $e^{i2k{\cdot}x}$. This motivates the following redefinition of the fundamental absorption vertex~(\ref{eq:Ais}) by including the exponential factor: \begin{equation} \mathrm{A}=-i\,\lasersl\to -i\,\mathrm{e}^{-i\kx}\lasersl\,. \end{equation} Similarly, we have the associated dual redefinition $\mathrm{E}:=- \overbar{\mathrm{A}}\to-i\,\mathrm{e}^{i\kx}\slashed{\laser^*}$. Hence from~(\ref{eq:vdef}) we see that $v\to \mathrm{e}^{-i2\kx}v$ while, from~(\ref{eq:vsdef}), $v^*\to \mathrm{e}^{i2\kx}v^{*}$. In terms of these redefinitions, $v$ and $v^*$ are not the same, so the notational convenience used earlier now becomes a genuine distinction. It is also clear now how to combine the perturbative terms in a physically correct manner in terms of commensurate powers of absorption minus emission. Note that the mass term $\slashed{\mstar}$ picks up no spatial dependence under this redefinition. Armed with this reformulation of the theory, we now conjecture how the double line description Fig.~\ref{fig:double_line} and its one loop corrections symbolised by Fig.~\ref{fig:double_line_loop} should be defined in terms of the renormalised fields (\ref{eq:Vwf_ren}), (\ref{eq:m_ren}) and (\ref{eq:Vm_ren}), within the minimal subtraction scheme defined by (\ref{eq:ren1}) and (\ref{eq:ren2}). Our conjecture is that, in terms of the renormalised masses introduced in this paper, we have the identification summarised in Fig.~\ref{Fig:conjecture }, where we have introduced the condensed notation that $J_s(\mathrm{I}:v:\mathrm{O})=J_s(-(\mathrm{I}+\mathrm{O}),v,-i(\mathrm{I}-\mathrm{O}))$. \begin{figure}[htb] \[ \raisebox{-0.5cm}{\includegraphics{double_line.jpg}}+\raisebox{-0.6cm}{\includegraphics{double_line_loop.jpg}}= \sum_r\sum_sJ_{s+r}(\mathrm{I}:v:\mathrm{O})\Prop{s}(m,\mathscr{M})\overbar{J}_{s}(\mathrm{I}:v:\mathrm{O}) \] \caption{Strong field one loop conjecture.} \label{Fig:conjecture } \end{figure} This result holds at the tree level to all orders, and we have seen in this paper that, at one loop in Feynman gauge, it also holds for the ultraviolet poles in several different sidebands. \section{Conclusions} In this paper we have developed a perturbative description of the propagation of an electron in a plane wave background. There are two expansions here: one in the interactions with the background and an expansion up to one loop in perturbation theory. Each interaction with the background generates sideband structures. 
We have seen that the loop corrections maintain these sidebands. This means that a multiplicative renormalisation of the theory can be carried out in this formulation. We have worked in Feynman gauge and the background chosen was the full elliptic class of polarisations to bring out any polarisation dependence. The tree level sideband approach to charge propagation in a laser has the advantage that, at its heart, it identifies with each sideband a standard propagator, with momentum shifted by some multiple of the background momentum. These propagators are then multiplied by well-defined terms characterising the laser. We have carried out a weak field expansion and explicitly calculated leading contributions to various sidebands. Using dimensional regularisation, we have calculated the one loop, ultraviolet divergent poles in these sideband structures. They included multiple absorptions, multiple emissions, and, importantly, contributions from a mixture of absorptions and emissions. These last structures are responsible for the background induced electron mass shift. Our calculations have revealed that the loop corrections to the sidebands replace the propagators by their standard one-loop corrected counterparts. This is a minimal requirement for multiplicative renormalisation. However, we also found one additional ultraviolet divergent correction. This pole is a correction to the laser induced mass shift. As this was unexpected, we have verified that the same correction occurs in different sidebands in the Volkov propagator. We have seen how to renormalise these divergences in terms of the usual one loop renormalisation without a background, plus an additional multiplicative renormalisation of the laser induced mass shift. Inspired by the all orders tree level description, we have been able to conjecture an all orders expression for the full one loop corrections in this class of backgrounds. To complete this conjecture for the pole structure requires a proof that it holds to all orders in emissions and absorptions from the laser. A stronger form of the conjecture involves showing that the ultraviolet finite loop corrections, and any infrared divergences~\cite{Lavelle:2005bt}, are also compatible with this structure. Finally, it is important to study these results in other gauges. \section{Acknowledgements} We thank Tom Heinzl, Anton Ilderton and Ben King for discussions and comments.
\section{Motivation} In the ATLAS experiment~\cite{bib:atlas}, multiboson production processes are among the most important processes for both Standard Model (SM) measurements (signals) and searches for physics beyond the SM (backgrounds). This study investigates the performance of various benchmark generators used in ATLAS within the official software framework and provides a summary of their multiboson process modeling performance. \section{Generators} The following setups are used for the Monte Carlo (MC) simulation: \begin{itemize} \item List of generators: \sherpa~\cite{bib:sherpa}, \powhegbox~\cite{bib:powheg1,bib:powheg2,bib:powheg3}, \mg~\cite{bib:mg5}, \mcatnlo~\cite{bib:mcatnlo}, \vbfnlo~\cite{bib:vbfnlo}. \item List of parton shower (PS) setups: \pythia~\cite{bib:pythia}, \herwig7/\herwig++~\cite{bib:herwig}, \sherpa~\cite{bib:sherpa}. \end{itemize} The modeling of the multi-jet associations in the investigated processes is done with multi-leg matrix elements. The merging schemes being studied are CKKW, MEPS@NLO and FxFx, depending on the generator~\cite{bib:pubnote}. \section{Fully leptonic diboson process modeling} The fully leptonic diboson processes are modeled by various generators at high precision, as shown in Table~\ref{tab:fully-accuracies}. \begin{table}[htbp] \centering \caption{Overview of di-boson process accuracies for the generators. NLO: next-to-leading order, LO: leading order, PS: parton shower~\cite{bib:pubnote}.} \label{tab:fully-accuracies} \begin{tabular*}{\textwidth} { l c c c c c c } \hline & & $VV+0j$ & $VV+1j$ & $VV+2j$ & $VV+3j$ & $VV+\geq 4j$ \\ \hline & \sherpa \texttt{v2.2} & NLO & NLO & LO & LO & PS \\ & \powhegboxpythia{}/\herwig{}++ & NLO & LO & PS & PS & PS \\ & \mg+\pythia & NLO & NLO & LO & PS & PS \\ & \mcatnlo+\herwig & NLO & LO & PS & PS & PS \\ \hline \hline \end{tabular*} \end{table} Figure~\ref{fig:VV_plot} shows the kinematic comparisons between different generators and the data measurement in the $WZ$ production process, and the differential cross section predictions of \sherpa and \powhegbox in the $ZZ$ production process. \begin{figure}[htb] \centering \includegraphics[width=0.4\columnwidth]{figures/WZ_plot.pdf} \includegraphics[width=0.4\columnwidth]{figures/ZZ_plot.pdf} \caption{The modeled kinematics comparison between different generators in the $WZ$ and $ZZ$ processes. The plot on the left shows a comparison of the differential $W^{\pm}Z$ cross section as a function of the transverse mass variable $m_{T}^{WZ}$ of the $W^{\pm}Z$ system for the \powhegbox + \pythia, \powhegbox + \herwig, \sherpa and \mcatnlo + \herwig predictions. The plot on the right shows the differential cross section predictions of \sherpa and \powhegbox, both corrected for higher-order electroweak and QCD effects, as a function of the four-lepton invariant mass, which is rather insensitive to higher-order QCD effects according to the comparison~\cite{bib:pubnote}.} \label{fig:VV_plot} \end{figure} \section{Electroweak diboson(+$jj$) process modeling} The modeling of most electroweak diboson production processes in association with two jets is provided with LO precision for two jets, while higher jet multiplicities are modeled via parton showering. Only in the like-charged $W^{\pm}W^{\pm}\to\ell^{\pm}\ell^{\pm}2\nu+jj$ process does \powhegbox provide NLO modeling of the 2-jet bin and LO modeling of the 3-jet bin.
At LO in QCD and without including the vector-boson decays, the electroweak (EWK) coupling order is 2 for QCD-induced processes and 4 for EWK-induced processes, while the QCD coupling order is 2 for QCD-induced processes and 0 for EWK-induced processes. Figure~\ref{fig:VVjj_plot} shows the opposite-charged $W^{\pm}W^{\mp}/ZZ\to\ell^{\pm}\ell^{\mp}2\nu+jj$ process modeling comparison between \vbfnlo and \mg, and the like-charged $W^{\pm}W^{\pm}\to\ell^{\pm}\ell^{\pm}2\nu+jj$ process modeling comparison between different scale choices. \begin{figure}[htb] \centering \includegraphics[width=0.4\columnwidth]{figures/2l2vjj_plot.pdf} \includegraphics[width=0.4\columnwidth]{figures/ssWWjj_plot.pdf} \caption{The modeled kinematics comparison between different generators in the $W^{\pm}W^{\mp}/ZZ\to\ell^{\pm}\ell^{\mp}2\nu+jj$ process and with scale variations in the $W^{\pm}W^{\pm}\to\ell^{\pm}\ell^{\pm}2\nu+jj$ process. The left plot shows, in the $W^{\pm}W^{\mp}+jj$ channel, the comparison of the predicted kinematic distribution of the di-jet invariant mass $m_{jj}$ between \mg and \vbfnlo, both of which are showered with \pythia. The distributions are normalised to the same integral, and the uncertainties shown are statistical only. The right plot quantifies the impact of the scale variations on the rapidity separation $\Delta y(j_1,j_2)$ of the two leading jets~\cite{bib:pubnote}.} \label{fig:VVjj_plot} \end{figure} \section{Triboson process modeling} The rare SM triboson production processes are modeled by \sherpa and \vbfnlo at LO. \sherpa v2.2 also provides NLO modeling of on-shell tribosons and LO modeling up to two jets. Higher jet multiplicities are modeled via parton showering. Table~\ref{tab:vvv-accuracies} summarizes the modeling precision. Figure~\ref{fig:VVV_plot} shows the jet multiplicity and leading lepton $p_\mathrm{T}$ distribution comparisons between different generators. \begin{table}[htbp] \centering \caption{Overview of triboson process accuracies for the chosen generators~\cite{bib:pubnote}.} \label{tab:vvv-accuracies} \begin{tabular*}{\textwidth} { l c c c c c } \hline & & $VVV+0j$ & $VVV+1j$ & $VVV+2j$ & $VVV+\geq 3j$ \\ \hline $VVV$ on-shell & \sherpa \texttt{v2.2} & NLO & LO & LO & PS \\ \hline $6\ell,5\ell 1\nu, 4\ell 2\nu, 3\ell 3\nu, 2\ell 4\nu$ & \sherpa \texttt{v2.2} & LO & LO & PS & PS \\ \hline $3\ell 3\nu$ & \vbfnlopythia & LO & PS & PS & PS \\ \hline \end{tabular*} \end{table} \clearpage \begin{figure}[htb] \centering \includegraphics[width=0.4\columnwidth]{figures/VVV_njet.pdf} \includegraphics[width=0.4\columnwidth]{figures/VVV_ptled.pdf} \caption{Comparison of the modeled jet multiplicity and leading lepton $p_\mathrm{T}$ between \sherpa and \vbfnlo in the $WWW\to3\ell3\nu$ process~\cite{bib:pubnote}.} \label{fig:VVV_plot} \end{figure} \section{Conclusion} We have presented the MC modeling of multiboson production processes used by ATLAS in $pp$ collisions at 13 TeV. State-of-the-art generators are investigated and key kinematic distributions of the processes are compared. Systematic uncertainties such as scale and PDF variations were also investigated and are summarized in Ref.~\cite{bib:pubnote}.
\section{Introduction} During the last years, considerable attention has been paid to the interaction of $\eta$-mesons with four nucleons \cite{KruscheWilkin,Machner,Willis,Wronska, Budzanowski,Moskal,Adlarson1,Kelkar,ScRiska}. Analysis of different data is mainly focused on the search for $\eta^4$He bound states. According to the available experimental results, the rise of the $dd\to\eta^4$He experimental cross section at $E_{\eta}\to 0$ seems to be not as steep as in the $pd\to\eta^3$He reaction. As discussed, e.g., in Ref.\,\cite{Willis} the most natural interpretation of this fact is that due to additional attraction caused by one extra nucleon the pole in the $\eta^4$He scattering matrix is shifted into the region of negative values of $Re\,E_{\eta}$ and turns out to be farther from the physical region than the $\eta^3$He pole. It is therefore concluded that formation of the bound $\eta^4$He state is highly probable. In view of the general complexity of the five-body $\eta-4N$ problem there are still no rigorous few-body calculations of this system. At the same time, a systematic practical way of handling the $n$-body interaction is provided by the quasi-particle formalism in which the kernels of integral equations are represented by series of separable terms. This method becomes especially efficient if the driving two-particle potentials are governed by nearby resonances or bound (virtual) states, as in the $NN$ and $\eta N$ case. Then reasonable accuracy may be achieved with only a few separable terms retained in the series. In particular, the quasi-particle formalism is shown to be very well suited for practical calculation of $\eta NN$ \cite{Shev,Pena,FiAr2N} as well as $\eta-3N$ \cite{FiAr3N} scattering (in Ref.\,\cite{Barnea} another method based on the hyperspherical function expansion has been developed). In this letter we apply the quasi-particle method to study the five-body system $\eta-4N$. As a formal basis we use the Alt-Grassberger-Sandhas $n$-body equations derived in Ref.\,\cite{GS}. For the sake of simplicity we neglect the influence of spin and isospin on the interaction between nucleons, treating them as spinless indistinguishable particles. Furthermore, since only the threshold $\eta^4$He energies are considered, we restrict all interactions to $s$-waves. \section{Formalism} As is well known, separable expansion of the kernels allows one to reduce the $n$-body integral equations to the $(n-1)$-body equations, where two of the $n$ particles in each state are effectively treated as a composite particle (quasi-particle). Therefore, the essence of the method is to approximate the $(n-1)$-particle interaction obtained in the separable-potential model again by the separable ansatz. In this respect, to simplify presentation of the formalism, we start directly from successive application of the quasi-particle technique to 2-, 3-, and 4-body subamplitudes, occurring when the five-body system is divided into groups of mutually interacting particles. In what follows, we use the concept of partitions as introduced, e.g., in Ref.\,\cite{Yakubovsky}. Different partitions (as well as the quasi-particles related to these partitions) are further denoted by the symbols $\alpha,\beta,\ldots$, whereas the Latin letters $a,b,\ldots$ are used for numbering the terms in the separable expansions of the subamplitudes. The notation $\alpha_n$ refers to the partition obtained by dividing the $\eta-4N$ system into $n$ groups.
Writing $\alpha_{n+1}\subset \alpha_{n}$ means that the partition $\alpha_{n+1}$ is obtained from $\alpha_{n}$ via further division of the quasi-particle $\alpha_{n}$ into two groups of particles. \begin{figure}[ht] \begin{center} \resizebox{0.25\textwidth}{!}{\includegraphics{Diagr1.eps}} \caption{The potential $Z^{\alpha_2}_{\alpha_3,\beta_3}(z;p,p^\prime)$ as defined in Eq.\,(\ref{eq1_25}) connecting two configurations of the type $(\eta NN)+N$. The dashed and the solid lines represent, respectively, $\eta$-mesons and nucleons. The form factors $u^{\alpha_3}_{\gamma_4}$, $u^{\beta_3}_{\gamma_4}$ are shown by the circles.} \label{fig0} \end{center} \end{figure} The basic ingredient of the formalism is a separable expansion of the quasi-particle amplitudes \begin{eqnarray}\label{eq1_10} &&X^{\alpha_{n}}_{\alpha_{n+1}a,\beta_{n+1}b}(z)= \sum_{k,l=1}^{N_{\alpha_{n}}}|u^{\alpha_{n}(k)}_{\alpha_{n+1}(a)} \rangle\Delta^{\alpha_{n}}_{kl}(z)\langle u^{\alpha_{n}(l)}_{\beta_{n+1}(b)}|\,,\\ &&\alpha_{n+1},\beta_{n+1}\subset\alpha_{n}.\nonumber \end{eqnarray} Then the integral equations for the amplitudes $X^{\alpha_{n-1}}_{\alpha_n,\beta_n}$ are transformed exactly into the quasi-two-body equations, which in operator form read \begin{eqnarray}\label{eq1_15} &&X^{\alpha_{n-1}}_{\alpha_na,\beta_nb}=Z^{\alpha_{n-1}}_{\alpha_na,\beta_nb}+\sum_{\gamma_n} \sum_{k,l=1}^{N_{\gamma_n}} Z^{\alpha_{n-1}}_{\alpha_na,\gamma_nk} \Delta^{\gamma_n}_{kl}X^{\alpha_{n-1}}_{\gamma_nl,\beta_nb}\,,\nonumber\\ &&\alpha_{n},\beta_{n},\gamma_n\subset\alpha_{n-1}\,, \end{eqnarray} or more explicitly \begin{eqnarray}\label{eq1_20} X^{\alpha_{n-1}}_{\alpha_na,\beta_nb}(z;p,p')&=&Z^{\alpha_{n-1}}_{\alpha_na,\beta_nb}(z;p,p') +\sum_{\gamma_n\subset\alpha_{n-1}}\sum_{k,l=1}^{N_{\gamma_n}}\int\,\frac{p^{\prime\prime\,2}dp^{\prime\prime}}{2\pi^2} \, Z^{\alpha_{n-1}}_{\alpha_na,\gamma_nk}(z;p,p^{\prime\prime})\nonumber\\ &\times&\Delta^{\gamma_n}_{kl}\left(z-\frac{p^{\prime\prime\,2}}{2\mu_{\gamma_{n}}}\right) X^{\alpha_{n-1}}_{\gamma_nl,\beta_nb}(z;p^{\prime\prime},p^\prime) \end{eqnarray} with $\mu_{\gamma_{n}}$ being the reduced mass associated with the partition $\gamma_{n}$. The effective potentials $Z^{\alpha_{n-1}}_{\alpha_n,\beta_n}$ are determined as matrix elements of the 'resolvent' $\Delta^{\gamma_{n+1}}$ between the form factors appearing in the expansion (\ref{eq1_10}) \begin{eqnarray}\label{eq1_25} &&Z^{\alpha_{n-1}}_{\alpha_na,\beta_nb}=\sum_{\gamma_{n+1}} \sum_{k,l}\langle u^{\alpha_n(a)}_{\gamma_{n+1}(k)}|\Delta^{\gamma_{n+1}}_{kl} |u^{\beta_n(b)}_{\gamma_{n+1}(l)}\rangle,\\ &&\gamma_{n+1}\subset\alpha_n,\ \gamma_{n+1}\subset\beta_n,\quad \alpha_n\ne\beta_n\,.\nonumber \end{eqnarray} The structure of Eq.\,(\ref{eq1_25}) is conveniently illustrated in the form of diagrams. In Fig.\,\ref{fig0} we show as an example one of the effective potentials $Z^{\alpha_2}_{\alpha_3,\beta_3}$, connecting two configurations of the type $(\eta NN)+N$. Since the nucleons are identical, the condition $\alpha_n\ne\beta_n$ in Eq.\,(\ref{eq1_25}) means that the nucleon lines entering the quasi-particles $\alpha_n$ and $\beta_n$ but not included in the quasi-particle $\gamma_{n+1}$ should be different. To calculate the form factors $u^{\alpha_n(a)}_{\gamma_{n+1}(k)}$ and the propagators $\Delta^{\gamma_{n+1}}_{kl}$ we employed the energy-dependent pole expansion (EDPE) method of Ref.\,\cite{EDPE}.
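In practice, once the momentum integral in (\ref{eq1_20}) is discretised on a quadrature grid, each quasi-two-body equation becomes a finite linear system. The following schematic sketch (single channel, one separable term, a smooth made-up propagator with no poles on the integration path; the real calculation couples all partitions $\gamma_n$ and must handle the singularities of $\Delta^{\gamma_n}$) shows the structure and verifies it against the analytic solution for a rank-one kernel:
\begin{verbatim}
import numpy as np

def solve_quasi_two_body(Z, Delta, p, w):
    # X = Z + Z*Delta*X on the grid:
    # K[i,j] = (w_j p_j^2 / 2 pi^2) Z[i,j] Delta[j]
    K = Z * (w * p**2 / (2*np.pi**2) * Delta)[None, :]
    return np.linalg.solve(np.eye(len(p)) - K, Z)

# Gauss-Legendre nodes mapped from [-1, 1] onto (0, infinity)
x, wx = np.polynomial.legendre.leggauss(48)
p = np.tan(np.pi/4 * (x + 1))
w = wx * (np.pi/4) / np.cos(np.pi/4 * (x + 1))**2

gf = 1.0 / (1.0 + p**2)        # made-up Yamaguchi-like form factor
Z = np.outer(gf, gf)           # rank-one effective potential
Delta = -1.0 / (1.0 + p**2)    # made-up smooth quasi-particle propagator

X = solve_quasi_two_body(Z, Delta, p, w)

# for a rank-one kernel the solution is Z/(1 - s), s the bubble integral
s = np.sum(w * p**2 / (2*np.pi**2) * Delta * gf**2)
assert np.allclose(X, Z / (1 - s))
\end{verbatim}
The separable expansions (\ref{eq1_10}) keep the number of coupled channels, and hence the size of such systems, small.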
\subsection{Four-body partitions} Considering nucleons as indistinguishable particles we have only two different types of four-particle partitions: \begin{equation}\label{eq2_10} 1:\ (NN)+N+N+\eta\,, \quad 2:\ (\eta N)+N+N+N\,. \end{equation} The partitions 1 and 2 and the related two-particle subsystems $NN$ and $\eta N$ will further be labeled by the index $\alpha_4=1,2$. In the present calculation, the $NN$ and $\eta N$ $s$-wave interactions were approximated by the simplest rank-one separable potentials. For $NN$ we employed \begin{equation}\label{eq2_20} v_1(z)=-|g_1\rangle\langle g_1|\,. \end{equation} The corresponding $t$-matrix has the usual form \begin{equation}\label{eq2_21} t_1(z)=|g_1\rangle \tau_1(z)\langle g_1| \end{equation} with the $NN$ propagator \begin{equation}\label{eq2_15} \tau_1(z)=-\bigg[1+\frac{1}{2\pi^2}\int_0^\infty\frac{g_1(q)^2}{z-q^2/M_N}\,q^2dq \bigg]^{-1}\,, \end{equation} where $M_N$ is the nucleon mass. The form factors were chosen in the Yamaguchi form \begin{equation}\label{eq2_22} g_1(q)=\frac{\sqrt{\lambda_{NN}}}{1+(q/\beta)^2}\,. \end{equation} Since we treat nucleons as spinless particles, the strength $\lambda_{NN}$ was taken as an average of the singlet and the triplet strengths \begin{equation}\label{eq2_30} \lambda_{NN}=\frac{1}{2}(\lambda^{(0)}_{NN}+\lambda^{(1)}_{NN}),\quad \lambda^{(s)}_{NN}=\frac{8\pi a_s}{M_N(a_s\beta-2)}. \end{equation} The singlet and the triplet scattering lengths, $a_0$ and $a_1$, as well as the cut-off momentum $\beta$ were taken directly from the analysis \cite{Yamag} of the low-energy $np$ scattering \begin{equation}\label{eq2_35} a_0=23.690\,\mbox{fm},\ a_1=-5.378\,\mbox{fm},\ \beta=1.4488\,\mbox{fm}^{-1}\,. \end{equation} It is well known that the Yamaguchi $NN$ potential overestimates attraction at high momenta and yields significant overbinding already in the $^3$He case (see Table \ref{tab1}). Therefore we also adopted the spin-independent $NN$ potential with Gaussian form factors \begin{equation}\label{eq2_23} g_1(q)=\sqrt{\lambda_{NN}}\,e^{-q^2/\beta^2}\,, \end{equation} which yields the same binding energy $E_{NN}$ of two nucleons. The form factors (\ref{eq2_23}) with parameters listed in Table \ref{tab1} give for the three- and four-nucleon binding energies, $E_{3N}$ and $E_{4N}$, values which are rather close to those of the $^3$He and $^4$He nuclei. At the same time, with the Gaussian form factors we have a visibly larger value of the $NN$ effective range $r_0$ (see Table \ref{tab1}). \begin{table} \renewcommand{\arraystretch}{1.3} \caption{The $NN$ potential parameters. $E_{NN}$, $E_{3N}$, and $E_{4N}$ are the two-, three-, and four-nucleon binding energies calculated with our model.} \begin{tabular*}{9.1cm}{@{\hspace{0.2cm}}c@{\hspace{0.2cm}}|@{\hspace{0.4cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.2cm}}} \hline\hline Type & $\lambda_{NN}$ & $\beta$ & $E_{NN}$ &$r_0$ & $E_{3N}$ & $E_{4N}$ \\ & fm$^2$ & fm$^{-1}$ & MeV & fm & MeV & MeV \\ \hline Yamaguchi & 4.17 & 1.45 & 0.428 & 1.89 & 12.6 & 54.8 \\ Gauss & 6.51 & 1.24 & 0.428 & 2.33 & 8.05 & 30.3 \\ \hline\hline \end{tabular*} \label{tab1} \end{table}
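For orientation, the two-nucleon pole of (\ref{eq2_15}) can be located numerically from the Yamaguchi row of Table \ref{tab1}. The following sketch (units $\hbar=c=1$ with momenta in fm$^{-1}$ and $M_N\approx 938.9$ MeV; these conventions are an assumption made here) reproduces a binding energy close to the quoted $E_{NN}=0.428$ MeV:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbarc = 197.327                # MeV fm
MN = 938.9 / hbarc             # nucleon mass in fm^-1
lam, beta = 4.17, 1.45         # Yamaguchi row of Table 1 (fm^2, fm^-1)
gf = lambda q: np.sqrt(lam) / (1 + (q/beta)**2)

def bracket(E):
    # the bracket of eq. (2_15) at z = -E, with E > 0 in fm^-1
    igrand = lambda q: gf(q)**2 * q**2 / (-E - q**2/MN)
    return 1 + quad(igrand, 0, np.inf)[0] / (2*np.pi**2)

E_NN = brentq(bracket, 1e-6, 1.0)      # pole of tau_1: bracket vanishes
print(f"E_NN = {E_NN*hbarc:.3f} MeV")  # ~0.43 MeV, cf. Table 1
\end{verbatim}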
The $\eta N$ $s$-wave interaction was reduced to excitation of the resonance $N(1535)1/2^-$ only. To include pions we used a conventional coupled channel formalism, where the resulting separable $t$-matrix has the matrix form \begin{equation}\label{eq2_40} t_{\mu\nu}(z)=\frac{1}{W-M_0}|g_\mu\rangle\tau_2(z)\langle g_\nu|\,,\quad \mu,\nu\in\{\pi,\eta\} \end{equation} with \begin{equation}\label{eq2_45} g_\mu(q)=\frac{g_\mu}{1+(q/\beta_\mu)^2}\,. \end{equation} The propagator \begin{equation}\label{eq2_50} \tau_2(z)=\frac{1} {W-M_0-\Sigma_\eta(W)-\Sigma_\pi(W)+\frac{i}{2}\Gamma_{\pi\pi}(W)} \end{equation} with $W=z+M_N+M_\eta$, where $M_\eta$ is the $\eta$ mass, is determined by the $N(1535)1/2^-$ self-energies $\Sigma_\eta(W)$ and $\Sigma_\pi(W)$. The two-pion channel was included via the $\pi\pi N$ decay width $\Gamma_{\pi\pi}$ parametrized in the form \begin{equation}\label{eq2_55} \Gamma_{\pi\pi}(W)=\gamma_{\pi\pi}\frac{W-M_N-2M_\pi}{M_\pi}\,. \end{equation} The parameters $g_{\eta}$, $\beta_{\eta}$, $g_{\pi}$, $\beta_{\pi}$, $M_0$, and $\gamma_{\pi\pi}$ were chosen in such a way that the scattering amplitude $f_{\eta N}$ corresponding to our $t$-matrix $t_{\eta\eta}$ (\ref{eq2_40}) is close to that obtained in the coupled-channel analyses in the energy region from 20 MeV above the $\eta N$ threshold to 100 MeV below the threshold. Here we took the results of two works, \cite{Wycech} and \cite{KSW}, predicting rather different values of $Re\,f_{\eta N}$ (see Fig.\,\ref{fig1}). \begin{table} \renewcommand{\arraystretch}{1.3} \caption{The $\eta N-\pi N$ parameters.} \begin{tabular*}{9.1cm}{@{\hspace{0.3cm}}c@{\hspace{0.3cm}}|@{\hspace{0.4cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.5cm}}c@{\hspace{0.2cm}}} \hline\hline Set [Ref.] & $g_{\eta}$ & $\beta_\eta$ & $g_{\pi}$ &$\beta_\pi$ & $M_0$ & $\gamma_{\pi\pi}$ \\ & & MeV & & MeV & MeV & MeV \\ \hline I \cite{Wycech} & 1.91 & 636 & 0.651 & 850 & 1577 & 4.0 \\ II \cite{KSW} & 1.23 & 636 & 1.28 & 350 & 1527 & 1.0 \\ \hline\hline \end{tabular*} \label{tab2} \end{table} \begin{figure}[ht] \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{EtaN.eps}} \caption{The $S_{11}$ partial wave of the $\eta N$ scattering amplitude calculated with Sets I and II of the parameters listed in Table\,\ref{tab2}. Notations: solid curve: real part, dashed curve: imaginary part. Crosses and squares represent the results of the coupled channel analysis of Refs.\,\cite{Wycech} and \cite{KSW}, respectively.} \label{fig1} \end{center} \end{figure} \subsection{Three-body partitions} We have four different three-body partitions \begin{equation}\label{eq3_10} \begin{array}{llll} 1: & (NNN)+N+\eta\,, & 2: & (\eta NN)+N+N\,, \\ 3: & (\eta N)+(NN)+N\,,\phantom{xxx} & 4: & (NN)+(NN)+\eta \end{array} \end{equation} which in the following are enumerated by the index $\alpha_3=1,\ldots,4$. In the latter two cases there are two pairs of interacting particles propagating independently. The effective potentials $Z^{\alpha_3}_{\alpha_4,\beta_4}$ determined by Eq.\,(\ref{eq1_25}) for $n=4$ are matrix elements of the free resolvent $G_0$ between the form factors $g_{\alpha_4}$ $(\alpha_4=1,2)$ \begin{equation}\label{eq3_15} Z^{\alpha_3}_{\alpha_4,\beta_4}=\langle g_{\alpha_4}|G_0|g_{\beta_4}\rangle\,. \end{equation} The functions $g_{\alpha_4}(q)$ are given by Eqs.\,(\ref{eq2_22}) (or (\ref{eq2_23})) and (\ref{eq2_45}) with $g_2(q)\equiv g_\eta(q)$.
Here we omit the superfluous indices $a,b$, since our separable ansatz for $NN$ and $\eta N$ amplitudes contains in both cases only one term (see Eqs.\,(\ref{eq2_21}) and (\ref{eq2_40})). \begin{figure}[ht] \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{Diagr4.eps}} \caption{Effective quasi-two-body equations for the $(\eta N)-(NNN)$ amplitudes $X^2_{\alpha_3,\beta_3}$. Notation of the lines as in Fig.\,\ref{fig0}. The lower and the upper indices in $u^{\alpha_3}_{\alpha_4}$ refer to the numbers of the four- and three-body partitions listed in Eqs.\,(\ref{eq2_10}) and (\ref{eq3_10}), respectively. The numerical coefficients appear due to symmetrization of the nucleon states.} \label{fig2} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{Diagr3.eps}} \caption{Same as in Fig.\,\ref{fig2} for the $(\eta NN)-(NN)$ amplitudes $X^3_{\alpha_3,\beta_3}$.} \label{fig3} \end{center} \end{figure} \subsection{Two-body partitions} There are four two-body partitions of the $\eta-4N$ system: \begin{equation}\label{eq4_10} \begin{array}{llll} 1: & \eta+(NNNN)\,, & 2: & (\eta N)+(NNN)\,, \\ 3: & (\eta NN)+(NN)\,,\phantom{xxxxx} & 4: & (\eta NNN)+N \end{array} \end{equation} which will be labeled by $\alpha_2=1,\ldots,4$. The effective potentials $Z^{\alpha_2}_{\alpha_3a,\beta_3b}$ are matrix elements of the 'resolvent' $\tau_{\alpha_4}$ between the form factors $u^{\alpha_3(a)}_{\alpha_4}$ appearing in the separable expansion (\ref{eq1_10}) for $n=3$: \begin{equation}\label{eq4_15} Z^{\alpha_2}_{\alpha_3a,\beta_3b}=\sum_{\gamma_4=1,2} \langle u^{\alpha_3(a)}_{\gamma_4}|\tau_{\gamma_4}|u^{\beta_3(b)}_{\gamma_4}\rangle\,. \end{equation} The propagators $\tau_{\alpha_4}$ ($\alpha_4=1,2$) are given by (\ref{eq2_15}) and (\ref{eq2_50}) with $\tau_2\equiv\tau_{\eta\eta}$. The calculation of the $NNNN$ ($\alpha_2=1$) and $\eta NNN$ ($\alpha_2=4$) amplitudes with separable $NN$ potentials may be found, e.g., in Refs.\,\cite{AGS_4N} and \cite{FiAr3N}, and we refer the reader to these works. The effective $(3+2)$ amplitudes ($\alpha_2=2,3$) describe propagation of two groups of mutually interacting particles. The corresponding integral equations are schematically presented in Figs.\,\ref{fig2} and \ref{fig3}. After the separable expansions (\ref{eq1_10}) for $n=2$ are calculated we build the effective potentials $Z_{\alpha_2a,\beta_2b}$ (\ref{eq1_25}) as \begin{equation}\label{eq4_30} Z_{\alpha_2a,\beta_2b}=\sum_{\gamma_3=1}^4\sum_{k,l=1}^{N_{\gamma_3}} \langle u^{\alpha_2(a)}_{\gamma_3(k)}|\Delta^{\gamma_3}_{kl} |u^{\beta_2(b)}_{\gamma_3(l)}\rangle\,. \end{equation} The corresponding system of the five-body $\eta-4N$ equations is diagrammatically presented in Fig.\,\ref{fig4}. After this system is solved, the $\eta ^4$He scattering amplitude can be calculated as \begin{equation}\label{eq4_45} f_{\eta^4\mathrm{He}}(p)=-N^2\frac{\mu}{2\pi}\,X_{11,11}(z;p,p)\,. \end{equation} Here $N$ is the normalization constant of the $^4$He wave function, $\mu$ is the $\eta-^4$He reduced mass, and the momentum $p$ is fixed by the on-mass-shell condition \begin{equation}\label{eq4_50} p=\sqrt{2\mu (z+E_{4N})}\,, \end{equation} where $E_{4N}>0$ is the four-nucleon binding energy given in Table \ref{tab1}. \begin{figure*}[ht] \begin{center} \resizebox{1.0\textwidth}{!}{\includegraphics{Diagr6.eps}} \caption{Graphical representation of the effective quasi-two-body equations for $\eta-4N$ scattering. Notations as in Fig.\,\ref{fig0}.
The lower and the upper indices in $u^{\alpha_2}_{\alpha_3}$ refer to the three- and two-body partitions, as given in Eqs.\,(\ref{eq3_10}) and (\ref{eq4_10}). The numerical factors arise from the identity of the nucleons.} \label{fig4} \end{center} \end{figure*} \begin{table} \renewcommand{\arraystretch}{1.5} \caption{ The scattering length $a_{\eta^4\mathrm{He}}$ as a function of $N_{\alpha_2}$ ($\alpha_2=1,\ldots,4$), the number of separable terms retained in the separable expansion (\ref{eq1_10}) for the (4+1) and (3+2) subamplitudes $X^{\alpha_2}$. The calculation is performed with the Gauss $NN$ potential and Set I of the $\eta N-\pi N$ parameters.} \begin{tabular*}{8.3cm}{@{\hspace{0.6cm}}c@{\hspace{0.9cm}}c@{\hspace{0.9cm}}c@{\hspace{0.9cm}}c@{\hspace{0.9cm}}|@{\hspace{0.6cm}}c@{\hspace{0.9cm}}} \hline\hline $N_1$ & $N_2$ & $N_3$ & $N_4$ & $a_{\eta\, ^4\mathrm{He}}$ [fm] \\ \hline 2 & 2 & 2 & 2 & $5.56+0.96\,i$ \\ 4 & 4 & 4 & 4 & $4.88+1.23\,i$ \\ 4 & 4 & 6 & 6 & $4.83+1.23\,i$ \\ 6 & 6 & 8 & 8 & $4.79+1.22\,i$ \\ 10 & 10 & 12 & 12 & $4.79+1.22\,i$ \\ 20 & 20 & 20 & 20 & $4.80+1.22\,i$ \\ \hline\hline \end{tabular*} \label{tab3} \end{table} In Table\,\ref{tab3} we present the value of the $\eta^4$He scattering length calculated with different numbers $N_{\alpha_2}$ of terms retained in the separable expansion (\ref{eq1_10}) of the amplitudes $X^{\alpha_2}_{\alpha_3,\beta_3}$. As one can see, satisfactory accuracy is achieved with $N_1=N_2=6$, $N_3=N_4=8$. In principle, already with the first four terms in each expansion the resulting scattering length is within less than 2$\%$ of the correct value. Thus, also in the five-body case $\eta-4N$ the quasi-particle approach based on the EDPE method of Ref.\,\cite{EDPE} is very suitable for practical applications. The minimum number of separable terms $N_{\alpha_2}$ only slightly exceeds that for the four-body kernels, where convergence is achieved already with the first four to six terms in each subamplitude. \section{Discussion and conclusion} \label{sec:results} As our main result we present the $\eta^4$He scattering length $a_{\eta^4\mathrm{He}}=f_{\eta^4\mathrm{He}}(0)$. It is given in Table \ref{tab4} for two versions of the $NN$ potential. For comparison, the $\eta^3$He scattering length calculated with the same sets of the $NN$ and $\eta N-\pi N$ parameters is also presented. It is remarkable that, despite the larger number of nucleons in $^4$He, the predicted value of $a_{\eta^4\mathrm{He}}$ is smaller than $a_{\eta^3\mathrm{He}}$. Direct calculation shows that the main reason for this somewhat unexpected result is the rather rapid decrease of the $\eta N$ scattering amplitude in the subthreshold region (see Fig.\,\ref{fig1}). Because of the essentially stronger binding of $^4$He in comparison to $^3$He, in the former case the effective in-medium $\eta N$ interaction acts at lower internal $\eta N$ energies, thus leading to a general reduction of the attractive $\eta N$ forces (this question was addressed in detail in Refs.\,\cite{WyGrNisk,HaidLiu,WycechKrz}). This effective weakening may qualitatively explain why the peculiar slope in the $\eta$ spectrum at low energies seen in the data for $dd\to\eta^4$He \cite{Adlarson1} and $pd\to\eta ^3$He \cite{Mersmann,Smyrski} becomes less steep when we turn from $\eta^3$He to $\eta^4$He. \begin{table}[h] \renewcommand{\arraystretch}{1.5} \caption{The $\eta^3$He and $\eta^4$He scattering lengths predicted by our calculation.
The first and the second rows for each version of the $NN$ potential list the values obtained with Set I and Set II of the $\eta N-\pi N$ parameters, respectively.} \begin{tabular*}{8.3cm} {@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}|@{\hspace{0.8cm}}c@{\hspace{1.0cm}} c@{\hspace{0.5cm}}} \hline\hline $NN$ & $\eta N-\pi N$ & $a_{\eta^3\mathrm{He}}$ [fm] & $a_{\eta^4\mathrm{He}}$ [fm] \\ \hline Yamaguchi & I & $6.5+3.6\,i$ & $2.2+0.3\,i$ \\ & II & $1.1+0.5\,i$ & $0.5+0.1\,i$ \\ Gauss & I & $6.7+4.0\,i$ & $4.8+1.2\,i$ \\ & II & $1.3+0.7\,i$ & $1.0+0.3\,i$ \\ \hline\hline \end{tabular*} \label{tab4} \end{table} Summarizing, the $\eta^4$He interaction has been calculated for the first time with the few-body aspects of the problem treated correctly. Applying the separable representation first to the (3+1) and (2+2) and then to the (4+1) and (3+2) kernels, we have solved the five-body Alt-Grassberger-Sandhas equations, reducing them to a coupled set of quasi-two-body equations of Lippmann-Schwinger structure. The predicted value of $Re\,a_{\eta^4\mathrm{He}}$ is positive and turns out to be smaller than $Re\,a_{\eta^3\mathrm{He}}$. This finding should be attributed to effective weakening of the in-medium $\eta N$ interaction. According to our calculation, the increase of the attractive forces due to an extra nucleon in $^4$He is overwhelmed by the stronger suppression of the subthreshold $\eta N$ interaction in a denser nucleus. The resulting attraction in the $\eta-4N$ system is too weak and does not support the existence of an $\eta^4$He bound state, at least with the $\eta N$ parameters used in the present calculation. This might be the key reason why no signal of $\eta^4$He bound state formation has yet been revealed, e.g., in the $dd\to^3$He\,$n\pi^0$ and $dd\to^3$He\,$p\pi^-$ reactions \cite{Krzemien,Adlarson2}. Finally, we note that although our results obviously suffer from the oversimplified treatment of the $NN$ potential, they demonstrate the applicability of the quasi-particle formalism to the five-body $\eta^4$He problem. The EDPE method provides rather rapid convergence of the separable expansion, so that the transition from $\eta-3N$ to the $\eta-4N$ case is performed without a drastic increase of numerical complexity. At the same time, a more refined treatment requires inclusion of the nucleon spin as well as a more sophisticated nucleon-nucleon potential instead of our simple rank-one ansatz.
\section{Introduction} \label{sec:intro} Astrophysical relativistic shocks are a prominent site for production of broadband emission because of their apparent ability to accelerate non-thermal particles very efficiently. Another, potentially very important, feature of relativistic shocks is that they boost the minimum energy of the downstream particles to relativistic values \citep[see][]{2017ApJ...835..248W}. Thus, particle distributions with inversely populated energy levels in the relativistic domain can be created. This opens the possibility for the operation of radiation mechanisms involving stimulated emission in the radio band. If such emission is detected, it should allow an accurate diagnostic of the physical conditions, such as particle density and plasma magnetization, in the environment created by relativistic shocks. Among other applications, relativistic shocks have recently been invoked to explain the observational properties of fast radio bursts (FRBs). These millisecond-scale transient events were discovered by \cite{2007Sci...318..777L} (see a brief recent review in \citealt{2020Natur.587...45Z}). Presently, several hundred non-repeating events have been reported\footnote{See on-line data at https://www.herta-experiment.org/frbstats/catalogue}, and several tens of repeating sources are known, from some of which tens and even hundreds of bursts have been detected.\footnote{See, e.g., https://www.chime-frb.ca/repeaters for the CHIME telescope data on repeating sources.} At the moment, the bursts themselves are detected only in the radio band, at frequencies from $\sim100$~MHz up to $\sim10$~GHz (see \citealt{2021Univ....7...76N} about multiwavelength observations of bursts and their sources). Several observational features of FRBs suggest that a coherent radiation mechanism is responsible for their generation \citep{2014MNRAS.442L...9L,2014PhRvD..89j3009K}. In particular, these coherent emission ``smoking guns'' include the high luminosity and the nearly \(100\%\) linear polarization detected for some FRBs \citep[see, e.g.,][]{2018ApJ...863....2G,2018Natur.553..182M,2019MNRAS.488..868O}. This favors scenarios involving synchrotron maser emission; however, the specific realization of the process still remains debated. Synchrotron maser emission can be produced at the relativistic gyrofrequency \citep[see, e.g.,][note that in the case of magnetized plasma, the relativistic gyrofrequency and the plasma frequency have similar values]{1991PhFlB...3..818H,1992ApJ...391...73G,2019MNRAS.485.3816P}. However, for weakly magnetized shocks synchrotron maser emission can be generated also at significantly higher frequencies \citep[see, e.g.,][]{1970SvA....13..797S}. We apply this scenario to estimate the frequency of synchrotron maser emission behind the pulsar wind termination shock (TS) and at relativistic shocks caused by magnetar flares. Magnetar bursts were proposed as possible sources of FRBs already in 2007 (see \citealt{2010vaoa.conf..129P}). Now, the leading scenarios of FRB activity are related to this type of neutron star (see a review in \citealt{2021Univ....7...56L}). Magnetars are neutron stars with strong magnetic fields (see a review e.g. in \citealt{2015RPPh...78k6901T}). In the first place, they are known as sources of powerful bursts with total luminosity covering a wide range up to $\sim10^{47}$~erg~s$^{-1}$. Strong bursts are rare, following a power-law distribution $dN/dE_\mathrm{fl}\sim E_\mathrm{fl}^{-\gamma}$ with $\gamma\approx1.4-2$, where $E_\mathrm{fl}$ is the total energy of the flare.
The three most energetic bursts --- the so-called \emph{giant flares} and/or \emph{hyper flares} --- were detected from Galactic sources. However, several well-established candidates for extragalactic flares are also known (see, e.g., \citealt{2021ApJ...907L..28B} and references therein). In 2020, simultaneous bursts in radio \citep{2020Natur.587...54C, 2020Natur.587...59B} and X/$\gamma$-rays \citep{2020ApJ...898L..29M, 2021NatAs...5..372R, 2021NatAs...5..401T, 2021NatAs.tmp...54L} were detected from the Galactic magnetar SGR 1935+2154 (SGR stands for Soft Gamma Repeater). This made the link between FRB sources and magnetars even stronger. In contrast to other studies \citep[see, e.g.,][]{2017ApJ...842...34W}, we do not adopt assumptions regarding the isotropy of the stimulated emission in the plasma co-moving frame and consider the possibility that the maser emission features a significant anisotropy (a detailed analysis will be presented in Khangulyan et al. 2022). We show that anisotropy of the maser emission would imply a short duration of the maser flashes triggered by magnetar flares. Furthermore, our estimates show that the conditions that can be naturally achieved at powerful magnetar flares are sufficient for the generation of synchrotron maser emission in the GHz band. This supports the scenarios that suggest magnetars as sources of FRBs \citep[see, e.g.,][]{2020ApJ...896..142B, 2017MNRAS.468.2726K, 2014MNRAS.442L...9L, 2020ApJ...897....1L, 2019MNRAS.485.4091M, 2016ApJ...824L..18L, 2017ApJ...838L..13L, 2021arXiv210207010L} and alleviates the extreme assumptions required for their realization. Moreover, if a magnetar is located in a binary system or moves with a high proper speed through the interstellar medium, this can further increase the frequency at which the synchrotron maser operates. \section{Maser emission at pulsar wind termination shock} \label{sec:magnetars} \citet{1991PhFlB...3..818H} have shown that if electrons have a ``ring'' momentum distribution, i.e., follow gyrorotation coherently for several revolutions, then synchrotron maser emission can be generated at the relativistic gyrofrequency, \begin{equation} \Omega_{L,e}=\frac{ceB}{E}\,. \end{equation} Here, \(E\) and \(B\) are the particle energy and the magnetic field, respectively (also note that \(m_e\), \(e\), and \(c\) are the conventional constants: the electron mass, the elementary charge, and the speed of light). Particle-in-cell (PIC) simulations and theoretical considerations indicate the possible existence of several coherent gyration cycles in the downstream of a relativistic shock \citep{1988PhRvL..61..779L,1992ApJ...391...73G,2019MNRAS.485.3816P}. Such a particle distribution has a population inversion; thus, in a region of thickness $\sim r_g$, synchrotron maser emission can be formed. However, if the electron distribution is less regular, then it is still unclear if maser emission can be generated in the range of frequencies where gyrorotation is important. Moreover, the relativistic gyrofrequency is typically quite low, and extreme assumptions are required to match it to the frequencies at which FRBs are observed. For example, \citet{2014MNRAS.442L...9L} adopted a magnetar flare magnetic field of \(B\sim10^5\rm\,G\) at a distance of \(10^{15}\rm\,cm\); this corresponds to an isotropic luminosity of \(\gg10^{50}\rm \,erg\,s^{-1}\), which is significantly larger than the values typically assumed. Most likely, a sufficiently strong magnetic field can be realized only in the pulsar / magnetar magnetosphere \citep[see, e.g.,][]{2020ApJ...897....1L}.
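As a quick order-of-magnitude check of the last statement, the short script below (Python) evaluates the isotropic Poynting luminosity implied by such a field; the estimate \(L\sim cB^2r^2\) is our simplifying assumption, as the exact prefactor depends on the field geometry:
\begin{verbatim}
# Sketch (our assumption: L ~ c * B^2 * r^2, prefactor of order
# unity ignored) of the isotropic luminosity implied by the field
# adopted in 2014MNRAS.442L...9L.
c = 3e10   # speed of light [cm/s]
B = 1e5    # magnetic field at the shock [G]
r = 1e15   # distance from the magnetar [cm]

L_iso = c * B**2 * r**2
print(f"L_iso ~ {L_iso:.0e} erg/s")   # ~3e50 erg/s, i.e. >> 1e50 erg/s
\end{verbatim}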
Alternatively, one can assume that the maser emission is produced at a shock that moves with a large bulk Lorentz factor, \(\gg100\), in the laboratory frame \citep{2019MNRAS.485.4091M,2019arXiv190807743B}. Significantly above the cyclotron frequency, \(\omega\gg\Omega_{L,e}\), the dielectric permittivity is simply \( \varepsilon = 1 - \left(\frac{\Omega_{p,e}}{\omega}\right)^2\) (for a more detailed discussion see Appendix~\ref{sec:maser}). For this regime, \citet{1967JETP...24..381Z} obtained that synchrotron emission is amplified if the main contribution to the absorption is provided by particles with sufficiently large energy, \begin{equation}\label{eq:maser_energy} E >E_{\rm min}= m_ec^2 \frac{2\omega_{L,e} \omega^2}{\Omega_{p,e}^3}\,, \end{equation} where \(\omega_{L,e}={eB}/({m_e c})\) is the non-relativistic cyclotron frequency. Using this equation, we can estimate the frequency below which the maser emission can be formed: \begin{equation}\label{eq:maser_energy2} \omega<\omega_{\rm max}= \sqrt{\frac{E}{2m_ec^2}\frac{ \Omega_{p,e}^3}{\omega_{L,e}}}\,, \end{equation} which is almost identical to the expression, \(\Omega_{p,e}\sqrt{\Omega_{p,e}/\Omega_{L,e}}\), obtained by \citet{2019ApJ...875..126G}. When crossing a relativistic shock, particles get their energy boosted by a factor \(\Gamma_{\rm sh}\), which is approximately equal to the upstream bulk Lorentz factor measured in the frame of the shock wave. Thus, if the upstream is cold, then we can simply adopt \(E_{\rm min}\approx\Gamma_{\rm sh}m_ec^2\). Considering the theoretical and numerical results outlined above, we may expect the formation of maser emission at the frequency (we refer to this as the ``low-frequency maser window'') \begin{equation}\label{eq:maser_standard} \omega_{\rm m,1}\sim \frac{eB}{m_ec\Gamma_{\rm sh}}\,, \end{equation} from particles having a ``ring'' momentum distribution (the formation of such a distribution requires a cold upstream; see also the analysis with PIC simulations in \citealt{2019MNRAS.485.3816P,2020MNRAS.499.2884B}); and at (we refer to this as the ``high-frequency maser window'') \begin{equation}\label{eq:condition} \Omega_{L,e}\ll\omega_{\rm m,2}< \sqrt{\frac{\Omega_{\rm p,e}^3}{2\Omega_{\rm L,e}}}\,. \end{equation} The latter regime can be realized only if \begin{equation} \frac{\Omega_{L,e}}{\Omega_{\rm p,e}}\ll 1\,. \end{equation} This condition can be rewritten as \begin{equation} \frac{B^2/(4\pi)}{n_e \Gamma_{\rm sh} m_ec^2}\ll1\,, \end{equation} which implies a condition on the plasma magnetization (i.e., the ratio of the Poynting flux to the plasma kinetic energy flux) in the upstream of the shock: \(\sigma\ll1\) (where we ignore a factor of \(\sim3\) for simplicity). This is consistent with previous analyses of this process: high-frequency maser emission can be generated in weakly magnetized plasma \citep{1970SvA....13..797S,2002ApJ...574..861S,2019ApJ...875..126G}. The magnetization of pulsar winds at the shock might be quite small, \(\sigma\sim10^{-3}-10^{-1}\). Thus, a priori, we cannot exclude that the conditions behind astrophysical shocks, in particular, pulsar wind TSs, are suitable for the production of synchrotron maser emission in the range of frequencies \(\omega_{\rm m,2}\). In what follows we estimate the frequency that corresponds to the high end of this range: \begin{equation} \omega_{\rm max} =\sqrt{\frac{E_{\rm min}\Omega_{\rm p,e}^3}{2m_ec^2\omega_{\rm L,e}}}\,.
\end{equation} According to Eq.~\eqref{eq:condition}, maser emission can be generated in the range \( \sigma^{\nicefrac{3}{4}}\omega_{\rm max} \ll\omega<\omega_{\rm max}\). To compute the actual absorption coefficient, one needs to know the electron distribution and then to perform the integration in Eq.~\eqref{eq:absorption}. Figure~1 in \citet{2019ApJ...875..126G} shows that the frequency range with a negative absorption coefficient is quite narrow, between \(\omega_{\rm max}/3\) and \(\omega_{\rm max}\). The generation of maser emission at the relativistic gyrofrequency, Eq.~\eqref{eq:maser_standard}, is discussed in a number of papers, including its implications for FRBs \citep{2014MNRAS.442L...9L,2020ApJ...897....1L,2019MNRAS.485.4091M, 2020ApJ...896..142B}. The possibility of the production of FRBs by synchrotron maser emission in the range given by Eq.~\eqref{eq:condition} has received much less attention. However, this range has an obvious advantage: this mechanism allows producing coherent emission at significantly higher frequencies, thus it can alleviate the need for the extreme assumptions adopted, e.g., in \citet{2014MNRAS.442L...9L}. Below we discuss the conditions required for its ignition at a pulsar (or magnetar) wind TS. We consider two cases: the TS formed by a steady pulsar wind (``steady case'') and the interaction of an intense flare with the pulsar wind nebula (``non-steady case''). \subsection{Steady case} The conditions behind a steady reverse shock in a pulsar wind are determined by a few parameters: the pulsar spin-down luminosity, \(L_{\rm sd}\), the pulsar wind magnetization, \(\sigma\), its bulk Lorentz factor, \(\Gamma_{\rm wind}\), and the radius of the TS, \(R_{\rm ts}\). Since we are interested in the case with \(\sigma\ll1\) and a large upstream bulk Lorentz factor, the downstream speed is simply \(\nicefrac{c\,}{3}\) (the corresponding bulk Lorentz factor is \(\nicefrac{3}{\sqrt{8}}\)). This allows obtaining all other parameters of the downstream. Namely, the magnetic field (in the plasma co-moving frame) is \begin{equation}\label{eq:B_steady} B\approx \sqrt{\frac{8\sigma L_{\rm sd}}{R_{\rm ts}^2 c}}\,; \end{equation} the electron number density (in the plasma co-moving frame) is \begin{equation} n_e\approx \frac{(1-\sigma)L_{\rm sd}}{\sqrt{2}\pi R_{\rm ts}^2 m_ec^3\Gamma_{\rm wind}}\,; \end{equation} and the plasma internal energy is \begin{equation} \varepsilon_{\rm pwn}\approx \frac{(1-\sigma)L_{\rm sd}}{\sqrt{2}\pi R_{\rm ts}^2 c}\,. \end{equation} Thus, we obtain that the plasma and cyclotron frequencies are \begin{equation} \Omega_{\rm p,e} = 2^{\nicefrac{3}{4}}\frac{e}{m_ec}\sqrt{\frac{(1-\sigma)L_{\rm sd}}{ R_{\rm ts}^2 c\Gamma_{\rm wind}^2}}\, \end{equation} and \begin{equation} \omega_{\rm L,e} = \frac{e}{m_ec}\sqrt{\frac{8\sigma L_{\rm sd}}{R_{\rm ts}^2 c}}\,. \end{equation} Thus, the maximum frequency for the synchrotron maser radiation is \begin{equation} \begin{split} \omega_{\rm max} &\approx 2^{\nicefrac{-1}{8}}\frac{e}{m_ec} \sqrt{\frac{L_{\rm sd}}{R_{\rm ts}^2 c}} \frac{(1-\sigma)^{\nicefrac{3}{4}}}{ \Gamma_{\rm wind}\sigma^{\nicefrac{1}{4}}}\,\\ &\approx 3\times10^3L_{\rm sd,38}^{\nicefrac{1}{2}}R_{\rm ts,15}^{-1}{ \Gamma_{\rm wind,3}^{-1}\sigma_{-2}^{\nicefrac{-1}{4}}}\quad[{\rm rad \; s^{-1}}]\,, \end{split} \label{eq:omax} \end{equation} where \(L_{\rm sd} = 10^{38}L_{\rm sd, 38} \rm\,erg\,s^{-1}\), and we adopted \(E_{\rm min} = m_ec^2\Gamma_{\rm wind}\).
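The numerical coefficient in Eq.~\eqref{eq:omax} can be reproduced with a short script (Python, CGS constants; this is our numerical sketch of the estimate, not part of the original derivation):
\begin{verbatim}
import math

# Evaluate Eq. (eq:omax) for the fiducial parameters (CGS units).
e     = 4.803e-10   # elementary charge [esu]
m_e   = 9.109e-28   # electron mass [g]
c     = 2.998e10    # speed of light [cm/s]

L_sd  = 1e38        # pulsar spin-down luminosity [erg/s]
R_ts  = 1e15        # termination-shock radius [cm]
Gamma = 1e3         # wind bulk Lorentz factor
sigma = 1e-2        # wind magnetization

omega_max = (2**(-1/8) * e / (m_e * c)
             * math.sqrt(L_sd / (R_ts**2 * c))
             * (1 - sigma)**0.75 / (Gamma * sigma**0.25))
print(f"omega_max ~ {omega_max:.1e} rad/s")  # ~2.9e3 rad/s
\end{verbatim}
The result, \(\sim3\times10^3\rm\,rad\,s^{-1}\), corresponds to \(\nu=\omega/2\pi\) of a few hundred Hz, i.e., well below the radio band.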
As we can see, even for the ``generous'' values used for the normalization in Eq.~\eqref{eq:omax}, at a relativistic shock formed by a steady pulsar wind, the maser emission can be generated at very low frequencies only. \subsection{Non-steady case} If a powerful flare hits a standing shock (which is assumed to be the TS of the wind), then a system of two relativistic shocks is formed. The forward shock (FS) propagates through the matter in the nebula, and the reverse shock (RS) through the material that forms the flare. In the laboratory frame, both shocks (and also the contact discontinuity --- CD hereafter) can move with relativistic speed. To estimate these speeds, one needs to consider the jump condition at each shock and the pressure balance at the CD. The dynamics of the FS and RS are discussed in Appendix~\ref{sec:MHD}; here we just adopt two key results from there \citep[for a detailed discussion, see][]{1976PhFl...19.1130B}. The bulk Lorentz factor of the shocks is \begin{equation} \Gamma_{\rm fs}\approx\Gamma_{\rm rs}\approx \Gamma \approx \frac12\sqrt[4]{\frac{L_{\rm fl}}{L_{\rm sd}}}\,; \label{eq:GGG} \end{equation} and the flare penetration distance into the PWN is \begin{equation} \Delta R \approx \Delta t_{\rm fl}c\sqrt{\frac{L_{\rm fl} }{L_{\rm sd}}}\,. \end{equation} The typical energy associated with FRBs is \(\sim10^{40}\rm\,erg\), and the maser mechanism can radiate away a per-cent fraction of the flare energy \citep[see][and references therein]{Koryagin2000}. Thus, it is feasible that FRBs require a magnetar flare of energy \(\sim10^{42}\rm\,erg\) and luminosity \(L_{\rm fl}\sim10^{45}\rm\,erg\,s^{-1}\) (given the ms duration). This value is significantly smaller than the maximum recorded flare luminosity (see Sec.~\ref{sec:intro}). If the wind magnetization is small, the magnetic field close to the TS is small, given by Eq.~\eqref{eq:B_steady}. In the frame of the FS, the strength of the magnetic field is amplified by a factor of \(\Gamma_{\rm fs}\), but the flow magnetization remains small, since the plasma internal energy is also amplified by the same factor. Thus, the conditions at the FS of the flare should remain suitable for the production of maser emission independently of the flare magnetization. The relativistic cyclotron, \(\Omega_{L,e}\), and plasma, \(\Omega_{p,e}\), frequencies (notice the capital-letter notation, in contrast to lower-case letters for the non-relativistic case) are not changed by the compression at the forward shock; thus the maser frequency in the FS downstream is given by Eq.~\eqref{eq:omax}. We therefore need to account only for the Doppler boosting: \begin{equation} \omega_{\rm max, fs}= 2\omega_{\rm max}\Gamma_{\rm fs}\,. \end{equation} Substituting Eq.~\eqref{eq:GGG} into Eq.~\eqref{eq:omax}, we obtain \begin{equation} \label{eq:omaxfs} \begin{split} \omega_{\rm max, fs}\approx & 2^{\nicefrac{-1}{8}}\frac{e}{m_ec} \sqrt{\frac{L_{\rm sd}^{\nicefrac{1}{2}}L_{\rm fl}^{\nicefrac{1}{2}}}{R_{\rm ts}^2 c}} \frac{(1-\sigma)^{\nicefrac{3}{4}}}{ \Gamma_{\rm wind}\sigma^{\nicefrac{1}{4}}}\,\\ \approx& 3\times10^4\,[{\rm rad \; s^{-1}}]\quad\times\\ &L_{\rm sd,35}^{\nicefrac{1}{4}}L_{\rm fl,45}^{\nicefrac{1}{4}}R_{\rm ts,15}^{-1}{ \Gamma_{\rm wind,3}^{-1}\sigma_{-2}^{\nicefrac{-1}{4}}}\,. \end{split} \end{equation} Considered in the RS frame, the formation of the maser emission at the RS is identical to that at the pulsar wind TS.
Thus, we should replace \(L_{\rm sd}\) with \(L_{\rm fl}/(4\Gamma_{\rm rs}^2)\) and \(\Gamma_{\rm wind}\) with \(\Gamma_{\rm fl}/(2\Gamma_{\rm rs})\) in Eq.~\eqref{eq:omax} and account for the Doppler boosting. We therefore obtain \begin{equation}\label{eq:omaxrs} \begin{split} \omega_{\rm max,rs}\approx & 2^{\nicefrac{-1}{8}}\frac{e}{m_ec^{\nicefrac{3}{2}}} \frac{L_{\rm fl}^{\nicefrac{3}{4}}}{L_{\rm sd}^{\nicefrac{1}{4}} R_{\rm ts}} \frac{(1-\sigma_{\rm fl})^{\nicefrac{3}{4}}}{ \Gamma_{\rm fl}\sigma_{\rm fl}^{\nicefrac{1}{4}}}\,\\ \approx & 3\times10^{9}\,[{\rm rad \; s^{-1}}]\quad\times\\ &L_{\rm sd,35}^{\nicefrac{-1}{4}} L_{\rm fl,45}^{\nicefrac{3}{4}} R_{\rm ts,15}^{-1}{ \Gamma_{\rm fl,3}^{-1}\sigma_{\rm fl,-2}^{\nicefrac{-1}{4}}}\,, \end{split} \end{equation} where \(\sigma_{\rm fl}\) is the magnetization of the flare. Our estimates for the frequency at which synchrotron maser emission can be generated, Eqs.~(\ref{eq:omax},\ref{eq:omaxfs}) and \eqref{eq:omaxrs}, show that in the case of the TS formed by a steady pulsar wind, Eq.~\eqref{eq:omax}, the maser emission appears in the kHz band. Thus, it remains undetectable even if we adopt extreme assumptions regarding the pulsar wind luminosity and the shock formation distance. In contrast, at the RS of the shocks created by intense magnetar flares, Eq.~\eqref{eq:omaxrs}, the maser frequency can reach the GHz band without invoking any extreme assumptions. In what follows we mostly focus on the maser emission generated at the RS of the magnetar flare. \section{Application to the magnetar scenario for FRBs}\label{sec:application} The frequency of maser radiation produced at the RS of a magnetar flare, Eq.~\eqref{eq:omaxrs}, is determined mostly by the radius of the standing shock and the luminosity of the flare, $\omega_{\rm max} \propto R_{\rm ts}^{-1}L_{\rm fl}^{\nicefrac{3}{4}}$. If this mechanism is responsible for FRBs, which are detected in the GHz band, it requires either $R_{\rm ts}< 10^{15}$~cm or \(L_{\rm fl}\gtrsim 10^{45}\rm\, erg\,s^{-1}\). The isotropic luminosity of magnetar flares reaches, in some cases, \(\sim10^{47}\rm\, erg\,s^{-1}\) (see Sec.~\ref{sec:intro}), and the required luminosity of \(10^{45}\rm\,erg\,s^{-1}\) seems to be reasonable. However, given the stronger dependence on the TS radius, below we check whether the adopted normalization of \(10^{15}\rm\,cm\) is reasonable. For isolated magnetars, to a rough approximation, the TS radius, \(R_{\rm ts}\), is determined by the external pressure, \(p_{\rm ext}\): \begin{equation} R_{\rm ts} = \sqrt{\frac{L_{\rm sd}}{4\pi c p_{\rm ext}}}\sim10^{15} L_{\rm sd,34}^{1/2} p_{\rm ext,-8}^{-1/2}\rm\,cm\,. \label{eq:rts} \end{equation} Here we took into account that, given the typical magnetar rotation period of a few seconds, the spin-down power is very modest, \(L_{\rm sd}\sim 10^{34}\rm\,erg\,s^{-1}\), and the external pressure can be normalized to \(10^{-8}\rm\,dyn\,cm^{-2}\) \citep[see, e.g.,][for a discussion]{2014MNRAS.442L...9L}. Thus, it seems quite feasible that the TS in the nebula formed by the magnetar wind is at \(R_{\rm ts}\sim10^{15}\rm \,cm\). There are, however, two processes that can increase the magnetar wind TS radius: (i) the energy injection by the flares and (ii) the pressure drop inside supernova (SN) remnants during the phase of adiabatic expansion or the late Sedov phase. Below we briefly check if these effects impose any significant constraints. As we can see from Eq.~\eqref{eq:dr}, when the TS is hit by a flare, its position is displaced by \(\Delta R\).
If the shock recovery time $t_{\rm rec} \sim 3 \Delta R/ c \sim 3 \Delta t_{\rm fl}\sqrt{L_{\rm fl}/L_{\rm sd, eff}}$ is long compared to the delay between flares, $T_{\rm fl}$, then the shock position is determined by the effective magnetar ``spin-down'' luminosity, which accounts for the energy injection by the flares: \begin{equation} \begin{split}\label{eq:lsdef} L_{\rm sd, eff} &= L_{\rm sd} + E_{\rm fl}/T_{\rm fl} \\ &\approx 10^{35}(E_{\rm fl,42} T_{\rm fl, 7}^{-1}+0.1L_{\rm sd,34})\, \rm erg\,s^{-1}\,. \end{split} \end{equation} To account for this effect, below we use $L_{\rm sd, eff}$ instead of $L_{\rm sd}$. If a magnetar is located inside a SN remnant (SNR), then the radius of the nebula (and of the TS) is determined by the pressure dynamics in the center of the SNR. During the first several hundred years, in the ejecta-dominated phase, the SN shell rapidly expands and the RS does not reach the center of the SNR. The low pressure there should allow an almost free expansion of the nebula, and the radius of the magnetar wind TS can be very large, \(R_{\rm ts}\sim10^{17}\rm\,cm\), even if the spin-down losses are small. After approximately \(10^3\rm\,yr\), the explosion enters the Sedov phase, the expansion slows down, and the RS reaches the SNR center. This compresses the nebula and establishes the magnetar wind TS at \begin{equation} R_{\rm ts} \approx 10^{15} L_{\rm sd,35}^{1/2} t_{10.5}^{3/5}\quad {\rm cm}\,, \label{eq:tsrss} \end{equation} where \(t\) is the time elapsed since the SN explosion \citep[see, e.g.,][]{2014ApJ...785..130Z}. Here we ignore the magnetar braking, which significantly decreases the radius of the TS after \(10\rm\,kyr\) \citep[see, e.g.,][]{2018ApJ...860...59K}, as it is uncertain whether magnetars are capable of producing frequent powerful flares at their late evolutionary phases. Thus, being conservative, we adopt that for magnetars residing inside a SNR, the time span during which the radius of the wind TS is limited to \(10^{15}\rm \,cm\) is about \(3\)~kyr. By studying the properties of the persistent radio source associated with FRB 121102 \citep{2017Natur.541...58C,2017ApJ...834L...8M}, \citet{2017ApJ...842...34W} derived an upper limit of \(10^{2.5}\rm\,yr\) on the age of the source. This estimate seems to be consistent with the source age allowed in the framework of our model. We note, however, that the estimate by \citet{2017ApJ...842...34W} is obtained under the assumption of smoothly changing conditions in the radio source; thus, in the context of our scenario, this age limit should be considered as an upper limit on the time elapsed since the nebula got compressed by the reverse shock. To escape from the SNR, a large magnetar proper speed, \(v\), is required. In this case, the interaction with the interstellar medium of density \(\rho_{\rm ism}\) creates a bow shock at \begin{equation}\label{eq:rbs} \begin{split} R_{\rm ts, bow} &= A\sqrt{\frac{L_{\rm sd} }{ 4 \pi c \rho_{\rm ism} v^2}}\,,\\ &\approx 4 \times 10^{14} A_{-0.5} L_{\rm sd, 35}^{1/2} \, n_{\rm ism, 1}^{-1/2} v_{8}^{-1} \, {\rm cm}\,. \end{split} \end{equation} The factor $A = R_{\rm rs}/ R_{\rm fs}\sim 1/3$ accounts for the ratio of the RS to FS distances in bow-shock nebulae \citep{2019MNRAS.484.4760B}. Finally, we note that if a magnetar is located in a binary system \citep[see][for observational hints of binary systems harboring magnetars]{2020PhRvL.125k1103Y}, then the shock is located at a distance comparable to the orbital separation, which can be as small as \(10^{12}\rm\,cm\).
However, because of severe free-free absorption in the circumbinary environment, FRBs can be generated only in binary systems with intermediate stellar separations, \(\sim10^{13}\rm\,cm\) or more \citep{2020ApJ...893L..39L}. We therefore conclude that magnetar flares of the intermediate luminosity of \(\sim10^{45}\rm \,erg\,s^{-1}\) can generate synchrotron maser bursts in the GHz band in the nebulae around (i) isolated magnetars during the first several kyr of their evolution; (ii) run-away magnetars moving with a high proper speed; and (iii) magnetars in binary systems with an orbital separation of \(< 10^{14}\rm\,cm\). This implies that in \(\sim10\%\) of nebulae around active magnetars, the standing shock is at a distance suitable for the production of FRBs. \section{Time profile of the signal emitted by a maser in the relativistic blast wave}\label{ap:boosting} If in the plasma co-moving frame the produced emission is isotropic, then in the observer frame, because of photon aberration, it is focused into a beaming cone with an opening angle of \(\Gamma^{-1}\). Then a characteristic time-scale of \(R / (\Gamma^2 c)\) determines the shortest duration of a pulse produced by a spherical blast wave. Thus, the typical ms duration of FRBs implies a shock radius of \(R\lesssim 10^{8}\Gamma^2 (\Delta t/{\rm ms}) \rm \,cm\). However, there are several physical mechanisms that could lead to anisotropic maser emission. First of all, if an external source provides a sufficiently intense field of stimulating photons, then the produced maser emission should be predominantly directed away from this source. It is natural to expect that such a dominant source may determine the preferred direction for the maser emission if the maser emission production site has a quasi-spherical shape. In the opposite case, when the production site is significantly smaller in one of the directions, the intensity of the locally generated emission might be highest along the direction of the source's largest extension. Thus, again, a strongly anisotropic direction diagram of the maser emission emerges. For example, one may expect such a scenario to be realized if the maser emission is generated in a thin shell (a detailed analysis will be presented in Khangulyan et al. 2022). Figure~\ref{fig:variability} presents a sketch which illustrates how the signal duration depends on the anisotropy of the emission. It is assumed that the emitting shell has a thickness \(\rho(t)\) and a radius \(R(t)\), and expands with a speed \(v\). The shell is assumed to be thin, \(\rho(t)\ll R(t)\), and its expansion speed to be relativistic, \(\Gamma=1/\sqrt{1-(v/c)^2}\gg1\). The emission process starts at a time instant \(t=0\) and terminates at \(t=T\) (both measured in the laboratory coordinate system). If the emission is isotropic in the plasma co-moving frame, then the observer mainly sees the emission originating in a shell patch with a typical size of \(2R/\Gamma\). The signal duration is determined by the delay between the arrival times of the emission generated at the points labeled \(1\) and \(4\) in Fig.~\ref{fig:variability}. If the emission is strongly anisotropic in the plasma co-moving frame, then the signal duration is the ``delay'' between the points labeled \(1\) and \(3\). Here we assume that the emission is generated radially away from the flare origin.
If the emission is produced perpendicularly to that direction in the plasma co-moving frame, then the ``delay'' is determined by the points \(2\) and \(4\). Simple calculations give the corresponding delays: \begin{equation}\label{eq:delays} \begin{matrix} \tau_{2}-\tau_{1}&=&\frac{R}{c}(1-\cos\theta)&\approx&\frac{R}{2c\Gamma^2}\,,\\[5pt] \tau_{3}-\tau_{1}&=&T(1-\beta)&\approx&\frac{T}{2\Gamma^2}\,,\\[5pt] \tau_{4}-\tau_{2}&=&T(1-\beta\cos\theta)&\approx&\frac{T}{\Gamma^2}\,. \end{matrix} \end{equation} \begin{figure} \plotone{frb_variability.pdf} \caption{Depending on the anisotropy of the emission, the observer registers emission components produced in different parts of the shell. This determines the apparent signal duration. \label{fig:variability}} \end{figure} As can be seen from Eq.~\eqref{eq:delays}, the signal duration is determined by the shell radius only if the emission is isotropic in the plasma co-moving frame. In the case of anisotropic emission (almost independently of the preferred angle), the signal duration depends only on the shell bulk Lorentz factor, \(\Gamma\), and its lifetime, \(T\). For the scenario discussed here, one should use \(T\approx\Delta R/c \approx\Delta t_{\rm fl} \sqrt{\frac{L_{\rm fl}}{L_{\rm sd}}}\), where \(\Delta R\) is the flare penetration distance (see Appendix~\ref{sec:MHD}). The shock Lorentz factor is \(\Gamma\approx \frac12\sqrt[4]{\frac{L_{\rm fl}}{L_{\rm sd}}}\). This implies that, in the anisotropic emission case, the synchrotron maser burst triggered by a magnetar flare of duration \(\Delta t_{\rm fl}\) is seen by the observer as a flare of similar duration, \begin{equation} \Delta \tau \sim \Delta t_{\rm fl}\,. \end{equation} Thus, we obtain that if the emission is highly anisotropic in the plasma co-moving frame, then the maser emission from the shell is registered during a very short time interval, comparable to the duration of the flare. We emphasize that, because of the strong anisotropy of the maser emission, the shell radius and its bulk Lorentz factor have a minor influence on the duration of the radio burst. Thus, the constraints on the shell radius and bulk Lorentz factor, which are obtained under the (typically hidden) assumption of isotropic emission in the co-moving frame, appear irrelevant in this case. \section{How many magnetars and FRBs are there in the Universe?} Each magnetar undergoes many flares of different energies during its lifetime. The energy distribution function of the flares is quite well constrained by observations. Below we compare the expected number of magnetar flares that are powerful enough to produce detectable FRBs with the observational statistics of FRBs. It is assumed that the number of magnetars is proportional to the star formation rate (SFR). We use the expression for the SFR at different redshifts $z$ from \cite{2014ARA&A..52..415M}: \begin{equation} \psi(z) = 0.015 \frac{(1+z)^{2.7}}{1+[(1+z)/2.9]^{5.6}} \, M_\odot\, {\rm yr}^{-1} {\rm Mpc}^{-3}. \end{equation} For the basic cosmological equations we follow \cite{1999astro.ph..5116H}. For a given $z$, the comoving volume element is: \begin{equation} \frac{ dV_{\rm c}}{dz}=\frac{c}{H_0}\frac{4\pi D_{\rm L}^2}{(1+z)^2\sqrt{(1+z)^3\Omega_{\rm m} + \Omega_{\Lambda}}}. \end{equation} Here $D_{\rm L}$ is the luminosity distance, $H_0$ is the present-day Hubble constant, and $\Omega_{\rm m}$ and $\Omega_{\Lambda}$ are the present-day normalized matter and dark energy densities (for numerical estimates we adopt the fiducial values 0.3 and 0.7, respectively).
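The cosmological ingredients entering these expressions can be evaluated with a few lines of code. The sketch below (Python; flat \(\Lambda\)CDM, and the value \(H_0=70\rm\,km\,s^{-1}\,Mpc^{-1}\) is our assumption, as the text does not specify it) computes the dimensionless comoving distance and the luminosity distance:
\begin{verbatim}
import numpy as np

# Flat LCDM helpers (Om = 0.3, OL = 0.7 as in the text;
# H0 = 70 km/s/Mpc is our assumed value).
Om, OL, H0 = 0.3, 0.7, 70.0
c_kms = 2.998e5                       # speed of light [km/s]
D_H = c_kms / H0                      # Hubble distance [Mpc]

def E(z):
    return np.sqrt((1 + z)**3 * Om + OL)

def I_D(z, n=10000):
    zs = np.linspace(0.0, z, n)
    return np.trapz(1.0 / E(zs), zs)  # dimensionless comoving distance

def D_L(z):
    return (1 + z) * D_H * I_D(z)     # luminosity distance [Mpc]

print(f"D_L(z=1) ~ {D_L(1.0)/1e3:.1f} Gpc")  # ~6.6 Gpc
\end{verbatim}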
We assume that the Galactic SFR is 3 solar masses per year, and that there are 100 magnetars in the Milky Way (i.e., about 10\% of all neutron stars younger than a few tens of thousands of years). The energy distribution of flares obeys a power-law dependence \citep{2015RPPh...78k6901T}: \begin{equation} dN=A E_{\rm fl}^{-\gamma} dE_{\rm fl}. \end{equation} Below we use $\gamma=5/3$, and the coefficient $A$ is obtained from the normalization condition: $\int_{E_{\rm min}}^{E_{\rm max}} A E_{\rm fl}^{1-\gamma}dE_{\rm fl} = 10^{48}$~erg. For $E_{\rm max}=10^{48}$~erg we obtain $A\approx 3 \times 10^{31}$~erg$^{2/3}$ (slightly smaller values of $E_{\rm max}$ do not change our conclusions significantly). The total number of flares from a given magnetar detectable from Earth is limited by the flare energy: \begin{equation} A \int_{E_{\rm lim}}^{E_{\rm max}} E_{\rm fl}^{-5/3}dE_{\rm fl}=\frac{3}{2}\frac{A}{E_{\rm lim}^{2/3}} \approx 10^{5} E_{\rm lim,40}^{-2/3}, \end{equation} where \begin{equation} E_{\rm lim}\approx 10^{40} \left(\frac{S_{\rm lim}}{0.1 \, {\rm Jy}}\right) \left(\frac{D_{\rm L}}{1\, {\rm Gpc}}\right)^2 \left(\frac{\Omega }{ 4\pi \, {\rm sr}}\right) \, {\rm erg}, \end{equation} with $S_{\rm lim}$ the minimum observed flux and $\Omega$ the solid angle of the emission (we assume isotropic emission). Here we assume that the energy emitted in radio is about 1\% of the total energy of the flare and that the duration of the flare is about 1~ms. We also use the expression from \cite{2015ApJ...814L..20M} for the limiting radio luminosity. Thus, the daily rate of flares detectable by an observer on Earth is \begin{equation} \frac{10^{-7}}{1+z} \int_{E_{\rm lim}}^{E_{\rm max}} \frac{A}{ E_{\rm fl}^{\gamma}} dE_{\rm fl} {\rm \; days}^{-1} \sim \frac{10^{-2}}{(1+z) E_{\rm lim,40}^{2/3}} {\rm \; days}^{-1} . \end{equation} We note that the factor $1/(1+z)$ appears due to the cosmological time dilation. Here we assumed that magnetars are active for 30 kyr, i.e., approximately for $10^7$~days. So, finally, the rate per day from all magnetars in the comoving volume $dV_\mathrm{c}(z)$ is: \begin{equation} \begin{split} &0.015 \frac{(1+z)^{2.7}}{1+[(1+z)/2.9]^{5.6}} \, M_\odot\, {\rm yr}^{-1} \, {\rm Mpc}^{-3} \\ &\times \frac{c}{H_0} \frac{4 \pi D_{\rm L}^2}{(1+z)^2\sqrt{(1+z)^3\Omega_{\rm m}+\Omega_{\Lambda}}} \\ &\times \frac{100}{3 \, M_\odot \, {\rm yr}^{-1}} \times 10^{-7}\, {\rm days}^{-1} \\ &\times \frac{1}{1+z}\int_{E_{\rm lim}}^{E_{\rm max}} A E_{\rm fl}^{-\gamma}\,dE_{\rm fl}. \end{split} \end{equation} \begin{figure} \plotone{FRB_rate.pdf} \caption{The dependence of the integral in Eq.~\eqref{eq:frbr} on the redshift.\label{fig:frbrate}} \end{figure} After simplifications and some algebra, we obtain a daily magnetar flare rate of \begin{equation} N_{\rm mag} \approx 10^9 M(z_{\rm max})\, {\rm days}^{-1}\,, \label{eq:frbr0} \end{equation} where \begin{equation} \label{eq:frbr} \begin{split} M(z_{\rm max})&\equiv \int_0^{z_{\rm max}} \frac{(1+z)^{1/3}}{1+[(1+z)/2.9]^{5.6}}\times \\ &\frac{1}{\sqrt{(1+z)^3\Omega_{\rm m}+\Omega_{\Lambda}}} I_{\rm D}^{2/3} dz, \end{split} \end{equation} and \begin{equation} I_{\rm D}\equiv\int_0^z \frac{dz'}{\sqrt{(1+z')^3\Omega_{\rm m}+\Omega_{\Lambda}}}\,. \end{equation} For the given choice of parameters, the integral saturates at $\sim1$ for $z_{\rm max}>3$ (see Fig.~\ref{fig:frbrate}). Thus, if every magnetar flare produces a radio burst, and the energy of the burst equals 1\% of the total flare energy, then we expect to see about half a billion events per day above 0.1~Jy.
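The saturation of \(M(z_{\rm max})\) can be verified numerically; the sketch below (Python, self-contained; the integration grid is our choice) evaluates Eq.~\eqref{eq:frbr}:
\begin{verbatim}
import numpy as np

# Numerical sketch of M(z_max), Eq. (eq:frbr), for Om = 0.3, OL = 0.7.
Om, OL = 0.3, 0.7

def E(z):
    return np.sqrt((1 + z)**3 * Om + OL)

def M(z_max, n=2000):
    zs = np.linspace(1e-4, z_max, n)
    # cumulative trapezoidal integral gives I_D(z) on the grid
    I_D = np.concatenate(([0.0],
          np.cumsum(0.5 * np.diff(zs) * (1/E(zs[1:]) + 1/E(zs[:-1])))))
    integrand = ((1 + zs)**(1/3) / (1 + ((1 + zs)/2.9)**5.6)
                 / E(zs) * I_D**(2/3))
    return np.trapz(integrand, zs)

for z_max in (1.0, 3.0, 6.0):
    print(f"M({z_max}) ~ {M(z_max):.2f}")  # approaches ~1 for z_max > 3
\end{verbatim}
Multiplying by the \(10^9\) prefactor in Eq.~\eqref{eq:frbr0} reproduces the quoted event rate.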
The observed rate is $\lesssim 10^4$ FRBs per day. So, roughly, only one in $\gtrsim 10^5$ magnetar bursts produces a visible FRB. If we take into account that only about 10\% of magnetars should have the proper conditions in the surrounding medium to generate a flare in the $\sim$GHz range (see Sec.~\ref{sec:application}), then FRB generation by magnetar flares should have a success rate of $\lesssim 10^{-4}$. \section{Discussion and conclusions} In the estimates above, we have ignored the requirement of a small magnetization of the flares. Indeed, it was shown that high-frequency maser emission from the RS is possible only if the flare is weakly magnetized. Typically, it is postulated that magnetar flares are strongly magnetized. However, flares with low magnetization can be formed along the open magnetic field lines at the magnetic pole in the magnetar magnetosphere. The ratio of the polar cap surface to the magnetar surface can be estimated as $\eta\sim (R_{\rm M}/R_{\rm lc})/2 \sim 10^{-4} P_{0}^{-1}$, where $P_0$ is the magnetar spin period in seconds, and $R_{\rm lc}$ and \(R_{\rm M}\) are the light cylinder and magnetar radii, respectively. This simple estimate gives a result surprisingly close to the required rate of \( \sim 10^{-4}\). This suggests that weakly magnetized flares could be a very prominent channel for the production of FRBs through high-frequency maser emission. This is consistent with the fact that, although many X/$\gamma$-ray flares were registered from SGR 1935+2154 and the source was actively monitored with radio telescopes during its period of activity, just one event was detected simultaneously in the radio and high-energy bands (see, e.g., \citealt{2021NatAs...5..414K} and references therein). This burst was much harder than the others in X/$\gamma$-rays \citep{2021NatAs...5..372R}. Of course, the estimates of the rate made above contain many simplifications, so more detailed population-synthesis calculations are welcome. \begin{acknowledgments} The authors appreciate the useful discussions with Sergey Koryagin and Maxim Efremov. DK acknowledges support by the Russian Science Foundation grant No. 21-12-00416 and by JSPS KAKENHI Grant Numbers 18H03722, 18H05463, and 20H00153. SP was supported by the Ministry of science and higher education of Russian Federation under the contract 075-15-2020-778 in the framework of the Large scientific projects program within the national project ``Science''. The study of the conditions required for the production of maser emission at relativistic shocks was supported by RSF grant No. 21-12-00416. The interpretation of the FRBs in the framework of the developed model was supported by the project ``Science'' (contract 075-15-2020-778). \end{acknowledgments} \vspace{5mm}
\section{Introduction} The univalence axiom~\cite{unimath,hottbook,grayson} is not true or false in, say, ZFC or the internal language of an elementary topos. It cannot even be formulated. As the saying goes, it is not even wrong. This is because \begin{quote} univalence is a property of Martin-L\"of's \emph{identity type} of a universe of types. \end{quote} Nothing like Martin-L\"of's identity type occurs in ZFC or topos logic as a \emph{native} concept. Of course, we can create \emph{models} of the identity type in these theories, which will make univalence hold or fail. But in these notes we try to understand the primitive concept of identity type, directly and independently of any such particular model, as in (intensional) Martin-L\"of type theory~\cite{MR0387009,MR1686864}, and the univalence axiom for it. In particular, we don't use the equality sign ``$=$'' to denote the identity type $\operatorname{Id}$, or think of it as a path space. Univalence is a type, and the univalence axiom says that this type has some inhabitant. It takes a number of steps to construct this type, in addition to subtle decisions (e.g.\ to work with equivalences rather than isomorphisms, as discussed below). We first need to briefly introduce Martin-L\"of type theory (MLTT). We will not give a full definition of MLTT. Instead, we will mention which constructs of MLTT are needed to give a complete definition of the univalence type. This will be enough to illustrate the important fact that in order to understand univalence we first need to understand Martin-L\"of type theory well. \section{Martin-L\"of type theory, briefly} \subsection{Types and their elements} Types are the analogues of sets in ZFC and of objects in topos theory. Types are constructed together with their elements, and not by collecting some previously existing elements. When a type is constructed, we get freshly new elements for it. We write \[ x:X \] to declare that the element $x$ has type $X$. This is not something that is true or false, unlike a membership relation $x \in X$ in ZFC. In other words, $x \in X$ in ZFC is a binary relation, whereas $x:X$ in type theory simply specifies that $x$ ranges over $X$. For example, if $\mathbb{N}$ is the type of natural numbers, we may write \begin{gather*} 0 : \mathbb{N}, \\ (0,0) : \mathbb{N} \times \mathbb{N}. \end{gather*} However, the following statements are nonsensical and syntactically incorrect, rather than false: \begin{eqnarray*} 0 : \mathbb{N} \times \mathbb{N} & \text{(nonsense)}, \\ (0,0) : \mathbb{N} & \text{(nonsense)}. \end{eqnarray*} This is no different from the situation in the internal language of a topos. \subsection{Products and sums of type families} Given a family of types $A(x)$ indexed by elements $x$ of a type $X$, we can form its product and sum: \begin{gather*} \Pi(x:X), A(x), \\ \Sigma(x:X), A(x), \end{gather*} which we also write $\Pi A$ and $\Sigma A$. An element of the type $\Pi A$ is a function that maps elements $x:X$ to elements of $A(x)$. An element of the type $\Sigma A$ is a pair $(x,a)$ with $x:X$ and $a:A(x)$. (We adopt the convention that $\Pi$ and $\Sigma$ scope over the whole rest of the expression.) We also have the type $X\to Y$ of functions from $X$ to $Y$, which is the particular case of $\Pi$ with the constant family $A(x):=Y$. The cartesian product $X\times Y$, whose elements are pairs, is the particular case of $\Sigma$ with $A(x):=Y$ again. 
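To make the constructions just described concrete, here is a minimal formalization sketch in Lean~4 (our illustration, not part of the original notes; the names \texttt{Pi'} and \texttt{Sigma'} are ours, standing for Lean's built-in dependent function type and \texttt{Sigma} type):
\begin{verbatim}
universe u v

-- Π(x:X), A(x) is the dependent function type
def Pi' (X : Type u) (A : X → Type v) : Type (max u v) :=
  (x : X) → A x

-- Σ(x:X), A(x) is the type of dependent pairs
def Sigma' (X : Type u) (A : X → Type v) : Type (max u v) :=
  (x : X) × A x

-- with a constant family, Π specializes to the function type X → Y
example (X Y : Type) : Pi' X (fun _ => Y) = (X → Y) := rfl

-- and Σ specializes to (a type definitionally equal to) X × Y
example (X Y : Type) : Sigma' X (fun _ => Y) = Sigma (fun _ : X => Y) := rfl
\end{verbatim}
An element of \texttt{Pi' X A} is then an ordinary Lean function, and an element of \texttt{Sigma' X A} is a pair, matching the description above.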
We also have the disjoint sum $X+Y$, the empty type and the one-element type, which will not be needed to formulate univalence. \subsection{Quantifiers and logic} There is no underlying logic in MLTT. Propositions are types, and $\Pi$ and $\Sigma$ play the role of universal and existential quantifiers, via the so-called Curry-Howard interpretation of logic. As for the connectives, implication is given by the function-type construction~$\to$, conjunction by the binary cartesian product~$\times$, disjunction by the binary disjoint sum~$+$, and negation by the type of functions into the empty type. When a type is understood as a proposition, its elements correspond to proofs. In this case, instead of saying that $A$ has a given element, it is common practice to say that $A$ holds. Then a type declaration $x:A$ is read as saying that $x$ is a proof of $A$. But this is just a linguistic device, which is not reflected in the formalism. We remark that in univalent mathematics the terminology \emph{proposition} is reserved for subsingleton types (types whose elements are all identified). The propositions that arise in the construction of the univalence type are all subsingletons. \subsection{The identity type} Given a type $X$ and elements $x,y:X$, we have the identity type \[ \operatorname{Id}_X(x,y), \] with the subscript $X$ often elided. The idea is that $\operatorname{Id}(x,y)$ collects the ways in which $x$ and $y$ are identified. We have a function \[ \operatorname{refl} : \Pi(x:X), \operatorname{Id}(x,x), \] which identifies any element with itself. Without univalence, $\operatorname{refl}$ is the only given way to construct elements of the identity type. In addition to $\operatorname{refl}$, for any given type family $A(x,y,p)$ indexed by elements $x,y:X$ and $p:\operatorname{Id}(x,y)$ and any given function \[ f : \Pi(x:X), A(x,x,\operatorname{refl}(x)), \] we have a function \[ \operatorname{J}(A,f) : \Pi(x,y:X), \Pi(p:\operatorname{Id}(x,y)), A(x,y,p) \] with $\operatorname{J}(A,f)(x,x,\operatorname{refl}(x))$ stipulated to be $f(x)$. We will see examples of uses of $\operatorname{J}$ in the steps leading to the construction of the univalence type. Then, in summary, the identity type is given by the data $\operatorname{Id},\operatorname{refl},\operatorname{J}$. With this, the exact nature of the type $\operatorname{Id}(x,y)$ is fairly under-specified. It is consistent that it is always a subsingleton in the sense that $\operatorname{K}(X)$ holds, where \[ \operatorname{K}(X) := \Pi(x,y:X), \Pi(p,q:\operatorname{Id}(x,y)), \operatorname{Id}(p,q). \] The second identity type $\operatorname{Id}(p,q)$ is that of the type $\operatorname{Id}(x,y)$. This is possible because any type has an identity type, including the identity type itself, and the identity type of the identity type, and so on, which is the basis for univalent mathematics (but this is not discussed here, as it is not needed in order to construct the univalence type). The $\operatorname{K}$ axiom says that $\operatorname{K}(X)$ holds for every type $X$. In univalent mathematics, a type $X$ that satisfies $\operatorname{K}(X)$ is called a set, and with this terminology, the $\operatorname{K}$ axiom says that all types are sets. On the other hand, the univalence axiom provides a means of constructing elements other than $\operatorname{refl}(x)$, at least for some types, and hence the univalence axiom implies that some types are not sets.
(Then they will instead be 1-groupoids, or 2-groupoids, \dots, or even $\infty$-groupoids, with such notions defined within MLTT rather than via models, but we will not address this important aspect of univalent mathematics here). \subsection{Universes} Our final ingredient is a ``large'' type of ``small'' types, called a universe. It is common to assume a tower of universes $\operatorname{\mathcal{U}}_0, \operatorname{\mathcal{U}}_1, \operatorname{\mathcal{U}}_2, \dots $ of ``larger and larger'' types, with \begin{gather*} \operatorname{\mathcal{U}}_0 : \operatorname{\mathcal{U}}_1, \\ \operatorname{\mathcal{U}}_1 : \operatorname{\mathcal{U}}_2, \\ \operatorname{\mathcal{U}}_2 : \operatorname{\mathcal{U}}_3, \\ \vdots \end{gather*} When we have universes, a type family $A$ indexed by a type $X:\operatorname{\mathcal{U}}$ may be considered to be a function $A:X\to \operatorname{\mathcal{V}}$ for some universe $\operatorname{\mathcal{V}}$. Universes are also used to construct types of mathematical structures, such as the type of groups, whose definition starts like this: \[ \operatorname{Grp} := \Sigma(G:\operatorname{\mathcal{U}}), \operatorname{isSet}(G) \times \Sigma(e:G), \Sigma(-\cdot- : G\times G\to G), (\Pi(x:G), \operatorname{Id}(e \cdot x,x)) \times \cdots \] Here $\operatorname{isSet}(G):=\Pi(x,y:G),\Pi(p,q:\operatorname{Id}(x,y)),\operatorname{Id}(p,q)$, as above. With univalence, $\operatorname{Grp}$ itself will not be a set, but a 1-groupoid instead, namely a type whose identity types are all sets. Moreover, if $\operatorname{\mathcal{U}}$ satisfies the univalence axiom, then for $A,B:\operatorname{Grp}$, the identity type $\operatorname{Id}(A,B)$ can be shown to be in bijection with the group isomorphisms of $A$ and $B$. \section{Univalence} Univalence is a property of the identity type $\operatorname{Id}_{\operatorname{\mathcal{U}}}$ of a universe $\operatorname{\mathcal{U}}$. It takes a number of steps to define the univalence type. \subsection{Construction of the univalence type} We say that a type $X$ is a \emph{singleton} if we have an element $c:X$ with $\operatorname{Id}(c,x)$ for all $x:X$. In Curry-Howard logic, this is \[ \operatorname{isSingleton}(X) := \Sigma(c:X), \Pi(x:X), \operatorname{Id}(c,x). \] For a function $f:X\to Y$ and an element $y:Y$, its fiber is the type of points $x:X$ that are mapped to (a point identified with) $y$: \[ f^{-1}(y) := \Sigma(x:X),\operatorname{Id}(f(x),y). \] The function $f$ is called an equivalence if its fibers are all singletons: \[ \operatorname{isEquiv}(f) := \Pi(y:Y), \operatorname{isSingleton}(f^{-1}(y)). \] The type of equivalences from $X:\operatorname{\mathcal{U}}$ to $Y:\operatorname{\mathcal{U}}$ is \[ \operatorname{Eq}(X,Y) := \Sigma(f:X\to Y), \operatorname{isEquiv}(f). \] Given $x:X$, we have the singleton type consisting of the elements $y:X$ identified with $x$: \[ \operatorname{singletonType}(x) := \Sigma(y:X), \operatorname{Id}(y,x). \] We also have the element $\eta(x)$ of this type: \[ \eta(x) := (x, \operatorname{refl}(x)). \] We now need to \emph{prove} that singleton types are singletons: \[ \Pi(x:X), \operatorname{isSingleton}(\operatorname{singletonType}(x)). \] In order to do that, we use $\operatorname{J}$ with the type family \[ A(y,x,p) := \operatorname{Id}(\eta(x),(y,p)), \] and the function \begin{eqnarray*} f & : & \Pi(x:X), A(x,x,\operatorname{refl}(x)) \\ f(x) & := & \operatorname{refl}(\eta(x)). 
\end{eqnarray*} With this we get a function \begin{eqnarray*} \phi & : & \Pi(y,x:X), \Pi(p:\operatorname{Id}(y,x)), \operatorname{Id}(\eta(x),(y,p)) \\ \phi & := & \operatorname{J}(A,f). \end{eqnarray*} Notice the reversal of $y$ and $x$. With this, we can in turn define a function \begin{eqnarray*} g & : & \Pi(x:X), \Pi(\sigma :\operatorname{singletonType}(x)), \operatorname{Id}(\eta(x),\sigma ) \\ g(x,(y,p)) & := & \phi(y,x,p). \end{eqnarray*} Finally, using the function $g$, we get our desired result, that singleton types are singletons: \begin{eqnarray*} h & : & \Pi(x:X), \Sigma(c:\operatorname{singletonType}(x)), \Pi(\sigma :\operatorname{singletonType}(x)), \operatorname{Id}(c,\sigma )\\ h(x) & := & (\eta(x),g(x)). \end{eqnarray*} Now, for any type $X$, its identity function $\operatorname{id}_X$, defined by $\operatorname{id}(x) := x$, is an equivalence. This is because the fiber $\operatorname{id}^{-1}(x)$ is simply the singleton type defined above, which we proved to be a singleton. We need to name this function: \[ \operatorname{idIsEquiv} : \Pi(X:\operatorname{\mathcal{U}}), \operatorname{isEquiv}(\operatorname{id}_X). \] The identity function $\operatorname{id}_X$ should not be confused with the identity type $\operatorname{Id}_X$. Now we use $\operatorname{J}$ a second time to define a function \[ \operatorname{IdToEq} : \Pi(X,Y:\operatorname{\mathcal{U}}), \operatorname{Id}(X,Y) \to \operatorname{Eq}(X,Y). \] For $X,Y:\operatorname{\mathcal{U}}$ and $p:\operatorname{Id}(X,Y)$, we set \[ A(X,Y,p) := \operatorname{Eq}(X,Y) \] and \[ f(X) := (\operatorname{id}_X , \operatorname{idIsEquiv}(X)) \] and \[ \operatorname{IdToEq} := \operatorname{J}(A,f). \] Finally, we say that the universe $\operatorname{\mathcal{U}}$ is univalent if the map $\operatorname{IdToEq}(X,Y)$ is itself an equivalence: \[ \operatorname{isUnivalent}(\operatorname{\mathcal{U}}) := \Pi(X,Y:\operatorname{\mathcal{U}}), \operatorname{isEquiv}(\operatorname{IdToEq}(X,Y)). \] \subsection{The univalence axiom} The type $\operatorname{isUnivalent}(\operatorname{\mathcal{U}})$ may or may not have an inhabitant. The univalence axiom says that it does. The $\operatorname{K}$ axiom implies that it doesn't. Because both univalence and the $\operatorname{K}$ axiom are consistent, it follows that univalence is undecided in MLTT. \subsection{Notes} \begin{enumerate} \item The minimal Martin-L\"of type theory needed to formulate univalence has \[ \Pi, \Sigma, \operatorname{Id}, \operatorname{\mathcal{U}}, \operatorname{\mathcal{U}}'. \] Two universes $\operatorname{\mathcal{U}} :\operatorname{\mathcal{U}}'$ suffice, where univalence talks about $\operatorname{\mathcal{U}}$. \item It can be shown, by a very complicated and interesting argument, that \[ \Pi(u,v: \operatorname{isUnivalent}(\operatorname{\mathcal{U}})), \operatorname{Id}(u,v). \] This says that univalence is a subsingleton type (any two of its elements are identified). In the first step we use $u$ (or $v$) to get function extensionality (any two pointwise identified functions are identified), which is \emph{not} provable in MLTT, but is provable from the assumption that $\operatorname{\mathcal{U}}$ is univalent. Then, using this, one shows that being an equivalence is a subsingleton type. Finally, again using function extensionality, we get that a product of subsingletons is a subsingleton. But then $\operatorname{Id}(u,v)$ holds, which is what we wanted to show. 
But this of course omits the proof that univalence implies function extensionality (originally due to Voevodsky), which is fairly elaborate. \item For a function $f:X\to Y$, consider the type \[ \operatorname{Iso}(f) := \Sigma(g:Y\to X), (\Pi(x:X), \operatorname{Id}(g(f(x)),x)) \times (\Pi(y:Y), \operatorname{Id}(f(g(y)),y)). \] We have functions $r:\operatorname{Iso}(f)\to \operatorname{isEquiv}(f)$ and $s:\operatorname{isEquiv}(f)\to \operatorname{Iso}(f)$. However, the type $\operatorname{isEquiv}(f)$ is always a subsingleton, assuming function extensionality, whereas the type $\operatorname{Iso}(f)$ need not be. What we do have is that the function $r$ is a retraction with section $s$. Moreover, the univalence type formulated as above, but using $\operatorname{Iso}(f)$ rather than $\operatorname{isEquiv}(f)$, is provably empty, e.g.\ for MLTT with $\Pi, \Sigma, \operatorname{Id}$, the empty and two-point types, and two universes, as shown by Shulman~\cite{shulman:e46}. With only one universe, the formulation with $\operatorname{Iso}(f)$ is consistent, as shown by Hofmann and Streicher's groupoid model~\cite{MR1686862}, but in this case all elements of the universe are sets and $\operatorname{Iso}(f)$ is a subsingleton, and hence equivalent to $\operatorname{isEquiv}(f)$. So, to have a consistent axiom in general, it is crucial to use the type $\operatorname{isEquiv}(f)$. It was Voevodsky's insight not only that a subsingleton version of $\operatorname{Iso}(f)$ is needed, but also how to construct it. The construction of $\operatorname{isEquiv}(f)$ is very simple and elegant, and motivated by homotopical models of the theory, where it corresponds to the concept with the same name. But the univalence axiom can be understood without reference to homotopy theory. \item Voevodsky gave a model of univalence for MLTT with $\Pi,\Sigma$, empty type, one-point type, two-point type, natural numbers, and an infinite tower of universes in simplicial sets~\cite{kapulkin:lumsdaine:voevodsky,kapulkin:lumsdaine}, thus establishing the consistency of the univalence axiom. The consistency of the univalence axiom shows that, before we postulate it, MLTT is ``proto-univalent'' in the sense that it cannot distinguish concrete isomorphic types such as $X:=\mathbb{N}$ and $Y:=\mathbb{N}\times \mathbb{N}$ by a property $P:\operatorname{\mathcal{U}}\to \operatorname{\mathcal{U}}$ such that $P(X)$ holds but $P(Y)$ doesn't. This is because, being isomorphic, $X$ and $Y$ are equivalent. But then univalence implies $\operatorname{Id}(X,Y)$, which in turn implies $P(X) \iff P(Y)$ using $\operatorname{J}$. Because univalence is consistent, it follows that for any given concrete $P:\operatorname{\mathcal{U}}\to \operatorname{\mathcal{U}}$, it is impossible to prove that $P(X)$ holds but $P(Y)$ doesn't. So MLTT is invariant under isomorphism in this doubly negative, meta-mathematical sense. With univalence, it becomes invariant under isomorphism in a positive, mathematical sense. \item Thus, we see that the formulation of univalence is far from direct, and has much more to it than the (in our opinion, misleading) slogan ``isomorphic types are equal''. What the consistency of the univalence axiom says is that one possible understanding of Martin-L\"of's identity type $\operatorname{Id}(X,Y)$ for $X,Y:\operatorname{\mathcal{U}}$ is as precisely the type $\operatorname{Eq}(X,Y)$ of equivalences, in the sense of being in one-to-one correspondence with it. 
Without univalence, the nature of the identity type of the universe in MLTT is fairly under-specified. It is a remarkable property of MLTT that it is consistent with this understanding of the identity type of the universe, discovered by Vladimir Voevodsky (and foreseen by Martin Hofmann and Thomas Streicher~\cite{MR1686862} in a particular case). \item It should also be emphasized that what univalence does is to express the identity type $\operatorname{Id}(X,Y)$ for $X,Y : \operatorname{\mathcal{U}}$ in terms of the identity types of the types $X$ and $Y$. This is because the notion of equivalence $X \simeq Y$ is defined in terms of the identity types of $X$ and $Y$. In this sense, univalence is an extensionality axiom: it says what identity of the types $X$ and $Y$ is in terms of what identity is for the elements of the types $X$ and $Y$. From this perspective, it is very interesting that univalence implies function extensionality (any two pointwise identified functions are themselves identified) and propositional extensionality (any two subsingletons, or truth values, which imply each other are identified). Thus, univalence is a common generalization of function extensionality and propositional extensionality. \end{enumerate} This paper only explains what the \emph{univalence axiom} is. A brief and reasonably complete introduction to \emph{univalent mathematics} is given by Grayson~\cite{grayson}. \section*{Acknowledgements} I benefitted from input by Andrej Bauer, Marta Bunge, Thierry Coquand, Dan Grayson and Mike Shulman on draft versions of these notes. \nocite{escardo:ufs} \bibliographystyle{plain}
We write \[ x:X \] to declare that the element $x$ has type $X$. This is not something that is true or false, unlike a membership relation $x \in X$ in ZFC. In other words, $x \in X$ in ZFC is a binary relation, whereas $x:X$ in type theory simply specifies that $x$ ranges over $X$. For example, if $\mathbb{N}$ is the type of natural numbers, we may write \begin{gather*} 0 : \mathbb{N}, \\ (0,0) : \mathbb{N} \times \mathbb{N}. \end{gather*} However, the following statements are nonsensical and syntactically incorrect, rather than false: \begin{eqnarray*} 0 : \mathbb{N} \times \mathbb{N} & \text{(nonsense)}, \\ (0,0) : \mathbb{N} & \text{(nonsense)}. \end{eqnarray*} This is no different from the situation in the internal language of a topos. \subsection{Products and sums of type families} Given a family of types $A(x)$ indexed by elements $x$ of a type $X$, we can form its product and sum: \begin{gather*} \Pi(x:X), A(x), \\ \Sigma(x:X), A(x), \end{gather*} which we also write $\Pi A$ and $\Sigma A$. An element of the type $\Pi A$ is a function that maps elements $x:X$ to elements of $A(x)$. An element of the type $\Sigma A$ is a pair $(x,a)$ with $x:X$ and $a:A(x)$. (We adopt the convention that $\Pi$ and $\Sigma$ scope over the whole rest of the expression.) We also have the type $X\to Y$ of functions from $X$ to $Y$, which is the particular case of $\Pi$ with the constant family $A(x):=Y$. The cartesian product $X\times Y$, whose elements are pairs, is the particular case of $\Sigma$ with $A(x):=Y$ again. We also have the disjoint sum $X+Y$, the empty type and the one-element type, which will not be needed to formulate univalence. \subsection{Quantifiers and logic} There is no underlying logic in MLTT. Propositions are types, and $\Pi$ and $\Sigma$ play the role of universal and existential quantifiers, via the so-called Curry-Howard interpretation of logic. As for the connectives, implication is given by the function-type construction~$\to$, conjunction by the binary cartesian product~$\times$ , disjunction by the binary disjoint sum~$+$, and negation by the type of functions into the empty type. When a type is understood as a proposition, its elements correspond to proofs. In this case, instead of saying that $A$ has a given element, it is common practice to say that $A$ holds. Then a type declaration $x:A$ is read as saying that $x$ is a proof of $A$. But this is just a linguistic device, which is not reflected in the formalism. We remark that in univalent mathematics the terminology \emph{proposition} is reserved for subsingleton types (types whose elements are all identified). The propositions that arise in the construction of the univalence type are all subsingletons. \subsection{The identity type} Given a type $X$ and elements $x,y:X$, we have the identity type \[ \operatorname{Id}_X(x,y), \] with the subscript $X$ often elided. The idea is that $\operatorname{Id}(x,y)$ collects the ways in which $x$ and $y$ are identified. We have a function \[ \operatorname{refl} : \Pi(x:X), \operatorname{Id}(x,x), \] which identifies any element with itself. Without univalence, $\operatorname{refl}$ is the only given way to construct elements of the identity type. 
In addition to $\operatorname{refl}$, for any given type family $A(x,y,p)$ indexed by elements $x,y:X$ and $p:\operatorname{Id}(x,y)$ and any given function \[ f : \Pi(x:X), A(x,x,\operatorname{refl}(x)), \] we have a function \[ \operatorname{J}(A,f) : \Pi(x,y:X), \Pi(p:\operatorname{Id}(x,y)), A(x,y,p) \] with $\operatorname{J}(A,f)(x,x,\operatorname{refl}(x))$ stipulated to be $f(x)$. We will see examples of uses of $\operatorname{J}$ in the steps leading to the construction of the univalence type. Then, in summary, the identity type is given by the data $\operatorname{Id},\operatorname{refl},\operatorname{J}$. With this, the exact nature of the type $\operatorname{Id}(x,y)$ is fairly under-specified. It is consistent that it is always a subsingleton in the sense that $\operatorname{K}(X)$ holds, where \[ \operatorname{K}(X) := \Pi(x,y:X), \Pi(p,q:\operatorname{Id}(x,y)), \operatorname{Id}(p,q). \] The second identity type $\operatorname{Id}(p,q)$ is that of the type $\operatorname{Id}(x,y)$. This is possible because any type has an identity type, including the identity type itself, and the identity type of the identity type, and so on, which is the basis for univalent mathematics (but this is not discussed here, as it is not needed in order to construct the univalence type). The $\operatorname{K}$ axiom says that $\operatorname{K}(X)$ holds for every type $X$. In univalent mathematics, a type $X$ that satisfies $\operatorname{K}(X)$ is called a set, and with this terminology, the $\operatorname{K}$ axiom says that all types are sets. On the other hand, the univalence axiom provides a means of constructing elements other than $\operatorname{refl}(x)$, at least for some types, and hence the univalence axiom implies that some types are not sets. (Then they will instead be 1-groupoids, or 2-groupoids, \dots, or even $\infty$-groupoids, with such notions defined within MLTT rather than via models, but we will not address this important aspect of univalent mathematics here). \subsection{Universes} Our final ingredient is a ``large'' type of ``small'' types, called a universe. It is common to assume a tower of universes $\operatorname{\mathcal{U}}_0, \operatorname{\mathcal{U}}_1, \operatorname{\mathcal{U}}_2, \dots $ of ``larger and larger'' types, with \begin{gather*} \operatorname{\mathcal{U}}_0 : \operatorname{\mathcal{U}}_1, \\ \operatorname{\mathcal{U}}_1 : \operatorname{\mathcal{U}}_2, \\ \operatorname{\mathcal{U}}_2 : \operatorname{\mathcal{U}}_3, \\ \vdots \end{gather*} When we have universes, a type family $A$ indexed by a type $X:\operatorname{\mathcal{U}}$ may be considered to be a function $A:X\to \operatorname{\mathcal{V}}$ for some universe $\operatorname{\mathcal{V}}$. Universes are also used to construct types of mathematical structures, such as the type of groups, whose definition starts like this: \[ \operatorname{Grp} := \Sigma(G:\operatorname{\mathcal{U}}), \operatorname{isSet}(G) \times \Sigma(e:G), \Sigma(-\cdot- : G\times G\to G), (\Pi(x:G), \operatorname{Id}(e \cdot x,x)) \times \cdots \] Here $\operatorname{isSet}(G):=\Pi(x,y:G),\Pi(p,q:\operatorname{Id}(x,y)),\operatorname{Id}(p,q)$, as above. With univalence, $\operatorname{Grp}$ itself will not be a set, but a 1-groupoid instead, namely a type whose identity types are all sets. Moreover, if $\operatorname{\mathcal{U}}$ satisfies the univalence axiom, then for $A,B:\operatorname{Grp}$, the identity type $\operatorname{Id}(A,B)$ can be shown to be in bijection with the group isomorphisms of $A$ and $B$. 
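Before turning to univalence, we record how $\operatorname{J}$ and $\operatorname{K}$ look in the Lean~4 sketch begun above (again only an illustration, with all naming choices ours): $\operatorname{J}$ is precisely the eliminator that the proof assistant generates for the inductive family \texttt{Id}, here written out by pattern matching on \texttt{refl}, while $\operatorname{K}(X)$ is an ordinary type, which may or may not be inhabited.
\begin{verbatim}
universe u v

namespace MLTT

-- J: from data on the diagonal (x, x, refl x), produce data at every
-- (x, y, p); the equation J A f x x (refl x) = f x holds by definition.
def J {X : Type u} (A : (x y : X) -> Id x y -> Type v)
    (f : (x : X) -> A x x (.refl x)) :
    (x y : X) -> (p : Id x y) -> A x y p
  | _, _, .refl x => f x

-- The statement K(X): any two identifications of x and y are
-- themselves identified.
def K (X : Type u) : Type u :=
  (x y : X) -> (p q : Id x y) -> Id p q

end MLTT
\end{verbatim}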
\section{Univalence} Univalence is a property of the identity type $\operatorname{Id}_{\operatorname{\mathcal{U}}}$ of a universe $\operatorname{\mathcal{U}}$. It takes a number of steps to define the univalence type. \subsection{Construction of the univalence type} We say that a type $X$ is a \emph{singleton} if we have an element $c:X$ with $\operatorname{Id}(c,x)$ for all $x:X$. In Curry-Howard logic, this is \[ \operatorname{isSingleton}(X) := \Sigma(c:X), \Pi(x:X), \operatorname{Id}(c,x). \] For a function $f:X\to Y$ and an element $y:Y$, its fiber is the type of points $x:X$ that are mapped to (a point identified with) $y$: \[ f^{-1}(y) := \Sigma(x:X),\operatorname{Id}(f(x),y). \] The function $f$ is called an equivalence if its fibers are all singletons: \[ \operatorname{isEquiv}(f) := \Pi(y:Y), \operatorname{isSingleton}(f^{-1}(y)). \] The type of equivalences from $X:\operatorname{\mathcal{U}}$ to $Y:\operatorname{\mathcal{U}}$ is \[ \operatorname{Eq}(X,Y) := \Sigma(f:X\to Y), \operatorname{isEquiv}(f). \] Given $x:X$, we have the singleton type consisting of the elements $y:X$ identified with $x$: \[ \operatorname{singletonType}(x) := \Sigma(y:X), \operatorname{Id}(y,x). \] We also have the element $\eta(x)$ of this type: \[ \eta(x) := (x, \operatorname{refl}(x)). \] We now need to \emph{prove} that singleton types are singletons: \[ \Pi(x:X), \operatorname{isSingleton}(\operatorname{singletonType}(x)). \] In order to do that, we use $\operatorname{J}$ with the type family \[ A(y,x,p) := \operatorname{Id}(\eta(x),(y,p)), \] and the function \begin{eqnarray*} f & : & \Pi(x:X), A(x,x,\operatorname{refl}(x)) \\ f(x) & := & \operatorname{refl}(\eta(x)). \end{eqnarray*} With this we get a function \begin{eqnarray*} \phi & : & \Pi(y,x:X), \Pi(p:\operatorname{Id}(y,x)), \operatorname{Id}(\eta(x),(y,p)) \\ \phi & := & \operatorname{J}(A,f). \end{eqnarray*} Notice the reversal of $y$ and $x$. With this, we can in turn define a function \begin{eqnarray*} g & : & \Pi(x:X), \Pi(\sigma :\operatorname{singletonType}(x)), \operatorname{Id}(\eta(x),\sigma ) \\ g(x,(y,p)) & := & \phi(y,x,p). \end{eqnarray*} Finally, using the function $g$, we get our desired result, that singleton types are singletons: \begin{eqnarray*} h & : & \Pi(x:X), \Sigma(c:\operatorname{singletonType}(x)), \Pi(\sigma :\operatorname{singletonType}(x)), \operatorname{Id}(c,\sigma )\\ h(x) & := & (\eta(x),g(x)). \end{eqnarray*} Now, for any type $X$, its identity function $\operatorname{id}_X$, defined by $\operatorname{id}(x) := x$, is an equivalence. This is because the fiber $\operatorname{id}^{-1}(x)$ is simply the singleton type defined above, which we proved to be a singleton. We need to name this function: \[ \operatorname{idIsEquiv} : \Pi(X:\operatorname{\mathcal{U}}), \operatorname{isEquiv}(\operatorname{id}_X). \] The identity function $\operatorname{id}_X$ should not be confused with the identity type $\operatorname{Id}_X$. Now we use $\operatorname{J}$ a second time to define a function \[ \operatorname{IdToEq} : \Pi(X,Y:\operatorname{\mathcal{U}}), \operatorname{Id}(X,Y) \to \operatorname{Eq}(X,Y). \] For $X,Y:\operatorname{\mathcal{U}}$ and $p:\operatorname{Id}(X,Y)$, we set \[ A(X,Y,p) := \operatorname{Eq}(X,Y) \] and \[ f(X) := (\operatorname{id}_X , \operatorname{idIsEquiv}(X)) \] and \[ \operatorname{IdToEq} := \operatorname{J}(A,f). 
\] Finally, we say that the universe $\operatorname{\mathcal{U}}$ is univalent if the map $\operatorname{IdToEq}(X,Y)$ is itself an equivalence: \[ \operatorname{isUnivalent}(\operatorname{\mathcal{U}}) := \Pi(X,Y:\operatorname{\mathcal{U}}), \operatorname{isEquiv}(\operatorname{IdToEq}(X,Y)). \] \subsection{The univalence axiom} The type $\operatorname{isUnivalent}(\operatorname{\mathcal{U}})$ may or may not have an inhabitant. The univalence axiom says that it does. The $\operatorname{K}$ axiom implies that it doesn't. Because both univalence and the $\operatorname{K}$ axiom are consistent, it follows that univalence is undecided in MLTT. \subsection{Notes} \begin{enumerate} \item The minimal Martin-L\"of type theory needed to formulate univalence has \[ \Pi, \Sigma, \operatorname{Id}, \operatorname{\mathcal{U}}, \operatorname{\mathcal{U}}'. \] Two universes $\operatorname{\mathcal{U}} :\operatorname{\mathcal{U}}'$ suffice, where univalence talks about $\operatorname{\mathcal{U}}$. \item It can be shown, by a very complicated and interesting argument, that \[ \Pi(u,v: \operatorname{isUnivalent}(\operatorname{\mathcal{U}})), \operatorname{Id}(u,v). \] This says that univalence is a subsingleton type (any two of its elements are identified). In the first step we use $u$ (or $v$) to get function extensionality (any two pointwise identified functions are identified), which is \emph{not} provable in MLTT, but is provable from the assumption that $\operatorname{\mathcal{U}}$ is univalent. Then, using this, one shows that being an equivalence is a subsingleton type. Finally, again using function extensionality, we get that a product of subsingletons is a subsingleton. But then $\operatorname{Id}(u,v)$ holds, which is what we wanted to show. But this of course omits the proof that univalence implies function extensionality (originally due to Voevodsky), which is fairly elaborate. \item For a function $f:X\to Y$, consider the type \[ \operatorname{Iso}(f) := \Sigma(g:Y\to X), (\Pi(x:X), \operatorname{Id}(g(f(x)),x)) \times (\Pi(y:Y), \operatorname{Id}(f(g(y)),y)). \] We have functions $r:\operatorname{Iso}(f)\to \operatorname{isEquiv}(f)$ and $s:\operatorname{isEquiv}(f)\to \operatorname{Iso}(f)$. However, the type $\operatorname{isEquiv}(f)$ is always a subsingleton, assuming function extensionality, whereas the type $\operatorname{Iso}(f)$ need not be. What we do have is that the function $r$ is a retraction with section $s$. Moreover, the univalence type formulated as above, but using $\operatorname{Iso}(f)$ rather than $\operatorname{isEquiv}(f)$, is provably empty, e.g.\ for MLTT with $\Pi, \Sigma, \operatorname{Id}$, the empty and two-point types, and two universes, as shown by Shulman~\cite{shulman:e46}. With only one universe, the formulation with $\operatorname{Iso}(f)$ is consistent, as shown by Hofmann and Streicher's groupoid model~\cite{MR1686862}, but in this case all elements of the universe are sets and $\operatorname{Iso}(f)$ is a subsingleton, and hence equivalent to $\operatorname{isEquiv}(f)$. So, to have a consistent axiom in general, it is crucial to use the type $\operatorname{isEquiv}(f)$. It was Voevodsky's insight not only that a subsingleton version of $\operatorname{Iso}(f)$ is needed, but also how to construct it. The construction of $\operatorname{isEquiv}(f)$ is very simple and elegant, and motivated by homotopical models of the theory, where it corresponds to the concept with the same name. 
But the univalence axiom can be understood without reference to homotopy theory. \item Voevodsky gave a model of univalence for MLTT with $\Pi,\Sigma$, empty type, one-point type, two-point type, natural numbers, and an infinite tower of universes in simplicial sets~\cite{kapulkin:lumsdaine:voevodsky,kapulkin:lumsdaine}, thus establishing the consistency of the univalence axiom. The consistency of the univalence axiom shows that, before we postulate it, MLTT is ``proto-univalent'' in the sense that it cannot distinguish concrete isomorphic types such as $X:=\mathbb{N}$ and $Y:=\mathbb{N}\times \mathbb{N}$ by a property $P:\operatorname{\mathcal{U}}\to \operatorname{\mathcal{U}}$ such that $P(X)$ holds but $P(Y)$ doesn't. This is because, being isomorphic, $X$ and $Y$ are equivalent. But then univalence implies $\operatorname{Id}(X,Y)$, which in turn implies $P(X) \iff P(Y)$ using $\operatorname{J}$. Because univalence is consistent, it follows that for any given concrete $P:\operatorname{\mathcal{U}}\to \operatorname{\mathcal{U}}$, it is impossible to prove that $P(X)$ holds but $P(Y)$ doesn't. So MLTT is invariant under isomorphism in this doubly negative, meta-mathematical sense. With univalence, it becomes invariant under isomorphism in a positive, mathematical sense. \item Thus, we see that the formulation of univalence is far from direct, and has much more to it than the (in our opinion, misleading) slogan ``isomorphic types are equal''. What the consistency of the univalence axiom says is that one possible understanding of Martin-L\"of's identity type $\operatorname{Id}(X,Y)$ for $X,Y:\operatorname{\mathcal{U}}$ is as precisely the type $\operatorname{Eq}(X,Y)$ of equivalences, in the sense of being in one-to-one correspondence with it. Without univalence, the nature of the identity type of the universe in MLTT is fairly under-specified. It is a remarkable property of MLTT that it is consistent with this understanding of the identity type of the universe, discovered by Vladimir Voevodsky (and foreseen by Martin Hofmann and Thomas Streicher~\cite{MR1686862} in a particular case). \item It should also be emphasized that what univalence does is to express the identity type $\operatorname{Id}(X,Y)$ for $X,Y : \operatorname{\mathcal{U}}$ in terms of the identity types of the types $X$ and $Y$. This is because the notion of equivalence $X \simeq Y$ is defined in terms of the identity types of $X$ and $Y$. In this sense, univalence is an extensionality axiom: it says what identity of types $X$ and $Y$ is in terms of what identity for the elements of the types $X$ and $Y$ is. From this perspective, it is very interesting that univalence implies function extensionality (any two pointwise identified functions are themselves identified) and propositional extensionality (any two subsingletons, or truth values, which imply each other are identified). Thus, univalence is a common generalization of function extensionality and propositional extensionality. \end{enumerate} This paper only explains what the \emph{univalence axiom} is. A brief and reasonably complete introduction to \emph{univalent mathematics} is given by Grayson~\cite{grayson}. \section*{Acknowledgements} I benefitted from input by Andrej Bauer, Marta Bunge, Thierry Coquand, Dan Grayson and Mike Shulman on draft versions of these notes. \nocite{escardo:ufs} \bibliographystyle{plain}
\section{Introduction} \label{section:introduction} Transportation systems have a crucial importance for countries because of their social and economic impacts. Air transportation in particular is closely linked to the economic development of the areas in which it unfolds. This is why it is very important for policy makers to ensure a smooth development, even -- and especially -- in areas where the traffic increase forecasts are the highest. Indeed, the air traffic system will get closer and closer to its actual capacity, especially in Europe and in the US, where the traffic could increase by 50\% in the next 20 years \cite{challenges}. As a consequence, it is important for the air traffic management world 1) to forecast the consequences on the current infrastructures and procedures and 2) to find the appropriate solutions to cope with the increase. For this reason, large investment programs like SESAR in Europe and NextGen in the USA have been launched. Apart from airport capacity, one of the important bottlenecks for the increasing traffic flow will be the sectors, where the controller needs to actively separate flights in order to avoid conflicts. However, solving conflicts in areas of high traffic complexity is a demanding task already today. With the increase in traffic, the cognitive capacities of air traffic controllers will likely reach their limits, which will either drastically increase the number of conflicts or cap the capacities of the sectors. As a consequence, navigating through the European sky will become more and more difficult in the future and will require more careful planning capacities for the network manager and for the airlines. In other words, the airspace is becoming a scarce resource, especially in congested situations, like, for example, during major shutdowns of large areas (extreme weather, strikes, volcano eruptions, etc.). It is thus expected that the airlines will compete fiercely for two of the most important resources: time and space. More specifically, it is foreseen that the allocation of slots at the airports will change and include market-driven elements like bids. On the other hand, the airspace will be more densely populated and the airlines will also have to compete for it. From the point of view of the transportation companies, this increases the effort required to find better route allocation strategies, whose success depends, among other things, on the strategies adopted by the other users. Therefore in this paper we consider the allocation of flight plans on the airspace from the point of view of the dynamics on a complex network. This point of view is fruitfully used in different fields, like epidemic dynamics, information propagation on the Internet, percolation, opinion spreading, systemic risk, etc.\ \cite{boccaletti, caldarelli}. Recently, increasing attention has been devoted to the network description of transport systems \cite{helbing}, in particular the air transport system \cite{li,guimera,colizza,lacasa,zanin,cardillo,plos,Sun}; for a recent review see \cite{zaninlillo}. The present study follows this stream of literature, which allows one to use powerful tools to extract the main characteristics of a system, regardless of the specific details of the real system. More specifically, we present here a simplified model based on the agent-based paradigm, particularly well suited to the problem \cite{Chakraborti}. 
The model describes the strategic allocation of flight plans on an idealized airspace, considered as a network of interconnected sectors. The sectors are capacity-constrained, which means that the companies might not get their optimal solutions regarding the flight plans. Thus, they will fall back on suboptimal solutions, for which they will develop different strategies. By using two different strategies for the companies, we show how different factors explain the satisfaction of the different types of company. Some of these factors (the network topology and the departing times) can be regulated externally by the policy maker, while others (the mixing composition) depend on the airline population and market forces. We then study the evolutionary dynamics of the populations by considering a ``reproduction'' rate based on their past satisfaction, which acts as a fitness function. We show that this dynamics exhibits an equilibrium point which is distinct from the optimal point for the system. Moreover, the fluctuations around the equilibrium and the convergence time may hinder the convergence in practice. The paper is organized as follows. In section \ref{section:model}, we present briefly the model. In section \ref{section:one-route}, we present the conclusions that we drew using an earlier version of the model with only one route (two airports) and no dynamics. In section \ref{section:static}, we present some new results on the static equilibrium of the model, concerning (i) the behaviors of the airlines in different situations and (ii) the effect of the infrastructure, i.e. the number of airports. In section \ref{section:evolution} we investigate the population dynamics in an evolutionary environment. Finally, we draw some conclusions in section \ref{section:conclusions}. \section{Presentation of the Model} \label{section:model} In this section we present the agent-based model. We will give a brief description; more details can be found in \cite{jstat}, where the model was introduced. The implementation of the model is open and can be freely downloaded for any non-commercial purpose \cite{code2}. A more detailed version of the model with a tactical part is also available \cite{tactical, code}. The model describes the strategic allocation of trajectories in the airspace. Mimicking what is done in the European airspace, the model considers airlines who submit their flight plans to the network manager (NM). The NM checks whether accepting the flight plan(s) would lead to a sector capacity violation. If this is not the case, the flight plan is accepted; otherwise it is rejected and the airline submits the second best flight plan (according to its utility function). The process goes on until a flight plan is accepted or a maximal number of rejected flight plans is reached and the flight is canceled. The NM keeps track of the allocated flights and checks newly submitted flight plans for violations, responding in a deterministic way to the requests, without making counter-proposals. {\bf Airspace.} The airspace is modeled as a network of sectors. Topological properties of the real networks of sectors have been investigated in \cite{plos}. Each sector has a capacity, here fixed to 5 for all sectors. Some of these sectors contain airports (see below for more details on our choices) and the geometry of a flight plan is a path connecting two airports. A flight plan also specifies the departing time (see next point). 
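To fix ideas, the acceptance rule of the NM described above can be sketched in a few lines of Python (a minimal sketch with data structures of our own choosing; the released implementation \cite{code2} is organized differently).
\begin{verbatim}
def allocate(flight_plans, load, capacity):
    """Try the candidate plans of one flight in increasing order of
    cost.  Each plan is a list of (sector, entry_time) pairs; `load`
    maps (sector, time) to the number of flights already allocated
    there; `capacity` is the common sector capacity (5 in the text).
    Returns the accepted plan, or None if the flight is canceled."""
    for plan in flight_plans:
        if all(load.get(key, 0) < capacity for key in plan):
            for key in plan:            # accept: update sector loads
                load[key] = load.get(key, 0) + 1
            return plan
    return None
\end{verbatim}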
{\bf Airlines.} The main agents of the model are the Airline Operators (AOs), who try to obtain the best trajectories for their flights, and the Network Manager, who accepts or rejects the flight plans. In the simplified version we assume that the quality of a trajectory depends on its length (the shorter, the better) and on the discrepancy between the desired and actual departing/arrival time (the smaller, the better). Companies might differ in the relative weight of the two components in their cost or utility function. Companies caring more about length are said to be of type ``S'' (for shifting), since when their flight plan is rejected by the NM, they prefer to delay the flight while keeping a short trip length. Companies caring more about departure punctuality are said to be of type ``R'' (for rerouting), since when the flight plan is rejected they prefer to depart on time even if they need to use a longer route to destination. Note that in the following the AOs have only one flight. Hence in our model the optimization takes place after the previous, bigger strategic allocation of flights where AOs decide whether or not to operate the route, with which aircraft, etc. For this reason, all the optimizations here are independent of each other for each flight, except through the capacity constraints on the network. More quantitatively, for each flight an AO chooses a departing and arrival airport, a desired departing time, $t_0$, and selects a number $N_{fp}$ of flight plans. The $k-$th flight plan, $k=1,...,N_{fp}$, is the pair $(t_0^k, {\bf p}^k)$, where $t_0^k$ is the desired time of departure and ${\bf p}^k$ is an ordered set containing the sequence of sectors in the flight plan. The flight plans are selected by an AO according to its cost function. In our model it has the form \begin{equation} c(t_0^k,{\bf{p}}^k) = \alpha {\cal L}({\bf{p}}^k) + \beta (t_0^k-t_0), \end{equation} where ${\cal L}( {\bf{p}}^k)$ is the length of the path on the network (i.e. the sum of the lengths of the edges followed by the flight). We also assume that flights are only shifted ahead in time ($t_0^k \ge t_0$) by an integer multiple of a parameter $\tau$, which is taken here as 20 minutes (all durations in this article are in minutes unless specified otherwise). The parameters $\alpha$ and $\beta$ define the main characteristics of the company. Given the discussion above, companies R have $\beta/\alpha\gg 1$, while companies S have $\beta/\alpha\ll 1$. {\bf Departing waves.} As shown in \cite{jstat}, an important determinant of the allocations is the desired departing time $t_0$ chosen by the AO. We assume that departing times are drawn from a distribution inside the day characterized by a certain number of peaks or waves. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{departure_times_step1743.png} \caption{Example of pattern of departure times (departing waves) with $N_p=5$ peaks ($\Delta t=4$) and $f_S=0.4$. The blue bars are the desired departing times while the red bars are the actual departing times after the allocation by the NM has been done.} \label{fig:waves} \end{center} \end{figure} We first define $T_d$ as the length of the ``day'' (in minutes), i.e. the time window of departure for all flights. In this time window, we define $N_p$ peaks of 60 minutes, by setting a time $\Delta t$ between the end of a peak and the beginning of the next one (thus, $N_p = T_d/(\Delta t + 60)$). The parameter $T_d$ is fixed to 24 hours in the following. 
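A minimal Python sketch of this construction follows; the uniform draw of each desired time within its 60-minute peak is our own assumption, since the text only fixes the peak structure.
\begin{verbatim}
import random

def desired_departure_times(n_flights, T_d=24 * 60, delta_t=4 * 60):
    """Departing waves: peaks of 60 minutes separated by delta_t, so
    that N_p = T_d // (delta_t + 60), with the flights divided equally
    between the peaks.  All times are in minutes."""
    n_peaks = T_d // (delta_t + 60)
    starts = [p * (delta_t + 60) for p in range(n_peaks)]
    return sorted(starts[i % n_peaks] + random.uniform(0, 60)
                  for i in range(n_flights))
\end{verbatim}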
Figure \ref{fig:waves} shows an example of departing waves. Then we define a total number of flights $N_f$ and divide them equally between peaks. In the following, we also use the corresponding time density $d = N_f/24$, i.e. the average number of flights per hour. {\bf Dynamics.} Given a mixed population of AOs of different types, at each time step an AO is selected randomly\footnote{The random order of arrival of bids is chosen to guarantee that neither type of company has an advantage because it arrives first to the network manager.}. The AO chooses the departing and arrival airports and the desired departing time $t_0$ for its flight, drawing it from the departing time distribution. It then computes the $N_{fp}$ best flight plans for the flight according to its cost function and submits them, one by one in increasing order of cost, to the NM. The NM accepts the first flight plan which does not cross overloaded sectors, i.e. sectors already at maximal capacity. If none of the $N_{fp}$ flight plans is accepted, the flight is rejected and the satisfaction of the AO for this flight is set to 0. {\bf Metrics.} The metric measuring the satisfaction (or fitness) of a company with a given flight $f$ is \begin{equation} {\cal S}_f=c_f^{best}/c_f^{accepted}, \end{equation} where $c_f^{best}$ is the cost of the optimal flight plan for the flight $f$ according to the AO cost function (i.e. the first flight plan to be submitted for the flight), and $c_f^{accepted}$ is the cost of the flight plan eventually accepted for this flight. If no flight plan has been accepted, we set ${\cal S}_f$ to 0. Note that ${\cal S}_f$ is always between 0 and 1. The value 1 is obtained when the best flight plan is accepted. Since the AOs have only one flight here, the satisfaction of a flight is also the satisfaction of its company. When several companies have the same type (same ratio $\beta/\alpha$), we use the average satisfaction across them. For instance, $\mathcal{S}^S$ and $\mathcal{S}^R$ are respectively the average satisfactions of companies S with $\beta/\alpha\ll 1$ and R with $\beta/\alpha\gg 1$. Finally, we use also the average satisfaction across all flights as a measure of the global satisfaction of the system: \begin{equation}\label{eq:GS} {\cal S}^{TOT} = f_S \times {\cal S}^{S} + f_R \times {\cal S}^{R}, \end{equation} where $f_i$ and ${\cal S}^{i}$ are the fraction of flights and the average satisfaction of companies of type $i$, respectively, and $f_S+f_R=1$. The simulations we describe hereafter are obtained for a type of airspace different from the one used in Ref.~\cite{jstat}. In order to have a more controlled environment, we generate a triangular network with 50 nodes. Each node of the network represents the center of an hexagonal sector. Sectors are linked to each of their neighbors. In order to avoid paths having exactly the same duration, which could lead to ties in the optimization, we sample the crossing times between sectors from a log-normal distribution so as to have a 20-minute average and a very small variance (smaller than $10^{-4}$ minutes). The number of airports available to the air companies can be chosen before the simulation. Unless specified otherwise, we fix the number of airports to 5. In the following, we present results in which we drew the airports at random 10 times, then ran 100 independent simulations on each of these realizations. The previous setup is very stylized but allows one to capture the main features of the model. 
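The stylized airspace itself is easy to reproduce; the following sketch uses the \texttt{networkx} library, with lattice size parameters that are only illustrative.
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.triangular_lattice_graph(5, 8)   # a few tens of sector centers

# Near-constant 20-minute crossing times drawn from a log-normal law
# with negligible variance, to break ties between paths.
sigma = 1e-3
for u, v in G.edges():
    G.edges[u, v]["weight"] = rng.lognormal(
        np.log(20.0) - sigma ** 2 / 2, sigma)

nodes = list(G.nodes())
airports = [nodes[i]
            for i in rng.choice(len(nodes), size=5, replace=False)]
\end{verbatim}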
In \ref{annex:rob} we present some robustness checks of the model by considering two more realistic setups. In the first one we consider a scale-free network of airports, i.e. not all the airports are equivalent in terms of number of flights/destinations, but hubs and spokes are present. In the second we use real ECAC data to construct the network of sectors, the sector capacities, the origin/destination frequencies, and the wave structure. We find that the results are indeed similar to the ones presented thereafter for the stylized model, where a much more controlled setting is used. \section{Competition over a single route} \label{section:one-route} Ref. \cite{jstat} considered the model in a static setting with only two airports. Static means that no evolutionary dynamics was considered. Ref. \cite{jstat} showed that with a single type of company, there is a (congestion) transition, much like the congestion observed in other transport systems, e.g. car traffic, when the number of flights becomes too large. When two extreme types of companies (R and S) are competing for the airspace, ref. \cite{jstat} showed that there exists a unique mixing fraction corresponding to a stable Nash equilibrium. The strategies interact positively, leading to an absolute maximum in satisfaction for the overall system at a mixing fraction different from 0 and 1. Finally, by performing extensive simulations, ref. \cite{jstat} showed that the overlap between the different possible paths connecting two airports is an important determinant of the possibility to gain advantage from a rerouting, directly impacting the satisfaction of companies R and indirectly that of companies S. Compared with \cite{jstat}, the main innovations presented here are: \begin{itemize} \item We consider a more realistic setting with multiple airports and we study the relation between the number of flights and the number of airports such that the satisfaction of airlines is preserved. \item We study the specific mechanism linking the structure of the paths on the network, the overlap, to the increase in traffic on this network. \item We consider an evolutionary setting where the capability of a type of company to continue its business depends on its past satisfaction. \end{itemize} \section{Static equilibrium} \label{section:static} We first compare the general output of the simulations to the case where there is only one connection and two airports. For this, we study directly the case where we have two different types of company, S and R, competing on the airspace. Note that each company has only one flight, whose origin/destination pair is drawn randomly from the available ones. \subsection{Departing patterns and mixed populations} \label{subsubsection:traffic} Our first aim is to understand how the satisfaction of each type of company depends on its environment. For this, we fix the number of flights (to $N_f = 24 \times d = 480$ here) and we change the proportion $f_S$ of companies of type S present in the airspace. This last parameter will be called the mixing parameter in the following. We also change the structure of the wave pattern, by changing the parameter $\Delta t$, defined previously. Figures \ref{fig:sats_vs_dt} and \ref{fig:sats_vs_fs} show the satisfaction of the two types of company as a function of $\Delta t$ for different values of the mixing parameter. The results for companies R are quite intuitive. 
These companies are better off when they are competing with a large fraction of companies S (figure \ref{fig:sats_vs_fs} left) and when there are more waves, i.e. when $\Delta t$ is small (figure \ref{fig:sats_vs_dt} left). This is expected, since more waves mean more ``space'' for companies when the number of flights is fixed. Moreover, companies R have a stronger dependency on the mixing parameter when the number of waves is small, i.e. when $\Delta t$ is big. In other words, they compete more strongly with each other in this case. Note that the plateau present for $\Delta t \in [700, 1300]$ is due to the fact that in this range of the parameter, the number of waves is constant and the waves are far from each other. Indeed, in this range there are only two peaks, which come slowly apart as $\Delta t$ increases. Since they are sufficiently apart, the flights from the previous wave do not interact with the flights in the next one, and the satisfaction does not change with $\Delta t$. In other words, the first flight from the second wave departs after the last flight from the first wave arrives. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{plot_SR_vs_Deltat_loop_fS_nairpt5_d20.png} \includegraphics[width=0.48\textwidth]{plot_SS_vs_Deltat_loop_fS_nairpt5_d20.png} \caption{Satisfaction of companies as a function of the time $\Delta t$ between waves. Different lines refer to different mixing parameters. The number of airports is 5 and the number of flights is $20 \times 24$. Left: satisfaction of companies R. Right: satisfaction of companies S.} \label{fig:sats_vs_dt} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{plot_SR_vs_fS_loop_Deltat_nairpt5_d20.png} \includegraphics[width=0.48\textwidth]{plot_SS_vs_fS_loop_Deltat_nairpt5_d20.png} \caption{Effect of the mixing parameter on the satisfaction for different wave patterns. The number of airports is 5 and the number of flights is $20 \times 24$. Left: satisfaction of companies R. Right: satisfaction of companies S.} \label{fig:sats_vs_fs} \end{center} \end{figure} The effect of the environment is more complex for companies S. Indeed, for some values of the mixing parameter $f_S$, their satisfaction is not monotonic with $\Delta t$. This behaviour is explained by the following trade-off. On one hand, the bigger $\Delta t$ is, the fewer waves there are. Hence companies are effectively competing with a higher number of other companies, which is the reason behind the decreasing curve of companies R in the left panel of figure \ref{fig:sats_vs_dt}. On the other hand, companies S have a strategy where they try to delay their flight if the first flight plan is rejected. This means that if the waves are too close to each other, the delayed flight plans will likely conflict with the flights in the next wave. For this reason, their satisfaction increases at the beginning, when the waves come apart, and then decreases when the waves are further apart and fewer in number. Note that the first effect -- the decrease of satisfaction due to concentration within waves -- is less significant when companies S compete with many R companies. Indeed, in this case, their increasing concentration within a wave is of little importance for them, because they can always shift their flight plan two or three times to get out of the wave and not conflict anymore with companies of type R. For this reason, their curve is monotonic with $\Delta t$ for high values of the mixing parameter. 
More strikingly, their satisfaction is higher for very high $\Delta t$ than for very small ones if $f_S \ll 1$. Hence, the companies react differently to different environments (waves) because they are sensitive to different mechanisms. The interplay of the mechanisms leads to interesting patterns that translate into interesting behaviours when framing the model in an evolutionary environment. But before coming to this, we inspect in the following section the effect of the density of airports on the companies. \subsection{Effect of the density of airports} \label{subsubsection:infrastucture} The increase of the number of airports in real airspaces has some obvious impact. In our model, for a fixed number of flights, increasing the number of airports leads to more potential routes and thus less interactions between flights. However, it is not clear whether this effect is similar to a decrease of the number of flights with a fixed number of airports. In order to investigate this problem, we repeat some simulations with constant parameters $\Delta t = 60 \times 5$ and $f_S = 0.5$ but with different numbers of flights and different numbers of airports. The results are presented in figure \ref{fig:airport_sweep}. As one can see in the left panel, the average satisfaction decreases with the number of flights, and increases with the number of airports. In order to find a relationship between both parameters, we rescaled the abscissa by $d/n_{airpt}^{\alpha}$, trying to find the value of $\alpha$ for which the curves collapse best. Purely empirically, we found that $\alpha \simeq 0.15$ is the best match that we could get, except for very low numbers of airports, for which the curve does not collapse well with the others (see right panel of figure \ref{fig:airport_sweep}). We do not have an analytical argument to ground this scaling, but we suspect that it is linked to the degree of the network, since $\alpha = 0.15 \simeq 1/6$, and 6 is the degree of the triangular lattice on which the airspace is embedded. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{plot_STOT_vs_d_loop_nairpt.png} \includegraphics[width=0.48\textwidth]{plot_STOT_vs_x_scaled_loop_nairpt.png} \caption{Satisfaction against density for different numbers of airports. Both parameters seem to have opposed effects. Left: non-rescaled plot. Right: the abscissa is rescaled by $d/n_{airpt}^{0.15}$.} \label{fig:airport_sweep} \end{center} \end{figure} Whatever the reason behind the exact scaling, it is thus clear that both parameters play inverse roles. Roughly speaking, more airports give more choices to companies, and more flights ``fill'' these choices. In order to capture this point, we computed a metric $Q$ that we call ``overlap'' and which represents how much the paths open to companies are similar to each other. More specifically, if $p_1$ and $p_2$ are two paths on the network, we compute first: $$ Q_{p_1, p_2} = \frac{|p_1 \cap p_2| }{|p_1 \cup p_2|}, $$ which is simply the number of common nodes (sectors) in $p_1$ and $p_2$ divided by the total number of unique sectors in $p_1$ and $p_2$. To compute an aggregated value, we consider all flight plans computed by the companies, i.e. including also the suboptimal ones. From them, we consider all the paths contained in the flight plans, and we compute the overlap between \emph{all} the pairs of possible paths to obtain $Q$. 
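In code, the computation of $Q$ is direct. The sketch below averages $Q_{p_1,p_2}$ over all pairs of candidate paths, which is our reading of the aggregation described above.
\begin{verbatim}
from itertools import combinations

def path_overlap(p1, p2):
    """Q_{p1,p2}: common sectors over total distinct sectors."""
    s1, s2 = set(p1), set(p2)
    return len(s1 & s2) / len(s1 | s2)

def aggregated_overlap(paths):
    """Average overlap over all pairs of candidate paths (a purely
    'static' quantity: departure times are ignored on purpose)."""
    pairs = list(combinations(paths, 2))
    return sum(path_overlap(p, q) for p, q in pairs) / len(pairs)
\end{verbatim}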
Note that this metric does not consider time at all, so it might be that two flight plans with the same path actually depart at very different times and have no chance of interacting. This could be called a ``static'' overlap, but we consider this metric because it is simple and very specific to the network, rather than to the companies themselves. Even with this simple metric, one can catch an interesting feature of the model. The left panel of figure \ref{fig:airport_sweep_overlap} shows the overlap as a function of the number of airports in the airspace. As expected, the overlap between potential paths decreases with the number of airports. The overlap is a very physical quantity, which has a tight connection with the number of flights. Indeed, when one opens a path on the network, on average the capacities of the other paths to accept flights decrease by $Q$. So the ``density'' of flights per route is effectively $Q N_f/N_{routes}$, where $N_{routes}$ is the total number of routes. Since the average satisfaction is likely to be a function of this density, all curves for different numbers of flights and different numbers of airports should scale with $Q d$. This is exactly the result we obtain in the right panel of figure \ref{fig:airport_sweep_overlap}, where we plot the total satisfaction against $Q d$. As expected, all the curves collapse very well. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{Q_vs_n_airport.png} \includegraphics[width=0.48\textwidth]{plot_STOT_vs_Q_d_loop_d.png} \caption{Left: Overlap between paths as a function of the number of airports in the airspace. Right: Total satisfaction against $Q\cdot d$ for different densities and overlaps. The curves collapse very well.} \label{fig:airport_sweep_overlap} \end{center} \end{figure} This result shows that the effect of a change in the number of airports can be deduced from the effect of a change in traffic, or vice versa. It has a very practical impact, which is that the simulations can be run with different numbers of airports, or different numbers of flights, but not necessarily both. This is why in the following we keep the number of airports at 5, and only study the effect of variations of density. It could also have a more general impact, in the sense that if the results hold on a more realistic airspace (which should be the case because of the general scope of the overlap metric), a policy maker could for instance push for the creation of new airports to counterbalance increasing traffic. This, of course, supposes that the demand is constant and not too localized (e.g. an additional airport in a big city). \subsection{Global satisfaction: equilibrium state} \label{subsubsection:global_eq} As seen in section \ref{subsubsection:traffic}, the effects of the environment properties on the satisfactions of the two types of companies are different. What is not clear yet is the exact interplay of the mechanisms and the resulting difference of satisfaction of the populations. In figure \ref{fig:diff_S} we plot the difference of satisfactions $\Delta S$ between population S and population R, as a function of the mixing parameter $f_S$ and for several values of $\Delta t$. In the left panel, the density of flights is quite small ($d=20$), corresponding to the one used in figures \ref{fig:sats_vs_dt} and \ref{fig:sats_vs_fs}. On the right panel we show the result for a much higher density ($d=80$), corresponding to a congested airspace. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{plot_DeltaS_vs_fS_loop_Deltat_nairpt5_d20.png} \includegraphics[width=0.48\textwidth]{plot_DeltaS_vs_fS_loop_Deltat_nairpt5_d80.png} \caption{Difference of satisfaction $\Delta S$ between companies S and companies R versus the mixing parameter for different values of $\Delta t$. Left: low density of flights, $d=20$. Right: high density of flights, $d=80$.} \label{fig:diff_S} \end{center} \end{figure} The difference of satisfaction $\Delta S$ between the two populations depends heavily on the two parameters. At both densities, the smallest values of $\Delta t$ clearly cripple population S, since in this configuration $\Delta S$ is always negative. This is due to the fact that very frequent waves prevent companies S from delaying their flights, whereas companies R can find available paths by suitable rerouting. For higher values of $\Delta t$, the situation becomes more favourable to companies S, since the difference is usually positive. The details of the variations of the difference with the two parameters are quite complex, but it is clear that $\Delta S$ always decreases monotonically with $f_S$. The point where it crosses 0 varies with $\Delta t$ but not wildly (except for small $\Delta t$). It is worth noting that these curves can be considered as fitness curves for two populations competing for the same resources in a given environment. If a higher fitness positively affects the future reproduction rate (i.e. the possibility of continuing and expanding business), one can study the dynamics of the two populations in an evolutionary framework, as we show in section \ref{section:evolution}. Here we simply recall that the points where the difference of the fitness curves vanishes (i.e. its roots) are equilibrium points for the dynamics. The existence of a single root (as in Fig. \ref{fig:diff_S}) shows that there is only one equilibrium point (apart from the two absorbing states at $f_S=0$ and $f_S=1$). The slope of $\Delta S$ at its root measures the stability. Since the slope we observe is negative, the equilibrium is stable. In other words, when the proportion of companies S is too high, their satisfaction/fitness decreases, thus giving them a lower reproduction rate, favoring companies R, and driving the system back towards the equilibrium. Another important question in this kind of system is whether the equilibrium point is also optimal for the system. For this reason we also compute the global satisfaction, Eq. \ref{eq:GS}, which is the average satisfaction of all the flights. A higher global satisfaction means that globally the system is in better shape, leading to increased profits for airlines and possibly better service for passengers. In the left panel of figure \ref{fig:global_opt} we show the value of the global satisfaction as a function of the mixing parameter $f_S$ for different values of $\Delta t$. The first conclusion is that the global satisfaction is usually better for $0<f_S<1$ than for pure populations. This should not be a surprise, because we saw that each population performs better against the other one. This is the typical case where different populations have different niches and thus their interaction is beneficial for both. The second conclusion is that for all values of $\Delta t$, there exists a unique maximum and its position varies with $\Delta t$. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{plot_STOT_vs_fS_loop_Deltat_nairpt5_d20.png} \includegraphics[width=0.48\textwidth]{plot_eq_opt_vs_Deltat_nairpt5_Deltat1380_d20.png} \caption{Left: average satisfaction of all companies against the mixing parameter for different wave patterns. Right: evolution of the maximum satisfaction and the equilibrium point as a function of $\Delta t$.} \label{fig:global_opt} \end{center} \end{figure} In the right panel of figure \ref{fig:global_opt}, we plot both the global optimum, extracted from the left panel, and the equilibrium point, extracted from the left panel of figure \ref{fig:diff_S}. Both exhibit similar variations. For small values of $\Delta t$, the equilibrium point and the global optimum are both at $f_S\simeq 0$. When $\Delta t$ increases, companies S increase their advantage over companies R because they are not troubled by the next wave. Then both values decrease, to stabilize at a value $f_S >0.5$, showing the greater advantage of companies S when the departing pattern is composed of well-separated waves. More importantly, both curves are clearly distinct for $\Delta t \gtrsim 100$, even considering error bars. This is an important result, because it shows that the equilibrium mixing is not optimal at the global level. In particular, the evolution of the system toward its equilibrium mixing would tend to favour drastically population S, whereas the global optimum would be reached with a much smaller market share of companies S. This is exactly where policy makers should step in and issue policies driving the system to the optimum. \section{Evolutionary Dynamics and Equilibrium} \label{section:evolution} In the previous section, we interpreted the satisfaction of each company as its fitness when competing with the others for the same resources -- namely time and space. Interpreting these fitnesses as the capability of expanding business, it is possible to develop a dynamical evolutionary model for studying the dynamics toward equilibrium and its fluctuations, as well as the role of finite-size populations. In our model each type of company has a population size at time $t+1$ which depends on its satisfaction at time $t$. In order to keep the simulations under reasonable computational time, and following what is done in evolutionary biology models \cite{nowak}, we keep the total population fixed. This means that only the mixing parameter $f_S$ changes between time $t$ and $t+1$. For the reproduction rule, we use an exponential reproduction, i.e. the rate of reproduction of a population is proportional to its fitness and its current population. Combined with the fixed-population condition, this leads to a discretized version of the so-called replicator model \cite{nowak}: $$ f_S^{t+1} = f_S^{t} + \Delta S_t\, f_S^t\, (1-f_S^t), $$ where $f_S^t$ is the mixing parameter at time $t$ and $\Delta S_t$ is the difference in satisfaction between companies S and R at time $t$. In the simulations, we also choose to always keep the number of companies of each kind at a minimum of 1. This ensures that the equilibria at $f_S = 0$ and $f_S = 1$ do not act as absorbing barriers (sinks). Indeed, since the populations are finite, a small non-null $f_S^t$ could lead to exactly 0 companies S, which leads in turn to $f_S^{t'} = 0$ for all $t'>t$. Analogously, the same happens when $f_S^t = 1$. 
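One generation of this dynamics reads as follows in a minimal Python sketch, where the clipping implements the minimum of one company of each type for a total population of \texttt{n\_companies}.
\begin{verbatim}
def replicator_step(f_S, delta_S, n_companies):
    """Discrete replicator update of the mixing parameter; delta_S is
    the measured satisfaction difference between companies S and R at
    this generation.  Clipping keeps at least one company of each
    type, so that f_S = 0 and f_S = 1 are not absorbing."""
    f_next = f_S + delta_S * f_S * (1.0 - f_S)
    low, high = 1.0 / n_companies, 1.0 - 1.0 / n_companies
    return min(max(f_next, low), high)
\end{verbatim}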
Note that all other parameters ($\Delta t$, number of airports, airspace structure, etc.) are kept constant throughout the reproduction process, i.e. the environment is stable. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{fit_equilibrium2.png} \includegraphics[width=0.48\textwidth]{fit_equilibrium.png} \caption{Evolution of the mixing parameter with the generations, averaged over 100 realizations (and only one network realization). The blue lines are the averages, the violet lines are exponential fits, and the error bars are the average standard deviations. The coefficients of determination of the regressions are over 0.98. Left: $\Delta t = 23\times 60$. Right: $\Delta t = 0$. } \label{fig:evo} \end{center} \end{figure} Figure \ref{fig:evo} shows the results of the simulations for two distinct values of $\Delta t$. The plots show the evolution of the mixing parameter with time (i.e. the number of generations). The solid blue lines are averages over 100 runs and the solid violet lines represent the results of exponential fits. Both lines are well fitted ($R^2>0.98$), and the equilibrium is clearly reached in both cases. On the left, there is only one wave of departure, thus companies S have an advantage and the point of equilibrium for $f_S$ is above 0.5. On the contrary, the figure on the right shows that when there are no waves ($\Delta t=0$), companies S are very disadvantaged, and the point of equilibrium is close to 0. Note that both figures are roughly consistent with figure \ref{fig:diff_S} on the left, where the difference in fitness $\Delta S$ has a root close to 0 when $\Delta t = 0$ and a root close to 0.7 when $\Delta t = 23\times 60$. To investigate the difference between the two results in more detail, in figure \ref{fig:evo_eq} we plot the position of the equilibrium point -- computed by averaging the last 40 generations in each run -- as a function of $\Delta t$. The plot is directly comparable to the right panel of figure \ref{fig:global_opt}, since all the other parameters are the same. The curves are roughly similar, but the one obtained with evolutionary dynamics displays larger values of $f_S$ than the static one, especially around $\Delta t = 3\times 60$. It can be shown that the stochasticity of the fitness function can lead to such a result, see \ref{annex:sde}. This is an important result, because the noise coming from the fitness function can drive the equilibrium even further from the global optimum than in the deterministic case. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{sat_eq.png} \caption{Evolution with $\Delta t$ of the equilibrium point obtained from the evolutionary simulations.} \label{fig:evo_eq} \end{center} \end{figure} We now consider how the system converges to the equilibrium and how external parameters like $\Delta t$ influence the convergence. It is worth recalling the link between the fluctuations around the equilibrium and the shape of the fitness functions. Indeed, in the continuous version of the replicator model, the stability of the equilibrium is given by the slope of the difference of fitnesses at its root \cite{nowak}. Higher absolute slopes translate into a higher stability and faster convergence to the steady state. However, our system does not have a deterministic fitness function, since the satisfaction depends on the specific realization of the model. 
This additional noise, directly linked to the mechanisms embedded in the model, affects both the fluctuations around the equilibrium and the time of convergence in general. In \ref{annex:sde}, we briefly show analytically why this is the case. The magnitude of the fluctuations around the equilibrium is depicted in the left panel of figure \ref{fig:evo_std}. There is a weak trend towards bigger fluctuations when $\Delta t$ increases, but their magnitude reaches a plateau quickly. Note that the standard deviation is far from negligible, implying that the fluctuations are typically 15\% of the value of the equilibrium point. This means that the static analysis performed in section \ref{section:static} is far from revealing all the features of the model. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{sat_std_eq.png} \includegraphics[width=0.48\textwidth]{time_from_fit2.png} \caption{Left: evolution of the standard deviation of the value of $f_S$ when the equilibrium is reached against $\Delta t$. Right: Typical number of generations before the equilibrium is reached (time to equilibrium) as a function of $\Delta t$.} \label{fig:evo_std} \end{center} \end{figure} The time of convergence to equilibrium is plotted in the right panel of the same figure. This is the characteristic time obtained by fitting $f_S^t$ with an exponential function of time. This time scale is quite high for small values of $\Delta t$ -- where the fluctuations are small -- but decreases to a small value (around 7 or 8 generations) when $\Delta t$ increases -- where the fluctuations are high. As already stated, the magnitude of the fluctuations depends on two independent mechanisms, the variance of the fitness function and its slope. In order to understand which mechanism plays the major role, we performed a regression with an ordinary least-squares procedure. The dependent variable is the inverse of the time to equilibrium, and the two explanatory variables are $\sigma_S$, the standard deviation of the fitness function around the equilibrium point, and $\gamma$, the slope of the fitness function. The results of the regression are presented in table \ref{tab:ols}. \begin{table} \centering \begin{tabular}{c|cccc} Variable & Weight & Std. err. & p-value & Conf. Int. \\ \hline Const & 0.0911 & 0.023 & 0.004 & [0.038, 0.144]\\ $\sigma_S$ & -0.2588 & 0.046 & 0.001 & [-0.366, -0.152]\\ $\gamma$ & -0.1279 & 0.029 & 0.002 & [-0.196, -0.060]\\ \end{tabular} \caption{Results of the ordinary least-squares regression of $1/\tau$ with the estimated parameters, the standard errors, the p-values of a t-test, and the 5\% - 95\% confidence intervals. The coefficient of determination is $R^2 = 0.94$.} \label{tab:ols} \end{table} The regression is very good, with a coefficient of determination of 0.94. Both variables negatively impact the inverse of the time to equilibrium. This is expected, since a higher variance of the fitness function should increase the time to equilibrium, as should a higher slope (because the slope is negative). Finally, we can conclude that both mechanisms play an important role, since both coefficients are similar in magnitude. Note however that the variance of the fitness function is twice as important as the slope in determining the dynamics of the system. As a consequence, one cannot simply infer the dynamics from the static considerations made in section \ref{section:static}. 
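The regression itself is a standard ordinary least-squares fit; a self-contained sketch with \texttt{numpy} is given below (variable names are ours, and the inputs are expected to be \texttt{numpy} arrays).
\begin{verbatim}
import numpy as np

def ols_inverse_time(inv_tau, sigma_S, gamma):
    """Regress 1/tau on a constant, the standard deviation sigma_S of
    the fitness function near the equilibrium, and its slope gamma;
    returns the coefficients and the coefficient of determination."""
    X = np.column_stack([np.ones_like(sigma_S), sigma_S, gamma])
    beta, *_ = np.linalg.lstsq(X, inv_tau, rcond=None)
    resid = inv_tau - X @ beta
    tss = np.sum((inv_tau - inv_tau.mean()) ** 2)
    return beta, 1.0 - np.sum(resid ** 2) / tss
\end{verbatim}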
Finally, we show in figure \ref{fig:ols} the graphical results of the regression, plus a plot showing the variation of $1/\tau$ against the expression found analytically (see \ref{annex:sde}). The agreement is worse than with the regression, which might be due to the fact that the analytical model is linearised around the equilibrium, whereas the system can in fact start quite far from it. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{fit_ols.png} \includegraphics[width=0.48\textwidth]{fit_analytical.png} \caption{Inverse of the time to equilibrium versus a combination of the standard deviation and the slope of the fitness function. Left: the weights are the results of an ordinary least-squares regression ($R^2 = 0.94$). The solid red line shows the results of the regression. Right: the weights come from analytical arguments. The solid red line is a linear regression ($R^2 = 0.67$).} \label{fig:ols} \end{center} \end{figure} \begin{table} \centering \begin{tabular}{c|cccc} Variable & Weight & Std. err. & p-value & Conf. Int. \\ \hline Const & 0.0101 & 0.008 & 0.244 & [-0.008, 0.029]\\ $\sigma_S$ & -0.0338 & 0.016 & 0.071 & [-0.071, 0.004]\\ $\gamma$ & -0.0226 & 0.010 & 0.059 & [-0.046, 0.001]\\ \end{tabular} \caption{Results of the ordinary least-squares regression of the fluctuations of the mixing parameter with the estimated parameters, the standard errors, the p-values of a t-test, and the 5\% - 95\% confidence intervals. The coefficient of determination is $R^2 = 0.74$.} \label{tab:ols_std} \end{table} The same procedure can be used to study the fluctuations around the equilibrium. This time the variable to be explained is the standard deviation of the mixing parameter over the last 40 generations of each run (see Table \ref{tab:ols_std}). In this case the regression is not very good, even if some variance is still explained by the variables. Strikingly, it is the signs of the coefficients which are important. For example, when the variance of the fitness function $\sigma_S^2$ is higher, the fluctuations around the equilibrium are actually smaller. Likewise, when the slope is higher (increasing towards 0), the fluctuations are smaller. This counter-intuitive result comes from the fact that the fluctuations also depend on the position of the equilibrium point, and they obviously vanish when $f_S \rightarrow 1$ or $f_S \rightarrow 0$. The conclusion of these two regressions is that the time to convergence and the fluctuations around the equilibrium are influenced by the stochastic behaviour of the fitness function as well as by its general shape. In physical terms, it means that the air traffic system as idealized by this model can be quite far from the equilibrium, due to 1) inadequate policies (the slope) and 2) the general stochasticity of the system. Hence policy makers should carefully assess whether any change in policy is likely to have an impact, given the level of randomness of the system. \section{Conclusions} \label{section:conclusions} In this paper, we have presented a stylized model of the allocation of flight plans. We used an agent-based model to simulate the behaviours of different air companies and the network manager. In the model, different types of air companies are competing for the best paths on the network of sectors and the best times of departure. Since the sectors are capacity-constrained, in some high-traffic conditions the companies might be forced to choose suboptimal flight plans, according to their strategies or cost functions. 
When different types of companies are competing in the same airspace, their relative satisfaction depends strongly on the environment -- the airspace, the waves of departure -- but also on the competition -- the fractions of the different types of companies. In a nutshell, we find that the companies perform better when they are competing against other types of companies, a ``niche'' mechanism leading to behaviours similar to those of the minority game \cite{minoritybook}. As a consequence, it is possible to re-interpret the model as an evolutionary game, through the use of the difference of satisfactions as a fitness function which sets the capacity of a type of company to expand its business by having more flights in the future. In this framework the populations are the types of companies using the ``rerouting'' or ``shifting'' strategy. The study of the shape of the fitness function shows the existence of a stable equilibrium point for the mixing parameter for nearly every value of the parameters. Interestingly, this equilibrium point is distinct from the point where the global satisfaction of the system as a whole is optimal. This indicates that the system left alone will not converge to the global optimum but to a different equilibrium point. In order to study in more detail the real dynamics of the system around the point of equilibrium, we iterated the model with a reproduction rule mimicking the fact that a higher satisfaction for an airline may be converted into better possibilities of expanding its business. We found that the dynamical point of equilibrium is different from the one derived from the root of the fitness function (static equilibrium). This is a purely dynamical effect which drives the point of equilibrium even further from the global optimum. Moreover, we found that both the convergence time to equilibrium and the fluctuations depend strongly not only on the slope of the fitness function but also on its variance. These results have several policy implications. On one hand, the fact that the root of the fitness function is distinct from the global optimum means that the regulators should step in. Indeed, issuing well-designed regulations could help the system to have a point of equilibrium closer to the global optimum. On the other hand, the dynamical effects could blur the picture. Indeed, long convergence times and high fluctuations combined with changing business conditions mean in practice that the system is always out of equilibrium. It is thus hard for the regulators to design incentives to drive the system to the optimum. A more precise setup and a more detailed calibration would be needed to definitely assert the potential consequences of regulations. The model presented here is an idealized version of reality, a simple, yet phenomenologically rich, toy-model. It nevertheless allows one to capture some high-level, emergent phenomena that are inaccessible to more complicated models due to their large number of parameters. The existence of a point of equilibrium, its behaviour in certain environments, and its dynamics certainly have a scope broader than the present model. Moreover, the model is not really specific to air traffic. In fact, it could be adapted to other situations, like packet propagation over the physical network of the Internet, with minimal effort.
As such, it can be viewed as a quite general model of transportation where entities need to send some material over a capacity-constrained network, thus competing for time and space. Two possible directions for extending the present work are the following. First, it is clear that a more detailed modelling would allow one to draw more precise conclusions about present and future scenarios in ATM. A first step has been taken in this direction with another version of the model \cite{tactical}, based on navigation points instead of sectors. The model is also coupled to a tactical part, which makes it possible to simulate conflict resolution by air traffic controllers. The code for this model is freely available \cite{code}. The second direction is model calibration. This is in general a challenging problem because data on strategic allocation are owned by companies and rarely available, especially when details on many airlines are needed. A potential way of overcoming this problem is through indirect calibration based on traffic data which contain the original flight plan and the last filed one. Data mining techniques could be useful to infer from these data the unobservable parameters of our model and therefore to calibrate it. \newpage
\section{Introduction} Random fields play a central role in modeling and analyzing spatially correlated data and have a wide range of applications. As a consequence, there has been increasing interest in studying them in probability and statistics. Consider a linear random field $X = \{X_{j,k}, \, (j,k)\in \mathbb{Z}^2\}$ defined on a probability space $(\Omega, {\mathcal F}, \mathbb P)$ by \begin{equation}\label{rf} X_{j,k}=\sum_{r, s\in \mathbb{Z}} a_{r,s}\xi_{j-r, k-s}, \end{equation} where $\{a_{r,s}, (r, s) \in \mathbb{Z}^2\}$ is a square summable sequence of constants and the innovations $\{\xi_{r,s}, (r, s) \in \mathbb{Z}^2\}$ and $\xi_0$ are i.i.d. random variables with $\mathbb{E}\xi_0=0$ and $\mathbb{E}\xi^2_0=1$. Under these conditions, $X_{j,k}$ in (\ref{rf}) is well-defined because the series on the right-hand side of \eqref{rf} converges in the $L^2(\Omega, \mathbb P)$-sense and almost surely. See Lemma \ref{Lem:Conv} in the Appendix. In the literature, there have been extensive studies on limit theorems and estimation problems for linear random fields. For example, Marinucci and Poghosyan (2001) and Paulauskas (2010) studied the asymptotics for linear random fields, including laws of large numbers, central limit theorems and invariance principles, by applying the Beveridge-Nelson decomposition method. Banys et al. (2010) applied ergodic theory to study the strong law of large numbers for linear random fields. Mallik and Woodroofe (2011) also established the central limit theorem for linear random fields, and their method does not rely on the Beveridge-Nelson decomposition. Under various settings, Tran (1990), Hallin et al. (2004a, 2004b), El Machkouri (2007, 2014), El Machkouri and Stoica (2010), and Wang and Woodroofe (2014) studied local linear regression, kernel density estimation and their asymptotics for linear random fields. Gu and Tran (2009) developed a fixed-design regression study for negatively associated random fields. However, few authors have studied moderate and large deviations for linear random fields. Davis and Hsing (1995), Mikosch and Samorodnitsky (2000), and Mikosch and Wintenberger (2013) established large deviation results for certain stationary sequences, including linear processes with short-range dependence. For linear processes which allow long range dependence, we mention that Djellout and Guillin (2001) proved moderate and large deviation results for linear processes with i.i.d. and bounded innovations; Djellout et al. (2006) studied a moderate deviation estimate for the empirical periodogram of a linear process; Wu and Zhao (2008) obtained moderate deviations for stationary causal processes and their main theorem can be applied to functionals of linear processes; and more recently, Peligrad et al. (2014a, b) established exact moderate and large deviation asymptotics for linear processes with independent innovations. The main purpose of this paper is to extend the method in Peligrad et al. (2014a, b) to establish exact moderate and large deviations for linear random fields as in (\ref{rf}). Let $\{\Gamma_n\}$ be a sequence of finite subsets of $\mathbb{Z}^2$ and denote the cardinality of $\Gamma_n$ by $|\Gamma_n|$. To be specific, we can take $\Gamma_n = [-n, n]^2 \cap \mathbb{Z}^2$, or $[1, n]^2 \cap \mathbb{Z}^2$, or more general rectangles. Define $S_n:= S_{\Gamma_n}:=\sum_{(j,k)\in\Gamma_n} X_{j,k}$.
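Before turning to the analytic representation of $S_n$, the following short Python sketch illustrates how realizations of $X_{j,k}$ and of the partial sum $S_n$ can be generated numerically by truncating the series \eqref{rf}. The coefficient family, the truncation level and the Gaussian innovations are illustrative choices only, not taken from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Truncate the series at |r|, |s| <= M.  The square-summable
# coefficients a_{r,s} = (1 + |r| + |s|)^(-beta) with beta = 1.5
# are an illustrative (long-memory type) choice.
M, beta = 30, 1.5
idx = np.arange(-M, M + 1)
a = (1.0 + np.abs(idx)[:, None] + np.abs(idx)[None, :]) ** (-beta)

n = 64  # observe the field on Gamma_n = [1, n]^2
xi = rng.standard_normal((n + 2 * M, n + 2 * M))  # innovations

X = np.empty((n, n))
for j in range(n):
    for k in range(n):
        # truncated sum over |r|, |s| <= M of a_{r,s} xi_{j-r,k-s};
        # the window of xi is reversed to match the index j - r.
        win = xi[j:j + 2 * M + 1, k:k + 2 * M + 1][::-1, ::-1]
        X[j, k] = np.sum(a * win)

S_n = X.sum()  # partial sum of X_{j,k} over Gamma_n
\end{verbatim}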
By Lemma \ref{Lem:Conv} in the Appendix, it can be written as \begin{equation}\label{Def:S} S_n=\sum_{r,s\in\mathbb{Z}} b_{n,r,s}\xi_{-r, -s}, \end{equation} where $b_{n,r,s}=\sum_{(j,k)\in\Gamma_n} a_{j+r, k+s}$. Let $\sigma_n^2 =\mathbb{E}S_n^2$. The main results of this paper, Theorems 2.1 - 2.3, quantify the roles of the moment and right-tail properties of $\xi_0$, the magnitude of the coefficients $\{a_{r,s}\}$, as well as the speed of convergence of $x_n\rightarrow \infty$, in the moderate and large deviation probabilities $\mathbb{P}\left(S_{n}\geq x_{n}\sigma_{n}\right)$. These results are useful for studying asymptotic properties and statistical inference of linear random fields. As examples, we show that our moderate and large deviation results can be applied to study nonparametric regression and to obtain the convergence rate in the law of the iterated logarithm for linear random fields. For simplicity of presentation, we focus on linear random fields indexed by $\mathbb{Z}^2$. The theorems presented in this paper can be easily extended to linear random fields $X_{\bf j}=\sum_{{\bf r}\in \mathbb{Z}^N} a_{\bf r}\xi_{\bf j-r}$ on $\mathbb{Z}^N$ with $N\ge 3$. In this paper we shall use the following notation. For any constant $ p\ge 1$, we define $\|a\|_p:=\left[\sum_{r,s\in \mathbb{Z}}|a_{r,s}|^p\right]^{1/p}$. Then $\|a\|_2<\infty$ by assumption, and $\|a\|_p$ may be finite for some values of $p<2$. Similarly, for a random variable $\xi$, we use $\|\xi\|_p$ to denote its $L^p(\mathbb P)$-norm for $p \ge 1$. Let $\Phi(x)$ be the distribution function of the standard normal random variable. For two sequences $\{a_n\}$ and $\{b_n\}$ of real numbers, $a_{n}\sim b_{n}$ means $a_{n}/b_{n}\rightarrow1$ as $n\rightarrow \infty$; $a_{n}\propto b_n$ means that $a_{n}/b_{n}\rightarrow C$ as $n\rightarrow \infty$ for some constant $C>0$; for positive sequences, the notation $a_{n}\ll b_{n}$ or $b_n\gg a_n$ replaces the Vinogradov symbol $O$ and means that $a_{n}/b_{n}$ is bounded; $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. The rest of this paper has the following structure. Section 2 gives the main results on moderate and large deviations for $S_n$ in \eqref{Def:S}. In Section 3 we apply the main results to nonparametric regression estimates and prove a Davis-Gut law of the iterated logarithm for linear random fields. The Appendix provides the existing results which are useful for proving the theorems in Section 2. \\ \textbf{Acknowledgement} The authors thank the referee and the Associate Editor for their careful reading of the manuscript and for their insightful comments, which have helped to improve the quality of this paper. The research of Yimin Xiao is partially supported by NSF grants DMS-1612885 and DMS-1607089. \section{Main results} Even though the double sum $S_n$ in \eqref{Def:S} can be written (in infinitely many ways) as a single weighted sum of infinitely many i.i.d. random variables indexed by non-negative integers,\footnote{For example, $$ S_n=\sum_{i=0, |r|=i, |s|\le i \; \hbox{\tiny or } |s|=i, |r|< i }^\infty b_{n,r,s}\xi_{-r, -s}, $$ where the number of terms for each index $i$ is finite and the summation of the terms with the same $i$ can be taken in any order.} the important role of the configuration of $\Gamma_n$ is usually hidden in such a representation and a partial order in $\mathbb{Z}^2$, which may not be natural for the problem under investigation, has to be imposed.
This makes it difficult to solve the problems for random fields satisfactorily by directly applying the results on a weighted sum of random variables indexed by one variable. Quite often new methods have to be developed. We refer to Chapter 1 of Klesov (2014) for further illustrations of the connections as well as the significant differences between limit theorems for random fields and for one-parameter stochastic processes. The objective of this section is to study moderate and large deviations for the partial sum $S_n$ in \eqref{Def:S} by extending the results for linear processes in Peligrad et al. (2014a, b) to linear random fields. We will need some notation. Define $$D_{nt}:=\sum_{r,s\in\mathbb{Z}}|b_{n,r,s}|^t; \qquad U_{nt}:=(D_{n2})^{-t/2}D_{nt}.$$ Then $\sigma^2_n :=\mathbb{E}(S_n^2)=D_{n2}$. To avoid degeneracy, we tacitly assume $\sigma_n > 0$ for every $n$. Let $\rho_n^2:=\max_{r,s\in\mathbb{Z}}b_{n,r,s}^2/\sigma^2_n$. We will assume that $\rho_n^2 \to 0$, which means that the contribution of any single coefficient $b_{n,r,s}$ is negligible compared with $\sigma^2_n$. We remark that the magnitudes of $b_{n,r,s}^2$ and $D_{nt}$ depend on the coefficients $\{a_{r,s}, (r,s)\in \mathbb Z^2\}$ and the configuration of $\Gamma_n$. An interesting case is when $\{a_{r,s}\}$ is isotropic and $\Gamma_n = [1, n]^2\cap \mathbb Z^2$. See (\ref{la}) and Lemma \ref{Dorder} below for details. More generally, the case when $\{a_{r,s}\}$ is anisotropic (i.e., $|a_{r, s}|$ depends on $r$ and $s$ at different rates) and $\Gamma_n$ is a rectangle in $\mathbb Z^2$ can also be considered. Our first theorem is the following moderate deviation result. \begin{theorem}\label{ModerateD} Assume that the random variable $\xi_0$ satisfies $\|\xi_0\|_p<\infty$ for some $p>2$ and $\rho_n^2\rightarrow 0$ as $n\rightarrow \infty$. Then for $x_n\ge 0$ with $x_n^2\le 2\ln (U_{np}^{-1})$, the following moderate deviation results hold: \begin{equation}\label{Eq:Th21a} \mathbb{P}\left( S_{n}\geq x_{n}\sigma_{n}\right) =(1-\Phi(x_{n}))(1+o(1))\text{ as }n\rightarrow\infty; \end{equation} \begin{equation}\label{Eq:Th21b} \mathbb{P}\left( S_{n}\leq -x_{n}\sigma_{n}\right) =(1-\Phi(x_{n}))(1+o(1))\text{ as }n\rightarrow\infty. \end{equation} \end{theorem} \begin{proof} We only need to prove the statement (\ref{Eq:Th21a}). The proof is a modification of that of Corollary 3, part (iii), of Peligrad et al. (2014a), which is given in the Supplementary Material, Peligrad et al. (2014b). The main idea is to apply Theorem \ref{frolov} in the Appendix for triangular arrays. For this purpose, we decompose the partial sum $S_n$ as $S_n = M_n + R_n$, where $M_n=\sum_{|r|\le k_n} \sum_{|s|\le k_n} b_{n,r,s}\xi_{-r, -s}$ for some integer $k_n$ which will be chosen later and $R_n$ is the remainder $$ R_n= \bigg(\sum_{|r|> k_n} \sum_{s\in \mathbb{Z}} +\sum_{|r|\le k_n}\sum_{|s|> k_n} \bigg)b_{n,r,s}\xi_{-r, -s}.$$ By the Cauchy-Schwarz inequality, we have \begin{equation*} b_{n,r,s}^{2}\leq |\Gamma_n|\sum_{(j,k)\in \Gamma_n} a^2_{j+r, k+s} \end{equation*} and then \begin{equation} \label{Eq:tailsum1} \sum_{r, s\in \mathbb{Z}}b_{n,r,s}^{2} \le |\Gamma_n|\sum_{r, s\in \mathbb{Z}} \sum_{(j,k)\in \Gamma_n} a^2_{j+r, k+s} = |\Gamma_n|^2\sum_{r, s\in \mathbb{Z}}a_{r,s}^{2}. \end{equation} In the above, we have applied Fubini's theorem to change the order of summation. Therefore, for every integer $n\ge 1$, we have $\sum_{r, s\in \mathbb{Z}} b_{n,r,s}^{2}<\infty$, which yields $\sum_{r, s\in \mathbb{Z}} |b_{n,r,s}|^{p}<\infty$ since $p>2$.
By applying Rosenthal's inequality (cf. de la Pe\~na and Gin\'e, 1999) to $R_n$, we see that there is a constant $C_p$ such that \begin{align*} \mathbb{E} \big(|R_{n}|^{p} \big)&\leq C_{p} \Bigg[\bigg( \Big(\sum_{|r|> k_n} \sum_{s\in \mathbb{Z}} +\sum_{|r|\le k_n}\sum_{|s|> k_n} \Big)b_{n,r,s}^{2}\bigg)^{p/2}\\ & \qquad \qquad +\mathbb{E} (|\xi_{0}|^{p})\bigg(\sum_{|r|> k_n}\sum_{s\in \mathbb{Z}} +\sum_{|r|\le k_n}\sum_{|s|> k_n} \bigg) |b_{n,r,s}|^{p}\Bigg]. \end{align*} Now for each positive integer $n$, we select an integer $k_{n}$ large enough such that \[ \bigg(\sum_{|r|> k_n}\sum_{s\in \mathbb{Z}} +\sum_{|r|\le k_n}\sum_{|s|> k_n} \bigg) b_{n,r,s}^{2} \leq \|\xi_{0}\|_{p}^{2} \bigg(\sum_{r,s\in \mathbb{Z}} |b_{n,r,s}|^{p} \bigg)^{2/p}. \] This is possible because of (\ref{Eq:tailsum1}) and the fact that $\sum_{r, s \in \mathbb Z} a_{r,s}^2< \infty$. With the above selection of $k_n$, we obtain \begin{equation}\label{p-rest} \mathbb{E} \big(|R_{n}|^{p} \big)\leq 2C_{p}\mathbb{E}(|\xi_{0}|^{p})\sum_{r,s\in \mathbb{Z}} |b_{n,r,s}|^{p}. \end{equation} Similarly, we can verify that $M_n$, and thus $S_n$, also has finite moments of order $p$. Since $M_n$ is the sum of $(2k_{n}+1)^2$ independent random variables, we can view $S_{n} = M_n + R_n$ as the sum of $(2k_{n}+1)^2+1$ independent random variables and apply Theorem \ref{frolov} to prove (\ref{Eq:Th21a}). As in the proof of part (iii) of Corollary 2.3 in Peligrad et al. (2014b), we use (\ref{p-rest}) to derive \begin{equation*} \begin{split} L_{np}&= \frac1 {\sigma_n^p} \bigg(\sum_{|r|\le k_n} \sum_{|s|\le k_n} \mathbb E \big[(b_{n,r,s}\xi_{-r, -s})^p I(b_{n,r,s}\xi_{-r, -s}\ge 0)\big] + \mathbb E \big[R_n^p I(R_n\ge 0)\big]\bigg)\\ &\le (2C_p + 1)\mathbb{E}(|\xi_{0}|^{p})U_{np}. \end{split} \end{equation*} Since $\rho_n^2 \to 0$ and $p > 2$ imply $U_{np} \to 0$ as $n\to \infty$, we have $L_{np}\rightarrow0$. Similarly, for all $x \ge 0$ such that $ x^2 \le 2 \ln (U_{np}^{-1})$, we can verify that $\Lambda_{n}(x^{4},x^{5}, \varepsilon)\rightarrow0$ for any $\varepsilon>0$, and $x^{2} -2\ln(L_{np}^{-1})-(p-1)\ln\ln(L_{np}^{-1})\rightarrow-\infty$, as $n \to \infty$. Hence the conditions of Theorem \ref{frolov} are satisfied, and (\ref{Eq:Th21a}) follows. \end{proof} \begin{remark}\label{remark1} The condition $\rho_n\rightarrow 0$ in Theorem \ref{ModerateD} and in Theorems \ref{LD} and \ref{MDLD} below can be replaced by suitable conditions on $|\Gamma_n|$ and $\sigma_n^2$ that are easier to verify. By H\"older's inequality, we have $\rho_n\le \frac{\|a\|_u|\Gamma_n|^{1/v}} {\sigma_n}$, where $1\le u\le 2$ and $v$ is the conjugate of $u$, $1/u+1/v=1$. Therefore, we can replace the condition $\rho_n\rightarrow 0$ by $\|a\|_u<\infty$ and $\frac{|\Gamma_n|^{1/v}}{\sigma_n}\rightarrow 0$. In particular, if $\|a\|_1 < \infty$, which is the short range dependence case, then $\rho_n\le \|a\|_1/\sigma_n$. In this case, we can replace the condition $\rho_n\rightarrow 0$ by $\sigma_n\rightarrow \infty$ (as a consequence, we also have $|\Gamma_n|\rightarrow \infty$). See Mallik and Woodroofe (2011) for more information on bounds for $\rho_n$. If $\Gamma_n$ is a union of finitely many, say $l$, discrete rectangles, then by Proposition 2 of the same paper, $\rho_n\le 20\left(\frac{\sqrt{l} \|a\|_2}{\sigma_n}\right)^{1/5}+\frac{8\sqrt{l}\|a\|_2}{\sigma_n}$. Therefore, in this case, we can replace the condition $\rho_n\rightarrow 0$ by $\|a\|_2<\infty$ and $\sigma_n\rightarrow \infty$.
\end{remark} Next, we study precise large deviations for the partial sums $S_n$ defined in (\ref{Def:S}). We will focus only on the case when $\xi_0$ has a right regularly varying tail (see Remark 2.2 below for information on other interesting cases). More precisely, we assume that there is a constant $t>2$ such that \begin{equation}\label{tail1} \mathbb{P}(\xi_{0}\geq x)=\frac{h(x)}{x^{t}}, \quad \hbox{ as }\ x\rightarrow\infty. \end{equation} Here $h(x)$ is a slowly varying function at infinity. Namely, $h(x)$ is a measurable positive function satisfying $\lim_{x\rightarrow\infty}h(\lambda x)/h(x)=1$ for all constants $\lambda>0$. Bingham et al. (1987) and Seneta (1976) provide systematic accounts of regularly varying functions. For the reader's convenience, we collect some useful properties of slowly varying functions in Lemma \ref{Karamata} in the Appendix. Notice that condition \eqref{tail1} is an assumption on the right tail of $\xi_0$. The left tail of $\xi_0$ can be arbitrary. In particular, it implies that $\xi_0$ does not have $p$-th moments for $p>t$, and it may or may not have $p$-th moments for $p<t$. The notion of regular variation as defined in (\ref{tail1}) is closely related to large deviation results for sums of random variables or processes. Such results have been proved by A.V. Nagaev (1969a, b) and S.V. Nagaev (1979) for partial sums of i.i.d. random variables, and have been extended to partial sums of certain stationary sequences by Davis and Hsing (1995), Mikosch and Samorodnitsky (2000), Mikosch and Wintenberger (2013), and Peligrad et al. (2014a, b). We now comment briefly on the connections and differences between the results and methods in the aforementioned references and those in the present paper. The approach of Davis and Hsing (1995) is based on weak convergence of point processes and the link between the large deviation probability and the asymptotic behavior of extremes. As shown in Example 5.5 in Davis and Hsing (1995), their results are applicable to a class of linear processes with short-range dependence. Moreover, as pointed out by Mikosch and Wintenberger (2013, p.853), the method of Davis and Hsing (1995) could not be extended to the case of $t \ge 2$. Mikosch and Samorodnitsky (2000) studied precise large deviation results for a class of linear processes with a negative drift. More specifically, they consider $$ X_n = -\mu + \sum_{j \in \mathbb Z} \varphi_{n-j} \varepsilon_j, \ \ \ n \in {\mathbb Z}, $$ where $\mu > 0$ is a constant, $\{\varepsilon_j\}$ are i.i.d. innovations that satisfy a two-sided version of \eqref{tail1} and the coefficients $\{\varphi_{j}\}$ satisfy $\sum_{j \in \mathbb Z}|j \varphi_j| < \infty$. In particular, the process $\{X_n, n \in \mathbb Z\}$ is short-range dependent. Mikosch and Wintenberger (2013) established precise large deviation results for a stationary sequence $\{X_n, n \in \mathbb Z\}$ that satisfies the following (and some other technical) conditions: (i) all finite dimensional distributions of $\{X_n, n \in \mathbb Z\}$ are regularly varying with the same index $\alpha$; and (ii) the anti-clustering conditions. See Mikosch and Wintenberger (2013) for precise descriptions of these conditions. We remark that, even though their methods and results cover a wide class of stationary sequences, the condition (i) is much stronger than \eqref{tail1} and is not easy to verify for a general linear process.
Moreover, as pointed out by Mikosch and Wintenberger (2013, page 856), the condition (ii) excludes stationary sequences with ``long range dependencies of extremes''. We believe that it would be interesting from both theoretical and application viewpoints to extend the large deviation results in Davis and Hsing (1995), Mikosch and Samorodnitsky (2000), Mikosch and Wintenberger (2013), and Peligrad et al. (2014a, b) to stationary random fields. The present paper is one step in this direction. More specifically, we follow the approach of Peligrad et al. (2014a, b) and prove the following precise large deviation theorem, which is applicable to linear random fields with long-range dependence. \begin{theorem}\label{LD} Assume that $\{b_{n,r,s}, r, s \in \mathbb Z\}$ is a sequence of positive numbers with $\rho_n^2\rightarrow 0$ as $n\rightarrow \infty$ and that $\xi_0$ satisfies condition (\ref{tail1}) for a certain constant $t>2$. For $x=x_{n}\geq C_t[\ln(U_{nt}^{-1})]^{1/2}$, where $C_t>e^{t/2}(t+2)/\sqrt{2}$ is a constant, we have \begin{equation}\label{Eq:LD2-2} \begin{split} \mathbb{P} \big(S_{n}\geq x \big)&= \big(1+o(1)\big)\sum_{r,s\in\mathbb{Z}} \mathbb{P} \big(b_{n,r,s}\xi_{-r, -s}\geq x \big)\\ &= \big(1+o(1)\big) x^{-t} \sum_{r,s\in\mathbb{Z}} b_{n,r,s}^t h\Big(\frac{x}{b_{n,r,s}}\Big),\ \ \ \text{ as }\ n\rightarrow\infty. \end{split} \end{equation} \end{theorem} \begin{proof} Since the second equality in (\ref{Eq:LD2-2}) follows directly from the first and (\ref{tail1}), we only need to prove the first equality. The proof is essentially a modification of that of Theorem 2.2 in Peligrad et al. (2014b), obtained by replacing the quantities $c_{ni}$ there by $b_{n,r,s}$, and the sum $\sum_{i=1}^{ k_n}$ by the double sum $\sum_{r,s\in \mathbb{Z}}$. A somewhat new ingredient of the proof is a new version of the Fuk-Nagaev inequality for double sums of infinitely many random variables, which is stated as Theorem \ref{FN} in the Appendix. Hence we will only sketch the main steps of the proof. Without loss of generality, we normalize the partial sum $S_n$ by its variance and assume \begin{equation} \sum_{r,s\in\mathbb{Z}}b_{n,r,s}^2=1\ \text{ and }\ \rho_n^2=\max_{r,s \in\mathbb{Z}}b_{n,r,s}^{2}\rightarrow0 \ \ \text{ as } n\rightarrow\infty. \label{sum1} \end{equation} Then, for any constant $t > 2$, we have $U_{nt} = D_{nt}$ and $D_{nt} = \sum_{r,s\in\mathbb{Z}} b_{n,r,s}^t \leq\max_{r,s \in\mathbb{Z}} b_{n,r,s}^{t-2} \rightarrow0$, which implies that $D_{nt}^{-1}\rightarrow\infty.$ Moreover, the sequence $S_n$ is stochastically bounded (i.e., $\lim_{K\rightarrow\infty}\sup _{n}\mathbb{P}(|S_{n}|>K)=0$) since $\mathbb{E} (S_n^2)=1$. By following the proofs of Lemma 4.1 and Proposition 4.1 in Peligrad et al.
(2014b), for any $0<\eta<1$ and $\varepsilon>0$ such that $1-\eta>\varepsilon$, and any $x_n \to \infty$, we have \begin{equation} \label{ineqprop} \begin{split} &\bigg|\mathbb{P}(S_{n}\geq x_{n})-\mathbb{P}(S_{n}^{(\varepsilon x_{n})}\geq x_{n})-\sum_{r,s\in\mathbb{Z}}\mathbb{P}(b_{n,r,s}\xi_{-r, -s}\geq(1-\eta)x_{n})\bigg|\\ & \leq o(1)\sum_{r,s\in\mathbb{Z}}\mathbb{P}(b_{n,r,s}\xi_{-r, -s}\geq\varepsilon x_{n}) \\ &\qquad + \sum_{r,s\in\mathbb{Z}}\mathbb{P}((1-\eta)x_{n}\leq b_{n,r,s}\xi_{-r, -s} <(1+\eta)x_{n}), \end{split} \end{equation} where $S_{n}^{(\varepsilon x_{n})}=\sum_{r,s\in\mathbb{Z}}b_{n,r,s} \xi_{-r, -s}I(b_{n,r,s}\xi_{-r, -s}<\varepsilon x_{n})$ and $o(1)$ depends on the sequence $x_{n}$, $\eta$ and $\varepsilon$ and converges to $0$ as $n\rightarrow\infty.$ See also Lemma 4.2 and Remark 4.1 in Peligrad et al. (2014b) for sums of infinitely many random variables. As in the proof of Theorem 2.2 in Peligrad et al. (2014b), by analyzing the two terms on the right-hand side and the last term on the left-hand side of \eqref{ineqprop}, we derive that for any fixed $\varepsilon>0$, \begin{equation} \label{ineqLMD} \mathbb{P}(S_{n}\geq x)= \big(1+o(1)\big) \sum_{r,s\in\mathbb{Z}}\mathbb{P}(b_{n,r,s} \xi_{-r, -s}\geq x)+\mathbb{P}(S_{n}^{(\varepsilon x)}\geq x) \end{equation} as $n\rightarrow\infty$. It remains to show that the term $\mathbb{P}(S_{n}^{(\varepsilon x)} \geq x)$ is negligible compared with the first term in (\ref{ineqLMD}). To this end, we apply Theorem \ref{FN} to the sequence $\{b_{n,r,s} \xi_{-r, -s}, \, r, s \in \mathbb Z\}$ with $y = \varepsilon x$ to derive that for any constant $m >t$, \begin{equation}\label{toshow_1} \mathbb{P} \big(S_{n}^{(\varepsilon x)}\geq x \big)\leq \exp\bigg(-\frac{\alpha^{2}x^{2}} {2e^{m}} \bigg)+ \bigg( \frac{A_{n}(m;0,\varepsilon x)} {\beta\varepsilon^{m-1}x^{m}}\bigg)^{\beta/\varepsilon}, \end{equation} where $\beta=m/(m+2)$, $\alpha=1-\beta=2/(m+2)$, and we have used the fact that $B^2_n(- \infty, \varepsilon x) \le 1$, which follows from (\ref{sum1}). Then, following the proof of Theorem 2.2 in Peligrad et al. (2014b), we can show that, for all $x=x_{n}\geq C_t[\ln(U_{nt}^{-1})]^{1/2}$, where $C_t>e^{t/2}(t+2)/\sqrt{2}$ is a constant, we have \begin{equation} \label{toshow} \begin{split} \exp\biggl(-\frac{\alpha^{2}x^{2}}{2e^{m}}\biggr) &+\left(\frac{A_{n}(m;0,\varepsilon x)}{\beta\varepsilon^{m-1}x^{m}}\right) ^{\beta/\varepsilon}=o(1)\sum_{r,s\in\mathbb{Z}} \frac{b_{n,r,s}^{t}}{x^{t}}h\bigg(\frac{x}{b_{n,r,s}}\bigg) \\ &=o(1)\sum_{r,s\in\mathbb{Z}}\mathbb{P}(b_{n,r,s} \xi_{-r, -s}\geq x)\ \text{ as } \ n\rightarrow\infty. \end{split} \end{equation} In particular, we use the observation \begin{equation} D_{nt}=\sum_{r,s\in \mathbb{Z}}b_{n,r,s}^{2\eta}b_{n,r,s}^{t-2\eta }\leq \bigg(\sum_{r,s\in \mathbb{Z}}b_{n,r,s}^{2}\bigg)^{\eta} \bigg(\sum_{r,s\in \mathbb{Z}}b_{n,r,s} ^{(t-2\eta)/(1-\eta)}\bigg)^{1-\eta}. \label{rel-cnt} \end{equation} Here we have omitted the details of the derivation of (\ref{toshow}) as it is very similar to the proof in Peligrad et al. (2014b). Finally, by combining (\ref{ineqLMD}), (\ref{toshow}) and (\ref{toshow_1}), we obtain the first equality in (\ref{Eq:LD2-2}). This completes the proof of Theorem \ref{LD}.
\end{proof} \begin{remark} Besides the case of regularly varying tails such as \eqref{tail1}, large deviation results for sums of independent random variables or linear processes have been studied by several authors under the following two conditions, respectively: (a) $\xi_0$ satisfies the Cram\'er condition: there exists a constant $h_0 > 0$ such that $\mathbb E(e^{h \xi_0}) < \infty$ for $|h|\le h_0$; (b) $\xi_0$ satisfies the Linnik condition: there is a constant $\gamma \in (0, 1)$ such that $\mathbb E(e^{| \xi_0|^\gamma}) < \infty$. See Nagaev (1979), Jiang et al. (1995), Saulis and Statulevi\v{c}ius (2000), Djellout and Guillin (2001), Ghosh and Samorodnitsky (2008), Li et al. (2009), among others. In light of Theorem \ref{LD} and the above discussions, we think it would be interesting to study the following problems: \begin{itemize} \item Study precise large deviation problems for linear random fields under the Cram\'er and Linnik conditions. \item Extend the methods of Davis and Hsing (1995), Mikosch and Samorodnitsky (2000), and Mikosch and Wintenberger (2013) to establish large deviation results for stationary random fields. \end{itemize} \end{remark} Notice that the tail conditions of Theorems \ref{ModerateD} and \ref{LD} are different, since one involves the moments and the other just involves the right tail behavior. Putting these conditions together, we obtain the following asymptotics of the tail probabilities over all $x_n\ge c$ for some $c>0$. This theorem is a natural extension of the uniform moderate and large deviations for sums of i.i.d. random variables (cf. Theorem 1.9 in S.V. Nagaev, 1979). \begin{theorem}\label{MDLD} \label{mix} Assume that $\xi_0$ satisfies $\|\xi_0\|_p<\infty$ for some $p>2$ and the right tail condition (\ref{tail1}) for some constant $t>2$. Assume also that $b_{n,r,s}>0$ and $\rho_n^2 \rightarrow 0$ as $n\rightarrow \infty$. Let $(x_{n})_{n\ge1}$ be any sequence such that for some $c>0$ we have $x_{n}\geq c$ for all $n$. Then, as $n\rightarrow\infty$, \begin{equation} \mathbb{P}\left( S_{n}\geq x_{n}\sigma_{n}\right) =(1+o(1))\bigg[x_n^{-t} \sum_{r,s\in\mathbb{Z}} b_{n,r,s}^t h\Big(\frac{x_n}{b_{n,r,s}}\Big)+1-\Phi(x_{n})\bigg]. \label{MD+LD} \end{equation} \end{theorem} \begin{proof} The proof is a modification of those of Theorem 2.1 and Corollary 2.3, part (i), in Peligrad et al. (2014b). We sketch the proof here for completeness. Without loss of generality we may assume $2<p<t$. Let $x=x_{n}\rightarrow \infty.$ For simplicity, we assume (\ref{sum1}). Under the conditions of this theorem, as in the proof of Theorem \ref{LD}, (\ref{ineqLMD}) holds. Denote \[ X_{n, r,s}^{\prime}=b_{n,r,s}\xi_{-r,-s}I(b_{n,r,s}\xi_{-r,-s}\leq\varepsilon x). \] We now apply Lemma \ref{frolov-trunc} to the second term on the right-hand side of (\ref{ineqLMD}). To this end, we decompose the sum $S_n^{(\varepsilon x)}$ as a finite sum, i.e., the sum $\sum_{|r|\le k_n} \sum_{|s|\le k_n} X_{n, r,s}^{\prime}$ with $(2k_n+1)^2$ terms for some $k_n$, plus the remainder $$ R_n'= \bigg(\sum_{|r|> k_n}\sum_{s\in \mathbb{Z}} + \sum_{|r|\le k_n}\sum_{|s|> k_n} \bigg) X_{n, r,s}^{\prime}. $$ By Rosenthal's inequality (cf. de la Pe\~na and Gin\'e, 1999), it is easy to derive \[ \mathbb{E}|R_{n}^{\prime}|^{p}\leq2C_{p}^{\prime}\mathbb{E}|\xi_{0}|^{p}\sum_{r,s}|b_{n,r,s}|^{p} \] for some constant $C_p'$.
Consequently, the quantity $L_{np}$ in Lemma \ref{frolov-trunc} is bounded by \[ L_{np}\leq(2C_{p}^{\prime}+1)\mathbb{E}|\xi_{0}|^{p} \sum_{r,s}|b_{n,r,s}|^{p}=(2C_{p}^{\prime}+1)D_{np}\mathbb{E}|\xi_{0}|^{p}. \] See also the proof of Corollary 2.3, part (i), in Peligrad et al. (2014b). Then, by Lemma \ref{frolov-trunc}, if $x^{2}\leq c\ln\big[((2C_{p}^{\prime}+1)D_{np}\mathbb{E}|\xi_{0}|^{p})^{-1}\big]$ for $c<1/\varepsilon$, we have $x^{2}\leq c\ln(L_{np}^{-1})$ for $c<1/\varepsilon$ and \begin{equation}\label{truncnorm} \mathbb{P}(S_n^{(\varepsilon x)}\geq x) =(1-\Phi(x))(1+o(1)). \end{equation} Notice that (\ref{truncnorm}) also holds for $x^{2}\leq c\ln(D_{np}^{-1})$ for any $c<1/\varepsilon$ and large enough $n$, since $D_{np}\rightarrow0$. Recall that $2<p<t$. By applying (\ref{rel-cnt}) with $\eta=(t-p)/(t-2)$, we have \begin{equation*} D_{nt}\ll D_{np}\ll(D_{nt})^{(p-2)/(t-2)}. \end{equation*} Then (\ref{MD+LD}) holds for $0<x\leq C[\ln(D_{nt}^{-1})]^{1/2}$ with $C$ an arbitrary positive number. On the other hand, there is a constant $c_{1}>0$ such that for $x>c_{1}[\ln(D_{nt}^{-1})]^{1/2},$ we also have \[ \mathbb{P}\left( S_{n}\geq x\right) =(1+o(1))\sum_{r,s\in \mathbb{Z}}\mathbb{P}(b_{n,r,s}\xi_{0}\geq x) \] and \[ 1-\Phi(x)=o\bigg(\sum_{r,s\in \mathbb{Z}}\mathbb{P}(b_{n,r,s}\xi_{0}\geq x)\bigg). \] See also the proof of Theorem 2.1 in Peligrad et al. (2014b). By choosing $c_{1}<C$, (\ref{MD+LD}) holds for all $x=x_n\rightarrow \infty$. If the sequence $x_n$ is bounded, then by Theorem \ref{ModerateD} we have the moderate deviation result. Since $x_n\ge c>0$, the second term on the right-hand side of (\ref{MD+LD}) dominates as $n\rightarrow \infty$. This finishes the proof. \end{proof} \section{Applications} In this section, we provide two applications of the main results in Section 2, one to nonparametric regression and the other to the Davis-Gut law for linear random fields. \subsection{Nonparametric regression} We first provide an application of the deviation results to nonparametric regression estimation. Consider the following regression model $$ Y_{n,j,k}=g(z_{n,j,k})+X_{n,j,k}, \quad (j,k)\in \Gamma_n, $$ where $g$ is a bounded continuous function on $\mathbb{R}^d$, the $z_{n, j,k}$'s are the fixed design points over $\Gamma_n \subseteq \mathbb{Z}^2$ with values in a compact subset of $\mathbb{R}^d$, and $X_{n,j,k}= \sum_{r,s\in\mathbb{Z}}a_{r,s}\xi_{n, j-r,k-s}$ is a linear random field over $\mathbb{Z}^2$ with mean zero i.i.d. innovations $\xi_{n,r,s}$. Regression models with independent or weakly dependent random field errors have been studied by several authors including El Machkouri (2007), El Machkouri and Stoica (2010), and Hallin et al. (2004a). For related papers that deal with density estimation for random fields, see for example Tran (1990) and Hallin et al. (2004b). An estimator for the function $g$ on the basis of the sample pairs $(z_{n,j,k}, Y_{n,j,k})$, $(j,k)\in \Gamma_n$, is the following general linear smoother: \begin{equation*}\label{Eq:K} g_n(z)=\sum_{(j,k)\in\Gamma_n} w_{n,j,k}(z)Y_{n, j,k}, \end{equation*} where the $w_{n,j,k}(\cdot)$'s are weight functions on $\mathbb R^d$. In the particular case of kernel regression estimation, $w_{n,j,k}(z)$ has the form $$ w_{n,j,k}(z)=\frac{K(\frac{z-z_{n,j,k}}{h_n})} {\sum_{(j',k')\in\Gamma_n} K(\frac{z-z_{n,j',k'}}{h_n})}, $$ where $K: \mathbb{R}^d\rightarrow \mathbb{R}^+$ is a kernel function and $h_n$ is a sequence of bandwidths which goes to zero as $|\Gamma_n| \rightarrow \infty$.
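As a small illustration of this estimator, the following Python sketch implements the fixed-design kernel smoother on $\Gamma_n=[1,n]^2$ with $d=2$ and a Gaussian kernel. The regression function, the bandwidth and the i.i.d. errors used below are illustrative choices, not taken from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Fixed design on Gamma_n = [1, n]^2, rescaled into [0, 1]^2.
# g, h and the error model are illustrative placeholders.
n, h = 30, 0.15
grid = (np.arange(1, n + 1) - 0.5) / n       # design points z_{n,j,k}
zx, zy = np.meshgrid(grid, grid, indexing="ij")
g = np.sin(2 * np.pi * zx) * np.cos(np.pi * zy)
Y = g + 0.3 * rng.standard_normal((n, n))    # i.i.d. errors for brevity

def g_hat(z):
    """Kernel estimate g_n(z): normalized Gaussian kernel weights."""
    K = np.exp(-((zx - z[0]) ** 2 + (zy - z[1]) ** 2) / (2.0 * h ** 2))
    w = K / K.sum()                          # weights w_{n,j,k}(z) sum to 1
    return float(np.sum(w * Y))

print(g_hat((0.25, 0.25)))   # compare with g(0.25, 0.25) ~ 0.707
\end{verbatim}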
Gu and Tran (2009) developed the central limit theorem and bias analysis for the fixed-design regression estimate $g_n(z)$ in the case when $X_{n,j,k}=\xi_{n,j,k}$ for all $(j,k)\in \Gamma_n$. Theorems 2.1-2.3 in Section 2 can be applied to study, for every $z \in {\mathbb R}^d$, the speed of the a.s. convergence of $g_n(z) -\mathbb E g_n(z) \to 0$, or $g_n(z) \to g(z)$ if the weight functions are chosen to satisfy the condition $\sum_{(j,k)\in\Gamma_n} w_{n,j,k}(z) = 1$, which is the case in kernel regression estimation. Let $S_n:=g_n(z)-\mathbb{E}g_n(z)$. Then it can be written as \begin{equation*}\label{res} S_n = \sum_{(j,k)\in\Gamma_n} w_{n,j,k}(z)X_{n, j,k} =\sum_{r,s\in\mathbb{Z}} b_{n,r,s}\xi_{n,-r, -s}, \end{equation*} where $b_{n,r,s}=\sum_{(j,k)\in\Gamma_n} w_{n,j,k}(z) a_{j+r, k+s}$. We choose the weight functions $w_{n,j,k}(z)$, the coefficients $\{a_{r,s}\}$ and the random variable $\xi_0$ to satisfy the conditions of Theorem \ref{ModerateD}. Then for $x_n = \sqrt{ 2\ln (U_{np}^{-1})}$ (see Section 2 for the definition of $U_{np}$), we have \begin{align*} \mathbb{P}\big( |S_{n}|\geq x_{n}\sigma_{n}\big) = 2(1-\Phi(x_{n})) (1+o(1))\quad \text{ as }n\rightarrow\infty, \end{align*} where $\sigma_n^2=\sum_{r, s\in\mathbb{Z}}b_{n,r,s}^2={\rm Var}(g_n(z))$. If $U_{np} \to 0$ fast enough that $\sum_n(1-\Phi(x_{n}))<\infty$, then we can derive, by using the Borel-Cantelli lemma, the following upper bound on the speed of convergence of $g_n(z)-\mathbb{E}g_n(z)$: \begin{equation}\label{LIL1} \limsup_{n\rightarrow \infty}\frac{|g_n(z)-\mathbb{E}g_n(z)|} {\sigma_n\sqrt{2\ln (U_{np}^{-1}) }} \le 1, \qquad \hbox{ a.s.} \end{equation} Under further conditions, we may put (\ref{LIL1}) in a more familiar form. Since $p > 2$, we have $U_{np} \le |\rho_n|^{p-2}$. If we have information on the rate at which $\rho_n \to 0$, say, $|\rho_n| \le (\ln n)^{-1/(p-2)}$, then we obtain an upper bound which coincides with the law of the iterated logarithm: \begin{equation}\label{LIL2} \limsup_{n\rightarrow \infty}\frac{|g_n(z)-\mathbb{E}g_n(z)|} {\sigma_n\sqrt{2\ln \ln n}} \le1, \qquad \hbox{ a.s.} \end{equation} In the particular case where $X_{n,j,k}=\xi_{n,j,k}$, $b_{n,r,s} =w_{n,r,s}(z)$ if $(r,s)\in \Gamma_n$ and $b_{n,r,s}=0$ otherwise, we have $D_{nt}=\sum_{(r,s)\in \Gamma_n} w_{n,r,s}^t(z)$, $U_{nt} =(D_{n2})^{-t/2}D_{nt}$ and $$ \sigma_n^2=\mathbb{E}(S_n^2)=D_{n2}=\sum_{(r,s)\in \Gamma_n} w_{n,r,s}^2(z). $$ Hence, under certain conditions on the weight functions $w_{n,j,k}(z)$, we can obtain from (\ref{LIL1}) or (\ref{LIL2}) the speed of convergence of $g_n(z)- \mathbb{E}g_n(z) \to 0$, which complements the results in Gu and Tran (2009). \subsection{A Davis-Gut law of the iterated logarithm} Now we apply the moderate deviation result, Theorem \ref{ModerateD}, to prove a Davis-Gut type law for linear random fields. See Davis (1968), Gut (1980), Li (1991) and Li and Rosalsky (2007) for the Davis-Gut laws for partial sums of i.i.d. random variables. The Davis-Gut type law for linear processes with short memory (short-range dependence) was developed in Chen and Wang (2008).
For a linear random field defined in (\ref{rf}), we consider the partial sum (\ref{Def:S}) with $\Gamma_n=[1,n]^2\cap \mathbb{Z}^2$, and assume the following condition: \begin{itemize} \item[(DG)]\, $ \|\xi_0\|_p<\infty$ for some $p>2$ and $\{a_{r,s}\}$ satisfies either \begin{align}\label{con1} A:=\sum_{r,s\in\mathbb{Z}}|a_{r,s}|<\infty, \;\;a:=\sum_{r,s\in\mathbb{Z}}a_{r,s}\ne 0, \end{align} or \begin{equation}\label{la} a_{r,s}=(|r|+|s|)^{-\beta}L(|r|+|s|) b\Big(\frac{r} {\sqrt{r^2+s^2}},\, \frac{ s} {\sqrt{r^2+s^2}}\Big) \end{equation} for $r\ne 0$ or $s\ne 0$, where $\beta \in(1,2)$, $L(\cdot)$ is a slowly varying function at infinity, and $b(\cdot, \cdot)$ is a bounded piecewise continuous function defined on the unit circle. \end{itemize} Under the condition (\ref{la}), $\sum_{r,s\in\mathbb{Z}}|a_{r,s}|=\infty$. In the literature, the random field (\ref{rf}) is then said to have long memory or long range dependence. The following lemma gives the order of the quantity $D_{np}$ (see the definition in Section 2) under the condition (\ref{la}). Recall that $a_{n}\propto b_n$ means that $a_{n}/b_{n}\rightarrow C$ as $n\rightarrow \infty$ for some constant $C>0$. \begin{lemma}\label{Dorder} Assume (\ref{la}). Then for $p>2$, $$D_{np}=\sum_{r, s\in \mathbb{Z}}|b_{n,r,s}|^p=O \big(n^{p(2-\beta)+2}L^p(n)\big).$$ \end{lemma} \begin{proof} We use the properties of slowly varying functions as stated in Lemma \ref{Karamata}, together with the condition $1<\beta<2$ (and thus $1-\beta>-1$ and $1-p\beta<-1$), throughout the proof. We also use $C>0$ as a generic constant in the proof. First we consider the case $r>n$. Since $b(\cdot, \cdot)$ is bounded, \begin{align} |b_{n,r,s}|&\le C\sum_{j, k=1}^n(j+r+|k+s|)^{-\beta}L(j+r+|k+s|)\label{r>n}\\ &\propto n\sum_{k=1}^n(r+|k+s|)^{-\beta}L(r+|k+s|)\notag\\ &\propto n^2(r+|s|)^{-\beta}L(r+|s|).\notag \end{align} Then \begin{align} &\sum_{s\in \mathbb{Z}, r>n}|b_{n,r,s}|^p=\sum_{|s|\le n, r>n}|b_{n,r,s}|^p+\sum_{|s|> n, r>n}|b_{n,r,s}|^p\notag\\ &\ \ \le Cn\sum_{r>n}n^{2p}r^{-p\beta}L^p(r)+2C\sum_{s> n, r>n}n^{2p}(r+s)^{-p\beta}L^p(r+s)\notag\\ &\ \ \propto n^{2p+1}n^{1-p\beta}L^p(n)+\sum_{r>n}n^{2p}(r+n)^{1-p\beta}L^p(r+n)\notag\\ &\ \ \propto n^{p(2-\beta)+2}L^p(n)+n^{2p}n^{2-p\beta}L^p(n)\notag\\ &\ \ =2n^{p(2-\beta)+2}L^p(n).\label{term1} \end{align} For the case $r<-2n$, let $R=-r-n$; then $R>n$ and \begin{align*} |b_{n,r,s}|&\le C\sum_{j, k=1}^n(-j-r+|k+s|)^{-\beta}L(-j-r+|k+s|)\\ &=C\sum_{j, k=1}^n(-j+n+R+|k+s|)^{-\beta}L(-j+n+R+|k+s|)\\ & \propto n^2(|r|+|s|)^{-\beta}L(|r|+|s|). \end{align*} Hence, similarly to \eqref{term1}, we have \begin{align} \sum_{s\in \mathbb{Z}, r<-2n}|b_{n,r,s}|^p=O \big(n^{p(2-\beta)+2}L^p(n)\big). \end{align} By symmetry, we also have \begin{align} \sum_{r\in \mathbb{Z}, s>n}|b_{n,r,s}|^p=O\big(n^{p(2-\beta)+2}L^p(n)\big), \end{align} and \begin{align} \sum_{r\in \mathbb{Z}, s<-2n}|b_{n,r,s}|^p=O\big(n^{p(2-\beta)+2}L^p(n)\big). \end{align} In the case $-2n\le r, s\le n$, \begin{align*} |b_{n,r,s}|&\le C\sum_{j, k=1}^n(|j+r|+|k+s|)^{-\beta}L(|j+r|+|k+s|)\\ &\le 4C\sum_{j, k=1}^{2n}(j+k)^{-\beta}L(j+k)\\ &\ll \sum_{j=1}^{2n}j^{1-\beta} \max_{1\le k\le 2n}L(j+k)\\ &\propto n^{2-\beta} \max_{1\le k\le 2n}L(2n+k)\propto n^{2-\beta} L(n). \end{align*} Hence, \begin{align}\label{term5} \sum_{-2n\le s, r\le n}|b_{n,r,s}|^p\ll n^2n^{p(2-\beta)}L^p(n) =n^{p(2-\beta)+2}L^p(n). \end{align} Putting (\ref{term1})-(\ref{term5}) together, we complete the proof of the lemma.
\end{proof} The theorem below gives a Davis-Gut type law for linear random fields that satisfy condition (DG). \begin{theorem}\label{DG} Assume condition (DG). Let $h(\cdot)$ be a positive nondecreasing function on $[c,\infty)$ for some constant $c\ge 1$, such that $\int_c^\infty (th(t))^{-1}dt=\infty$. Let $\Psi(t)=\int_c^t (sh(s))^{-1}ds$, $t\ge c$, and let $m=\arg\min_{t\ge c, t\in\mathbb{N}}\{\Psi(t)>1\}$. Then for real numbers $\varepsilon$ and integers $n\ge m$, we have \begin{align*} {\mathbb P} \left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2\ln\Psi(n)}\right) \propto \frac{1}{\sqrt{\ln\Psi(n)}}\Psi(n)^{-(1+\varepsilon)^2}. \end{align*} Define \begin{align*} S_{\Psi}:=\sum_{n=m}^\infty\frac{1}{nh(n)} {\mathbb P}\left(|S_n|>(1+\varepsilon) \sigma_n\sqrt{2\ln\Psi(n)}\right). \end{align*} Then $S_{\Psi}<\infty$ if $\varepsilon>0$ and $S_{\Psi}=\infty$ if $\varepsilon\le 0$. \end{theorem} \begin{proof} First we consider the short memory case (\ref{con1}). Recall that $a=\sum_{r,s\in\mathbb{Z}} a_{r,s}\ne 0$. Under condition (\ref{con1}), since $$ \sigma_n^2=\sum_{r,s\in\mathbb{Z}} b_{n,r,s}^2 =\sum_{r,s\in\mathbb{Z}} \bigg(\sum_{j=0}^n\sum_{k=0}^n a_{j+r, k+s}\bigg)^2, $$ it is easy to see that $\sigma_n^2/n^2-a^2\rightarrow 0$. Hence $a^2n^2/\sigma_n^2\rightarrow 1$ as $n\rightarrow \infty$. Also, the number of coefficients $b_{n,r,s}$ satisfying $|b_{n,r,s}|\ge 1$ is asymptotically at most $\lceil a^2n^2\rceil$. Then \begin{align*} D_{np}&=\sum_{r,s\in\mathbb{Z}} |b_{n,r,s}|^p=\sum_{r,s\in\mathbb{Z},|b_{n,r,s}|\ge 1} |b_{n,r,s}|^p +\sum_{r,s\in\mathbb{Z},|b_{n,r,s}|< 1} |b_{n,r,s}|^p\notag\\ &\le \sum_{r,s\in\mathbb{Z},|b_{n,r,s}|\ge 1} A^p+\sum_{r,s\in\mathbb{Z},|b_{n,r,s}|< 1} b_{n,r,s}^2\notag\\ &\le \lceil a^2n^2\rceil A^p+\sigma_n^2 \end{align*} has order $O(n^2)$ for $p>2$. Therefore $\ln(U_{np}^{-1})=\ln( \sigma_n^p/D_{np}) \ge (p-2)\ln n$. Next we study the long memory case. Under condition (\ref{la}), Lemma \ref{Dorder} gives $D_{np}=O(n^{p(2-\beta)+2}L^p(n))$. On the other hand, by Theorem 2 of Surgailis (1982), \begin{align*} \sigma_n^2=\sum_{r,s\in\mathbb{Z}} b_{n,r,s}^2\sim c_\beta n^{6-2\beta}L^2(n) \end{align*} for some constant $c_\beta$ depending only on $\beta$. Hence we also have $\ln \big(U_{np}^{-1}\big)=\ln \big(\sigma_n^p/D_{np}\big) \ge (p-2)\ln n$. By the definition of $\Psi(t)$, $\Psi(n) \le \int_c^n (sh(c))^{-1}ds\le\ln n/h(c)$. Let $x_n=(1+\varepsilon)\sqrt{2\ln\Psi(n)}$. Then $x_n^2\ll2(p-2)\ln n\le 2\ln(U_{np}^{-1})$. By Remark \ref{remark1}, $\sigma_n\rightarrow\infty$ implies that $\rho_n\rightarrow 0$ in our case. Then by Theorem \ref{ModerateD}, \begin{align} &\mathbb P \left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2\ln\Psi (n)} \right)\notag\\ &=2 \left(1-\Phi(x_n)\right)\left(1+o(1)\right)\notag\\ &=2(2\pi)^{-1/2}\frac{1}{(1+\varepsilon)\sqrt{2\ln\Psi (n)}} \exp \left(-(1+\varepsilon)^2 \ln\Psi (n)\right)(1+o(1))\label{approx}\\ &\propto \frac{1} {\sqrt{\ln\Psi (n)}} \big(\Psi (n) \big)^{-(1+\varepsilon)^2}.\notag \end{align} In (\ref{approx}) we have used the well-known inequality $$ \frac{1}{(2\pi)^{1/2} (1+x)} \exp\Big(-\frac{x^{2}} 2 \Big)\le 1-\Phi(x)\le \frac 1 {(2\pi)^{1/2} x}\exp\Big(- \frac{x^2} 2\Big), \ \hbox{ for } \, x > 1.
$$ Therefore, \begin{align*} S_{\Psi}&=\sum_{n=m}^\infty\frac{1}{nh(n)} \mathbb P \left(|S_n|>(1+\varepsilon) \sigma_n\sqrt{2\ln\Psi(n)}\right)\notag\\ &\propto \sum_{n=m}^\infty\frac{1}{nh(n)\sqrt{\ln\Psi(n)} \Psi(n)^{(1+\varepsilon)^2}}\notag\\ &=\sum_{n=m}^\infty\frac{\Psi'(n)}{\sqrt{\ln\Psi(n)} \Psi(n)^{(1+\varepsilon)^2}}.\notag \end{align*} It is clear that $S_{\Psi}<\infty$ if $\varepsilon>0$ and $S_{\Psi}=\infty$ if $\varepsilon\le 0$. \end{proof} \begin{corollary}\label{cor1} Assume condition (DG). Let \begin{align*} S=\sum_{n=3}^\infty\frac{1}{n(\ln\ln n)^b} \mathbb P \left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2\ln\ln n}\right). \end{align*} Then for any $b\in\mathbb{R}$, $S<\infty$ if $\varepsilon>0$ and $S=\infty$ if $\varepsilon<0$. If $\varepsilon=0$, $S<\infty$ if $b>\frac{1}{2}$ and $S=\infty$ if $b\le\frac{1}{2}$. \end{corollary} \begin{proof} Let $h(t)=1$ and $c=1$. Then $\Psi(n)=\ln n$. By Theorem \ref{DG}, for $n\ge 3$, \begin{align} \mathbb P\left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2\ln\ln n}\right) \propto \frac{1}{\sqrt{\ln\ln n}}(\ln n)^{-(1+\varepsilon)^2}.\notag \end{align} For any $b\in \mathbb{R}$, \begin{align} S&=\sum_{n=3}^\infty\frac{1}{n(\ln\ln n)^b}\mathbb P \left(|S_n|>(1+\varepsilon) \sigma_n\sqrt{2\ln\ln n}\right)\notag\\ &\propto \sum_{n=3}^\infty\frac{1}{n(\ln\ln n)^{b+1/2}} (\ln n)^{-(1+\varepsilon)^2}.\label{ep0} \end{align} It is clear that $S<\infty$ if $\varepsilon>0$ and $S=\infty$ if $\varepsilon<0$. In the case $\varepsilon=0$, by (\ref{ep0}), it is easy to see that $S<\infty$ if $b>\frac{1}{2}$ and $S=\infty$ if $b\le\frac{1}{2}$. \end{proof} \begin{corollary}\label{cor2} Assume condition (DG). For $0\le r<1$, let \begin{align*} S_r&=\sum_{n=3}^\infty\frac{1}{n(\ln n)^r} \mathbb P\left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2(1-r)\ln\ln n}\right). \end{align*} Then $S_r<\infty$ if $\varepsilon>0$ and $S_r=\infty$ if $\varepsilon\le 0$. \end{corollary} \begin{proof} Let $h(t)=(\ln t)^r/(1-r)$, $c=1$. Then $\Psi(n)=(\ln n)^{1-r}$. By Theorem \ref{DG}, for $n\ge 3$, \begin{align} \mathbb P\left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2(1-r)\ln\ln n}\right) \propto \frac{1}{\sqrt{\ln\ln n}}(\ln n)^{-(1+\varepsilon)^2(1-r) }\notag \end{align} and \begin{align*} S_r&=\sum_{n=3}^\infty\frac{1}{n(\ln n)^r}\mathbb P\left(|S_n|> (1+\varepsilon)\sigma_n\sqrt{2(1-r)\ln\ln n}\right)\notag\\ &\propto \sum_{n=3}^\infty\frac{1}{n\sqrt{\ln\ln n}(\ln n)^{ (1+\varepsilon)^2(1-r)+r}}\notag\\ &=\sum_{n=3}^\infty\frac{1}{n\sqrt{\ln\ln n}(\ln n)^{1+(2\varepsilon+\varepsilon^2) (1-r)}}.\notag \end{align*} It is clear that $S_r<\infty$ if $\varepsilon>0$ and $S_r=\infty$ if $\varepsilon\le 0$. \end{proof} \begin{corollary}\label{cor3} Assume condition (DG). Let \begin{align*} S=\sum_{n=16}^\infty\frac{1}{n\ln n} \mathbb P\left(|S_n|>(1+\varepsilon) \sigma_n\sqrt{2\ln\ln\ln n}\right). \end{align*} Then $S<\infty$ if $\varepsilon>0$ and $S=\infty$ if $\varepsilon\le 0$. \end{corollary} \begin{proof} Let $h(t)=\ln t$, $c=e$. Then $\Psi(n)=\ln\ln n$. By Theorem \ref{DG}, for $n\ge 16$, \begin{align} \mathbb P\left(|S_n|>(1+\varepsilon)\sigma_n\sqrt{2\ln\ln\ln n}\right) \propto \frac{1}{\sqrt{\ln\ln\ln n}}(\ln\ln n)^{-(1+\varepsilon)^2}\notag \end{align} and \begin{align*} S&=\sum_{n=16}^\infty\frac{1}{n\ln n}\mathbb P\left(|S_n|>(1+\varepsilon) \sigma_n\sqrt{2\ln\ln\ln n}\right)\notag\\ &\propto \sum_{n=16}^\infty\frac{1}{n\ln n(\ln\ln n)^{(1+\varepsilon)^2} \sqrt{\ln\ln\ln n}}.\notag \end{align*} It is clear that $S<\infty$ if $\varepsilon>0$ and $S=\infty$ if $\varepsilon\le 0$. 
\end{proof} \begin{remark} One can also prove the Davis-Gut law for linear processes by applying the moderate deviation results for linear processes in Peligrad et al. (2014a). Let $S_{n}=\sum_{k=1}^{n}X_{k}$, where \begin{equation*} X_{k}=\sum_{j=-\infty}^{\infty}a_{k-j}\xi_{j} \end{equation*} and the innovations $\xi_j$ are i.i.d. random variables with $\mathbb{E}\xi_j=0$ and $\mathbb{E}\xi_j^2=1$. Consider the short memory case $\sum_{i= -\infty}^\infty |a_i|<\infty$, $a=\sum_{i=-\infty}^\infty a_i\ne 0$. Observe that $S_{n}=\sum_{i=-\infty}^{\infty}b_{ni}\xi_{i}$ where $b_{ni}=a_{1-i}+\cdots+a_{n-i}$. Then $\sigma_n^2={\rm Var}(S_n)= \sum_{i}b_{ni}^{2}$. For the short memory case, it is well known that $\sigma_n^2$ has order $n$. Furthermore, $a^2n/\sigma_n^2\rightarrow 1$ as $n\rightarrow\infty$, and $\sum_i |b_{ni}|^p$ has order $n$ for $p>2$. Let \begin{equation*} U_{np}=\bigg(\sum_{i}b_{ni}^{2}\bigg)^{-p/2}\sum_{i}|b_{ni}|^{p}. \end{equation*} Then $\ln(U_{np}^{-1})=\ln[(\sum_{i}b_{ni}^{2})^{p/2}/\sum_{i}|b_{ni}|^{p}] \sim \frac{1}{2}(p-2)\ln n$. Let $h(t)$ and $\Psi(t)$ be the functions defined as in Theorem \ref{DG}. Hence \begin{align}\label{x_n} x_n=(1+\varepsilon)\sqrt{2\ln\Psi(n)} \ll \sqrt{(p-2)\ln n}\sim \sqrt{2\ln(U_{np}^{-1})}. \end{align} Then by part (iii) of Corollary 3 in Peligrad et al. (2014a), the Davis-Gut type laws, Theorem \ref{DG} and Corollaries \ref{cor1}--\ref{cor3}, hold for short memory linear processes. \end{remark} \begin{remark} For the causal long memory linear process, $X_{k}=\sum_{j=0}^{\infty} a_{k-j}\xi_{j}$, with $\sum_{i=0}^\infty |a_i|=\infty$ and $\sum_{i=0}^\infty a_i^2<\infty$, we assume that $a_{n}=(n+1)^{-\alpha}L(n+1)$, where $1/2<\alpha<1$ and $L(n)>0$ is a slowly varying function at infinity. Then $S_n=\sum_{i=1}^\infty b_{ni}\xi_{n-i}$, where $b_{ni}=\sum_{k=1}^i a_k$ for $i<n$ and $b_{ni}=\sum_{k=i-n+1}^i a_k$ for $i\ge n$. Also \begin{equation} \label{sumb} \sigma_n^2= {\rm Var}(S_n)=\sum_{i=1}^{\infty}b_{ni}^{2}\sim c_\alpha n^{3-2\alpha} L^{2}(n), \end{equation} where \begin{equation*} c_{\alpha}= \frac 1 {(1-\alpha)^{2}} \int_{0}^{\infty}[x^{1-\alpha}- \max(x-1,0)^{1-\alpha}]^{2}dx. \end{equation*} The asymptotic equivalence in (\ref{sumb}) is well known; see for instance Theorem 2 in Wu and Min (2005). On the other hand, there are constants $C_{1}$ and $C_{2}$ such that for all $n\geq1$, \begin{equation} b_{ni}\leq C_{1}i^{1-\alpha}L(i)\text{ for }i\leq2n\text{ and } b_{ni}\leq C_{2}n(i-n)^{-\alpha}L(i)\text{ for }i>2n.\notag \end{equation} Hence, by the properties of slowly varying functions as stated in Lemma \ref{Karamata}, \begin{align*} \sum_i b_{ni}^p&\ll\sum_{i<2n}i^{(1-\alpha)p}L^p(i)+ \sum_{i\ge 2n}n^p(i-n)^{-\alpha p} L^p(i)\notag\\ &\ll n^{1+p(1-\alpha)}L^p(n). \end{align*} Therefore, \begin{align*} \ln(U_{np}^{-1})&=\ln \bigg[(\sum_{i}b_{ni}^{2})^{p/2}/\sum_{i}b_{ni}^{p}\bigg]\notag\\ &\gg\ln[n^{p(3-2\alpha)/2}L^{p}(n)/ (n^{1+p(1-\alpha)}L^p(n))]\notag\\ &\sim \frac{1}{2}(p-2)\ln n. \end{align*} Let $h(t)$ and $\Psi(t)$ be the functions defined as in Theorem \ref{DG}. Hence (\ref{x_n}) still holds. Then by Corollary 3, part (iii), of Peligrad et al. (2014a), the Davis-Gut type laws, Theorem \ref{DG} and Corollaries \ref{cor1}--\ref{cor3}, hold for long memory linear processes. \end{remark} \section{Appendix} In the Appendix, we first justify (1) and (2) in the Introduction, and then collect some results that are useful for the proofs in Section 2.
\begin{lemma}\label{Lem:Conv} Let $\{\xi_{r,s}, (r, s) \in \mathbb{Z}^2\}$ and $\xi_0$ be i.i.d. random variables with $\mathbb{E}\xi_0=0$ and $\mathbb{E}\xi^2_0=1$, and let $\{a_{r,s}, (r, s) \in \mathbb{Z}^2\}$ be a square summable sequence of constants. Then the following statements hold: \begin{itemize} \item[(i).] The series $\sum_{r, s\in \mathbb{Z}} a_{r,s}\xi_{j-r, k-s} $ converges in $L^2(\Omega, \mathbb P)$ and almost surely. \item[(ii).] Equation (2) holds in $L^2(\Omega, \mathbb P)$ and almost surely. \end{itemize} \end{lemma} \begin{proof} We refer to the series $\sum_{r, s\in \mathbb{Z}} a_{r,s}\xi_{j-r, k-s} $ as (1). Let $\{\Upsilon_n, n \ge 1\}$ be an arbitrary sequence of finite subsets of $\mathbb Z^2$ that satisfy $\Upsilon_n \subset \Upsilon_{n+1}$ and $|\Upsilon_n| \to \infty$. Then for any $m < n$, \[ {\mathbb E}\left[\bigg(\sum_{(r, s)\in \Upsilon_{n}\backslash \Upsilon_m} a_{r,s}\xi_{j-r, k-s} \bigg)^2\right] = \sum_{(r, s)\in \Upsilon_{n}\backslash \Upsilon_m} a_{r,s}^2, \] which tends to 0 as $ m \to \infty$. This implies that (1) converges in $L^2(\Omega, \mathbb P)$. Since the summands in the series (1) are independent random variables, the almost sure convergence of (1) follows from Kolmogorov's Three-Series Theorem. Alternatively, it follows from L\'evy's Equivalence Theorem, which says that the almost sure convergence is equivalent to convergence in probability or in law. Next let $n\ge 1$ be fixed and write the partial sum $S_n$ as \begin{align*} S_n &=\sum_{(j,k)\in\Gamma_n} \sum_{r, s\in \mathbb Z} a_{r,s}\xi_{j-r, k-s}\\ &= \sum_{(j,k)\in\Gamma_n} \sum_{r, s\in \mathbb Z} a_{j+r,k+s}\xi_{-r, -s}\\ &= \sum_{(j,k)\in\Gamma_n} \lim_{m \to \infty} \sum_{r, s\in [-m, m]} a_{j+r,k+s}\xi_{-r, -s}\\ &=\lim_{m \to \infty} \sum_{(j,k)\in\Gamma_n} \sum_{r, s\in [-m, m]} a_{j+r,k+s}\xi_{-r, -s}, \end{align*} where the limit is taken either in $L^2(\Omega, \mathbb P)$ or in the almost sure sense. Since both index sets $\Gamma_n$ and $[-m, m]^2$ are finite, we can change the order of summation to get \begin{align*} S_n &= \lim_{m \to \infty} \sum_{r, s\in [-m, m]} \bigg(\sum_{(j,k)\in\Gamma_n} a_{j+r,k+s}\bigg)\xi_{-r, -s}\\ &=\sum_{r, s\in \mathbb{Z}}\bigg(\sum_{(j,k)\in\Gamma_n} a_{j+r,k+s}\bigg)\xi_{-r, -s} =\sum_{r,s\in\mathbb{Z}} b_{n,r,s}\xi_{-r, -s}. \end{align*} This verifies (ii). \end{proof} The following theorem is an extended version of the Fuk--Nagaev inequality (see Corollary 1.7 in Nagaev (1979)) for a double sum of infinitely many random variables. See also the extension of the Fuk--Nagaev inequality for sums of infinitely many random variables, Theorem 5.1 and Remark 5.1 in Peligrad et al. (2014b). \begin{theorem}\label{FN} Let $(X_{ni})_{i\in \mathbb{N}^2}$ be a set of independent random variables with mean 0. For a constant $m \ge 2$, let $\beta=m/(m+2)$ and $\alpha=1-\beta=2/(m+2)$. For any $y>0$, define $X_{ni}^{(y)}=X_{ni}I(X_{ni}\leq y)$, $A_{n}(m;0,y):=\sum_{i\in\mathbb{N}^2}\mathbb{E}[X_{ni}^{m}I(0<X_{ni}<y)]$ and $B_{n}^{2}(-\infty,y):=\sum_{i\in\mathbb{N}^2}\mathbb{E}[X_{ni}^{2}I(X_{ni}<y)].$ Then for any $x>0$ and $y>0$, \begin{equation*} \mathbb{P}\bigg(\sum_{i\in\mathbb{N}^2}X_{ni}^{(y)}\geq x \bigg)\leq\exp \bigg(-\frac{\alpha^{2} x^{2}}{2e^{m}B_{n}^{2}(-\infty,y)} \bigg)+\bigg (\frac{A_{n}(m;0,y)}{\beta xy^{m-1}}\bigg)^{\beta x/y}. \end{equation*} \end{theorem} The following result is Theorem 5.2 of Peligrad et al. (2014b), which is an immediate consequence of Theorem 1.1 in Frolov (2005).
\begin{theorem} \label{frolov} Let $(X_{nj})_{1\leq j\leq k_{n}}$ be an array of row-wise independent centered random variables. Let $S_{n}=\sum _{j=1}^{k_{n}}X_{nj}$ and $\sigma_n^2=\sum_{j=1}^{k_n}\mathbb{E}X_{nj}^2$. For any positive numbers $u, v$ and $\varepsilon$, denote \[ \Lambda_{n}(u,v,\varepsilon)=\frac{u}{\sigma_{n}^{2}}\sum_{j=1}^{k_{n}} \mathbb{E} \big[X_{nj}^{2}I(X_{nj}\leq-\varepsilon\sigma_{n}/v)\big]. \] Assume that for some constant $p>2$, $M_{np} =\sum_{j=1}^{k_{n}}\mathbb{E} \big[X_{nj}^{p} I(X_{nj}\geq0)\big]<\infty$ and $L_{np} :=\sigma_{n}^{-p}M_{np} \rightarrow 0$ as $n \to \infty$. If $\Lambda_{n}(x^{4},x^{5},\varepsilon)\rightarrow0$ for any $\varepsilon>0$ and $x^{2} -2\ln(L_{np}^{-1})-(p-1)\ln\ln(L_{np}^{-1})\rightarrow-\infty$ as $n \to \infty$, then \[ \mathbb{P}\left( S_{n}\geq x\sigma_{n}\right) =(1-\Phi(x))(1+o(1)). \] \end{theorem} The following lemma is useful in the proof of Theorem \ref{mix}. It is Proposition 5.1 in Peligrad et al. (2014b). \begin{lemma} \label{frolov-trunc} Assume the conditions in Theorem \ref{frolov} are satisfied. Fix $\varepsilon>0.$ Define \[ X_{nj}^{(\varepsilon x\sigma_{n})}=X_{nj}I(X_{nj}\leq\varepsilon x\sigma _{n})\ \text{ and } \ \ S_{n}^{(\varepsilon x\sigma_{n})}=\sum_{j=1}^{k_{n}} X_{nj}^{(\varepsilon x\sigma_{n})}. \] If $x^{2}\leq c\ln(L_{np}^{-1})$ with $c<1/\varepsilon$, then as $n \to \infty$ we have \[ \mathbb{P}\left( S_{n}^{(\varepsilon x\sigma_{n})}\geq x\sigma_{n}\right) =(1-\Phi(x))(1+o(1)). \] \end{lemma} The following lemma lists some properties of slowly varying functions. Their proofs can be found in Bingham et al. (1987) or Seneta (1976). \begin{lemma} \label{Karamata} A slowly varying function $l(x)$ defined on $[A,\infty)$ has the following properties: \begin{enumerate} \item For $A<c<C<\infty$, $\lim_{x\rightarrow\infty}\frac{l(tx)}{l(x)}=1$ uniformly in $c\leq t\leq C$. \item For any $\theta>-1$, $\int_{A}^{x}y^{\theta}l(y)dy\sim \frac{x^{\theta+1}l(x)}{\theta+1}$ as $x\rightarrow\infty$. \item For any $\theta<-1$, $\int_{x}^{\infty}y^{\theta}l(y)dy\sim \frac{x^{\theta+1}l(x)}{-\theta-1}$ as $x\rightarrow\infty$. \item For any $\eta>0$, $\sup_{t\geq x}(t^{\eta}l(t))\sim x^{\eta }l(x)$ as $x\rightarrow\infty$. Moreover, $\sup_{t\geq x}(t^{\eta}l(t))=x^{\eta}\bar{l}(x)$, where $\bar{l}(x)$ is slowly varying and $\bar {l}(x)\sim l(x).$ \end{enumerate} \end{lemma}
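For instance, for $l(y)=\ln y$ and $\theta>-1$, integration by parts gives \begin{equation*} \int_{A}^{x}y^{\theta}\ln y\, dy=\frac{x^{\theta+1}\ln x}{\theta+1}-\frac{x^{\theta+1}}{(\theta+1)^{2}}+O(1)\sim\frac{x^{\theta+1}\ln x}{\theta+1}, \qquad x\rightarrow\infty, \end{equation*} in agreement with property 2.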
\section{\label{sec:intro}Introduction} In recent decades, finite quantum systems have become an intensively studied subject. Particular interest is due to electrons in quantum dots (QDs) \cite{jacak98} or wells, forming \textit{artificial atoms} \cite{ashoori96} with molecule-like behavior and novel spectral and dynamical properties. In contrast to \textit{real atoms}, such new properties arise from dimensionality reduction and natural scale differences, as QDs embedded within semiconductor heterostructures generate charge carrier motion on typically nanometer length scales. The collective, optical and transport properties of QDs are examined by experimental \cite{brocke03} and theoretical \cite{baer04,szafran04,indlekofer05} research activities as functions of various dot parameters and geometries. For an overview see e.g.~Refs.~\cite{jacak98,banyai93,reimann02}. Many ground state calculations are available in the literature which are based on different methods---exact numerical diagonalization \cite{kvaal07,ciftja06,jauregui93}, self-consistent Hartree-Fock \cite{yannou07,reusch01,ludwig08}, configuration interaction \cite{rontani06} and quantum Monte Carlo \cite{egger99}. Extensions to finite temperatures and to QD properties in (transversal) magnetic fields are to be found in Refs.~\cite{szafran04,yannou07,filinov01}. Typical charge densities in QD devices vary over a large range \cite{ciftja06}---from macroscopic charge carrier ensembles to mesoscopic, few- and even single-electron quantum dots. However, injecting only a small integer number of electrons into the dot reveals system properties that sensitively depend on the charge carrier number and are thus externally controllable, e.g.~by gate voltage or tip-electrode field variation or by local mechanical strain (band mismatch). On the other hand, the quantum dot state is governed by the interplay of quantum and spin effects, the Coulomb repulsion between the carriers and the strength of the dot confinement. This generally leads to strong electron-electron correlations, i.e.~collision or scattering effects, whose influence on the many-particle state is very important at zero as well as at finite temperature. The two-dimensional (2D) $N$-electron quantum dot Hamiltonian to be considered is \begin{eqnarray}\label{ham} \hat{H}_{e}&=&\sum_{i=1}^{N}\left(-\frac{\hbar^2}{2 m_e^*}\nabla_{\!i}^2+\frac{m_e^*}{2} \omega_0^2 \vec{r}_i^2\right)+\sum_{i<j}^{N}\frac{e^2}{4\pi\varepsilon\, r_{ij}}\;,\;\;\; \end{eqnarray} where the effective electron mass is denoted by $m_e^*$, the frequency $\omega_0$ adjusts the (isotropic) parabolic confinement strength, $e$ is the elementary charge and $\varepsilon$ is the background dielectric constant. The vectors $\vec{r}_i$ are the single charge carrier coordinates with respect to the quantum dot center and $r_{ij}=|\vec{r}_i-\vec{r}_j|$. The density in the QD is controllable by the confining potential, which directly affects the relative electron-electron interaction strength and, tuned towards low carrier densities, continuously leads to the formation of so-called electron (Wigner) molecules \cite{egger99,reusch01} or crystal-like behavior \cite{filinov01}. Melting processes owing to an increased temperature weaken and finally prevent such solid-like structure formation. Also, with the restriction to Eq.~(\ref{ham}), the present analysis neglects nonideality effects such as defects and well-width fluctuations, see e.g.~Refs.~\cite{filinov04,bracker05}. 
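To give a feeling for the scales entering Eq.~(\ref{ham}), the following minimal script evaluates the oscillator length and the ratio of Coulomb to confinement energy. It is a back-of-envelope sketch of ours; the GaAs-like material parameters ($m_e^*=0.067\,m_e$, $\varepsilon=12.4\,\varepsilon_0$) and the confinement energy of $3$~meV are typical literature values inserted for illustration, not data from the present paper. The printed ratio anticipates the coupling parameter $\lambda$ introduced in Sec.~\ref{sec:theory}.

\begin{verbatim}
import numpy as np
import scipy.constants as const

# Illustrative GaAs-like parameters (assumed, not from this paper)
m_star = 0.067 * const.m_e          # effective electron mass
eps    = 12.4 * const.epsilon_0     # background dielectric constant
hw0    = 3.0e-3 * const.e           # confinement energy, here 3 meV

omega0 = hw0 / const.hbar
l0  = np.sqrt(const.hbar / (m_star * omega0))   # oscillator length
E_C = const.e**2 / (4 * np.pi * eps * l0)       # Coulomb energy at distance l0

print(f"l0 = {l0 * 1e9:.1f} nm")                # about 20 nm
print(f"E_C / (hbar omega0) = {E_C / hw0:.2f}") # about 2: moderate coupling
\end{verbatim}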
The objective of the present work is to analyze the electron correlations in the QD system (\ref{ham}) on the pathway from Fermi gas/liquid towards strongly correlated Wigner molecule behavior, i.e.~during the (density) delocalization-localization transition. To this end, in Sec.~\ref{sec:theory}, we present the finite temperature formalism of nonequilibrium Green's functions (NEGF) theory, which we apply with a threefold motivation: (i)~the NEGFs allow for a consistent and conserving treatment of Coulomb correlations, (ii)~to our knowledge, previous NEGF approaches to inhomogeneous QDs do not incorporate strong carrier-carrier coupling, and (iii), in contrast to other methods, the approach can be directly extended to nonequilibrium situations with e.g.~time-dependent gate-voltage variations, quantum transport phenomena or optical switching\cite{banyai93}. Using the NEGF technique, the properties of the investigated spin-polarized $N$-electron quantum dot in thermodynamic equilibrium follow from the self-consistently obtained imaginary time (Matsubara) Green's function. Such an approach has also more recently been shown to give accurate results for real atoms and molecules, see Refs.~\cite{dahlen05,dahlen06,dahlen07}. The extension of the nonequilibrium Green's function ansatz from traditional applications to quasi-homogeneous quantum systems \cite{bonitz96,binder97,kwong98} towards spatial inhomogeneity is thereby the major goal of the present analysis. The theoretical part in Sec.~\ref{sec:theory} is followed by a detailed description of the iteration technique used to numerically solve the Dyson equation (in Hartree-Fock \textit{and} second Born approximation) to self-consistency. The results are discussed in Sec.~\ref{sec:results}. The starting point is the limit of large anisotropy where in Eq.~(\ref{ham}) we consider the limit $\omega_0^2{\vec r}_i^2\rightarrow\omega^2_{x,0} x^2_i+\omega_{y,0}^2 y^2_i$ with $\omega_{y,0}\gg\omega_{x,0}$, see Sec.~\ref{subsec:results1D}. In the case of $N=3$ (quantum dot lithium) and $6$ electrons, the charge carrier density, the orbital-resolved distribution functions and total energies are computed for different values of interaction strength and temperature. Also, we demonstrate that at finite temperatures, the second Born (correlation) corrections to the mean-field treatment yield significant density changes in an intermediate regime, whereas in the high- and low-temperature limits the electron density is only weakly affected by correlations. In Sec.~\ref{subsec:results2D}, we extend the calculations to isotropic 2D confinement and analogously report on ground state results for $N=2$ electrons (quantum dot helium) which are compared with exact and Hartree-Fock results of Ref.~\cite{reusch01}. Moreover, the computation of the charge carrier spectral function\cite{kadanoff62} $a(\omega)$ allows in Sec.~\ref{subsec:spectralfct} for a collision-induced renormalization of the Hartree-Fock energy spectrum. This is of high relevance for the optical properties of the few-electron QD. 
Sec.~\ref{sec:conclude} gives a final discussion. \section{\label{sec:theory}Theory} For the characterization and quantum mechanical treatment of the $N$-electron dot system (\ref{ham}) it is convenient to introduce the coupling (or Wigner) parameter $\lambda$ which relates the characteristic Coulomb energy $E_C=e^2/(4\pi\varepsilon l_0^*)$ to the confinement energy $E^*_0=\hbar\omega_0\;$: \begin{eqnarray}\label{cp} \lambda=\frac{E_C}{E^*_0}=\frac{e^2}{4\pi\varepsilon\,l_0^* \hbar\omega_0}=\frac{l_0^*}{a_B}\;, \end{eqnarray} with $l_0^*=\sqrt{\hbar/(m^*_e\omega_0)}$ being the characteristic single-electron extension in the QD and $a_B$ the effective electron Bohr radius. Using the replacement rules $\{\vec{r}_i\rightarrow \vec{r}_i/l_0^*,\,E\rightarrow E/E_0^*\}$, Hamiltonian (\ref{ham}) transforms into the dimensionless form \begin{eqnarray}\label{haml} \hat{H}_{\lambda}&=&\frac{1}{2}\sum_{i=1}^{N}(-\nabla_{\!i}^2+\vec{r}_i^2)\,+\,\sum_{i<j}^{N}\frac{\lambda}{r_{ij}}\;. \end{eqnarray} For coupling parameters $\lambda\ll1$, the quantum dot electrons will be found in a Fermi gas- or liquid-like state, whereas in the limit $\lambda\rightarrow\infty$, one has $l_0^*\gg a_B$, and quantum effects vanish in favor of classical, interaction dominated charge carrier behavior \cite{ludwig08}. In the case of moderate coupling ($\lambda\gtrsim 1$) quantum dots with spatially well localized carrier density can be formed. Further, in addition to $N$ and $\lambda$, the system is characterized by the QD temperature $\beta^{-1}=k_B T$, which will be measured in units of the confinement energy $E_0^*$. \subsection{\label{subsec:2ndquant}Second quantization representation} Introducing carrier annihilation (creation) operators $\hat{\psi}^{(\dagger)}(\vec{r})$ acting at space point $\vec{r}$, the second-quantized form of Hamiltonian $\hat{H}_{\lambda}$, Eq. (\ref{haml}), is \begin{eqnarray}\label{sq} \hat{H}_\lambda\!&=&\!\int\!\textup{d}^2\!r\,\hat{\psi}^\dagger(\vec{r})\,h^0(\vec{r})\,\hat{\psi}(\vec{r})\\ &&+\,\frac{1}{2}\!\int\!\!\!\int\!\textup{d}^2\!r\,\textup{d}^2\! \bar{r}\,\hat{\psi}^\dagger(\vec{r})\,\hat{\psi}^\dagger(\bar{\vec{r}})\,\frac{\lambda}{\sqrt{(\vec{r}-\bar{\vec{r}})^2}}\,\hat{\psi}(\bar{\vec{r}})\,\hat{\psi}(\vec{r})\;,\nonumber \end{eqnarray} where $h^0(\vec{r})=(-\nabla^2+\vec{r}^2)/2$ denotes the single-particle energy and the second term in~(\ref{sq}) describes the electron-electron interactions. The field operators $\hat{\psi}^{(\dagger)}(\vec{r})$ satisfy the fermionic anti-commutation relations ${[\hat{\psi}(\vec{r}),\hat{\psi}^\dagger(\bar{\vec{r}})]}_{+}=\delta(\vec{r}-\bar{\vec{r}})$ and ${[\hat{\psi}^{(\dagger)}(\vec{r}),\hat{\psi}^{(\dagger)}(\bar{\vec{r}})]}_{+}=0$, where $[\hat{A},\hat{B}]_{+}=\hat{A} \hat{B}+\hat{B} \hat{A}$. Ensemble averaging in Eq.~(\ref{sq}) directly gives rise to the one-particle nonequilibrium Green's function, which is defined as \begin{eqnarray}\label{1pngf} G(1,2)=-\frac{i}{\hbar}\mean{T_{\cal C}[\hat{\psi}(1)\hat{\psi}^\dagger(2)]}\;, \end{eqnarray} and is a generalization of the one-particle density matrix [which is recovered from $G$ in the limit of equal time arguments $t_1=t_2$, see e.g.~Ref.~\cite{kadanoff62}]. We use the shorthand notation $1=(\vec{r}_1,t_1)$, and the expectation value (ensemble average) reads $\langle{\hat A}\rangle=\textup{Tr}\,{\hat{\rho}\hat A}$. 
The two times $t_{1}$ and $t_{2}$ entering $G(1,2)$ arise in the Heisenberg picture of the field operators and vary along the complex Schwinger/Keldysh contour ${\cal C}=\{t\in\mathbb{C}\,|\,\Re\,t\in[0,\infty]\,,\,\Im\,t\in[-\beta,0]\}$, where $T_{\cal C}$ denotes time-ordering on $\cal C$, see e.g. Refs.~\cite{dahlen06} and \cite{kadanoff62}. Note that in the remainder of this work we use $\hbar=1$. The advantage of using the NEGF is that it allows for equal access to equilibrium and nonequilibrium averages at finite temperatures and that quantum many-body approximations can be systematically included by diagram expansions \cite{dahlen05}, see Secs.~\ref{subsec:negf} and \ref{sec:simu}. Moreover, most dynamic (spectral) and thermodynamic information \cite{kadanoff62} is contained in the NEGF, cf.~Sec.~\ref{sec:results}. \subsection{\label{subsec:negf}Nonequilibrium Green's functions formalism} The two-time nonequilibrium Green's function $G(1,2)$ obeys the Keldysh/Kadanoff-Baym equation (KBE) \cite{kadanoff62,bonitz96} \begin{eqnarray}\label{kkbe} &&[i\,\partial_{t_1}-h^0(\vec{r}_1)]\,G(1,2)\nonumber\\ &=&\delta_{\cal C}(1-2)-\!\int_{\cal C} \textup{d}3\,W(1-3)\,G_{12}(13;23^+)\;, \end{eqnarray} and its adjoint equation [with interchanged time arguments \mbox{$t_1\leftrightarrow t_2$}]. On the right hand side of Eq.~(\ref{kkbe}) the (collision) integral runs over the full configuration space and the time domain spanned by the contour $\cal C$. Further, \mbox{$W(1-2)=\lambda\,\delta_{\cal C}(t_1-t_2)/\sqrt{(\vec{r}_1-\vec{r}_2)^2}$} is the instantaneous (time-local) electron-electron interaction and \mbox{$\delta_{\cal C}(1-2)=\delta_{\cal C}(t_1-t_2)\,\delta(\vec{r}_1-\vec{r}_2)$} with the time delta function being defined on the contour. $G_{12}(12;1'2')$ denotes the two-particle NEGF \begin{eqnarray}\label{2pngf} G_{12}(12;1'2')=(-i)^2\mean{T_{\cal C}[\psi(1)\psi(2)\psi^\dagger({2'})\psi^\dagger({1'})]}\;, \end{eqnarray} where the short notation $3^+$ in Eq.~(\ref{kkbe}) indicates that the limit $t\rightarrow t_3+0$ is taken from above on the contour. In the integro-differential form (\ref{kkbe}), the KBE is not closed but constitutes the first equation of the Martin-Schwinger (MS) hierarchy \cite{martin59}. In order to decouple the hierarchy, approximate expressions for the two-particle Green's function are introduced. E.g.~in a first order (spatially non-local) Hartree-Fock approach one substitutes $G_{12}(12;1'2')\rightarrow G(1,1')G(2,2')-G(1,2')G(2,1')$, which is known to preserve total energy and momentum but completely neglects correlations---the former term gives the Hartree potential, the latter accounts for exchange. More generally, such \textit{conserving} approximations can be formulated in terms of a self-energy functional $\Sigma[G](1,2)$ which is defined by \begin{eqnarray}\label{sigma} &&-i\int_{\cal C}\textup{d}3\,W(1-3)\,G_{12}(13;23^+)\nonumber\\ &=&\int_{\cal C}\textup{d}3\,\Sigma[G](1,3)\,G(3,2)\;. \end{eqnarray} Other, advanced conserving approximations, such as the second Born approximation (see Sec.~\ref{sec:simu}), can be systematically derived from a generating functional $\Phi[G]$ according to $\Sigma(1,2)=\delta \Phi[G]/\delta G(2,1)$, see e.g. Ref.~\cite{baym62}. In addition to a specific MS hierarchy decoupling, the KBE (\ref{kkbe}) must be supplied with initial or boundary conditions. 
In this paper, we will use the Kubo-Martin-Schwinger conditions \cite{kubo5766,kadanoff62} $G(\vec{r}_1 t_1,2)|_{t_1=0}=-G(\vec{r}_1\,0-i \beta,2)$ and $G(1,\vec{r}_2 t_2)|_{t_2=0}=-G(1,\vec{r}_2\,0-i \beta)$. In the case of thermodynamic equilibrium, where without loss of generality the electron system~(\ref{ham}) is time-independent for \mbox{$\Re t_{1,2}\leq0$}, $G(1,2)$ has no real-time dependence but extends on the imaginary contour branch $[-i\beta,0]$ only. We define the corresponding Matsubara (imaginary time) Green's function $G^M$ with respect to the transformations $t_1-t_2\rightarrow i\tau$ ($\tau\in[-\beta,\beta]$) and $G\rightarrow-iG^M$, i.e. \begin{eqnarray}\label{mgf} G^M\!(\vec{r}_1,\vec{r}_2;\tau)=-i G(1,2)\;, \end{eqnarray} which only depends on the time difference $t_1-t_2$, $t_{1,2}\in[-i\beta,0]$, and is anti-periodic with respect to the inverse temperature $\beta$, compare with definition (\ref{1pngf}). Using expressions (\ref{sigma}) and (\ref{mgf}) in the KBE (\ref{kkbe}) leads to the general form of the Dyson equation \cite{dahlen05} for the spin-polarized QD system (\ref{sq}) \begin{eqnarray}\label{deq} &&[-\partial_\tau-h^0(\vec{r}_1)]\,G^M\!(\vec{r}_1,\vec{r}_2;\tau)\\ &=&\delta(\tau)+\!\int\!\!\textup{d}^2\bar{r}\int_0^\beta\!\!\textup{d}\bar{\tau}\,\Sigma^M_\lambda(\vec{r}_1,\bar{\vec{r}};\tau-\bar{\tau})\,G^M\!(\bar{\vec{r}},\vec{r}_2;\bar{\tau})\;,\nonumber \end{eqnarray} with the anti-periodic Matsubara self-energy $\Sigma^M_\lambda(\vec{r}_1,\vec{r}_2;\tau)$. Note that the Dyson equation in this form is exact and that many-body approximations enter via $\Sigma^M_\lambda[G^M]$. Eq.~(\ref{deq}) is the central equation which will be applied in the subsequent Secs.~\ref{sec:simu} and \ref{sec:results} to investigate the effect of carrier-carrier correlations in the $N$-electron quantum dot. However, as the self-energy $\Sigma^M_\lambda$ is a functional of the Matsubara Green's function $G^M\!(\vec{r}_1,\vec{r}_2;\tau)$, a self-consistent solution of the Dyson equation is required to accurately characterize the equilibrium QD state. The corresponding numerical technique is developed in the next section. \section{\label{sec:simu}Simulation technique} In this section, we discuss the computational scheme of solving the Dyson equation for the 2D few-electron quantum dot specified by Eq.~(\ref{haml}). Thereby, we proceed in two steps: First, we solve Eq.~(\ref{deq}) at the Hartree-Fock (HF) level, see Sec.~\ref{subsec:HF}, and, second, we incorporate correlations within the $\Phi$-derivable second order Born approximation, see Sec.~\ref{subsec:2ndB}. Throughout, we represent $G^M$ in the $\tau$-domain rather than solving the Dyson equation in frequency space, where $G^M\!(\vec{r}_1,\vec{r}_2;\omega)=\int_0^\beta\!\textup{d}\tau\,G^M\!(\vec{r}_1,\vec{r}_2;\tau)\,e^{i\omega \tau}$ can be obtained by analytic continuation, see e.g.~Ref.~\cite{ku02}. \subsection{\label{subsec:HF}Hartree-Fock at zero and finite temperatures} At the mean-field level, the solution of the Dyson equation (\ref{deq}) is fully equivalent to the Hartree-Fock self-consistent field method \cite{yannou07,echenique07} at finite temperature $\beta^{-1}$. Hence, we primarily resort to standard HF techniques and will recover the uncorrelated Matsubara Green's function, denoted $G^0(\vec{r}_1,\vec{r}_2;\tau)$, at the end of this section. The Hartree-Fock approach leads to an effective one-particle description of the QD and gives a first estimate of exchange effects. 
However, as an independent-electron approximation, it does not include correlations, i.e.~the HF total energy is given by $E_{\mathrm{HF}}^0=E_{\mathrm{exact}}-E_{\mathrm{corr}}$. With respect to the second quantized Hamiltonian of Eq.~(\ref{sq}), the effective HF Hamiltonian is obtained by approximately replacing the four-field-operator product entering the interaction term by sums over products $\hat{\psi}^\dagger\hat{\psi}$ weighted by the generalized carrier density matrix $\rho(\vec{r},\bar{\vec{r}})$. This is consistent with the mean-field approximation for the two-particle Green's function as given in Sec.~\ref{subsec:negf} and leads to \begin{eqnarray} \hat{H}_\lambda=\!\int\!\!\!\int\!\textup{d}^2 r\,\textup{d}^2 \bar{r}\,\hat{\psi}^\dagger(\vec{r})[h^0(\vec{r})\delta(\vec{r}-\bar{\vec{r}})+\Sigma^0_\lambda(\vec{r},\bar{\vec{r}})]\hat{\psi}(\bar{\vec{r}})\;,\nonumber\\ \label{HFham} \end{eqnarray} with the Hartree-Fock self-energy \begin{eqnarray}\label{HFse} \Sigma^0_\lambda(\vec{r},\bar{\vec{r}})=\!\int\!\textup{d}^2 r'\,\frac{\lambda\rho(\vec{r}',\vec{r}')}{\sqrt{(\vec{r}'-\vec{r})^2}}\,\delta(\vec{r}-\bar{\vec{r}})-\frac{\lambda\rho(\vec{r},\bar{\vec{r}})}{\sqrt{(\vec{r}-\bar{\vec{r}})^2}}\;.\;\; \end{eqnarray} Here, the first (second) term constitutes the Hartree (Fock or exchange) contribution. Computationally convenient is the introduction of a basis representation for the electron field operator according to \begin{eqnarray} \label{fobexp} \hat{\psi}^{(\dagger)}(\vec{r})=\sum_{i}\varphi^{(*)}_i(\vec{r})\,\hat{a}^{(\dagger)}_{i}\;,\hspace{1pc}i\in\{0,1,2,\ldots\}\;, \end{eqnarray} where the one-particle wave functions (orbitals) $\varphi_i(\vec{r})$ form an orthonormal complete set and $\hat{a}^{(\dagger)}_{i}$ denotes the annihilation (creation) operator of a particle on the level $i$. At this stage, the QD system (\ref{HFham}) can be transformed into the matrix representation $h_{\lambda,ij}=h^0_{ij}+\Sigma^0_{\lambda,ij}$ with the single-particle quantum numbers $i$ and $j$, $h_{\lambda,ij}$ being the effective HF Hamiltonian matrix, $h^0_{ij}$ the single-particle (kinetic plus confinement) energy and $\Sigma^0_{\lambda,ij}$ the electron self-energy in mean-field approximation. More precisely, we have \begin{eqnarray} \label{HFhamm} h_{\lambda,ij}&=&h^0_{ij}+\Sigma^0_{\lambda,ij}\;,\\ \label{H0term} h_{ij}^0&=&\frac{1}{2}\int\!\textup{d}^2 r\,\varphi_i^*(\vec{r})(-\nabla^2+\vec{r}^2)\varphi_j(\vec{r})\;,\\ \label{HFterm} \Sigma^0_{\lambda,ij}&=&\lambda\sum_{kl} (w^{}_{ij,kl}-w^{}_{il,kj}) \rho_{kl}(\beta)\;, \end{eqnarray} with the finite (zero) temperature charge carrier density matrix $\rho_{ij}(\beta)=\langle\hat{a}^\dagger_i \hat{a}_j\rangle$ (in the limit $\beta\rightarrow\infty$) in the grand canonical ensemble and the two-electron integrals $w_{ij,kl}^{}$ defined as \begin{eqnarray}\label{2ei} w_{ij,kl}^{}&=&\!\int\!\!\!\int \textup{d}^2r\,\textup{d}^2\bar{r}\, \frac{\varphi^*_i(\vec{r})\,\varphi^*_k(\bar{\vec{r}})\,\varphi_j(\vec{r})\,\varphi_l(\bar{\vec{r}})}{\sqrt{(\vec{r}-\bar{\vec{r}})^2+\alpha^{2}}}\;. \end{eqnarray} For $\alpha\rightarrow0$, the integrals in $w_{ij,kl}^{}$ can be performed analytically in 2D but, in the limit of large anisotropic confinement (quasi-1D quantum dot), a truncation parameter $0<\alpha\ll1$ is needed to regularize the (bare) Coulomb potential at $|\vec{r}-\bar{\vec{r}}|=0$, keeping $w_{ij,kl}^{}$ finite, see e.g.~Ref.~\cite{ciftja06}. 
Alternatively, the parameter $\alpha$ adjusts a confining potential in the perpendicular dimension and allows (at small $r_{ij}$) for a transversal spread of the wave function \cite{jauregui93}. For the specific choice of the parameter $\alpha$, see Sec.~\ref{sec:results}. Using standard techniques, we iteratively solve the self-consistent Roothaan-Hall equations \cite{roothaan51} for the Hartree-Fock Hamiltonian $h_{\lambda,ij}$, Eq.~(\ref{HFhamm}), \begin{eqnarray}\label{rheq} \sum_{k=0}^{n_b-1}h_{\lambda,ik}\,c_{kj}-\epsilon^0_{j}\,c_{ij}=0\;, \end{eqnarray} which at finite dimension $n_b\times n_b$ ($i=0,1,\ldots,n_b-1$) yield the numerically exact eigenfunctions (HF orbitals) expanded in the form $\phi_{\lambda,i}(\vec{r})=\sum_{j=0}^{n_b-1} c_{ji}\,\varphi_j(\vec{r})$, $c_{ij}\in\mathbb{R}$, the corresponding energy spectrum (HF eigenvalues) $\epsilon_i^0$ and the chemical potential $\mu^0$. Consequently, the $N$-electron quantum dot system is fully characterized by the solution $\phi_{\lambda,i}(\vec{r})$; e.g.~its charge carrier density is given by \begin{eqnarray}\label{H0density} \rho^0(\vec{r})&=&\sum_{i=0}^{n_b-1}f(\beta,\epsilon_i^0-\mu^0)\,|\phi_{\lambda,i}(\vec{r})|^2\\ &=&\sum_{i=0}^{n_b-1}f(\beta,\epsilon_i^0-\mu^0)\,\bigg|\sum_{j=0}^{n_b-1}c_{ji}\,\varphi_j(\vec{r})\bigg|^2\;, \nonumber \end{eqnarray} where $f(\beta,\epsilon_i^0-\mu^0)$ denotes the Fermi-Dirac distribution. For the numerical implementation of the mean-field problem (\ref{rheq}), we have chosen the Cartesian (2D) harmonic oscillator states \begin{eqnarray} \label{ost} \varphi_{m,n}(\vec{r})&=&\frac{e^{-(x^2+y^2)/2}}{\sqrt{2^{m+n}\,m!\,n!\,\pi}}\,\,{\cal H}_m(x)\,{\cal H}_n(y)\;, \end{eqnarray} with single-electron quantum numbers $i=(m,n)$, $\vec{r}=(x,y)$ in units of the oscillator length $l_0^*$, the Hermite polynomials ${\cal H}_m(x)$ and the $(m+n+1)$-fold degenerate energies $\epsilon_{m,n}=m+n+1$, $m,n\in\{0,1,2,\ldots\}$. In the 1D quantum dot limit, these states reduce to the one-dimensional oscillator eigenfunctions $\varphi_{m}(x)=(2^m m! \sqrt{\pi})^{-1/2}\,e^{-x^2/2}\,{\cal H}_m(x)$. As mentioned before, the self-consistent Hartree-Fock result generates an uncorrelated Matsubara Green's function $G^0(\vec{r}_1,\vec{r}_2;\tau)$ which yields the same observables. For instance, the $N$-electron density of Eq.~(\ref{H0density}) is recovered from $\rho^0(\vec{r})=\,G^0(\vec{r},\vec{r};0^-)$---the energy contributions are discussed separately in Sec.~\ref{sec:results}. 
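The kernel of this mean-field step is the evaluation of the regularized two-electron integrals (\ref{2ei}) in the oscillator basis. The following minimal sketch (our own illustration, not the authors' production code) computes the quasi-1D analogue of $w_{ij,kl}$ by double Gauss--Hermite quadrature, exploiting that a product of two basis functions carries the Gaussian weight $e^{-x^2}$:

\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite, factorial

n_b, alpha = 6, 0.1           # small basis for demonstration only
x, gw = hermgauss(60)         # nodes/weights for int e^{-x^2} f(x) dx

# Hermite part of phi_m(x) = norm_m H_m(x) exp(-x^2/2); the Gaussians
# of the orbital pairs are absorbed in the Gauss-Hermite weight.
m = np.arange(n_b)
norm = 1.0 / np.sqrt(2.0**m * factorial(m) * np.sqrt(np.pi))
H = norm[:, None] * np.array([eval_hermite(k, x) for k in m])

kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + alpha**2)

# w_ijkl = sum_pq gw_p gw_q H_i(p)H_j(p) kernel(p,q) H_k(q)H_l(q)
P = np.einsum('ip,jp->ijp', H, H) * gw
w = np.einsum('ijp,pq,klq->ijkl', P, kernel, P)

print(w[0, 0, 0, 0])          # lowest 'Hartree-like' matrix element
\end{verbatim}

The resulting tensor enters the mean-field self-energy and, after transformation to the HF orbitals, the correlation self-energy of Sec.~\ref{subsec:2ndB}.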
Expanding $G^0$ in terms of the obtained HF basis $\phi_{\lambda,i}(\vec{r})$ according to \begin{eqnarray}\label{gfexp} G^0(\vec{r}_1,\vec{r}_2;\tau)=\sum_{ij}\phi^*_{\lambda,i}(\vec{r}_1)\,\phi_{\lambda,j}(\vec{r}_2)\,g^0_{ij}(\tau)\;, \end{eqnarray} with associated $\tau$-dependent real matrix elements $g_{ij}^0(\tau)$, leads to the identity \begin{eqnarray}\label{HFgf} g^0_{ij}(\tau)=\delta_{ij}\,f(\beta,\epsilon_i^0-\mu^0)\,e^{\tau(\epsilon_i^0-\mu^0)}\;,\; \end{eqnarray} which is diagonal only in the HF orbital basis and solves the Dyson equation in mean-field approximation \begin{eqnarray}\label{deqHF} [-\partial_\tau-\vec{h}^0-\vec{\Sigma}^0_\lambda]\,\vec{g}^0(\tau)=\delta(\tau)\;.\; \end{eqnarray} Here, the time-independent matrices $(\vec{h}^0)_{ij}=h^0_{ij}$ and $(\vec{\Sigma}^0_\lambda)_{ij}=\Sigma^0_{\lambda,ij}$ are defined in correspondence to Eqs.~(\ref{H0term}) and (\ref{HFterm}) with $\varphi_i$ being replaced by $\phi_{\lambda,i}$, the Hartree-Fock Green's function is denoted as $(\vec{g}^0(\tau))_{ij}=g^0_{ij}(\tau)$, and the charge carrier density matrix entering Eq.~(\ref{HFterm}) reads $\rho_{ij}(\beta)=g_{ij}^0(0^-)$, with the notation $0^-$ denoting the limit from below on the contour $\cal C$. Further, $(\vec{a}\,\vec{b})_{ij}=\sum_{k}a_{ik}\,b_{kj}$. Note that the two-electron integrals in Eq.~(\ref{2ei}) also have to be transformed into their HF representation, and that in the following bold-typed expressions as introduced in Eq.~(\ref{deqHF}) denote matrices with respect to the HF basis $\phi_{\lambda,i}(\vec{r})$. \subsection{\label{subsec:2ndB}Solving the self-consistent Dyson equation beyond the Hartree-Fock level} In this subsection, we focus on electron-electron correlation corrections to the self-consistent Hartree-Fock reference state \cite{dahlen05} determined by $G^0(\vec{r}_1,\vec{r}_2;\tau)$. The idea is to start from the Dyson equation (\ref{deq}) in HF orbital representation, \begin{eqnarray}\label{deqm} [-\partial_\tau-\vec{h}^0]\,\vec{g}^M\!(\tau)=\delta(\tau)+\!\int_0^\beta\!\!\!\textup{d}\bar{\tau}\,\vec{\Sigma}^M_\lambda\!(\tau-\bar{\tau})\,\vec{g}^M\!(\bar{\tau})\;,\; \end{eqnarray} with the full, time-dependent Matsubara self-energy $(\vec{\Sigma}^M_\lambda(\tau))_{ij}=\Sigma^M_{\lambda,ij}(\tau)$ and the equilibrium Green's function $(\vec{g}^M(\tau))_{ij}=g^M_{ij}(\tau)$, both obtained by applying the orbital expansion of Eq.~(\ref{gfexp}). An explicit approximate expression for ${\vec{\Sigma}}^M_\lambda$ including correlation effects is introduced below, cf.~Eqs.~(\ref{msecorr}-\ref{2ndBse}). First, we discuss the general solution scheme for Eq.~(\ref{deqm}). However, we will not consider it in this form. Instead, we integrate Eq.~(\ref{deqm}), inserting Eq.~(\ref{HFgf}) and applying the anti-periodicity property of $\vec{g}^M\!(\tau)$. 
This leads to the integral form of the Dyson equation \begin{widetext} \begin{eqnarray}\label{deqif} \vec{g}^M\!(\tau)-\int_0^\beta\!\!\!\textup{d}\bar{\bar{\tau}}\!\int_0^\beta\!\!\!\textup{d}\bar{\tau}\,\vec{g}^0(\tau-\bar{\bar{\tau}})\,\vec{\Sigma}^r_\lambda[\vec{g}^M](\bar{\bar{\tau}}-\bar{\tau})\,\vec{g}^M\!(\bar{\tau})&=&\vec{g}^0(\tau)\;,\;\\ \label{mse} \vec{\Sigma}^r_\lambda[\vec{g}^M](\tau)&=&\vec{\Sigma}^M_\lambda[\vec{g}^M](\tau)-\delta(\tau)\,\vec{\Sigma}^s_\lambda\;, \end{eqnarray} \end{widetext} where the expression $\vec{\Sigma}^r_\lambda(\tau)$, according to definition (\ref{mse}), comprises the total Matsubara self-energy reduced by the initial (steady-state) mean-field contribution $\vec{\Sigma}^s_\lambda=\vec{\Sigma}^0_\lambda[\vec{g}^0(0^-)]$, which is not a functional of the full (correlated) Green's function $\vec{g}^M(\tau)$. In addition, the single-particle energy $\vec{h}^0$ has already been absorbed in the HF reference state $\vec{g}^0(\tau)$ and thus does not appear explicitly in Eq.~(\ref{deqif}). For a more detailed derivation of Eq.~(\ref{deqif}) see Appendix. We highlight that the integral form of the Dyson equation can be parameterized by the second index $j\in\{0,1,\ldots,n_b-1\}$ of the Matsubara Green's function $g^M_{ij}(\tau)$, since the matrix multiplications on the left hand side of Eq.~(\ref{deqif}) do not affect this index. Hence, at a fixed Matsubara self-energy and discretized $\tau$-interval $[-\beta,\beta]$, Eq.~(\ref{deqif}) allows for reinterpretation as a set of $n_b$ independent (but typically large-scale) linear systems of the form \begin{eqnarray}\label{leqs} {\cal A}\,{\cal X}^{(j)}&=&{\cal B}^{(j)}\;, \end{eqnarray} where the unknown quantity and the inhomogeneity are $({\cal X}^{(j)})_{ip}=g_{ij}^M(\tau_p)$ and $({\cal B}^{(j)})_{ip}=g_{ij}^0(\tau_p)$, respectively. The coefficient matrix $({\cal A})_{ip,jq}={\alpha}_{ij}(\tau_p,\tau_q)$ is defined by the expression (convolution integral) \begin{eqnarray}\label{convint} {\alpha}_{ij}(\tau,\bar{\tau})&=&\delta_{ij}\delta(\tau-\bar{\tau})\\ &&-\sum_{k=0}^{n_b-1}\!\int_{0}^\beta\!\!\!\textup{d}\bar{\bar{\tau}}\,{g}^0_{ik}(\tau-\bar{\bar{\tau}})\,{\Sigma}^r_{\lambda,kj}(\bar{\bar{\tau}}-\bar{\tau})\;,\nonumber \end{eqnarray} in which the remaining integral over $\bar{\tau}$ in the Dyson equation (\ref{deqm}) is absorbed in the matrix multiplication ${\cal A}\,{\cal X}^{(j)}$. In more detail, we need to specify the time-discretization of the Matsubara Green's function undertaken in Eq.~(\ref{leqs}): First, due to the anti-periodicity property of $G^M$, we can restrict ourselves to solving Eq.~(\ref{deqm}) on the negative $\tau$-interval $[-\beta,0]$. This specific choice originates from the fact that in the limit $\tau\rightarrow0^-$ the density matrix is obtained from $\vec{g}^M(\tau)$. Second, the numerical treatment must take into account the time-dependence of $G^M(\tau)$. From Eq.~(\ref{HFgf}) it follows that the Green's function is essentially peaked around $\tau=0$ and $\pm\beta$. Thus, not an equidistant grid but a uniform power mesh\cite{upmesh} (UPM) is adequate to represent the Green's function---this method is also used in Refs.~\cite{ku02,dahlen05}. With a total number of $n_m$ mesh points, the dimensionality of the linear system ${\cal A}\,{\cal X}^{(j)}={\cal B}^{(j)}$ becomes $n_b n_m\times n_b n_m$. As stated above, Eq.~(\ref{leqs}) can only be processed for a fixed self-energy $\vec{\Sigma}^r_\lambda[\vec{g}^M](\tau)$. 
This means that, in order to provide a self-consistent solution of the Dyson equation, we have to iterate the procedure by computing, at each step, a new self-energy from the current $\vec{g}^M(\tau)$. This loop is then repeated until convergence. So far, we have not specified a certain self-energy approximation. In Eq.~(\ref{mse}), one can generally split $\vec{\Sigma}^M_\lambda(\tau)$ into a mean-field and a correlation part, i.e. \begin{eqnarray}\label{msecorr} \vec{\Sigma}_{\lambda}^M[g^M](\tau)=\delta(\tau)\,\vec{\Sigma}_{\lambda}^0[g^M(0^-)]+\vec{\Sigma}^{\mathrm{corr}}_{\lambda}[g^M](\tau)\;,\;\;\; \end{eqnarray} where the Hartree-Fock contribution, \begin{eqnarray} \label{HFtermdeqm} \Sigma^0_{\lambda,ij}&=&\lambda\sum_{kl} (w^{}_{ij,kl}-w^{}_{il,kj}) g_{kl}^M(0^-)\;, \end{eqnarray} is exact (compare with Eq.~(\ref{HFterm})) and the correlation part $\vec{\Sigma}^{\mathrm{corr}}(\tau)$, at the second Born level, is given by \begin{eqnarray}\label{2ndBse} \Sigma^{\mathrm{corr}}_{\lambda,ij}(\tau)&=&\!\!-\sum_{klmnrs}w_{ik,ms}^{}(w^{}_{lj,rn}-w^{}_{nj,rl})\\ &&\hspace{3pc}\times\,g^M_{kl}(\tau)\,g^M_{mn}(\tau)\,g^M_{rs}(-\tau)\;.\;\;\nonumber \end{eqnarray} Here, the first term denotes the direct contribution whereas the second one includes the exchange---for details see e.g. Refs.~\cite{kadanoff62,dahlen05}. Note that the two-electron integrals $w_{ij,kl}^{}$ are given in their Hartree-Fock basis representation. Since the interaction potential $W(1,2)$ enters Eq.~(\ref{2ndBse}) in second order, the present description of charge carrier correlations goes beyond the first Born approximation of conventional scattering theory. When the self-consistency cycle reaches convergence, the matrix $\vec{g}^M(\tau)$ becomes independent of the initial state $\vec{g}^0(\tau)$ and, in configuration space, the correlated Matsubara Green's function of the QD system (\ref{haml}) follows as \begin{eqnarray} \label{mgfexpression} G^M(\vec{r}_1,\vec{r}_2;\tau)=\sum_{ij}\phi^*_{\lambda,i}(\vec{r}_1)\,\phi_{\lambda,j}(\vec{r}_2)\,g^M_{ij}(\tau)\;, \end{eqnarray} where the HF orbitals $\phi_{\lambda,j}(\vec{r})$ are those obtained in Sec.~\ref{subsec:HF}. Consequently, correlations are included via the $\tau$-dependent matrix elements $g_{ij}^M(\tau)$, $\tau\in[-\beta,0]$, which give access to the electron density via $\rho(\vec{r})=G^M(\vec{r},\vec{r};0^-)$. We note that $G^0$ as obtained from the self-consistent HF calculation is only one possible reference (initial) state which can be used in the Dyson equation (\ref{deqif}). Other types of uncorrelated Green's functions, e.g.~obtained from density-functional theory (DFT) or from orbitals in local density approximation (LDA), are also applicable if they satisfy the correct boundary conditions. For a recent discussion of the relevance for atoms and molecules see Ref.~\cite{dahlen05}. In summary, the presented procedure is valid for arbitrary temperatures $\beta^{-1}$ and arbitrary coupling parameters $\lambda$. The scope of numerical complexity is determined by the parameters $n_b$ (matrix dimension associated with the HF basis size) and $n_m$ (time-discretization on the UPM), which must be chosen with respect to convergence of the QD observables. In solving Eq.~(\ref{deqif}), it has been found that particularly the particle number $N=\sum_{i=0}^{n_b-1}{g}_{ii}^M(0^-)$ and the correlation energy $E_{\mathrm{corr}}$ depend sensitively on $n_m$, cf. Eq.~(\ref{egycorr}) in Sec.~\ref{sec:results}. 
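To make the structure of one iteration step explicit, the following minimal sketch (our own illustration, not the authors' code) evaluates the second Born self-energy (\ref{2ndBse}) for a single value of $\tau$ by a direct tensor contraction; the two-electron integrals $w[i,j,k,l]$ are assumed to be given in the HF basis and to already contain the coupling parameter $\lambda$:

\begin{verbatim}
import numpy as np

def sigma_corr(w, g_p, g_m):
    """Second Born self-energy at one tau.

    w   : (n_b, n_b, n_b, n_b) HF-basis two-electron integrals
    g_p : g^M(tau),  (n_b, n_b)
    g_m : g^M(-tau), (n_b, n_b)
    """
    v = w - w.transpose(3, 1, 2, 0)        # w_{lj,rn} - w_{nj,rl}
    return -np.einsum('ikms,ljrn,kl,mn,rs->ij',
                      w, v, g_p, g_p, g_m, optimize=True)

# Schematic self-consistency loop:
#   g <- g0 on the tau-mesh (HF reference)
#   repeat:
#       Sigma^r(tau) <- delta-part + sigma_corr(...) minus steady-state HF
#       assemble A from the convolution integral and solve A X = B for g
#   until, e.g., N = Tr g(0^-) and E_corr are stationary
\end{verbatim}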
\begin{table} \begin{ruledtabular} \begin{tabular}{cccccccc} & & & & & & & \\ $N\!=\!3$& (1D) & & & & & & \\ \hline $\lambda$ & $E_{\mathrm{HF}}^0$ & $\mu^0$ & $E_{\mathrm{2ndB}}$ & $E_0$ & $E_{\mathrm{HF}}$ & $E_{\mathrm{corr}}$ & $E_{\mathrm{QMC}}$\\ \hline $\beta\!=\!1$& & & & & & & \\ \hline $1$& $8.173$ & $4.621$ & $8.201$ & $6.115$ & $2.321$ & $-0.235$ & $7.661$ \\ \underline{$2$}& \underline{$10.066$} & $6.124$ & \underline{$10.215$} & $6.556$ & $4.405$ & $-0.747$ & $9.510$ \\ \hline $\beta\!=\!2$& & & & & & & \\ \hline $1$& $7.043$ & $4.670$ & $7.065$ & $5.027$ & $2.153$ & $-0.115$ & $6.761$ \\ \underline{$2$}& \underline{$8.790$} & $6.169$ & \underline{$8.941$} & $5.311$ & $3.936$ & $-0.306$ & $8.603$ \\ $4$& $11.732$ & $8.852$ & $11.918$ & $5.920$ & $6.303$ & $-0.304$ & $11.721$ \\ $6$& $14.387$ & $11.231$ & $14.374$ & $6.712$ & $7.822$ & $-0.160$ & $14.403$ \\ $8$& $16.790$ & $13.362$ & $16.747$ & $7.514$ & $9.354$ & $-0.120$ & $16.809$ \\ $10$& $19.005$ & $15.294$ & $18.962$ & $8.257$ & $10.800$ & $-0.095$ & $19.034$ \\ \hline $\beta\!=\!10$& (GS) & & & & & & $E_{\mathrm{QMC}}^{\beta=5}$\\ \hline $1$& $6.615$ & $4.673$ & $6.591$ & $4.645$ & $1.987$ & $-0.042$ & $6.529$ \\ \underline{$2$}& \underline{$8.480$} & $6.173$ & \underline{$8.421$} & $4.966$ & $3.560$ & $-0.105$ & $8.371$ \\ $4$& $11.667$ & $8.853$ & $11.578$ & $5.817$ & $5.917$ & $-0.156$ & $11.484$ \\ $6$& $14.374$ & $11.243$ & $14.292$ & $6.710$ & $7.720$ & $-0.137$ & $14.161$ \\ $8$& $16.787$ & $13.376$ & $16.721$ & $7.534$ & $9.296$ & $-0.110$ & $16.570$ \\ $10$& $19.004$ & $15.298$ & $18.950$ & $8.285$ & $10.752$ & $-0.087$ & $18.791$ \\ \hline & & & & & & & \\ $N\!=\!6$& (1D) & & & & & & \\ \hline $\lambda$ & $E_{\mathrm{HF}}^0$ & $\mu^0$ & $E_{\mathrm{2ndB}}$ & $E_0$ & $E_{\mathrm{HF}}$ & $E_{\mathrm{corr}}$ & $E_{\mathrm{QMC}}$\\ \hline $\beta\!=\!10$& (GS) & & & & & & \\ \hline $1$& $27.600$ & $9.263$ & $27.519$ & $18.613$ & $9.028$ & $-0.123$ & --- \\ \underline{$2$}& \underline{$36.145$} & $12.195$ & \underline{$35.919$} & $19.976$ & $16.289$ & $-0.346$ & --- \\ $4$& $50.960$ & $16.110$ & $50.440$ & $23.666$ & $27.384$ & $-0.609$ & --- \\ \hline & & & & & & & \\ \end{tabular} \end{ruledtabular} \caption{Different energy contributions as a function of the coupling parameter $\lambda$ for the 'ground state' (GS, $\beta=10$) and equilibrium states ($\beta=2$ and $1$) of $N=3$ and $6$ spin-polarized electrons in a quasi-1D quantum dot. $\mu^0$ gives the chemical potential as obtained from the Hartree-Fock calculation with total energy $E_{\mathrm{HF}}^0$, see Sec.~\ref{subsec:HF}. $E_{\mathrm{2ndB}}$, $E_0, E_{\mathrm{HF}}$, and $E_{\mathrm{corr}}$ are computed from the correlated Green's function $\vec{g}^M(\tau)$. All energies are in units of $E_0^*$ and the underlined values pertain to the results shown in Figs.~\ref{Fig:2}-\ref{Fig:4} and Fig.~\ref{Fig:5}. For comparison, $E_{\mathrm{QMC}}$ denotes the total energy obtained from quantum Monte Carlo (QMC) simulations, see also Fig.~\ref{Fig:QMC}.} \label{table:1DN3} \end{table} \begin{figure}[t] \includegraphics[width=0.475\textwidth]{eval_lambda_beta.pdf} \caption{(color online) The six energetically lowest HF orbital energies $\epsilon_i^0$ as functions of $\lambda$ for the three-electron QD at different temperatures $\beta^{-1}$. The gray area indicates the ground state HOMO-LUMO gap between the occupied and unoccupied states. The chemical potential $\mu^0(\lambda)$ (double-dotted-dashed curves) is situated within this gray area. 
Inset: $\lambda$-dependence of the correlation energy $E_\mathrm{corr}$, see Table ~\ref{table:1DN3}.}\label{Fig:eval} \end{figure} \begin{figure}[t] \includegraphics[width=0.475\textwidth]{energies.pdf} \caption{(color online) Total energies as functions of $\lambda$ and $\beta$ as given in Table \ref{table:1DN3} for the three-electron QD. Comparison of the [grand canonical] Green's function result (at HF and second Born level) with quantum Monte Carlo [canonical]. For $\lambda\equiv0$, the total energy can be analytically obtained from the (grand) canonical partition function of the noninteracting system according to standard formulas\cite{thesis07,tran01}.}\label{Fig:QMC} \end{figure} \section{\label{sec:results}Numerical Results} In this section, we report on the numerical results for the few-electron quantum dots with $N=2$, $3$ and $6$ charge carriers. Here, we mainly focus on the energies and the (accumulated) single-carrier density and compare the influence of HF and second Born type self-energies, i.e.~Eq.~(\ref{HFtermdeqm}) versus Eqs.~(\ref{HFtermdeqm}) plus (\ref{2ndBse}). The energies that contribute to the total energy of the QD system are, in addition to the single-particle (kinetic [$\vec{t}^0$] and confinement [$\vec{v}^0$]) energy $E_0=\textup{Tr}\,\vec{h}^0\,\vec{g}^M(0^-)=\textup{Tr}\,(\vec{t}^0+\vec{v}^0)\,\vec{g}^M(0^-)$, the mean-field Hartree-Fock and the correlation energy \cite{dahlen06} defined as \begin{eqnarray}\label{egyHFHF} E_{\mathrm{HF}}=\frac{1}{2}\textup{Tr}\,\vec{\Sigma}_\lambda^0\,\vec{g}^M(0^-)\;, \end{eqnarray} \begin{eqnarray}\label{egycorr} E_{\mathrm{corr}}=\frac{1}{2}\int_{0}^\beta\!\!\textup{d}\tau\,\textup{Tr}\,\vec{\Sigma}_\lambda^{\mathrm{corr}}(-\tau)\,\vec{g}^M(\tau)\;. \end{eqnarray} The total energy is then $E_{\mathrm{2ndB}}=E_0+E_{\mathrm{HF}}+E_{\mathrm{corr}}$. For comparison, the total energy with respect to the HF Green's function $G^0(\vec{r}_1,\vec{r}_2;\tau)$ will be denoted by $E_{\mathrm{HF}}^0$. For the evaluation of the two-electron integrals needed in Eqs.~(\ref{egyHFHF}) and (\ref{egycorr}), we have chosen the truncation parameter $\alpha=0.1$ in 1D and $\alpha\equiv0$ in 2D [no divergence of the integrals $w_{ij,kl}$, see Eq.~(\ref{2ei}) in Sec.~\ref{subsec:HF}]. Moreover, the HF orbital-resolved energy distribution functions (level occupation probabilities) $n_i(N;\lambda,\beta)$ are analyzed with respect to correlation-induced scattering processes of particles into different energy levels. In general, \begin{eqnarray} n_i=n_i(N;\lambda,\beta)=g^M_{ii}(0^-)\;, \end{eqnarray} which, in the case of vanishing correlations (\mbox{$G^M\rightarrow G^0$}), is just the Fermi-Dirac distribution, i.e.~$n_i=g^0_{ii}(0^-)=f(\beta,\epsilon_i^0-\mu^0)=f_i^0$, cf.~Eq.~(\ref{HFgf}). \subsection{\label{subsec:results1D}Limit of large anisotropy (1D)} When in Hamiltonian (\ref{ham}) the isotropic confinement of frequency $\omega_0$ is replaced by an anisotropic confinement with $\omega_{y,0}\gg\omega_{x,0}$, the QD charge carriers move effectively in one dimension. With the finite regularization parameter $\alpha=0.1$ we thereby allow for a small transversal extension (perpendicular to the $x$-axis). That is why we will call the system in this regime quasi-one-dimensional (quasi-1D). In the following, let us first consider the 1D version of quantum dot lithium\cite{mikhailov02} ($N=3$) and, hereafter, a QD with $N=6$ confined electrons. 
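Before discussing the data, we note that once a converged $\vec{g}^M(\tau)$ is at hand, the energy contributions of Eqs.~(\ref{egyHFHF}) and (\ref{egycorr}) follow from simple traces. A minimal sketch of ours (purely illustrative; all array names are hypothetical), valid also on a non-equidistant power mesh, reads:

\begin{verbatim}
import numpy as np

def energy_hf(sigma0, g0m):
    """E_HF = (1/2) Tr Sigma^0 g^M(0^-)."""
    return 0.5 * np.trace(sigma0 @ g0m)

def energy_corr(tau, sig_list, g_list):
    """E_corr = (1/2) int_0^beta dtau Tr Sigma^corr(-tau) g^M(tau).

    tau      : increasing grid on [0, beta] (power mesh allowed)
    sig_list : Sigma^corr(-tau_p) matrices
    g_list   : g^M(tau_p) matrices
    """
    integrand = [np.trace(s @ g) for s, g in zip(sig_list, g_list)]
    return 0.5 * np.trapz(integrand, tau)  # trapezoid rule, non-uniform grid

# total energy: E_2ndB = E_0 + E_HF + E_corr (cf. text)
\end{verbatim}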
For the respective HF and second Born calculations we have throughout used $n_b=30$ oscillator functions, see Sec.~\ref{subsec:HF}, and the number of mesh points $n_m(u,p)$ for the $\tau$-interval $[-\beta,0]$, discretized as in Sec.~\ref{subsec:2ndB}, was varied between $60$ and $140$ in order to achieve convergence and preservation of the particle number in the Dyson equation (\ref{deqif}). Table~\ref{table:1DN3} gives an overview of the relevant energies obtained at different coupling parameters and temperatures. Also, we included reference data from quantum Monte Carlo (QMC) simulations\cite{QMC} for the three-electron QD. \begin{figure}[t] \includegraphics[width=0.485\textwidth]{densityb10l2.pdf} \caption{(color online) Thermodynamic properties of $N=3$ electrons in the quasi-1D QD at $\beta=10$ and $\lambda=2.0$. \textbf{a}) Ideal and HF energy distribution functions $f_i^0$, \textbf{b}) change $n_i-f^0_i$ of the HF distribution due to correlations (in percent), and \textbf{c}) charge density profiles $\rho(x)$. The ideal (dotted line) and the HF result (solid line) are displayed together with the second Born approximation (dashed line).}\label{Fig:2} \end{figure} \begin{figure}[t] \includegraphics[width=0.485\textwidth]{densityb2l2.pdf} \caption{(color online) Same as Fig.~\ref{Fig:2}, but for temperature $\beta=2$.}\label{Fig:3} \end{figure} \begin{figure}[t] \includegraphics[width=0.485\textwidth]{densityb1l2.pdf} \caption{(color online) Same as Fig.~\ref{Fig:2}, but for temperature $\beta=1$.}\label{Fig:4} \end{figure} \begin{figure}[t] \includegraphics[width=0.485\textwidth]{densityN6b10l2.pdf} \caption{(color online) Thermodynamic properties of $N=6$ charge carriers confined in the quasi-1D QD at $\beta=10$ and $\lambda=2.0$. \textbf{a}) Ideal and HF energy distribution functions $f_i^0$, \textbf{b}) change $n_i-f^0_i$ of the HF distribution due to correlations (in percent), and \textbf{c}) charge density profiles $\rho(x)$. Labeling as in Fig.~\ref{Fig:2}-\ref{Fig:4}.}\label{Fig:5} \end{figure} The low-energetic discrete orbital energies $\epsilon_i^0$ contributing to the HF reference state $\vec{g}^0(\tau)$ are shown in Fig.~\ref{Fig:eval} as functions of $\lambda$ and $\beta$. For the quasi ground state (GS), $\beta=10$, the occupied states $i<3$ are energetically well separated from the unoccupied states $i\geq3$ by the HOMO-LUMO gap---the energy gap between the \underline{h}ighest \underline{o}ccupied (\underline{m}olecular or Hartree-Fock) \underline{o}rbital and the \underline{l}owest \underline{u}noccupied (\underline{m}olecular) \underline{o}rbital---see the gray area. For temperatures $\beta<10$, this gap is considerably reduced for moderate coupling around $\lambda\approx3$ (see the dotted and dashed lines), while, for $\lambda\rightarrow\infty$, the curves converge as the strength of the carrier-carrier interactions exceeds the influence of thermal fluctuations. Moreover, the HF chemical potential $\mu^0(\lambda)$, situated within the HOMO-LUMO gap, is only slightly affected by $\beta$, compare the values in Table~\ref{table:1DN3}. If we now include correlation effects, the HF spectra $\{\epsilon_i^0\}$ become renormalized---for the discussion see Sec.~\ref{subsec:spectralfct}. On the QD total energy, the influence of correlations is as follows: The correlation contribution $E_\mathrm{corr}$ is negative and grows in magnitude with temperature, but it is non-monotonic with regard to the coupling parameter $\lambda$, see Table~\ref{table:1DN3} and the inset in Fig.~\ref{Fig:eval}. 
More precisely, the correlation effects are dominant at moderate coupling, between $\lambda=2$ and $6$, leading to lower total energies $E_{\mathrm{2ndB}}$ (compared to $E^0_\mathrm{HF}$) at temperature $\beta=10$ and to an energy increase for $\beta=2$ and $1$. In general, both the HF and second Born total energies approach the corresponding [exact] QMC data well, independently of $\lambda$ and $\beta$, see Table~\ref{table:1DN3}. In order to properly compare the different approximations, the total energy is shown in Fig.~\ref{Fig:QMC} relative to the mean-field chemical potential $\mu^0$. In the whole considered $\lambda$-regime, the quasi ground state energy $E_{\mathrm{HF}}^0$ (dashed curve for $\beta=10$) is downshifted by second Born corrections (solid curve) towards the energies obtained by QMC (dotted curve). At $\lambda=2$, we identify the best agreement of the correlated result with $E_\mathrm{QMC}$. In particular, for $\lambda\rightarrow0$, the three different total energies converge to the value $E-\mu^0=\frac{3}{2}$ of the ideal QD. At higher temperatures, $\beta=2$ and $1$, the correlated Green's function $\vec{g}^M(\tau)$ leads, at moderate coupling around $\lambda\approx3$, to total energies that are significantly larger than the corresponding HF energies. This is consistent with the larger absolute values of the correlation energy $E_\mathrm{corr}$ given in Table~\ref{table:1DN3}. For stronger coupling $\lambda\gtrsim6$ (at $\beta=2$), $E_{\mathrm{2ndB}}$ then crosses the HF value in order to converge to the respective GS curve. The comparison with QMC is difficult at finite temperatures: We point out that (already at $\lambda=0$) there is a general shift due to the usage of different ensemble averages. Whereas QMC uses a canonical approach, the Green's function results emerge from a grand canonical picture. Nevertheless, close to the GS, Fig.~\ref{Fig:QMC} reveals a quite similar behavior as a function of $\lambda$. From Figs.~\ref{Fig:2}-\ref{Fig:4}~\textbf{a}), one gathers how the QD mean-field $\vec{\Sigma}^0_\lambda$ renormalizes the ideal equidistant energy spectrum $\epsilon_i=i+\frac{1}{2}$ of the noninteracting system ($\lambda\equiv0$), see the shifted HF energies $\epsilon_i^0$ (open circles), whose occupations exactly follow a Fermi-Dirac distribution according to Eq.~(\ref{HFgf}). Correlations due to $\vec{\Sigma}^\mathrm{corr}_\lambda$ now modify these statistics, as can be seen from the quantity $n_i-f_i^0$ in Figs.~\ref{Fig:2}-\ref{Fig:4}~\textbf{b}), which measures the HF orbital-resolved deviation from the Fermi-Dirac distribution (in percent) and shows that charge carriers around $\mu^0$ are being scattered into higher HF orbitals (black circles). At higher temperatures, see e.g.~Fig.~\ref{Fig:3}~\textbf{b}), the change in the occupation probability $n_i$ exceeds $2$~\% for $\lambda=2.0$. Moreover, Pauli blocking prevents energetically low-lying electrons from taking a substantial part in the scattering process---consequently, $n_i-f_i^0$ is small for $\epsilon_i^0\ll\mu^0$. 
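The HF occupations $f_i^0=f(\beta,\epsilon_i^0-\mu^0)$ used above presuppose a chemical potential fixed at the given particle number, $\sum_i f(\beta,\epsilon_i^0-\mu^0)=N$. The paper does not spell out this step; one standard choice (a minimal sketch of ours, using root bracketing on the monotonic particle-number function) is:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def fermi(beta, e):
    return 1.0 / (np.exp(beta * e) + 1.0)

def chemical_potential(eps, beta, N):
    """Solve sum_i f(beta, eps_i - mu) = N for mu by bracketing."""
    g = lambda mu: fermi(beta, eps - mu).sum() - N
    return brentq(g, eps.min() - 10.0 / beta, eps.max() + 10.0 / beta)

eps = np.arange(30) + 0.5            # ideal 1D spectrum in units of E0*
mu = chemical_potential(eps, beta=10.0, N=3)
print(mu)                            # lies midgap between eps_2 and eps_3
\end{verbatim}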
\begin{table*} \begin{ruledtabular} \begin{tabular}{cccccccccccc} $N=2$ & (2D) &&&&&&&&&\\ \hline $\lambda$ & $E_{\mathrm{exact}}$ & $E_{\mathrm{HF}}^0$ & $\mu^0$ & $E_{\mathrm{2ndB}}$ & $E_0$ & $E_{\mathrm{HF}}$ & $E_{\mathrm{corr}}$ & $\Delta_{\mathrm{HF}}^0$ [\%] & $\Delta_{\mathrm{2ndB}}$ [\%] & $\xi$ [\%] & $E_{\mathrm{QMC}}^{\beta=5}$\\ \hline $1$ & --- & $3.604$ & $2.885$ & $3.597$ & $3.023$ & $0.584$ & $-0.010$ & --- & --- & --- & $3.591$ \\ $2$ & $4.142$ & $4.168$ & $3.393$ & $4.147$ & $3.082$ & $1.092$ & $-0.026$ & $0.638$ & $0.121$ & $80.8$ & $4.148$ \\ $4$ & $5.119$ & $5.189$ & $4.304$ & $5.117$ & $3.285$ & $1.901$ & $-0.069$ & $1.367$ & $0.039$ & $102.9$ & $5.123$ \\ \end{tabular} \end{ruledtabular} \caption{Different energy contributions as a function of the coupling parameter $\lambda$ for the ground state ($\beta=50$) of $N=2$ spin-polarized electrons in an isotropic 2D quantum dot. The exact energies for $\lambda=2$ and $4$ are quoted from Refs.~\cite{reusch01,merkt91} and were obtained by an exact diagonalization method. $E_\mathrm{QMC}$ gives quantum Monte Carlo reference data computed at the temperature $\beta=5$. $\Delta_x=|E_x-E_{\mathrm{exact}}|/E_{\mathrm{exact}}$ gives the relative error in \%. $\xi=(E_{\mathrm{HF}}^0-E_{\mathrm{2ndB}})/(E_{\mathrm{HF}}^0-E_{\mathrm{exact}})$ measures the correlation-induced improvement of the total energy. All values are given in units of $E^*_0=\hbar\omega_0$. In particular, all three decimal places of the HF energies $E_{\mathrm{HF}}^0$ agree with Ref.~\cite{reusch01}.} \label{table:2DN2} \end{table*} Figs.~\ref{Fig:2}-\ref{Fig:4}~\textbf{c}) visualize the inhomogeneous one-particle density $\rho(x)$ in the three-electron quantum dot ($\lambda=2.0$) in HF and second Born approximation. For comparison, we also included the density of the respective ideal system (blue dotted lines). Being symmetric around $x=0$, the density at low temperatures (Figs.~\ref{Fig:2} and \ref{Fig:3}~\textbf{c})) is three-fold modulated, and due to the electron-electron interactions the modulation in $\rho(x)$ is more intense compared to the ideal QD, where the oscillations originate from the Pauli principle only. Notably at $\beta=2$, correlations substantially weaken the density modulation and hence are important. At temperature $\beta=1$, the correlation effects are still present (see the quantity $n_i-f_i^0$) and lead to a smoother, almost monotonic decay of the density with $x\rightarrow\infty$. For the quasi 'ground state' ($\beta=10$) of the QD with six electrons, see Table~\ref{table:1DN3} and Fig.~\ref{Fig:5}, respectively, we observe properties similar to those of quantum dot lithium. Here, $\rho(x)$ becomes six-fold modulated and the quantum dot state can be interpreted as a \textit{Wigner chain} of six aligned charge carriers held together by the parabolic confinement. However, we note that no orbitals at $\lambda=2.0$ are degenerate, and that there is strong overlap of the single-carrier (HF) wave functions $\phi_{\lambda,i}(x)$. In contrast to the $N=3$ quantum dot, the influence of correlations on the equilibrium state $G^M$ (see again $n_i-f_i^0$) is stronger, leading to a considerable lowering of the total energy. This is consistent with an increased number of carriers and hence increased electron-electron collision probabilities, compare Figs.~\ref{Fig:2} and \ref{Fig:5}~\textbf{b}). 
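As a quick consistency check on the improvement measure $\xi$ quoted in Table~\ref{table:2DN2} and discussed in the following subsection, the tabulated energies for $\lambda=2$ give directly \[ \xi=\frac{E_{\mathrm{HF}}^0-E_{\mathrm{2ndB}}}{E_{\mathrm{HF}}^0-E_{\mathrm{exact}}} =\frac{4.168-4.147}{4.168-4.142}\approx0.808\,, \] in agreement with the $80.8\,\%$ listed in the table.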
\subsection{\label{subsec:results2D} Isotropic quantum dot (2D)} With no extra restriction on the carrier motion, Hamiltonian (\ref{ham}) describes a 2D QD with \textit{isotropic} parabolic confinement. Here, we report on the obtained Matsubara Green's function results for the special case of $N=2$ electrons, i.e.~for spin-polarized quantum dot helium. Thereby, we restrict ourselves to very low temperatures in order to compare with ground state data available in the literature\cite{reusch01}. For the HF and second Born calculations, the results of which are shown in Table~\ref{table:2DN2}, the inverse temperature was set to $\beta=50$, and we used up to $n_b=40$ of the energetically lowest Cartesian oscillator functions, see Eq.~(\ref{ost}) in Sec.~\ref{subsec:HF}. Further, the uniform power mesh [$n_m(u,p)$] was chosen as in Sec.~\ref{subsec:results1D}, including more than 100 grid points. First, the unrestricted HF energies $E_\mathrm{HF}^0$ in Table~\ref{table:2DN2} agree---to three decimal places---with the total energies computed in an analogous manner by B.~Reusch et al., Ref.~\cite{reusch01}. This is a clear indication that the HF basis is large enough. Second, applying the second Born approximation, we are able to substantially improve these ground state results. The values of $E_\mathrm{2ndB}$ thereby come quite close to the exact energies, which are obtained by numerical diagonalization\cite{merkt91}, and the data are also in good agreement with the QMC results. For instance, for coupling parameter $\lambda=2.0$, the inclusion of correlations reduces the relative error $\Delta_x$ by a factor of five---for the definition of $\Delta_\mathrm{HF}$ and $\Delta_\mathrm{2ndB}$ see the caption of Table~\ref{table:2DN2}. Hence, with respect to the HF solution, this means an improvement of the ground state total energy by about $\xi=(E_{\mathrm{HF}}^0-E_{\mathrm{2ndB}})/(E_{\mathrm{HF}}^0-E_{\mathrm{exact}})\approx80$~\%. For the strongly correlated case $\lambda=4.0$, the calculation slightly over-estimates the influence of correlations and leads to a total energy lower than the exact value. This, most probably, can be improved by increasing the number of mesh points for the $\tau$-interval, which is crucial for the convergence of the correlation energy, see the integral in Eq.~(\ref{egycorr}). Nevertheless, the exact ground state energy is approximated rather well. As a final remark, we note that our procedure does not restore the rotational symmetry of the Hamiltonian (\ref{ham}) in the two-dimensional solution. This is due to the fact that we start from a symmetry-broken \cite{yannou07} (HF) initial state $G^0$ when solving the Dyson equation to self-consistency. For the problem of restoring the symmetry, see e.g.~Ref.~\cite{giovannini07}. However, the presented results directly apply to QDs where impurities naturally break the rotational symmetry and hence lead to symmetry-broken electron states. \begin{figure}[b] \includegraphics[width=0.525\textwidth]{spectralfct_index.pdf} \caption{(color online) Energetically lowest intraband spectral functions $a_k(\tau)$ (circles) for the one-dimensional quantum dot with $N=3$ charge carriers at $\beta=10$ and $\lambda=2.0$. The solid lines are the fitting results of the IHC model, Eq.~(\ref{spectralfct_IHC}), with exponential damping constants $\gamma_k=\eta_k\nu_k$ (solid exponential curves). 
The undamped oscillations (thin dashed curves in the background) correspond to the Hartree-Fock approximation, where no collisional broadening and thus no damping of $a_k(\tau)$ occurs.}\label{Fig.intrabandsf} \end{figure} \begin{figure*}[t] \includegraphics[width=0.975\textwidth]{spectralfct_sum.pdf} \caption{(color online) Spectral function $a(\omega)$ (dashed curves filled gray) accumulated from the orbital-resolved functions $a_k(\omega)$ (thin dotted curves) for the quasi one-dimensional QD with $N=3$ electrons at different temperatures $\beta^{-1}$ as indicated---\textbf{a}) to \textbf{c}) $\lambda=1.0$, \textbf{d}) and \textbf{e}) $\lambda=2.0$. Note that the $y$-axis in \textbf{b}), \textbf{c}) and \textbf{e}) is stretched by a factor of $8$. The vertical solid lines denote the Hartree-Fock energies $\epsilon_k^0$, the dotted lines correspond to the maxima of the inverse hyperbolic cosines $a_k(\omega)$, Eq.~(\ref{spectralfct_IHC}). The numbers at the peak profiles in \textbf{a}) and \textbf{d}) denote the shift of the maxima in $a_{k=2,3}(\omega)$ with respect to the HF energies. The triangles on the abscissas give the position of the chemical potential $\mu^0$. Moreover, the dotted-dashed (double-dotted-dashed) curves show the (inverse) spectral weight---for more detailed information see the text.}\label{Fig.sf} \end{figure*} \subsection{\label{subsec:spectralfct} Spectral function} In HF approximation, the single-particle energy spectrum of the QD consists of discrete levels, see e.g.~$\{\epsilon_i^0\}$ for the three-carrier system considered in Fig.~\ref{Fig:eval}. When correlations are included, the spectra generally turn into continuous functions of energy due to electron-electron scattering and provide additional information such as finite line widths or temperature broadening. However, from the Matsubara Green's function, Eq.~(\ref{mgfexpression}), it is intricate to extract the correlated single-particle energy spectrum, as its computation usually involves, besides a Fourier transformation, Pad\'{e} approximations\cite{ku02}. The direct time-propagation of the equilibrium state $G^M(\vec{r}_1,\vec{r}_2;\tau)$, solving the Keldysh/Kadanoff-Baym equations (\ref{kkbe}) for the two-time NEGF $G(1,2)=\theta(t_1-t_2)G^>(1,2)-\theta(t_2-t_1)G^<(1,2)$, provides a more systematic approach. Here, the time dependence of $G(1,2)$, which now extends also along the real part of the contour $\cal C$, provides access to the electron energies $\omega$ via the spectral function \cite{kadanoff62} $a(\omega)=\int_{-\infty}^{+\infty} \textup{d}\tau e^{i\omega\tau} a(\tau)$, where now $\tau$ is the difference of the two real time arguments in $G$. The orbital-resolved carrier spectral function $\vec{a}(\tau)$ is given by \begin{eqnarray} \vec{a}(\tau)=i \{\vec{g}^>(T-\tau,T+\tau)-\vec{g}^<(T-\tau,T+\tau)\}\;, \end{eqnarray} where $T\geq0$ is a specific point on the diagonal of the two-time plane ${\cal P}=[0,\infty]\times[0,\infty]$, $\tau=t_1-t_2\in[-\sqrt{2}T,\sqrt{2}T]$ denotes the relative time, and $\vec{g}^\gtrless(t_1,t_2)$ are the contour-ordered correlation functions with respect to the HF basis. Further, we identify the diagonal (off-diagonal) elements of the matrix $\vec{a}(\tau)$ with the intraband (interband) spectral functions. Well-documented computational specifications for solving Eq.~(\ref{kkbe}) on ${\cal P}$, with initial condition $\vec{g}^M(\tau)$, are provided e.g.~by Refs.~\cite{dahlen06,dahlen07,thesis07}. We will not give a detailed description here. 
Nevertheless, it is instructive to note that, as the one-particle energy $\vec{h}^0$ includes no time dependence, the dynamics of $G(1,2)$ can be obtained by propagation on the $t_{1,2}$ axes only [instead of on the whole ${\cal P}$] or by using the retarded Green's function\cite{banyai93} $G^R(1,2)=\frac{i}{\hbar}\langle[\hat{\psi}(1),\hat{\psi}^\dagger(2)]_{+}\rangle\theta(t_1-t_2)\,$. As a result, it turns out that the QD spectral function is not of Breit-Wigner type, i.e.~does not obey a distribution function $a_{kk}(\omega)=a_k(\omega)\propto\frac{1}{(\omega-\omega_k)^2+\gamma^2}$ as it follows from a quasi-particle (local approximation) ansatz\cite{bonitz99} with $a_k(\tau)=e^{i \epsilon_k\!\tau} e^{-\gamma |\tau|}$, single-particle energy $\epsilon_k$ and phenomenological damping $\gamma$. In contrast, the spectral function shows clear non-Lorentzian behavior, cf.~the circles in Fig.~\ref{Fig.intrabandsf}, which shows the energetically lowest intraband spectral functions of the three-electron quantum dot discussed in Sec.~\ref{subsec:results1D}. To this end, at large $T\gg1$, we have fitted the computed spectral function $a_{k}(\tau)$ to an inverse hyperbolic cosine model \cite{haug98} (IHC) \begin{eqnarray} \label{spectralfct_IHC} a_{k}(\tau)=e^{i \omega_k\!\tau}\frac{1}{\cosh^{\eta_k}(\nu_k \tau)}\;, \end{eqnarray} which has been demonstrated to yield good results for Coulomb quantum kinetics, see Ref.~\cite{bonitz99}. The ansatz (\ref{spectralfct_IHC}) leaves open a set of three parameters $\{\omega_{k},\eta_{k},\nu_{k}\}$ (obtained by fitting), and, in accordance with the numerical data, (i)~ensures zero slope of $\Re a_{k}(\tau)$ at $\tau=0$ and (ii)~exhibits an exponential decay for large $\tau$ with a damping constant $\gamma_k=\eta_k \nu_k$. The former feature, in particular, is missing in the quasi-particle picture. The solid curves in Fig.~\ref{Fig.intrabandsf} exemplify the good quality of the IHC model, including properties (i)~and (ii), and justify its usage. In energy space, the $\gamma_k$-induced collisional broadening of the peaks in $a_k(\omega)$ can be shown to be again described by an inverse hyperbolic cosine\cite{haug98}. Explicitly, the accumulated energy spectrum follows from \begin{eqnarray} \label{spectralfct_acc} a(\omega)=\sum_{k=0}^{n_b-1}a_k(\omega)=\sum_{k=0}^{n_b-1}\int_{-\infty}^{+\infty}\textup{d}\tau\, e^{i\omega \tau}\,a_{k}(\tau)\;, \end{eqnarray} where in Hartree-Fock approximation one recovers $a(\omega)=\sum_{k=0}^{n_b-1}\delta(\omega/\omega_0-\epsilon_k^0)$, compare with Fig.~\ref{Fig.intrabandsf}. For the quasi-1D quantum dot filled with $N=3$ electrons at coupling parameters $\lambda=1$ and $\lambda=2$, Fig.~\ref{Fig.sf} shows the spectral function $a(\omega)$ including all low-energetic orbitals at different temperatures $\beta^{-1}$. The vertical solid lines indicate the discrete HF spectra $\{\epsilon_k^0\}$ and the gray filled dashed curves show $a(\omega)$ at the second Born level, being composed of the intraband functions $a_k(\omega)$ which, themselves, are represented by the thin dotted curves. Additionally, the positions of the maxima in $a_k(\omega)$ are marked by the vertical dotted lines. As a general trend, we observe a shift of almost all peaks in $a(\omega)$ compared to the HF eigenvalues $\epsilon_k^0$ of the charge carriers---in particular, the shifting is dominant around the chemical potential $\mu^0$. 
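The non-Lorentzian line shape implied by the IHC ansatz can be made explicit numerically. The following sketch of ours (with invented parameter values, not fitted data from this work) Fourier transforms Eq.~(\ref{spectralfct_IHC}) and the quasi-particle ansatz with the same damping $\gamma=\eta\nu$, exposing the different wings and peak shapes:

\begin{verbatim}
import numpy as np

omega_k, eta, nu = 2.0, 1.5, 0.4     # invented IHC parameters
gamma = eta * nu                     # large-tau damping constant

tau = np.linspace(-150.0, 150.0, 6001)
dt = tau[1] - tau[0]
a_ihc = np.exp(1j * omega_k * tau) / np.cosh(nu * tau)**eta
a_qp  = np.exp(1j * omega_k * tau - gamma * np.abs(tau))

# a(w) = int dtau e^{i w tau} a(tau), evaluated on a frequency window
omega = np.linspace(omega_k - 3.0, omega_k + 3.0, 241)
ft = lambda a: np.real(np.exp(1j * np.outer(omega, tau)) @ a) * dt

s_ihc, s_qp = ft(a_ihc), ft(a_qp)
# both peak at omega_k; the IHC profile is flatter at the top and decays
# faster in the wings than the Lorentzian quasi-particle line
print(s_ihc.max(), s_qp.max())
\end{verbatim}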
Moreover, the energy shifts are accompanied by a state- and temperature-dependent collisional broadening (finite lifetime), where, at sufficiently low temperatures, the spectral width is small and the intraband functions $a_k(\omega)$ do not overlap. Close to the ground state, less occupied HF states $k$ around $\mu^0$ typically show larger broadening than more strongly occupied states, see Fig.~\ref{Fig.sf}~\textbf{a}) and \textbf{d}). We note that the HOMO-LUMO gap, i.e.~the energy gap between the \underline{h}ighest \underline{o}ccupied (\underline{m}olecular or Hartree-Fock) \underline{o}rbital and the \underline{l}owest \underline{u}noccupied (\underline{m}olecular) \underline{o}rbital appearing at quasi-zero temperature $\beta=10$, is reduced by electron-electron collisions, and is particularly softened at larger temperatures, compare with Fig.~\ref{Fig.sf}~\textbf{b}) and \textbf{e}) where $\beta=2$. This clearly affects the optical absorption (emission) spectra of the few-electron quantum dot\cite{brocke03}. Finally, at even higher temperatures $\beta\lesssim1$, all spectra $a(\omega)$ gradually become smooth functions with no or only few distinct peaks around the chemical potential $\mu^0$. The charge carriers can then also assume low energies, essentially smaller than $\epsilon_{k=0}^0$. In addition, if in Eq.~(\ref{spectralfct_acc}) the intraband functions $a_k(\omega)$ are weighted by the respective occupation probabilities $n_k$ (or their complements $\bar{n}_k=1-n_k$), one turns the energy spectrum $a(\omega)$ into $\sum_{k=0}^{n_b-1}n_k\,a_k(\omega)$ (or $\sum_{k=0}^{n_b-1}\bar{n}_k\,a_k(\omega)$). These quantities allow us to determine with which spectral weight specific electron energies are (are not) realized and thus are (are not) measurable in the correlated QD state $G^M$, see the corresponding dotted-dashed (double-dotted-dashed) curves in Fig.~\ref{Fig.sf}~\textbf{a}) to \textbf{e}). Beyond the spectral information, the time-propagation of $G(1,2)$ also allows us to keep track of the accuracy of the correlated initial state $\vec{g}^M(\tau)$ [solution of the Dyson equation (\ref{deqif})]. Qualitatively, the accuracy can be extracted from the temporal evolution of the electron correlation energy\cite{thesis07}, which is given by \begin{eqnarray} E_{\mathrm{corr}}(t)&=&\frac{1}{2}\int\!\textup{d}^2 r\,{I^<(\vec{r},\vec{r};t)}-E_{\mathrm{HF}}^{0}\;,\\ I^<(\vec{r}_1,\vec{r}_2;t_1)&=&-\!\int_{\cal C} \textup{d}3\,W(1-3)\,G_{12}(13;23^+)|_{t_1=t_2^+} \;,\nonumber \end{eqnarray} where $G_{12}$ is used in second Born approximation. When the iterative procedure discussed in Sec.~\ref{subsec:2ndB} has led to convergence and thus to a self-consistent solution $\vec{g}^M(\tau)$ of the QD Dyson equation, the correlation energy $E_{\mathrm{corr}}(t)$ must stay constant in time. Consequently, the amplitude of any small oscillatory behavior of the correlation energy (obtained by propagation) serves as a reasonable estimator for the error $\Delta E_{\mathrm{corr}}$. Throughout this work, such a test has been found to be very sensitive and useful for verifying the presented results, with typical errors of less than $5$\%. \section{\label{sec:conclude}Conclusion} In this paper, we have applied the method of nonequilibrium Green's functions to study inhomogeneous strongly correlated quantum few-body systems: quantum dots with up to six spin-polarized electrons in thermodynamic equilibrium.
At various interaction strengths, the self-consistent solution of the Dyson equation at the level of the second Born approximation has enabled us to focus particularly on correlation phenomena. Close to the ground state as well as at finite temperatures, the second Born results yield considerable improvements for the total energies, the one-electron density, and the orbital-resolved distribution functions, which give access to the electron-electron scattering processes present in the correlated equilibrium state. Finally, the discussion of the spectral function in Sec.~\ref{subsec:spectralfct} has revealed a strong influence of correlations on the optical emission and absorption spectra of the considered few-electron QDs. Of course, the second Born approximation is a very simple model. It neglects both higher-order correlations (beyond second order in the interaction) and dynamical screening (as included, e.g., in the GW approximation). Nevertheless, comparison of the second Born results to first-principle quantum Monte Carlo and exact diagonalization data suggests that this approximation is well capable of accurately describing the present system. Further, the methods discussed in this paper should allow us to study system sizes of up to $N=12$ charge carriers in 1D and $N=6$ in 2D in a reasonable computer time on a single PC. However, the main limiting factor is basically not the particle number itself but rather the number of basis functions, which, together with the discretized $\tau$-grid, sets the large dimensionality of the matrices to be computed and processed. At the mean-field level, the evaluation of the two-electron integrals, Eq.~(\ref{2ei}), and their transformation into the HF basis are the most time-consuming parts, while solving the corresponding Dyson equation is relatively simple. For the second Born case, apart from solving the large-scale linear system (\ref{leqs}) in each iteration, it is in particular the computation of the self-energy $\vec{\Sigma}^\mathrm{corr}_{\lambda}(\tau)$, Eq.~(\ref{egycorr}), and of the convolution integrals $\alpha_{ij}(\tau,\bar{\tau})$, Eq.~(\ref{convint}), both needed with high accuracy, that accounts for the complexity of the calculation. Finally and most importantly, the use of NEGFs provides a very general approach, as one is also capable of computing time-dependent observables by solving the KBE (Sec.~\ref{subsec:negf}) for the two-time Green's function $G(1,2)$ in nonequilibrium situations. Hence, the presented approach allows for the extension to other systems such as 'QD molecules' (assemblies of single QDs) with interdot coupling and time-dependent carrier transport, as well as QDs coupled to electronic leads or external (optical) laser field sources. \begin{acknowledgments} Part of this work was supported by the Innovationsfond Schleswig-Holstein and the Deutsche Forschungsgemeinschaft via grant FI1252/1. \end{acknowledgments}
\section{Introduction} Reinforcement Learning (RL) seeks to learn a decision strategy that advises the agent on how to take actions according to the perceived states~\cite{sutton2018reinforcement}. The state representation plays an important role in the agent's learning process: a proper choice of the state representation can help improve generalization~\cite{zhang2018decoupling, stooke2020decoupling, agarwal2021contrastive}, encourage exploration~\cite{pathak2017curiosity, machado2017laplacian, machado2020count} and enhance learning efficiency~\cite{dubey2018investigating, wu2018laplacian, wang2021towards}. In studying the state representation, one direction of particular interest is to learn a task-agnostic representation that encodes the transition dynamics of the environment~\cite{mahadevan2007proto, machado2021temporal}. Along this line, the Laplacian Representation ({\fontfamily{cmss}\selectfont{LapRep}}\xspace) has received increasing attention~\cite{mahadevan2005proto,machado2017laplacian, wu2018laplacian,wang2021towards,erraqabi2022temporal}. Specifically, the {\fontfamily{cmss}\selectfont{LapRep}}\xspace is formed by the eigenvectors corresponding to the $d$ smallest eigenvalues of the Laplacian matrix of the graph induced by the transition dynamics (see Section~\ref{sec:bg-laprep} for the definition). It is assumed in prior works~\cite{wu2018laplacian, wang2021towards} that {\fontfamily{cmss}\selectfont{LapRep}}\xspace has a desirable property: the Euclidean distance in the {\fontfamily{cmss}\selectfont{LapRep}}\xspace space roughly reflects the \emph{reachability} among states~\cite{wu2018laplacian, wang2021towards}, \emph{i.e.,} a smaller distance implies that it is easier to reach one state from another. This motivates the usage of the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace for reward shaping~\cite{wu2018laplacian, wang2021towards}. However, previous works~\cite{wu2018laplacian, wang2021towards} offer no formal justification for this property. In fact, it turns out that the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace does \emph{not} correctly capture the inter-state reachability in general. Figure~\ref{fig:intro}~(a) shows an example. Under {\fontfamily{cmss}\selectfont{LapRep}}\xspace, a state that has a larger distance (\emph{e.g.,} \texttt{A}) might actually be closer to the goal than another state (\emph{e.g.,} \texttt{B}). Consequently, when the agent moves towards the goal, the pseudo-reward provided by {\fontfamily{cmss}\selectfont{LapRep}}\xspace would give a wrong learning signal. Such a mismatch would hinder the learning process with reward shaping and result in inferior performance. \begin{wrapfigure}[15]{hr}{0.45\textwidth} \centering \vspace{-10pt} \includegraphics[width=0.45\textwidth]{figs/intro.png} \caption{Euclidean distances between each state and the goal state \texttt{G}, under {\fontfamily{cmss}\selectfont{LapRep}}\xspace \textbf{(a)} and {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace \textbf{(b)}.} \label{fig:intro} \end{wrapfigure} In this work, we introduce a Reachability-Aware Laplacian Representation ({\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace) that reliably captures the inter-state distances in the environment geometry (see Figure~\ref{fig:intro}~(b)).
Specifically, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is obtained by scaling each dimension of the {\fontfamily{cmss}\selectfont{LapRep}}\xspace by the inverse square root of the corresponding eigenvalue. Despite its simplicity, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace has theoretically justified advantages over {\fontfamily{cmss}\selectfont{LapRep}}\xspace in the following two aspects. First, the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace can be interpreted in terms of the \textit{average commute time}, which measures the expected number of steps required in a random walk to navigate between two states. Thus, such a distance provides a good measure of reachability. In contrast, to our best knowledge, no analogous connection is known between the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace and the reachability. Second, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is equivalent to the embedding computed by classic multidimensional scaling (MDS)~\cite{BorgGroenen2005}, which preserves pairwise distances globally~\cite{tenenbaum2000global}. {\fontfamily{cmss}\selectfont{LapRep}}\xspace, on the other hand, preserves only local information (\emph{i.e.,} mapping neighboring states close), since it is essentially a Laplacian eigenmap~\cite{belkin2003laplacian}. Thus, {\fontfamily{cmss}\selectfont{LapRep}}\xspace is inherently ill-suited to measuring the inter-state reachability. To further validate the advantages of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace over {\fontfamily{cmss}\selectfont{LapRep}}\xspace, we conduct experiments to compare them on two discrete gridworld and two continuous control environments. The results show that {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace indeed performs much better in capturing the inter-state reachability as compared to {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Furthermore, when used for reward shaping in goal-reaching tasks, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace significantly outperforms {\fontfamily{cmss}\selectfont{LapRep}}\xspace. In addition, we show that {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace can be used to discover bottleneck states based on a graph centrality measure, and finds the key states more accurately than {\fontfamily{cmss}\selectfont{LapRep}}\xspace. The rest of the paper is organized as follows. In Section~\ref{sec:background}, we give some background about RL and {\fontfamily{cmss}\selectfont{LapRep}}\xspace in RL. In Section~\ref{sec:method}, we introduce the new {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, explain why it is more desirable than {\fontfamily{cmss}\selectfont{LapRep}}\xspace, and discuss how to approximate it with neural networks in environments with a large or continuous state space. Then, we conduct experiments to demonstrate the advantages of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace over {\fontfamily{cmss}\selectfont{LapRep}}\xspace in Section~\ref{sec:exp}. In Section~\ref{sec:related}, we review related works and Section~\ref{sec:conclusion} concludes the paper. \section{Background} \label{sec:background} \emph{Notations}. We use boldface letters (\emph{e.g.,} $\mathbf{u}$) for vectors, and calligraphic letters (\emph{e.g.,} $\mathcal{U}$) for sets.
For a vector $\mathbf{u}$, $\lVert\mathbf{u}\rVert$ denotes its $L_2$ norm, and $\mathrm{diag}(\mathbf{u})$ denotes a diagonal matrix whose main diagonal is $\mathbf{u}$. We use $\mathbf{1}$ to denote an all-ones vector, whose dimension can be inferred from the context. \subsection{Reinforcement Learning} In the Reinforcement Learning (RL) framework~\cite{sutton2018reinforcement}, an agent aims to learn a strategy that advises how to take actions in each state, with the goal of maximizing the expected cumulative reward. We consider the standard Markov Decision Process (MDP) setting~\cite{puterman1990markov}, and describe an MDP with a tuple $(\mathcal{S},\mathcal{A}, r, P, \gamma, \mu)$. $\mathcal{S}$ is the state space and $\mathcal{A}$ is the action space. The initial state $s_0$ is generated according to the distribution $\mu\in\Delta_\mathcal{S}$, where $\Delta_{\mathcal{S}}$ denotes the space of probability distributions over $\mathcal{S}$. At timestep $t$, the agent observes from the environment a state $s_t\in\mathcal{S}$ and takes an action $a_t\in\mathcal{A}$. Then the environment provides the agent with a reward signal $r(s_t,a_t)\in\mathbb{R}$. The state observation at the next timestep $s_{t+1}\in\mathcal{S}$ is sampled from the distribution $P(s_t, a_t)\in\Delta_\mathcal{S}$. We refer to $P\in(\Delta_\mathcal{S})^{\mathcal{S}\times\mathcal{A}}$ as the transition function and $r\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ as the reward function. A stationary stochastic policy $\pi\in(\Delta_\mathcal{A})^\mathcal{S}$ specifies a decision making strategy, where $\pi(s,a)$ is the probability of taking action $a$ in state $s$. The agent's goal is to learn an optimal policy $\pi^*$ that maximizes the expected cumulative reward: \begin{equation} \pi^* = \argmax_{\pi\in\Pi} \mathbb{E}_{\pi,P} \sum_{t=0}^\infty \gamma^t r_t, \end{equation} where $\Pi$ denotes the policy space and $\gamma\in [0, 1)$ is the discount factor. \subsection{Laplacian representation in RL} \label{sec:bg-laprep} The Laplacian Representation ({\fontfamily{cmss}\selectfont{LapRep}}\xspace)~\citep{wu2018laplacian} is a task-agnostic state representation in RL, originally proposed in~\cite{mahadevan2005proto} (known as proto-value functions). Formally, the Laplacian representations of all states are the eigenfunctions of the Laplace-Beltrami operator on the state space manifold. For simplicity, here we restrict the introduction of {\fontfamily{cmss}\selectfont{LapRep}}\xspace to the discrete state case and refer readers to~\citep{wu2018laplacian} for the formulation in the continuous case. The states and transitions in an MDP can be viewed as the nodes and edges of a graph. The {\fontfamily{cmss}\selectfont{LapRep}}\xspace is formed by the eigenvectors corresponding to the $d$ smallest eigenvalues of the graph Laplacian (usually $d\ll\lvert\mathcal{S}\rvert$). Each eigenvector (of length $\lvert\mathcal{S}\rvert$) corresponds to a dimension of the {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Formally, we denote the graph as $\mathcal{G} = (\mathcal{S}, \mathcal{E})$, where $\mathcal{S}$ is the node set consisting of all states and $\mathcal{E}$ is the edge set consisting of transitions between states. The Laplacian matrix of the graph $\mathcal{G}$ is defined as $L\coloneqq D-A$, where $A$ is the adjacency matrix of $\mathcal{G}$ and $D\coloneqq\mathrm{diag}(A\mathbf{1})$ is the degree matrix~\citep{chung1997spectral}. We sort the eigenvalues of $L$ by their magnitudes and denote the $i$-th smallest one as $\lambda_i$.
The unit eigenvector corresponding to $\lambda_i$ is denoted as $\mathbf{v}_i \in \mathbb{R}^{\lvert\mathcal{S}\rvert}$. Then, the $d$-dimensional {\fontfamily{cmss}\selectfont{LapRep}}\xspace of a state $s$ can be defined as \begin{equation} \boldsymbol{\rho}_d (s)\coloneqq(\mathbf{v}_1[s], \mathbf{v}_2[s], \cdots, \mathbf{v}_d[s]), \label{eqn:laprep} \end{equation} where $\mathbf{v}_i[s]$ denotes the entry in vector $\mathbf{v}_i$ corresponding to state $s$. In particular, $\mathbf{v}_1$ is a normalized all-ones vector and hence provides no information about the environment geometry. Therefore we omit $\mathbf{v}_1$ and only consider the other dimensions. For environments with a large or even continuous state space, it is infeasible to obtain the {\fontfamily{cmss}\selectfont{LapRep}}\xspace by directly computing the eigendecomposition. To approximate {\fontfamily{cmss}\selectfont{LapRep}}\xspace with neural networks, previous works~\cite{wu2018laplacian,wang2021towards} propose sample-based methods based on spectral graph drawing~\cite{koren2005drawing}. In particular, \citet{wang2021towards} introduce a generalized graph drawing objective that ensures a dimension-wise faithful approximation to the ground truth $\boldsymbol{\rho}_d (s)$. \section{Method} \label{sec:method} \subsection{Reachability-aware Laplacian representation} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/method.png} \caption{Visualizations of the Euclidean distances under {\fontfamily{cmss}\selectfont{LapRep}}\xspace between all states and the state \texttt{G}.} \label{fig:method} \end{figure} In prior works~\cite{wu2018laplacian, wang2021towards}, {\fontfamily{cmss}\selectfont{LapRep}}\xspace is believed to have the desirable property that the Euclidean distance between two states under {\fontfamily{cmss}\selectfont{LapRep}}\xspace (\emph{i.e.,} $\mathrm{dist}_{\boldsymbol{\rho}}(s,s')\coloneqq\lVert\boldsymbol{\rho}_d(s)-\boldsymbol{\rho}_d(s')\rVert$) roughly reflects the reachability between $s$ and $s'$. That is, a smaller distance between two states implies that it is easier for the agent to reach one state from the other. Figure~\ref{fig:method}~(a) shows an illustrative example similar to the one in~\cite{wu2018laplacian}. In this example, $\mathrm{dist}_{\boldsymbol{\rho}}(\texttt{A},\texttt{G})$ is smaller than $\mathrm{dist}_{\boldsymbol{\rho}}(\texttt{B},\texttt{G})$, which aligns with the intuition that moving to \texttt{G} from \texttt{A} takes fewer steps than from \texttt{B}. Motivated by this, {\fontfamily{cmss}\selectfont{LapRep}}\xspace is used for reward shaping in goal-reaching tasks~\cite{wu2018laplacian, wang2021towards}. However, little justification is provided in previous works~\cite{wu2018laplacian, wang2021towards} for this claim (\emph{i.e.,} that the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace captures the inter-state reachability). In fact, we find that it does not hold in general. As shown in Figure~\ref{fig:method}~(b-e), $\mathrm{dist}_{\boldsymbol{\rho}}(\texttt{A},\texttt{G})$ is larger than $\mathrm{dist}_{\boldsymbol{\rho}}(\texttt{B},\texttt{G})$, but \texttt{A} is clearly closer to \texttt{G} than \texttt{B}. As a result, when the agent moves towards the goal, $\mathrm{dist}_{\boldsymbol{\rho}}$ might give a wrong reward signal. Such a mismatch hinders the policy learning process when we use this distance for reward shaping.
In this paper, we introduce the following Reachability-Aware Laplacian Representation ({\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace): \begin{equation} \boldsymbol{\phi}_d(s) \coloneqq \left(\frac{\mathbf{v}_2[s]}{\sqrt{\lambda_2}}, \frac{\mathbf{v}_3[s]}{\sqrt{\lambda_3}}, \cdots , \frac{\mathbf{v}_d[s]}{\sqrt{\lambda_d}}\right), \label{eqn:ra-laprep} \end{equation} which fixes the issue of {\fontfamily{cmss}\selectfont{LapRep}}\xspace and better captures the reachability between states. We provide both a theoretical explanation (Section~\ref{sec:method-explain}) and empirical results (Section~\ref{sec:exp}) to demonstrate the advantage of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace over {\fontfamily{cmss}\selectfont{LapRep}}\xspace. \subsection{Why is RA-LapRep more desirable than LapRep?} \label{sec:method-explain} In this subsection, we provide theoretical grounding for {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace from two aspects, which explains why it captures the inter-state reachability better than {\fontfamily{cmss}\selectfont{LapRep}}\xspace. First, we find that the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is related to a quantity that measures the expected number of random walk steps between states. Specifically, let $\mathrm{dist}_{\boldsymbol{\phi}}(s,s')\coloneqq\lVert\boldsymbol{\phi}_d(s)-\boldsymbol{\phi}_d(s')\rVert$ denote the Euclidean distance between states $s$ and $s'$ under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace. When $d=\lvert\mathcal{S}\rvert$, $\mathrm{dist}_{\boldsymbol{\phi}}(s,s')$ has a nice interpretation~\cite{fouss2007random}: it is proportional to the square root of the average commute time between states $s$ and $s'$, \emph{i.e.,} \begin{equation} \mathrm{dist}_{\boldsymbol{\phi}}(s,s') \propto \sqrt{n(s,s')}. \label{eqn:proportional} \end{equation} Here the average commute time $n(s,s')$ measures the expected number of steps required in a random walk to navigate from $s$ to $s'$ and back (see the Appendix for the formal definition). Thus, $\mathrm{dist}_{\boldsymbol{\phi}}(s,s')$ provides a good quantification of the concept of reachability. Additionally, with the proportionality in Eqn.~\eqref{eqn:proportional}, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace can be used to discover bottleneck states (see Section~\ref{sec:exp-bottleneck} for a detailed discussion and experiments). In contrast, to the best of our knowledge, the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace (\emph{i.e.,} $\mathrm{dist}_{\boldsymbol{\rho}}(s,s')$) does not have a similar interpretation that matches the concept of reachability. Second, we show that {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace preserves global information while {\fontfamily{cmss}\selectfont{LapRep}}\xspace only focuses on preserving local information. Specifically, we note that {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is equivalent (up to a constant factor) to the embedding obtained by classic Multi-Dimensional Scaling (MDS)~\cite{BorgGroenen2005} with the squared distance matrix in MDS being $D^{(2)}_{ij}=n(i,j)$~\cite{fouss2007random} (see the Appendix for a detailed derivation). Since classic MDS is known to preserve pairwise distances globally~\cite{tenenbaum2000global}, the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is then a good fit for measuring the inter-state reachability.
In comparison, the {\fontfamily{cmss}\selectfont{LapRep}}\xspace is only able to preserve local information. This is because, when viewing the MDP transition dynamics as a graph, the {\fontfamily{cmss}\selectfont{LapRep}}\xspace is essentially the Laplacian eigenmap~\cite{belkin2003laplacian}. As discussed in \cite{belkin2003laplacian}, the Laplacian eigenmap only aims to preserve the local graph structure of each single neighborhood in the graph (\emph{i.e.,} mapping neighboring states close), while making no attempt at preserving global information about the whole graph (\emph{e.g.,} pairwise geodesic distances between nodes~\cite{tenenbaum2000global}). Therefore, the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace is inherently not intended for measuring the reachability between states, particularly for distant states. \subsection{Approximating RA-LapRep} \label{sec:method-approx} We note that the theoretical reasoning in the above subsection is based on $d=\lvert\mathcal{S}\rvert$. In practice, however, for environments with a large or even continuous state space, it is infeasible to have $d=\lvert\mathcal{S}\rvert$ and hence we need to take a small $d$. One may argue that using a small $d$ would lead to an approximation error: when $d<\lvert\mathcal{S}\rvert$, the distance $\mathrm{dist}_{\boldsymbol{\phi}}(s,s')$ is not exactly proportional to $\sqrt{n(s,s')}$. Fortunately, the gap between the approximated $\tilde{n}(s,s')$ and the true $n(s,s')$ turns out to be upper bounded by $C\sum_{i=d+1}^{\lvert\mathcal{S}\rvert}\frac{1}{\lambda_i}$, where $C$ is a constant and the summation runs over the reciprocals of the $\lvert\mathcal{S}\rvert-d$ largest eigenvalues; hence this bound will not be very large. We will empirically show in Section~\ref{sec:exp-shaping} that a small $d$ is sufficient for good reward shaping performance and that further increasing $d$ does not yield any noticeable improvement. Furthermore, even with a small $d$, it is still impractical to obtain {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace by directly computing the eigendecomposition. To tackle this, we follow~\cite{wang2021towards} and approximate {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace with neural networks using sample-based methods. Specifically, we first learn a parameterized approximation $f_i(\cdot\,;\theta)$ for each eigenvector $\mathbf{v}_i$ by optimizing a generalized graph drawing objective~\cite{wang2021towards}, \emph{i.e.,} $f_i(s\,;\theta)\!\approx\!\mathbf{v}_i[s]$, where $\theta$ denotes the learnable parameters of the neural networks. Next, we approximate each eigenvalue $\lambda_i$ simply by \begin{equation} \lambda_i = \mathbf{v}_i^\top L\mathbf{v}_i \approx \mathbb{E}_{(s,s')\in\mathcal{T}}\,\left(f_i(s\,;\hat{\theta}) - f_i(s'\,;\hat{\theta})\right)^2, \end{equation} where $\hat{\theta}$ denotes the learned parameters, and $\mathcal{T}$ is the same transition data used to train $f$. Let $\tilde{\lambda}_i$ denote the approximated eigenvalue. {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace can then be approximated by \begin{equation} \boldsymbol{\phi}_d(s) \approx \tilde{\boldsymbol{\phi}}_d(s) = \left(\frac{f_2(s\,;\hat{\theta})}{\sqrt{\tilde{\lambda}_2}}, \frac{f_3(s\,;\hat{\theta})}{\sqrt{\tilde{\lambda}_3}}, \cdots , \frac{f_d(s\,;\hat{\theta})}{\sqrt{\tilde{\lambda}_d}}\right).
\label{eqn:ra-laprep-approx} \end{equation} In experiments, we find that this approximation works quite well and is on par with using the true $\boldsymbol{\phi}_d(s)$. \section{Experiments} \label{sec:exp} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/environments.png} \caption{Environments used in our experiments (agents shown in red, and walls in grey).} \label{fig:environments} \end{figure} In this section, we conduct experiments to validate the benefits of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace compared to {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Following~\cite{wu2018laplacian, wang2021towards}, we consider both discrete gridworld and continuous control environments in our experiments. Figure~\ref{fig:environments} shows the layouts of the environments used. We briefly introduce them here and refer readers to the Appendix for more details. In the discrete gridworld environments, the agent takes one of the four actions (\textit{up}, \textit{down}, \textit{left}, \textit{right}) to move from one cell to another. If it hits a wall, the agent remains in its current cell. In the continuous control environments, the agent picks a continuous action from $[-\pi,\pi)$ that specifies the direction along which it moves a fixed small step forward. For all environments, the observation is the agent's $(x,y)$-position. \subsection{Capturing reachability between states} \label{sec:exp-distance} In this subsection, we evaluate the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace and {\fontfamily{cmss}\selectfont{LapRep}}\xspace in capturing the reachability among states. We also include the ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace for comparison. Specifically, for each state $s$, we compute the Euclidean distance between $s$ and an arbitrarily chosen goal state $s_{\textrm{goal}}$, under the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace $\tilde{\boldsymbol{\rho}}$, the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace $\tilde{\boldsymbol{\phi}}$, and the ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace $\boldsymbol{\phi}$ (\emph{i.e.,} $\mathrm{dist}_{\tilde{\boldsymbol{\rho}}}(s,s_{\textrm{goal}})$, $\mathrm{dist}_{\tilde{\boldsymbol{\phi}}}(s,s_{\textrm{goal}})$ and $\mathrm{dist}_{\boldsymbol{\phi}}(s,s_{\textrm{goal}})$). Then, we use heatmaps to visualize the three distances. For the continuous environments, the heatmaps are obtained by first sampling a set of states roughly covering the state space, and then interpolating among the sampled states. We train neural networks to learn {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace and {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Specifically, we implement the two-step approximation procedure introduced in Section~\ref{sec:method-approx} to learn {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, and adopt the method in \cite{wang2021towards} to learn {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Details of the training and network architectures are given in the Appendix. The ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace $\boldsymbol{\phi}$ is calculated using Eqn.~\eqref{eqn:ra-laprep}. For the discrete environments, the eigenvectors and eigenvalues are computed by eigendecomposition; for the continuous environments, the eigenfunctions and eigenvalues are approximated by the finite difference method with a 5-point stencil~\cite{Knabner2003}.
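For the discrete environments, the ground-truth computation just described amounts to a few lines of linear algebra. The following NumPy sketch (ours, for illustration; it assumes a symmetric adjacency matrix \texttt{A}, as for the undirected gridworld graphs used here) computes both representations from Eqns.~\eqref{eqn:laprep} and~\eqref{eqn:ra-laprep} together with the distances to a goal state:
\begin{verbatim}
# Minimal sketch of the ground-truth LapRep / RA-LapRep
# for a discrete environment with symmetric adjacency matrix A.
import numpy as np

def ground_truth_reps(A, d):
    D = np.diag(A.sum(axis=1))        # degree matrix
    L = D - A                         # graph Laplacian
    lam, V = np.linalg.eigh(L)        # eigenvalues in ascending order
    lap_rep = V[:, 1:d]               # v_2, ..., v_d (constant v_1 dropped)
    ra_lap_rep = lap_rep / np.sqrt(lam[1:d])  # dim i scaled by 1/sqrt(lambda_i)
    return lap_rep, ra_lap_rep

def dists_to_goal(rep, goal):
    # Euclidean distance from every state to the goal state
    return np.linalg.norm(rep - rep[goal], axis=1)
\end{verbatim}
Rendering \texttt{dists\_to\_goal} as a heatmap for both representations should reproduce the qualitative difference visible in Figure~\ref{fig:dist}.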
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/dist.png} \caption{\textbf{Left 3 columns}: Visualizations of the Euclidean distances between all states and the goals in the four environments, under the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace, the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, and the ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace. Example trajectories are shown in red. \textbf{Right}: Line charts of the distance values for states in the trajectories (normalized to $[0, 1]$), where the states are sorted by temporal order.} \label{fig:dist} \end{figure} The visualization results are shown in Figure~\ref{fig:dist}. For a clearer comparison, we highlight an example trajectory in each environment, and plot the distance values along each trajectory in the line charts on the right. As we can see, for both discrete and continuous environments, as the agent moves towards the goal, the distances under the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace increase in some segments of the trajectories. This contradicts the claim that the Euclidean distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace reflects the inter-state reachability. In contrast, the distances under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace decrease monotonically, accurately reflecting the reachability between the current state and the goal. We would like to note that, apart from the highlighted trajectories, similar observations can be obtained for other trajectories or goal positions (see the Appendix). Besides, we can see that the distances under the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace are very close to those under the ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, indicating the effectiveness of our approximation approach. \subsection{Reward shaping} \label{sec:exp-shaping} The above experiments show that {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace better captures the reachability between states than {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Next, we study whether this advantage leads to higher performance for reward shaping in goal-reaching tasks. Following \cite{wu2018laplacian, wang2021towards}, we define the shaped reward as \begin{equation} r_t = 0.5 \cdot r_t^{\textrm{env}} + 0.5 \cdot r_t^{\textrm{dist}}. \end{equation} Here $r_t^{\textrm{env}}$ is the reward obtained from the environment, which is set to $0$ when the agent reaches the goal state and $-1$ otherwise. For the discrete environments, $r_t^{\textrm{env}}$ is simply formalized as $r_t^{\textrm{env}}=-\mathds{1}[s_{t+1}\ne s_\textrm{goal}]$. For the continuous environments, we consider the agent to have reached the goal when its distance to the goal is within a small preset radius $\epsilon$, \emph{i.e.,} $r_t^{\textrm{env}}=-\mathds{1}[\lVert s_{t+1} - s_\textrm{goal} \rVert > \epsilon]$. The pseudo-reward $r_t^{\textrm{dist}}$ is set to be the negative distance under the learned representations: \begin{equation} \begin{aligned} r_t^{\textrm{dist}}=-\mathrm{dist}_{\tilde{\boldsymbol{\rho}}}(s_{t+1},s_\textrm{goal}) & \quad \text{for {\fontfamily{cmss}\selectfont{LapRep}}\xspace}, \\ r_t^{\textrm{dist}}=-\mathrm{dist}_{\tilde{\boldsymbol{\phi}}}(s_{t+1},s_\textrm{goal}) & \quad \text{for {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace}.
\end{aligned} \end{equation} As in \cite{wu2018laplacian, wang2021towards}, we also include two baselines: $L_2$ shaping, \emph{i.e.,} $r_t^{\textrm{dist}}=-\lVert s_{t+1}-s_\textrm{goal}\rVert$, and no reward shaping, \emph{i.e.,} $r_t = r_t^{\textrm{env}}$. Following~\cite{wang2021towards}, we consider multiple goal positions for each environment (see the Appendix), in order to minimize the bias introduced by the goal positions. The final results are averaged across the different goals, with 10 runs per goal. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/reward_shaping.png} \caption{Reward shaping results in goal-reaching tasks, with different choices of the shaped reward.} \label{fig:reward_shaping} \end{figure} As shown in Figure~\ref{fig:reward_shaping}, on both discrete and continuous environments, {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace outperforms {\fontfamily{cmss}\selectfont{LapRep}}\xspace and the other two baselines by a large margin. Compared to {\fontfamily{cmss}\selectfont{LapRep}}\xspace, using {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace for reward shaping is more sample efficient, reaching the same level of performance in fewer than half the steps. We attribute this performance improvement to the fact that the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace more accurately captures the inter-state reachability. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/ablation.png} \caption{ \textbf{(a)} Comparison of the reward shaping results between the learned representations and the ground-truth ones; \textbf{(b)} Comparison of the reward shaping results when using different $d$ for the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace.} \label{fig:ablation} \end{figure} \paragraph{Comparing to the ground truth} To find out whether the neural network approximation limits the performance, we also use the ground-truth {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace and {\fontfamily{cmss}\selectfont{LapRep}}\xspace for reward shaping, and compare the results with those of the learned representations. From Figure~\ref{fig:ablation}, we can see that the performance of the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is as good as that of the ground-truth one, indicating the effectiveness of the network approximation. Besides, the performances of the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace and the ground-truth one are very close, suggesting that the inferior performance of {\fontfamily{cmss}\selectfont{LapRep}}\xspace is not due to poor approximation. \paragraph{Varying the dimension $d$} As mentioned in Section~\ref{sec:method-approx}, the theoretical approximation error of using a smaller $d$ is not very large. Here we conduct experiments to see whether this error causes significant performance degradation. Specifically, we vary the dimension $d$ from 2 to 10, and compare the resulting reward shaping performance. As Figure~\ref{fig:ablation} shows, the performance first improves as we increase $d$, and then plateaus. Thus, a small $d$ (\emph{e.g.,} $d=10$, compared to $\lvert\mathcal{S}\rvert>300$) is sufficient for good performance.
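For completeness, the shaped-reward computation used throughout this subsection can be sketched as follows (our illustration; \texttt{phi} stands for the learned representation mapping and \texttt{eps} for the goal radius in the continuous environments):
\begin{verbatim}
# Sketch of the shaped reward r_t = 0.5 * r_env + 0.5 * r_dist.
import numpy as np

def shaped_reward(s_next, s_goal, phi, continuous=False, eps=0.1):
    if continuous:
        r_env = -float(np.linalg.norm(np.subtract(s_next, s_goal)) > eps)
    else:
        r_env = -float(not np.array_equal(s_next, s_goal))
    # pseudo-reward: negative Euclidean distance in representation space
    r_dist = -np.linalg.norm(phi(s_next) - phi(s_goal))
    return 0.5 * r_env + 0.5 * r_dist
\end{verbatim}
Swapping \texttt{phi} between the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace, the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, and the identity map recovers the first three variants compared in Figure~\ref{fig:reward_shaping}.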
\subsection{Discovering bottleneck states} \label{sec:exp-bottleneck} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/bottlenecks.png} \caption{Bottleneck discovery results of the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace and {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Discovered bottleneck states are those marked as dots (for discrete environments), or those within the contour lines (for continuous environments).} \label{fig:bottleneck} \end{figure} The bottleneck states are analogous to the key nodes in a graph, which allows us to discover them using a graph centrality measure. Here we consider a simple definition of centrality~\cite{bavelas1950communication}: \begin{equation} \mathrm{cent}(s)=\left(\sum_{s'\in\mathcal{S}} \mathrm{dist}(s, s')\right)^{-1}, \label{eqn:centrality} \end{equation} where $\mathrm{dist}$ is a generic distance measure. The states with high centrality are those through which many paths pass, hence they can be considered as bottlenecks. Since the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace more accurately reflects the inter-state reachability, we aim to see whether it benefits bottleneck discovery. Specifically, we calculate $\mathrm{cent}(s)$ for all states with $\mathrm{dist}$ being the Euclidean distance under the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace or the learned {\fontfamily{cmss}\selectfont{LapRep}}\xspace, and take the top $20\%$ of states with the highest $\mathrm{cent}(s)$ as the discovered bottlenecks. For the continuous environments, the summation in Eqn.~\eqref{eqn:centrality} is calculated over a set of sampled states. Figure~\ref{fig:bottleneck} visualizes the computed $\mathrm{cent}(s)$ and highlights the discovered bottleneck states. In the top row of Figure~\ref{fig:bottleneck}, we can see that the bottlenecks discovered by the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace are essentially the states that many paths pass through. In comparison, the results of using {\fontfamily{cmss}\selectfont{LapRep}}\xspace are not as satisfactory. For one thing, as highlighted (with dashed red boxes) in Figure~\ref{fig:bottleneck}~(e) and (g), some of the discovered states are actually not in the center of the environment (\emph{i.e.,} where most trajectories pass), which does not match the concept of a bottleneck. For another, as highlighted in Figure~\ref{fig:bottleneck}~(f) and (h), some regions that should have been identified are missing. \section{Discussions} \label{sec:discuss} \subsection*{Predicting the average commute time directly} One may wonder whether it is possible to directly predict the average commute time (\emph{i.e.,} in a supervised learning fashion), in contrast to our training approach (which can be viewed as unsupervised). For example, we could minimize the following error to train the representation \begin{equation} \mathbb{E}_{s_i,s_j} \left[\,\big\lvert\lVert \phi(s_i) - \phi(s_j) \rVert - n(s_i, s_j)\big\rvert\,\right], \end{equation} where $\phi(s)$ denotes the state representation. However, obtaining an accurate $n(s_i,s_j)$ requires knowledge of the whole graph, which is infeasible in general. One workaround is to sample $s_i$ and $s_j$ from the same trajectory and use the difference between their temporal indices as a proxy for $n(s_i,s_j)$. But such an approximation suffers from high variance. With a poor estimation of $n(s_i,s_j)$, the learned representation will consequently be of low quality.
In comparison, our RA-LapRep can be viewed as an unsupervised learning method, which does not rely on estimating $n(s_i,s_j)$. \subsection*{Generalization to directed graphs} One implicit assumption in our work and previous works~\cite{wu2018laplacian, wang2021towards} is that the underlying graph is undirected, which implies that the actions are reversible. In practical settings such as robot control, this is often not the case. To tackle this limitation, one way is to generalize the notion of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace to directed graphs, for example, by taking inspiration from effective resistance on directed graphs~\cite{young2016new,Boley2011commute, fitch2019effective}. However, this is a highly non-trivial challenge. First, it is not straightforward to give a proper definition of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace in the directed case. Moreover, due to the complex-valued eigenvalues of directed graph Laplacian matrices, designing an optimization objective to approximate the eigenvectors (as done in \cite{wu2018laplacian, wang2021towards}) would be difficult. Despite the challenges, generalization to directed graphs would be an interesting research topic and is worth an in-depth study beyond this work. \subsection*{Evaluation on high-dimensional environments} In this work, we use 2D mazes because such environments allow us to easily examine whether the inter-state reachability is well captured. For applying our method to more complex environments such as Atari~\cite{bellemare2013arcade}, we foresee some non-trivial challenges. For example, most games contain irreversible transitions. So when dealing with these environments, we may need to first generalize our method to directed graphs. Nevertheless, as a first step towards applying {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace to high-dimensional environments, we conduct experiments by replacing the 2D $(x,y)$-position input with high-dimensional top-view image input. The experimental results (see the Appendix) show that, with high-dimensional input, the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is still able to accurately reflect the reachability among states and significantly boost the reward shaping performance. This suggests that learning {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace with high-dimensional input on more complex environments is possible, and we will continue to explore this direction in future work. \subsection*{Ablation on uniform state coverage} Following \cite{wu2018laplacian, wang2021towards}, we train {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace with pre-collected transition data that roughly uniformly covers the state space. One may wonder how the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace would be affected when this uniform full-coverage assumption breaks. To investigate this, we conduct ablative experiments (learning {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace and reward shaping) in which we manipulate the distribution of the collected data. The results show that the learned {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace is robust to moderate changes in the data distribution, with respect to both its capacity for capturing the reachability and its reward shaping performance. Only when the distribution is strongly non-uniform does the resulting graph become disconnected and the performance degrade. Please refer to the Appendix for details of the experimental setup and results.
\section{Related Works} \label{sec:related} Our work is built upon prior works on learning the Laplacian representation with neural networks~\cite{wu2018laplacian, wang2021towards}. We introduce {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace, which more accurately reflects the inter-state reachability. Apart from Laplacian representations, another line of work also aims to learn representations that capture the inter-state reachability~\cite{hartikainen2020dynamical,savinov2018episodic,zhang2020generating}. However, their learned reachability is not satisfactory (see the Appendix for detailed discussions). Apart from reward shaping, Laplacian representations have also found applications in option discovery~\cite{machado2017laplacian, machado2018eigenoption, jinnai2019exploration, wang2021towards}. We would like to note that our {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace can still be used for option discovery and would yield the same good results as {\fontfamily{cmss}\selectfont{LapRep}}\xspace~\cite{machado2017laplacian, wang2021towards}, since the dimension-wise scaling (in Eqn.~\ref{eqn:ra-laprep}) does not change the eigen-options. Regarding bottleneck state discovery, there are prior works~\cite{simsek2008skill, moradi2010automatic} that adopt other centrality measures from graph theory (\emph{e.g.,} betweenness) to find bottleneck states for skill characterization. \section{Conclusion} \label{sec:conclusion} The Laplacian Representation ({\fontfamily{cmss}\selectfont{LapRep}}\xspace) is a task-agnostic state representation that encodes the geometric structure of the environment. In this work, we point out a misconception in prior works that the Euclidean distance in the {\fontfamily{cmss}\selectfont{LapRep}}\xspace space reflects the reachability among states. We show that this property does not actually hold in general, \emph{i.e.,} two distant states in the environment may have a small distance under {\fontfamily{cmss}\selectfont{LapRep}}\xspace. Such an issue limits the performance of using this distance for reward shaping~\cite{wu2018laplacian, wang2021towards}. To fix this issue, we introduce the Reachability-Aware Laplacian Representation ({\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace). We show that the Euclidean distance under {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace provides a better quantification of the inter-state reachability than {\fontfamily{cmss}\selectfont{LapRep}}\xspace, and we provide a theoretical explanation for this advantage. Furthermore, this advantage of {\fontfamily{cmss}\selectfont{RA-LapRep}}\xspace leads to significant performance improvements in the reward shaping experiments. \putbib \end{bibunit} \begin{bibunit} \clearpage
\section{Introduction} Let $n\geqslant 2$ and $ (M, g)$ be an $n$-dimensional compact Riemannian manifold with smooth boundary $\Gamma$. Let $\Delta$ and $\Delta_\Gamma$ denote the Laplace-Beltrami operators acting on functions on $M$ and $\Gamma$ respectively. We define the Laplacian as the negative divergence of the gradient operator. The gradient operators on $M$ and $\Gamma$ will be denoted by $\nabla$ and $\nabla_\Gamma$ respectively, and the outer normal derivative on $\Gamma$ by $\partial_\mathbf{n}$. Throughout the paper we denote by $\mathrm{d}_M$ and $\mathrm{d}_\Gamma$ the (Riemannian) volume elements of $M$ and $\Gamma$. Let $\beta\in\mathbb{R}_{\geqslant 0}$; we consider the Wentzel eigenvalue problem on $M$ (writing $\Omega$ for the interior of $M$): \begin{equation}\label{w} \begin{cases} \Delta u=0 \quad \text{in}~\Omega,\\ \beta \Delta_\Gamma u +\partial_\mathbf{n} u=\lambda u \quad \text{on}~\Gamma. \end{cases} \end{equation} Problem \eqref{w} admits a discrete sequence of eigenvalues that can be arranged as \begin{equation}\label{spectrum} 0=\lambda_{W,0}^{\beta}< \lambda_{W,1}^{\beta}\leqslant\lambda_{W,2}^{\beta}\leqslant \cdots\leqslant \lambda_{W,k}^{\beta}\leqslant \cdots \nearrow \infty. \end{equation} We adopt the convention that each eigenvalue is repeated according to its multiplicity. The eigenvalue problem for the Laplacian with Wentzel boundary conditions has only recently been investigated in depth. There have also been new developments on the Steklov eigenvalue problem; see for example \cite{provstub,xiong2017,colbGirHas,xia2019escobars}. We adopt the philosophy of \cite{Gal2015} and interpret the Wentzel eigenvalue problem as a perturbed version (unperturbed when $\beta=0$) of the Steklov problem. This allows us to use methods similar to those proposed in the recent works of Provenzano-Stubbe, Xiong and Colbois-Girouard-Hassannezhad, who use geometric properties of a well-chosen distance function to bound Steklov eigenvalues. We, nevertheless, will focus on the Wentzel eigenvalues with boundary parameter $\beta>0$. Consider the map $ \wedge: L^2 (\Gamma) \longrightarrow L^2 (\Omega) $ related to the Dirichlet problem \begin{equation}\label{harm} \begin{cases} \Delta u=0\quad\text{ in } \Omega,\\ u |_\Gamma= f \quad \text{ on } \Gamma, \end{cases} \end{equation} which associates to any $ f \in L^2 (\Gamma)$ its harmonic extension, that is, the unique function $u$ in $ L^2(\Omega)$ satisfying \eqref{harm}. This map is well defined from $L^2(\Gamma)$ (respectively, $H^{\frac{1}{2}}(\Gamma)$) to $L^2(\Omega)$ (respectively, $H^1(\Omega)$). See \cite[p. 320 Prop $1.7$]{Taylor} for more details. By $H^s(\Omega)$ and $H^s(\Gamma)$, we denote the Sobolev spaces of order $s$ on $\Omega$ and $\Gamma$, and $u |_\Gamma\in H^\frac{1}{2}(\Gamma)$ stands for the trace of $u\in H^1(\Omega)$ at the boundary $\Gamma$. This will also be denoted by $u$, if there is no ambiguity. Then the Dirichlet-to-Neumann operator is defined by \begin{align} \mathrm{N_D}: H^\frac{1}{2}(\Gamma) & \longrightarrow H^{-\frac{1}{2}} (\Gamma)\\ f&\longmapsto \partial_\mathbf{n} (\wedge f)\nonumber. \end{align} Here $\partial_\mathbf{n} (\wedge f)\in H^{-\frac{1}{2}}(\Gamma) $ stands for the normal derivative at the boundary $\Gamma$ of $\Omega$, with $\mathbf{n}$ the normal vector pointing outwards. We define the operator $\mathrm{B}_0\stackrel{\scriptscriptstyle\text{def}}= \mathrm{N_D}$.
For $\beta>0$, we define $\mathrm{C}_\beta u \stackrel{\scriptscriptstyle\text{def}}= \beta\Delta_\Gamma u$ for all $u\in H^1(\Gamma)$ and $$\mathrm{B}_\beta \stackrel{\scriptscriptstyle\text{def}}= \mathrm{B}_0+\mathrm{C}_\beta.$$ The sequence of eigenvalues $\{ \lambda_{W,k}^{\beta}\}_{k=0}^\infty $ given in \eqref{spectrum} can be interpreted as the spectrum of the operator $\mathrm{B}_\beta$ and admits the following min-max characterisation (see e.g., \cite[Thm 1.2]{sandgren} and \cite[(2.33)]{Gal2015}). Let $\mathfrak{V}(k) $ denote the set of all $ k$-dimensional subspaces of $ \mathfrak{V}_\beta$, which is defined by \begin{align} \mathfrak{V}_0&\stackrel{\scriptscriptstyle\text{def}}=\{(u,u_\Gamma)\in H^1(\Omega)\times H^\frac{1}{2}(\Gamma): u_\Gamma=u|_\Gamma \},\\ \mathfrak{V}_\beta&\stackrel{\scriptscriptstyle\text{def}}=\{(u,u_\Gamma)\in H^1(\Omega)\times H^1(\Gamma): u_\Gamma=u|_\Gamma \}, \quad \text{for } \beta>0. \end{align} Of course, for all $\beta>0$, we have $ \mathfrak{V}_\beta\subset \mathfrak{V}_0$. For every $k\in\mathbb{N}$, the $k$th eigenvalue of the Wentzel-Laplace operator $B_\beta$ satisfies \begin{equation}\label{char} \lambda_{W,k}^{\beta}={\underset{V\in \mathfrak{V}(k)}{\min} }\underset {0\neq u\in V} {\max} R_\beta(u), \quad k\geqslant 0, \end{equation} where $R_\beta(u) $, the Rayleigh quotient for $\mathrm{B}_\beta$, is given by \begin{equation}\label{rayleigh} R_\beta(u) \stackrel{\scriptscriptstyle\text{def}}=\frac{\int_\Omega{|\nabla u|^2 \mathrm{d}_M+\beta\int_{\Gamma}{|\nabla_\Gamma u|^2 \mathrm{d}_\Gamma}}}{\int_{\Gamma}{u^2 \mathrm{d}_\Gamma}}, \quad \text{for all } u\in \mathfrak{V}_\beta\backslash\{0\}. \end{equation} The eigenvalues of the Dirichlet-to-Neumann map $\mathrm{B}_0=\mathrm{N_D}$ are those of the well-known Steklov problem: \begin{equation}\label{Steklov} \begin{cases} \Delta u=0, & {\rm in \ }\Omega,\\ \partial_\mathbf{n} u=\lambda^S u, & {\rm on \ }\Gamma. \end{cases} \end{equation} A good discussion of this problem can be found in \cite{GP}. The Steklov eigenvalues are then $ \{ \lambda_{W,k}^{0}\}_{k=0}^\infty$, which we shall denote equivalently by $\{ \lambda_{S,k}\}_{k=0}^\infty $. They obey the following Weyl-type asymptotic formula: \begin{equation} \label{WeylS} \lambda_{S,k}= C_n k^{\frac{1}{n-1}}+o(k^{\frac{1}{n-1}}),\quad k\rightarrow\infty, \end{equation} where $C_n=\frac{2\pi}{\left(\omega_{n-1} \mathrm{Vol}(\Gamma)\right)^{\frac{1}{n-1}}} $. We refer the reader to \cite[Section 4]{sandgren}. For $\beta>0$, the Weyl asymptotics for $ \lambda_{W,k}^{\beta}$ can be deduced directly from properties of perturbed forms, using the asymptotic behaviour of the spectrum of $C_\beta$, \begin{equation} \lambda_{C_\beta,k}=\beta C_n^2k^{\frac{2}{n-1}}+o(k^{\frac{2}{n-1}}),\quad k\rightarrow\infty, \end{equation} and a result of H\"ormander: \begin{equation} \label{WeylW} \lambda_{W,k}^{\beta}=\beta C_n^2k^{\frac{2}{n-1}}+o(k^{\frac{2}{n-1}}),\quad k\rightarrow\infty. \end{equation} See in particular \cite[Prop 2.7, (2.37)]{Gal2015}. Let $n\geqslant 2$ and $ (M, g)$ be an $n$-dimensional compact connected Riemannian manifold with smooth boundary $\Gamma$. We denote by $\{ \eta_{k}\}_{k=0}^\infty $ the eigenvalues of $\Delta_\Gamma$. From here on we always assume that $\beta>0$ is fixed. Let $n\in\mathbb{N}_{\geqslant 2}$, $\overline{h}\in\mathbb{R}_{>0}$ and $K_-,K_+,\kappa_-,\kappa_+\in \mathbb{R} $.
Throughout the paper, we designate by \begin{enumerate} \item $\mathfrak{M}^n(K_-,\kappa_-)$ the class of all smooth compact Riemannian manifolds $M$ of dimension $n$ with non-empty boundary, such that the Ricci curvature of $M$ is bounded from below by $(n-1)K_-$ and the mean curvature of the boundary by $\kappa_-$: \begin{itemize} \item $ (n-1)K_- \leqslant \mathrm{Ric} \text{ in } M_{\overline{h}},$ \item $ \kappa_-\leqslant H_0.$ \end{itemize} \item $\mathfrak{M}^n(K_-,K_+,\kappa_-,\kappa_+)$ the class of all smooth compact Riemannian manifolds $M$ of dimension $n$ whose sectional curvature $K$ and principal curvatures $\{\kappa_i,\ i=1,\ldots,n-1\}$ of the boundary $\Gamma $ are bounded in the following way: \begin{itemize} \item $K_-\leqslant K\leqslant K_+$ in $M_{\overline{h}},$ \item $\kappa_-\leqslant\kappa_1\leqslant\kappa_2\leqslant\ldots\leqslant\kappa_{n-1}\leqslant\kappa_+.$ \end{itemize} \end{enumerate} Here $\overline{h}$ stands for the rolling radius of $M$; the definition is given in \eqref{ss19022020}, and $M_{\overline{h}}:=\{x\in M,\ d_\Gamma(x)<\overline{h}\}$ designates the tubular neighbourhood of $\Gamma$ of width $\overline{h}$, where $ d_\Gamma$ is the distance function from the boundary $\Gamma$. By the characterisation \eqref{char} and the min-max principle for the eigenvalues $\{\eta_k \}_{k=0}^\infty$ and $\{\lambda^S_k\}_{k=0}^\infty$, we immediately have for every $k\in \mathbb{N}$ that $$\lambda^S_k+\beta\eta_k\leqslant \lambda_{W,k}^{\beta}. $$ Our results provide comparisons in the opposite direction. \begin{thm}\label{mainth1} There exists an explicit constant $\overline{B} $ such that on each manifold $M$ in the class $\mathfrak{M}^n(K_-,K_+,\kappa_-,\kappa_+)$ the following inequality is satisfied: \begin{equation} \lambda_{W,k}^{\beta} \leqslant \left[\frac{1}{\sqrt{\beta}}+\sqrt{\overline{B}+ \beta\eta_k} \right]^2,\qquad \forall~k\in\mathbb{N}. \end{equation} \end{thm} \begin{thm}\label{mainth2} There exists an explicit constant $\overline{A} $ such that on each manifold $M$ in the class $\mathfrak{M}^n(K_-,K_+,\kappa_-,\kappa_+)$ the following inequality is satisfied: \begin{equation} \lambda_{W,k}^{\beta}\leqslant(1+\beta\overline{A})\lambda^S_k+ \beta(\lambda^S_k)^2,\qquad \forall~k\in\mathbb{N}. \end{equation} \end{thm} \begin{thm}\label{AvecReilly} Let $\kappa_->0$. Then each manifold in the class $\mathfrak{M}^n(K_-,\kappa_-)$ satisfies the following inequality: \begin{equation} \lambda_{W,k}^{\beta} \leqslant \frac{1}{2(n-1)\kappa_-}\left[\left(K_-+2\eta_k \right)+\sqrt{(K_-+2\eta_k)^2-4\kappa_-(n-1)\eta_k} \right]+\beta\eta_k. \end{equation} \end{thm} To prove our main results, analytic properties of distance functions serve as a key tool. This perspective is inspired by \cite{provstub}, \cite{xiong2017} and \cite{colbGirHas}. Let $(M,g)$ be a smooth compact $n$-dimensional Riemannian manifold with non-empty boundary $\Gamma$. We define the distance function from $\Gamma$ by $d_\Gamma(x) = \inf\{\mathrm{dist}(x, y),\ y\in \Gamma \}$ for every $x$ in $M$, where $\mathrm{dist}$ is the distance in $M$ induced by the metric tensor $g$. The level sets of $d_\Gamma$ will be denoted by $\Gamma_r:=\{x\in M,\ d_\Gamma(x)=r \}$. Assume that $r>0$ is small enough that $\Gamma_r=d_\Gamma^{-1}(r)$ consists entirely of regular points for $d_\Gamma$. In this case $\Gamma_r$ is clearly a hypersurface.
The interesting fact about the distance function $d_\Gamma$ is that the inward normal vector field to $\Gamma_r$ is simply $\nabla d_\Gamma$, and thus the second fundamental form is the restriction of the Hessian $\mathrm{Hess~}(d_\Gamma)$ to $T\Gamma_r$; so the eigenvalues of $\mathrm{Hess~} d_\Gamma$ restricted to $T\Gamma_r$ are exactly the principal curvatures of $\Gamma_r$ computed with respect to this inward normal. The next section is devoted to sketching relevant properties of distance functions. \section{Geometric properties of distance functions} \subsection{Geometry of a distance function to a hypersurface} If $(M,g)$ is a Riemannian manifold and $\Sigma$ a compact smooth hypersurface in $M$, the geometric quantities that we are interested in here correspond to the eigenvalues of the Hessian of the (squared) distance function from $\Sigma$, $$d_\Sigma (x):=\mathrm{dist}(x,\Sigma)=\min\{\mathrm{dist}(x,y):~ y\in \Sigma \},$$ for $x$ in a small tubular neighbourhood of $\Sigma$. These eigenvalues are related to the geometry of $\Sigma$ inasmuch as they are exactly the principal curvatures of $\Sigma$. The restriction of the Hessian of $d_\Sigma$ to $T\Sigma$ coincides with the second fundamental form $\Pi_\Sigma$ of $\Sigma$. If $\Sigma$ is compact, of co-dimension one and $2$-sided in $M$, one can work instead with the signed distance function to $\Sigma$, defined as the distance from $\Sigma$ taken with a plus or minus sign according to the side of $\Sigma$ on which we stand. However, since the signed distance function may make no sense, as in higher co-dimension problems, a common substitute is the squared distance function $$\eta(x):=\frac{1}{2}d^2_{\Sigma}(x). $$ See for instance \cite{AmbroisioMantegazza}. The factor $\frac{1}{2}$ is introduced for convenience, to simplify several identities. Our starting point, with the next proposition, is to set forth geometric concepts related to a specific smooth function, with the goal of arriving at curvatures on the Riemannian manifold. Let $(M,g)$ be a complete Riemannian manifold, $\Omega \subset M$ an open subset and $f:\Omega \subset M\longrightarrow \mathbb{R}$. Let $\nabla f$ denote the gradient and $\mathrm{Hess~} f$ the Hessian of $f$. Let $\nabla$ denote the Levi-Civita connection and $S(X):= \nabla_X \nabla f$ be the $(1, 1)$-tensor corresponding to $\mathrm{Hess~} f$. Let $a\in \mathbb{R}$ and $\Sigma \subset f^{-1}(a)$ open. We assume that $\Sigma$ consists entirely of regular points for $f$. In this case, $\Sigma$ is clearly a hypersurface. The second fundamental form of a hypersurface $\Sigma \subset M$ with a fixed unit normal vector field $N: \Sigma \longrightarrow T^\perp\Sigma =\{ v \in T_pM:~ p \in \Sigma,\ v \perp T_p\Sigma \}$ is defined as the $(0,2)$-tensor $\Pi(X,Y)= g(\nabla_X N, Y)$ on $\Sigma$. Since $N$ has unit length, $0=Xg(N,N)=2g(\nabla_X N,N)$, so $\nabla_X N$ is tangent to $\Sigma$ for every $X\in T\Sigma$. One can also define $\Pi(X, \cdot) = g(\nabla_X N, \cdot)=:g(\mathbf{S}(X),\cdot)$ for $X\in T\Sigma$, where $\mathbf{S}$ stands for the shape operator. We start by relating the second fundamental form of $\Sigma$ to $f$. \begin{prop}\label{prop2101} The following properties hold: \begin{enumerate} \item $ \frac{\nabla f}{|\nabla f|}$ is a unit normal to $\Sigma$, \item $\Pi(X,Y)=\frac{1}{|\nabla f|}\mathrm{Hess~} f (X,Y)$ for all $X, Y \in T\Sigma$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item If $X\in T\Sigma$ is a tangent vector field to $\Sigma$, then $D_Xf=g(\nabla f,X)=0$, so $ \frac{\nabla f}{|\nabla f|}$ is perpendicular to $\Sigma$.
It is a unit normal to $\Sigma$ since it has unit length. \item Take $N=\frac{\nabla f}{|\nabla f|}$ as the unit normal from the previous point. Then \begin{align*} g(\nabla_X N, Y)&=g\left(\nabla_X \left(\frac{\nabla f}{|\nabla f|}\right), Y\right)\\ &=\frac{1}{|\nabla f|}g\left(\nabla_X \nabla f, Y\right)+X\left(\frac{1}{|\nabla f|}\right)g\left(\nabla f, Y\right)\\ &=\frac{1}{|\nabla f|}g\left(\nabla_X \nabla f, Y\right), \end{align*} since $Y\in T\Sigma$ is perpendicular to $\nabla f$. \end{enumerate} \qedhere \end{proof} In the same way, we now briefly recall some useful results connecting analytical properties of distance functions and the geometric invariants of the manifold $M$. Let $\Omega\subset (M,g)$ be an open domain. We call a distance function on $\Omega$ any smooth function $f:\Omega\longrightarrow \mathbb{R}$ which solves the Hamilton-Jacobi equation $$|\nabla f|\equiv 1.$$ Level sets of a distance function are hypersurfaces (i.e. $(n-1)$-dimensional submanifolds of $M$). In what follows, we use the notation $\Gamma_s:=\{x\in \Omega,\ f(x)=s\}$ for the level sets of $f$. From Proposition \ref{prop2101}, one has on each level set that $\mathrm{Hess~} f=\Pi$, since $|\nabla f|^2=1$. The $(1,1)$-tensor $S(X):=\nabla_X\nabla f$ thus corresponds both to $\mathrm{Hess~} f$ and to the shape operator; it determines how the unit normal to $\Gamma_s$ changes. The intuitive idea behind the operator $S$, which stands here for the second derivative of $f$ but also for the shape operator, is to evaluate how the induced metric on $\Gamma_s$ changes. Let $(M,g)$ be a smooth compact $n$-dimensional Riemannian manifold with non-empty boundary $\Gamma$. A particular distance function we are interested in is $d_\Gamma(x)= \inf\{ d(x,y):~ y\in \Gamma\}$, the distance function from the boundary, where $d$ is the distance coming from the metric tensor $g$. Let $s >0$ be small enough that $\Gamma_s:=d^{-1}_\Gamma(s)$ consists entirely of regular points for $d_\Gamma$. In this case, $\Gamma_s$ is the hypersurface bounding the submanifold $\Omega_s=\{x\in M : d_\Gamma(x)\geqslant s\}$. Let $p\in \Gamma_s$ and $\mathrm{T}_p\Gamma_s$ denote the tangent space of $\Gamma_s$ at $p$. A given vector $v\in {\mathrm {T}}_{p}M$ is normal to $\Gamma_s$ if $g(v,v')=0$ for all $v'\in {\mathrm {T}}_{p}\Gamma_s$ (i.e. $v$ is orthogonal to ${\mathrm {T}}_{p}\Gamma_s$). The normal space to $\Gamma_s$ at $p$ is the space ${\mathrm {T}}_{p}^{\perp}(\Gamma_s)$ of all such $v$. From now on, we choose the unit normal vector field in such a way that the principal curvatures of the Euclidean sphere are taken to be positive. The following proposition links together the function $d_\Gamma$ and geometric data of $\Gamma_s$. \begin{prop}\label{prop321} Let $X$ and $Y$ be in $T\Gamma_s$. Then the following properties hold. \begin{enumerate}[label=(\roman*)] \item $|\nabla d_\Gamma|=1$, \item $\nabla d_\Gamma$ is a unit normal to $\Gamma_s$, \item $\Pi(X,Y)= \mathrm{Hess~} d_\Gamma(X,Y) $.\label{p3} \end{enumerate} \end{prop} The first point can be found in \cite[Thm 3.1]{MantegazzaMennucci2003}. From there, the rest of the proof is an immediate consequence of Proposition \ref{prop2101} after noting that $|\nabla d_\Gamma|=1$. While dealing with derivatives, regularity matters are to be kept in mind. In the next subsection, we present a summary of basic facts about distance functions and smoothness. We refer to Chavel's book \cite[Part \S III.2]{Chavel1993} for further details.
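In the guiding example of the Euclidean unit ball, $d_\Gamma(x)=1-|x|$ indeed solves $|\nabla d_\Gamma|\equiv 1$; it is smooth on $M\setminus\{0\}$ but fails to be differentiable at the origin, which lies at distance $1$ from every boundary point, and the level sets $\Gamma_s$ are smooth spheres precisely for $s\in(0,1)$. This is the typical situation: the smoothness of a distance function breaks down on the cut locus, which we now describe.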
\subsection{Regularity of distance functions}\label{ss19022020} Let $(M,g)$ be a complete Riemannian manifold and $p\in M$; we denote by $T_{p}M$ the tangent space at $p$. It is known that the distance function from the point $p$, $\mathrm{dist}(p,\cdot)$, is smooth in a punctured neighbourhood of $p$. Proposition \ref{Chardist} characterises the failure of $\mathrm{dist}(p,\cdot)$ to be smooth at some point, but before its statement, we need to introduce some necessary terminology. Let $v \in T_pM$ and let $c_v:[0,1]\longrightarrow M$ be the unique geodesic satisfying $c_v(0)=p$ and $\dot c_v(0)=v$. The exponential map from $p$ is defined by $$\exp_p(v)=c_v(1).$$ It is a standard result that, for sufficiently small $v$ in $T_{p}M$, the curve defined by the exponential map, $ \gamma:[0,1]\ni t\longmapsto\exp _{p}(tv)\in M$, is a segment (i.e. a minimizing geodesic), and is the unique minimizing geodesic connecting its two endpoints. Let $\Omega_{p,q}:=\{c:[0,1]\longrightarrow M;~ c \text{ is piecewise } C^1 \text{ and } c(0)= p,\ c(1) =q\}$, for every $q\in M$. A curve $c\in \Omega_{p,q}$ is a segment if it is parametrised proportionally to arc length (i.e. $|\dot{c}|$ is constant) and its length $L(c):=\int_0^1|\dot{c}|$ is equal to $\mathrm{dist}(p,q)$. The segment domain is defined by $$\mathrm{seg}(p)=\{ v \in T_pM;~ \exp_p(tv): [0,1]\longrightarrow M \text{ is a segment}\}.$$ We denote its interior by $\mathrm{seg}^0(p)$, $$\mathrm{seg}^0(p)=\{ tv;~ t\in [0,1),\ v\in \mathrm{seg}(p)\}.$$ \begin{prop}\label{Chardist} \begin{enumerate} \item Each element of $\exp_p\left(\mathrm{seg}^0(p)\right)$ is joined to $p$ by a unique segment, meaning that $\exp_p$ is injective on $\mathrm{seg}^0 (p)$. \item $\exp_p$ has no singularity in $\mathrm{seg}^0(p)$. \item Every $v \in \mathrm{seg}(p) \backslash \mathrm{seg}^0(p)$ satisfies at least one of the two following conditions: \begin{itemize} \item There exists $v'\in \mathrm{seg}(p)\backslash \{v\}$ such that $\exp_p(v')= \exp_p(v)$, or \item $D \exp_p$ is singular at $v$. \end{itemize} \end{enumerate} \end{prop} The cut locus of $p$ in the tangent space is defined to be the set $\mathrm{seg}(p) \backslash \mathrm{seg}^0(p)$. The proposition states that it is the set of all $v \in T_{p}M$ such that the curve $\gamma: [0,1]\ni t\longmapsto\exp _{p}(tv)\in M$ is a segment, while its extension past $t=1$ is no longer minimizing. It is then natural to define the cut locus of $p$ in $M$ as the image of the cut locus of $p$ in the tangent space under the exponential map at $p$. This leads to the next corollary, which links together the cut locus of $p$ in $M$, the points in the manifold where the geodesics starting at $p$ stop being minimizing, and the lack of smoothness of the distance function from $p$. \begin{cor} Let $c :[0,\infty)\longrightarrow M$ be a geodesic such that $c(0)= p$ and $\dot{c}(0)=v$. Let $$\mathrm{cut}\left(v\right)= \sup\{ t;~ c\mid_{[0,t]} \text{ is a segment}\};$$ then the distance function from $p$ satisfies the following: \begin{enumerate} \item for every $ 0<t < \mathrm{cut}(v) $, $\mathrm{dist}(p,\cdot)$ is smooth at $c(t)$, \item $\mathrm{dist}(p,\cdot)$ is not smooth at $ c(\mathrm{cut}(v))$. \end{enumerate} In particular, irregularity of $\mathrm{dist}(p,\cdot)$ at some point $x$ is due to $\exp_p:\mathrm{seg}(p)\longrightarrow M$ either failing to be one-to-one at $x$ or having $x$ as a critical value.
\end{cor} The smallest distance from $p$ to the cut locus is called the injectivity radius of the exponential map at $p$, denoted $\mathrm{inj}_p$; it corresponds to the largest radius $r$ for which $\exp_p:T_pM \supset B(0,r) \longrightarrow M$ is a diffeomorphism onto its image. This leads to a natural definition of cut locus for a submanifold $\Sigma$ of $M$ using the normal exponential map. The normal exponential map is the map $\exp_\mathbf{n} :(0,\infty)\times \Sigma\ni(t,p)\longmapsto \exp_p t\mathbf{n}(p) \in M$, where $\mathbf{n}(p)$ is the unit normal vector to $\Sigma$ at $p$. Let $\mathbf{n}(\Sigma)$ denote the total space of the normal bundle of $\Sigma$. If $\Sigma$ is a single point $p$ in $M$, then $\mathbf{n}(\Sigma)$ is the tangent space $T_pM$ of $M$ at $p$. If we denote $\mathbf{n}_t(\Sigma):=\{v\in\mathbf{n}(\Sigma): ||v||<t \}$ where $t>0$, then the normal exponential reads $\exp_\mathbf{n} :\mathbf{n}(\Sigma)\longrightarrow M$ and is nothing but the restriction of the exponential $\exp: TM\longrightarrow M$ to $\mathbf{n}(\Sigma)$. The global injectivity radius of $\exp_\mathbf{n} $ is defined to be the supremum over all $t>0$ such that $\exp_\mathbf{n} \mid_{\mathbf{n}_t(\Sigma)}$ is an embedding. The injectivity radius of the distance to the boundary is often called the rolling radius of $M$. We denote it throughout the paper by $\mathrm{roll}(M)$. Point \ref{p3} of Proposition \ref{prop321} emphasises that, at any $p\in \Gamma_r$, the eigenvalues of the Hessian $\mathrm{Hess~} d_\Gamma$ are $$\{0,-\kappa_1,- \kappa_2, \ldots, -\kappa_{n-1}\}, $$ where $\kappa_1, \kappa_2, \ldots, \kappa_{n-1}$ are the eigenvalues of the shape operator $S:\mathrm{T}\Gamma_r\ni X\longmapsto \nabla_X\mathbf{n}\in \mathrm{T}\Gamma_r$, with $\mathbf{n}=-\nabla d_\Gamma$ the unit normal vector field pointing towards the boundary $\Gamma$. \subsection{Radial curvature equation} Let $(M,g)$ be a complete Riemannian manifold and $r$ a smooth distance function on an open subset of $M$. A Jacobi field for $r$ is a smooth vector field $J$ that is invariant under the flow of $\nabla r$. In other words, it satisfies the Jacobi equation \begin{equation}\label{Jacobi} [J,\nabla r]=0. \end{equation} A particularly interesting fact about these Jacobi fields is that they can be used to compute the Hessian of the function $r$. Let $J$ be a Jacobi field for $r$; then one has \begin{equation}\label{JacobieShapeOp} \nabla_{\nabla r}J=\nabla_J \nabla r=\nabla_J N=S(J). \end{equation} Let $R$ denote the Riemannian curvature tensor defined by $$R(X,Y)Z=[\nabla_X,\nabla_Y]Z-\nabla_{[X,Y]}Z$$ for all vector fields $X$, $Y$ and $Z$. From the above equalities \eqref{Jacobi} and \eqref{JacobieShapeOp} we get the following equations. \begin{thm} Let $s\in \mathbb{R}_{\geqslant 0}$ be such that $\Gamma_s\subset r^{-1}(s)$ consists of regular points for $r$. Then we have: \begin{equation} \label{RiccatiS} \nabla_{\nabla r} S+S^2+R_{\nabla r}=0 \end{equation} and \begin{equation}\label{RiccatiH} (\nabla_{\nabla r}\mathrm{Hess~} r)(X,Y)+\mathrm{Hess~}^2r(X,Y)+R(X,\nabla r,\nabla r,Y)=0, \end{equation} for every $X, Y$ in $T\Gamma_s$. Here $S^2$ corresponds to $S\circ S$, and $R_{\nabla r}:=R(\cdot, \nabla r)\nabla r$ is the directional curvature operator (also called the tidal force operator).
The curvature tensor is changed to a $(0, 4)$-tensor as follows: $$R(X,Y,Z,W)= g(R(X,Y)Z,W).$$ \end{thm} \begin{proof} Notice that $\nabla_{\nabla r}\nabla r=0$, since for every vector field $X$ $$0=X\big(g(\nabla r,\nabla r)\big)=2g(\nabla_X\nabla r,\nabla r)=2g(\nabla_{\nabla r}\nabla r,X),$$ where the last equality uses the symmetry of $\mathrm{Hess~} r$. One has \begin{equation*} R(J,\nabla r)\nabla r=\nabla_{J,\nabla r}^2\nabla r-\nabla_{\nabla r,J}^2\nabla r. \end{equation*} Together with \begin{equation*} \nabla_{J,\nabla r}^2\nabla r=\nabla_J(\nabla_{\nabla r}\nabla r)-\nabla_{\nabla_J\nabla r}\nabla r=-S\big(S(J)\big) \end{equation*} and \begin{equation*} \nabla_{\nabla r,J}^2\nabla r=\nabla_{\nabla r}(\nabla_J\nabla r)-\nabla_{\nabla_{\nabla r}J}\nabla r=\nabla_{\nabla r}\big(S(J)\big)-S\big(S(J)\big), \end{equation*} we get $R(J,\nabla r)\nabla r=-\nabla_{\nabla r}\big(S(J)\big)$; expanding $\nabla_{\nabla r}\big(S(J)\big)=(\nabla_{\nabla r}S)(J)+S\big(S(J)\big)$ yields the first equality. The second formula follows by pairing the first one with $g$, using $\mathrm{Hess~} r(X,Y)=g(S(X),Y)$. \end{proof} \subsection{Application to the Rayleigh quotient for Wentzel eigenvalues} Let $(M,g)$ be a smooth compact $n$-dimensional Riemannian manifold with non-empty boundary $\Gamma$. Let $d_\Gamma$ denote the distance function to $\Gamma$ as in Proposition \ref{prop321} and $\mathrm{roll}(M)$ the rolling radius of $M$, i.e. the injectivity radius of $d_\Gamma$. Every $h\in(0,\mathrm{roll}(M))$ defines a so-called tubular neighbourhood of $\Gamma$: $$M_{h}:=\{p\in M: d_\Gamma(p)< h\}.$$ It is the subset of $M$ bounded by $\Gamma$ itself and the level hypersurface $\Gamma_h:=\{p\in M: d_\Gamma(p)= h\}$. From $d_\Gamma$ one can define a distance function on $M_h$ from each level hypersurface $\Gamma_h$ by $$d_h(p):=h-d_\Gamma(p).$$ Consider the modified distance function $\eta:=\frac{1}{2}(d_h)^2$; we have the following decomposition of the Rayleigh quotient for harmonic functions on $M$. \begin{prop}\label{PohWentzel} Let $u$ be a smooth harmonic function on $M$, normalised so that $\int_\Gamma u^2\,\mathrm{d}_\Gamma=1$. Then, for every $h\in(0,\mathrm{roll}(M))$, one has \begin{align}\nonumber R_\beta(u) &=\int_M{ |\nabla u|^2 \mathrm{d}_M}\\ &\quad\quad-\frac{\beta}{h}\int_{M_h}{ |\nabla u|^2\Delta \eta+2 \mathrm{Hess~} \eta(\nabla u,\nabla u)\,\mathrm{d}_M}\\ &\qquad\qquad+\beta\int_\Gamma (\partial_\mathbf{n} u)^2\mathrm{d}_\Gamma, \end{align} where $\Delta=-\div\circ\nabla$ denotes the (positive) Laplace operator. \end{prop} \begin{proof} Thanks to the normalisation, $R_\beta(u)=\int_M |\nabla u|^2 \,\mathrm{d}_M+\beta \int_\Gamma |\nabla_\Gamma u|^2 \,\mathrm{d}_\Gamma$. Notice that on $\Gamma$ one has $|\nabla u|^2=|\nabla_\Gamma u|^2+(\partial_\mathbf{n} u)^2$ and $\nabla\eta=h\,\mathbf{n}$, while $\nabla\eta$ vanishes on $\Gamma_h$. Then, applying the divergence theorem on $M_h$, we get \begin{align*} \int_\Gamma |\nabla u|^2 \,\mathrm{d}_\Gamma&=\frac{1}{h}\int_{M_h}{ \div(|\nabla u|^2\nabla\eta) \,\mathrm{d}_M}\\ &=\frac{1}{h}\int_{M_h}{ D_{\nabla \eta} (|\nabla u|^2)+|\nabla u|^2\div(\nabla \eta) \,\mathrm{d}_M}\\ &=\frac{1}{h}\int_{M_h}{ 2 g(\nabla_{\nabla \eta}\nabla u,\nabla u)+|\nabla u|^2\div(\nabla \eta) \,\mathrm{d}_M} \end{align*} and \begin{align}\label{eq1} R_\beta(u)=&\int_M{ |\nabla u|^2 \,\mathrm{d}_M}-\beta \int_\Gamma { (\partial_\mathbf{n} u)^2 \,\mathrm{d}_\Gamma}\nonumber\\ &+\frac{\beta}{h}\int_{M_h}{ 2 \mathrm{Hess~} u(\nabla\eta,\nabla u)+|\nabla u|^2\div(\nabla \eta) \,\mathrm{d}_M}.
\end{align} Moreover, we have $$\mathrm{Hess~} u(\nabla\eta,\nabla u)=\div\big(g(\nabla\eta,\nabla u)\nabla u\big)-\mathrm{Hess~} \eta(\nabla u,\nabla u).$$ Indeed, since $u$ is harmonic ($\div(\nabla u)=0$), \begin{align*} \div\big(g(\nabla\eta,\nabla u)\nabla u\big)&=D_{\nabla u}g(\nabla\eta,\nabla u)+g(\nabla\eta,\nabla u)\div(\nabla u)\\ &=g(\nabla_{\nabla u}\nabla\eta,\nabla u)+g(\nabla\eta,\nabla_{\nabla u}\nabla u)\\ &=\mathrm{Hess~}\eta(\nabla u,\nabla u)+\mathrm{Hess~} u(\nabla \eta,\nabla u). \end{align*} Hence, replacing in \eqref{eq1}, one has \begin{align*} R_\beta(u) &=\int_M{ |\nabla u|^2 \,\mathrm{d}_M}-\beta \int_\Gamma { (\partial_\mathbf{n} u)^2 \,\mathrm{d}_\Gamma}\\ &+\frac{\beta}{h}\int_{M_h}{ 2 \left[ \div\big(g(\nabla\eta,\nabla u)\nabla u\big)-\mathrm{Hess~} \eta(\nabla u,\nabla u)\right]+|\nabla u|^2\div(\nabla \eta) \,\mathrm{d}_M}\\ &=\int_M{ |\nabla u|^2 \,\mathrm{d}_M}+\beta\int_\Gamma (\partial_\mathbf{n} u)^2 \,\mathrm{d}_\Gamma\\ &\quad-\frac{\beta}{h}\int_{M_h}{ |\nabla u|^2\Delta \eta+2 \mathrm{Hess~} \eta(\nabla u,\nabla u) \,\mathrm{d}_M}, \end{align*} where the last line is obtained by applying the divergence theorem to $g(\nabla\eta,\nabla u)\nabla u$, whose flux through the boundary equals $h\int_\Gamma(\partial_\mathbf{n} u)^2\,\mathrm{d}_\Gamma$, and combining the terms in $\int_\Gamma { (\partial_\mathbf{n} u)^2 \,\mathrm{d}_\Gamma}$. \qedhere \end{proof} \section{Comparison estimates} As already said, the estimate in our main theorem follows from a geometric Hessian comparison theorem. The idea is to compare a geometric quantity on a Riemannian manifold with the corresponding quantity on a model space. \subsection{Constant curvature specifics} If $S$ is a surface with constant Gaussian curvature $K$, then \begin{enumerate} \item $S$ is locally isometric to a sphere of radius $a$ if $K=\frac{1}{a^2}$, \item $S$ is locally isometric to the plane if $K=0$, \item $S$ is locally isometric to a pseudo-sphere determined by $a$ if $K=-\frac{1}{a^2}$. \end{enumerate} We denote the hyperbolic space of curvature $\kappa<0$, the Euclidean space ($\kappa=0$), and the sphere of curvature $\kappa>0$ either jointly by $M_\kappa^n$ or, if the sign of $\kappa$ is specified, by $\mathbb{H}^n_\kappa$, $\mathbb{R}^n$, $\mathbb{S}_\kappa^n$: \begin{defn}[Model spaces.] Consider $\mathbb{R}^n$ endowed with the Euclidean metric. For any $R>0$, $\mathbb{S}^n(R) := \{p\in \mathbb{R}^{n+1},~ |p| = R\}$ denotes the metric sphere of radius $R$ endowed with the Euclidean metric induced from $\mathbb{R}^{n+1}$. Likewise, denote by $\mathbb{H}^n(R):= \{(p_0, \ldots, p_n) \in\mathbb{R}^{n+1},~ -p_0^2+ p_1^2+ \ldots + p_n^2 =-R^2\}$ the hyperbolic space with the restriction of the Minkowski metric of $\mathbb{R}^{n+1}$. An $n$-dimensional model space $M_\kappa^n$ is a Riemannian manifold with constant curvature $\kappa$, for some $\kappa\in\mathbb{R}$. We think of $M_\kappa^n$ as \begin{equation*} M_\kappa^n:= \begin{cases} \mathbb{S}^n_{\frac{1}{\sqrt{\kappa}}}:= \mathbb{S}^n(\frac{1}{\sqrt{\kappa}})\quad & \text{if }\kappa>0,\\ \mathbb{R}^n\quad & \text{if }\kappa=0,\\ \mathbb{H}_{\frac{1}{\sqrt{-\kappa}}}^n:=\mathbb{H}^n\big(\tfrac{1}{\sqrt{-\kappa}}\big)\quad & \text{if }\kappa<0, \end{cases} \end{equation*} since any complete simply connected $n$-dimensional Riemannian manifold of constant curvature is isometric to one of the above model spaces. \end{defn} Let $\Sigma$ be a compact smooth hypersurface of a model space $M$. A point $p$ of $\Sigma$ is umbilical if the principal curvatures of $\Sigma$ at $p$ are all equal, and $\Sigma$ is called umbilical if every point is umbilical.
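The basic umbilical examples are geodesic spheres in the model spaces: in $\mathbb{R}^n$, the sphere of radius $R$ has all principal curvatures equal to $\frac{1}{R}$ (with our sign convention), and its parallel hypersurface at inner distance $t$ is again a sphere, of radius $R-t$, hence umbilical with principal curvatures $\frac{1}{R-t}$.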
Assume that $M=M_K^n$ is the model space of constant sectional curvature $K$ with umbilical boundary having principal curvature $\kappa$. Then the shape operator of the parallel hypersurfaces satisfies $$ \begin{cases} a'+a^2+K=0\\ a(0)=-\kappa. \end{cases} $$ The idea is to examine the situation in a general $M$ by comparison with this model case. \subsection{Riccati comparison}\label{SectionRiccatiComp} We start with a general result for differential inequalities. In the last subsection we encountered the Riccati equation as an equation for a field of endomorphisms (the shape operator) along the normal geodesics. Now we discuss the corresponding one-dimensional ODE of the same type. \begin{defn} Let $\kappa\in\mathbb{R}$ and let $u : I\longrightarrow\mathbb{R}$ be a smooth function on the interval $I \subset \mathbb{R}$. Then $u$ is a solution of \begin{enumerate} \item the Riccati inequality if $$u'+ u^2\leqslant -\kappa,$$ \item the Riccati equation if $$u' +u^2+\kappa=0,$$ \item the Jacobi equation if $$u''+ \kappa u = 0.$$ \end{enumerate} It is a maximal solution of these differential equations if it is a solution defined on the interval $I$ such that there is no solution defined on an interval $I'$ which properly contains $I$. \end{defn} The study of these (in)equations, together with their vector-field analogues, is a classic topic in differential geometry. For our comparison theory we require specific solutions of the Jacobi equation, which will be used throughout the text. We also gather some useful properties. \begin{defn} We define $sn_\kappa$ as the unique solution of the Jacobi equation satisfying $$sn_\kappa(0) = 0, \qquad sn'_\kappa(0)=1,$$ and $cs_\kappa$ as the unique solution of the Jacobi equation satisfying $$cs_\kappa(0) = 1,\qquad cs'_\kappa(0) = 0.$$ \end{defn} \begin{lem}[Properties of $sn_\kappa$ and $cs_\kappa$.] \label{Proprties2701} For any $\kappa\in\mathbb{R}$, the following holds. \begin{enumerate} \item \label{item27011} The solutions $sn_\kappa, cs_\kappa :\mathbb{R}\longrightarrow\mathbb{R}$ are explicitly given by \begin{equation*} sn_\kappa(t)= \begin{cases} \frac{1}{\sqrt{\kappa}}\sin(\sqrt{\kappa}t)\quad & \text{if }\kappa>0\\ t \quad & \text{if }\kappa=0\\ \frac{1}{\sqrt{-\kappa}}\sinh(\sqrt{-\kappa}t)\quad & \text{if }\kappa<0 \end{cases} \qquad cs_\kappa(t)= \begin{cases} \cos(\sqrt{\kappa}t)\quad & \text{if }\kappa>0\\ 1 \quad & \text{if }\kappa=0\\ \cosh(\sqrt{-\kappa}t)\quad & \text{if }\kappa<0. \end{cases} \end{equation*} \item\label{item27012} Define \begin{equation*} R_\kappa:= \begin{cases} \frac{\pi}{\sqrt{\kappa}}\quad & \kappa>0\\ \infty \quad & \kappa\leqslant 0 \end{cases} \quad \text{and} \quad L_\kappa:= \begin{cases} \frac{\pi}{2\sqrt{\kappa}}\quad & \kappa>0\\ \infty \quad & \kappa\leqslant 0. \end{cases} \end{equation*} Then we have $$sn_\kappa(t)>0 \text{ for all } t \in (0,R_\kappa),\qquad cs_\kappa(t) > 0 \text{ for all } t \in (0,L_\kappa ).$$ \item \label{item27013} For every $t\in \mathbb{R}$, $$ sn_\kappa(-t) = -sn_\kappa(t)\quad\text{ and }\quad cs_\kappa(-t) = cs_\kappa(t),$$ $$ 1=cs_\kappa^2(t)+\kappa sn_\kappa^2(t).$$ \item \label{item27014} These functions satisfy $$sn_\kappa'=cs_\kappa\quad\text{ and }\quad cs'_\kappa= -\kappa sn_\kappa.$$ \end{enumerate} \end{lem} \begin{proof} \eqref{item27011} It suffices to check that these functions solve the respective initial value problems. The points \eqref{item27012} and \eqref{item27013} follow immediately from \eqref{item27011}.
For \eqref{item27014}, by computation we have \begin{equation*} \begin{cases} (sn'_\kappa)''+\kappa sn'_\kappa=(sn''_\kappa)'+\kappa sn'_\kappa=0\\ sn'_\kappa(0)=1=cs_\kappa(0)\\ sn''_\kappa(0)=-\kappa sn_\kappa(0)=0=cs'_{\kappa}(0), \end{cases} \end{equation*} and \begin{equation*} \begin{cases} (cs'_\kappa)''+\kappa cs'_\kappa=(cs''_\kappa)'+\kappa cs'_\kappa=0\\ cs'_\kappa(0)=0=-\kappa sn_\kappa(0)\\ cs''_\kappa(0)=-\kappa cs_\kappa(0)=-\kappa=-\kappa sn'_{\kappa}(0). \end{cases} \end{equation*} The result follows from the uniqueness of solutions of initial value problems. \qedhere \end{proof} \begin{prop}[Riccati comparison principle]\label{riccacomp} Let $\rho_1$ and $\rho_2$ be two smooth functions $\rho_{1,2}:(0,b)\longrightarrow \mathbb{R}$ such that $\rho_i(t)=\frac{1}{t}+O(t)$, $i=1,2$. If $\rho_1$ and $\rho_2$ satisfy $$\dot{\rho}_1+\rho^2_1 \leqslant\dot{\rho}_2+\rho^2_2,$$ then $$\rho_1\leqslant\rho_2$$ (cf. \cite[(1.6.1)]{Karcher1989}). Here we write $f(x)=O(g(x)){\text{ as }}x\to 0$ if and only if there exist positive numbers $\delta$ and $M$ such that $$|f(x)|\leqslant Mg(x){\text{ when }}0<x<\delta.$$ \end{prop} \begin{proof} Define $\Phi_i(t)=t\exp(\int_0^t (\rho_i(s)-1/s)ds)$ on $[0,b)$. Then $\Phi_i(0)=0$, and $\Phi_i$ is differentiable at zero with $\Phi_i'(0)=1$. On $(0,b)$, $\Phi_i$ is smooth and strictly positive; differentiating gives \begin{align*} \Phi'_i(t)&=\frac{d}{dt}\left[ t\exp(\int_0^t (\rho_i(s)-1/s)ds)\right] \\ &=\frac{\Phi_i(t)}{t}+t\frac{d}{dt}\left[ \exp(\int_0^t (\rho_i(s)-1/s)ds)\right] \\ &=\frac{\Phi_i(t)}{t}+t\frac{d}{dt}\left[ \int_0^t (\rho_i(s)-1/s)ds\right]\frac{\Phi_i(t)}{t} \\ &=\rho_i(t) \Phi_i(t), \end{align*} and hence $\Phi''_i(t)=(\rho'_i(t)+\rho_i(t)^2)\Phi_i(t)$. The function $\Phi_2'\Phi_1-\Phi_1'\Phi_2$ is increasing, since $$(\Phi_2'\Phi_1-\Phi_1'\Phi_2)'=\Phi_2''\Phi_1-\Phi_1''\Phi_2=\big((\rho'_2+\rho_2^2)-(\rho'_1+\rho_1^2)\big)\Phi_1\Phi_2\geq 0;$$ it follows that $$\Phi_2'(t)\Phi_1(t)-\Phi_1'(t)\Phi_2(t)\geqslant\Phi_2'(0)\Phi_1(0)-\Phi_1'(0)\Phi_2(0)=0.$$ Since $\Phi_i'=\rho_i\Phi_i$, the left-hand side equals $\big(\rho_2(t)-\rho_1(t)\big)\Phi_1(t)\Phi_2(t)$, and $\Phi_1\Phi_2>0$ on $(0,b)$, so that $\rho_2(t)- \rho_1(t)\geq 0.$ \qedhere \end{proof} Here is the complete version of Proposition \ref{riccacomp}: \begin{thm}[Riccati comparison principle]\label{RiccatiComparison} For $i=1,2$, we let $a_i:[0,m_{a_{i}})\longrightarrow \mathbb{R}$ be the maximal solution of the Riccati equation $a_i'+a_i^2+R_i=0$. $$\text{If } \begin{cases} a_1'+a_1^2\leqslant a_2'+a_2^2 ~(\text{i.e. } R_1\geqslant R_2) \\ \text{ and }\\ a_1(0)\leqslant a_2(0), \end{cases} \text{then }\quad \begin{cases} a_1(t)\leqslant a_2(t),\ t\in[0,m_{a_1}),\\ \text{ and }\\ m_{a_{1}}\leqslant m_{a_{2}}. \end{cases} $$ For a proof, we refer the reader to \cite[Prop 2.3]{Eschenburg1987}. \end{thm} \subsection{Sectional curvature comparison} We let $n\in\mathbb{N}_{\geqslant 2}$, $\overline{h}\in\mathbb{R}_{>0}$ and $K_-,K_+,\kappa_-,\kappa_+\in \mathbb{R} $. For the remainder of this section, we assume that\\ $M\in\mathfrak{M}^n(K_-,K_+,\kappa_-,\kappa_+) $ and $\overline{h}=\mathrm{roll}(M)$ is the rolling radius of $M$.
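As a simple illustration of Theorem \ref{RiccatiComparison}, consider the flat case $R_1=R_2=0$ with $a_1(0)=-1$ and $a_2(0)=-\kappa$ for some $\kappa\in(0,1)$. The maximal solutions are $$a_1(t)=\frac{-1}{1-t}\ \text{ on } [0,1) \qquad\text{and}\qquad a_2(t)=\frac{-\kappa}{1-\kappa t}\ \text{ on } \Big[0,\frac{1}{\kappa}\Big),$$ and one checks directly that $a_1\leqslant a_2$ and $m_{a_1}=1\leqslant \frac{1}{\kappa}=m_{a_2}$, as the theorem predicts. Geometrically, this compares the Euclidean unit ball with the ball of radius $\frac{1}{\kappa}$: the solutions reproduce (up to sign) the principal curvatures of the parallel spheres, and the blow-up times coincide with the rolling radii.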
We define $M_h:=\{x\in M : d_\Gamma(x)< h\}$ for every $h \in (0,\overline{h})$ and $d_h$ the distance function to the level set $\Gamma_h$: $$d_h : M_h\ni x \longmapsto h-d_{\Gamma}(x)\in\mathbb{R}_{\geqslant 0}.$$ Let $\mathrm{H}(h)$ denote the mean curvature of $\Gamma_h$; then the following comparison holds. \begin{thm}[Mean curvature comparison] \label{MeanCurvatureComp} Let $\mu:[0,m_\mu)\longrightarrow\mathbb{R}$ be the maximal solution of $$ \begin{cases} \mu'+\mu^2+K_-=0\\ \mu(0)=-\kappa_-. \end{cases} $$ Then $$-\mu(h)\leqslant \mathrm{H}(h),\quad \forall~ h\in [0, \overline{h}),$$ and $$\mathrm{roll}(M)\leqslant m_\mu.$$ \end{thm} \begin{proof} We apply the trace operator to equality \eqref{RiccatiH} with $r=d_\Gamma$. Indeed, take an orthonormal frame $\{E_i\}_{i=1}^{n-1}$ of $T\Gamma_s$ and set $X= Y =E_i$ in $$(\nabla_{\nabla r}\mathrm{Hess~} r)(X,Y)+\mathrm{Hess~}^2r(X,Y)+R(X,\nabla r,\nabla r,Y)=0.$$ Summing over $i$, we use $$\sum_{i=1}^{n-1}R(E_i,\nabla r,\nabla r,E_i)=\mathrm{Ric}(\nabla r,\nabla r),$$ $$\sum_{i=1}^{n-1} (\nabla_{\nabla r}\mathrm{Hess~} r)(E_i,E_i)=\sum_{i=1}^{n-1} (\partial_r \mathrm{Hess~} r)(E_i,E_i)=\partial_r\Delta r,\qquad \Delta r=\mathrm{Tr}(\mathrm{Hess~} r)=-(n-1)\mathrm{H}(s),$$ $$\sum_{i=1}^{n-1} \mathrm{Hess~}^2r(E_i,E_i)= |\mathrm{Hess~} r|^2\geqslant \frac{1}{n-1}(\Delta r)^2, $$ the last estimate by the Cauchy-Schwarz inequality. This leads to \begin{equation*} \partial_r\Delta r+ \frac{1}{n-1}(\Delta r)^2 +\mathrm{Ric}(\nabla r,\nabla r)\leqslant 0, \end{equation*} and equivalently, after dividing by $n-1$, \begin{equation*} -\mathrm{H}' + \big( -\mathrm{H}\big)^2 +\frac{1}{n-1}\mathrm{Ric}(\nabla r,\nabla r)\leqslant 0. \end{equation*} Then, since $\mathrm{Ric}\geqslant (n-1)K_-$, for every $h\in(0,\overline{h})$ \begin{equation*} \begin{cases} &-\mathrm{H}'(h) + (-\mathrm{H}(h))^2 \leqslant -K_-=\mu'+\mu^2, \text{ and }\\ & -\mathrm{H}(0)\leqslant -\kappa_-. \end{cases} \end{equation*} It follows from Theorem \ref{RiccatiComparison} that $-\mathrm{H}(h)\leqslant \mu(h)$ for every $h\in[0,\overline{h})$ and $\overline{h}\leqslant m_\mu$. \end{proof} In order to state the next lemma we need to define a new constant $\tilde{\mathfrak{h}}:=\tilde{\mathfrak{h}}(K_+,\kappa_+)$ as \begin{equation} \label{eq160202020b} \tilde{\mathfrak{h}}= \begin{cases} \frac{1}{\max\{0,\kappa_+\}}\quad &\text{if } K_+=0\\ \frac{1}{\sqrt{|K_+|}}\mathrm{arccot}\Big(\frac{\kappa_+}{\sqrt{|K_+|}}\Big) &\text{if } K_+\neq 0, \end{cases} \end{equation} where, when division by zero occurs, we adopt the convention that $\frac{1}{0}=+\infty$. \begin{lem}\label{lem11022020a} Let $h \in(0,\tilde{\mathfrak{h}})$ and $p\in M_h\backslash \Gamma$ with $d_\Gamma(p)=s$; that is, $p\in \Gamma_s$ with $s\in(0, h)$, since $p$ is taken in $M_h$. Denote by $\{\kappa_1(s)\leqslant\kappa_2(s)\leqslant\ldots\leqslant \kappa_{n-1}(s) \}$ the principal curvatures of $\Gamma_s$. Then the eigenvalues of the Hessian $\mathrm{Hess~} \eta$ at $p$ are given by \begin{equation} \rho_i= (h-s)\kappa_i(s)\leqslant 1\quad \text{for } i\in\{1,\ldots,n-1\},\qquad \rho_n= 1. \end{equation} \end{lem} \begin{proof} We notice that, from point \ref{p3} of Proposition \ref{prop321}, the eigenvalues of the Hessian of $d_h$ at $p\in \Gamma_s$, $\mathrm{Hess~} d_h(p)=-\mathrm{Hess~} d_\Gamma(p)$, are zero together with the principal curvatures of $\Gamma_s$: $$\{0,\kappa_1, \kappa_2, \ldots, \kappa_{n-1}\}. $$ We have $\mathrm{Hess~} \eta=\nabla d_h\otimes \nabla d_h+d_h\,\mathrm{Hess~} d_h $ and, since $\nabla d_h$ is a normal field, $\nabla d_h\otimes \nabla d_h(X,Y)$ vanishes for any tangent vector fields $X$ and $Y$.
Hence, using that $d_h(p)=h-s$, we find that the restriction of $\mathrm{Hess~} \eta $ to $T_p\Gamma_s$ has eigenvalues $(h-s)\kappa_i$, $i=1,\ldots,n-1$. This means that the eigenvalues of $ \mathrm{Hess~} \eta$ at $p$ are given by $$\rho_i=(h-s)\kappa_i,\qquad \forall~i=1,\ldots,n-1 \quad \text{and} \quad\rho_n=1,$$ the eigenvalue $\rho_n=1$ corresponding to the normal direction, since $\mathrm{Hess~}\eta(\nabla d_h,\nabla d_h)=|\nabla d_h|^4=1$. It remains to prove that $\rho_i\leqslant 1$. Let $x\in\Gamma$ be the boundary point nearest to $p$ and $\mathbf{n}(x)$ the inward unit normal at $x$; then $p=\exp_x\big(s\,\mathbf{n}(x)\big)$, and the principal curvatures $\kappa_i(s)$ evolve along this normal geodesic according to the Riccati equation \eqref{RiccatiS}. We set $\tilde{K}:=|K_+|$. From the Riccati comparison principle (Theorem \ref{RiccatiComparison}), if $a:[0,m_a)\longrightarrow\mathbb{R}$ is the maximal solution of \begin{equation}\label{eq22022020a} \begin{cases} a'+a^2+\tilde{K}=0\\ a(0)=-\kappa_+, \end{cases} \end{equation} then \begin{equation}\label{eq19022020b} m_a\leqslant \mathrm{roll}(M) \end{equation} and \begin{equation}\label{eq16022020a} a(s)\leqslant -\kappa_i(s)\quad \text{for every } s\in[0, m_a). \end{equation} Now we have the following cases: \begin{itemize} \item[-] \underline{If $K_+=0$ (i.e. $\tilde{K}=0$)} \begin{itemize} \item[.] If $\kappa_+=0$, the constant function zero is the maximal solution of \eqref{eq22022020a}, with maximal existence time $+\infty$; then \eqref{eq16022020a} gives $\kappa_i\leqslant 0$, so $\rho_i\leqslant 0\leqslant 1$. \item[.] If $\kappa_+\neq 0$, we notice that $\mu(t)=\frac{cs_{{K}}}{sn_{{K}}}(t)$ satisfies $$\mu'(t)+\mu^2(t)=-{K}$$ for every $K$ and every $t$ with $sn_K(t)\neq 0$. Let $a(t):=\frac{cs_0}{sn_0}(t+t_0)$ with $t_0\in\mathbb{R}$ such that $a(0)=-\kappa_+$, that is, $$\frac{1}{t_0}=-\kappa_+\Leftrightarrow t_0=-\frac{1}{\kappa_+}.$$ Then $$a(t)=\frac{cs_0}{sn_0}\left(t-\frac{1}{\kappa_+}\right)=\frac{\kappa_+}{\kappa_+ t-1}$$ is the maximal solution of $$ \begin{cases} a'+a^2+\tilde{K}=0\\ a(0)=-\kappa_+. \end{cases} $$ Since the only possible pole of $a$ is at \begin{equation}\label{eq19022020b1} t=\frac{1}{\kappa_+}, \end{equation} replacing in \eqref{eq16022020a}, we get $$\kappa_i(s)\leqslant \frac{\kappa_+}{1-s\kappa_+ } \text{ for every } s\in [0, m_{a}), $$ with $$m_a= \begin{cases} \frac{1}{\kappa_+}\quad &\text{ if } \kappa_+>0\\ +\infty&\text{ if } \kappa_+<0. \end{cases} $$ Hence for every $0\leqslant s<h\leqslant m_a$, we have $$\rho_i=(h-s)\kappa_i\leqslant \left(\frac{1}{\kappa_+}-s\right)\frac{\kappa_+}{1-s\kappa_+ }= 1.$$ \end{itemize} \item[-] \underline{If $K_+\neq 0$ (i.e. $\tilde{K}>0$)} Notice that $\mu(t)=-K\frac{sn_{{K}}}{cs_{{K}}}(t)$ satisfies $$\mu'(t)+\mu^2(t)=-{K} \text{ for every } K\neq 0 \text{ and every } t \text{ with } cs_K(t)\neq 0.$$ Let $a(t):=-\tilde{K}\frac{sn_{\tilde{K}}}{cs_{\tilde{K}}}(t+t_0)$ with $t_0\in\mathbb{R}$ such that $a(0)=-\kappa_+$.\\ That is, $$-\sqrt{\tilde{K}}\tan(\sqrt{\tilde{K}}t_0)=-\kappa_+\Leftrightarrow t_0=\frac{1}{\sqrt{\tilde{K}}}\arctan\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right).$$ Then \begin{align*} a(t)&=-\sqrt{\tilde{K}}\tan\left(\sqrt{\tilde{K}}t+\arctan\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right)\right)\\ &=-\sqrt{\tilde{K}}\cot\left(-\sqrt{\tilde{K}}t+\mathrm{arccot}\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right)\right) \end{align*} is the maximal solution of $$ \begin{cases} a'+a^2+\tilde{K}=0\\ a(0)=-\kappa_+. \end{cases} $$ Since the first singularity of $\tan$ occurs at $\frac{\pi}{2}$, the first pole of $a$ is at \begin{equation} \label{eq19022020b2} m_a=\frac{1}{\sqrt{\tilde{K}}}\left[\frac{\pi}{2}-\arctan\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right) \right]=\frac{1}{\sqrt{\tilde{K}}}\,\mathrm{arccot}\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right).
\end{equation} Replacing in \eqref{eq16022020a}, we get $$\kappa_i(s)\leqslant \sqrt{\tilde{K}}\cot\left(-\sqrt{\tilde{K}}s+\mathrm{arccot}\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right)\right). $$ Hence for every $0\leqslant s<h\leqslant m_a$, we have \begin{align*} \rho_i=(h-s)\kappa_i&\leqslant (m_a-s)\sqrt{\tilde{K}}\cot\left(-\sqrt{\tilde{K}}s+\mathrm{arccot}\left(\frac{\kappa_+}{\sqrt{\tilde{K}}}\right)\right)\\ &=\left((m_a-s)\sqrt{\tilde{K}}\right)\cot\left((m_a-s)\sqrt{\tilde{K}}\right)\\ &\leqslant 1, \end{align*} since $x\cot x\leqslant 1$ for every $x\in(0,\pi)$. \end{itemize} \end{proof} \begin{rem} From inequality \eqref{eq19022020b} and the explicit formulas \eqref{eq19022020b1} and \eqref{eq19022020b2} for $m_a=\tilde{\mathfrak{h}}$, we see that $\tilde{\mathfrak{h}}\leqslant \overline{h}$. \end{rem} We are now ready to prove Theorems \ref{mainth1} and \ref{mainth2}. \subsubsection{First comparison} In this section we prove Theorem \ref{mainth1}. We use the following technical lemmas. \begin{lem}\label{lem2701} Let $u\in C^\infty(M)$ be a smooth function on $M$. Let $h \in(0,\tilde{\mathfrak{h}})$, $s \in(0,h)$ and $y\in\Gamma_s$, and let $\mathrm{H}(s)$ be the mean curvature of the parallel hypersurface $\Gamma_s$. From the second equality in Proposition \ref{prop2101}, at each point of $\Gamma_s$ \begin{equation*} \mathrm{H}(s)=\frac{1}{n-1}\mathrm{Tr}( \mathrm{Hess~} d_h)=\frac{1}{n-1}\sum_{i=1}^{n-1}\kappa_i, \end{equation*} and the following inequality holds: \begin{equation}\label{onyep} \left[(h-s)(n-1)\mathrm{H}(s)-1\right]|\nabla u|^2\leqslant |\nabla u|^2\div(\nabla \eta)-2 \mathrm{Hess~} \eta(\nabla u,\nabla u). \end{equation} \end{lem} \begin{proof} Let $p\in\Gamma_s$ and $x=\nabla u(p)$. Choose an orthonormal frame such that $\mathrm{Hess~}\eta(p)$ is diagonal and let $A = \mathrm{diag}(\rho_1,\ldots,\rho_n)$ be the diagonal matrix representing the Hessian of $\eta$ at $p$. By Lemma \ref{lem11022020a}, $\rho_n=1$ is the largest eigenvalue of $A$, so one has $$\rho_n|x|^2\geqslant A x\cdot x.$$ Therefore, $$\left(\sum_{i=1}^n\rho_i-2\rho_n\right)|x|^2\leqslant|x|^2\mathrm{Tr}(A)-2 A x\cdot x.$$ From Lemma \ref{lem11022020a}, we have $\sum_{i=1}^n\rho_i-2\rho_n=\sum_{i=1}^{n-1}\rho_i-\rho_n=\sum_{i=1}^{n-1}(h-s)\kappa_i-1=(n-1)(h-s)\mathrm{H}(s)-1$; since $\mathrm{Tr}(A)=\div(\nabla\eta)$ and $Ax\cdot x=\mathrm{Hess~}\eta(\nabla u,\nabla u)$, the result follows. \qedhere \end{proof} \begin{lem}\label{Lem2701} Assumptions are the same as in Lemma \ref{lem2701}. Then the following inequality holds at each point of $\Gamma_s$: \begin{equation} -\left[(n-1)h \left( \sqrt{|K_-|}+|\kappa_-|\right)+1\right]|\nabla u|^2\leqslant |\nabla u|^2\div(\nabla \eta)-2 \mathrm{Hess~} \eta(\nabla u,\nabla u). \end{equation} \end{lem} \begin{proof} Let $\mu:I\longrightarrow\mathbb{R}$ be the solution of \begin{equation} \begin{cases} \mu'+\mu^2+K_-=0\\ \mu(0)=-\kappa_-; \end{cases} \end{equation} then, by the mean curvature comparison in Theorem \ref{MeanCurvatureComp}, we have $\mathrm{H}\geqslant-\mu$. Now set $\tilde{K}=-|K_-|$ and $\mu_0(t):=-\tilde{K}\frac{sn_{\tilde{K}}}{cs_{\tilde{K}}}(t)+|\kappa_-|$. We notice that $\mu_0$ satisfies $$\mu_0'=-\tilde{K}\frac{1}{cs_{\tilde{K}}^2},$$ $$\mu_0^2=\tilde{K}^2\frac{sn_{\tilde{K}}^2}{cs_{\tilde{K}}^2}-2\tilde{K}|\kappa_-|\frac{sn_{\tilde{K}}}{cs_{\tilde{K}}}+|\kappa_-|^2.$$ Then \begin{align*} \mu_0'+\mu_0^2&=-\tilde{K}\left(1+2|\kappa_-|\frac{sn_{\tilde{K}}}{cs_{\tilde{K}}}\right)+|\kappa_-|^2\\ &\geqslant -\tilde{K}+|\kappa_-|^2\geqslant -\tilde{K}\geqslant-K_-, \end{align*} since $$ 0\leqslant-\tilde{K}\frac{sn_{\tilde{K}}(t)}{cs_{\tilde{K}}(t)}= \begin{cases} \sqrt{-\tilde{K}}\tanh(\sqrt{-\tilde{K}}t)\quad &\tilde{K}\neq 0,\\ 0\quad &\tilde{K}= 0.
\end{cases} $$ In addition, $$\mu_0(0)=|\kappa_-|\geqslant -\kappa_-.$$ Applying again Proposition \ref{riccacomp}, we get $\mu\leqslant\mu_0$. Hence we have $\mathrm{H}\geqslant-\mu_0\geqslant -(\sqrt{|K_-|}+|\kappa_-|) $, and the result follows from substituting this bound in \eqref{onyep}. \qedhere \end{proof} \begin{lem} Let $u\in C^\infty(M)$ be a harmonic function on $M$ with $\int_\Gamma u^2\,\mathrm{d}_\Gamma=1$, and let $\overline{B}:=2\left((n-1) \big(\sqrt{|K_-|}+|\kappa_-|\big)+\frac{1}{\tilde{\mathfrak{h}}}\right) $. Then the following inequality holds: \begin{equation*} R_\beta(u) \leqslant\left[\frac{1}{\sqrt{\beta}}+\sqrt{\overline{B}+ \beta\int_\Gamma |\nabla_\Gamma u|^2 \mathrm{d}_\Gamma } \right]^2. \end{equation*} \end{lem} \begin{proof} Set \begin{equation}\label{defb13022020} B:=(n-1) \big(\sqrt{|K_-|}+|\kappa_-|\big); \end{equation} by Lemma \ref{Lem2701} we have, for every $h\in(0,\tilde{\mathfrak{h}})$, \begin{equation}\label{eq20022020a} -(Bh+1)\int_{M_h}{ |\nabla u|^2 \mathrm{d}_M}\leqslant \int_{M_h}{ |\nabla u|^2\div(\nabla \eta)-2 \mathrm{Hess~} \eta(\nabla u,\nabla u) \mathrm{d}_M}. \end{equation} Therefore, applying \eqref{eq20022020a} and Proposition \ref{PohWentzel} with $h=\frac{\tilde{\mathfrak{h}}}{2}$, we get $$ \int_{M}{ |\nabla u|^2\mathrm{d}_M}+ \beta \int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma-2(B+\frac{1}{\tilde{\mathfrak{h}}}) \beta \int_{M_\frac{\tilde{\mathfrak{h}}}{2}}{ |\nabla u|^2 \mathrm{d}_M}\leqslant R_\beta(u).$$ Set $\tilde{B}:=2(B+\frac{1}{\tilde{\mathfrak{h}}}) \beta-1$. Since $u$ is harmonic, Green's formula and the Cauchy-Schwarz inequality give $\int_M |\nabla u|^2\,\mathrm{d}_M=\int_\Gamma u\,\partial_\mathbf{n} u\,\mathrm{d}_\Gamma\leqslant\big(\int_\Gamma (\partial_\mathbf{n} u)^2\,\mathrm{d}_\Gamma\big)^{\frac{1}{2}}$, so one has \begin{align*} R_\beta(u) &\geqslant \beta \left(\int_M{ |\nabla u|^2 \mathrm{d}_M}\right)^2-\tilde{B} \int_M{ |\nabla u|^2 \mathrm{d}_M}\\ &= \left(\sqrt{\beta}\int_M{ |\nabla u| ^2 \mathrm{d}_M}\right)^2-2\left(\sqrt{\beta} \int_M{ |\nabla u| ^2 \mathrm{d}_M}\right)\left(\frac{\tilde{B}}{2\sqrt{\beta}}\right). \end{align*} Hence, \begin{align*} \left[\sqrt{\beta}\int_M{ |\nabla u| ^2 \mathrm{d}_M}-\frac{\tilde{B}}{2\sqrt{\beta}}\right]^2&\leqslant R_\beta(u)+\left(\frac{\tilde{B}}{2\sqrt{\beta}}\right)^2,\\ \left|\sqrt{\beta}\int_M{ |\nabla u|^2 \mathrm{d}_M}-\frac{\tilde{B}}{2\sqrt{\beta}}\right|&\leqslant \sqrt{R_\beta(u)}+\frac{|\tilde{B}|}{2\sqrt{\beta}}, \end{align*} meaning that \begin{align} \sqrt{\beta}\left( R_\beta(u)-\beta\int_\Gamma |\nabla_\Gamma u|^2 \mathrm{d}_\Gamma \right)-\frac{\tilde{B}}{2\sqrt{\beta}}&\leqslant \sqrt{R_\beta(u)}+\frac{|\tilde{B}|}{2\sqrt{\beta}},\nonumber\\ \sqrt{\beta} R_\beta(u)-\sqrt{ R_\beta(u)}-\frac{|\tilde{B}|}{\sqrt{\beta}}-\beta\sqrt{\beta}\int_\Gamma |\nabla_\Gamma u|^2 \mathrm{d}_\Gamma&\leqslant 0.\label{eq2} \end{align} Solving \eqref{eq2} for the unknown $\sqrt{R_\beta(u)} $, we get \begin{align*} R_\beta(u) &\leqslant\frac{1}{4\beta}\left[ 1+\sqrt{1+4|\tilde{B}|+4\beta^2\int_\Gamma |\nabla_\Gamma u|^2 \mathrm{d}_\Gamma } \right]^2\\ &\leqslant\left[\frac{1}{\sqrt{\beta}}+\sqrt{\frac{|\tilde{B}|}{\beta}+ \beta\int_\Gamma |\nabla_\Gamma u|^2 \mathrm{d}_\Gamma } \right]^2. \end{align*} The result follows since $|\tilde{B}|\leqslant \beta\overline{B}$. \qedhere \end{proof} \begin{proof}[Proof of Theorem \ref{mainth1}] Take $\{\varphi_k\}_{k=0}^\infty$ an orthonormal basis of $L^2(\Gamma)$ consisting of eigenfunctions of $\Delta_\Gamma$, such that $\Delta_\Gamma\varphi_k=\eta_k\varphi_k$ for $k\geqslant 0$. Let $k$ be fixed and, for each $j=0,1,\ldots,k$, let $\phi_j$ be the harmonic extension of $\varphi_j$, i.e. \begin{equation}\label{D} \begin{cases} \Delta \phi_j=0\quad\text{ in } \Omega,\\ \phi_j |_\Gamma= \varphi_j \quad \text{ on } \Gamma.
\end{cases} \end{equation} Then $\phi_j\in \mathfrak{V}_\beta$, and \begin{align*} \lambda_{W,k}^{\beta} &\leqslant \underset {0\leqslant j \leqslant k} {\max} R_\beta(\phi_j)\\ &\leqslant\underset {0\leqslant j \leqslant k} {\max}\left[\frac{1}{\sqrt{\beta}}+\sqrt{\overline{B}+ \beta\int_\Gamma |\nabla_\Gamma \phi_j|^2 \mathrm{d}_\Gamma } \right]^2\\ &\leqslant\underset {0\leqslant j \leqslant k} {\max}\left[\frac{1}{\sqrt{\beta}}+\sqrt{\overline{B}+ \beta\eta_j} \right]^2\\ &\leqslant \left[\frac{1}{\sqrt{\beta}}+\sqrt{\overline{B}+ \beta\eta_k} \right]^2. \end{align*} \qedhere \end{proof} \subsubsection{Second comparison} We give here a proof of Theorem \ref{mainth2} using the following comparison. \begin{thm} Let $M\in \mathfrak{M}^n(K_-,K_+,\kappa_-,\kappa_+)$ and $\overline{h}\in\mathbb{R}_{>0}$ be the rolling radius of $M$. Let $a:[0,a_+)\longrightarrow\mathbb{R}$ and $b:[0,b_+)\longrightarrow\mathbb{R}$ be the maximal solutions of $$ \begin{cases} a'+a^2+K_-=0\\ a(0)=-\kappa_- \end{cases} \text{and}\quad \begin{cases} b'+b^2+K_+=0\\ b(0)=-\kappa_+, \end{cases} $$ respectively. Then we have $b_+\leqslant\mathrm{roll}(M)\leqslant a_+$ and \begin{equation}\label{eq12022020} \begin{cases} -a(t)\leqslant \kappa_i(t)\quad &\forall~ t\in [0, \overline{h})\\ \kappa_i(t)\leqslant-b(t), &\forall~ t\in [0, b_+). \end{cases} \end{equation} \end{thm} \begin{proof} We know from equation \eqref{RiccatiS} that the shape operator, with eigenvalues $\{-\kappa_i,~i=1,\ldots, n-1\}$ in $M_{\overline{h}}$, satisfies \begin{equation*} S'+S^2+R=0. \end{equation*} Applying Theorem \ref{RiccatiComparison} with $R_1 =R$, $R_2 =K_- I$, $a_1(0) = S(0)$ and $a_2(0) = -\kappa_-$, we get \begin{equation*} \begin{cases} S(t)\leqslant a(t)I\quad \text{for all } t\in[0,\overline{h})\\ \text{ and }\\ \mathrm{roll}(M)\leqslant a_+. \end{cases} \end{equation*} Hence, the principal curvatures of the level hypersurfaces in $M_{\overline{h}}$ satisfy $$-a(t)\leqslant \kappa_i(t).$$ For the second inequality, we apply Theorem \ref{RiccatiComparison} with $R_1 =K_+I$, $R_2 =R$, $a_1(0) = -\kappa_+$ and $a_2(0) = S(0)$; we get similarly \begin{equation*} b(t)I\leqslant S(t) \quad \text{for all } t\in[0,b_+). \end{equation*} Hence, each principal curvature satisfies $\kappa_i(t)\leqslant-b(t)$. \end{proof} \begin{lem}\label{lm19022020a} Let $u\in C^\infty(M) $ be a harmonic function. Let $h\in(0,\tilde{\mathfrak{h}})$ and $s \in[0,\overline{h})$; then the following inequality holds at each point of $\Gamma_s$: \begin{align}\label{onyep12} |\nabla u|^2\div(\nabla \eta)&-2 \mathrm{Hess~} \eta(\nabla u,\nabla u)\nonumber\\ &\leqslant \left(1+\sum_{i=2}^{n-1}\rho_i-\rho_{1} \right)|\nabla u|^2. \end{align} \end{lem} \begin{proof} The proof is similar to that of \eqref{onyep}. If $p\in\Gamma_s$ and $x=\nabla u(p)$, taking an orthonormal frame such that $\mathrm{Hess~}\eta(p)$ is diagonal, one has $$\rho_1|x|^2\leqslant A x\cdot x,$$ where $A = \mathrm{diag}(\rho_1,\ldots,\rho_n)$ is the diagonal matrix representing the Hessian of $\eta$ at $p$. Therefore $$\left(\sum_{i=1}^n\rho_i-2\rho_1\right)|x|^2\geqslant|x|^2\mathrm{Tr}(A)-2 A x\cdot x.$$ The result follows from Lemma \ref{lem11022020a}, since $\sum_{i=1}^n\rho_i-2\rho_1= 1+\sum_{i=2}^{n-1}\rho_i-\rho_1.$ \qedhere \end{proof} \begin{lem}\label{lm20022020b} Assumptions are the same as in Lemma \ref{lem11022020a}.
For every $h\in(0, \tilde{\mathfrak{h}})$, we have \begin{align*} \int_{M_{h}} |\nabla u|^2\div(\nabla \eta)&-2 \mathrm{Hess~} \eta(\nabla u,\nabla u) \mathrm{d}_M\\ &\leqslant \left(h\frac{B}{n-1}+(n-1)\right)\int_M{ |\nabla u|^2 \mathrm{d}_M}. \end{align*} The constant $\tilde{\mathfrak{h}}$ is defined in \eqref{eq160202020b} and $B$ is the same as in \eqref{defb13022020}. \end{lem} \begin{proof} From equations \eqref{onyep12}, \eqref{eq12022020} and Lemma \ref{lem11022020a}, for every $s\in [0, h)$ we have \begin{align} |\nabla u|^2\div(\nabla \eta)&-2 \mathrm{Hess~} \eta(\nabla u,\nabla u)\nonumber\\ &\leqslant \left(1+\sum_{i=2}^{n-1}\rho_i-\rho_{1} \right)|\nabla u|^2\nonumber\\ &\leqslant \Big(1+(n-2)+(h-s)\,a(s) \Big)|\nabla u|^2,\label{eq13022020} \end{align} since $\rho_i\leqslant 1$ for $i=2,\ldots,n-1$ and $-\rho_1=(h-s)\big(-\kappa_1(s)\big)\leqslant (h-s)\,a(s)$ by \eqref{eq12022020}. We set $\tilde{K}=-|K_-|$ and $\mu_0(t):=-\tilde{K}\frac{sn_{\tilde{K}}}{cs_{\tilde{K}}}(t)+|\kappa_-|$. Exactly as in the proof of Lemma \ref{Lem2701}, one checks that $\mu_0'+\mu_0^2\geqslant-K_-$ and $\mu_0(0)=|\kappa_-|\geqslant-\kappa_-$, so applying again Proposition \ref{riccacomp} we get $a(t)\leqslant\mu_0(t)$. Hence we have $a(t)\leqslant \sqrt{|K_-|}+|\kappa_-| $, since $\tanh(x)\leqslant 1$ for every $x\in\mathbb{R}$. Replacing in inequality \eqref{eq13022020}, we get \begin{align*} |\nabla u|^2\div(\nabla \eta)&-2 \mathrm{Hess~} \eta(\nabla u,\nabla u) \\ &\leqslant \left[(n-1)+ (h-s)\left(\sqrt{|K_-|}+|\kappa_-|\right)\right]|\nabla u|^2\\ &\leqslant \left[(n-1)+ h\frac{B}{n-1}\right]|\nabla u|^2. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{mainth2}] We set $\overline{A}:=2\left(\frac{B}{n-1}+\frac{n-1}{\tilde{\mathfrak{h}}}\right)$. Then, applying Proposition \ref{PohWentzel} and Lemma \ref{lm20022020b} with $h=\frac{\tilde{\mathfrak{h}}}{2}$, one has, for every harmonic $u$ with $\int_\Gamma u^2\,\mathrm{d}_\Gamma=1$, \begin{align*} R_\beta(u)&\leqslant (1+\beta\overline{A})\int_{M}{ |\nabla u|^2\mathrm{d}_M}+ \beta \int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\\ &\leqslant (1+\beta\overline{A})\left[\int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\right]^{\frac{1}{2}}+ \beta \int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma, \end{align*} where the second line uses $\int_M |\nabla u|^2 \mathrm{d}_M\leqslant\left(\int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\right)^\frac{1}{2}$. Let $(\Psi_i)_{i\in\mathbb{N}}$ be a complete set of eigenfunctions corresponding to the Steklov eigenvalues $\lambda^S_i$ of $M$, forming an orthonormal basis of $\mathfrak{V}_0$. Let the trial space $V$ be $\mathrm{span}\{\Psi_i,\ i=0,\ldots,k\}$. Every $u\in V$ such that $\int_\Gamma u^2 \mathrm{d}_\Gamma=1$ can be written as $u=\sum_{i=0}^k c_i\Psi_i$ with $\sum_{i=0}^k c_i^2=1$.
\begin{align*} \lambda_{W,k}^{\beta}&\leqslant (1+\beta\overline{A})\left[\int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\right]^{\frac{1}{2}}+ \beta \int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\\ &\leqslant (1+\beta\overline{A})\left[\int_\Gamma \Big(\sum_{i=0}^k c_i\partial_\mathbf{n}\Psi_i \Big)^2 \mathrm{d}_\Gamma\right]^{\frac{1}{2}}+ \beta \int_\Gamma \Big(\sum_{i=0}^k c_i\partial_\mathbf{n}\Psi_i\Big)^2 \mathrm{d}_\Gamma\\ &\leqslant (1+\beta\overline{A})\lambda^S_k\left[\int_\Gamma u^2 \mathrm{d}_\Gamma\right]^{\frac{1}{2}}+ (\lambda^S_k)^2\beta \int_\Gamma u^2 \mathrm{d}_\Gamma\\ &= (1+\beta\overline{A})\lambda^S_k+ \beta(\lambda^S_k)^2. \end{align*} \qedhere \end{proof} \section{Estimates based on Ricci curvature and the Reilly identity} Let $(M, g)$ be an $n$-dimensional compact connected Riemannian manifold with non-empty boundary $\Gamma$. We suppose that the Ricci curvature of $M$ and the principal curvatures of $\Gamma$ are bounded from below. Then we get a quantitative comparison between the Wentzel eigenvalues and the eigenvalues of the Laplace-Beltrami operator on $\Gamma$. We use the following formula of Reilly, which has many interesting applications. \begin{thm}[Reilly, 1977] Given a smooth function $f$ on $M$, we denote $z = f|_\Gamma$ and $v = \partial_\mathbf{n} f$. Then, \begin{equation} \label{Reilly} \int_M \big(( \Delta f)^2-|\mathrm{Hess~} f|^2\big)\,\mathrm{d}_M- \int_M \mathrm{Ric}( \nabla f, \nabla f)\,\mathrm{d}_M=\int_\Gamma \big(H v^2- 2v \Delta_\Gamma z +\Pi(\nabla_{\Gamma} z,\nabla_{\Gamma} z)\big)\,\mathrm{d}_\Gamma. \end{equation} \end{thm} We refer the reader to \cite[(14)]{ReillyRobert} for further details. \begin{proof}[Proof of Theorem \ref{AvecReilly}] As is well known, $L^2(\Gamma)$ has an orthonormal basis made of eigenfunctions of $\Delta_\Gamma$, which we will denote by $\{\varphi_k\}_{k=0}^\infty$, such that $\varphi_k$ is associated to $\eta_k$, that is, $\Delta_\Gamma \varphi_k=\eta_k\varphi_k$ for all $k\geqslant 0$.\\ Let $k$ be fixed and consider the harmonic extensions $\phi_j\in H^1(M)$ of the $\varphi_j$: \begin{equation}\label{D2} \begin{cases} \Delta \phi_j=0\quad\text{ in } \Omega,\\ \phi_j |_\Gamma= \varphi_j \quad \text{ on } \Gamma, \end{cases} \end{equation} for $j = 0,\ldots,k$. Let $V \stackrel{\scriptscriptstyle\text{def}}= \mathrm{span}\{\phi_0,\phi_1,\ldots,\phi_k\}$ be the space generated by $\{\phi_j\}_{j=0}^k$. We have $\varphi_j\in H^1(\Gamma)$ for all $j=0,\ldots,k$, since $\int_\Gamma \varphi_j^2=1$ and $\int_\Gamma |\nabla_\Gamma\varphi_j|^2=\eta_j$. Thus, for any $\varphi_j$, we have a unique $\phi_j\in C^\infty(\overline{M})$ solving \eqref{D2}, assuming each connected component of $M$ has non-empty boundary. We refer the interested reader to \cite[Sec~5]{Taylor} for further information. Moreover, $\phi_j\in H^1(M)$ since $\varphi_j\in H^1(\Gamma)$. See \cite[p. 360, (1.39) and (1.40)]{Taylor}. Accordingly, $\phi_j\in \mathfrak{V}_\beta$ for $j=0,\ldots,k$, and $V$ is a $(k+1)$-dimensional subspace of $\mathfrak{V}_\beta$. Every function $\phi$ in $V$ can be expressed as $\phi = \sum_{j=0}^k \alpha_j \phi_j$; then $(\phi,\phi)= \sum_{i=0}^k\alpha_i^2(\phi_i, \phi_i)$ and $(\mathrm{d}\phi,\mathrm{d} \phi) =\sum_{i=0}^k\alpha_i^2(\mathrm{d}\phi_i, \mathrm{d}\phi_i)$.
Assume that $u\in\{ \phi_0,\ldots,\phi_k\} $ realises $R_\beta(u)=\underset{0\leqslant i \leqslant k}{\max}\, R_\beta(\phi_i)$ and let $m= R_\beta(u)$; then \begin{align*} R_\beta(\phi)&=\frac{\sum_{i=0}^k\alpha_i^2\left( \int_M{|\nabla \phi_i|^2 \mathrm{d}_M}+\beta\int_\Gamma{|\nabla_\Gamma \phi_i|^2 \mathrm{d}_\Gamma} \right)}{\sum_{i=0}^k\alpha_i^2 \int_{\Gamma}{\phi_i^2 \mathrm{d}_\Gamma}}\\ & \leqslant \frac{\sum_{i=0}^k\alpha_i^2\left(m \int_{\Gamma}{\phi_i^2 \mathrm{d}_\Gamma}\right)}{\sum_{i=0}^k\alpha_i^2 \int_{\Gamma}{\phi_i^2 \mathrm{d}_\Gamma} }=m, \end{align*} meaning that $\{R_\beta(\phi);~ \phi\in V\}$ is bounded from above by $R_\beta(u)$. Using the min-max principle \eqref{char} together with \eqref{Reilly} leads to \begin{align*} &\lambda_{W,k}^{\beta} \leqslant \underset {0\leqslant i \leqslant k} {\max} R_\beta(\phi_i)\leqslant \int_\Omega{|\nabla u|^2 \mathrm{d}_M}+\beta\eta_k\\ & \leqslant\frac{1}{2(n-1)\kappa_-}\left[\left(K_-+2\eta_k \right)+\sqrt{(K_-+2\eta_k)^2-4\kappa_-(n-1)\eta_k} \right]+\beta\eta_k. \end{align*} Indeed, applying \eqref{Reilly} to $u$ one has \begin{align*} -\int_M\big(|\mathrm{Hess~} u|^2 & +\mathrm{Ric}(\nabla u,\nabla u)\big) \mathrm{d}_M\\ & = \int_\Gamma \big[H(\partial_\mathbf{n} u)^2-2 (\partial_\mathbf{n} u)\,\Delta_\Gamma u +\Pi(\nabla_\Gamma u, \nabla_\Gamma u)\big]\mathrm{d}_\Gamma. \end{align*} Since, by the Cauchy-Schwarz inequality, $(\Delta u)^2\leqslant n|\mathrm{Hess~} u|^2$, it follows that $$-\frac{1}{n} \int_M ( \Delta u)^2\mathrm{d}_M+ K_- \int_M |\nabla u|^2\mathrm{d}_M\geqslant\int_\Gamma \big(H (\partial_\mathbf{n} u)^2- 2(\partial_\mathbf{n} u) \Delta_\Gamma u +\Pi(\nabla_{\Gamma} u,\nabla_{\Gamma} u)\big)\mathrm{d}_\Gamma. $$ Then, \begin{align}\nonumber K_-\int_M |\nabla u|^2\mathrm{d}_M & \geqslant\int_\Gamma H(\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma-2\eta_k\int_M |\nabla u|^2\mathrm{d}_M\nonumber\\ &~~+ \int_\Gamma \Pi(\nabla_\Gamma u,\nabla_\Gamma u)\mathrm{d}_\Gamma\nonumber\\ & \geqslant(n-1)\kappa_- \int_\Gamma (\partial_\mathbf{n} u)^2\mathrm{d}_\Gamma-2\eta_k\int_M |\nabla u|^2\mathrm{d}_M\nonumber\\ &~~+\kappa_-\eta_k.\label{papillon1} \end{align} Since $\int_M |\nabla u|^2 \mathrm{d}_M\leqslant\left(\int_\Gamma (\partial_\mathbf{n} u)^2 \mathrm{d}_\Gamma\right)^\frac{1}{2}$, \eqref{papillon1} can be transformed in two ways: \begin{align} \label{eqaigle1} (n-1)\kappa_- \left(\int_M |\nabla u|^2 \mathrm{d}_M\right)^2&-\left(2\eta_k+K_-\right)\int_M |\nabla u|^2\mathrm{d}_M\nonumber\\ &+\eta_k\kappa_-\leqslant 0 \end{align} and \begin{align}\label{eqaigle2} K_-\left(\int_\Gamma (\partial_\mathbf{n} u)^2\mathrm{d}_\Gamma\right)^\frac{1}{2} \geqslant & (n-1)\kappa_- \int_\Gamma (\partial_\mathbf{n} u)^2\mathrm{d}_\Gamma-2\eta_k\left(\int_\Gamma (\partial_\mathbf{n} u)^2\mathrm{d}_\Gamma\right)^\frac{1}{2}\\ &+\kappa_-\eta_k.\nonumber \end{align} From \eqref{eqaigle2}, by completing the square, we have \begin{align*} &(n-1)\kappa_-\left[\int_\Gamma \left( \partial_\mathbf{n} u-\frac{K_-+2\eta_k}{2(n-1)\kappa_-}u\right)^2\mathrm{d}_\Gamma-\left[\frac{K_-+2\eta_k}{2(n-1)\kappa_-} \right]^2\int_\Gamma u^2\,\mathrm{d}_\Gamma\right]\\ &~~+\kappa_-\int_\Gamma |\nabla_\Gamma u|^2\mathrm{d}_\Gamma \leqslant 0, \end{align*} and hence \begin{align*} &(n-1)\kappa_- \int_\Gamma \left( \partial_\mathbf{n} u-\frac{K_-+2\eta_k}{2(n-1)\kappa_-}u\right)^2\mathrm{d}_\Gamma\\ &~~-\kappa_-\left[\frac{(K_-+2\eta_k)^2}{4(n-1)\kappa_-^2}-\int_\Gamma |\nabla_\Gamma u|^2\mathrm{d}_\Gamma \right] \leqslant 0.
\end{align*} Since the first term in the last inequality is non-negative, we must have $$\left[\frac{(K_-+2\eta_k)^2}{4(n-1)\kappa_-^2}-\eta_k \right] \geqslant 0,$$ that is, $$\mathit{Q}:=\left(K_-+2\eta_k\right)^2- 4(n-1) \kappa_-\eta_k\geqslant 0.$$ So, solving the inequality \eqref{eqaigle1} for the unknown $\int_M{|\nabla u|^2 \mathrm{d}_M}$ leads to \begin{multline} \frac{1}{2(n-1)\kappa_-}\left[\left(K_-+2\eta_k \right)-\sqrt{\mathit{Q}} \right]\\ \leqslant\int_M|\nabla u|^2\mathrm{d}_M\\ \leqslant\frac{1}{2(n-1)\kappa_-}\left[\left(K_-+2\eta_k \right)+\sqrt{\mathit{Q}} \right]. \end{multline} \end{proof}
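\begin{rem} For instance, on the Euclidean unit disk ($n=2$), everything can be computed explicitly: the harmonic extension of $\varphi_m(\theta)=e^{im\theta}$ is $\phi_m(r,\theta)=r^{|m|}e^{im\theta}$, so that $\partial_\mathbf{n}\phi_m=|m|\varphi_m$ and $\Delta_\Gamma\varphi_m=m^2\varphi_m$ on $\Gamma=\mathbb{S}^1$. The Steklov and Wentzel eigenfunctions therefore coincide, and $$\lambda_{W}^{\beta}=|m|+\beta m^2=\lambda_S+\beta \lambda_S^2.$$ In particular, the lower bound $\lambda^S_k+\beta\eta_k\leqslant\lambda_{W,k}^{\beta}$ is attained on the disk, and the quadratic term $\beta(\lambda^S_k)^2$ in Theorem \ref{mainth2} cannot be removed.
\end{rem}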
\section{Introduction} The recently discovered high temperature superconductivity in fluorine doped LaFeAsO has stirred new interest in the research of high-$T_c$ superconductors \cite{1} outside of the cuprate family \cite{2}. Replacing oxygen by fluorine introduces charge carriers which are eventually transferred from the La-O(F) charge reservoir layer to the Fe-As conductive layer. Superconductivity emerges when the F doping concentration exceeds 5\,\%. Subsequent research suggests that replacing the lanthanum in LaFeAsO with other rare earth elements such as cerium, samarium, neodymium and praseodymium leads to superconductors with critical temperatures elevated up to 55 K \cite{3,4,5}. The first and most important question about the LaFeAsO-based superconducting system is whether or not it shares a similar superconductivity mechanism with the cuprate superconductors. In cuprate superconductors, the antiferromagnetic properties of the parent compounds have provoked scenarios of purely electronically driven superconductivity, where lattice effects are mostly ignored \cite{6}. On the other hand, various anomalous lattice effects have recently been observed in cuprates which closely correlate with the onset of the superconducting transition, suggesting that lattice effects play an important microscopic role in the superconducting pairing mechanism \cite{7,8,9,10,11,12}. In particular, extended x-ray absorption fine structure (EXAFS) measurements have shown that doping causes local lattice distortion which occurs well above $T_c$ \cite{7,8,11}. By correlating data from inelastic neutron scattering and inelastic x-ray scattering, isotope effects, Raman spectroscopy, infrared absorption spectroscopy and femtosecond optical spectroscopy, it has been shown that the anomalous local lattice distortion observed by EXAFS measurements is correlated with the opening of the pseudogap and the formation of polarons \cite{13,14,15}. The change in dynamics observed across the superconducting transition temperature indicates an intimate link between the dynamics of these polarons and the mechanism of high-temperature superconductivity \cite{13}. In LaFeAsO-based superconductors, evidence of pseudogap evolutions similar to the high-$T_c$ cuprates has been reported \cite{16,17}. However, there is a lack of experimental data on the lattice effects. In the search for the appropriate mechanism, lattice effects can provide key information. In this paper we present results from Fe and As $K$-edge EXAFS measurements indicating that local Fe-As lattice fluctuation occurs well above $T_c$. Similar to that in cuprates, this local lattice fluctuation is closely correlated with the onset of the superconducting transition, indicating that the local lattice fluctuation is involved in the superconducting coherence in both systems. \section{Experiment} Polycrystalline samples LaFeAsO$_{1-x}$F$_x$ ($x$=0; 0.07) were prepared by solid state synthesis as described elsewhere \cite{1}. EXAFS measurements were performed at BL13B at the Photon Factory, Tsukuba. Powder samples were mounted on an aluminum holder and attached to a closed-cycle helium refrigerator. The holder rotates on a high precision goniometer (Huber 420) to change the incidence angle. A novel Ge pixel array detector (PAD) with 100 segments was used in order to gain high throughput and energy resolution. A detailed description of the PAD apparatus was reported elsewhere \cite{18}. The experimental EXAFS, $\chi(k)$, was analyzed by use of the Ifeffit analysis package.
\section{Results and discussion} The Fe and As $K$-edge EXAFS oscillations for LaFeAsO$_{1-x}$F$_x$ ($x$=0; 0.07) were measured from 5 K to 300 K and converted into $k$ space. Typical EXAFS oscillations are shown in Fig. 1(a) for the $x$=0.07 sample at the Fe $K$ edge (lower panel) and the As $K$ edge (upper panel). The Fourier transform spectra at 20 K for the Fe and As $K$ edges are shown as black curves in Fig. 1(b). The positions of the coordination atoms around the Fe and As atoms are also indicated; they are slightly shifted due to the phase-shift effect. The atomic radial distribution functions (RDF) around the Fe and As atoms are simulated using FEFF7. In the simulation the structural parameters determined by Rietveld analysis are used and all possible scattering paths (including single-scattering and multiple-scattering paths) are included \cite{19}. The simulated RDFs around the Fe and As atoms are shown as the red curves in Fig. 1(b), and they reproduce all the main peaks in the experimental Fourier transform spectra. In the EXAFS data analysis, coordination numbers are set to the values dictated by the average structure. For the Fe $K$ edge, we fit the experimental data by including both the nearest neighboring Fe-As correlation and the next nearest neighboring Fe-Fe correlation. A typical fitting result is shown as the green curve in Fig. 1(b); the fit reproduces the experimental data well in the 1.4$\leq R\leq$3.1 \AA\ range. For the As $K$ edge, we fit the experimental data using a single-Gaussian As-Fe RDF; the result is shown in Fig. 1(b) as the green curve. Figure 2 gives the temperature dependence of the Fe-As and Fe-Fe bond distances for both samples. It is obvious that F-doping leads to a contraction of the Fe-As bond distance. This slight decrease of the Fe-As bond distance induced by F doping has also been detected by synchrotron x-ray diffraction measurements \cite{19}, and it indicates that the covalent bonding between the Fe and As atoms is strengthened. The contraction of the Fe-As bond distance is reminiscent of the shortening of the Cu-O bond distance in La$_{2-x}$Sr$_x$CuO$_4$ as charge carriers are introduced into the CuO$_2$ plane. In cuprate superconductors, the Cu-O orbital hybridization is strengthened as the Cu-O bond shortens. By analogy, we suggest that the hybridization between the Fe 3$d$ orbitals and the As 4$p$ orbitals is likewise strengthened. The strengthening of the Fe-As orbital hybridization favors the flow of charge carriers in the Fe-As conductive layer. It is also obvious that the Fe-Fe bond distance is shortened in the F-doped sample, which indicates a decrease of the unit cell volume. We notice a slight increase of the Fe-Fe bond distance below $\sim$150 K in undoped LaFeAsO, consistent with the tetragonal to orthorhombic phase transition \cite{19}. In the F-doped sample this phase transition disappears, and the temperature dependence of the Fe-Fe bond distance shows little change over the whole temperature region.
In Fig.~3 we plot the temperature dependence of the mean-square relative displacement for the nearest neighboring Fe-As shell, derived from both the Fe $K$ edge EXAFS (labeled $\sigma^2_{\rm Fe-As}$) and the As $K$ edge EXAFS (labeled $\sigma^2_{\rm As-Fe}$), for the LaFeAsO$_{1-x}$F$_x$ ($x$=0.07) sample together with that of the undoped LaFeAsO sample. As expected, the results give nearly the same $\sigma^2_{\rm Fe-As}$ and $\sigma^2_{\rm As-Fe}$ values at each temperature. In the $T\geq$150 K range, the $\sigma^2_{\rm Fe-As}$ value decreases with decreasing temperature for both samples, consistent with non-correlated Debye-like behavior. Below 150 K, however, the temperature dependence of $\sigma^2_{\rm Fe-As}$ is distinctly different for the two samples. For undoped LaFeAsO, the $\sigma^2_{\rm Fe-As}$ value slightly increases with further decreasing temperature, which is related to the so-called spin-density-wave (SDW) transition \cite{19,20}. For the F-doped sample, the increase of $\sigma^2_{\rm Fe-As}$ at about 150 K is well suppressed, and $\sigma^2_{\rm Fe-As}$ decreases further with decreasing temperature. Significantly, an anomalous upturn of $\sigma^2_{\rm Fe-As}$ appears at $T\leq$70 K. This anomaly occurs only in the F-doped sample; no such anomaly is detected in the undoped parent compound. It is followed by a sharp drop at the temperature where the onset of the superconducting transition occurs ($T_c^{onset}\sim$29 K). Similar anomalous behavior was previously found in La$_{2-x}$Sr$_x$CuO$_4$ samples, where an upturn of $\sigma^2_{\rm Cu-O}$ (the mean-square relative displacement of the in-plane Cu-O bond) occurs at $T\leq$80 K and is also accompanied by a sharp decrease at $T_c^{onset}$ \cite{11}. In order to clearly see the low temperature local lattice instability and its relation to the $T_c^{onset}$ value, we plot in the inset of Fig. 3 the normalized temperature ($T/T_c^{onset}$) dependence of the mean-square relative displacements for both LaFeAsO$_{0.93}$F$_{0.07}$ and La$_{1.85}$Sr$_{0.15}$CuO$_4$. A sharp decrease in the mean-square relative displacement occurs exactly at $T_c^{onset}$ in both systems, indicating that the local lattice instability might play an important role in the superconducting coherence in both cases. In order to reveal whether or not this anomaly involves the Fe-Fe bond, we studied the temperature dependence of the mean-square relative displacement for the Fe-Fe bond ($\sigma^2_{\rm Fe-Fe}$) by fitting the Fe $K$ edge EXAFS data including the multiple scattering paths. Figure 4 shows the temperature dependence of $\sigma^2_{\rm Fe-Fe}$ for the LaFeAsO$_{1-x}$F$_x$ ($x$=0; 0.07) samples. For the undoped LaFeAsO sample, $\sigma^2_{\rm Fe-Fe}$ exhibits a slight increase below $\sim$140 K, which may relate to the SDW transition \cite{19}, whereas there is clearly no anomaly in the mean-square relative displacement of the Fe-Fe bond in the $x$=0.07 sample. Thus we conclude that the anomaly in $\sigma^2_{\rm Fe-As}$ below 70 K in the F-doped sample involves only the Fe-As bond. Comparing the mean-square relative displacements of the Fe-Fe bond in LaFeAsO$_{1-x}$F$_x$, one finds a rather strong F-doping effect: the displacement of the Fe-Fe bond is strongly decreased upon F-doping. The temperature dependence of $\sigma^2_{\rm Fe-Fe}$ also reflects the magnetic phase transition, that is, an increase of $\sigma^2_{\rm Fe-Fe}$ occurs below 140 K in undoped LaFeAsO, while in LaFeAsO$_{0.93}$F$_{0.07}$ it shows no anomaly over the whole temperature region, consistent with the disappearance of the phase transition in F-doped samples.
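For orientation, the high-temperature ``Debye-like'' baseline against which these anomalies are judged is often modelled with a correlated Einstein model. The following sketch is illustrative only; the Einstein temperature is an assumed value, not one fitted to our data.

\begin{verbatim}
import numpy as np

HBAR = 1.0545718e-34   # J s
KB   = 1.380649e-23    # J / K
AMU  = 1.66053906e-27  # kg

def sigma2_einstein(T, theta_E, mu_amu, sigma2_static=0.0):
    # Correlated Einstein model:
    # sigma^2(T) = sigma2_static + hbar^2/(2 mu kB theta_E) coth(theta_E/2T),
    # returned in Angstrom^2
    mu = mu_amu * AMU
    zero_point = HBAR**2 / (2.0 * mu * KB * theta_E)   # m^2
    return sigma2_static + zero_point / np.tanh(theta_E / (2.0 * T)) * 1e20

mu_FeAs = 55.845 * 74.922 / (55.845 + 74.922)  # Fe-As reduced mass (amu)
T = np.linspace(5.0, 300.0, 60)
sigma2 = sigma2_einstein(T, theta_E=350.0, mu_amu=mu_FeAs)  # theta_E assumed
\end{verbatim}

A genuine lattice anomaly then appears as a deviation of the measured $\sigma^2(T)$ from such a smooth baseline.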
To our knowledge, macroscopic structural studies on the LaFeAsO-based system have not revealed any structural transition near 70 K. However, in cuprate superconductors a similar upturn of the mean-square relative displacement of the in-plane Cu-O bond has been discovered, which is related to the splitting of the Cu-O bonds into elongated and shortened Cu-O bond distances \cite{7}. Based on this fact, we suggest that a splitting of the Fe-As bond in the F-doped LaFeAsO system also occurs below $\sim$70 K. Consequently, some As ions are shifted toward or away from the adjacent Fe ions. This bond splitting would lead to a decrease of the magnitude of the Fe-As (As-Fe) RDF peak. In Fig. 5(a) we plot the Fourier transform magnitude of the first-shell As-Fe bond; in order to compare the As-Fe peak quantitatively, we plot its absolute magnitude in Fig. 5(b). The magnitude of the As-Fe peak increases with decreasing temperature and is then followed by a decrease below 70 K, consistent with the As-Fe bond-splitting model. In order to quantitatively determine the length scale of the Fe-As bond splitting, we plot in Fig. 5(c) the Fourier-filtered (back-transforming over 1.4$<R<$2.6 \AA) EXAFS oscillation and amplitude of the first-shell As-Fe bond at 40 K. From the EXAFS oscillation we notice that the local minimum in the amplitude and the irregularity in the phase near 10.5 \AA$^{-1}$ constitute a ``beat'', which signifies the presence of two As-Fe bond distances. Using the relation $\Delta R=\pi/2k$ between the separation of the two shells and the position of the beat, the As-Fe distances are determined to differ by $\sim$0.15 \AA. We notice that the ``beat'' feature is very weak, which possibly comes from two facts: only a small fraction of the Fe-As bonds are split while the others remain undistorted, and the powder sample used in the EXAFS measurements is unpolarized.
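The bond-splitting estimate above follows directly from the beat relation; a minimal numerical check, with the beat position read off the Fourier-filtered data, is:

\begin{verbatim}
import numpy as np

# Two unresolved shells at R1 and R2 = R1 + dR interfere destructively,
# producing a node ("beat") in the EXAFS amplitude at k_beat = pi/(2 dR).
k_beat = 10.5                    # beat position (1/Angstrom), from Fig. 5(c)
dR = np.pi / (2.0 * k_beat)
print(f"As-Fe bond splitting: {dR:.3f} Angstrom")    # ~0.15 Angstrom

# Conversely, a 0.15 Angstrom splitting would put the beat near:
print(f"expected beat: {np.pi / (2.0 * 0.15):.1f} 1/Angstrom")   # ~10.5
\end{verbatim}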
In doped cuprates, lattice instability is observed as local distortions (creation of elongated and shortened bonds) probed by EXAFS, reflecting the presence of strong hole-lattice interaction \cite{7,8,9,10,11}. In the case of the doped LaFeAsO system, similar behavior is found in both the As and Fe $K$ edge EXAFS. We note that this distortion is of electronic origin and is different from a crystallographic phase transition, as it is observed only after carrier doping and only at low temperature. The anomalous change below 70 K is explained using a local lattice distortion model having equal numbers of elongated and shortened Fe-As bonds separated by about 0.15 \AA. Among candidate distortion models characterized by elongated and shortened Fe-As bonds, we consider the two cases shown in Fig. 6, which illustrates the distortion in the FeAs layer in F-doped LaFeAsO. The left panels show the displacement of the As atoms (grey balls) tetrahedrally coordinated with the Fe atoms. In the right panels, the four-fold coordination of the Fe atoms is represented by pyramids, where each corner indicates the location of an As atom and the direction of its displacement is indicated by an arrow. In the upper model all four Fe-As bonds in the same unit elongate and the shortening of bonds occurs in the adjacent unit, while in the lower model the elongation and shortening occur in the same unit. In analogy to the doped cuprates, the former and latter models correspond respectively to the breathing \cite{21} and Q$_2$ \cite{22,23} distortions proposed for La$_{1.85}$Sr$_{0.15}$CuO$_4$. In the LaFeAsO system, either of these two distortions may account for the detected anomaly. We now discuss the implications of the present results. First, the temperature dependence of the mean-square relative displacement of the Fe-As bond shows a remarkable similarity to that of the cuprate superconductors \cite{8,11}. That is, a significant upturn in the temperature dependence of $\sigma^2_{\rm Fe-As}$ in F-doped LaFeAsO (or of $\sigma^2_{\rm Cu-O}$ in Sr-doped La$_2$CuO$_4$) occurs at a characteristic temperature $T^*$, which in the cuprates is related to the opening of the pseudogap \cite{9,13,24,25}. In the LaFeAsO(F) system, the opening of a pseudogap was recently reported \cite{16,17}, consistent with the observed onset of the upturn of $\sigma^2_{\rm Fe-As}$. We interpret this anomaly as a signature of lattice instability that indicates the formation of polarons. Secondly, the increase of the mean-square relative displacement continues until a sudden drop occurs at the onset of the superconducting transition. The plot of mean-square relative displacement \emph{vs}.\ normalized temperature ($T/T_c^{onset}$) clearly shows a large decrease at the superconducting transition onset temperature in both the Fe-based and cuprate superconductors, indicating that lattice effects might be important in both systems. However, whether or not the superconducting mechanism in these systems is driven by electron-lattice interaction needs further experimental and theoretical study. \section{Conclusion} In conclusion, we provide evidence from EXAFS measurements that a local lattice instability occurs in the F-doped LaFeAsO superconductor, similar to that in cuprate superconductors. This local lattice distortion may signal polaron formation well above $T_c$. The mean-square relative displacement of the Fe-As bond exhibits a sharp drop at the onset transition temperature, indicating that lattice effects might be important in this system. \section{Acknowledgments} The authors express their greatest thanks to H. Koizumi for inspiring discussions. The EXAFS experiments were conducted under proposal 2007G071 at the Photon Factory.
\section{Introduction} Meson distribution amplitudes (DAs) give the probability amplitude for finding a meson in a quark-antiquark Fock state. They are formally defined as follows, where $M(P)$ is some meson state: \begin{equation} \phi_M(x,\mu) = \frac{i}{f_M} \int \frac{d\mathcal{E}}{2\pi} e^{i(x-1)P\cdot\mathcal{E}n} \left< M(P)|\bar{\psi}(0)n \cdot \gamma \gamma_5 U(0,\mathcal{E}n) \psi(\mathcal{E}n)|0 \right>. \end{equation} Meson DAs are important for understanding how light-quark hadron masses emerge from QCD. They are also important inputs in many hard exclusive processes at large momentum transfers, where the cross-section can be factorized into a short-distance hard-scattering part and long-distance universal quantities such as lightcone DAs. The lightcone DAs can be determined from fits to experimental data or calculated in lattice QCD. Meson DAs are like parton distribution functions in that they are universal quantities (meaning they are process independent) and non-perturbative in nature. However, owing to the composition of mesons compared to heavier hadrons, meson DAs are less constrained by experiments; they are also subject to various model-dependent calculations, and there is no global-fitting result to compare them with. \section{Lattice Setup} Our lattices come from the MILC collaboration~\cite{Bazavov:2012xda}. We use 2+1+1 flavors of highly improved staggered quarks (HISQ) for the sea quarks, and a clover fermion action for the valence quarks. On each lattice ensemble, we use multiple sources uniformly distributed in the time direction and randomly distributed in the spatial directions. \begin{table*}[th!] \vspace{-2mm} \center \begin{tabular}{|c|ccc|ccc|} \hline Ensemble ID & $a$ (fm) & $M_\pi$ (MeV) & $M_\pi L$ & $P_z$ (GeV) & $N_\text{conf}$ & $N_\text{meas}$ \\\hline $a15m310$ & 0.1510(20) & 320(5) & 3.93 & \{1.02, 1.54, 2.05\} & 452 & 10,848 \\ \hline $a12m310$ & 0.1207(11) & 310(3) & 4.55 & \{1.28, 1.71, 2.14\} & 1013 & 194,496 \\ $a12m220$ & 0.1184(10) & 228(2) & 4.38 & \{1.31, 1.63, 1.96\} & 959 & 368,256 \\ \hline $a09m310$ & 0.0888(8) & 313(3) & 4.51 & \{1.31, 1.74, 2.18\} & 889 & 39,648 \\ \hline $a06m310$ & 0.0582(4) & 320(2) & 3.90 & \{1.33, 1.77, 2.22\} & 593 & 2,372 \\ \hline \end{tabular} \vspace{-2mm} \caption{Information for different lattice ensembles used in this work. \label{tab:hisq} } \end{table*} Table \ref{tab:hisq} shows the five ensembles on which we have performed calculations. We have four lattice spacings at 0.06, 0.09, 0.12, and 0.15 fm. We set the pion mass to 310 MeV at each lattice spacing, and additionally to 220 MeV at the 0.12 fm spacing. The overall momentum range is roughly 1.02--2.22 GeV. Also note the number of measurements for each ensemble; for the lightest mass (220 MeV) we have the largest number of measurements (nearly 370,000). To extract matrix elements, we first need to calculate the DA two-point correlators, which are defined for different mesons as follows: \begin{equation} C_M^{DA}(z,P,t) = \left<0 \left| \int d^3 y e^{i \vec{P}\cdot \vec{y}} \bar{\psi}_1(\vec{y},t) \gamma_z \gamma_5 U(\vec{y},\vec{y}+z\hat{z})\psi_2(\vec{y}+z\hat{z},t) \bar{\psi}_2 (0,0) \gamma_5 \psi_1 (0,0) \right|0\right> \end{equation} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figs/pion-DA.png} \caption{Diagram of our physical setup, illustrating how we construct our two-point correlators. In this work, we seek pion lightcone DAs.
\label{fig:kaon_diagram}} \end{figure*} Figure \ref{fig:kaon_diagram} shows an illustration of our physical setup, which gives context to our two-point correlators. `$\bar{\psi}_2 (0,0) \gamma_5 \psi_1 (0,0)$' is the local meson operator which creates the meson (a pion in this work). `$\bar{\psi}_1(\vec{y},t) \gamma_z \gamma_5 U(\vec{y},\vec{y}+z\hat{z})\psi_2(\vec{y}+z\hat{z},t)$' is our main operator, which corresponds to the linked quarks. Calculating these two-point correlators is the first step in obtaining our matrix elements, which are formally defined as \begin{equation} \tilde{h}_M (z,P_z) = \left< M(P) \left| \bar{\psi}(0) \gamma^z \gamma_5 U(0,z) \psi(z) \right|0\right> \end{equation} \section{Matrix Elements} In practice, we extract the DA matrix elements from a two-point correlator fit of the following form: \begin{equation} C_M^{DA}(z,P,t) = A_{M,0}^{DA}(P,z)e^{-E_{M,0}(P)t} + A_{M,1}^{DA}(P,z)e^{-E_{M,1}(P)t}+... \label{eq:correlator_fit_form} \end{equation} The amplitudes $A_M^{DA}$ of each exponential are proportional to the matrix elements $\tilde{h}_M$. Figure \ref{fig:correlator_fit} shows plots of $\tilde{A}$ versus time, where $\tilde{A}$ is the correlator form of Equation \ref{eq:correlator_fit_form} multiplied by $e^{E_{M,0}(P)t}$ (the inverse of the ground-state exponential) to minimize the overall $t$ dependence. The correlator data times that same exponential are plotted in green. \begin{figure*} \includegraphics[width=0.49\textwidth]{figs/a09m310d_amplitude_pion_p4_z3_Re.pdf} \includegraphics[width=0.49\textwidth]{figs/a09m310d_amplitude_pion_p4_z3_Im.pdf} \caption{Two-point correlator fit test plots. \label{fig:correlator_fit}} \end{figure*} We use two fitting approaches to verify the stability of our results: first, we fix both the ground state and first excited state energies and fit the correlator; then, we separately fix only the ground state energy and fit the correlator. From the overlapping bands in the plots of Figure \ref{fig:correlator_fit}, we see a high level of agreement between the two approaches, which tells us that our fit is stable. As an example of our fitted matrix elements, Figure \ref{fig:bare_me} shows the real and imaginary parts of the bare matrix elements on the a09m310 ensemble at a momentum of 2.18 GeV, plotted against $zP_z$ (displacement times boosted momentum). We vary the minimum $t$ value between 2, 3, and 4 to determine whether the resulting matrix elements show a strong dependence on $t_{\rm min}$. Even at this relatively high momentum, the three sets of matrix elements agree and show very little difference from one another, another indication that our fit is stable. \begin{figure*} \includegraphics[width=0.49\textwidth]{figs/a09m310_me_plot_re_pion_tmin_compare.pdf} \includegraphics[width=0.49\textwidth]{figs/a09m310_me_plot_im_pion_tmin_compare.pdf} \caption{Real and imaginary bare matrix element plots for the a09m310 ensemble at $P_z$=2.18 GeV. We show three different fits with the minimum $t$ value varied between 2, 3, and 4. The three sets show strong agreement. \label{fig:bare_me}} \end{figure*} To renormalize these matrix elements, we use the RI/MOM scheme. The details of the renormalization process are found in this group's prior work \cite{Zhang:2020}. We use the common scale $\mu^R$=3.8 GeV and $p_z^R$=0.
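For concreteness, the two-state extraction of Equation \ref{eq:correlator_fit_form} can be sketched with a standard least-squares fit; the data here are synthetic stand-ins for $C_M^{DA}(z,P,t)$, not lattice results, and in the actual analysis the energies are constrained as described above.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_state(t, A0, A1, E0, E1):
    # Two-exponential correlator ansatz: C(t) = A0 e^{-E0 t} + A1 e^{-E1 t}
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

t = np.arange(2, 15)                 # source-sink separations (lattice units)
rng = np.random.default_rng(0)
data = two_state(t, 0.8, 0.3, 0.95, 1.6) * (1 + 0.01 * rng.standard_normal(t.size))

popt, pcov = curve_fit(two_state, t, data, p0=[1.0, 0.5, 1.0, 2.0])
A0, A1, E0, E1 = popt

# Effective amplitude, as plotted in the text: multiplying out the
# ground-state exponential should give a plateau at A0 for large t
A_tilde = data * np.exp(E0 * t)
\end{verbatim}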
Still looking at the a09m310 ensemble (i.e., with the lattice spacing and mass fixed, isolating the momentum dependence), Figure \ref{fig:renormalized_me} shows real and imaginary $zP_z$ plots of our renormalized matrix elements for three values of $P_z$: 1.31, 1.74, and 2.18 GeV. There are small differences between the matrix elements at the three momenta, but overall they have the same form, so we do not see a strong $P_z$ dependence in our renormalized matrix elements. \begin{figure*} \includegraphics[width=0.49\textwidth]{figs/a09m310_me_plot_re_345mom_compare_pion.pdf} \includegraphics[width=0.49\textwidth]{figs/a09m310_me_plot_im_345mom_compare_pion.pdf} \caption{Real and imaginary $zP_z$ plots of our renormalized matrix elements on the a09m310 ensemble with $P_z$=\{1.31, 1.74, 2.18\} GeV. \label{fig:renormalized_me}} \end{figure*} Next, we isolate the mass and lattice-spacing dependences by incorporating all five of our ensembles (see Figure \ref{fig:renorm_fixed_mom}). In this case we fix the momentum as close to 1.7 GeV as we can on each ensemble. The mass dependence comes from comparing the two fits at the 0.12 fm spacing: the green markers in Figure \ref{fig:renorm_fixed_mom} show the 220 MeV pion mass and the red markers the 310 MeV pion mass. These two sets of matrix elements are very close to each other, indicating only minimal pion-mass dependence in our fits. \begin{figure*} \includegraphics[width=0.49\textwidth]{figs/pion_ME_ensemble_compare_Re.pdf} \includegraphics[width=0.49\textwidth]{figs/pion_ME_ensemble_compare_Im.pdf} \caption{Real and imaginary $z$ plots of our renormalized matrix elements for all five of our ensembles with $P_z\approx$1.7 GeV. \label{fig:renorm_fixed_mom}} \end{figure*} For the lattice-spacing dependence, we look at the purple (0.06 fm), pink (0.09 fm), red (0.12 fm) and blue (0.15 fm) markers in Figure \ref{fig:renorm_fixed_mom}. These four sets of matrix elements show general agreement in the small-to-mid-$z$ region, though with larger deviations than we saw for the mass dependence. The disagreement in the large-$z$ region is likely due to higher-twist effects, so these results meet our expectations. The dashed and dotted curves are the Fourier transformations of two asymptotic approximation forms for the pion lightcone DA. We see that the real parts of the lattice data are closer to the $(8/\pi) \sqrt{x(1-x)}$ form, while in the small-to-mid-$z$ region the imaginary parts are closer to the $6x(1-x)$ form. \section{Pion Lightcone DA} Next, we look to determine the pion DA from the continuum-physical matrix elements $h$, which we obtain through extrapolation using the form \begin{equation} h = h_0 (1 + c_2 a^2 + d_2 M_{\pi}^2). \label{eq:extrapolation_form} \end{equation} Note that we use an $a^2$ lattice-spacing dependence in this form. In the continuum-physical limit, our matrix elements become inputs into the following form: \begin{equation} h(z,\mu^R,p_z^R,P_z) = \int_{-\infty}^{\infty} dx \int_0^1 dy \;\; C \left(x,y,\left(\frac{\mu^R}{p_z^R} \right)^2,\frac{P_z}{\mu^R},\frac{P_z}{p_z^R} \right) f_{m,n}(y)e^{i(1-x)zP_z} \label{eq:continuum_physical_me} \end{equation} where $C$ is the appropriate matching kernel, which is calculated perturbatively \cite{Liu:2019urm}. Our goal is to determine the pion lightcone DA, an unknown function $f$ of the momentum fraction $y$ that we characterize by two parameters, $m$ and $n$.
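The continuum-physical extrapolation of Equation \ref{eq:extrapolation_form} amounts to a three-parameter fit across the ensembles of Table~\ref{tab:hisq} at fixed $z$ and $P_z$; a sketch with placeholder (not actual) matrix-element values:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def h_model(X, h0, c2, d2):
    # Extrapolation form: h = h0 (1 + c2 a^2 + d2 Mpi^2), at fixed z and Pz
    a, Mpi = X
    return h0 * (1.0 + c2 * a**2 + d2 * Mpi**2)

# (a [fm], Mpi [GeV]) for the five ensembles; the h values below are
# hypothetical placeholders, not lattice results
a   = np.array([0.1510, 0.1207, 0.1184, 0.0888, 0.0582])
Mpi = np.array([0.320, 0.310, 0.228, 0.313, 0.320])
h   = np.array([0.52, 0.54, 0.55, 0.56, 0.57])

popt, pcov = curve_fit(h_model, (a, Mpi), h, p0=[0.5, 0.0, 0.0])
h0, c2, d2 = popt
h_phys = h_model((0.0, 0.135), h0, c2, d2)  # a -> 0 at the physical pion mass
\end{verbatim}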
Figure \ref{fig:renorm_phys_lim} again shows $zP_z$ plots of our matrix elements (interpreted as continuous curves) alongside the physical limit. \begin{figure*} \includegraphics[width=0.49\textwidth]{figs/Renorm_phys-limit_re.png} \includegraphics[width=0.49\textwidth]{figs/Renorm_phys-limit_im.png} \caption{Real and imaginary plots of the renormalized matrix elements on all five of our ensembles with $P_z$=1.74 GeV. The physical limit is plotted in gray. \label{fig:renorm_phys_lim}} \end{figure*} One approach to extracting the pion DA is to assume a functional form for $f$ and fit our continuum-physical matrix elements to Equation \ref{eq:continuum_physical_me}. We require the DAs to vanish outside the physical (momentum-space) region $x\in[0,1]$, and to date our most-used candidate function is the common meson PDF global-fitting form: \begin{equation} f_{m,n} = x^m (1-x)^n / B(m+1,n+1) \label{eq:functional_fitting_form} \end{equation} where $m$ and $n$ are undetermined constants that we obtain through fitting, and we divide by the beta function to normalize the lightcone DA. Figure \ref{fig:functional_fit} shows the corresponding fit results in position space using the extrapolated data at a momentum of 1.74 GeV, with the extrapolated data shown in blue. The real plot indicates a reasonable fit, but the imaginary plot shows a slight mismatch in the mid-$z$ region. Figure \ref{fig:functional_fit} also shows our preliminary pion lightcone DA in momentum space, plotted against the results of previous calculations; it agrees closely with the RQCD'19 calculation. \begin{figure*} \center \includegraphics[width=0.49\textwidth]{figs/extrapolated_functional_fit_p4_Re_2params.pdf} \includegraphics[width=0.49\textwidth]{figs/extrapolated_functional_fit_p4_Im_2params.pdf} \includegraphics[width=0.49\textwidth]{figs/extrapolated_functional_fit_mom_space_p4_2params.pdf} \caption{Results of the functional fit to the form in Equation \ref{eq:functional_fitting_form}. The top plots show the fit results in position space against the extrapolated data. In the bottom plot, our preliminary pion lightcone DA is plotted in green against the results of earlier calculations. \label{fig:functional_fit}} \end{figure*} Alternatively, following \cite{Zhang:2020}, we use a machine learning (ML) approach to obtain the $x$-dependence of the pion DA. Since we cannot solve for $f_{m,n}$ directly from Equation \ref{eq:continuum_physical_me}, we instead try to predict the form of the lightcone DA with a trained ML model. We train a multilayer perceptron (MLP) regressor on a set of pseudo-random polynomials $f_{m,n}$, each of the same PDF global-fitting form used in the functional fitting approach, with randomly generated $(m,n)$ pairs. We try two different regressor models. First, we use the standard MLP regressor as implemented in the scikit-learn Python package. We also use a modified version of this regressor with a custom loss function $L$: \begin{equation} L = \frac{1}{2} \left[ \left( \frac{(y - y_\text{pred})}{y^{\gamma}} \right)^2 + \beta \left( \dfrac{d^2 y}{dx^2} \right)^2 \right] \label{eq:ML_custom_loss_func} \end{equation} where $\gamma$ and $\beta$ are tunable hyperparameters (this loss function is a generalization of the default mean-squared-error loss). All of our regressors have 3 hidden layers with 20 nodes each.
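Before turning to the results, the training-data form and the modified loss can be made concrete. This is a minimal numpy sketch of Equations \ref{eq:functional_fitting_form} and \ref{eq:ML_custom_loss_func}, with arbitrary hyperparameter and $(m,n)$ values rather than our production setup.

\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn

def f_mn(x, m, n):
    # Normalized ansatz: f(x) = x^m (1-x)^n / B(m+1, n+1) on (0,1)
    return x**m * (1.0 - x)**n / beta_fn(m + 1, n + 1)

def custom_loss(y, y_pred, x, gamma=0.5, beta=1e-3):
    # Relative-error term plus a curvature penalty that discourages
    # oscillatory predictions (a generalized mean-squared error)
    d2y = np.gradient(np.gradient(y_pred, x), x)
    return 0.5 * np.mean(((y - y_pred) / y**gamma)**2 + beta * d2y**2)

x = np.linspace(0.01, 0.99, 99)
y_true = f_mn(x, 0.5, 0.5)      # one pseudo-random (m, n) training function
y_pred = f_mn(x, 0.6, 0.55)     # stand-in for an MLP prediction
print(custom_loss(y_true, y_pred, x))
\end{verbatim}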
Figure \ref{fig:ML_pred} shows an example of our predictions from each type of model. The two $zP_z$ plots show that our real and imaginary predicted MEs are fairly consistent with the true lattice data (with slight disagreement in the mid-$z$ region), while the two models predict only marginal differences from each other. The pion DA predictions resemble the functional fitting result, though the ML DA distributions are characteristically broader and flatter; they are also noisier, which matches our expectations. Again, the two models agree, though they deviate a little at larger $x$. This work is ongoing, and we still need to work out some of the kinks of the ML approach, particularly in the treatment of the loss function. \begin{figure*} \center \includegraphics[width=0.49\textwidth]{figs/lat_pred_no_spline_zpz_p1.7_re_N100000_truncated.pdf} \includegraphics[width=0.49\textwidth]{figs/lat_pred_no_spline_zpz_p1.7_im_N100000_truncated.pdf} \includegraphics[width=0.49\textwidth]{figs/lat_pred_no_spline_zpz_p1.7_N100000_compare.pdf} \caption{Machine learning predictions. The $zP_z$ plots show our predicted matrix elements from both the unmodified (original) and modified models against the true lattice data. The two predictions are very similar, and they show reasonable agreement with the lattice data. The $x$-space plot shows our (preliminary) predicted pion lightcone DAs from each model. Again, both predictions are close, though there is some disagreement at the high end of the $x$ range. \label{fig:ML_pred}} \end{figure*} \section*{Acknowledgments} We thank the MILC Collaboration for sharing the lattices used to perform this study. The LQCD calculations were performed using the Chroma software suite~\cite{Edwards:2004sx} with multigrid solvers~\cite{Babich:2010qb,Osborn:2010mb}. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 through ERCAP, and facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy, and was supported in part by Michigan State University through computational resources provided by the Institute for Cyber-Enabled Research (iCER). The work of NJ is supported by a Graduate Fellowship of the College of Natural Science at Michigan State University; CH is supported by the Professional Assistant program of the Honors College at MSU. RZ and HL are partly supported by the US National Science Foundation under grant PHY 1653405 ``CAREER: Constraining Parton Distribution Functions for New-Physics Searches''.
\section{Introduction}\label{intro} One of the open questions in inflationary cosmology is the mechanism by which inflation came to an end. The current literature is dominated by two paradigms: violation of slow roll bringing inflation to an end while the field is still evolving, and a second-order phase transition of hybrid inflation type. However, Guth's original (but unsuccessful) proposal \cite{guth} invoked a first-order phase transition whereby inflation ended by nucleation of bubbles of true vacuum. First-order transitions have subsequently experienced bursts of popularity. In the late 1980s, La and Steinhardt \cite{la_steinhardt} initiated intensive investigation of `extended inflation' models, where modifications to Einstein gravity allowed bubble nucleation to complete in single-field inflation. A few years later those models were struggling in the face of observations, and focus instead returned to Einstein gravity, now in a two-field context with one rolling and one tunnelling field \cite{linde, adams_freese, copeland_al}, although see \cite{notari1,notari2}. In addition to the usual quantum fluctuation mechanism, first-order inflation models produce density perturbations through the bubble collisions and subsequent thermalization. The spectrum of bubble sizes produced must be far from scale invariance to avoid clear conflict with observed microwave anisotropies --- the largest of the bubbles would otherwise be blatantly visible \cite{liddle_wands91,liddle_wands92, griffiths_al}. This requirement is typically at odds with the need to maintain scale invariance in the spectrum produced by quantum fluctuations, a tension sufficient to exclude extended inflation variants except in extremely contrived circumstances \cite{liddle_wands}. The purpose of this paper is to investigate whether the strengthened constraints of the post-Wilkinson Microwave Anisotropy Probe (WMAP) era have eliminated the Einstein gravity first-order models too and, by implication, to assess whether it is plausible that voids exist below current detection limits. In Guth's original model, with one field, the inflaton must remain in the metastable vacuum long enough to allow for sufficient $e$-folds of inflation; but in that case inflation never ends: the bubbles never thermalize and the transition does not complete. Introduction of a second field allows a time-dependent nucleation rate, permitting enough inflation to occur while the nucleation rate is low and a successful end when the rate rises to high enough values. This idea was proposed independently by Linde \cite{linde} and, in more detail, by Adams and Freese \cite{adams_freese} under the name `double-field inflation'. Typically the second field, which is trapped in the metastable vacuum, also provides most of the energy density for inflation, although this depends on the particular values of the parameters chosen. In that regime, the usual prediction is a blue spectrum of density perturbations, $n_{\rm S}>1$. In the last few years the trend in cosmic microwave background (CMB) observations has been a tightening of the confidence limits around a central value of $n_{\rm S}$ smaller than one, disfavouring this regime. Since our goal is to investigate the general viability of this type of model we will probe the entire parameter space, including the intermediate region where the contributions of each field to the energy density are comparable, making no approximations based on inflaton or false vacuum domination.
As stated above, one expects these models to run into difficulty with recent observations closing in on a nearly scale-invariant scalar spectrum. CMB anisotropy observations place constraints on the maximum size of bubbles that survive from a first-order phase transition at the time when scales of cosmological interest leave the horizon. In turn this places a strong upper limit on the nucleation rate at this time, after which the rate must rise sufficiently to complete the transition and provide a graceful exit for inflation. In order to meet these two requirements the field must proceed swiftly along the potential, which, in light of observations, places the model under stress. \begin{figure*}[t] \begin{center} $\begin{array}{c c} \epsfxsize=8.5cm \epsffile{2nd_order.eps} & \epsfxsize=8.5cm \epsffile{1st_order.eps}\\ \mbox{\bf (a)} & \mbox{\bf (b)} \end{array}$ \end{center} \caption{ \textbf{(a)} The potential for a second-order phase transition. The field reaches the true vacuum through a continuous transition, and the breaking of the symmetry implies that there will be defect formation at the end of the transition. The true vacuum minima develop once the field passes the point of instability, $\phi_{\rm inst}$. \textbf{(b)} The same for the first-order case. In this case the transition is discontinuous and proceeds through quantum tunnelling of the $\psi$ field to the true vacuum. The second minimum develops after the point of inflection $\phi_{\rm infl}$. The couplings in both \textbf{(a)} and \textbf{(b)} have been chosen so as to produce a visible barrier height (in working models this is negligible compared to the false vacuum energy).} \label{2pot} \end{figure*} \section{The first-order model} We consider throughout a fairly general form of the potential for a first-order phase transition, given by Copeland et al.\ \cite{copeland_al}: \begin{eqnarray}\label{pot} V(\phi,\psi)&=&\frac{1}{4}\lambda(M^4+\psi^4)+ \frac{1}{2}\alpha M^2 \psi^2 -\frac{1}{3}\gamma M \psi^3 \nonumber\\ && + \frac{1}{2} m^2 \phi^2 + \frac{1}{2} \lambda' \phi^2 \psi^2 \,. \end{eqnarray} This extends the simplest second-order hybrid inflation model by the addition of the cubic term for the $\psi$ field. As in conventional hybrid inflation, one envisages that initially the inflaton field $\phi$ is displaced far from its minimum, and the auxiliary field $\psi$ is then held in a false vacuum state by its coupling to the inflaton. Perturbations are generated during this initial phase as $\phi$ rolls slowly along the flat direction. The dynamics in this region are essentially those of single-field slow-roll inflation, though the auxiliary field $\psi$ may provide most of the energy density for inflation; see Fig.~\ref{2pot}. In a model where the phase transition is second-order, shown in Fig.~\ref{2pot}a, the false vacuum becomes unstable after $\phi$ passes a certain value, $\phi_{\rm inst}$, and the fields evolve classically to their true vacuum (here producing topological defects as causally separated regions make independent choices as to which minimum to finish in). Although not the main topic of this paper, we explore current constraints on this model in the Appendix. In the first-order case, shown in Fig.~\ref{2pot}b, if the parameters in Eq.~(\ref{pot}) are chosen appropriately, a second minimum develops once the field evolves past a point of inflection, $\phi_{\rm infl}$. At this point bubbles of the true vacuum begin to nucleate and expand at the speed of light. The percolation rate is initially very small while the vacuum energies are comparable, but as $\phi$ approaches zero the interaction between the fields triggers a steep rise in bubble production. Inflation ends when the nucleation rate reaches high enough values that the bubbles percolate and thermalize. In this case there is only one true vacuum and hence no topological defects. The channel in which the field rolls after tunnelling is much too steep to sustain any inflation within the bubbles.
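For concreteness, the shape of this potential is easy to explore numerically; the parameter values in the sketch below are purely illustrative and are not the CMB-normalized choices used later in the paper.

\begin{verbatim}
import numpy as np

def V(phi, psi, M=1.0, m=0.1, lam=1.0, lamp=1.0, alpha=0.1, gamma=1.2):
    # First-order potential, Eq. (pot):
    # V = lam/4 (M^4 + psi^4) + alpha/2 M^2 psi^2 - gamma/3 M psi^3
    #     + m^2/2 phi^2 + lamp/2 phi^2 psi^2
    return (0.25 * lam * (M**4 + psi**4) + 0.5 * alpha * M**2 * psi**2
            - gamma * M * psi**3 / 3.0
            + 0.5 * m**2 * phi**2 + 0.5 * lamp * phi**2 * psi**2)

# A second minimum in the psi direction requires gamma^2 > 4 alpha lam;
# it appears once phi drops below the inflection point
M, lam, lamp, alpha, gamma = 1.0, 1.0, 1.0, 0.1, 1.2
phi_infl = M * np.sqrt((gamma**2 - 4 * alpha * lam) / (4 * lamp))
psi = np.linspace(0.0, 1.2, 200)
V_slice = V(0.5 * phi_infl, psi)   # psi-profile between Figs. 3(b) and 3(c)
\end{verbatim}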
\begin{figure*}[t] \begin{center} $\begin{array}{c c c} \epsfxsize=5.6cm \epsffile{pot1.eps} & \epsfxsize=5.6cm \epsffile{pot2.eps} & \epsfxsize=5.6cm \epsffile{pot3.eps} \\ \mbox{\bf (a)} & \mbox{\bf (b)} & \mbox{\bf (c)} \end{array}$ \end{center} \caption{ \textbf{(a)} At early times, away from $\phi=0$, there is only one minimum available for $\psi$ and the field is trapped in the false vacuum. \textbf{(b)} Given appropriate choices for the couplings, a second minimum begins to develop when the field reaches the point of inflection of the potential. \textbf{(c)} Once the transition becomes energetically favourable, the $\psi$ field begins to tunnel to the newly formed minimum, which eventually becomes the true vacuum.} \label{3reg} \end{figure*} For large values of $\phi$ there is only one minimum of the potential, and in the $\psi$ direction the potential looks like Fig.~\ref{3reg}a. However, if $\gamma^2 > 4\alpha \lambda$, a second minimum develops after $\phi$ reaches a point of inflection \begin{equation} \phi_{\rm infl}^2=M^2 \frac{\gamma^2-4 \alpha \lambda}{4 \lambda'}\,, \end{equation} as in Fig.~\ref{3reg}b. The presence of the cubic term in the potential then breaks the degeneracy between the two minima, making it possible for the field to tunnel to the newly formed minimum. It is this second minimum that eventually becomes the true vacuum, and the $\psi$ field begins to tunnel once the transition becomes energetically favourable, Fig.~\ref{3reg}c. As mentioned in the previous section, the quantum generation of perturbations occurs away from this minimum, while the inflaton is rolling in the $\phi$ direction, and we consider horizon exit to occur around 55 $e$-folds before the end of inflation \cite{liddle_leach}. This evolution of $\phi$ is a crucial feature of the model, since it is the introduction of a time dependence in the tunnelling rate that allows the phase transition to complete, bringing inflation to an end. The rate at which bubbles nucleate is characterized by the percolation parameter (the number of bubbles generated per Hubble time per Hubble volume), \begin{equation} p=\frac{\Gamma}{H^4}\,. \end{equation} In the limit of zero temperature (taken because the transition occurs during inflation) the nucleation rate of bubbles can be approximated by \cite{callan_coleman} \begin{equation}\label{perc} p=\frac{\lambda M^4}{4H^4}\exp(-S_{\rm E}) \,, \end{equation} where $S_{\rm E}$ is the four-dimensional Euclidean action. $S_{\rm E}$ was obtained for first-order-transition quartic potentials by Adams \cite{adams}, who fitted the result as \begin{equation} S_{\rm E}= \frac{4 \pi^2}{3 \lambda}(2-\delta)^{-3}(\alpha_1 \delta+\alpha_2 \delta^2+\alpha_3 \delta^3) \,, \label{4act} \end{equation} where $\alpha_1=13.832$, $\alpha_2=-10.819$, $\alpha_3=2.0765$, and $\delta$ is a monotonically increasing function of $\phi^2$, \begin{equation}\label{delta} \delta=\frac{9 \lambda \alpha}{\gamma^2}+\frac{9 \lambda \lambda' \phi^2}{\gamma^2 M^2} \,.
\end{equation} The allowed range is $0<\delta<2$ (outside this range solutions correspond to energetically disallowed transitions). The transition to the true vacuum is complete once the percolation parameter reaches unity (one bubble per Hubble time per Hubble volume), allowing the bubbles of true vacuum to coalesce. However, in the most general case inflation need not end through bubble nucleation. If the potential is too steep, slow roll is violated before the bubbles thermalize and inflation ends before the transition completes. In this case the precise mechanism which completes the transition is irrelevant, given that it operates after inflation ends, and for our purposes the scenario is indistinguishable from the single-field case. (In this paper we do not consider gravitational waves produced via bubble collisions, but these may provide a further observable \cite{hogan, kosowsky_al, huber_konstandin, caprini_al} that can ultimately be used to constrain this type of model.) The distinction between the two possibilities is given by two values of the field: that at which the nucleation rate reaches unity, and that at which $\epsilon \sim 1$ (violation of slow roll), where $\epsilon$ is the usual slow-roll parameter defined in Eq.~(\ref{eps}). Inflation ends at whichever value of $\phi$ is reached first, \begin{equation} \phi_{\rm end}=\max(\phi_{\rm \epsilon},\phi_{\rm crit}) \,. \end{equation} \section{Inflationary dynamics} \subsection{Regimes} Two different regimes can be distinguished, according to which field dominates the energy density. In the usual hybrid inflation regime the energy density of the potential is dominated by the false vacuum, $\lambda M^4 \gg m^2 \phi^2$, which provides the energy for inflation. In the opposite regime, in which the inflaton dominates the energy density, the dynamics rapidly approach those of single-field inflation since, as we will see, slow-roll violation occurs sooner. Working in either of these two regimes would allow us to simplify some of the expressions governing the dynamics during inflation, such as the number of $e$-folds and the slow-roll parameters, Eqs.~(\ref{Ne}), (\ref{eps}) and (\ref{eta}), and to proceed via an analytical treatment instead of a numerical one. However, our purpose here is to probe the dynamics of the full $n_{\rm S}-r$ parameter space ($r$ is the tensor-to-scalar ratio given by Eq.~(\ref{r})), so as to determine whether there still remain models consistent with CMB observations. Hence we also include the intermediate regime in our analysis, where the energy densities of the two fields are comparable, particularly when the transition between slow-roll violation and bubble nucleation occurs. For this reason we will retain the full form of the potential and proceed through numerical calculations. \subsection{Field dynamics} In order to specify the dynamics of each model we begin by finding the field value, $\phi_{\rm end}$, at which inflation ends, for which we need to determine $\phi_{\rm \epsilon}$ and $\phi_{\rm crit}$. $\phi_{\rm \epsilon}$ is obtained by evaluating the first slow-roll parameter for our potential and setting it to unity,\footnote{The field $\psi$ sits in the false vacuum during the inflationary phase, since this is the only minimum available to $\psi$ in this region of the potential. This happens regardless of the means by which inflation ends, so $\psi$ is set to zero throughout this section.
} \begin{equation} \epsilon \equiv \frac{m_{\rm Pl}^2}{16 \pi} \left(\frac{V'}{V}\right)^2 = \frac{m^4 \phi^2 \, m_{\rm Pl}^2}{\pi (\lambda M^4 + 2 m^2 \phi^2)^2}\approx 1 \,. \end{equation} Inverting for $\phi$ yields, \begin{equation}\label{eps} \phi^2_{\epsilon} = \frac{m^2 m_{\rm Pl}^2 \pm m m_{\rm Pl} \sqrt{m^2 m_{\rm Pl}^2 -8\pi\lambda M^4}-4\pi\lambda M^4} {8 \pi m^2} \,, \end{equation} and we take the largest value of $\phi$. Note that the solution exists only for large values of $m$, where $m^2 m_{\rm Pl}^2 > 8 \pi \lambda M^4$. \begin{figure*}[t] \includegraphics[width= 0.94 \textwidth]{nr1st.eps} \caption{The trajectories described in the $n_{\rm S}-r$ parameter plane for first-order models when $M$ is varying and $m$ is set by the CMB normalization. The two lines correspond to different values of the coupling constant $\alpha$, the outermost $\alpha=0.1$ and the innermost $\alpha=0.01$. The two endpoints correspond to the endpoints in Fig.~\ref{2m} and converge at $M \sim 10^{-3} m_{\rm Pl}$ corresponding to the union of the branches in Fig.~\ref{2m}, at $(n_{\rm S}, r) \sim (0.99,0.5)$. Mass values are given in Planck units.}\label{nr1} \end{figure*} To determine $\phi_{\rm crit}$ we need to find the value at which the percolation parameter reaches unity, $p_{\rm crit} \sim 1$. Solving Eq.~(\ref{perc}), we get \begin{equation} S_{\rm crit}\sim \ln \frac{\lambda M^4}{4 p_{\rm crit} H^4} \label{S_cr} \,, \end{equation} where $S_{\rm crit}$ is given by Eq.~(\ref{4act}). Inverting Eq.~(\ref{S_cr}) yields a value for $\phi_{\rm crit}$ (only one of the three roots lies in the allowed range) and in turn this allows us to determine $\phi_{\rm end}$, and, by comparison with $\phi_{\rm \epsilon}$, the mechanism by which inflation ends. Knowing $\phi_{\rm end}$ we can calculate the value of the field at horizon exit, $\phi_{\rm 55}$. In this model $\phi$ rolls towards its minimum at $\phi=0$ so $\phi_{\rm 55} > \phi_{\rm end}$. Using the expression for the number of $e$-folds between two field values $\phi_1$ and $\phi_2$ we get, \begin{equation} N(\phi_1,\phi_2)\equiv \ln \frac{a_2}{a_1} \sim -\frac{8 \pi}{m_{\rm Pl}^2} \int_{\phi_1}^{\phi_2}\frac{V}{V'} \, d\phi \,. \end{equation} For $\phi_1= \phi_{55}$ and $\phi_2= \phi_{\rm end}$, and substituting for $V$, we have \begin{equation}\label{Ne} N(\phi_{55},\phi_{\rm end})= 2 \pi \lambda \frac{M^4}{m^2 m_{\rm Pl}^2} \ln \frac{\phi_{55}}{\phi_{\rm end}} + \frac{2\pi}{m_{\rm Pl}^2}(\phi_{55} ^2 - \phi_{\rm end}^2) \,, \end{equation} where we make no assumptions on the relative size of the two masses and retain both terms. Substitution of $\phi_{\rm end}$ yields $\phi_{\rm 55}$ and now we can calculate the scalar spectral index, $n_{\rm S}$, and the tensor-to-scalar ratio, $r$, at horizon exit, by use of their expressions in terms of the usual slow-roll parameters, \begin{eqnarray}\label{nS} n_{\rm S}-1&=&-6 \epsilon + 2 \eta \,;\\ r&=& 16 \epsilon \,, \label{r} \end{eqnarray} where $\epsilon$ is given by Eq.~(\ref{eps}), and $\eta$ is \begin{equation}\label{eta} \eta \equiv \frac{m_{\rm Pl}^2}{8 \pi} \frac{V''}{V} =\frac{m^2 \, m_{\rm Pl}^2}{2\pi (\lambda M^4 + 2 m^2 \phi^2)} \,, \end{equation} where the last equality is obtained by substitution of the potential. \begin{figure}[t] \includegraphics[width=0.47 \textwidth]{2m.eps} \caption{The relation between the two mass scales. 
The WMAP normalization admits two solutions for $m$, corresponding to false vacuum domination over the inflaton (lower branch), and the opposite regime, for large $m$, which is nearly independent of $M$ (upper branch). The two regimes converge to common behaviour. We consider all three regimes in the analysis and set $\alpha=0.1$, given by the upper curve in Fig.~\ref{nr1}.}\label{2m} \end{figure} At this point we can locate the model in the $n_{\rm S}-r$ plane and determine its position in relation to the WMAP5 confidence limits \cite{komatsu}. \subsection{Choosing parameters} Throughout we set the self-interaction and coupling constants, $\lambda$ and $\lambda '$ respectively, equal to unity. We are then left with two constants, $\alpha$ and $\gamma$, and requiring the energy density of the true vacuum to be zero fixes one of these in terms of the other. We will fix $\alpha$ in terms of $\gamma$, but the reverse option could just as well be taken. The CMB amplitude normalization can be used to relate the two masses. We use this to fix the mass of the light field $\phi$, and then we are left with only two undetermined parameters: the energy scale of the false vacuum, $M$, and the constant $\alpha$. For each value of $\alpha$, varying $M$ fully determines the dynamics of the fields, and describes a trajectory in the $n_{\rm S}-r$ plane shown in Fig.~\ref{nr1}. Each line is composed of two branches, which correspond to the two solutions of the WMAP normalization and converge for large values of $M \sim 2.7 \times 10^{-3} m_{\rm Pl}$. For values of $M$ larger than this there is no solution to the amplitude normalization, hence no viable models. This can also be seen in Fig.~\ref{2m}, which illustrates how the two different approximation schemes converge to a common behaviour and cease to exist after a certain value of $M$ (cf.\ Fig.~1 of Ref.~\cite{copeland_al}). The right-hand branch in Fig.~\ref{nr1} corresponds to the lower branch in Fig.~\ref{2m} and to the smaller value of $m$ from the WMAP normalization. In this branch the approximate relation $M \sim m^{2/5}$ (in Planck units) holds and the false vacuum dominates. The dynamics are indistinguishable in the $n_{\rm S}-r$ plane when $M < 10^{-4} m_{\rm Pl}$. We start with the typical slightly blue-tilted spectrum and negligible tensor fraction. As $m$ continues to increase so does the deviation from $n_{\rm S} \sim 1$, until the approximate relation between the two masses breaks down and the inflaton plays a more significant role in the energy density. At this point we observe a turn in the $n_{\rm S}-r$ plane, and the solution enters the intermediate region of comparable field energy densities. Despite this we still observe inflation ending by bubble nucleation throughout this branch, from small values of $M$ to the maximum at $M \sim 2.7 \times 10^{-3} m_{\rm Pl}$. In the opposite branch, on the left-hand side, the model starts inside the WMAP5 95\% confidence contour, well inside the inflaton-dominated regime. Similarly to the other branch, we observe an initial period where there is little dependence on the false vacuum energy, corresponding to the plateau in Fig.~\ref{2m}, and the dynamics are very well approximated by those of standard single-field inflation with a $\phi^2$ potential, well known to satisfy WMAP5 data.
This regime breaks down as the false vacuum energy increases, and eventually we recover the regime where the phase transition triggers the end of inflation before the violation of slow roll, meaning we are again in the bubble-production scenario. The interesting result here derives from the fact that the transition occurs inside the WMAP5 95\% confidence contour, making these models viable even away from false vacuum domination. Fig.~\ref{zoom} is a zoom of this region showing the mass scale $M$ at which the transition to bubble nucleation occurs, $M \sim 5\times 10^{-4} m_{\rm Pl}$, still allowed by the 95\% confidence limits. \begin{figure*}[t] \includegraphics[width=0.8\textwidth]{nr_zoom.eps} \caption{Zoom of Fig.~\ref{nr1} showing the transition between models ending by slow-roll violation and by bubble nucleation. The transition happens around $M \sim 5\times 10^{-4} m_{\rm Pl}$, well within the WMAP5 95\% confidence contour. Mass values in Planck mass units.}\label{zoom} \end{figure*} \section{Three constraints} In the previous section we looked at constraints in the $n_{\rm S}-r$ plane. By specifying a value for $\alpha$, one of our two free parameters $(M,\alpha)$, the CMB normalization then allows us to recover a trajectory in this plane and assess where the density perturbations are compatible with WMAP5 data. We now compute other constraints on the scenario, in the $M-\alpha$ plane. \subsection{Model consistency} We begin with the requirement that $M$ not be larger than an upper limit above which, for a particular choice of couplings, the transition does not complete ($\phi_{\rm crit}$ does not exist). We call this the model consistency constraint; it translates into a relation for the value of $\phi_{\rm crit}$, coming from the requirement that there exists a solution of Eq.~(\ref{delta}) for $\delta$. Because of the constant term in Eq.~(\ref{delta}) this is a requirement additional to $0<\delta<2$. Since we have chosen to set $m$ by the CMB normalization, this can be translated into an excluded region in the $(M,\alpha)$ plane (although alternatively we could have expressed it as a region in $(M,m)$, with $\alpha$ specified by the CMB normalization instead). This yields the region below the upper (blue) curve in Fig.~\ref{3constr}. We see that specifying a value for the false vacuum density imposes an upper limit on the coupling $\alpha$ (alternatively, on the inflaton mass $m$) in order for the model to be able to complete the phase transition. \subsection{Big bubble constraint} We adopt here a fairly crude criterion to judge whether the bubbles are compatible with observations, which is that any bubbles produced at the end of inflation and expanded to astrophysical sizes must, during the epoch of recombination, have a comoving size not larger than $20h^{-1}{\rm Mpc}$ \cite{liddle_wands91}. This corresponds to a maximum filling fraction at that time of $10^{-5}$, and puts an upper bound on the percolation rate of bubbles at the time the scales we observe today left the horizon: \begin{equation} \left(\frac{\Gamma}{H^4}\right)_{55} \leq 10^{-5} \,. \end{equation} With our form for the action, Eq.~(\ref{4act}), and choice of potential this becomes \begin{equation} S_{55}\sim -2.9 +4 \ln{\frac{m_{\rm Pl}}{\lambda^{1/4} M}}+ 11.5 \,. \end{equation} This gives us the region between the short dashed (black) lines in Fig.~\ref{3constr}.
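The WMAP constraint below is obtained by mapping $(M,\alpha)$ to $n_{\rm S}$; the slow-roll piece of that map, Eqs.~(\ref{eps}), (\ref{eta}), (\ref{nS}) and (\ref{r}), is sketched here with illustrative inputs rather than the CMB-normalized masses used in our actual scan.

\begin{verbatim}
import numpy as np

def observables(phi, m, M, lam=1.0, mpl=1.0):
    # Slow-roll parameters for V = lam M^4/4 + m^2 phi^2/2 (psi held in
    # the false vacuum), evaluated at phi = phi_55, and the resulting nS, r
    den = lam * M**4 + 2.0 * m**2 * phi**2
    eps = m**4 * phi**2 * mpl**2 / (np.pi * den**2)
    eta = m**2 * mpl**2 / (2.0 * np.pi * den)
    return 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps   # (nS, r)

# Illustrative values in Planck units; the paper instead fixes m by the
# CMB normalization and solves for phi_55 from the e-fold count
nS, r = observables(phi=3.0, m=1e-6, M=1e-3)         # ~ (0.97, 0.13)
\end{verbatim}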
\begin{figure*}[t] \includegraphics[width=0.8\textwidth]{3constr.eps} \caption{Excluded regions in the $(M,\alpha)$ parameter space. The continuous (blue) line corresponds to ensuring model consistency; if a model lies above this line the phase transition will not take place. The long dashed (red) line corresponds to the WMAP5 constraint on the scalar perturbation tilt and allows models to the left of the bound. The region between the short dashed (black) lines indicates models satisfying the maximum size of bubbles allowed by the level of anisotropy in the CMB.} \label{3constr} \end{figure*} \subsection{WMAP constraint} We can similarly place constraints on the $(M,\alpha)$ plane by considering the 95\% confidence limit resulting from the WMAP5 $n_{\rm S}-r$ plane when tensors are included, \begin{equation} n_{\rm S} \lesssim 1.05 \,. \label{nSwmap} \end{equation} Inverting Eq.~(\ref{nSwmap}) gives an upper limit on $M$ in terms of $\alpha$, resulting in the region to the left of the long dashed (red) line in Fig.~\ref{3constr}. We also see from Fig.~\ref{3constr} that this constraint is opposed to that coming from the CMB maximum bubble size requirement, as we argued in Section \ref{intro}. Big bubbles at last scattering put an upper limit on the nucleation rate at horizon crossing, while CMB constraints on the spectral tilt put a lower bound on the nucleation rate, from the requirement that $n_{\rm S}$ not be too distant from scale invariance. Nevertheless, a region of parameter space survives all constraints. \section{Conclusions} Our principal conclusion is that there do remain Einstein gravity models of first-order inflation which are compatible with observations, despite the increasing tension between the need for a scale-invariant primordial spectrum and the suppression of large-scale bubbles. We have exhibited a particular class of model and found the parameter region where the first-order model is viable. Its predictions for $n_{\rm S}$ and $r$ are similar to those of the simple $m^2 \phi^2$ slow-roll inflation model, though a little further from scale invariance. In this paper we have imposed a relatively simple constraint on the bubbles, and have then assumed that their impact on the CMB is negligible as far as constraints on the primordial perturbations are concerned. A more detailed treatment would combine the two perturbation sources and refit to the CMB data, which may lead to some modification of the outcome in regimes where the bubble production is close to the observational limit. For models where the bubbles are safely within the observational limits this is not an issue. This paper demonstrates that we are still some way from having a clear view as to how the inflationary period of the Universe may have ended. The literature contains three different mechanisms --- violation of slow roll, a second-order instability during slow roll, and bubble nucleation --- and we have shown that the last (and least popular) of these remains a viable option. First-order models are of phenomenological interest as the bubble spectrum is an additional source of inhomogeneity that could be considered in matching high-precision observations. The bubble collisions may also generate detectable gravitational waves \cite{hogan, kosowsky_al, huber_konstandin, caprini_al}. There is therefore an ongoing need to refine understanding of the nature of perturbations induced by a primordial bubble spectrum.
\begin{acknowledgments} M.C.\ was supported by FCT (Portugal) and by the Director, Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No.\ DE-AC02-05CH11231. A.R.L.\ was supported by STFC (UK). We thank Andy Albrecht, Katie Freese, Andrei Linde, and Eric Linder for discussions and comments. \end{acknowledgments} \begin{figure*}[t] \includegraphics[width=0.8\textwidth]{nr2nd.eps} \caption{Trajectory in the $n_{\rm S}-r$ parameter space describing second-order hybrid inflation models as $M$ evolves from small to large values. The potential has one coupling fewer than in the first-order case, and all models are described by a single curve, as opposed to Fig.~\ref{nr1}. Allowed models are those which approximate slow-roll behaviour. False-vacuum-dominated, blue-tilted models all lie outside the 95\% C.L.}\label{nr2} \end{figure*}
\section{Introduction} \label{intro} Constraints on inflationary cosmological models can be set by cosmological observations \cite{Liddle:2003,Linde:2002}. The inflationary expansion not only solves various problems of early-Universe Big Bang cosmology \cite{Starobinsky:1979,Sato:1981,Albrecht:1982,Linde:1982,Guth:1981}, but also provides an explanation for the large-scale structure arising from the quantum fluctuations of an inflationary field, $\phi$ \cite{Hawking,Guth:1982b,Starobinsky:1982b}. Furthermore, gravitational waves and the polarization due to the existence of inflation were reported in the cosmic microwave background (CMB) \cite{BICEP2}. The background imaging of cosmic extragalactic polarization (BICEP2) telescope at the south pole has caught the attention not only of physicists around the world, but of the wider public as well. It is believed that the BICEP2 observations offer evidence for cosmic inflation \cite{BICEP2}. Other confirmations, for instance from Planck \cite{Planck,Planck2} and WMAP9 \cite{WMAP} measurements, are likely in the near future. BICEP2 not only provided the first direct evidence for inflation, but also determined its energy scale and, furthermore, gave witness to the quantum gravitational processes of the inflationary era, in which primordial density and gravitational-wave fluctuations are created from quantum fluctuations \cite{Mukhanov,Bardeen}. The tensor-to-scalar fluctuation ratio, $r$, which is a canonical measure of the gravitational waves \cite{Liddle:2003, Linde:2002}, was estimated by BICEP2 as $r=0.2 _{-0.05}^{+0.07}$ \cite{BICEP2}. This value is to be compared with the upper bound corresponding to PLANCK, $r\leq 0.012$, and to the WMAP9 experiment, $r=0.2$. On the other hand, the PLANCK satellite \cite{Planck,Planck2} has reported the scalar spectral index $n_s\approx\,0.96$. If these observations hold, then the hypothesis that our Universe went through a period of cosmic inflation will be confirmed, and the energy scale of inflation should be very near the Planck scale \cite{Amaldi}. The large value of the tensor-to-scalar ratio, $r$, requires inflation fields as large as the Planck scale. This idea is known as the Lyth bound \cite{Lyth:96,Lyth:98,Green}, which estimates the change of the inflationary field, $\Delta \phi$, as \begin{eqnarray} \frac{\Delta \phi}{M_p} &=& \sqrt{\frac{r}{8}} \Delta N, \end{eqnarray} where $M_p$ is the Planck mass and $\Delta N$ denotes the number of e-folds over which the scales observed in the CMB left the inflationary horizon. For instance, taking $r=0.2$ and $\Delta N\simeq 4$ gives $\Delta\phi\simeq 0.63\,M_p$. Since the Planckian effects become important and need to be taken into account during the inflation era, as indicated by the Lyth bound, $\Delta \phi$ should be smaller than or comparable with the Planck scale, $|\Delta \phi|\lesssim M_p$. This constraint suggests focusing on concrete inflation-field models. In this case, the many corrections suppressed by the Planck scale appear less problematic, but come into tension with the BICEP2 discovery. Thus, more observations are required to confirm this conclusion. Various approaches to quantum gravity (QG) offer a quantized description of some problems of gravity; for details readers can consult Ref. \cite{Tawfik:BH2013}. The effects of a minimal length and a maximal momentum, which are likely applicable at the Planck scale (inflation era), lead to modifications of the Heisenberg uncertainty principle that appear as quadratic and/or linear terms in the momentum.
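For orientation, a frequently used one-dimensional form of the linear GUP (a sketch only; conventions for the deformation parameter vary across the cited literature) reads
\begin{equation}
[x,\,p] \,=\, i\hbar\left(1 - 2\,\alpha\, p \right), \qquad \Delta x \, \Delta p \,\geq\, \frac{\hbar}{2}\left(1 - 2\,\alpha\, \langle p \rangle \right),
\end{equation}
whose classical counterpart is the deformed Poisson bracket $\lbrace a,\,p_a\rbrace = 1-2\,\alpha\,p_a$ employed in section \ref{FRW} below.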
Such modifications can be implemented at this energy scale. The quadratic GUP was predicted in different theories, such as string theory, black hole physics and loop QG \cite{Tawfik:BH2013,Amati88,Amati87,Amati90,Maggiore93,Maggiore94,Kempf,Kempf97, Kempf2000,Kempf93,Kempf94,Kempf95,Kempf96,Scardigli,Scardigli2009}. The linear GUP, on the other hand, was introduced through doubly special relativity (DSR), which suggests a minimal uncertainty in position and a maximum measurable momentum \cite{DSR,Smolin,Amelino2002,Tawfik:BH2013}. Accordingly, a minimum measurable length and a maximum measurable momentum \cite{advplb,Das:2010zf,afa2} are simultaneously likely. This offers a major revision of quantum phenomena \cite{amir,Tawfik:BH2013,Pedram}. This approach carries the generic name {\it Generalized (gravitational) Uncertainty Principle (GUP)}. Recently, various implications of GUP approaches for different physical systems have been worked out \cite{Tawfik:2013uza,Ali:2013ma,Ali:2013ii,Tawfik:2012he,Tawfik:2012hz,Elmashad:2012mq,DiabBH2013}, App. \ref{GUPy}. In the present work, we estimate various inflationary parameters which are characterized by the scalar field $\phi$ and contribute to the total energy density. Taking into account the background (matter and radiation) energy density, the scalar field is assumed to interact with gravity and with itself. The coupling of $\phi$ to gravity is assumed to result in the total inflation energy. We first review the Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe and then suggest modifications of the Friedmann equation due to the GUP. Using the modified Friedmann equation and a single potential for a chaotic inflation model, the inflationary parameters are estimated and compared with the PLANCK and BICEP2 observations. The applicability of the GUP approaches in estimating inflationary parameters comparable with the recent BICEP2 observations will be discussed. In section \ref{FRW}, we present the Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe and introduce the modification of the Friedmann equation due to the GUP at the Planckian scale in a matter and radiation background. In section \ref{cos_inflat}, the modified Friedmann equation in cosmic inflation is introduced, some inflation potentials for chaotic inflation models are surveyed, and we suggest implementing a single inflationary field $\phi$. For the cosmic inflation models and quantum fluctuations, the inflationary parameters are given in section \ref{Fluctuation}. The discussion and final conclusions are outlined in section \ref{summary}. Appendix \ref{GUPH} gives details about the higher-order GUP with minimum length uncertainty and maximum measurable momentum in Hilbert space. The applicability of the GUP to cosmic inflation is elaborated in Appendix \ref{app:appl}. The modified dispersion relation (MDR), as an alternative to the GUP, is introduced in App. \ref{app:mdr}. \section{Generalized uncertainty principle in FLRW background} \label{FRW} In the $(3+1)$-dimensional FLRW Universe, the metric can be described by the line element \cite{dinverno} \begin{eqnarray} ds^2 =\, c^2 d t^2 - a(t)^2 \left(\frac{dr^2}{1-\kappa~r^2} + r^2 \, d \theta ^2 +\, r^2 \sin^2\theta \, d \phi ^2\right), \label{metric} \end{eqnarray} where $a(t)$ is the scale factor and $\kappa$ is the curvature constant measuring the spatial curvature, taking the values $\pm 1$ and $0$.
In Einstein-Hilbert space, the action reads \begin{eqnarray} S &=& \int \left( \frac{1}{8\, \pi \, G} L_{G} + L_{\phi} \right) d\,\Omega, \label{action} \end{eqnarray} where $d\Omega = \sin\theta\, d\theta\, d\phi$, $G$ is the gravitational constant, $L_G$ is the geometrical Lagrangian related to the line element of the FLRW Universe, and $L_{\phi}$ \cite{Rong:2005} is the Lagrangian of the scalar field $\phi$, \begin{eqnarray} L_{\phi}\,= \frac{1}{2}\, g^{\mu \, \nu}\, \partial _{\mu} \phi \, \partial_{\nu} \phi - V(\phi), \end{eqnarray} where $V(\phi)$ is the potential and $g^{\mu\, \nu}$ is the diagonal matrix $\,diag\left\lbrace 1, -1, -1, -1\right\rbrace$. Under the assumption of homogeneity and isotropy, a standard simplification of the variables leads to the FLRW metric, where the gradient of the scalar field vanishes. The integration of the action, Eq. (\ref{action}), over a unit volume results in $L_{\phi}=\frac{1}{2}a^3\dot{\phi}^2-a^3V(\phi)$. For the scalar field evaluated at vanishing mass, i.e. $V(\phi)=0$, the total FLRW Lagrangian then reads \begin{eqnarray} L &=& \frac{1}{2} a^3\, \dot{\phi}^2 -\frac{3}{8\, \pi\, G}\left(a\, \dot{a}^2 - a\, \kappa \right), \label{Lagra2} \end{eqnarray} and the energy-momentum tensor is \begin{eqnarray} T^{\mu \nu} &=& \partial ^{\nu}\phi\, \frac{\partial L}{\partial (\partial_{\mu}\, \phi)} - g^{\mu\, \nu} L, \end{eqnarray} while the four-momentum is given by $P^{\mu}= T^{\mu \, 0}$ and the Hamiltonian density by $\mathcal{H}=P^0 =T^{0 \, 0}$, \begin{eqnarray} \mathcal{H} &=& \pi\, \dot{\phi} - L, \label{Hconst0} \end{eqnarray} where $\pi=\partial L/\partial \dot{\phi}$ is the canonical momentum conjugate to the scalar field $\phi$. Thus, the total Hamiltonian is given as \begin{eqnarray} \mathfrak{h} &=& \int d^3\, x \, \mathcal{H}. \end{eqnarray} The scalar field is equivalent to a perfect fluid with energy density and pressure, respectively, \begin{eqnarray} \rho &=& \frac{\dot{\phi}^2}{2}\,+\, V(\phi), \label{rhophi} \\ p &=& \frac{\dot{\phi}^2}{2}\,-\, V(\phi). \end{eqnarray} When the cosmological constant $\Lambda$ is taken into account, the energy density is shifted, $\rho \rightarrow \rho+\rho _v$, with $\rho_v=\Lambda/(8\, \pi \, G)$. Using Eq. (\ref{Lagra2}) and taking into account Eq. (\ref{rhophi}), the dynamics of such models is summarized in the Hamiltonian constraint \begin{eqnarray} \mathcal{H}&=&-\frac{2\pi \, G}{3}\, \frac{p_{a}^2}{a} -\frac{3}{8\, \pi \, G}\, \kappa a\,+\, a^3 \rho \equiv 0. \label{Hconst} \end{eqnarray} This is equivalent to the standard result for the FLRW Universe \cite{HAWKING:1986,Roman,Farag2014}, where the momentum $p_a$ associated with the scale factor is defined as \begin{eqnarray} p_{a}&:=& \frac{\partial \mathcal{L}}{\partial \dot{a}} = \frac{-3}{4\, \pi \, G}\, a\, \dot{a}. \end{eqnarray} The standard Friedmann equations can be extracted from the equations of motion derived from the extended Hamiltonian, obtained by flipping the overall sign in Eq. (\ref{Hconst}), \begin{eqnarray} \mathcal{H}_E &=& \frac{2\pi \, G}{3}\, \frac{p_{a}^2}{a} + \frac{3}{8\, \pi \, G}\, \kappa a - a^3 \rho. \end{eqnarray} Based on the relationship between the commutation relation and the Poisson bracket, first proposed by Dirac \cite{Dirac}, one has for the two quantum counterparts $\hat{A}$ and $\hat{B}$ of two observables $A$ and $B$ \begin{eqnarray} [\hat{A},\hat{B}] &=&i\ \hbar \, \lbrace A,B \rbrace.
\end{eqnarray} In the standard case, the canonical variables, the scale factor $a$ and the momentum $p_a$, satisfy the Poisson bracket $\lbrace a, \,p_a \, \rbrace=\, 1$. Then, the equations of motion read \begin{eqnarray} \dot{a}&=&\,\lbrace a,\,\mathcal{H}_E \rbrace = \lbrace a,\, p_a \rbrace \frac{\partial\mathcal{H}_E}{\partial p_a} = \left( \frac{4\, \pi \, G}{3}\right) \frac{p_a}{a}, \label{adot}\\ \dot{p_a}&=&\,\lbrace p_a,\,\mathcal{H}_E \rbrace \,=-\, \lbrace a,\, p_a \rbrace \frac{\partial\mathcal{H}_E}{\partial a} = \left( \frac{2\, \pi \, G}{3}\right) \frac{p_{a}^2}{a^2} -\frac{3}{8\, \pi \, G}\, \kappa \, +\, 3\,a^2 \rho \,+\, a^3 \frac{\partial \rho}{\partial a}. \label{pdot} \end{eqnarray} From Eqs. (\ref{adot}) and (\ref{pdot}) and the Hamiltonian constraint, Eq. (\ref{Hconst}), the Friedmann equation follows as \begin{eqnarray} H^2 &=& \left( \frac{8\, \pi \, G}{3}\right) \rho \, -\, \frac{\kappa}{a^2}, \label{fried} \end{eqnarray} where $H=\dot{a}/a$ is the Hubble parameter. For a cosmic fluid, the energy density combines a contribution due to the inflaton, $\rho (\phi)$ of Eq. (\ref{rhophi}), and a part related to the inclusion of the cosmological constant, $\rho_v$. Now, we consider the higher-order GUP in a deformed Poisson algebra in order to study classical consequences, such as the Friedmann equations, Appendix \ref{GUPy}. We introduce the GUP to {\it first order} in $\alpha$ \cite{Tawfik:BH2013}. Accordingly, the Poisson bracket between the scale factor $a$ and the momentum $p_a$ reads \begin{eqnarray} \lbrace a\, ,\, p_a \rbrace = 1 - 2\, \alpha\, p_a. \end{eqnarray} We follow the same procedure as for Eq. (\ref{fried}), but now with the QG-modified bracket; using the extended Hamiltonian with the deformed Poisson bracket, the modified equations of motion read \begin{eqnarray} \dot{a}&=& \lbrace a,\, p_a \rbrace \frac{\partial\mathcal{H}_E}{\partial p_a}\,=\,(1-2\alpha p_a)\, \frac{4 \pi G}{3} \frac{p_a}{a}, \label{adota}\\ \dot{p_a}&=& -\,\lbrace a,\, p_a \rbrace \frac{\partial\mathcal{H}_E}{\partial a}=(1-2\alpha p_a)\,\left(\frac{2 \pi G}{3} \frac{p_a^2}{a^2} -\frac{3}{8 \pi G} \kappa + 3 a^2 \rho +a^3 \frac{d\rho}{da}\right). \label{pdotp} \end{eqnarray} Using Eqs. (\ref{adota}) and (\ref{pdotp}) with the scalar constraint, Eq. (\ref{Hconst}), we obtain the modified Friedmann equation \begin{eqnarray} H^2 &=& \left(\frac{8 \pi G}{3} \rho - \frac{\kappa}{a^2}\right) \left[1\,- \, \frac{3\,\alpha \, a^2 }{ \pi G}\left(\frac{8 \pi G}{3} \rho - \frac{\kappa}{a^2}\right)^{1/2}\right]. \label{HddDo} \end{eqnarray} Setting $\kappa=0$ (and recovering the standard case, Eq. (\ref{fried}), when $\alpha$ vanishes), the modified Friedmann equation reads \begin{eqnarray} H^2 &=& \frac{8\, \pi \,G}{3} \rho\, \left[1-\, 3\,\alpha\, a^2 \sqrt{\frac{8}{3\, \pi \, G}}\, \rho^{1/2}\right]. \label{FR3} \end{eqnarray} \subsection{Bounds on GUP parameter} The GUP parameter is given as $\alpha=\alpha _0/(M_{p} c)=\alpha _0 \ell _p/\hbar$, where $c$, $\hbar$ and $M_p$ are the speed of light, the Planck constant and the Planck mass, respectively. The Planck length is $\ell _p\, \approx\, 10^{-35}~$m and the Planck energy $M_p c^2 \,\approx \, 10^{19}~$GeV. The proportionality constant $\alpha _0$ is conjectured to be dimensionless \cite{advplb}. In natural units $c=\hbar=1$, $\alpha$ is given in GeV$^{-1}$, while in physical units $\alpha$ is given in GeV$^{-1}$ times $c$. The bounds on $\alpha_0$, which were summarized in Refs.
\cite{afa2,AFALI2011,DasV2008}, should be the subject of precise astronomical observations, for instance of gamma-ray bursts \cite{Tawfik:2012hz}. \begin{itemize} \item Alternative bounds were provided by the tunnelling current in a scanning tunnelling microscope and by the potential barrier problem \cite{AFALI2012}, where the energy of the electron beam is close to the Fermi level. The tunnelling current relative to its initial value is shifted due to the GUP effect \cite{AFALI2011,AFALI2012}, $\delta I/I_0\approx 2.7 \times 10^{-35}\,\alpha _{0} ^{2}$. Since, for the electric current density $J$ relative to the wave function $\Psi$, the accuracy of current precision measurements reaches the level of $10^{-5}$, the upper bound is $\alpha_0<10^{17}$. Apparently, $\alpha$ then tends to order $10^{-2}~$GeV$^{-1}$ in natural units, or $10^{-2}~$GeV$^{-1}$ times $c$ in physical units. This quantum-mechanically derived bound is consistent with the one at the electroweak scale \cite{AFALI2011,AFALI2012,DasV2008}. Therefore, this could signal an intermediate length scale between the electroweak and the Planck scales \cite{AFALI2011,AFALI2012,DasV2008}. \item On the other hand, for a particle of mass $m$ and electric charge $e$ in a constant magnetic field ${\vec B}=B {\hat z}\approx10~$Tesla, with vector potential ${\vec A}= B\,x \,{\hat y}$ and cyclotron frequency $\omega_c = eB/m$, the Landau energy is shifted due to the GUP effect \cite{AFALI2011,AFALI2012} by \begin{eqnarray} \frac{\Delta E_{n(GUP)}}{E_n} &=& -\sqrt{8\, m}\; \alpha\; (\hbar\, \omega _c )^{\frac{1}{2}} \, \left(n +\frac{1}{2}\right)^{\frac{1}{2}} \approx - 10^{-27}\; \alpha_0. \end{eqnarray} Thus, we conclude that if $\alpha_0\sim 1$, then $\Delta E_{n(GUP)}/E_n$ is too tiny to be measured. But with the current measurement accuracy of $1$ in $10^3$, the upper bound $\alpha_0<10^{24}$ leads to $\alpha=10^{-5}$ in natural units, or $\alpha=10^{-5}$ times $c$ in physical units. \item Similarly, for the Hydrogen atom with Hamiltonian $H=H_0+H_1$, where the standard Hamiltonian is $H_0=p_0^2/(2m) - k/r$ and the first-order perturbation is $H_1 = -\alpha\, p_0^3/m$, it can be shown that the GUP effect on the Lamb shift \cite{AFALI2011,AFALI2012} reads \begin{eqnarray} \frac{\Delta E_{n(GUP)}}{\Delta E_n} &\approx & 10^{-24}~\alpha_0. \end{eqnarray} Again, if $\alpha_0 \sim 1$, then $\Delta E_{n(GUP)}/\Delta E_n$ is too small to be measured, while the current measurement accuracy is of the order of $1$ in $10^{12}$. Thus, we assume that $\alpha_0>10^{-10}$. \end{itemize} In light of this discussion, should we assume that the dimensionless $\alpha_0$ is of order unity in natural units, then $\alpha$ equals the Planck length, $\approx\, 10^{-35}~$m. Current experiments seem unable to register discreteness smaller than about $10^{-3}~$fm, $\approx\, 10^{-18}~$m \cite{AFALI2011,AFALI2012}. We conclude that the assumption $\alpha_0\sim 1$ seems to contradict various observations \cite{Tawfik:2012hz} and experiments \cite{AFALI2011,AFALI2012}. Therefore, such an assumption should be relaxed to meet the accuracy of the given experiments. Accordingly, the bounds on $\alpha$ range from $10^{-10}$ to $10^{-2}~$GeV$^{-1}$, meaning that $\alpha_0$ ranges from $10^9\, c$ to $10^{17}\, c$.
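As a quick consistency check of these numbers (a back-of-the-envelope sketch of our own; the Planck mass value is the standard one), the conversion $\alpha=\alpha_0/(M_p c)$ can be evaluated directly:
\begin{verbatim}
# Rough check of the quoted GUP-parameter bounds (natural units, c=hbar=1).
M_p = 1.22e19                    # Planck mass in GeV (standard value)
for alpha0 in (1.0, 1.0e17):
    alpha = alpha0 / M_p         # alpha = alpha_0/(M_p c), in GeV^-1
    print(f"alpha_0 = {alpha0:.0e}  ->  alpha ~ {alpha:.0e} GeV^-1")
\end{verbatim}
which reproduces the values $\alpha\sim10^{-19}~$GeV$^{-1}$ and $\alpha\sim10^{-2}~$GeV$^{-1}$ used in the figures below.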
\subsection{Standard model solution of Universe expansion} In a toy model \cite{Tawfik:2011gh,Tawfik:2010ht}, the perfect cosmic fluid contributing to the stress tensor $T_{\mu \nu}$ can be characterized by the symmetries of the metric, namely the homogeneity and isotropy of the cosmic Universe. Thus, the total stress-energy tensor $T_{\mu \nu}$ must be diagonal, with equal spatial components: \begin{eqnarray} T_{\mu \nu}=\textit{diag}(\rho,-p,-p,-p). \end{eqnarray} Assuming that all types of energy in the early Universe amount to heat $Q$ captured in a closed sphere with radius equal to the scale factor $a$, i.e. of volume $V=4\pi\,a^3/3$, the energy density during the expansion is $\rho=U/V$, where $U$ is the internal energy \cite{Tawfik:2011gh,Tawfik:2010ht}. The first law of thermodynamics then expresses total energy conservation, \begin{eqnarray} dQ &=& d U + p\, dV = 0. \label{Henergy} \end{eqnarray} By substituting the total differential of the energy density, $d\,\rho =dU/V-U\,dV/V^2$, into Eq. (\ref{Henergy}), we get \begin{eqnarray} d\,\rho &=& -3 \frac{da}{a}(\rho +p). \end{eqnarray} Dividing both sides by $dt$ results in \begin{eqnarray} \dot{\rho} &=& - 3 H(\rho +p). \end{eqnarray} For a very simple equation of state, $\omega=p/\rho$, with $\omega$ independent of time, the energy density reads \begin{eqnarray} \rho \sim a^{-3(1+\omega)}. \end{eqnarray} The radiation-dominated phase is characterized by $\omega=1/3$, or $p=\rho/3$. Therefore, $\rho \sim a^{-4}$, the scale factor $a \sim \textit{const.}\, t^{1/2}$, and the Hubble parameter $H=1/(2t)$. In the matter-dominated phase, $\omega=0$, i.e. $p \ll \rho$. Therefore, $\rho \sim a^{-3}$, $a \sim \textit{const.}\, t^{2/3}$ and $H=2/(3t)$. The left-hand panel (a) of Fig. \ref{Hubble&scale} shows the Hubble parameter, $H$, in dependence on the scale factor, $a$. The standard characterization of the cosmic fluid (without GUP), Eq. (\ref{fried}), is compared with the modified one (with GUP), Eq. (\ref{FR3}), in the flat Universe. It is obvious that $H$ diverges in both cases (with/without GUP) at vanishing $a$. This means that a singularity exists at the beginning. The GUP has the effect of slightly slowing down the expansion rate of the Universe. This is valid for both cases of cosmic background, radiation and matter. In the right-hand panel (b) of Fig. \ref{Hubble&scale}, the dependence of the scale factor, $a$, on the cosmic time, $t$, is given for both types of cosmic matter, radiation and matter, with and without GUP. Apparently, the GUP is not sensitive to the matter-dominated phase but has a clear effect on the radiation-dominated phase: the GUP slows down the expansion. \begin{figure}[htb] \includegraphics[width=5cm,angle=-90]{H_vs_a.eps} \includegraphics[width=5cm,angle=-90]{a_vs_t.eps} \caption{(Color online) The left-hand panel (a) presents the variation of the Hubble parameter $H$ with the scale factor $a$. Matter- and radiation-dominated phases with and without GUP are compared with each other. The right-hand panel (b) presents the scale factor $a$ as a function of the cosmic time $t$. The various parameters are fixed: $G=1$ and $\alpha =10^{-2}~$GeV$^{-1}$. \label{Hubble&scale} } \end{figure} \section{Cosmic inflation} \label{cos_inflat} Here, we estimate various inflation parameters, which are characterized by the scalar field $\phi$ and contribute to the total energy density \cite{Liddle:2003, Linde:2002}.
Taking into account the background (matter and radiation) energy density, the scalar field is assumed to interact with gravity and with itself \cite{Linde:1982,Liddle:1993,Liddle:1995,Liddle:2003,Linde:2002}. In order to reproduce the basics of the field theory, the coupling of $\phi$ to gravitation results in the total inflation energy \begin{eqnarray} \frac{1}{2} \left( \dot{\phi}^2 + (\nabla \phi)^2\, \right) + V(\phi). \label{infl_ener} \end{eqnarray} The dynamics of the inflation can be described by two types of equations: \begin{itemize} \item the Friedmann equation, which describes the contraction and expansion of the Universe, and \item the Klein-Gordon equation, which is the simplest equation of motion for a spatially homogeneous scalar field, \begin{eqnarray} \ddot{\phi} + 3 \,H\, \dot{\phi} + \partial _\phi V(\phi) = 0, \label{KG} \end{eqnarray} where $\partial_\phi \equiv \partial/\partial \phi$. \end{itemize} In a flat Universe, $\kappa=0$, the total inflation energy, Eq. (\ref{infl_ener}), and the energy density due to the cosmological constant, $\rho_v=\Lambda/(8\, \pi \, G)$, can be substituted into the modified Friedmann equation, Eq. (\ref{FR3}), \begin{eqnarray} H^2 &=& \frac{8 \pi G}{3} \left[ \frac{\dot{\phi}^2+(\nabla \phi)^2}{2} + V(\phi) +\rho_v \right] \left[1- 3 \alpha a^2 \sqrt{\frac{8}{3 \pi G}}\left(\frac{\dot{\phi}^2 +(\nabla \phi)^2}{2} + V(\phi) + \rho_v \right)^{1/2}\right]. \hspace*{10mm} \label{MFR5} \end{eqnarray} In a rapidly expanding Universe, if the inflaton field starts out sufficiently homogeneous, it rolls towards its minimum very slowly \cite{Liddle:2003, Linde:2002}; this can be modelled by a sphere in a viscous medium. Both the energy densities of matter, $\rho _m$, and of radiation, $\rho_r$, are neglected, and \begin{eqnarray} (\nabla \phi)^2 &\ll & V(\phi), \label{eq:inql1} \\ \ddot{\phi} & \ll & 3\, H\, \dot{\phi}, \label{eq:inql2} \\ \dot{\phi}^2 & \ll & V(\phi). \label{eq:inql3} \end{eqnarray} The first inequality, Eq. (\ref{eq:inql1}), is obtained under the assumption of homogeneity and isotropy of the FLRW Universe \cite{Liddle:2003, Linde:2002}, while the second inequality, Eq. (\ref{eq:inql2}), states that the scalar field changes so slowly that its acceleration can be neglected \cite{Liddle:2003, Linde:2002}. The third inequality, Eq. (\ref{eq:inql3}), gives the principal condition for the expansion: the kinetic energy is much smaller than the potential energy \cite{Liddle:2003, Linde:2002}. Consequently, the Universe expansion accelerates \cite{Linde:1982}. Therefore, the modified Friedmann equation, Eq. (\ref{MFR5}), and the Klein-Gordon equation, Eq. (\ref{KG}), respectively read \begin{eqnarray} H^2 &=& \frac{8 \pi \,G}{3} \left( \, V(\phi) \,+\, \rho_v \right) \left[1-\, 3\,\alpha \, a^2 \sqrt{\frac{8}{3 \pi \, G}}\,\left( \, V(\phi) \,+\, \rho_v \right) ^{1/2} \right], \label{MFE}\\ \dot{\phi} &=& - \frac{1}{3\, H}\; \partial_\phi V(\phi). \end{eqnarray} The cosmological constant characterizes the minimum mass, which is related to the Planck mass $M_P =\sqrt{\hbar c/ G}$. The Planck length $\ell_p = \sqrt{\hbar G/c^{3}}$ \cite{T. Harko:2005} is also related to the mass quanta, where the quantized mass \cite{Wesson:2004} is proportional to the GUP parameter $\alpha=\alpha _0/(M_p c)$. The cosmological constant $\Lambda$ is one of the foundations of gravity \cite{Wesson:2004}.
It relates the Planck (quantum-scale) and the Einstein (cosmological-scale) masses, $M_P$ and $M_E$, respectively, to each other \cite{Wesson:2004}, \begin{eqnarray} M_p &=& \left( \frac{h}{c}\right) \left(\frac{\Lambda}{3}\right)^{1/2},\\ M_E &=& \left( \frac{c^2}{G}\right) \left(\frac{3}{\Lambda}\right)^{1/2}. \end{eqnarray} Using natural units, $\hbar=c=1$, the modified Friedmann equation, Eq. (\ref{MFE}), becomes \begin{eqnarray} H^2\,=\,\frac{4\pi}{3\, \, M_{p}^2}\left\lbrace \left[ V(\phi)\,+\, \frac{3\, M_{p}^4}{4\,\pi}\right] \,-\,3\alpha \, a^2\, \sqrt{\frac{16\,M_{p}^2}{3\,\pi}} \left[ V(\phi)\,+\, \frac{3\, M_{p}^4}{4\,\pi} \right]^{3/2}\right\rbrace. \label{modified HH} \end{eqnarray} There are various inflation models, such as the chaotic inflation models, which suggest different inflation potentials \cite{Linde:1982,Liddle:1993,Liddle:1995}. It is now believed that these are better motivated than other models \cite{Linde:1982,Liddle:1993,Liddle:1995}. In this context, there are two main types of models: one with a single inflation field, and one combining two inflation fields. Here, we summarize some models requiring a single inflation field $\phi$ which in some regions satisfies the slow-roll conditions, \begin{eqnarray} \text{Polynomial chaotic inflation} & & V(\phi)=\frac{1}{2}\,m^2 \phi ^2, \qquad V(\phi)=\lambda \phi ^4, \\ \text{Power-law inflation} & & V(\phi)=V_0 \exp \left[\sqrt{\frac{16\, \pi\, G}{p}} \phi\right], \\ \text{Natural inflation} & & V(\phi)=V_0 \left(1+\cos \frac{\phi}{f}\right), \qquad V(\phi)\propto \phi^{-\beta}. \end{eqnarray} Based on this concept, we select three different inflation potential models, Eqs. (\ref{eq:mssm}), (\ref{sdual}) and (\ref{poweri}). The first one is based on certain minimal supersymmetric extensions of the standard model of elementary particles~\cite{allahverdi-2006}, and the related effects have been studied recently~\cite{allahverdi-2006,sanchez-2007}. It has two free parameters, $m$ and $\lambda$, \begin{eqnarray} V(\phi) = \left(\frac{m^2}{2}\right)\,\phi^2 - \left(\frac{\sqrt{2\,\lambda\,(n-1)}\,m}{n}\right)\, \phi^n + \left(\frac{\lambda}{4}\right)\,\phi^{2(n-1)}, \end{eqnarray} where $n>2$ is an integer. At $n=3$, \begin{eqnarray} V_1(\phi) = \left(\frac{m^2}{2}\right)\,\phi^2 - \left(\frac{2\sqrt{\lambda}\,m}{3}\right)\, \phi^3 + \left(\frac{\lambda}{4}\right)\,\phi^{4}. \label{eq:mssm} \end{eqnarray} The second one is an $\mathcal{S}$-dual inflationary potential \cite{sdual} with a free parameter $f$. The $\mathcal{S}$-duality has its origin in the Dirac quantization condition for electric and magnetic charges \cite{Montonen:1977}, and suggests an equivalence in the description of quantum electrodynamics \cite{Montonen:1977}, \begin{eqnarray} V_2(\phi) &=& V_0\, \text{sech} \left(\frac{\phi}{f}\right). \label{sdual} \end{eqnarray} The third one is a power-law inflation potential with a free parameter $d$ \cite{Starobinsky,Liddle:1993}, \begin{eqnarray} V_3(\phi)=\frac{3 M_{p}^2 d^2}{32 \pi} \left[1-\exp \left(-\left(\frac{16\pi}{3 M_{p}^2 }\right)^{1/2} \phi \right) \right]^2. \label{poweri} \end{eqnarray} For these inflation potentials, Eqs. (\ref{eq:mssm}), (\ref{sdual}) and (\ref{poweri}), the inflation parameters, such as the potential slow-roll parameters $\epsilon$ and $\eta$, the tensorial, $p_t$, and scalar, $p_s$, density fluctuations, the tensor-to-scalar fluctuation ratio $r$, the scalar spectral index $n_s$, and the number of e-folds of the inflation era, $\mathcal{N}_e$, can be estimated.
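The shapes of the three potentials can be sketched numerically (our own illustration; the couplings $m$, $\lambda$, $V_0$, $f$ and $d$ below are placeholder values chosen only to exhibit the qualitative behaviour shown in Fig. \ref{Potentials}, not the fitted ones):
\begin{verbatim}
import numpy as np

# Illustrative evaluation of V_1 (MSSM-type), V_2 (S-dual) and
# V_3 (power-law) in Planck units.  All couplings are placeholders.
M_p = 1.0
m, lam = 1.0, 1.0        # placeholder V_1 parameters
V0, f  = 1.0, 0.5        # placeholder V_2 parameters
d      = 1.0             # placeholder V_3 parameter

V1 = lambda x: 0.5*m**2*x**2 - (2*np.sqrt(lam)*m/3)*x**3 + 0.25*lam*x**4
V2 = lambda x: V0/np.cosh(x/f)
V3 = lambda x: (3*M_p**2*d**2/(32*np.pi)) * \
               (1 - np.exp(-np.sqrt(16*np.pi/(3*M_p**2))*x))**2

for phi in (0.0, 0.25, 0.5, 1.0):
    print(f"phi={phi:4.2f}: V1={V1(phi):.3f}  V2={V2(phi):.3f}  V3={V3(phi):.3f}")
\end{verbatim}
Consistent with Fig. \ref{Potentials}, $V_1$ and $V_3$ vanish at $\phi=0$ and grow with $\phi$, while $V_2$ is finite at the origin and decreases.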
\begin{figure}[htb] \includegraphics[width=6.5cm,angle=-90]{pots.eps} \caption{(Color online) The variation of the inflation potentials $V/V_0$ in dependence on the scalar field $\phi/M_p$, for representative values of the free constants. The solid, long-dashed and dotted lines stand for $V_1(\phi)$, $V_2(\phi)$ and $V_3(\phi)$, respectively. \label{Potentials} } \end{figure} Fig. \ref{Potentials} shows the variation of the different inflation potentials, Eqs. (\ref{eq:mssm}), (\ref{sdual}) and (\ref{poweri}), normalized to the initial potential $V_0$, with the single inflation field $\phi$, normalized to $M_p$ according to the Lyth bound during the inflation era \cite{Lyth:96,Lyth:98,Green}. The inflation field, $\phi \equiv \Delta \phi =(\phi _0 -\phi _{end})$, should be smaller than or comparable with the Planck scale $M_p$. This was confirmed by the BICEP2 observation, conditionally on this bound of small scalar field \cite{Lyth:96,Lyth:98,Green}. The potentials of Eqs. (\ref{eq:mssm}) and (\ref{poweri}) increase with $\phi/M_p$, while the potential of Eq. (\ref{sdual}) decreases. This means that the latter is finite at vanishing inflation field, $\phi$, while the former vanish. \section{Fluctuations and slow-roll parameters in the inflation era} \label{Fluctuation} In the very early Universe, the scalar field $\phi$ is assumed to drive the inflation \cite{Linde:1982,Liddle:1993,Liddle:1995}. The main potential slow-roll parameters are given as \begin{eqnarray} \epsilon &\equiv & \frac{M_{p}^2}{16 \, \pi} \left(\frac{\partial _{\phi} V(\phi)}{ V(\phi)}\right)^2, \label{paramters1} \\ \eta & \equiv & \frac{M_{p}^2}{8 \pi} \left(\frac{\partial_{\phi}^2 V(\phi)}{ V(\phi)}\right). \label{paramters2} \end{eqnarray} Fig. \ref{slowroll} shows the variation of the potential slow-roll parameters as functions of the scalar field. The various inflation potentials, Eqs. (\ref{eq:mssm}), (\ref{sdual}) and (\ref{poweri}), are used to deduce the slow-roll parameters, Eqs. (\ref{paramters1}) and (\ref{paramters2}). The potentials in the left- (a) and right-hand (c) panels result in slow-roll parameters which start from large values at small field and then rapidly decline (vanish) as the scalar field increases. The potential in the middle panel (b) gives slow-roll parameters with relatively very small values, which however seem to remain stable with the field. \begin{figure}[htb] \includegraphics[width=3.5cm,angle=-90]{epsonetaV1.eps} \includegraphics[width=3.5cm,angle=-90]{epsonetaV2.eps} \includegraphics[width=3.5cm,angle=-90]{epsonetaV3.eps} \caption{(Color online) The left- (a), middle (b) and right-hand (c) panels present the slow-roll parameters associated with $V_1(\phi)$ from Eq. (\ref{eq:mssm}), $V_2(\phi)$ from Eq. (\ref{sdual}) and $V_3(\phi)$ from Eq. (\ref{poweri}), respectively. The solid and dot-dashed curves represent the $\epsilon$ and $\eta$ parameters, respectively.
\label{slowroll} } \end{figure} The tensorial and scalar density fluctuations are given as \cite{Linde:1982,Liddle:1993,Liddle:1995} \begin{eqnarray} p_t &=& \left(\frac{H}{2\pi}\right)^{2} \left[1-\frac{H}{\Lambda}\,\sin\left(\frac{2\Lambda}{H}\right)\right]=\left(\frac{H}{2\, \pi}\right)^{2} \left[1-\frac{H}{3\, M_p^2}\,\sin\left(\frac{6\, M_p^2}{H}\right)\right],\label{pt} \\ p_s &=& \left(\frac{H}{\dot{\phi}}\right)^2\left(\frac{H}{2\pi}\right)^{2} \left[1-\frac{H}{\Lambda}\,\sin\left(\frac{2\Lambda}{H}\right)\right]=\left(\frac{H}{\dot{\phi}}\right)^2\left(\frac{H}{2\, \pi}\right)^{2} \left[1-\frac{H}{3\, M_p^2}\,\sin\left(\frac{6\, M_p^2}{H}\right)\right]. \hspace*{10mm} \label{ps} \end{eqnarray} Fig. \ref{pt_ps} shows the dependence of the tensorial (top panels) and scalar (bottom panels) density fluctuations, Eqs. (\ref{pt}) and (\ref{ps}), on the scalar field of inflation, $\phi$. We show the fluctuations corresponding to each inflation potential and find that both the tensorial and the scalar density fluctuations decrease as the scalar field of inflation $\phi$ increases. The tensorial density fluctuations (top panels) corresponding to the inflation potentials $V_1(\phi)$ from Eq. (\ref{eq:mssm}), $V_2(\phi)$ from Eq. (\ref{sdual}) and $V_3(\phi)$ from Eq. (\ref{poweri}) look similar: there is a rapid rise at small and a decrease at large $\phi/M_p$. For the inflation potential given in Eq. (\ref{poweri}), the value of $\phi/M_p$ at which the peak takes place is smaller than that for Eq. (\ref{eq:mssm}). No systematic comparison can be made for the scalar density fluctuations (bottom panels) of the different inflation potentials. The left-hand and middle panels show that, for the potentials of Eqs. (\ref{eq:mssm}) and (\ref{sdual}), the fluctuations decrease very rapidly with $\phi/M_p$; increasing $\phi/M_p$ further does not change them. The right-hand panel, Eq. (\ref{poweri}), presents another type of scalar density fluctuations, which remain almost unchanged over a wide range of $\phi/M_p$ and are then almost damped at large $\phi/M_p$. The results corresponding to $\alpha=10^{-2}~$GeV$^{-1}$ are depicted. Exactly the same curves are obtained at $\alpha=10^{-19}~$GeV$^{-1}$ (not shown here). The former value is related to $\alpha_0=10^{17}\, c$, the latter to $\alpha_0=1\, c$. In light of this, the bounds on $\alpha_0$ seem not to affect the evolution of either the tensorial or the scalar density fluctuations with the scalar field of inflation, $\phi$. \begin{figure}[htb] \includegraphics[width=3.5cm,angle=-90]{pt1.eps} \includegraphics[width=3.5cm,angle=-90]{pt2.eps} \includegraphics[width=3.5cm,angle=-90]{pt3.eps}\\ \includegraphics[width=3.5cm,angle=-90]{ps1.eps} \includegraphics[width=3.5cm,angle=-90]{ps2.eps} \includegraphics[width=3.5cm,angle=-90]{ps3.eps} \caption{(Color online) The top panels show the tensorial density fluctuations, $p_t$, in dependence on the scalar field $\phi/M_p$. The bottom panels give the scalar density fluctuations, $p_s$, in dependence on the scalar field $\phi/M_p$. Each column is associated with one inflation potential model. Only the results corresponding to $\alpha=10^{-2}~$GeV$^{-1}$ are depicted; exactly the same curves are obtained at $\alpha=10^{-19}~$GeV$^{-1}$ (not shown here).
\label{pt_ps} } \end{figure} Therefore, we can now study the tensor-to-scalar fluctuation ratio, $r$, which reads \cite{Linde:1982,Liddle:1993,Liddle:1995} \begin{eqnarray} r &=& \frac{p_t}{ p_s }\,=\,\left(\frac{\dot{\phi}}{H}\right)^2, \label{eq:r} \end{eqnarray} relating the potential evolution to the Hubble parameter $H$. Corresponding to the tensor-to-scalar fluctuations, a spectral index $n_s$ can be defined, \begin{eqnarray} n_s &=& 1 - \sqrt{\frac{r}{3}}. \label{eq:ns} \end{eqnarray} The number of $e$-folds is given by the number of Hubble times, $\mathcal{N}_e \approx 60$ \cite{Liddle:1993}, or by the integral of the expansion rate, \begin{eqnarray} \mathcal{N}_e &=& \int _{t_{i}}^{t_{f}} H(t)\, dt = -3 \int _{\phi}^{\phi_{f}} \frac{H^2}{\partial _{\phi}\, V(\phi)} d\phi , \end{eqnarray} where \begin{eqnarray} H(t)\, dt &=& \frac{H}{\dot{\phi}} d \phi = - 3\, \frac{H^2}{\partial_{\phi}\, V(\phi)} d \phi. \end{eqnarray} In Fig. \ref{ratio/spectrial}, the left-hand panel shows the ratio of tensorial to scalar density fluctuations, $r$, in dependence on $\phi/M_P$. The dashed curves are evaluated at $\alpha=10^{-2}~$GeV$^{-1}$, the solid thick curves at $\alpha=10^{-19}~$GeV$^{-1}$. The former value corresponds to $\alpha_0=10^{17}$, the latter to $\alpha_0=1$. It is obvious that the bounds on $\alpha_0$ do not affect the dependence of the tensorial-to-scalar density fluctuation ratio $r$ on $\phi/M_P$. The behavior of the tensor-to-scalar ratio is controlled by the modified Friedmann equation (in the presence of GUP), where the GUP physics encodes the gravitational effect on such models at the Planck scale. The GUP parameter $\alpha$, appearing in the modified Friedmann equation, should play an important role in bringing the value of $r$ very near to both PLANCK and BICEP2, $r=0.2^{+0.07}_{-0.05}$. According to Eq. (\ref{modified HH}), $\alpha$ slows down the expansion rate, $H$; compare with Fig. \ref{Hubble&scale}. It is obvious that the parameters related to the Gaussian sections of the three curves match nearly perfectly the results estimated by the PLANCK and BICEP2 collaborations (compare with Fig. \ref{ratio/spectrial2}). The right-hand panel of Fig. \ref{ratio/spectrial} shows the variation of the spectral index, $n_s$, with the scalar field for the three inflation potentials, Eqs. (\ref{eq:mssm}), (\ref{sdual}) and (\ref{poweri}). Again, the dashed curves are evaluated at $\alpha=10^{-2}~$GeV$^{-1}$, the solid thick curves at $\alpha=10^{-19}~$GeV$^{-1}$. It is obvious that the bounds on $\alpha_0$ do not affect the dependence of the spectral index, $n_s$, on $\phi/M_P$. Figure \ref{ratio/spectrial2} summarizes the observations of the PLANCK \cite{Planck,Planck2} and BICEP2 \cite{BICEP2} collaborations together with the parametric dependence of the spectral index $n_s$ and the ratio $r$; both quantities are functions of $\phi$, Eqs. (\ref{eq:r}) and (\ref{eq:ns}). {\bf We find that the region of the PLANCK $1\, \sigma$ \cite{Planck,Planck2} and BICEP2 $1\, \sigma$ \cite{BICEP2} observations for $r$ and $n_s$ is crossed by our parametric calculations of $r$ vs. $n_s$ for the two potentials of Eqs. (\ref{eq:mssm}) and (\ref{sdual})}. For the inflation potential of Eq. (\ref{poweri}), the parametric results for $r$ vs. $n_s$ are very small. This can be attributed to the deep minimum in the right-hand panel of Fig.
\ref{ratio/spectrial}, which means that the main part of $n_s$ calculated for this potential is entirely excluded (out of the allowed range); the surviving part is obviously very small. The authors of Ref. \cite{Andrei Linde:2014} predict the variation of the fluctuation tensor with the spectral index at $55$ {\it e}-foldings, as a function of $a$, for the chaotic inflation potential \begin{eqnarray} V(\phi)=\frac{m^2 \phi ^2}{2} (1- a\, \phi +a^2 b \phi ^2)^2. \end{eqnarray} Our results fit well with the curves of Ref. \cite{Andrei Linde:2014} (open symbols in Fig. \ref{ratio/spectrial2}), which are in excellent agreement with the PLANCK and BICEP2 observations. It is worthwhile to highlight that they are deduced using methods other than ours. The main difference is the variation of the chaotic-potential parameters at a constant inflation field: Ref. \cite{Andrei Linde:2014} gives $n_s (a)$ and $r(a)$, while we vary the various potentials with the scalar field at constant potential parameters and estimate $n_s (\phi)$ and $r(\phi)$. The parametric dependence of $n_s (a)$ and $r(a)$ is shown in Fig. \ref{ratio/spectrial2}. {\bf It is apparent that the graphical comparison in Fig. \ref{ratio/spectrial2} presents an excellent agreement between the observations of PLANCK \cite{Planck,Planck2} and BICEP2 \cite{BICEP2} and the parametric calculations, especially for the inflation potentials of Eqs. (\ref{eq:mssm}) and (\ref{sdual}). The agreement is of course limited to the values covered by the parametric calculations, while the observations span a much wider range. } \begin{figure}[htb] \includegraphics[width=5.5cm,angle=-90]{Ratio2.eps} \includegraphics[width=5.5cm,angle=-90]{Spectral2.eps} \caption{(Color online) The left-hand panel shows the tensorial-to-scalar density fluctuation ratio, $r$, in dependence on $\phi/M_p$, calculated for the inflation potentials $V_1(\phi)$, $V_2(\phi)$ and $V_3(\phi)$. The right-hand panel gives the spectral index $n_s$ vs. $\phi/M_p$. The dashed curves are evaluated at $\alpha=10^{-2}~$GeV$^{-1}$, the solid curves at $\alpha=10^{-19}~$GeV$^{-1}$. \label{ratio/spectrial} } \end{figure} \begin{figure}[htb] \includegraphics[width=12cm,angle=-90]{ns_va_r_2.eps} \caption{(Color online) Contours showing the PLANCK and BICEP2 results at $1 \sigma$ and $2 \sigma$ confidence, compared with the parametric calculations of $r$ as a function of the scalar spectral index $n_s$. The parametric calculations for the chaotic inflation potential given in Ref. \cite{Andrei Linde:2014}, with open squares $(b = 0.34)$ and circles $(b =5)$ corresponding to $0.001<\,a\,<0.13$ and inflation field $\phi \sim 8.2$, are also compared with. \label{ratio/spectrial2} } \end{figure} Tab. \ref{tab:1} summarizes the results for $r$ and $n_s$ at various scalar fields $\phi/M_p$ for the three inflation potentials, $V_1(\phi)$ from Eq. (\ref{eq:mssm}), $V_2(\phi)$ from Eq. (\ref{sdual}) and $V_3(\phi)$ from Eq. (\ref{poweri}). The BICEP2-relevant results are those with $r$ ranging from $0.15$ to $0.27$ and simultaneously $n_s$ between $0.94$ and $0.98$. It is apparent that the results from $V_3(\phi)$ do not fall in this $r$--$n_s$ window, while those from $V_1(\phi)$ and $V_2(\phi)$ obviously do. Whereas $V_1(\phi)$ allows a wide range of $\phi$, $V_2(\phi)$ is only relevant in a narrower one. Fig. \ref{ratio/spectrial2} represents this comparison graphically.
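For transparency, the parametric pipeline used above (modified Friedmann equation, slow-roll Klein-Gordon equation, then Eqs. (\ref{eq:r}) and (\ref{eq:ns})) can be written out schematically. The sketch below shows only the structure of the computation: the normalization $V_0$, the scale $f$, the scale factor and $\alpha$ are placeholder inputs, so without the CMB normalization used in the text the printed numbers will not reproduce those of Tab. \ref{tab:1}:
\begin{verbatim}
import numpy as np

# Structural sketch of the r(phi), n_s(phi) computation for the S-dual
# potential V_2 = V0*sech(phi/f).  All inputs are placeholders.
M_p, alpha, a = 1.0, 1.0e-19, 1.0
V0, f = 1.0e-10, 0.1
rho_v = 3.0*M_p**4/(4.0*np.pi)           # vacuum term of Eq. (modified HH)

V  = lambda x:  V0/np.cosh(x/f)
dV = lambda x: -V0*np.tanh(x/f)/(f*np.cosh(x/f))

for phi in np.linspace(0.07, 0.17, 6):
    U  = V(phi) + rho_v
    H2 = (4*np.pi/(3*M_p**2))*(U - 3*alpha*a**2
           * np.sqrt(16*M_p**2/(3*np.pi)) * U**1.5)
    H      = np.sqrt(H2)
    phidot = -dV(phi)/(3.0*H)            # slow-roll Klein-Gordon equation
    r      = (phidot/H)**2               # Eq. (r)
    n_s    = 1.0 - np.sqrt(r/3.0)        # Eq. (n_s)
    print(f"phi = {phi:.2f} M_p :  r = {r:.2e},  n_s = {n_s:.4f}")
\end{verbatim}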
\begin{table} \begin{tabular}{||c|c|c||c|c|c||c|c|c||} \hline \hline $\frac{\Delta \phi}{M_p}$ &$ r \,\,\textit{for}\,\, V_{1}(\phi)$ & $n_s \,\,\textit{for}\,\, V_{1}(\phi)$ & $\frac{\Delta \phi}{M_p}$ & $r \,\,\textit{for}\,\, V_{2}(\phi)$ & $n_s \,\,\textit{for}\,\, V_{2}(\phi)$ & $\frac{\Delta \phi}{M_p}$ & $r \,\,\textit{for}\,\, V_{3}(\phi)$ & $n_s \,\,\textit{for}\,\, V_{3}(\phi)$ \\ \hline \hline 0.21 & 0.263 & 0.934 & 0.07 & 0.157 & 0.961 & 0.02 & 0.118 & 0.801 \\ \hline 0.22 & 0.254 & 0.936 & 0.08 & 0.182 & 0.954 & 0.03 & 0.192 & 0.746 \\ \hline 0.23 & 0.245 & 0.939 & 0.09 & 0.204 & 0.950 & 0.04 & 0.240 & 0.717 \\ \hline 0.24 & 0.236 & 0.941 & 0.10 & 0.219 & 0.945 & 0.05 & 0.258 & 0.707 \\ \hline 0.25 & 0.227 & 0.943 & 0.11 & 0.230 & 0.942 & 0.06 & 0.255 & 0.708 \\ \hline 0.26 & 0.218 & 0.945 & 0.15 & 0.227 & 0.943 & 0.07 & 0.239 & 0.717 \\ \hline 0.27 & 0.210 & 0.947 & 0.16 & 0.218 & 0.945 & 0.08 & 0.218 & 0.730 \\ \hline 0.29 & 0.193 & 0.951 & 0.17 & 0.207 & 0.948 & 0.09 & 0.194 & 0.745 \\ \hline 0.3 & 0.185 & 0.953 & -- & -- & -- & 0.1 & 0.172 & 0.761 \\ \hline 0.32 & 0.171 & 0.957 & -- & -- & -- & 0.11 & 0.151 & 0.775 \\ \hline 0.33 & 0.164 & 0.959 & -- & -- & -- & -- & -- & -- \\ \hline 0.35 & 0.151 & 0.962 & -- & -- & -- & -- & -- & -- \\ \hline 0.36 & 0.145 & 0.963 & -- & -- & -- & -- & -- & -- \\ \hline\hline \end{tabular} \caption{The tensor-to-scalar fluctuation ratio $r$ and the spectral index $n_s$ associated with the scalar field, for the different inflation potentials $V_1(\phi)$ from Eq. (\ref{eq:mssm}), $V_2(\phi)$ from Eq. (\ref{sdual}) and $V_3(\phi)$ from Eq. (\ref{poweri}). \label{tab:1}} \end{table} \section{Discussion and conclusions} \label{summary} The BICEP2 results announced on March 17, 2014 gave physicists around the globe another view of the evidence about the Universe and its expansion, especially concerning the inflation era. Cosmic inflation is based on the assumption that an extreme inflationary phase took place after the Big Bang (at about the Planck time); thus, the Universe should have expanded at a superluminal speed. On the other hand, the inflation would result from a hypothetical field acting as a cosmological constant, producing an accelerated expansion of the Universe. Argumentation about the applicability of the GUP in the inflation era is elaborated in App. \ref{app:appl}. Due to the very high energy (quantum or Planck scale), the Heisenberg uncertainty principle should be modified in terms of the momentum uncertainty. The QG approach in the form of the GUP appears in the modified Friedmann equation through the term proportional to $\alpha$. This term reduces the Hubble parameter, which appears in the denominator of the tensorial-to-scalar density fluctuation ratio. Thus, the fluctuation ratio increases with decreasing $H$. The fluctuation ratio $r$ has been evaluated as a function of the spectral index $n_s$. We found that the calculations match well the PLANCK and BICEP2 observations. This is the main conclusion of the present work. We believe that the results point to the importance of quantum corrections during the inflation era. The estimation of $n_s (a)$ and $r(a)$ at $55$ {\it e}-folds for a chaotic potential, for different values of $b$ and with the inflation varying as a function of $a$, was given in Ref. \cite{Andrei Linde:2014}. The parameters $b = 0.34$ (open squares) and $b =5$ (open circles) correspond to $0.001<a<0.13$ and inflation field $\phi \sim 8.2$ \cite{Andrei Linde:2014}.
The authors predict the variation of the fluctuation tensor with the spectral index. The best curves of Ref. \cite{Andrei Linde:2014} agree well with PLANCK and BICEP2. These are deduced using another method, varying $a$ and selecting the suitable scalar field. The main difference from our method is the variation of the chaotic-potential parameters at constant inflation field; we instead vary the inflation potential with the scalar field at constant potential parameters. We have reviewed different inflationary potentials and estimated the modifications of the Friedmann equation due to the GUP approach. We found that \begin{itemize} \item the first potential, Eq. (\ref{eq:mssm}), gives a power law in the scalar inflation field. It is based on certain minimal supersymmetric extensions of the standard model \cite{allahverdi-2006}. \item The second potential, Eq. (\ref{sdual}), hypothesizes that the potential should be invariant under the $\mathcal{S}$-duality constraint $g\rightarrow 1/g$, or $\phi \rightarrow -\phi$, where $\phi$ is the dilaton/inflaton and $g \approx \exp \left(\phi/M\right)$ \cite{sdual}. The $\mathcal{S}$-duality has its roots in the Dirac quantization condition for the electromagnetic field; thus, it should be equivalent to describing quantum electrodynamics either as a weakly coupled theory of electric charges or as a strongly coupled theory of magnetic monopoles \cite{JSchwarz:2002}. The third potential, Eq. (\ref{poweri}), has an exponential form with a power-law inflation field. The first two inflationary potentials agree well with the observations of the PLANCK and BICEP2 collaborations at $1\,\sigma$ and $2\, \sigma$, in the covered range of spectral index and fluctuation ratio. \item The potential of Eq. (\ref{poweri}) disagrees. A few remarks are in order. The agreement is limited to the values covered by the parametric calculations; the PLANCK and BICEP2 observations are much wider, with uncertainties in $r$ of order $25\%$. We have presented, in a conceivable way, the effects of a reasonably sized GUP parameter on our estimate of $r$. \end{itemize} We conclude that, depending on the inflation potential $V(\phi)$ and the scalar field $\phi$, the GUP approach seems to reproduce the BICEP2 observation $r=0.2 _{-0.05}^{+0.07}$, which has also been fitted using $55$ {\it e}-folds for a chaotic potential with varying inflation, and seems to agree well with the corresponding upper bound from PLANCK and with the WMAP9 experiment.
\section{Introduction} The inclusive production of quarkonia was studied intensively in the past, both in elementary hadronic and in nuclear reactions, at SPS, RHIC and Tevatron energies; for a review see e.g.\cite{quarkonia_reviews}. In contrast, the exclusive production of heavy $Q\bar Q$ vector quarkonium states (e.g. $h_1 h_2 \to h_1 \Upsilon h_2$) in hadronic interactions has never been measured, but it recently attracted much attention from the theoretical side \cite{SMN,KMR,Klein,GM05,Bzdak,SS07,GM}. Due to the negative charge parity of the vector meson, the purely hadronic Pomeron--Pomeron fusion mechanism of exclusive meson production is not available, and instead the production will proceed via photon--Pomeron fusion. A possible purely hadronic mechanism would involve the elusive Odderon exchange \cite{SMN,Bzdak}. Currently there is no compelling evidence for the Odderon, and here we restrict ourselves to the photon-exchange mechanism, which exists without doubt, and must furthermore dominate any hadronic exchange at very small momentum transfers. In our approach to the exclusive hadronic reaction, we follow closely the procedure outlined in our previous work on $J/\psi$ production \cite{SS07}. There is one crucial difference, though. While in the case of diffractive $J/\psi$ photoproduction there exists a large body of fairly detailed data, including e.g. transverse momentum distributions, the photoproduction data for exclusive $\Upsilon$'s are rather sparse \cite{ZEUS_old,H1,ZEUS_Upsilon}. Hence, differently from \cite{SS07}, we cannot avoid modelling the relevant $\gamma p \to \Upsilon p$ subprocess. Fortunately, due to the large mass of the $\Upsilon$'s constituents, the cross section receives its main contribution from small-size $b \bar b$ dipoles, and the production mechanism can be described in a pQCD framework (for a recent review and references, see \cite{INS06}). The two main ingredients are the unintegrated gluon distribution of the proton and the light-cone wave function of the vector meson. The unintegrated gluon distribution is sufficiently well constrained by the precise small-$x$ data for the inclusive proton structure function, and we content ourselves here with a particular parametrisation which provides a good description of inclusive deep inelastic scattering data \cite{IN_glue}. As the relevant energy range of the $\gamma p \to \Upsilon p$ subprocess at the Tevatron overlaps well with the HERA energy range, any glue which fulfills the stringent constraints of the precise HERA $F_2$ data will do a similar job. Alternative unintegrated gluon distributions are discussed for example in \cite{Unintegrated}. The current experimental analyses at the Tevatron \cite{Pinfold} call for an evaluation of differential distributions including the effects of absorptive corrections. The HERA data cover the $\gamma p$ center-of-mass (cm) energy range $W \sim 100 \div 200$ GeV. This energy range is in fact very much relevant to the exclusive production at Tevatron energies for not too large rapidities of the meson, say $|y| \lesssim 3$. This will be different at the LHC, where a broad range of subsystem energies $W_{\gamma p}$, up to several TeV, is spanned for $\Upsilon$'s emitted in the forward direction, requiring a long-range extrapolation into a completely unexplored region. In this paper, however, we concentrate on predictions for Tevatron energies.
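To make the kinematics quantitative, the $\gamma p$ subsystem energy at a given meson rapidity follows from the approximate relation $W^2_{\gamma p}\simeq M_V\sqrt{s}\,e^{\pm y}$, the two signs corresponding to the two possible photon directions. The short sketch below (our own orientation estimate, not part of the calculation proper) evaluates it for Tevatron kinematics, $\sqrt{s}=1960$ GeV:
\begin{verbatim}
import numpy as np

# Rough kinematics of exclusive Upsilon photoproduction at the Tevatron:
# W^2 ~ M_V * sqrt(s) * exp(+-y) for the two photon directions.
sqrt_s, M_V = 1960.0, 9.46          # GeV
for y in (0.0, 1.5, 3.0):
    W_plus  = np.sqrt(M_V*sqrt_s*np.exp(+y))
    W_minus = np.sqrt(M_V*sqrt_s*np.exp(-y))
    print(f"y = {y:3.1f}:  W = {W_minus:5.0f} and {W_plus:5.0f} GeV")
\end{verbatim}
At midrapidity this gives $W\simeq 136$ GeV, squarely inside the HERA range quoted above.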
Here our input amplitude is constrained by the HERA data, to whose description we now turn. \section{Photoproduction $\gamma p \to \Upsilon p$ at HERA} We thus turn to the analysis of the photoproduction reaction studied at HERA. The photoproduction amplitude will then be the major building block for our prediction of exclusive $\Upsilon$ production at the Tevatron. \subsection{Amplitude for $\gamma p \to \Upsilon p$} \begin{figure}[!h] \includegraphics[width=0.4\textwidth]{diagram1.eps} \caption{\label{fig:diagram_photon_pomeron} \small A sketch of the exclusive $\gamma p \to \Upsilon p$ amplitude.} \end{figure} The amplitude for the reaction under consideration is shown schematically in Fig.\ref{fig:diagram_photon_pomeron}. As explained in Ref.\cite{INS06}, the imaginary part of the amplitude for the $\gamma^* p \to \Upsilon p$ process can be written as \begin{eqnarray} \Im m \; {\cal M}_{\lambda_{\gamma},\lambda_V}(W,t=-\mbox{\boldmath $\Delta$}^2,Q^2) = W^2 \frac{c_\Upsilon \sqrt{4 \pi \alpha_{em}}}{4 \pi^2} \int \frac{d^2\mbox{\boldmath $\kappa$}}{\kappa^4} \alpha_S(q^2) {\cal F}(x_1,x_2,\mbox{\boldmath $\kappa$}_1,\mbox{\boldmath $\kappa$}_2) \nonumber \\ \times \int \frac{dz d^2 \mbox{\boldmath $k$}}{z (1-z)} I_{\lambda_{\gamma}, \lambda_V}(z,\mbox{\boldmath $k$},\mbox{\boldmath $\kappa$}_1,\mbox{\boldmath $\kappa$}_2,Q^2) \; , \label{full_imaginary} \end{eqnarray} where the transverse momenta of the gluons coupled to the $Q \bar Q$ pair can be written as \begin{eqnarray} \mbox{\boldmath $\kappa$}_1 = \mbox{\boldmath $\kappa$} + {\mbox{\boldmath $\Delta$} \over 2} \, , \, \mbox{\boldmath $\kappa$}_2 = - \mbox{\boldmath $\kappa$} + {\mbox{\boldmath $\Delta$} \over 2} \, . \end{eqnarray} The quantity ${\cal F}(x_1,x_2,\mbox{\boldmath $\kappa$}_1,\mbox{\boldmath $\kappa$}_2)$ is the off-diagonal unintegrated gluon distribution. Explicit expressions for $I_{\lambda_{\gamma}, \lambda_V}$ can be found in \cite{INS06}. For heavy vector mesons, helicity-flip transitions may be neglected, and we concentrate on the $s$-channel helicity conserving amplitude, $\lambda_\gamma = \lambda_V$. In the forward scattering limit, i.e. for $\mbox{\boldmath $\Delta$} =0$, the azimuthal integrations can be performed analytically, and we obtain the following representation for the imaginary part of the amplitude for forward photoproduction $\gamma p \to \Upsilon p$: \begin{eqnarray} \Im m \, {\cal M}(W,\Delta^2 = 0,Q^2=0) = W^2 \frac{c_\Upsilon \sqrt{4 \pi \alpha_{em}}}{4 \pi^2} \, 2 \, \int_0^1 \frac{dz}{z(1-z)} \int_0^\infty \pi dk^2 \psi_V(z,k^2) \\ \int_0^\infty {\pi d\kappa^2 \over \kappa^4} \alpha_S(q^2) {\cal{F}}(x_{eff},\kappa^2) \Big( A_0(z,k^2) \; W_0(k^2,\kappa^2) + A_1(z,k^2) \; W_1(k^2,\kappa^2) \Big) \, , \label{amplitude_forward} \end{eqnarray} where \begin{eqnarray} A_0(z,k^2) &=& m_b^2 + \frac{k^2 m_b}{M + 2 m_b} \, , \\ A_1(z,k^2) &=& \Big[ z^2 + (1-z)^2 - (2z-1)^2 \frac{m_b}{M + 2 m_b} \Big] \, \frac{k^2}{k^2+m_b^2} \, , \end{eqnarray} and \begin{eqnarray} W_0(k^2,\kappa^2) &=& {1 \over k^2 + m_b^2} - {1 \over \sqrt{(k^2-m_b^2-\kappa^2)^2 + 4 m_b^2 k^2}} \, , \nonumber \\ W_1(k^2,\kappa^2) &=& 1 - { k^2 + m_b^2 \over 2 k^2} \Big( 1 + {k^2 - m_b^2 - \kappa^2 \over \sqrt{(k^2 - m_b^2 - \kappa^2)^2 + 4 m_b^2 k^2 }} \Big) \, . \end{eqnarray} To obtain these results, the perturbative $\gamma \to b \bar b$ light-cone wave function was used; the vertex for the $b \bar b \to \Upsilon$ transition is given below, and is obtained by projecting onto the pure $s$-wave $b \bar b$ state.
Here $c_\Upsilon = e_b = -1/3$, and the mass of the bottom quark is taken as $m_b = 4.75$ GeV. The relative transverse momentum squared of the (anti-)quarks in the bound state is denoted by $k^2$, their longitudinal momentum fractions are $z,1-z$, and we introduced \begin{equation} M^2 = {k^2 + m_b^2 \over z(1-z)} \, . \end{equation} The dominant contribution to the amplitude comes from the piece $ \propto A_0 W_0 \sim m_b^2 W_0$, and our exact projection onto $s$-wave states in fact differs only marginally from the naive $\gamma_\mu$-vertex for the $\Upsilon \to b \bar b$ transition. The 'radial' light-cone wave function of the vector meson, $\psi_V(z,k^2)$, will be discussed further below. The unintegrated gluon distribution ${\cal{F}}(x,\kappa^2)$ is normalized such that for a large scale $\bar Q^2$ it is related to the integrated gluon distribution $g (x,\bar Q^2)$ through \begin{equation} x g(x,\bar Q^2 ) = \int^{\bar Q^2} {d\kappa^2 \over \kappa^2} {\cal{F}}(x,\kappa^2) \, . \end{equation} The running coupling $\alpha_S$ enters at the largest relevant virtuality, $q^2 = \max \{ \kappa^2, k^2 + m_b^2 \}$. Due to the finite mass of the final-state vector meson, the longitudinal momentum transfer is nonvanishing, and, as indicated above, a more precise treatment would require the use of skewed/off-diagonal gluon distributions. At the high energies relevant here it is admissible to account for skewedness by an appropriate rescaling of the diagonal gluon distribution \cite{Shuvaev}. With the specific gluon distribution used by us, the prescription of \cite{Shuvaev} can be emulated by taking the ordinary gluon distribution at \cite{INS06} \begin{equation} x_{eff} = C_{skewed} \frac{M_V^2}{W^2} \sim 0.41 \, \cdot \, {M_V^2 \over W^2} \, . \label{longitudinal_momentum_fraction} \end{equation} For instance, at $W=130$ GeV this gives $x_{eff} \approx 2\times 10^{-3}$, well within the small-$x$ domain probed at HERA. The full amplitude, at finite momentum transfer, well within the diffraction cone, is finally written as \begin{eqnarray} {\cal M}(W,\Delta^2) = (i + \rho) \, \Im m {\cal M}(W,\Delta^2=0) \, \exp(-B(W) \Delta^2/2) \, . \end{eqnarray} Here $\Delta^2$ is the (transverse) momentum transfer squared and $B(W)$ is the energy-dependent slope parameter, \begin{equation} B(W) = B_0 + 2 \alpha'_{eff} \log \Big( {W^2 \over W^2_0} \Big) \, , \end{equation} with $\alpha'_{eff} = 0.164$ GeV$^{-2}$ \cite{H1_Jpsi}, $W_0 = 95$ GeV. For the value of $B_0$ see the discussion of the numerical results below. For the small-size $b\bar b$ dipoles relevant to our problem, a fast rise of the cross section can be anticipated, and it is important to include the real part, which we do by means of the analyticity relation \begin{eqnarray} \rho = {\Re e {\cal M} \over \Im m {\cal M}} = \tan \Big[ {\pi \over 2} \, { \partial \log \Big( \Im m {\cal M}/W^2 \Big) \over \partial \log W^2 } \Big] = \tan \Big( {\pi \over 2 } \, \Delta_{{\bf I\!P}} \Big) \, . \end{eqnarray} Finally, our amplitude is normalized such that the differential cross section for $\gamma p \to V p$ is \begin{eqnarray} { d \sigma(\gamma p \to V p) \over d \Delta^2} = {1 + \rho^2 \over 16 \pi} \, \Big| \Im m { {\cal M} (W,\Delta^2=0) \over W^2 } \Big|^2 \exp(-B(W) \Delta^2) \, , \end{eqnarray} and thus \begin{equation} \sigma_{tot}(\gamma p \to V p) = {1 + \rho^2 \over 16 \pi B(W)} \, \Big| \Im m { {\cal M}(W,\Delta^2=0) \over W^2 } \Big|^2 \, . \end{equation} \subsection{$b \bar b$ wave function of the $\Upsilon$ meson} We treat the $\Upsilon, \Upsilon'$ mesons as $b\bar{b}$ $s$-wave states; the relevant formalism of light-cone wavefunctions is reviewed in \cite{INS06}.
The vertex for the $\Upsilon \to b \bar b$ transition is taken as \begin{eqnarray} \varepsilon_\mu \, \bar u(p_b) \Gamma^\mu v(p_{\bar b}) = [M^2 - M_V^2] \, \psi_V(z,k^2) \, \bar u(p_b) \Big( \gamma^\mu - {p_b^\mu - p_{\bar b}^\mu \over M + 2 m_b} \Big) v(p_{\bar b}) \, \varepsilon_\mu \, , \end{eqnarray} where $\varepsilon_\mu$ is the polarization vector of the vector meson $V = \Upsilon,\Upsilon'$, and $p_{b, \bar b}^\mu$ are the on-shell four--momenta of the $b,\bar b$ quarks, $p_{b, \bar b}^2 = m_b^2$. The so--defined radial wave function $\psi_V(z,k^2)$ can be regarded as a function not of $z$ and $k^2$ independently, but rather of the three--momentum $\vec{p}$ of, say, the quark in the rest frame of the $b \bar b$ system of invariant mass $M$, $ \vec{p} = (\mbox{\boldmath $k$} , (2z-1) M/2) $. Then, \begin{eqnarray} \psi_V(z,k^2) \to \psi_V(p^2) \, , \, {dz d^2 \mbox{\boldmath $k$} \over z(1-z)} \to {4 \, d^3\vec{p} \over M} \, , \, p^2 = {M^2 - 4 m_b^2 \over 4} \, . \end{eqnarray} We assume that the Fock--space components of the $\Upsilon, \Upsilon'$ states are exhausted by the $b \bar b$ components and impose on the light--cone wave function (LCWF) the orthonormality conditions ($i,j = \Upsilon,\Upsilon'$): \begin{eqnarray} \delta_{i j} = N_c \int {d^3 \vec p \over (2 \pi)^3} \, 4M \, \psi_i(p^2) \psi_j(p^2) \, . \end{eqnarray} Important constraints on the LCWF are imposed by the decay width $V \to e^+ e^-$: \begin{eqnarray} \Gamma (V \to e^+ e^-) = {4 \pi \alpha_{em}^2 c^2_\Upsilon \over 3 M_V^3} \, \cdot g_V^2 \cdot K_{NLO} \, , \, \, K_{NLO} = 1 - {16 \over 3 \pi } \alpha_S(m_b^2) \, , \end{eqnarray} where \cite{INS06,Igor} \begin{eqnarray} g_V = {8 N_c \over 3} \int {d^3 \vec p \over (2 \pi)^3} (M + m_b) \, \psi_V(p^2) \, . \end{eqnarray} For the -- fully nonperturbative -- LCWF we shall try two different scenarios, following again the suggestions in \cite{INS06,Igor}. Firstly, the Gaussian, harmonic--oscillator--like wave functions: \begin{equation} \psi_{1S}(p^2) = C_1 \exp\left( - \frac{p^2 a_1^2}{2} \right) \, , \, \psi_{2S}(p^2) = C_2 (\xi_0 - p^2 a_2^2) \exp\left( - \frac{p^2 a_2^2}{2} \right) \, , \label{harmonic_oscillator_WF} \end{equation} and secondly, the Coulomb--like wave functions, with a slowly decaying power--like tail: \begin{equation} \psi_{1S}(p^2) = {C_1 \over \sqrt{M}} \, {1 \over (1 + a_1^2 p^2)^2} \, , \, \psi_{2S}(p^2) = {C_2 \over \sqrt{M}} \, {\xi_0 - a_2^2 p^2 \over (1 + a_2^2 p^2)^3} \, . \end{equation} The parameters $a_i^2$ are obtained from fitting the decay widths into $e^+ e^-$, whereas $\xi_0$, and therefore the position of the node of the $2S$ wave function, is obtained from the orthogonality of the $2S$ and $1S$ states. We used the following values for masses and widths: $M(\Upsilon(1S))= 9.46$ GeV, $M(\Upsilon(2S)) = 10.023$ GeV, $\Gamma(\Upsilon(1S) \to e^+ e^- ) = 1.34$ keV, $\Gamma(\Upsilon(2S) \to e^+ e^-) = 0.61$ keV \cite{PDG}. \subsection{Numerical results and comparison with HERA data} \label{Sec:results_HERA} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{sig_tot.eps} \includegraphics[width=0.45\textwidth]{sig_tot_G.eps} \caption{\label{fig:sig_tot} \small $\sigma_{tot} (\gamma p \to \Upsilon(1S) p)$ as a function of the $\gamma p$ cm--energy versus HERA data. Left: dependence on the treatment of the $b \bar b \to \Upsilon$ transition; solid curves: Gaussian (G) wave function, dashed curves: Coulomb--like (C) wave function.
Thick lines were obtained including the NLO--correction for the $\Upsilon$ decay width, while for the thin lines $K_{NLO}=1$. Right: dependence on the slope parameter $B_0$ (given in GeV$^{-2}$), for the Gaussian wave function. The experimental data are taken from \cite{ZEUS_old,H1,ZEUS_Upsilon}. } \end{figure} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{R_xkNLO_1.eps} \includegraphics[width=0.45\textwidth]{R_xkNLO_f.eps} \caption{\label{fig:ratio_2S1S} \small The $2S/1S$-ratio $\sigma_{tot}(\gamma p \to \Upsilon(2S) p)/ \sigma_{tot}(\gamma p \to \Upsilon(1S) p )$ as a function of the $\gamma p$ cm--energy. } \end{figure} In Fig. \ref{fig:sig_tot} we show the total cross section for the exclusive $\gamma p \to \Upsilon p$ process as a function of the $\gamma p$ cm-energy. In the left panel we show results for the two different wave functions discussed in the text: Gaussian (solid lines) and Coulomb-like (dashed lines). The free parameters of the wave functions have been adjusted to reproduce the leptonic decay width in two ways: (a) using the leading order formula (thin lines) and (b) including QCD corrections (thick lines). Including the $K_{NLO}$--factor in the width enhances the momentum--space integral over the wave function (the WF at the spatial origin), and hence enhances the prediction for the photoproduction cross section. Notice that, strictly speaking, the inclusion of the $\alpha_S$--correction is not really warranted, given that we do not have the corresponding radiative corrections to the production amplitude. Fortunately, due to the large scale $m_b^2$, the ambiguity between the two ways of adjusting the wave function parameters leads to only a marginal difference in the total cross section over most of the relevant energy range. To be fair, it should be mentioned that the situation with the next--to--leading order corrections to diffractive vector meson production is not a very comfortable one, see for example the instabilities reported in \cite{Ivanov_NLO}. But then, a systematic extension of $k_\perp$--factorisation is still lacking, so that at present we must be content with estimates of the theoretical uncertainties obtained by changing the principal parameters in the calculation. As can be seen from the figure, different functional forms of the LCWF can lead to quite substantial differences in the predicted cross section. Finally, the absence of experimental data for $t$--distributions leaves the slope parameter $B_0$ only poorly constrained. The full, energy dependent slope can be decomposed into three contributions: one from the transition $\gamma \to V$, a second one from the dynamics of the exchanged gluon ladder -- which induces the main part of its energy dependence -- and a third one from the elastic $p \to p$ vertex. In comparison to $J/\psi$--production, we may expect that the slope in our case receives a smaller contribution from the $\gamma \to V$ transition, due to the smaller transverse sizes involved \cite{NNPZZ}. It may therefore be expected that $B_0$ should be somewhat smaller than in $J/\psi$ photoproduction, where it is $\sim 4.6$ GeV$^{-2}$ \cite{H1_Jpsi}. We show the sensitivity to the slope parameter $B_0$ in the right panel of Fig. \ref{fig:sig_tot}. We observe that, in general, our predictions are systematically somewhat below the experimental data. In principle, the agreement could be improved by choosing an abnormally small value for $B_0$; we shall, however, refrain from such an option.
In our view, the description of the data, given the large error bars, is quite acceptable. The energy dependence of our result corresponds to an effective $\Delta_{\bf I\!P} \sim 0.39$. For our predictions for the Tevatron we shall use the Gaussian LCWF option, with the NLO correction to the width included. In Fig. \ref{fig:ratio_2S1S} we show the ratio of the cross section for the first radial excitation $\Upsilon(2S)$ to the cross section for the ground state $\Upsilon(1S)$. The principal reason behind the suppression of the $2S$ state is the well--known node effect (see \cite{NNPZ} and references therein) -- a cancellation of strength in the $2S$ case due to the change of sign of the radial wave function. It is perhaps not surprising that the numerical value of the $2S/1S$--ratio is strongly sensitive to the shape of the radial light--cone wave function. Here we assumed equal slopes for $\Upsilon(1S)$ and $\Upsilon(2S)$ production; given the large spread of predictions from different wave functions, this simplification appears acceptable. We finally note that the ratio depends very little on the choice of the $K_{NLO}$ factor (compare left and right panels). \section{Exclusive photoproduction in $p \bar p$ collisions} \subsection{The absorbed $2 \to 3$ amplitude} \label{2_to_3} The necessary formalism for the calculation of amplitudes and cross--sections was outlined in sufficient detail in Ref. \cite{SS07}. Here we give only a brief summary. The basic mechanisms are shown in Fig. \ref{fig:diagram_2}. \begin{figure}[!h] \includegraphics[width=0.8\textwidth]{diagram2.eps} \caption{\label{fig:diagram_2} \small A sketch of the two mechanisms considered in the present paper: photon-pomeron (left) and pomeron-photon (right), including absorptive corrections.} \end{figure} The major difference from HERA, where the photon was emitted by a lepton that does not participate in the strong interactions, is that now both initial state hadrons can be the source of the photon. Therefore, it is now necessary to take account of the interference between the two amplitudes. The photon exchange parts of the amplitude involve only very small, predominantly transverse momentum transfers. In fact, here we concentrate on the kinematic domain where the outgoing protons lose only tiny fractions $z_1,z_2 \ll 1$ of their longitudinal momenta; in practice $z \lesssim 0.1$ means $y \lesssim 3$. In terms of the transverse momenta of the outgoing hadrons, $\mbox{\boldmath $p$}_{1,2}$, the relevant four--momentum transfers are $t_i = - (\mbox{\boldmath $p$}_i^2 + z_i^2 m_p^2)/(1-z_i) \, , i = 1,2$, and $s_1 \approx (1 -z_2) s$ and $s_2 \approx (1-z_1) s$ are the familiar Mandelstam variables for the appropriate subsystems. The photon virtualities $Q_i^2$ are small (what counts here is that $Q_i^2 \ll M_\Upsilon^2$), so that the contribution from longitudinal photons can be safely neglected. Also, as mentioned above, we assume $s$--channel helicity conservation in the $\gamma^* \to \Upsilon$ transition.
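As a rough numerical orientation on these kinematics, the following sketch (ours, not part of the analysis of Ref. \cite{SS07}) evaluates the four--momentum transfers and subsystem energies; the relation $z_{1,2} \simeq (M_V/\sqrt{s})\, e^{\pm y}$ connecting the momentum fractions to the meson rapidity is the standard photoproduction estimate and is used here only for illustration:
\begin{verbatim}
# Illustrative sketch (ours): kinematic relations for p pbar -> p pbar V.
import math

m_p = 0.938        # proton mass [GeV]
M_V = 9.46         # Upsilon(1S) mass [GeV]
s   = 1960.0**2    # Tevatron cm energy squared [GeV^2]

def t_transfer(p_perp, z):
    # four-momentum transfer t_i = -(p_i^2 + z_i^2 m_p^2)/(1 - z_i)
    return -(p_perp**2 + z**2 * m_p**2) / (1.0 - z)

def z_fractions(y):
    # z_{1,2} ~ (M_V/sqrt(s)) exp(+-y): an estimate, for orientation only
    zt = M_V / math.sqrt(s)
    return zt * math.exp(y), zt * math.exp(-y)

for y in (0.0, 2.0, 3.0):
    z1, z2 = z_fractions(y)
    s1, s2 = (1.0 - z2) * s, (1.0 - z1) * s   # subsystem energies squared
    print(f"y={y}: z1={z1:.3f}, z2={z2:.3f}, "
          f"W1={math.sqrt(s1):.0f} GeV, W2={math.sqrt(s2):.0f} GeV, "
          f"t1(p=0.2 GeV)={t_transfer(0.2, z1):.4f} GeV^2")
\end{verbatim}
In particular, one checks that $z \lesssim 0.1$ indeed corresponds to $y \lesssim 3$ at the Tevatron energy.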
In summary, we present the $2 \to 3$ Born amplitude (without absorptive corrections) in the form of a two--dimensional vector (corresponding to the two transverse (linear) polarizations of the final state vector meson): \begin{eqnarray} \mbox{\boldmath $M$}^{(0)}(\mbox{\boldmath $p$}_1,\mbox{\boldmath $p$}_2) &&= e_1 {2 \over z_1} {\mbox{\boldmath $p$}_1 \over t_1} {\cal{F}}_{\lambda_1' \lambda_1}(\mbox{\boldmath $p$}_1,t_1) {\cal {M}}_{\gamma^* h_2 \to V h_2}(s_2,t_2,Q_1^2) + ( 1 \leftrightarrow 2 ) \, . \end{eqnarray} Inclusion of absorptive corrections (the ``elastic rescattering'') leads in momentum space to the full, absorbed amplitude \begin{eqnarray} \mbox{\boldmath $M$}(\mbox{\boldmath $p$}_1,\mbox{\boldmath $p$}_2) &&= \int{d^2 \mbox{\boldmath $k$} \over (2 \pi)^2} \, S_{el}(\mbox{\boldmath $k$}) \, \mbox{\boldmath $M$}^{(0)}(\mbox{\boldmath $p$}_1 - \mbox{\boldmath $k$}, \mbox{\boldmath $p$}_2 + \mbox{\boldmath $k$}) = \mbox{\boldmath $M$}^{(0)}(\mbox{\boldmath $p$}_1,\mbox{\boldmath $p$}_2) - \delta \mbox{\boldmath $M$}(\mbox{\boldmath $p$}_1,\mbox{\boldmath $p$}_2) \, . \nonumber \\ \label{rescattering term} \end{eqnarray} Here \begin{equation} S_{el}(\mbox{\boldmath $k$}) = (2 \pi)^2 \delta^{(2)}(\mbox{\boldmath $k$}) - {1\over 2} T(\mbox{\boldmath $k$}) \, \, \, , \, \, \, T(\mbox{\boldmath $k$}) = \sigma^{p \bar p}_{tot}(s) \, \exp\Big(-{1\over 2} B_{el} \mbox{\boldmath $k$}^2 \Big) \, , \end{equation} where $\sigma^{p \bar p}_{tot}(s) = 76$ mb and $B_{el} = 17$ GeV$^{-2}$ \cite{CDF}, so that the absorptive correction $\delta \mbox{\boldmath $M$}$ reads \begin{eqnarray} \delta \mbox{\boldmath $M$}(\mbox{\boldmath $p$}_1,\mbox{\boldmath $p$}_2) = \int {d^2\mbox{\boldmath $k$} \over 2 (2\pi)^2} \, T(\mbox{\boldmath $k$}) \, \mbox{\boldmath $M$}^{(0)}(\mbox{\boldmath $p$}_1-\mbox{\boldmath $k$},\mbox{\boldmath $p$}_2+\mbox{\boldmath $k$}) \, . \label{absorptive_corr} \end{eqnarray} The differential cross section is given in terms of $\mbox{\boldmath $M$}$ as \begin{equation} d \sigma = { 1 \over 512 \pi^4 s^2 } | \mbox{\boldmath $M$} |^2 \, dy dt_1 dt_2 d\phi \, , \end{equation} where $y$ is the rapidity of the vector meson, and $\phi$ is the angle between $\mbox{\boldmath $p$}_1$ and $\mbox{\boldmath $p$}_2$. \subsection{Results for the Tevatron} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{dsig_dy_U1S.eps} \includegraphics[width=0.45\textwidth]{dsig_dy_U2S.eps} \caption{\label{fig:dsig_dy} \small Differential cross section $d\sigma / dy$ for $\Upsilon(1S)$ (left panel) and $\Upsilon(2S)$ (right panel) at the Tevatron energy $W$ = 1960 GeV. The thin solid line is for the calculation with the bare amplitude, the thick line for the calculation with absorption effects included.} \end{figure} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{dsig_dpv2_U1S_without.eps} \includegraphics[width=0.45\textwidth]{dsig_dpv2_U1S_with.eps} \caption{\label{fig:dsig_dpt2_U1S} \small Invariant cross section $d \sigma /dy dp_t^2$ as a function of $p_t^2$ for $\Upsilon(1S)$ at $W$ = 1960 GeV. The solid line: $y=0$, dashed line: $y=2$, dotted line: $y=4$. Left panel: without absorptive corrections; right panel: with absorptive corrections. } \end{figure} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{dsig_dpv2_U2S_without.eps} \includegraphics[width=0.45\textwidth]{dsig_dpv2_U2S_with.eps} \caption{\label{fig:dsig_dpt2_U2S} \small Invariant cross section $d \sigma /dy dp_t^2$ as a function of $p_t^2$ for $\Upsilon(2S)$ at $W$ = 1960 GeV.
The solid line: $y=0$, dashed line: $y=2$, dotted line: $y=4$. Left panel: without absorptive corrections; right panel: with absorptive corrections. } \end{figure} \begin{figure}[!h] \includegraphics[width=0.45\textwidth]{dsig_dy_U1S_LHC.eps} \includegraphics[width=0.45\textwidth]{dsig_dpv2_U1S_with_LHC.eps} \caption{\label{fig:dsig_LHC} \small Left panel: differential cross section $d \sigma / dy$ for $\Upsilon (1S)$ at the LHC energy $W= 14$ TeV. The thin solid line: without absorptive corrections; thick line: with absorptive corrections. Right panel: invariant cross section $d \sigma /dy dp_t^2$ for $\Upsilon(1S)$ as a function of $p_t^2$ at $W = 14$ TeV. The solid line: $y=0$, dashed line: $y=2$, dotted line: $y=4$. Absorptive corrections are included. } \end{figure} We now come to the results for the differential cross sections for $\Upsilon$ production. In Fig. \ref{fig:dsig_dy} we show the distribution in rapidity of $\Upsilon(1S)$ (left panel) and $\Upsilon(2S)$ (right panel). The ratio between $2S$ and $1S$ follows closely the photoproduction ratio discussed in Sec. \ref{Sec:results_HERA}. The parameters chosen for this calculation correspond to the Gaussian wave function, with $K_{NLO}$ included in the adjustment to the decay width. Also the unintegrated gluon distribution is the same as the one used in section \ref{Sec:results_HERA}. The results obtained with bare amplitudes are shown by the thin (red) lines, and the results with absorption effects included are shown by thick (black) lines. Here the absorption effects are truly a correction and cause only about a 20--30\% decrease of the cross section. This is in sharp contrast to the situation for the fusion of two QCD ladders (relevant for the production of scalar charmonia or the Higgs boson). The rapidity distribution is only slightly distorted by the absorptive corrections. Notice that larger rapidities also mean larger photon virtualities, and therefore somewhat smaller transverse distances in the $p \bar p$ collision are relevant. Finally, in the following figures we show the distributions of $\Upsilon$'s in transverse momentum. We show results for different values of rapidity: $y = 0$ (solid), $y = 2$ (dashed) and $y = 4$ (dotted). In Fig. \ref{fig:dsig_dpt2_U1S} we show the distributions for $\Upsilon(1S)$ and in Fig. \ref{fig:dsig_dpt2_U2S} for $\Upsilon(2S)$. Both results with bare amplitudes (left panels) and with absorption (right panels) are shown. Inspection of the figures shows that absorption effects are larger for large values of the $\Upsilon$ transverse momenta -- they can lower the cross section by almost an order of magnitude at the largest transverse momenta. There is again a different effect of absorption for different rapidities. Notice that our predictions, which use the low--$z$ approximation of the photon flux, are most accurate at $y \lesssim 3$. This is quite appropriate for the Tevatron, where it seems that a measurement is possible only at rather low rapidities. We do not show here observables related to the outgoing proton and/or antiproton, as they cannot be studied experimentally at the Tevatron. There will be, however, such a possibility at the LHC. There are important issues regarding the extrapolation to LHC energies. Firstly, the energy of the $ \gamma p \to \Upsilon p$ process can vastly exceed the HERA range, and secondly, the much increased rapidity range may increase the importance of high--mass diffraction for the absorptive corrections.
Still, to give the reader a rough idea of the expected cross section, we show in Fig. \ref{fig:dsig_LHC} selected spectra at the LHC energy of $W = 14$ TeV. Here, in the absorptive corrections, we used a Pomeron intercept of $\Delta_{\bf I\!P} = 0.08$. It is interesting to point out that the rise towards the maximum in the rapidity distribution reflects the energy dependence of the $ \gamma p \to \Upsilon p$ subprocess. Absorptive corrections in that subprocess, which we have neglected so far, can possibly alter the shape of the rapidity distribution. Since there are many other interesting aspects at larger energies, we leave a more detailed analysis for the LHC for a separate publication. A brief comment on previous works is in order. In \cite{Klein,Bzdak,GM} absorptive corrections were not included. The equivalent photon approximation is used in \cite{Klein,GM}, which only allows one to obtain rapidity spectra. The form of the transverse momentum distribution suggested in \cite{Klein} is not borne out by our calculation. The cross sections $d\sigma/dy$ obtained in \cite{Klein,Bzdak,GM} lie in the same ballpark as the results presented here. However, the shape of the rapidity distribution in \cite{GM} is different from ours. \section{Conclusions} We have calculated the forward amplitude for the $\gamma p \to \Upsilon p$ reaction within the formalism of $k_\perp$-factorization. In this approach the energy dependence of the process is encoded in the $x$-dependence of the unintegrated gluon distributions. The latter object is constrained by data on inclusive deep inelastic scattering. The $t$-dependence for the $\gamma p \to \Upsilon p$ process involves a free parameter and is in effect parametrized. Regarding the $\gamma \to \Upsilon$ transition, we used different Ans\"atze for the $b \bar b$ wave functions. The results for $\Upsilon(1S)$ production depend only slightly on the model of the wave function, while the $2S/1S$ ratio shows a substantial sensitivity. We compared our results for the total cross section with recent data from HERA. Our results are systematically somewhat lower than the data, although the overall discrepancy is not worrisome, given the large uncertainties due to the rather poor experimental resolution in the meson mass. The amplitudes for the $\gamma p \to \Upsilon p$ process are used next to calculate the amplitude for the $p \bar p \to p \bar p \Upsilon$ reaction, assuming the photon-Pomeron (Pomeron-photon) underlying dynamics. In the present approach the Pomeron is then described within QCD in terms of unintegrated gluon distributions. We have calculated several differential distributions, including soft absorption effects not included so far in the literature. Our predictions are relevant for current experiments at the Tevatron; predictions were also made, with qualifications, for possible future experiments at the LHC. \section{Acknowledgements} This work was partially supported by the Polish Ministry of Science and Higher Education (MNiSW) under contract 1916/B/H03/2008/34.
\section{Introduction} Carbon nanotubes are fundamental materials for developing nanoelectromechanical systems because they present a unique combination of outstanding electronic and mechanical properties. Cross-sectional deformations of single-wall carbon nanotubes (SWNTs) under pressure have been extensively studied by means of experimental tools \cite{TangPRL00,peters00,karmakar03,merlenPRB05,yao08} and theoretical models. \cite{CharlierPRB96,YildirimPRB00,sluiterPRB02,reichPRB02,CapazPSS04,tangney05,ChoiPSS07} On the other hand, the behavior of double-wall carbon nanotubes (DWNTs) under pressure has been considerably less studied \cite{arvanitidis05,puechPRB06,puechPRB08,kawasaki08}. DWNTs may be better candidate materials than SWNTs for the engineering of nanotube-based composite materials because of their geometry, in which the outer tube ensures the chemical coupling with the matrix and the inner tube acts as mechanical support for the whole system \cite{aguiar11}. Several theoretical studies have addressed the pressure dependence of the cross-sectional shape of SWNTs as a function of chirality and diameter. Starting from an almost perfectly circular cross section, the cross-sectional shape of the nanotube becomes oval or polygonized as the pressure is increased, evolving later to a collapsed state with a peanut shape. \cite{tangney05,ImtaniPRB07,ImtaniCMS08,ImtaniCMS09,yangPRB07} Based on elasticity theory, Sluiter et al. have proposed a diameter-dependent phase diagram for the cross-sectional shape of SWNTs under pressure.\cite{SluiterPRB04} Some authors reported that the radial deformation of SWNTs with diameters smaller than 2.5 nm is reversible from the collapsed state, while the deformation of larger diameter tubes could be irreversible, the collapsed state being metastable or even absolutely stable without pressure application. \cite{tangney05,SluiterPRB04} Many experimental studies have suggested that phase transitions in nanotubes could depend on their metallic character or on the surrounding chemical environment used for transmitting the pressure. \cite{christofilosDREL06,merlenPSS06,proctorPSS07,merlenPRB05,AbouelsayedJPCC10} However, theoretical calculations suggest that the phase transitions of SWNTs under pressure depend mainly on the inverse cube of the tube diameter (p$_c$ $\sim$ d$_t^{-3}$) and not on the chirality. \cite{tangney05,elliot04} Even if there is a huge dispersion of results concerning the transition pressure values from theoretical models, there is an overall agreement between different calculations on the existence of two phase transitions (circular-oval and oval-peanut) and on the fact that the critical pressures for those transitions are diameter dependent. The pressure evolution of DWNTs has been much less studied and, in contrast to SWNTs, a detailed understanding of their cross-sectional evolution is still under discussion, due to their complexity regarding the role of the inner and outer tubes. Ye et al. have suggested that the critical collapse pressure of a DWNT is essentially determined by the inner tube stability, so that the collapse pressure of a DWNT is close to what is expected for the inner tube \cite{YePRB2005}. Other authors suggested that the collapse pressure of DWNTs is completely different from what is expected for the corresponding SWNTs considered separately, and that the pressure value still follows a 1/d$_t^{*3}$ scaling law, but with a suitable choice of an average diameter d$_t^*$ \cite{yangAPL06,gadagkar06}.
In this paper we report a study of the vibrational properties of carbon nanotube bundles under pressure, a subject that has not been theoretically well explored in the literature up to now. This paper is organized as follows. First, we describe in section II the methodology used to model the bundled nanotube structures. In section III.A, we calculate the structural evolution of SWNT and DWNT bundles under pressure. We focus on the structural stability of the bundle structure by calculating the critical pressure for collapse, and we compare the behavior of a DWNT bundle with that of its corresponding SWNTs. In section III.B, we explore the vibrational properties of SWNT and DWNT bundles under pressure by calculating the RBM and tangential phonon modes before and after the nanotube collapse. We close our paper with conclusions in section IV. \section{Methodology} In order to assess the mechanical and vibrational properties of carbon nanotubes under pressure, we initially perform zero-temperature structural minimizations of SWNT and DWNT bundles. The carbon-carbon bonding within each CNT is modeled by the reactive empirical bond order (REBO) potential proposed by Brenner \cite{brenner02,brenner90}. Pairwise Lennard-Jones potentials are added to model the non-bonding van der Waals terms ($\epsilon/k_b$=44 K, $\sigma=3.39$ \AA), which are essential to describe intertube interactions \cite{tangney05}. We studied bundles of zigzag and armchair CNTs in triangular arrangements within orthogonal unit cells containing two SWNTs or DWNTs each, with lateral lattice constants $a_x$ and $a_y=\sqrt{3}a_x$ (the $a_y/a_x$ ratio is kept fixed). The lattice constant along the z axis, $a_z$, is chosen to contain 5 (8) unit cells of the zigzag (armchair) tubes. We perform a sequence of small and controllable steps of unit cell volume reduction for each fixed nanotube phase studied (circular, polygonized, oval, peanut, etc.). Each step is of the order of $\Delta V/V_0$ = -0.01$\%$, where $V_0$ is the ambient pressure volume for each nanotube system in the circular phase. For each fixed volume $V$, we search for the atomic positions and lattice constants $a_x$ and $a_z$ that minimize the internal energy $U(V)$. The pressure is obtained by $p=-\Delta U/\Delta V$ and the enthalpy is $H = U + pV$. Phonon frequencies and eigenvectors are directly obtained by the diagonalization of the force constant matrix. The matrix elements are calculated by using finite difference techniques. We focus our vibrational analysis on the radial breathing mode (RBM) and the longitudinal G-band (denoted G$_z$), whose displacements are along the tube axis. Therefore, the RBM and G$_z$ projected densities of states (PDOS) are constructed by projecting the phonon eigenvectors onto the corresponding radial and axial (with opposite phases for atoms in different sub-lattices) displacement fields, respectively. \section{SWNTs and DWNTs under pressure: Structural and Vibrational Properties} \subsection{Nanotube Collapse} We start by studying the low-pressure behavior of SWNT bundles, considering four basic cross-sectional shapes, namely: circular, oval, polygonal (hexagonal) and peanut-like. Tubes of different diameters exhibit a different sequence of cross-sectional shapes as a function of pressure \cite{tangney05}.
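The quasi-static compression protocol described in Sec. II can be summarized by the short schematic sketch below (ours, for illustration only); the real calculation relaxes the atomic positions and lattice constants with the REBO plus Lennard-Jones energy at each fixed volume, whereas here a toy energy function stands in for that minimization, so that only the finite-difference bookkeeping for $p$ and $H$ is meaningful.
\begin{verbatim}
# Schematic sketch (ours) of the compression loop of Sec. II.
def relaxed_energy(V, V0):
    # Toy stand-in for the relaxed internal energy U(V); NOT the REBO
    # model, just a quadratic well around the ambient-pressure volume V0.
    return 0.5 * 500.0 * (V - V0)**2 / V0

def compress(V0, n_steps=100, dV_over_V0=1e-4):
    # Volume reduction in steps of |dV|/V0 = 0.01%; at each step,
    # p = -dU/dV by finite differences and H = U + pV.
    history = []
    V_prev, U_prev = V0, relaxed_energy(V0, V0)
    for k in range(1, n_steps + 1):
        V = V0 * (1.0 - k * dV_over_V0)
        U = relaxed_energy(V, V0)          # stands in for the minimization
        p = -(U - U_prev) / (V - V_prev)   # pressure from finite difference
        history.append((V, p, U + p * V))  # (volume, pressure, enthalpy)
        V_prev, U_prev = V, U
    return history

print(compress(1000.0)[:3])
\end{verbatim}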
Among these shapes, the polygonal (hexagonal) phase is typical of bundled tubes, and it is often obtained in calculations for large-diameter nanotubes, since plane-parallel facing between adjacent tubes tends to decrease the van der Waals interaction energy \cite{Charlier96,Liu05,RuPRB00}. Fig. \ref{Fig1-swnt}a shows the enthalpy as a function of pressure calculated for the circular, oval and polygonized phases of bundles of (8,8), (18,0) and (24,0) SWNTs. The crossing of two curves in this plot (following the lowest enthalpy for a given pressure) indicates a transition between two different cross-sectional shapes. The derivative discontinuities in the $H(p)$ plots are associated with the volume variation at the phase transition. This can be seen more clearly when $p-V$ plots are constructed (not shown here), where each transition is marked by a discontinuous change in volume. For the (8,8) SWNT bundle, the transition pressure from circular to oval (so-called p$_1$) occurs close to 1.5 GPa and from oval to peanut (so-called p$_2$) around 2.6 GPa. For the (18,0) SWNT bundle, we found phase transitions at 1.2 GPa (circular to polygonal, p$_1^\prime$) and 1.5 GPa (polygonal to peanut, p$_2^\prime$). Finally, for the (24,0) SWNT bundle the polygonal to peanut transition (p$_2^\prime$) is found at 0.6 GPa. We see that the small diameter (8,8) SWNT bundle undergoes a circular$\rightarrow$oval$\rightarrow$peanut phase transition sequence as the pressure increases, whereas for the intermediate diameter (18,0) SWNT the sequence is circular$\rightarrow$polygonal$\rightarrow$peanut, and for the large-diameter (24,0) we obtain simply a polygonal-peanut transition, since the tubes are already polygonized even at zero pressure. \begin{figure}[ht] \includegraphics[scale=0.15]{fig2.eps} \caption{\small{Enthalpy vs pressure curves calculated for (a) (8,8), (b) (18,0) and (c) (24,0) bundled SWNTs. The pressure evolution for the (10,0) SWNT is similar to that of the (8,8) SWNT shown in (a), and the circular-oval and oval-peanut transitions are found close to 1.55 and 9.6 GPa, respectively.}} \label{Fig1-swnt} \end{figure} As expected, the critical pressures are strongly diameter-dependent \cite{benedict98,tangney05,sun04,elliot04,ImtaniPRB07,ImtaniCMS09}. Actually, many authors have pointed out that the critical pressure ($p_1$) for the circular-oval transition scales with d$_t^{-3}$, where d$_t$ is the tube diameter.\cite{tangney05} Some authors also found that the critical pressure for the oval-peanut transition ($p_2$) scales with d$_t^{-3}$. \cite{elliot04,wuPRB04} Our results for bundled SWNTs shown in Fig. \ref{Fig1-swnt} point out that the transition to the peanut geometry ($p_2$ and $p_2^{'}$) follows approximately the same law $p_2=C/d_t^3$ with $C$=4.4 nm$^3\cdot$GPa. This scaling law was observed for all studied zigzag and armchair bundled tubes. However, the transition pressure values $p_1$ and $p_1^{'}$ for the circular to intermediate phase (oval and polygonal, respectively) show a slight dependence on the inverse tube diameter, but they do not follow the d$_t^{-3}$ scaling law. We have also modeled several DWNT bundles under pressure and chose the (10,0)@(18,0) DWNT (this notation means that the (10,0) tube is inside the (18,0) tube) in order to compare its structural stability with that of its constituent SWNTs. We find a very similar behavior for the other DWNTs, such as (12,0)@(20,0) and (11,0)@(19,0). In Fig. \ref{Fig4-dwnt}a and \ref{Fig4-dwnt}b we show respectively the $p-V$ and enthalpy curves for the (10,0)@(18,0) DWNT bundle.
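Before examining the DWNT curves in detail, the SWNT scaling law above can be made concrete with a short numerical sketch (ours; the diameter formula is the standard one for an $(n,m)$ tube, and, as noted, the law is only approximate, with the largest deviation for the smaller-diameter (8,8) tube):
\begin{verbatim}
# Indicative evaluation (ours) of the collapse pressure from p2 = C/d^3,
# with the fitted constant C = 4.4 nm^3 GPa quoted above.
import math

A_CC = 0.142  # C-C bond length [nm], standard value

def diameter(n, m):
    # standard (n,m) nanotube diameter, d = a sqrt(n^2 + n m + m^2)/pi,
    # with graphene lattice constant a = sqrt(3) a_CC
    return math.sqrt(3.0) * A_CC * math.sqrt(n*n + n*m + m*m) / math.pi

C = 4.4  # nm^3 GPa
for n, m in [(8, 8), (18, 0), (24, 0)]:
    d = diameter(n, m)
    print(f"({n},{m}): d_t = {d:.2f} nm, p2 ~ C/d^3 = {C / d**3:.2f} GPa")
\end{verbatim}
For the (18,0) and (24,0) bundles this reproduces the transition pressures quoted above to within the quoted precision.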
Upon compressing the original circular structure of the DWNT, we find four distinct configurations, as shown in Fig. \ref{Fig4-dwnt}a. The $A$ configuration is the non-deformed structure with a circular shape for both the inner and outer tubes. The $B$ configuration is characterized by the polygonization of the outer tube, while the inner tube maintains its circular cross section. The $C$ configuration corresponds to a polygonal outer tube and an oval inner tube. Finally, the $D$ configuration stands for the case where the outer and inner tubes display oval/peanut cross-sectional shapes. \begin{figure} \centering \includegraphics[scale=0.6]{fig3.eps} \caption{\small{p-V curves (a) and enthalpy functions (b) for the pressure-induced sequence of phase transitions in the (10,0)@(18,0) DWNT. Different cross-sectional shapes correspond to different colors in the plots and are shown in the insets.}}\label{Fig4-dwnt} \end{figure} The polygonization of the outer tube ($A$-$B$ phase transition) in the (10,0)@(18,0) DWNT bundle is observed at 0.81 GPa, which is lower than the value (1.2 GPa) calculated for the same transition in the (18,0) SWNT bundle. This result suggests that the presence of the inner tube enhances the outer-outer tube interactions within the bundle, thus reducing the critical pressure for polygonization. Upon increasing the pressure, a $B$-$C$ transformation occurs at 6.2 $\pm$ 0.2 GPa, at which the inner tube ovalizes and the outer tube reaches a higher degree of polygonization. We are able to increase the pressure in the $C$ phase up to 13 GPa without inducing any strong deformation. However, from the enthalpy analysis we conclude that the $D$ phase is more stable than the $C$ phase for the whole range of investigated pressures. Thus the $C$ configuration is actually metastable. Upon further increasing the pressure, the $D$ phase reaches the same enthalpy as the $B$ phase around 5.7 $\pm$ 0.2 GPa. The $D$ phase is also metastable even at lower pressures, down to 5.5 GPa. In summary, the actual sequence of phase transitions for the (10,0)@(18,0) DWNT bundle is $A$ $\rightarrow$ $B$ at 0.81 GPa and $B$ $\rightarrow$ $D$ at 5.7 $\pm$ 0.2 GPa. The latter value contrasts with the 1.4 GPa calculated collapse pressure of the (18,0) SWNT outer tube, clearly pointing to the structural support given by the inner tube, as already proposed by some authors \cite{yangAPL06,gadagkar06}. We conclude that the $C$ configuration (polygonized outer tube and ovalized inner tube) is unstable compared with the configuration in which both tubes are collapsed, which leads us to propose that the ovalization of the inner tube triggers the overall collapse of the DWNT. For isolated DWNTs, Ye et al. \cite{YePRB2005} suggest that the DWNT transition could be essentially determined by the inner tube transition. They also found that the collapse of DWNTs follows almost the same d$_t^{-3}$ law when the inner tube diameter is used, which means that p$_c$ for a DWNT is the same as expected for its inner tube taken as an SWNT \cite{YePRB2005}. In this sense, our results agree that the inner tube ovalization determines the critical pressure for the collapse of DWNTs. However, there are clear pressure screening effects from the outer tube on the behavior of the inner tube. First, the critical pressure for the expected circular-oval transition of the inner tube is increased to 5.7 $\pm$ 0.2 GPa when it is encapsulated in the DWNT; this value is considerably higher than the 1.55 GPa observed for the (10,0) SWNT bundle.
Second, the full collapse of the inner tube (oval-peanut transition) is continuous in the DWNT case, differently from the SWNT case, in which there is a discontinuous change in the volume (around 9.6 GPa). For the DWNT case, this transition is marked by a change in compressibility (which is proportional to the slope of the $p-V$ plot) at around 10 GPa, clearly seen from Fig. \ref{Fig4-dwnt}. We found a similar behavior for the (12,0)@(20,0) DWNT bundle, where the transition $B$ $\rightarrow$ $D$ is predicted to occur at 5.5 $\pm$ 0.1 GPa and the change in bulk modulus takes place around 5.9 GPa, close to the value expected for the (12,0) SWNT (5.4 GPa). \subsection{Phonon Density of States evolution under pressure} \begin{figure}[ht] \centering \includegraphics[scale=0.19]{fig4.eps} \caption{\small{Phonon density of states (DOS) projected on the RBM (a) and tangential (b) probing vectors for the (18,0) SWNT. Some snapshots of the bundle structure evolution can be followed in (c).}}\label{Fig18-ph} \end{figure} Fig. \ref{Fig18-ph} shows the phonon projected density of states (PDOS) calculated for (18,0) SWNT bundles. The PDOS for the RBM and G$_z$ modes are shown in red (a) and blue (b), respectively. From the figure, we can follow the RBM and G$_z$ evolutions as the pressure is increased, and the corresponding snapshots of the bundle structural evolution are shown in the right panels. First, we observe the RBM mode of the (18,0) SWNT in the low frequency region, centered at around 150 cm$^{-1}$ in the circular phase (0.0 GPa), which gradually shifts to higher frequencies as the pressure is increased. The pressure coefficient of this mode is 7.0 $\pm$ 2.5 cm$^{-1}$/GPa, in good agreement with experiments \cite{caillier08,merlenPSS06,peters00}. After collapse, the contributions to the radial mode spread over the low frequency region, which is equivalent to saying that the RBM is no longer a single, well-defined mode, in good correspondence with the experimental difficulty in observing the RBM after some of the suggested pressure phase transitions. \cite{peters00,karmakar03,sandler03,elliot04,freire07,Proctor06} The G$_z$ band behavior is shown in the right panel. We clearly observe the splitting of this mode when polygonization takes place. After collapse, we clearly see a sudden jump to lower frequencies and an intensity enhancement of the lower-frequency contribution, while the higher-frequency one continues to upshift. We studied several SWNT bundles and similar results were observed. For the (10,0) SWNT bundle, the spread of the radial contribution was observed at the circular-oval transition, i.e., before the collapse of the tube. The split of the G$_z$ component due to polygonization is also observed for the (24,0) SWNT bundle and, evidently, not observed for the (10,0) SWNT. Furthermore, the sudden red-shift of the G$_z$ component after collapse is also observed for the (24,0) SWNT. The phonon evolution with pressure for the (10,0)@(18,0) DWNT bundle can be observed in Fig. \ref{Fig1810-ph}. The left panels clearly show, in the low-frequency region, two distinct peaks around 150 cm$^{-1}$ and 325 cm$^{-1}$, corresponding to the RBM modes of the outer and inner tubes, respectively. The peaks are slightly shifted from those of the corresponding SWNTs in the circular conformation, as expected from the intertube coupling \cite{PopovPRB02}. It is interesting to note the evolution of both RBM peaks with pressure, where we clearly observe that the RBM frequency of the outer tube shifts much faster than the RBM frequency of the inner tube before collapse.
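As a methodological aside, the projection procedure of Sec. II behind these PDOS curves can be sketched as follows (an illustrative implementation of ours, with placeholder inputs; the Lorentzian broadening is a choice made here only for plotting purposes):
\begin{verbatim}
# Sketch (ours) of the RBM-projected phonon DOS: each eigenvector is
# projected onto the normalized radial displacement field of the tube and
# the squared overlaps are accumulated into a broadened density of states.
import numpy as np

def rbm_pdos(freqs, eigvecs, positions, center, omega_grid, gamma=2.0):
    # freqs: (n_modes,) in cm^-1; eigvecs: (n_modes, N, 3);
    # positions: (N, 3) equilibrium coordinates; center: point on tube axis
    radial = positions - center
    radial[:, 2] = 0.0                      # keep only the xy (radial) part
    radial /= np.linalg.norm(radial, axis=1)[:, None]
    probe = radial / np.sqrt(len(radial))   # normalized radial probe field
    pdos = np.zeros_like(omega_grid)
    for e, f in zip(eigvecs, freqs):
        w = np.dot(e.ravel(), probe.ravel())**2   # squared overlap
        pdos += w * (gamma / np.pi) / ((omega_grid - f)**2 + gamma**2)
    return pdos
\end{verbatim}
For the G$_z$ projection, the probe field is instead the axial unit vector, with opposite phases on the two sub-lattices, as described in Sec. II.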
This faster shift is clear evidence of a screening effect on the inner tube by the outer tube. Pressure screening effects on the RBM and G band in DWNTs have been observed in several Raman experiments \cite{christofilos07,aguiar11,puech04,PuechPSS04,arvanitidis05}. Our calculations of the RBM and G$_z$ confirm that this pressure screening effect occurs well before any structural collapse. In the (b) panels of Fig. \ref{Fig1810-ph}, we can follow the G$_z$ component for the outer and inner tubes. It should be noted here that the G$_z$ component of the inner (10,0) tube is considerably overestimated, lying at about 1850 cm$^{-1}$, compared with the outer (18,0) one, which is located at around 1650 cm$^{-1}$. However, a qualitative analysis can still be performed. It is interesting to note that pressure screening effects are also observed in the tangential components, as we can see that the outer tube G$_z$ components shift faster than the inner ones. As observed for SWNTs, the transition of the outer tube to a polygonized phase at 0.81 GPa is clearly marked by a splitting of the G$_z$ band. After collapse, the PDOS spreads over a large frequency range for the radial and tangential contributions, but it is possible to see that the lowest frequency components of G$_z$ are suddenly shifted to lower frequencies, as in the case of the SWNT bundles. \begin{figure}[ht] \centering \includegraphics[scale=0.50]{fig5.eps} \caption{\small{Phonon density of states (DOS) projected on the RBM (a) and tangential (b) probing vectors for the (10,0)@(18,0) DWNT. Some snapshots of the bundle evolution can be followed in (c).}}\label{Fig1810-ph} \end{figure} In Fig. \ref{rAz}, we plot the z-displacement, i.e., along the tube axis direction, of some eigenvectors as a function of the coordinate angle $\theta$ defined from the center of the (18,0) tube. First, we have identified the $A_{1g}$ mode centered around 1721.4 cm$^{-1}$ for the structure at -1.0 GPa (Fig. \ref{rAz}a). This is the mode which mainly contributes to the v-PDOS, as we can see in Fig. \ref{rAz}b. As the pressure is increased up to 1.4 GPa, the (18,0) cross section is polygonized, and we observe a split of the G$_z$ PDOS contribution. Analysis of the phonon displacements (Fig. \ref{rAz}a) shows that the higher frequency component is mainly due to modes which are localized on the high-curvature regions (vertices of the hexagons). The modes centered at 1771.3, 1780.7 and 1786.1 cm$^{-1}$ have the same symmetry, and their displacement maxima are located at $\theta$ equal to $\pm$30$^\circ$, $\pm$90$^\circ$ and $\pm$150$^\circ$, which are the vertices of the polygonized shape. After the collapse of the (18,0) SWNT bundle, we calculated the phonon eigenvectors for the collapsed structure at 2.2 GPa. In Fig. \ref{rAz}a, we show the modes centered at 1800.6, 1803.2 and 1815.4 cm$^{-1}$, which mainly contribute to the higher-frequency peak in the PDOS. We clearly see that the high-frequency contribution to the PDOS at 2.2 GPa comes from modes that are localized in the highly curved regions of the peanut-shaped tube ($\theta$ $\sim$ $\pm$180$^\circ$ and $\theta$ $\sim$ 0$^\circ$). Consequently, the low-frequency and more intense peak of the PDOS is due to vibration modes which are localized in the flat regions of the peanut-shaped structure. \begin{figure}[ht!] \centering \includegraphics[scale=0.65]{fig6.eps} \caption{\small{(a) Amplitude of the z-displacements of the (18,0) SWNT bundle as a function of the coordinate angle before the collapse (-1.0 GPa and 1.4 GPa) and after the collapse (2.2 GPa).
From bottom to top: Eigenvectors for the $A_{1g}$ mode centered at 1721.4 cm$^{-1}$ (black) at -1.0 GPa; for the modes centered at 1771.3, 1780.7 and 1786.1 cm$^{-1}$ (red) at 1.4 GPa; and for the modes centered at 1800.6, 1803.2 and 1815.4 cm$^{-1}$ (blue) at 2.2 GPa. (b) v-PDOS projected in the z direction for the (18,0) SWNT at -1.0, 1.4 and 2.2 GPa. Arrows mark the centers of the modes whose eigenvectors are plotted in (a).}}\label{rAz} \end{figure} From an experimental point of view, it has been recently proposed that a saturation, or even a negative pressure slope, of the Raman G$^+$ component for SWNTs and DWNTs could be associated with the collapse of the nanotubes \cite{caillier08,yao08,aguiar11}. Furthermore, after the collapse this band follows the graphite pressure evolution, as observed in Fig. \ref{figexp}, with a smaller pressure coefficient \cite{aguiar11,Hanfland89}. This was also observed in the PDOS of our SWNT bundle calculations (cf. Fig. \ref{Fig18-ph}b). Our calculations confirm this hypothesis and identify the atomistic origin of the low-frequency bands. Collapse of the nanotube cross-section leads to flattened regions, where the reduced stress in the C-C tangential bonds reduces the $G_z$ frequency. However, as we observe in our calculations, a small high-frequency component of the G$_z$ band, arising from the high-curvature regions, remains after the collapse. Since the flattened portion of the tubes is much larger than the curved portion, we expect that the low-frequency shift of the G band is dominant in the experiments. The experimentally observed red-shift of the G-band is then explained by our calculations as related to the dominance of the flat regions of the collapsed phase in the Raman signal. This is, in addition, perfectly coherent with the fact that in the collapsed state the pressure evolution of the G-band clearly matches (see Fig. \ref{figexp}) that of graphite or graphene under triaxial compression \cite{Hanfland89,nicolle11}. \begin{figure}[ht] \centering \includegraphics[scale=1.10]{fig_exp.eps} \caption{\small{Experimental pressure evolution of the Raman shift of the G$^+$ component for SWNTs, DWNTs and graphite. Adapted from Refs. \cite{caillier08,aguiar11,Hanfland89,nicolle11}.}}\label{figexp} \end{figure} \section{Conclusions} We studied the high-pressure structural modifications and phonon-mode shifts of single- and double-wall carbon nanotube bundles using zero-temperature enthalpy minimization with classical interatomic potentials. We confirmed that the structural transformation of SWNTs in bundles from circular to collapsed cross-section under hydrostatic pressure is strongly diameter-dependent. It is also evident that the transition from polygonized to collapsed cross section for large diameter tubes has a first-order character, with a large bundle volume discontinuity, whereas for small diameters the transition is continuous, with intermediate oval or racetrack cross sections. For DWNT bundles, pressure screening effects were observed on the inner tube, which keeps its circular cross section up to higher pressures than would be expected from the SWNT behavior. Furthermore, the inner tube acts as a structural support for the outer tube. The phonon calculations also reveal screening effects in the radial and tangential components as the pressure increases. Polygonization of the (18,0) SWNT bundle is characterized by a split of the G$_z$ PDOS contribution, which is mainly due to localized modes on the highly curved regions of the polygonal shape.
After the collapse transition, the tangential modes associated with the flat regions of the tubes are suddenly shifted to lower frequencies, while the contribution of the highly curved regions of the peanut shape is less pronounced but still observable in the G$_z$ PDOS. The experimental observation of the G-band red-shift at nanotube collapse lets us conclude that the flat regions of collapsed tubes dominate their Raman signal. Moreover, this provides a coherent explanation of the lower pressure slope ($\sim$ 4 cm$^{-1}$/GPa) of the G-band observed after nanotube collapse. Consequently, the pressure slope could be a useful means for identifying the cross-sectional shape state of SWNTs and DWNTs. \section*{Acknowledgements} The authors acknowledge the CAPES-COFECUB collaboration program (grant 608) for the partial support of this research. \bibliographystyle{unsrt}
\section{Introduction} Over the years, the field of social choice theory has focused more and more on decision making over combinatorial domains. This involves either \textit{multi-winner elections} (for the formation of a committee) or elections for a set of issues that need to be decided upon simultaneously, often referred to as {\it multiple referenda}. As an example of the latter, think of a local community that needs to decide on possible facilities or services to be established, based on the currently available budget. In this work, we focus on approval voting as a means for collective decision making. Approval voting offers a simple and easy to use format for running elections on multiple issues with binary domains, by having each voter express an approval or disapproval separately for each issue. There is already a range of voting rules that are based on approval ballots, including the classic Minisum solution, as well as more recently introduced methods (see Related Work). However, the rules most commonly studied for approval voting are applicable only when the issues under consideration are independent. As soon as the voters exhibit preferential dependencies between the issues, we have more challenges to handle. This is not uncommon in practical scenarios: a resident of a municipality may wish to support public project A only if public project B is also implemented (which she evaluates as more important); a group of friends may want to go to a certain movie theater only if they decide to have dinner at a nearby location; a faculty member may want to vote in favor of hiring a new colleague only if the other new hires have a different research expertise. It is rather obvious that voting separately for each issue cannot provide a good solution in these settings. Instead, voters should be allowed to express dependencies among issues. Consequently, several approaches have been suggested to take into account preferential dependencies, see e.g., \cite{LX16}. Nevertheless, the majority of these works are suitable for rules where voters are required to express a ranking over the set of issues or have a numerical representation of their preferences instead of approval-based preferences. Barrot and Lang [\citeyear{BL16}] introduced a framework for expressing dependencies, tailored for approval voting elections. In particular, the notion of a conditional approval ballot was defined and new voting rules were introduced, generalizing some of the known rules in the standard setting of approval voting. Among the properties that were studied, it was also shown that a higher level of expressiveness implies higher computational complexity. Namely, the Minisum solution is efficiently computable in the standard setting, but its generalization, referred to as {\it Conditional Minisum}, was proved to be $\mathbf{NP}$-hard. Beyond $\mathbf{NP}$-hardness, computational properties were not the main focus of \cite{BL16}, and therefore it has remained open whether the problem admits approximation algorithms with favorable guarantees or even exact algorithms for special cases. \paragraph{Contribution.} We focus on algorithmic aspects of the Conditional Minisum voting rule for multi-issue elections with preferential dependencies. Our main goal is to enhance our understanding of the complexity implications of conditional voting for a rule that is known to be efficiently computable in the absence of dependencies.
In Section \ref{sec:approx}, we provide the first multiplicative approximation algorithms for the problem, under the condition that for every voter, each issue can depend on at most a constant number of other issues. In a convenient graph-theoretic representation, this corresponds to voters with a dependency graph of constant maximum in-degree. Our family of approximation algorithms achieves a ratio that degrades smoothly as both the in-degree and the optimal cost grow larger. Interestingly, our algorithms are based on a reduction to \textsc{min sat}, an optimization version of \textsc{sat} that has rarely been applied in computational social choice (in contrast to \textsc{max sat}). Moving on, in Section \ref{sec:opt}, we focus on special cases that are optimally solvable in polynomial time. For this, we stick to the (hard) case of maximum in-degree one. Our main insight is that one can draw conclusions by looking at the {\it global} dependency graph (the union of the dependencies of all voters). Restrictions on the structure of the global graph allow us to identify several cases (e.g., trees, cycles, and more generally graphs with treewidth at most 2) where we can have optimal efficient algorithms. Hence we conclude with a positive confirmation that Conditional Minisum can combine enhanced expressiveness with efficient computation (for exact or approximate solutions) in many well-motivated scenarios. \paragraph{Related work.} Apart from the classic Minisum solution, many other approval voting rules have been considered, such as the Minimax solution \cite{BKS07}, Satisfaction Approval Voting \cite{BK10}, and other families based on Weighted Averaging aggregation \cite{Amanatidis+15}. For surveys on the desirable properties of approval voting, we refer to \cite{BF05} and \cite{Kilgour10}. None of these rules, however, allows voters to express dependencies. The first work that exclusively took this direction, and is most closely related to ours, is \cite{BL16}. Namely, three voting rules were proposed for incorporating such dependencies (including the Conditional Minisum rule that we consider here), and some of their properties were studied, mainly regarding the satisfaction of certain axioms. Even if one moves away from approval-based elections, the presence of preferential dependencies remains a major challenge when voting over combinatorial domains. Several methodologies have been considered, achieving various levels of trade-offs between expressiveness and efficient computation. Some representative examples include, among others, sequential voting \cite{LX09}, \cite{Airiau+11}, \cite{dallaPozza+11}, \cite{CX12}, compact representation languages \cite{Bou+04}, \cite{LVK10}, \cite{GPQ08}, or completion principles for partial preferences \cite{LL09}, \cite{CL12}. An extended survey on voting in combinatorial domains can be found in \cite{LX16}. See also \cite{Chevaleyre08} for an informative work on both the proposed solution concepts and their applications in AI. \section{Formal Background} \label{sec:prelims} Let $I = \{I_1,\dots,I_m\}$ be a set of $m$ issues, each of them associated with a finite domain $D_i$. We only examine the case of binary domains, so that for every $i\in[m],$ $D_i=\{d_i,\overline{d_i}\}$. Here $d_i$ depicts a ballot in favor of the issue, whereas $\overline{d_i}$ is against it. An {\it{outcome}} is an assignment of a value for every issue, and let $D = D_1 \times D_2 \times \dots \times D_m$ be the set of all possible outcomes.
Let also $V = \{1,\dots, n\}$ be a group of $n$ voters who have to decide on a common outcome in $D$. \paragraph{Voting format.} To express dependencies among issues, we mostly follow the format described in \cite{BL16}. Each voter $i\in[n]$ is associated with a directed graph $G_i=(I,E_i)$, called a {\it{dependency graph}}, whose vertex set coincides with the set of issues. A directed edge $(I_k, I_j)$ means that issue $I_j$ is affected by $I_k$. We explain briefly how voters submit their preferences, before giving the formal definition. For an issue $I_j$ with no predecessors in $G_i$ (its in-degree is 0), voter $i$ is allowed to cast an unconditional approval ballot, stating the outcomes of $I_j$ that are approved by her. She can be satisfied with one, with both, or with none of the outcomes in $D_j$. In the case that issue $I_j$ has a positive in-degree in $G_i$, let $\{I_{j_1}, I_{j_2}, \dots, I_{j_k} \}\subseteq I$ be all its direct predecessors (also called in-neighbors). Voter $i$ then needs to specify all the combinations that she approves, in the form $\{t:d\}$, where $d\in D_j$, and $t \in D_{j_1} \times D_{j_2} \times \dots \times D_{j_k}$. Every combination $\{t:d\}$ signifies the satisfaction of voter $i$ with respect to issue $I_j$, when all outcomes implied by $t$ have been realized and the outcome of $I_j$ is $d$. Both cases of zero and positive in-degree for an issue can be unified in the following definition. \begin{definition} A conditional approval ballot of a voter $i$ over issues $I=\{I_1,\dots,I_m\}$ with binary domains, is a pair $B_i=\langle G_i,\{A_j, j\in [m]\} \rangle$, where $G_i$ is the dependency graph of voter $i$, and for each issue $I_j$, $A_j$ is a set of conditional approval statements of the form $\{t:d\}$, where $d \in D_j$, $t\in \prod_{k\in N^{\unaryminus}_{i}(I_j)} D_k$, and $N^{\unaryminus}_{i}(I_j)$ is the (possibly empty) set of direct predecessors of $I_j$ in $G_i$. \end{definition} To simplify the presentation of a conditional ballot, when a voter has expressed a common dependency for the two outcomes of an issue, we can group them together and write $\{t:\{d_j, \overline{d_j}\}\}$, instead of $\{t:d_j\}$, $\{t:\overline{d_j}\}$. Additionally, for every issue $I_j$ with in-degree $0$ for some voter $i$, a vote in favor of $d_j$ will be written simply as $\{d_j\}$, since $N^{\unaryminus}_{i}(I_j) = \emptyset$. An important quantity for parameterizing families of instances in the sequel is the maximum in-degree of each graph $G_i$. Namely, for a voter $i$ let $\Delta_i=\max\{|N^{\unaryminus}_{i}(I_j)|, j \in [m]\}$. Given a voter $i$ with conditional ballot $B_i$, we will denote by $B_i^j$ the restriction (i.e., projection) of her ballot on issue $I_j$. Moreover, a \textit{conditional approval voting profile\footnote{When $\Delta_i$ is large for some voter $i$, the size of a profile might become exponential. Alternatively, one could aim for a succinct representation, e.g., via propositional formulas. We do not examine further this issue, since we consider instances with constant in-degree.}} $P$ (often referred to simply as a \textit{profile} for brevity) is a tuple $(I,V,B)$ where $B=( B_1, B_2, \dots, B_n)$.
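For concreteness, such ballots admit a direct computational representation; the short sketch below (in Python, with type and field names of our own choosing, not taken from the literature) stores, for each issue, its in-neighbors in $G_i$ together with the set of approved statements $\{t:d\}$:
\begin{verbatim}
# Illustrative encoding of a conditional approval ballot (names are ours).
# Issues are indexed 0..m-1; outcomes are booleans (True = d_j, False = its
# negation). For each issue j, a ballot stores:
#   - the tuple of in-neighbors of j in the voter's dependency graph, and
#   - the set of approved statements {t : d}, encoded as pairs (t, d),
#     where t is the tuple of outcomes on the in-neighbors
#     (the empty tuple when the in-degree of j is 0).
from typing import Dict, Set, Tuple

Statement = Tuple[Tuple[bool, ...], bool]          # a pair (t, d)
Ballot = Dict[int, Tuple[Tuple[int, ...], Set[Statement]]]
\end{verbatim}
Under this encoding, an unconditional approval of $d_j$ is simply the pair \texttt{((), True)} stored for issue $j$.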
\begin{example} \normalfont \label{example:ex1} As an illustration, we consider 3 co-authors of some joint research, several weeks before a conference submission deadline, who have to decide on $3$ issues: whether they will {\textit{work}} more before the submission on obtaining new theorems, whether they have enough material to split their work into two, or even \textit{multiple}, papers, and whether they should invite a new {\textit{co-author}}, whose insights could help improve their results. The first author insists on more work before the submission; additionally, he approves the choice of two submissions if and only if they work more on new theorems. Furthermore, he does not want to have a new co-author if and only if they split their work. The second author does not have time for more work before the deadline; she has no strong opinion on multiple submissions and approves both alternatives; and she agrees with inviting a new co-author only if they decide both to work more for new results and to submit a single paper. The last author also expresses a dependence of inviting a new co-author on the other two issues, as described below. More formally, let $I=\{I_1,I_2,I_3\}$ be the aforementioned issues, and for $i=1, 2, 3$, let $G_i=(I,E_i)$ be the dependency graph of voter $i$, so that $E_1= \{(I_1,I_2),(I_2,I_3)\}$, $E_2 = E_3 = \{(I_1,I_3),(I_2,I_3)\}$. Let also $D_1=\{w,\overline{w}\},$ $ D_2=\{m,\overline{m}\},D_3=\{c,\overline{c}\}$. The voters' preferences are: \begin{table}[H] \centering \begin{tabular}{c|c|c} \textbf{voter 1} & \textbf{voter 2} & \textbf{voter 3} \\ \hline $w$ & $\{\overline{w},m,\overline{m}\}$ & $\{w,m\}$ \\ $\overline{w}:\overline{m}$ & $wm:\overline{c}$ & $wm:\{c,\overline{c}\}$ \\ $w:m$ & $\overline{w}m:\overline{c}$ & $\overline{w}m:\{c,\overline{c}\}$ \\ $m:\overline{c}$ & $w\overline{m}:c$ & $w\overline{m}:\{c,\overline{c}\}$ \\ $\overline{m}:c$ & $\overline{w}\overline{m}:\overline{c}$ & \hspace{-0.667cm}$\overline{w}\overline{m}:\overline{c}$ \end{tabular} \end{table} \end{example} To measure the dissatisfaction of a voter given an assignment of values to all the issues, we use the following generalization of the Hamming distance. \begin{definition} Given an outcome $s=(s_1,s_2,\dots, s_m) \in D$, we say that voter $i$ is dissatisfied with issue $I_j$ if the projection of $s$ on $N^{\unaryminus}_i(I_j)$, say $t$, satisfies $\{t : s_j\}\notin B_i^j$. We denote by $\delta_i(s)$ the total number of issues that dissatisfy voter $i$. \end{definition} Coming back to Example {\ref{example:ex1}}, the values of $\delta_i(s)$ follow. \begin{table}[H] \begin{tabular}{c|cccccccc} \multicolumn{1}{l|}{$\delta_{i}(\cdot)$} & \multicolumn{1}{l}{\textit{ \scalebox{.72}{{$wmc$}}}} & \multicolumn{1}{l}{ \scalebox{.72}{{$wm\overline{c}$}}} & \scalebox{.72}{{$w\overline{m}c$}} & \scalebox{.72}{{$w\overline{m}\overline{c}$}} & \scalebox{.72}{{$\overline{w}mc$}} & \scalebox{.72}{{$\overline{w}m\overline{c}$}} & \scalebox{.72}{{$\overline{w}\overline{m}c$}} & \scalebox{.72}{{$\overline{w}\overline{m}\overline{c}$}} \\ \hline \scalebox{.74}{voter $1$} & 1 & 0 & 1 & 2 & 3 & 2 & 1 & 2 \\ \scalebox{.74}{voter $2$} & 2 & 1 & 1 & 2 & 1 & 0 & 1 & 0 \\ \scalebox{.74}{voter $3$} & 0 & 0 & 1 & 1 & 1 & 1 & 3 & 2 \end{tabular} \end{table} Finally, even though there is a similarity between CP-nets and conditional ballots, Barrot and Lang [\citeyear{BL16}] highlighted that they induce different semantics and are incomparable.
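Using the encoding sketched above, the table of Example \ref{example:ex1} can be verified by direct enumeration; the following sketch (ours, for illustration; feasible here since there are only $2^3=8$ outcomes) computes $\delta_i(s)$ for each voter and the outcome minimizing the total dissatisfaction:
\begin{verbatim}
from itertools import product

def dissatisfaction(ballot, outcome):
    # delta_i(s): number of issues on which the voter is dissatisfied,
    # following the definition above.
    return sum(1 for j, (parents, approved) in ballot.items()
               if (tuple(outcome[k] for k in parents), outcome[j])
               not in approved)

W, M, C = 0, 1, 2  # issues: work more, multiple papers, new co-author
voter1 = {W: ((), {((), True)}),
          M: ((W,), {((True,), True), ((False,), False)}),
          C: ((M,), {((True,), False), ((False,), True)})}
voter2 = {W: ((), {((), False)}),
          M: ((), {((), True), ((), False)}),
          C: ((W, M), {((True, True), False), ((False, True), False),
                       ((True, False), True), ((False, False), False)})}
voter3 = {W: ((), {((), True)}),
          M: ((), {((), True)}),
          C: ((W, M), {(t, d) for t in product((True, False), repeat=2)
                              for d in (True, False)
                              if not (t == (False, False) and d)})}

profile = [voter1, voter2, voter3]
best = min(product((True, False), repeat=3),
           key=lambda s: sum(dissatisfaction(b, s) for b in profile))
print(best, sum(dissatisfaction(b, best) for b in profile))
# prints (True, True, False) with total 1, i.e. the outcome w m cbar
\end{verbatim}
The minimizing outcome $wm\overline{c}$, with total dissatisfaction 1, anticipates the voting rule defined next.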
\paragraph{Voting rule.} In this work, we study a generalization of the classic Minisum solution in the context of conditional approval voting. We refer to this rule as {\textit{Conditional Minisum}} (\textsc{cms}), and it outputs the outcome that minimizes the total number of dissatisfactions over all voters ($wm\overline{c}$ for the profile presented in Example \ref{example:ex1}). Formally, the algorithmic problem that our work deals with is as follows. \begin{table}[H] \centering \begin{tabular}{lp{6.5cm}} \toprule \multicolumn{2}{c}{\textsc{conditional minisum (cms)} } \\ \midrule \textbf{Given:} & A voting profile $P$ with $m$ binary issues and $n$ voters casting conditional approval ballots.\\ \textbf{Output:} & A boolean assignment $s^* = (s_1^*,\dots, s_m^*)$ to all issues that achieves $\min_{s\in D}\sum_{i \in [n]} \delta_i(s)$.\\ \bottomrule \end{tabular} \end{table} \section{Approximation Algorithms} \label{sec:approx} It is well known that a Minisum solution can be efficiently computed when there are no dependencies \cite{BKS07}. In contrast to this, \textsc{cms}\ is $\mathbf{NP}$-hard even when there is only a single dependency per voter, i.e., when every voter's dependency graph has just a single edge \cite{BL16}. Given that hardness result, it is natural to resort to the framework of approximation algorithms. The only known result from this perspective is an algorithm with a {\emph {differential}} approximation ratio of $4.34/(m\sum_{j\in{I}} 2^{|N^{\unaryminus}(j)|}+4.34)$ for the case of a common acyclic dependency graph, so that for each voter $i$ and issue $j$, $N^{\unaryminus}(j)=N^{\unaryminus}_i(j)$ \cite{BL16}. However, differential approximations (we refer to \cite{DGP98} for the definition) form a less typical approach in the field of approximation algorithms. Instead, we focus on the more standard framework of {\it multiplicative} approximation algorithms, as treated also in common textbooks \cite{Vazirani03,WS11}. We say that an algorithm for a minimization problem achieves a multiplicative ratio of $\alpha \geq 1$ if, for every instance, it produces a solution with cost at most $\alpha$ times the optimal one. We stress that a differential approximation ratio for minimization problems does not in general imply any multiplicative approximation ratio \cite{BP03}. Our main contribution in this section is the first class of multiplicative approximation algorithms for \textsc{cms}\ under the condition of bounded in-degree in every voter's dependency graph. To this end, we make use of approximation algorithms for the \textsc{min $k$-sat} problem, a minimization version of \textsc{sat}, where we are given a set of clauses, each with at most $k$ literals, and we search for a boolean assignment so as to minimize the total number of satisfied clauses. Interestingly, minimization versions of \textsc{sat} have rarely been applied in the context of computational social choice, see e.g., \cite{LMX18}. The use of \textsc{max sat} is much more common, but for the case of \textsc{cms}, it does not seem convenient to exploit algorithms for maximization versions of \textsc{sat} and follow an approach analogous to the proof of Theorem \ref{thm:approx-msat} below. We defer further discussion to the full version of our work. Before we proceed, it is important to stress that from the proof of Theorem 2 in \cite{MP21}, we can deduce that it is $\mathbf{NP}$-hard to achieve any bounded approximation for \textsc{cms}\ even if $\Delta_i\leq 1$ for every voter $i$.
Hence, any polynomial time approximation algorithm with bounded multiplicative ratio could only be possible under further assumptions. To that end, we suggest using a lower bound on the optimal cost as a further condition. The reason is that the hard cases, as induced by \cite{MP21}, are instances where the optimal solution has zero cost (i.e., all voters can be made satisfied w.r.t. all issues). Hence, the intuition is that as the optimal cost gets higher, i.e., when no solution can keep the number of dissatisfactions very low, we expect that it may be easier to come close to the optimal solution and have a bounded approximation ratio. To make the above discussion more precise, we first present a result for profiles where $\Delta_i\leq 1$ for every voter $i$, such that the cost of the optimal solution is at least a fraction of $nm$. This is already a superclass of the case that was proved $\mathbf{NP}$-hard in \cite{BL16}. We later generalize to profiles of bounded $\Delta_i$. \begin{theorem} \label{thm:approx-msat} Let $\mathcal{F}$ be the family of instances where the dependency graph of every voter has maximum in-degree at most 1, and there exists a $\rho>0$ such that the optimal solution has cost at least $\frac{nm}{\rho}$. Then any $\alpha$-approximation algorithm for \textsc{min 2-sat} yields an $\alpha(1+\rho)$-approximation algorithm for the family $\mathcal{F}$ of \textsc{cms}. In particular, we can have a polynomial time $1.1037(1+\rho)$-approximation for $\mathcal{F}$. \end{theorem} \begin{proof} Consider an instance $P$ of \textsc{cms}, with $n$ voters and with the stated properties. We present a reduction to \textsc{min 2-sat} that preserves the approximation up to a factor of $(1+\rho)$. Let $I = \{I_1, \dots, I_m\}$ be the set of issues. We first create a logical formula $C_{ij}$, for every voter $i\in V$ and every issue $I_j\in I$, which indicates the cases where voter $i$ is {\it{not}} satisfied with the outcome on $I_j$. For every issue $I_j$, recall that $D_j=\{d_j,\overline{d_j}\}$ is its domain, and $x_j$ will be the corresponding boolean variable in the construction of $C_{ij}$. For this, we consider two cases. The first and easier case is when, for a voter $i$ and an issue $I_j$, $N^{\unaryminus}_{i}(I_j)=\emptyset$. All possible forms of $B_i^j$ are depicted in the first row of Table \ref{tab:no-pred}, whereas the corresponding formula is shown in the second row. \begin{table}[H] \centering \begin{tabular}{c|c|c|c|c} $B_i^j$ & $\emptyset$ & $\{d_j\}$ & $\{\overline{d_j}\}$ & $\{d_j,\overline{d_j}\}$ \\ \hline $C_{ij}$ & $x_j\vee\overline{x}_j$ & $\overline{x}_j$ & $x_j$ & $\emptyset$ \end{tabular} \caption{The formula when issue $I_j$ has no predecessor in $G_i$. \label{tab:no-pred}} \end{table} On the other hand, if $I_j$ has an in-neighbor (it can have only one by our assumptions), say $I_k\in I$, we set $C_{ij}$ equal to the disjunction of all combinations of outcomes on issues $I_j$ and $I_k$ that dissatisfy voter $i$ with respect to $I_j$. To illustrate this construction, we describe an example with $4$ voters, $2$ issues $I=\{I_k,I_j\}$, and for every voter $i$, $G_i=(I,\{(I_k,I_j)\})$. The preferences for issue $I_j$ are shown in Table \ref{tab:pred}. Namely, for $i=1,2,3,4$, the first cell in the $i$-th row depicts $B_i^j$, from which $C_{ij}$ can be obtained as the disjunction of the ticked expressions in the remainder of the $i$-th row.
\begin{table}[htbp] \centering \begin{tabular}{ C{2.7cm}|C{ 0.95cm}C{ 0.95cm}C{ 0.95cm}C{ 0.95cm} } \diagbox{\hspace{0.8cm}{\scalebox{.8}{$B_i^j$}}}{\hspace{0.8cm} } & {\scalebox{.8}{ $(x_k\wedge x_j)$ }}& {\scalebox{.8}{$(x_k \wedge \overline{x}_j )$}} &{\scalebox{.8}{ $( \overline{x}_k \wedge x_j )$}} &{\scalebox{.8}{ $( \overline{x}_k \wedge \overline{x}_j )$}} \\ \hline $\emptyset$ & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline $\{d_k:d_j\}$ & & $\checkmark$ & \checkmark & \checkmark \\\hline \begin{tabular}{@{}c@{}}$\{d_k:d_j\}$,\\ $\{d_k:\overline{d_j}\}$\end{tabular} & & & \checkmark & \checkmark \\\hline \begin{tabular}{@{}c@{}}$\{d_k:d_j\},$ \\ $\{\overline{d_k}:d_j\}, \{d_k:\overline{d_j}\}$\end{tabular} & & & & \checkmark \end{tabular} \caption{For $i=1,2,3,4$ the formula $C_{ij}$ is the disjunction of the ticked expressions in the $i$-th row.} \label{tab:pred} \end{table} \begin{claim} Considering an outcome $(s_1,\dots, s_m)$ for the issues and the corresponding assignment to the variables, voter $i$ is dissatisfied with $I_j$ if and only if the formula $C_{ij}$ is true. \end{claim} The constructed formula $C_{ij}$ is in DNF. To continue, we will need to convert it to CNF, which is easy to do given its small size, as per the following lemma. Its proof (based on a case analysis), along with the proofs of some subsequent results, has been omitted due to space constraints. \begin{lemma} \label{lem:DNFtoCNF} The formula $C_{ij}$ for each voter $i\in V$, and each issue $I_j\in I$, can be written in CNF with at most 2 clauses, and where each clause contains at most 2 literals. \end{lemma} Using Lemma \ref{lem:DNFtoCNF} to convert each $C_{ij}$ to CNF, we can now create a \textsc{min 2-sat} instance $P'$ given by the multiset\footnote{Some clauses may happen to appear more than once in the final formula, but there is no harm in keeping such duplicates.} of all clauses appearing in the $C_{ij}$'s, i.e., appearing in the formula \begin{equation} \label{msat_instance} C= \bigwedge_{i\in V, I_j \in I}C_{ij}. \end{equation} In the instance $P'$, we aim for a truth assignment minimizing the number of satisfied clauses in $C$. Hence, our construction gives rise to the following algorithm for \textsc{cms}. \begin{algorithm} \caption{\hfill {$\rhd$Input: P}\label{alg:approx-msat}} \begin{algorithmic}[1] \STATE Create $P'$ from $P$ using Lemma \ref{lem:DNFtoCNF} and Equation \eqref{msat_instance}. \STATE Run an $\alpha$-approximation of \textsc{min} \textsc{2-sat}\ on $P'$. \STATE Set the value of each issue $I_j$ in $P$ to the value of $x_j$ in $P'$. \end{algorithmic} \end{algorithm} \begin{lemma} \label{lem:opt} Let $\text{OPT}(P)$ and $\text{OPT}(P')$ be the values of the optimal solutions for the instances $P$ and $P'$ of \textsc{cms}\ and \textsc{min} \textsc{2-sat}\ respectively. When $\text{OPT}(P)\geq nm/\rho$, then $\text{OPT}(P') \leq (1+\rho)\text{OPT}(P)$. \end{lemma} To conclude the proof of Theorem \ref{thm:approx-msat}, let $\text{SOL}(P')$ be the cost of the solution to $P'$ produced in step 2 of Algorithm \ref{alg:approx-msat}, which equals the number of clauses in $C$ satisfied by the truth assignment of the $\alpha$-approximation algorithm. This corresponds to a solution for \textsc{cms}; let $\text{SOL}(P)$ be its total cost.
We note that the total number of distinct pairs $(i, j)$ for which voter $i$ is dissatisfied by issue $I_j$ can be no more than the number of satisfied clauses of $C$, since each $C_{ij}$ corresponds to a pair of a voter and an issue (and, by Lemma \ref{lem:DNFtoCNF}, even two clauses could correspond to the same pair). Hence, together with Lemma \ref{lem:opt}, we have the following implications: $$ \text{SOL}(P) \leq \text{SOL}(P') \leq \alpha\cdot \text{OPT}(P') \leq \alpha(1+\rho)\cdot \text{OPT}(P).$$ Thus every $\alpha$-approximation algorithm for \textsc{min 2-sat} yields an $\alpha(1+\rho)$-approximation algorithm for \textsc{cms}, as long as $\text{OPT}(P)\geq \frac{nm}{\rho}$. To obtain the claimed approximation ratio of Theorem \ref{thm:approx-msat}, we just use the algorithm by Avidor and Zwick [\citeyear{AZ05}], which achieves a factor of 1.1037 for \textsc{min 2-sat}. \end{proof} Suppose now that for every voter $i$, $\Delta_i\leq 2$, and there is a $\rho>0$ such that any optimal solution has cost at least $\frac{3nm}{\rho}$. If we follow the approach in the proof of Theorem \ref{thm:approx-msat}, it is simply a matter of boolean algebra to check that, in analogy to Lemma \ref{lem:DNFtoCNF}, we can write any resulting $C_{ij}$ in CNF with at most $4$ clauses, each containing at most 3 literals, for every $i\in V, I_j \in I$. We can then proceed with a lemma similar to Lemma \ref{lem:opt}, and finally use the $1.2136$-approximation algorithm for \textsc{min 3-sat} \cite{AZ05} to obtain a ratio of $1.2136(1+\rho)$. In fact the same approach can be further generalized, as long as the maximum in-degree in every voter's graph is bounded by a constant\footnote{For non-constant in-degree, the conversion from DNF to CNF in Lemma \ref{lem:DNFtoCNF} may take exponential time.} $k$. In that case, the approach of Theorem \ref{thm:approx-msat} yields, for every voter and issue, CNF formulas with at most $2^k$ clauses, each containing at most $k+1$ literals. Hence, by using the $2$-approximation algorithm for \textsc{min sat} by \cite{MR96} (it applies to \textsc{min $k$-sat} for any $k$), we have the following result\footnote{We note that in the version published in the IJCAI 2020 proceedings, the statements of Theorems 1 and 2 are incorrect, due to a mistake in Lemma 2. In the current version, we have corrected the statement of Lemma 2, and have made, in turn, the appropriate adjustments for the two theorems.}. \begin{theorem} \label{thm:cmboundeddegree} Let $\mathcal{F}'$ be the family of instances where the dependency graph of every voter has maximum in-degree at most a constant $k$, and there exists a $\rho>0$ such that the optimal solution has cost at least $\frac{nm(2^k-1)}{\rho}$. Then there is a polynomial time $2(1+\rho)$-approximation algorithm for $\mathcal{F}'$. \end{theorem} Note that as long as $\rho$ is an absolute constant, or some function of $k$ (which is still a constant, since $k$ is constant), Theorem \ref{thm:cmboundeddegree} yields a constant factor approximation. Intuitively, we have again a constant factor approximation when the number of dissatisfactions is relatively large in every feasible solution.
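To make the reduction more concrete, we close this section with a small illustrative sketch (in Python; the names and the search strategy are ours and are not part of the formal proof) of the conversion underlying Lemma \ref{lem:DNFtoCNF} for $\Delta_i\leq 1$: given the set of assignments to $(x_k, x_j)$ that dissatisfy a voter with respect to $I_j$, it finds an equivalent CNF with at most $2$ clauses of at most $2$ literals each. Following the conventions of Table \ref{tab:no-pred}, the always-dissatisfied case is encoded by a tautological clause and the never-dissatisfied case by no clause at all.
\begin{verbatim}
from itertools import combinations, product

VARS = ("x_k", "x_j")
# A literal is (variable, polarity); polarity True is the positive literal.
LITERALS = [(v, pol) for v in VARS for pol in (True, False)]

def eval_clause(clause, assignment):
    return any(assignment[v] == pol for v, pol in clause)

def eval_cnf(cnf, assignment):
    return all(eval_clause(c, assignment) for c in cnf)

def small_cnf(dissatisfying):
    """dissatisfying: set of (x_k, x_j) boolean pairs on which the voter
    is dissatisfied. Returns an equivalent CNF with at most 2 clauses
    of at most 2 literals each, as guaranteed by the lemma in the text."""
    points = list(product((False, True), repeat=2))
    if not dissatisfying:                  # never dissatisfied: no clause
        return []
    if set(dissatisfying) == set(points):  # always dissatisfied: tautology
        return [(("x_j", True), ("x_j", False))]
    clauses = [c for r in (1, 2) for c in combinations(LITERALS, r)]
    for num in (1, 2):
        for cnf in combinations(clauses, num):
            if all(eval_cnf(cnf, dict(zip(VARS, p))) == (p in dissatisfying)
                   for p in points):
                return list(cnf)
\end{verbatim}
For instance, dissatisfaction exactly on the two assignments with $x_k \neq x_j$ yields the two clauses $(x_k \vee x_j)$ and $(\overline{x}_k \vee \overline{x}_j)$; a case analysis over all $16$ subsets confirms that the search always succeeds, in line with Lemma \ref{lem:DNFtoCNF}.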
\section{Optimal Algorithms} \label{sec:opt} In the current section, we identify special cases of the problem where exact optimal solutions can be found in polynomial time. In doing this, we stick to the assumption that every voter has maximum in-degree at most $1$ in her graph. Since this already makes the problem $\mathbf{NP}$-hard, one needs to consider further restrictions that admit efficient algorithms. In our quest to define tractable cases, we realized that it is convenient to focus on the union of all the dependency graphs: \begin{definition} The global dependency graph of a profile $(I,[n],B)$ is the graph $(I,\bigcup_{i\in[n]}E_i)$, i.e., we take the union of the edges in every voter's dependency graph. \end{definition} To see how to exploit the global dependency graph, it is instructive to inspect the $\mathbf{NP}$-hardness proof for \textsc{cms}\ in \cite{BL16}. Their proof holds for instances where each dependency graph $G_i$ is acyclic, and the in-degree of every issue in $G_i$ is at most one. Examining the profiles created in that reduction, we notice that no restrictions can be stated for the form of the global dependency graph corresponding to the produced instances. Observe, for example, that an acyclic dependency graph for every voter does not necessarily lead to an acyclic global dependency graph. Furthermore, if each $G_i$ is of bounded degree, this does not imply a constant upper bound on the maximum degree of the global graph. Our insight is that it may not be only the structure of each voter's dependency graph that causes the problem's hardness, but, in addition, the absence of any structural property on the global dependency graph. Motivated by this, we investigate conditions on the global dependency graph that enable us to obtain the optimal solution in polynomial time. Our findings reveal that this is indeed feasible for certain interesting classes of graphs, as summarized in Theorem \ref{cor:opt}. We first exhibit a property that allows us to reduce the solution of certain instances to the solution of instances with smaller sets of issues. Given a directed graph $(V,E)$, the \textit{neighborhood} of a vertex $u$ is the set of its in-neighbors and out-neighbors: $N(u)=\{v\in V:(u,v)\in E \text{ or } (v,u)\in E\}$. \begin{lemma} \label{lem:basic_positive} Consider a profile $P$, where for every voter $i$, $\Delta_i\leq 1$, and let $G$ be the global dependency graph of $P$. If $G$ has a vertex $y$ with $|N(y)|\leq 2$, we can modify $P$ in polynomial time to a profile $P'$ (maintaining that every voter has maximum in-degree at most $1$) with global dependency graph $H$, such that $V(H)=V(G)\setminus \{y\}$, and \textsc{cms}\ on $P$ is reduced to optimally solving \textsc{cms}\ on $P'$. \end{lemma} \begin{proof} Fix a profile $P = (I, [n], B)$ with the aforementioned properties. For notational convenience in the proof, for every issue $x\in I$, we let $\{x_0,x_1\}$ be its domain, and recall that $B_i^x$ denotes the projection of voter $i$'s ballot on issue $x$. We will first introduce a cost function that helps us decompose the total number of disagreements caused by an assignment of values to issues. Namely, for any directed edge $(x,y)$ in the global dependency graph, and every assignment of values, say $x_i, y_j$, to these two issues, we let $c(x_i, y_j)=|\{v\in[n]: (x,y)\in E(G_v), \{x_i:y_j\} \notin B_v^y\}|$. In words, $c(x_i, y_j)$ is the number of voters who have expressed a conditional vote for issue $y$, dependent only on $x$, and at the same time are dissatisfied with issue $y$ by the assignment $(x_i, y_j)$. In addition, we set $c(y_j)=| \{v \in [n]: N^{\unaryminus}_v(y) = \emptyset, y_j \notin B_v^y\} |$. Thus, $c(y_j)$ is the number of voters who have expressed an unconditional vote on $y$ and are dissatisfied with the value $y_j$.
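Before proceeding to the case analysis, we note that these quantities are directly computable from the ballots; a small sketch (in Python, reusing the hypothetical ballot representation given earlier, and assuming $\Delta_v\leq 1$ for every voter) is the following.
\begin{verbatim}
def edge_cost(ballots, x, y, xi, yj):
    """c(x_i, y_j): voters whose ballot makes y depend (only) on x and
    who are dissatisfied with y under the assignment (x_i, y_j)."""
    return sum(1 for b in ballots
               if b.parents.get(y) == (x,)
               and not b.approves(y, (xi,), yj))

def unconditional_cost(ballots, y, yj):
    """c(y_j): voters with an unconditional ballot on y who are
    dissatisfied with the value y_j."""
    return sum(1 for b in ballots
               if b.parents.get(y) == ()
               and not b.approves(y, (), yj))
\end{verbatim}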
Let us now consider the following three cases for issue $y$: \smallskip \noindent {\it{Case 1:}} If $|N(y)|=0$, all votes for issue $y$ are unconditional. Let $P'$ be the profile that results after deleting vertex $y$. Then $\text{OPT}(P)=\text{OPT}(P') + \text{OPT}(y)$, where the optimal choice for $y$ is the value that causes the least number of disagreements. \smallskip \noindent {\it{Case 2:}} If $|N(y)|=1$, to get rid of vertex $y$, we keep track of the optimal choice for $y$ under the possible values of its unique neighbor, say $x$. WLOG, we examine the case where both directed edges $(x,y),(y,x)$ appear in $G$. If one of these edges is not present, one just needs to adjust Equation \eqref{eq:deletion} below accordingly. The fact that $y$ does not have any dependencies with any issue other than $x$ allows us to compute its optimal value, given an assignment for $x$. Namely, for $i \in \{0,1\}$, we compute and store the following quantity along with the corresponding value of $y$. \begin{equation} \label{eq:deletion} c^*(x_i) = \min_{k\in \{0, 1\}} \{ c(y_k) + c(x_i,y_k) + c(y_k,x_i)\}. \end{equation} In case the minimum in \eqref{eq:deletion} is achieved by both values of $y$, we can select one of them according to some consistent tie-breaking rule. Hence, at this point, we know how to set $y$ if we are given the value of $x$. Also, it is important to note that even if vertex $x$ has in-degree higher than $2$ in $G$, we have assumed that the maximum in-degree in every voter's dependency graph is at most one, and no voter would need to look at the value of $y$ in combination with other issues to decide if she is satisfied with $x$. Thus we can leave $y$ aside without causing any problems. To proceed, we produce a new profile $P'$ from $P$ as follows: {\it (i)} We delete issue $y$ from $I$ and from the dependency graphs. For every voter, we also delete her expressed preferences for $y$, whether conditional or not. {\it (ii)} For every $i \in \{0, 1\}$, we introduce $c^*(x_i)$ new voters who are dissatisfied only with the assignment $x_i$ of $x$, and are satisfied with any assignment on the other issues. It is easy to see that the global dependency graph of the newly created profile $P'$ is exactly $G$ without $y$ and its incident edges. To complete the proof, we have to argue that the value of the optimal solution in $P'$ is the same as in the original instance. We defer this argument to Claim \ref{cl:sameopt2}. \smallskip \noindent {\it{Case 3:}} If $|N(y)| = 2$, suppose that issue $y$ is connected to issues $x$ and $z$. As in \textit{Case 2}, WLOG, assume that all edges $(x, y),(y,x),(y, z),(z,y)$ appear in the global dependency graph. The fact that there are no dependencies between $y$ and any issues other than $x$ and $z$ allows us to compute the optimal alternative for $y$, given an assignment of values to issues $x$ and $z$. In analogy to Equation \eqref{eq:deletion}, for every $i,j \in \{0,1\}$, we compute and store a quantity which expresses the minimum number of disagreements that can be caused by issue $y$, when we fix $x$ to $x_i$ and $z$ to $z_j$. Namely, $c^*(x_i, z_j)$ equals $$\min_{k\in \{0, 1\}} \{c(y_k) + c(x_i,y_k) + c(y_k,x_i) + c(z_j,y_k) + c(y_k,z_j)\}. $$ We now produce a new profile $P'$ from $P$ as follows: \begin{itemize} \item We delete $y$ from $I$ and from the dependency graphs. We also delete each voter's expressed preferences for $y$.
\item For every voter who had a conditional ballot on $x$, dependent on $y$, we replace it with the unconditional ballot $\{x_0,x_1\}$, i.e., the voter is now satisfied with any outcome on $x$. We do the analogous replacement for voters who had a conditional ballot on $z$ dependent on $y$. \item For every $i, j \in \{0, 1\}$, we introduce $c^*(x_i, z_j)$ new voters, who have a conditional ballot for issue $z$, dependent on $x$. They are dissatisfied only with $(x_i, z_j)$ and satisfied with any assignment on the other issues. \end{itemize} It turns out that the global dependency graph of $P'$, say $H$, is obtained from $G$ by deleting $y$ and its incident edges, and by adding the edge $(x,z)$, if it was not already present. The proof can now be completed by the following claim. \begin{claim} \label{cl:sameopt2} For the constructions of Cases 2 and 3, every solution of $P$ corresponds to a solution of $P'$ with the same cost and vice versa. Hence $\text{OPT}(P) = \text{OPT}(P')$. \end{claim} To argue about complexity, observe that we add at most $\mathcal{O}(n)$ new voters in moving from $P$ to $P'$. Also, when we solve $P'$, to recover an assignment for issue $y$ of $P$, we need to remember either the minimizers $\argmin c^*(x_i)$ from \eqref{eq:deletion} or, in \textit{Case 3}, the minimizers $\argmin c^*(x_i, z_j)$. \end{proof} \begin{remark} \label{rem:N(y)=3} One can generalize the construction of Lemma \ref{lem:basic_positive} to vertices $y$ with $|N(y)|=3$. But the resulting profile $P'$ may end up with voters of maximum in-degree two in their dependency graphs. This prohibits the repeated use of Lemma \ref{lem:basic_positive}, which we need in the sequel for obtaining optimal algorithms. \end{remark} We can now obtain positive results for concrete classes of graphs. We first introduce some more graph-theoretic terminology. Given a directed graph $G$, we refer to its {\it{undirected version}} as the undirected graph $\overline{G}$ produced after we remove the orientation of every edge of $G$. Furthermore, if for a pair of vertices $x, y$, both $(x, y)$ and $(y, x)$ are present in $G$, then we just keep a single edge $(x, y)$ in $\overline{G}$. In the next theorem, we identify a class of instances that admit an optimal solution in polynomial time, based on the undirected version $\overline{G}$ of the global dependency graph $G$ of a given profile. Namely, the class consists of instances where $\overline{G}$ has treewidth at most 2. The treewidth is a parameter identifying how {\it close} to a tree a graph is. For the exact definition, we refer to \cite{RS86}. The class of instances captured by our result includes paths, trees, cycles, series-parallel graphs, or any collection of such connected components. Further interesting classes that are included are cactus graphs and ladder graphs. \begin{theorem} \label{cor:opt} If the dependency graph of every voter has maximum in-degree at most 1 and the undirected version of the global dependency graph has treewidth at most $2$, then \textsc{cms}\ is optimally solvable in polynomial time\footnote{We are grateful to the anonymous IJCAI '20 reviewers for suggesting the condition on the treewidth, which yields a generalization of the results we claimed in the previous version of this work.}. \end{theorem} \begin{proof} Let $\overline{G}$ be the undirected version of the global dependency graph of a given profile. WLOG we may assume that $\overline{G}$ is connected, since otherwise we could solve for each connected component separately.
If $\overline{G}$ has treewidth at most $1$, then it is a tree, and hence it is possible to apply Lemma \ref{lem:basic_positive} so as to delete leaves sequentially until the remaining graph consists only of a single vertex. An optimal pick for that vertex is now possible, and by backtracking, we can deduce the outcome for every issue in the optimal solution. If $\overline{G}$ has treewidth equal to $2$, there exists at least one vertex $y$ with $|N(y)| \leq 2$ \cite{Bodlaender98}, and hence we can apply Lemma \ref{lem:basic_positive} again. The operations performed when applying Lemma \ref{lem:basic_positive} to reduce the problem to a smaller instance cannot increase the treewidth (they involve deletions and contractions of edges and vertices). Thus, by successively applying the described procedure on the remaining instance, we end up with a graph of constant size (or even a single vertex), where \textsc{cms}\ can be solved efficiently. \end{proof} We argue that some of the graph classes captured by Theorem \ref{cor:opt} are meaningful in multi-issue elections with logically dependent issues. First, consider the case where the global dependency graph is a path with all edges oriented in the same way. We can think of the issues as ordered on a line, which is very natural when there is sequential dependence along a series of decisions. As an example, a municipality may need to decide on 3 issues: what public project to implement (say a park or a stadium), in which location to do it (dependent on the type of project, since each location can have different features), and how to connect it with existing means of public transportation (via a new bus stop or metro stop, dependent on the location). Similarly, when the global dependency graph forms a directed tree oriented from the root towards the leaves, we again have a hierarchy regarding dependencies. E.g., a star graph oriented towards the leaves can arise when the senior partners of a firm have to decide on the location for a new subsidiary. This choice prominently affects a set of other decisions, such as the suppliers, the marketing strategy, etc. \section{Conclusions and Future Work} We advocate that \textsc{cms}\ combines a higher level of expressiveness with efficient algorithms for several cases of interest. We find the assumption of bounded in-degree to be a well-motivated starting point for studying the computational properties of \textsc{cms}, with several open questions remaining unresolved. Obtaining improved approximation ratios or inapproximability bounds for the cases we studied would provide further insights. It is a very interesting question whether efficient algorithms exist for any constant treewidth of the global dependency graph. Since we only provided a sufficient but not necessary condition for efficient algorithms, identifying parameters other than the treewidth that would lead to optimal algorithms is also an intriguing topic. Finally, one can consider other objective functions, such as the Conditional Minimax rule defined in \cite{BL16}, or even non-binary domains. Algorithmic results there still remain elusive. \section*{Acknowledgements} This work has been supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the ``First Call for H.F.R.I. Research Projects to support faculty members and researchers''. \bibliographystyle{named}
\section{Introduction} The goal of person re-identification (re-ID) is to match a given person across many gallery images captured at different times, locations, etc. With the development of deep learning, fully supervised person re-ID has been extensively investigated \cite{sun2018beyond, quan2019auto, luo2019strong, zheng2019joint, miao2019pose,liu2020deep,liu2020memory} and has made impressive progress. However, significant performance degradation can be observed when a trained model is tested on a previously unseen dataset. The generalizability of known algorithms is hindered by two main aspects. First, the generalizability of an algorithm is often ignored by its designer. There are only a few methods designed for domain generalization (DG). Second, the number of subjects in public datasets is limited, and their diversity is insufficient. \par Labeling large-scale and diverse real-world datasets is expensive and time-consuming. For instance, labeling a dataset of the magnitude of MSMT17 \cite{wei2018person} requires three labelers to work for two months. To address this, RandPerson \cite{wang2020surpassing} inspires us to use large-scale synthetic data for effective person re-identification training, which gets rid of the need for human annotations. However, when using synthetic data alone, the generalizability of the learned model is still limited due to the domain gap between the synthetic and real-world data. Therefore, a solution is provided in \cite{wang2020surpassing} which learns from mixed synthetic data and labeled real-world data. However, though performance is improved, this solution still relies on heavy human annotation of the real-world data, and the domain gap still exists, which is sub-optimal for generalization. Therefore, the goal of this paper is to learn generalizable person re-identification completely without human annotations, so as to make use of a large amount of unlabeled real-world data. Specifically, we study how to combine a labeled synthetic dataset with unlabeled real-world data to learn a ready-to-use model with good generalizability. The proposed setting is illustrated in Fig. \ref{illu}, which is denoted as A (labeled) + B (unlabeled) $\rightarrow$ C (unseen target domain) with direct cross-dataset evaluation on C. The key to achieving domain generalization here is to make full use of the discriminative labels of the synthetic domain and the style and diversity of the unlabeled real-world images simultaneously. A plausible method to tackle this problem would be Unsupervised Domain Adaptation (UDA) from A to B, followed by testing on C. However, the goal of UDA is different; it transfers the knowledge from the source domain A to the target domain B, and the testing is performed on the same target domain B. After the transfer, the model will learn domain-specific features from the less reliable real-world data without annotations and ignore the value of the large-scale high-quality labeled synthetic data. Therefore, directly applying UDA from A to B will have inferior generalizability on C. A task which may seem similar to the proposed one is semi-supervised learning (SSL). However, in SSL, both the labeled and unlabeled images are usually from the same domain, while in the proposed setting, the images for training come from quite different domains.
This is also why we design a dedicated method to reduce the domain gap and thereby improve generalizability.\par To address this problem, a solution called DomainMix is proposed for discriminative, domain-invariant, and generalizable person re-identification feature learning. Specifically, to better utilize the unlabeled real-world images, in each epoch they are clustered by the DBSCAN algorithm. However, unlike in most UDA algorithms, where a model is first pre-trained on a labeled source dataset, the clustering results here may be unreliable and noisy. Therefore, three criteria, i.e. \ independence, compactness, and quantity, are used to select reliable clusters. Because clustering is performed anew in each epoch, the number of identities for training varies. Therefore, it is impossible to use the same classification layer all the time. To address this problem, an adaptive initialization method is utilized: the classification layer can be divided into two parts, one for the synthetic data and the other for the real-world data. The number of identities for the synthetic data part never changes; therefore, it is initialized as the result of the last epoch. However, for the real-world data part, the number of identities changes all the time. As a result, it is initialized as the average of the features of the corresponding identity. This initialization method accelerates and guarantees the convergence of training. To deal with the huge domain gap between synthetic and real-world data, a domain-invariant feature learning method is designed. Through alternate training between the backbone and a discriminator, and with the help of the proposed domain balance loss and other person re-ID metrics, the network can learn discriminative, domain-invariant, and generalizable features from the two domains jointly. With this framework, the need for human annotations is completely eliminated, and the domain gap between the synthesized and real-world data is reduced, so that the generalizability is improved thanks to the large-scale and diverse training data. The contributions are summarized as three-fold. \begin{itemize} \item{The paper proposes a novel and practical person re-identification task, i.e. \ how to combine a labeled synthetic dataset with an unlabeled real-world dataset to train a model with high generalizability.} \item{A novel and unified DomainMix framework is proposed to learn discriminative, domain-invariant, and generalizable person re-identification features from the two domains jointly. For the first time, domain generalizable person re-identification can be learned completely without human annotations.} \item{Experimental results show that the proposed annotation-free framework achieves comparable performance with its counterparts trained with full human annotations.} \end{itemize} \vspace*{-5mm} \section{Related Work} \vspace*{-2mm} \subsection{Unsupervised Domain Adaptation for Person Re-ID} The goal of Unsupervised Domain Adaptation (UDA) for person re-ID is to learn a model on a labeled source domain and fine-tune it on an unlabeled target domain. The main UDA algorithms can be categorized into three classes. The first is image-level methods \cite{zhong2018generalizing, deng2018image, wei2018person, li2019cross}, which use a generative adversarial network (GAN) \cite{goodfellow2014generative} to translate the image style. The second class is feature-level methods \cite{li2018adaptation, chang2019disjoint, Lin2018MultitaskMF}, which aim to find domain-invariant features between different domains.
The last category is cluster-based algorithms \cite{lin2019bottom, zhai2020ad, yang2020asymmetric, zeng2020hierarchical, ge2020mutual, ge2020selfpaced, fu2019self, zhao2020unsupervised,ding2020adaptive}, which generate pseudo labels to help fine-tune on the target domain. \par Although the UDA task and the proposed task both involve a source and a target domain, they are totally different. The goal of UDA is to use a labeled source domain and an unlabeled target domain to train a model which performs well on the known target domain, while the proposed task aims to learn a model from a labeled synthetic dataset and an unlabeled real-world dataset that generalizes well to an unseen domain. \vspace*{-2mm} \subsection{Domain Generalization for Person Re-ID} Domain Generalization (DG) for person re-ID was first studied in \cite{yi2014deep}, aiming to generalize a trained model to unseen scenes. In recent years, with the increasing accuracy of fully supervised person re-ID and the limitations of UDA, DG has begun to attract attention again. For instance, DualNorm \cite{jiebmvc} uses instance normalization to filter out variations in style statistics in earlier layers to increase the generalizability. SNR \cite{jin2020style} filters out identity-irrelevant interference and keeps discriminative features by using an attention mechanism. QAConv \cite{LiaoQAConv} constructs query-adaptive convolution kernels to find local correspondences in feature maps, which is more generalizable than using features. M$^3$T \cite{zhao2021learning} introduces a meta-learning strategy and proposes a memory-based identification loss to enhance the generalization ability of the model. Other works, such as RandPerson \cite{wang2020surpassing}, focus on using synthetic data to enlarge the diversity and scale of person re-ID datasets. \vspace*{-2mm} \subsection{Methods for Reducing Domain Gap} The domain gap hinders a trained model from performing well on an unseen dataset \cite{li2021DCC}. In the task of UDA for person re-ID, some methods, such as PTGAN \cite{wei2018person}, utilize a GAN \cite{goodfellow2014generative} to transfer the image style of the source domain to the target domain. These methods reduce the domain gap at the image level. Another category consists of feature-level methods, to which our method belongs. Some methods try to train a domain-invariant model by reducing the pairwise domain discrepancy with Maximum Mean Discrepancy (MMD) \cite{tzeng2014deep}. However, this pipeline, which assumes shared classes between domains, is not suitable for the person re-ID task because the identities in the two re-ID domains are different. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{FW.pdf} \\[3pt] \caption{The design of the DomainMix framework. During training, the backbone is trained to extract discriminative, domain-invariant, and generalizable features from the two domains jointly, with the help of the domain balance loss and other person re-identification metrics.} \label{Framework} \vspace*{-4mm} \end{figure} \vspace*{-5mm} \section{Proposed Task and Method} \vspace*{-2mm} \subsection{Problem Definition} Two source domains $S_1$ and $S_2$, where $S_1$ is a synthetic dataset and $S_2$ is a real-world dataset, are given. For the synthetic dataset, the labels and images are both available.
It is denoted as ${D_{{s_1}}} = \{ (x_i^{{s_1}},y_i^{{s_1}}) \}_{i = 1}^{N_{s_1}}$, where $x_i^{{s_1}}$ and $y_i^{{s_1}}$ are the $i$-th training sample and its corresponding person identity label, respectively, and $N_{{s_1}}$ is the number of images in the synthetic dataset. For the real-world dataset, only the images are available. The ${{N_{{s_2}}}}$ images in the real-world dataset are denoted as ${D_{{s_2}}} = \{ x_i^{{s_2}} \}_{i = 1}^{N_{s_2}}$. Besides, a target domain $T$, which is a real-world dataset different from ${D_{{s_2}}}$, is given. It is denoted as $D_t = \{ x_i^t \}_{i = 1}^{N_t}$, where $x_i^{{t}}$ denotes the $i$-th target-domain image and $N_t$ is the total number of target-domain images. This setting simulates the practical application scenario, i.e. \ synthesizing labeled datasets is time-saving and cheap, while labeling a large-scale real-world dataset is time-consuming and expensive. Our goal is to design an algorithm that can be trained on the datasets ${D_{{s_1}}}$ and ${D_{{s_2}}}$, and then directly generalized to the unseen ${D_{{t}}}$ without fine-tuning. \vspace*{-2mm} \subsection{DomainMix Framework} To tackle the problem mentioned above, we propose the DomainMix framework. In this framework, a reliable training dataset is generated dynamically according to three criteria, and before training, the classification layer is initialized adaptively to accelerate the convergence of identity classifier training. During training, together with discriminative metrics, a domain balance loss is proposed to help learn domain-invariant features. As a result, the proposed DomainMix framework can generalize well to unseen target domains. The framework is shown in Fig. \ref{Framework}. \vspace*{-2mm} \subsubsection{Two Domains Mixing} \quad\textbf{Dynamic Training Dataset Generation}\par The training dataset for the DomainMix framework is generated dynamically in each epoch. Given ${D_{{s_2}}}$, the reliable images are selected according to three criteria, i.e. \ independence, compactness, and quantity. \par Independence and compactness were proposed in SpCL \cite{ge2020selfpaced} to judge whether a cluster is far away from the others and whether the samples within the same cluster have small inter-sample distances. Together with the $eps$ parameter in DBSCAN \cite{ester1996density}, independence is realized by increasing $eps$ to figure out whether more examples would be included in the original cluster, while compactness is realized by decreasing $eps$ to find out whether a cluster can be split. Please refer to DBSCAN \cite{ester1996density} and SpCL \cite{ge2020selfpaced} for a deeper understanding of the independence and compactness criteria. \par For quantity, we argue that a reliable cluster should contain a sufficient number of images, which brings diversity. Further, if clusters with a small number of images are selected, there will be too many classes to train an identity classifier well. We denote the set of pseudo labels generated in one epoch as $L_1 = \{l_{i}\}_{i=1}^{M}$, where $l_i$ is the $i$-th pseudo label, and $M$ is the total number of pseudo labels. Given the bound $b$, labels with a total number of images below $b$ are discarded.
Thus, the refined pseudo-label set is obtained as \begin{equation} L_{2}=\left\{l_{i} \mid l_{i} \in L_{1}, S\left(l_{i}\right)\geq b \right\}, \end{equation} where $S\left(l_{i}\right)$ denotes the number of images belonging to the $i$-th pseudo label. Note that the quantity criterion is different from the $min\_samples$ parameter in DBSCAN \cite{ester1996density}: the quantity criterion handles the outliers and the clusters with few images, while the $min\_samples$ parameter controls the core point selection in the process of clustering. Simply adjusting the $min\_samples$ parameter cannot bring an improvement similar to that of the quantity criterion. \par After images from ${D_{{s_2}}}$ are encoded to features, and the features are clustered by a certain algorithm (\emph{e.g}\bmvaOneDot DBSCAN \cite{ester1996density}), the generated clusters are selected by the three criteria. The ablation study part will show that the proposed quantity criterion is the key to the outstanding performance, while the criteria from \cite{ge2020selfpaced} only bring slight improvement. In conclusion, the images in reliable clusters are kept, pseudo-labeled, and trained together with those from the labeled synthetic dataset. \textbf{Adaptive Classifier Initialization}\par Because the training dataset is generated dynamically in each epoch, the number of classes varies. It is impossible to use the same classification layer in every epoch, and random initialization may bring non-convergence problems. As a result, an adaptive classifier initialization method is utilized to accelerate the training of the identity classifier.\par A classification layer can be formed as \begin{equation} y=W^Tx+ \bm b, \end{equation} where $x$ is a batch of features, $W$ is a matrix, and $\bm b$ is a bias which is set as $\bm 0$ for convenience. Given the number of classes $M$ in the generated training dataset and the feature dimension $d$, the shape of the matrix $W$ is $d\times M$. Because of linearity, $W$ can be written in blocks as $\left( W_{1},W_{2}\right)$. $W_{1}$ is a matrix of shape $d\times N$ and $W_{2}$ is a matrix of shape $d\times (M-N)$, where $N$ is the number of classes in the synthetic domain. \par For $W_{1}$, because the classes of the synthetic domain never change across epochs, in a new epoch it is initialized as the final result of the last epoch. For $W_{2}$, because clustering and selection are performed in each epoch, $M$ changes all the time. Denote $W_{2}$ as $\left( w_{1},w_{2},...,w_{M-N}\right) $, and $w_i$ is initialized as \begin{equation} w_{i}=\frac{1}{K_i} \sum^{K_i}_{j=1} f_{j_i}\left( i=1,2,...,M-N\right), \end{equation} where $K_i$ is the number of images belonging to the $i$-th cluster under the current epoch, and $f_{j_i}$ is the feature of the $j$-th image in the cluster.\par The advantage of this adaptive initialization method lies in two aspects. For the synthetic part, the initialization method enjoys the convenience and stability of fully-supervised learning. For the real-world part, after initialization, the probability that a given feature belongs to its own class is much larger than that for any other class; therefore, training the classifier is much easier.
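As a minimal sketch of this initialization (PyTorch-style pseudocode with hypothetical names; not the exact released implementation), the real-world block $W_2$ is filled with cluster feature means while the synthetic block $W_1$ is carried over:
\begin{verbatim}
import torch

def init_classifier(W_syn_prev, feats, pseudo_labels):
    """Adaptive classifier initialization (sketch).
    W_syn_prev: d x N block for the synthetic classes, carried over
    from the last epoch. feats: (num_images, d) features of the
    selected real-world images; pseudo_labels: cluster index in
    [0, M - N) for each image."""
    d = W_syn_prev.shape[0]
    num_clusters = int(pseudo_labels.max().item()) + 1
    W_real = torch.zeros(d, num_clusters)
    for c in range(num_clusters):
        # each column is the mean feature of its cluster
        W_real[:, c] = feats[pseudo_labels == c].mean(dim=0)
    return torch.cat([W_syn_prev, W_real], dim=1)  # d x M
\end{verbatim}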
\vspace*{-2mm} \subsubsection{Domain-Invariant and Discriminative Feature Learning} Given the generated training dataset and a well-initialized network, this section focuses on how to learn discriminative, domain-invariant, and generalizable features from the two domains. It is realized by training a discriminator and the backbone alternately. The discriminator is used to classify a given feature into its domain. Specifically, features of the images from the synthetic and real-world domains are extracted by the backbone. Then a discriminator is trained to judge which domain the extracted feature comes from. When training the discriminator, the cross-entropy loss ${\cal L}_{ce}$ is adopted. Thus the domain classification loss is defined as \begin{equation}\label{wf3} {\cal L}_{d}^{s}(\theta)=\frac{1}{N_{s}} \sum_{i=1}^{N_{s}} {\cal L}_{c e}\left(C_{d}\left(F\left(x_{i}^{s} \mid \theta\right)\right), d_{i}^{s}\right), \end{equation} where $F\left( { \cdot \left| \theta \right.} \right)$ is a feature encoder function, $N_s$ is the total number of images in the currently generated dataset, $C_d$ denotes the discriminator, and $d_i^s$ is the domain label of the $i$-th image, i.e. \ $d_i^s = 0$ if the image belongs to the synthetic domain, and $d_i^s = 1$ if it belongs to the real-world domain. \par To encourage the backbone to extract domain-invariant features, it is trained to confuse the domain discriminator. Therefore, a domain balance loss is proposed, which is defined as \begin{equation} \mathcal{L}_{d b}=\frac{1}{N_{s}} \sum_{i=1}^{N_{s}}\left(\sum_{j=1}^{n}\left(x_{j}^{i} \log \left(x_{j}^{i}\right)+a\right)\right), \end{equation} where ${x_j^i}$ is the $j$-th coordinate of $C_{d}\left(F\left(x_{i}^{s} \mid \theta\right)\right)$, and $a$ is a constant to prevent a negative loss. In this loss, considering the function \begin{equation} f(x)=x \log (x)+a, x \in(0,1), \end{equation} the second derivative of $f$ is \begin{equation} f''\left( x \right) = \frac{1}{x} > 0. \end{equation} Therefore, it is a convex function. Given $\sum\nolimits_{j = 1}^n {x_j^i = } 1$, the minimum value of $\sum_{j=1}^{n} f(x_{j}^{i})$ is achieved when $x_{j}^{i}=1 / n(j=1,2, \ldots, n)$, according to Jensen's inequality. \par In conclusion, when ${\cal L}_{db}$ is minimized, the distance between $x_{j}^{i}$ and $1/n$ is shortened. Thus, the probabilities of a given feature belonging to the two domains tend to be equal, i.e. \ the backbone can extract domain-invariant features by confusing the discriminator.
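As a sketch (PyTorch-style, with hypothetical names; we assume the discriminator outputs logits that are turned into domain probabilities $x^i$ by a softmax, and a small constant is added inside the logarithm for numerical stability), the domain balance loss can be implemented directly from its definition:
\begin{verbatim}
import torch

def domain_balance_loss(domain_logits, a=1.0):
    """L_db (sketch): p holds the discriminator's per-domain
    probabilities; the loss averages sum_j p_j * log(p_j) + a over the
    batch and is minimized when p is uniform over the domains, i.e.,
    when the backbone confuses the discriminator."""
    p = torch.softmax(domain_logits, dim=1)
    return (p * torch.log(p + 1e-8) + a).sum(dim=1).mean()
\end{verbatim}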
\par Beyond learning domain-invariant features, the network is also trained with discriminative re-ID metrics; therefore, an identity classification loss $L_{id}^s\left( \theta \right)$ and a triplet loss $L_{tri}^s\left( \theta \right)$ \cite{hermans2017defense} are adopted. They are defined as \begin{equation} {\cal L}_{id}^s(\theta ) = \frac{1}{{{N_s}}}\sum\limits_{i = 1}^{{N_s}} {{{\cal L}_{ce}}} \left( {{C_s}\left( {F\left( {x_i^s\mid \theta } \right)} \right),y_i^s} \right), \end{equation} and \begin{equation} \small \begin{array}{l} {\cal L}_{t r i}^{s}(\theta)=\frac{1}{N_{s}} \sum_{i=1}^{N_{s}} \max \left(0, m+\left\|F\left(x_{i}^{s} \mid \theta\right)-F\left(x_{i, p}^{s} \mid \theta\right)\right\|\right. \\ \left.-\left\|F\left(x_{i}^{s} \mid \theta\right)-F\left(x_{i, n}^{s} \mid \theta\right)\right\|\right), \end{array} \end{equation} where $C_s$ is an identity classifier, $\|\cdot\|$ denotes the $L^{2}$-norm distance, $m$ is the triplet distance margin, ${\cal L}_{ce}(\cdot, \cdot)$ represents the cross-entropy loss, $y_i^s$ is the corresponding ground-truth or generated label, and the subscripts $_{i,p}$ and $_{i,n}$ indicate the hardest positive and the hardest negative index for the sample $x_i^s$ in a mini-batch.\par Therefore, the final loss is calculated as \begin{equation}\label{wf2} {\cal L}^{s}(\theta)=\lambda^{m} {\cal L}_{db}(\theta)+\lambda^{s} {\cal L}_{i d}^{s}(\theta)+{\cal L}_{t r i}^{s}(\theta), \end{equation} where $\lambda^{m}$ and $\lambda^{s}$ are the balance parameters. Through alternate training with ${\cal L}_{d}^{s}(\theta)$ and ${\cal L}^{s}(\theta)$, the discriminator can classify a given feature into its domain, and the backbone can extract domain-invariant and discriminative features. To summarize the proposed algorithm, the pseudo-code is given in the supplemental material. \section{Experiments} \subsection{Datasets and Evaluation Metrics} To evaluate the generalizability of the proposed DomainMix framework, extensive experiments are conducted on four widely used public person re-ID datasets. Among them, RandPerson (RP) \cite{wang2020surpassing} is selected as the synthetic dataset. Its subset contains $8,000$ persons in $132,145$ images. Nineteen cameras were used to capture them under eleven scenes. All images in the subset are used as training data, i.e. \ no gallery or query is available. The real-world datasets used are Market-1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. Note that the DukeMTMC \cite{zheng2017unlabeled} dataset is not used due to privacy concerns. The details of the real-world datasets are given in the supplemental material. Evaluation metrics are mean average precision (mAP) and cumulative matching characteristic (CMC) at rank-$1$. \subsection{Implementation Details} DomainMix is trained on four Tesla-V100 GPUs. The ImageNet-pre-trained \cite{deng2009imagenet} ResNet-50 \cite{he2016deep} and IBN-ResNet-50 \cite{pan2018two} are adopted as the backbone. The Adam optimizer is used to optimize the networks with a weight decay of $5 \times 10^{-4}$. For more details, please refer to the supplemental material. \subsection{Ablation Study} Comprehensive ablation studies are performed to prove the effectiveness of each component in the DomainMix framework. Two different DG tasks are selected: labeled RandPerson \cite{wang2020surpassing} with unlabeled MSMT17 \cite{wei2018person} to Market-1501 \cite{zheng2015scalable}, and labeled RandPerson \cite{wang2020surpassing} with unlabeled CUHK03-NP \cite{zhong2017re, li2014deepreid} to Market-1501 \cite{zheng2015scalable}.
The experimental results on ResNet-50 \cite{he2016deep} are reported below, and the results on IBN-ResNet-50 \cite{pan2018two} are shown in the supplemental material.\par \begin{table} \caption{Ablation studies for each component in the DomainMix framework on the two tasks. `+I/C/Q' denotes that the independence/compactness/quantity criterion is used. `With/Without ACI/DB' denotes whether adaptive classifier initialization/the domain balance loss is used or not. `Labeled' or `unlabeled' denotes whether the real-world source training data is labeled or not.}\label{ABLLL} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|p{3.1cm}|YY} \hline RP$+$MSMT $\to$ Market & mAP & rank-$1$ & RP$+$CUHK $\to$ Market & mAP & rank-$1$ \\ \hline\hline DBSCAN & $37.5$ & $64.6$ &DBSCAN & $34.5$ & $61.3$ \\ DBSCAN + I + C& $37.0$ & $64.2$ &DBSCAN + I + C& $35.5$ & $62.8$ \\ DBSCAN + Q& $42.4$ & $69.4$ &DBSCAN + Q& $39.5$ & $66.2$ \\ DBSCAN + I + C + Q& $43.5$ & $70.2$ &DBSCAN + I + C + Q& $39.8$ & $67.5$ \\ \hline Without ACI& $29.5$ & $56.9$ &Without ACI& $33.8$ & $60.3$ \\ With ACI& $43.5$ & $70.2$ &With ACI& $39.8$ & $67.5$ \\ \hline Without DB& $40.1$ & $68.1$ &Without DB& $37.3$ & $66.0$ \\ With DB& $43.5$ & $ 70.2$ &With DB& $39.8$ & $67.5$ \\ \hline Only RandPerson & $36.5$ & $63.6$ & Only RandPerson & $36.5$ & $63.6$\\ Only MSMT (labeled) & $32.7$ & $62.0$ &Only CUHK (labeled) & $25.1$ & $50.3$ \\ DomainMix (labeled) & $45.2$ & $70.5$ &DomainMix (labeled) & $42.7$ & $69.7$ \\ DomainMix (unlabeled) & $43.5$ & $70.2$ &DomainMix (unlabeled) & $39.8$ & $67.5$ \\ \hline \end{tabularx} \end{table} \textbf{Effectiveness of Dynamic Training Dataset Generation.} To investigate the necessity of generating the training dataset dynamically and the importance of each component, we compare the domain generalizability of models trained with two different real-world datasets, i.e. \ MSMT17 \cite{wei2018person} and CUHK03-NP \cite{zhong2017re, li2014deepreid}. The baseline model performances are shown in Table \ref{ABLLL} as ``DBSCAN''. If the independence and compactness criteria are used, the performances are denoted as ``DBSCAN + I + C'', while if the quantity criterion is used, they are denoted as ``DBSCAN + Q''. ``DBSCAN + I + C + Q'' denotes that all three criteria are adopted. The quantity criterion brings a $4.9\%$ mAP improvement for the ``RP$+$MSMT $\to$ Market'' task. For the ``RP$+$CUHK $\to$ Market'' task, the mAP increases by $5.0\%$. However, if the independence and compactness criteria are used alone, no stable performance improvement can be observed. This is because, although the two criteria remove unreliable clusters, many classes containing only a few images still participate in the training, which disturbs the training of the identity classifier and prevents a stable performance improvement. Together with the proposed quantity criterion, the above problem is solved, and the two criteria of \cite{ge2020selfpaced} can further improve the performance.\par \textbf{Effectiveness of Adaptive Classifier Initialization.} To prove the effectiveness of the adaptive classifier initialization method, the experimental results without and with this method are shown in Table \ref{ABLLL} and denoted as ``Without ACI'' and ``With ACI'', respectively. The initialization method brings significant improvements of $14.0\%$ and $6.0\%$ in mAP on the ``RP$+$MSMT $\to$ Market'' and ``RP$+$CUHK $\to$ Market'' tasks.
The significant improvement comes from the guarantee and acceleration of the convergence.\par \textbf{Influence of Domain Balance Loss.} To verify the necessity of using the domain balance loss to learn domain-invariant features, results obtained with and without this loss are compared and shown in Table \ref{ABLLL} as ``Without DB'' and ``With DB'', respectively. All experiments with the use of the domain balance loss show a distinct improvement on both the ``RP$+$MSMT $\to$ Market'' and ``RP$+$CUHK $\to$ Market'' tasks. Specifically, the mAP increases by $3.4\%$ when the real-world source domain is MSMT17 \cite{wei2018person}. As for the ``RP$+$CUHK $\to$ Market'' task, a similar mAP improvement of $2.5\%$ can be observed. The improvement brought by the domain balance loss on CUHK is displayed in the supplemental material. \par We further discuss the \textbf{importance of introducing an unlabeled real-world dataset}, \textbf{whether human annotations are essential for generalizable person re-ID}, and the \textbf{comparison with UDA algorithms} in the supplemental material. \subsection{Comparison with the State-of-the-arts} The proposed DomainMix framework is compared with state-of-the-art methods on three DG tasks, i.e. \ directly testing on Market1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. The experimental results are shown in Table \ref{SOTA}. Note that a fully fair comparison is not feasible, because we only used unlabeled real-world data, although with additional synthesized data, while others used labeled data. So the existing results in Table \ref{SOTA} are provided only as a reference to see what we can achieve with a fully annotation-free setting. Secondly, the proposed method is orthogonal to network architecture designs such as IBN-Net \cite{pan2018two} and OSNet-IBN \cite{zhou2019omni}. Thus they can also be applied within the framework. The related experimental results are shown in the supplemental material. As for QAConv \cite{LiaoQAConv}, though its performance is relatively high, it needs to store feature maps of images rather than feature vectors for matching, so more memory is needed.
We further discuss the \textbf{importance of introducing an unlabeled real-world dataset}, \textbf{whether human annotations are essential for generalizable person re-ID}, and the \textbf{comparison with UDA algorithms} in the supplemental material. \subsection{Comparison with the State-of-the-arts} The proposed DomainMix framework is compared with state-of-the-art methods on three DG tasks, i.e.\ directly testing on Market1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. The experimental results are shown in Table \ref{SOTA}. Note that a fully fair comparison is not feasible, because we only use unlabeled real-world data (together with additional synthesized data), while existing methods use labeled real-world data; the existing results in Table \ref{SOTA} are therefore provided only as a reference for what can be achieved in a fully annotation-free setting. Secondly, the proposed method is orthogonal to network architecture designs such as IBN-Net \cite{pan2018two} and OSNet-IBN \cite{zhou2019omni}, so these can also be applied within the framework; the related experimental results are shown in the supplemental material. As for QAConv \cite{LiaoQAConv}, though its performance is relatively high, it requires more memory because it stores feature maps of images, rather than feature vectors, for matching. SNR \cite{jin2020style} uses an attention mechanism to overcome the drawback of instance normalization and improve the performance of IBN-Net \cite{pan2018two}; DomainMix may achieve further performance improvement with the help of this plug-and-play module. \par From the comparison in Table \ref{SOTA}, the DomainMix framework improves mAP by up to $7.0\%$. The improvement in performance is attributed to two aspects. First, directly combining the training of the synthetic dataset and the unlabeled real-world dataset increases the source domain's diversity and scale. Second, the domain balance loss further forces the network to learn domain-invariant features and minimizes the domain gap between the synthetic dataset and the real-world dataset in the source domain. \begin{table} \footnotesize \caption{Comparison with state-of-the-arts on Market1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. `$^\S$' denotes that the results are taken from the GitHub repository of the original paper, `$^\ast$' denotes our implementation, and `$^\dagger$' indicates that the results are reproduced based on the authors' codes. `L' or `U' denotes whether the real-world source training data is labeled or not.} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{Market1501} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MGN \cite{wang2018learning,yuan2020calibrated} & MSMT (L) & $25.1$ & $48.7$ \\ ADIN \cite{yuan2020calibrated} & MSMT (L) & $22.5$ & $50.1$ \\ ADIN-Dual \cite{yuan2020calibrated} & MSMT (L) & $30.3$ & $59.1$ \\ SNR \cite{jin2020style} & MSMT (L) & $41.4$ & $70.1$ \\ QAConv$^\dagger$ \cite{LiaoQAConv} & MSMT (L) & $35.8$ & $66.9$ \\ \hline MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $17.7$ & $37.4$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $39.0$ & $67.0$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $34.8$ & $65.6$ \\ Baseline$^\ast$ & RandPerson & $36.5$ & $63.6$ \\ \hline DomainMix & RP$+$MSMT (U) & $43.5$ & $70.2$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$MSMT (U) & $\textbf{44.6}$ & $\textbf{72.9}$ \\ \hline \end{tabularx} \footnotesize \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{CUHK03-NP} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MGN \cite{wang2018learning,qian2019leader} & Market (L) & $7.4$ & $8.5$ \\ MuDeep \cite{qian2019leader} & Market (L) & $9.1$ & $10.3$ \\ QAConv$^\dagger$ \cite{LiaoQAConv} & MSMT (L) & $15.2$ & $16.8$ \\ \hline MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $7.7$ & $7.4$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $12.9$ & $13.6$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $11.0$ & $14.3$ \\ Baseline$^\ast$ & RandPerson & $13.0$ & $14.6$ \\ \hline DomainMix & RP$+$MSMT (U) & $16.7$ & $\textbf{18.0}$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$MSMT (U) & $\textbf{16.9}$ & $17.5$ \\ \hline \end{tabularx} \\ \footnotesize \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{MSMT17} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline QAConv$^\dagger$ \cite{LiaoQAConv} & Market (L) & $8.3$ & $26.4$ \\ \hline
MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $3.0$ & $10.1$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $12.4$ & $34.3$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $10.7$ & $34.3$ \\ Baseline$^\ast$ & RandPerson & $7.9$ & $23.0$ \\ \hline DomainMix & RP$+$Market (U) & $9.3$ & $25.3$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$Market (U) & $\textbf{13.6}$ & $\textbf{36.2}$ \\ \hline \end{tabularx} \label{SOTA} \end{table} \vspace*{-5mm} \section{Conclusion} In this paper, a more practical and generalizable person re-ID task is proposed, i.e.\ how to combine a labeled synthetic dataset with unlabeled real-world data to train a more generalizable model. To address this task, the DomainMix framework is introduced, with which the requirement of human annotations is completely removed and the gap between synthesized and real-world data is reduced. Extensive experiments show that the proposed annotation-free method is superior for generalizable person re-ID. \section*{Acknowledgements} The authors would like to thank Anna Hennig for helping to proofread the paper. \section{Further Details of Real-world Datasets and Implementation} \subsection{Datasets and Evaluation Metrics} To evaluate the generalizability of the proposed DomainMix framework, extensive experiments are conducted on four widely used public person re-ID datasets. Among them, RandPerson (RP) \cite{wang2020surpassing} is selected as the synthetic dataset. Its subset contains $8,000$ persons in $132,145$ images, captured by nineteen cameras under eleven scenes. All images in the subset are used as training data, i.e.\ no gallery or query is available. The real-world datasets used are Market-1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. Note that the DukeMTMC \cite{zheng2017unlabeled} dataset is not used, due to privacy concerns. Market-1501 \cite{zheng2015scalable} includes $1,501$ labeled persons in $32,668$ images. The training set has $12,936$ images of $751$ identities. For testing, the query has $3,368$ images and the gallery has $19,732$ images. CUHK03-NP \cite{zhong2017re, li2014deepreid} contains $1,467$ persons from six cameras. In this dataset, $7,365$ images of $767$ identities are used for training. For testing, there are $1,400$ queries and $5,332$ gallery images. MSMT17 \cite{wei2018person} is the most diverse and challenging re-ID dataset, consisting of $126,441$ bounding boxes of $4,101$ identities taken by $15$ cameras. There are $32,621$ images for training, while the query has $11,659$ images and the gallery has $82,161$ images.\par The evaluation metrics are mean average precision (mAP) and the cumulative matching characteristic (CMC) at rank-$1$. The models trained on the source domains are directly tested on the target domain without transfer learning. The single-query evaluation protocol is adopted, without any post-processing methods. \subsection{Implementation Details} \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{Discriminator.pdf} \\[3pt] \caption{The design of the discriminator. Through multiple fully-connected and non-linear layers, the discriminator classifies the feature of a given image into its domain.} \vspace*{-4mm} \label{Discriminator} \end{figure} DomainMix is trained on four Tesla-V100 GPUs. The ImageNet-pre-trained \cite{deng2009imagenet} ResNet-50 \cite{he2016deep} and IBN-ResNet-50 \cite{pan2018two} are adopted as the backbones.
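To make the implementation concrete, a minimal PyTorch sketch of the domain discriminator of Fig.~\ref{Discriminator} and of its update objective, Eq.\ (\ref{wf3}) below, is given here; the layer widths and depth are illustrative assumptions rather than the exact architecture.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    # Stacked fully-connected + non-linear layers mapping a person
    # feature to domain logits (synthetic vs. real); the widths and
    # depth here are illustrative only.
    def __init__(self, feat_dim=2048, hidden=1024, num_domains=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_domains),
        )

    def forward(self, feats):
        return self.net(feats)

def discriminator_loss(backbone, disc, images, domain_labels):
    # Cross-entropy of the domain classifier on backbone features,
    # as in the discriminator objective; the backbone is frozen for
    # this update, matching the alternating scheme of the algorithm.
    with torch.no_grad():
        feats = backbone(images)
    return F.cross_entropy(disc(feats), domain_labels)
\end{verbatim}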
The Adam optimizer is used to optimize the networks, with a weight decay of $5 \times 10^{-4}$. All images are resized to $256 \times 128$ before being fed into the networks. Each training batch includes $64$ person images of $16$ actual or generated identities. The design of the discriminator is shown in Fig. \ref{Discriminator}. The weighting factors $\lambda^{m}$ and $\lambda^s$ in Eq.\ (\ref{wf2}) are both set to $1$. The total number of epochs is $60$. Due to the difficulty of training the mixed identity classifier, the model is first trained for $30$ epochs with the re-ID objectives only, before the alternating training starts. The number of iterations in each epoch is $2,000$. The initial learning rate is set to $3.5 \times 10^{-4}$, and it is decreased to $1/10$ of its previous value at the $10$th, $15$th, $30$th, $40$th, and $50$th epochs. \section{Pseudocode for the DomainMix Framework} For reference in Algorithm~\ref{algorithm}, we restate the overall training objective \begin{equation}\label{wf2} {\cal L}^{s}(\theta)=\lambda^{m} {\cal L}_{db}(\theta)+\lambda^{s} {\cal L}_{id}^{s}(\theta)+{\cal L}_{tri}^{s}(\theta), \end{equation} and the domain discriminator objective \begin{equation}\label{wf3} {\cal L}_{d}^{s}(\theta)=\frac{1}{N_{s}} \sum_{i=1}^{N_{s}} {\cal L}_{ce}\left(C_{d}\left(F\left(x_{i}^{s} \mid \theta\right)\right), d_{i}^{s}\right). \end{equation} \begin{algorithm} \SetKwInOut{Require}{Require} \Require{Labeled synthetic dataset ${D_{{s_1}}}$ and unlabeled real-world dataset ${D_{{s_2}}}$;} \Require{Weighting factors ${\lambda^{m}}$ and ${\lambda^s}$ for Eq. (\ref{wf2});} \BlankLine \For{$n\leftarrow 1$ \KwTo $num\_epochs$}{ Generate and select the training dataset $D_s$ according to the three criteria;\par Initialize the identity classifier adaptively;\par \For{each mini-batch $\left\{ {x_i^s,y_i^s} \right\} \subset D_s$}{ \If{$i \equiv 0 \pmod{\mathit{iters}}$}{ Update the discriminator by minimizing the objective function Eq. (\ref{wf3}) with the backbone fixed;} \par \Else{ Update the backbone by minimizing the objective function Eq. (\ref{wf2}) with the discriminator fixed;}} } \caption{DomainMix framework for generalizable person re-ID} \label{algorithm} \end{algorithm} \section{Further Ablation Study} \subsection{Experimental Results on the IBN-ResNet-50 Backbone} To further prove the efficacy of each component in the proposed DomainMix framework, we repeat the ablation study with the IBN-ResNet-50 \cite{pan2018two} backbone. The experimental results are shown in Table \ref{IBN}. Similar improvements brought by dynamic training dataset generation, adaptive classifier initialization, and the domain balance loss can be observed. \begin{table} \caption{Ablation studies for each component in the DomainMix framework on the two tasks. `+I/C/Q' denotes that the independence/compactness/quantity criterion is used. With or without ACI/DB denotes whether adaptive classifier initialization/the domain balance loss is used or not. The backbone used is IBN-ResNet-50 \cite{pan2018two}.}
\vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|p{3.1cm}|YY} \hline RP$+$MSMT $\to$ Market & mAP & rank-$1$ & RP$+$CUHK $\to$ Market & mAP & rank-$1$ \\ \hline\hline DBSCAN & $41.5$ & $70.0$ & DBSCAN & $38.0$ & $65.7$ \\ DBSCAN + I + C & $42.2$ & $70.2$ & DBSCAN + I + C & $37.8$ & $65.3$ \\ DBSCAN + Q & $45.1$ & $72.5$ & DBSCAN + Q & $44.5$ & $71.2$ \\ DBSCAN + I + C + Q & $45.7$ & $73.0$ & DBSCAN + I + C + Q & $45.2$ & $71.9$ \\ \hline Without ACI & $34.1$ & $61.2$ & Without ACI & $34.3$ & $62.0$ \\ With ACI & $45.7$ & $73.0$ & With ACI & $45.2$ & $71.9$ \\ \hline Without DB & $42.3$ & $71.0$ & Without DB & $42.8$ & $70.5$ \\ With DB & $45.7$ & $73.0$ & With DB & $45.2$ & $71.9$ \\ \hline \end{tabularx} \\ \label{IBN} \end{table} \subsection{Improvement Brought by Domain Balance Loss on CUHK03-NP} To prove that the pivotal component, i.e.\ the domain balance loss, can improve the baseline performance significantly, experimental results on the CUHK03-NP \cite{zhong2017re, li2014deepreid} dataset are displayed in Table \ref{CUHK}. The backbones used are ResNet-50 \cite{he2016deep} and IBN-ResNet-50 \cite{pan2018two}. The proposed domain balance loss remains effective when the target dataset is changed to CUHK03-NP \cite{zhong2017re, li2014deepreid}. \begin{table} \caption{Ablation studies for the domain balance loss on the CUHK dataset. The effectiveness of the domain balance loss is proved by comparing ``Without DB'' with ``With DB''.} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|YY} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Without DB}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline RP$+$MSMT $\to$ CUHK & $13.9$ & $14.9$ & $15.3$ & $15.6$ \\ RP$+$Market $\to$ CUHK & $14.3$ & $15.6$ & $15.7$ & $15.9$ \\ \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{With DB}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline RP$+$MSMT $\to$ CUHK & $16.7$ & $18.0$ & $18.3$ & $19.2$ \\ RP$+$Market $\to$ CUHK & $16.2$ & $17.4$ & $17.3$ & $17.4$ \\ \hline \end{tabularx} \\ \label{CUHK} \end{table} \subsection{Importance of Introducing Unlabeled Real-world Dataset} We also verify the importance of using the unlabeled real-world dataset, and discuss whether human annotations are essential for generalizable person re-ID. The baselines are denoted as ``Only RP/MSMT (labeled)/CUHK (labeled)'' in Table \ref{ABLLL2}. On the one hand, compared to training with synthetic data only, mixing unlabeled real-world data with synthetic data brings up to $7.0\%$ improvement in mAP. Further, if only labeled real-world data is adopted for training, the mAP drops by up to $14.7\%$. On the other hand, compared to adding labeled real-world data to synthetic data, using unlabeled real-world data still achieves competitive performance, though performance decreases can be observed. Thus, the real-world data is necessary for learning domain-invariant features and improving performance. Further, the experimental results of three settings, i.e.\ MSMT (labeled) + CUHK/Market (labeled), MSMT (labeled) + CUHK/Market (unlabeled), and MSMT (unlabeled) + CUHK/Market (labeled), are shown in Table \ref{REAL}.
The results show that settings without human annotations, such as RandPerson + MSMT (unlabeled), achieve quite competitive performance. Therefore, the proposed method is promising for achieving competitive performance completely without human annotations. \begin{table} \caption{The experimental results on the Market dataset. ``Only RP/MSMT (labeled)/CUHK (labeled)'' denotes that the baseline model is trained only on the RandPerson/MSMT (labeled)/CUHK (labeled) dataset.} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|YY} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline Only RandPerson & $36.5$ & $63.6$ & $40.3$ & $68.6$ \\ Only MSMT (labeled) & $32.7$ & $62.0$ & $39.3$ & $69.4$ \\ DomainMix (labeled) & $45.2$ & $70.5$ & $48.7$ & $74.6$ \\ DomainMix (unlabeled) & $43.5$ & $70.2$ & $45.7$ & $73.0$ \\ \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline Only RandPerson & $36.5$ & $63.6$ & $40.3$ & $68.6$ \\ Only CUHK (labeled) & $25.1$ & $50.3$ & $36.7$ & $64.8$ \\ DomainMix (labeled) & $42.7$ & $69.7$ & $47.2$ & $72.9$ \\ DomainMix (unlabeled) & $39.8$ & $67.5$ & $45.2$ & $71.9$ \\ \hline \end{tabularx} \\ \label{ABLLL2} \end{table} \begin{table} \caption{The experimental results for training on real-world datasets and testing on Market or CUHK.} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|YY} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Real-world $\to$ Market}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MSMT (L) + CUHK (U) & $35.1$ & $62.6$ & $40.5$ & $67.7$ \\ MSMT (U) + CUHK (L) & $31.2$ & $58.3$ & $37.8$ & $64.8$ \\ MSMT (L) + CUHK (L) & $40.4$ & $66.7$ & $47.6$ & $72.4$ \\ \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Real-world $\to$ CUHK}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MSMT (L) + Market (U) & $14.7$ & $14.0$ & $20.1$ & $20.5$ \\ MSMT (U) + Market (L) & $9.7$ & $9.5$ & $16.2$ & $15.3$ \\ MSMT (L) + Market (L) & $17.4$ & $16.4$ & $22.9$ & $21.2$ \\ \hline \end{tabularx} \\ \label{REAL} \end{table} \subsection{Comparison with UDA Algorithms} To show that state-of-the-art UDA algorithms cannot handle the proposed task well, their performance is reported in Table \ref{UDA}. ``RP $\to$ MSMT/CUHK (SDA/MMT/SpCL)'' denotes three state-of-the-art UDA algorithms. SDA \cite{ge2020structured} uses a GAN to reduce the domain gap between RandPerson and MSMT/CUHK. However, obvious performance degradation on the unseen domain can be observed, because the model is biased to MSMT/CUHK and neglects RandPerson. SpCL \cite{ge2020selfpaced} is a cluster-based algorithm, which uses domain-specific batch normalization (DSBN) \cite{chang2019domain} and combines the source domain with the target domain for training. However, DSBN hinders generalizability because the BN statistics are biased to a certain domain.
Further, we find that the contrastive loss in SpCL \cite{ge2020selfpaced} is harmful for domain generalization.\par \begin{table} \caption{Comparison between the proposed DomainMix and state-of-the-art UDA algorithms. SDA \cite{ge2020structured}, MMT \cite{Ge2020mutual}, and SpCL \cite{ge2020selfpaced} are three state-of-the-art UDA algorithms. It can be seen that the UDA algorithms cannot handle the proposed task well.} \vspace*{2mm} \footnotesize \begin{tabularx}{\hsize}{p{3.1cm}|YY|YY} \hline \multicolumn{1}{c|}{\multirow{2}{*}{RP$+$MSMT $\to$ Market}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline RP$\to$MSMT (SDA) & $26.6$ & $56.3$ & $31.3$ & $60.9$ \\ RP$\to$MSMT (MMT) & $22.7$ & $46.5$ & $30.0$ & $57.5$ \\ RP$\to$MSMT (SpCL) & $24.2$ & $49.8$ & $33.5$ & $60.4$ \\ \hline DomainMix & $43.5$ & $70.2$ & $45.7$ & $73.0$ \\ \hline \multicolumn{1}{c|}{\multirow{2}{*}{RP$+$CUHK $\to$ Market}} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{IBN-ResNet-50} \\ \cline{2-5} & \footnotesize{mAP} & \footnotesize{rank-$1$} & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline RP$\to$CUHK (SDA) & $26.6$ & $55.1$ & $30.4$ & $58.6$ \\ RP$\to$CUHK (MMT) & $24.6$ & $51.2$ & $29.9$ & $56.3$ \\ RP$\to$CUHK (SpCL) & $9.3$ & $24.1$ & $18.3$ & $39.4$ \\ \hline DomainMix & $39.8$ & $67.5$ & $45.2$ & $71.9$ \\ \hline \end{tabularx} \\ \label{UDA} \end{table} \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{GAN.pdf} \caption{The visualization of the images transferred from RandPerson \cite{wang2020surpassing} to MSMT17 \cite{wei2018person} or CUHK03-NP \cite{zhong2017re, li2014deepreid}.} \label{GAN} \end{figure*} \vspace*{-10mm} \section{Further Analysis of GAN-based UDA Algorithms} \vspace*{-5mm} Unsupervised Domain Adaptation (UDA) for person re-identification aims at learning a model on a labeled source domain and adapting it to an unlabeled target domain. Some methods, such as \cite{zhong2018generalizing, deng2018image, wei2018person, li2019cross}, try to reduce the domain gap between the two domains using a Generative Adversarial Network (GAN) \cite{goodfellow2014generative}. Our proposed task aims at combining a labeled synthetic dataset with unlabeled real-world data to learn a ready-to-use model that generalizes well to an unseen target domain. One possible solution for learning domain-invariant features is to reduce the domain gap between synthetic and real-world data. The similarity between the two tasks lies in reducing the domain gap between two different datasets. However, GAN-based UDA algorithms cannot perform well on the proposed task, because after the transfer the model learns domain-specific features of the real-world data and ignores the diversity of the synthetic data.\par To further analyze why GAN-based UDA algorithms cannot work well on the proposed task, we visualize the images transferred from RandPerson \cite{wang2020surpassing} to MSMT17 \cite{wei2018person} or CUHK03-NP \cite{zhong2017re, li2014deepreid} in Fig. \ref{GAN}. First, the environmental lighting is diverse in RandPerson \cite{wang2020surpassing}, but when the images are transferred to CUHK03-NP \cite{zhong2017re, li2014deepreid}, the environmental lighting appears to come from a single source.
Second, the colors of the image backgrounds become similar when RandPerson \cite{wang2020surpassing} is transferred to MSMT17 \cite{wei2018person} or CUHK03-NP \cite{zhong2017re, li2014deepreid}. Finally, the transfer process is imperfect and may introduce color changes, distortion of people, and other artifacts.\par The reduction of environmental lighting diversity hinders the accuracy of person matching under different cameras. Besides, similar backgrounds may prevent the model from learning domain-invariant features. Lastly, the imperfection of the transfer algorithms may cause the neural network not to fit the data well.\par Therefore, though GAN-based UDA algorithms can reduce the domain gap between two domains at the image level, they cannot generalize well to an unseen target domain, and they are not suitable for the proposed task. \vspace*{-3.5mm} \section{Further Comparisons between DomainMix and the State-of-the-arts} Some experimental results with other backbones are shown in this section. Further, to conduct reasonably fair comparisons with the state-of-the-art algorithms, we also use the authors' codes to evaluate their performance when trained on RandPerson \cite{wang2020surpassing}. The experimental results are shown in Table \ref{SOTA2}. It can be observed that, compared to existing methods trained on either labeled MSMT17 \cite{wei2018person} or RandPerson \cite{wang2020surpassing}, the proposed DomainMix generally performs better, thanks to its ability to additionally use the unlabeled MSMT17 \cite{wei2018person}. Note that QAConv \cite{LiaoQAConv} achieves the best rank-$1$ on MSMT17 \cite{wei2018person}. However, our method is general and can be built upon other baseline methods such as QAConv \cite{LiaoQAConv}. \begin{table} \footnotesize \caption{Comparison with state-of-the-arts on Market1501 \cite{zheng2015scalable}, CUHK03-NP \cite{zhong2017re, li2014deepreid}, and MSMT17 \cite{wei2018person}. `$^\S$' denotes that the results are taken from the GitHub repository of the original paper, `$^\ast$' denotes our implementation, and `$^\dagger$' indicates that the results are reproduced based on the authors' codes.
`L' or `U' denotes whether the used source data is labeled or unlabeled, respectively.} \vspace*{2mm} \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{Market1501} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MGN \cite{wang2018learning,yuan2020calibrated} & MSMT (L) & $25.1$ & $48.7$ \\ ADIN \cite{yuan2020calibrated} & MSMT (L) & $22.5$ & $50.1$ \\ ADIN-Dual \cite{yuan2020calibrated} & MSMT (L) & $30.3$ & $59.1$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & MSMT (L) & $35.2$ & $64.9$ \\ SNR \cite{jin2020style} & MSMT (L) & $41.4$ & $70.1$ \\ QAConv$^\dagger$ \cite{LiaoQAConv} & MSMT (L) & $35.8$ & $66.9$ \\ \hline MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $17.7$ & $37.4$ \\ MGN-IBN$^\dagger$ \cite{wang2018learning} & RandPerson & $20.1$ & $41.4$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $39.0$ & $67.0$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $34.8$ & $65.6$ \\ QAConv-IBN$^\S$ \cite{LiaoQAConv} & RandPerson & $36.8$ & $68.0$ \\ Baseline$^\ast$ & RandPerson & $36.5$ & $63.6$ \\ Baseline-IBN$^\ast$ & RandPerson & $40.3$ & $68.6$ \\ \hline DomainMix & RP$+$MSMT (U) & $43.5$ & $70.2$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$MSMT (U) & $44.6$ & $72.9$ \\ DomainMix-IBN & RP$+$MSMT (U) & $\textbf{45.7}$ & $\textbf{73.0}$ \\ \hline \end{tabularx} \small \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{CUHK03-NP} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline MGN \cite{wang2018learning,yuan2020calibrated} & Market (L) & $7.4$ & $8.5$ \\ MuDeep \cite{qian2019leader} & Market (L) & $9.1$ & $10.3$ \\ QAConv$^\dagger$ \cite{LiaoQAConv} & MSMT (L) & $15.2$ & $16.8$ \\ \hline MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $7.7$ & $7.4$ \\ MGN-IBN$^\dagger$ \cite{wang2018learning} & RandPerson & $8.4$ & $9.1$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $12.9$ & $13.6$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $11.0$ & $14.3$ \\ QAConv-IBN$^\S$ \cite{LiaoQAConv} & RandPerson & $10.8$ & $12.9$ \\ Baseline$^\ast$ & RandPerson & $13.0$ & $14.6$ \\ Baseline-IBN$^\ast$ & RandPerson & $13.6$ & $14.3$ \\ \hline DomainMix & RP$+$MSMT (U) & $16.7$ & $18.0$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$MSMT (U) & $16.9$ & $17.5$ \\ DomainMix-IBN & RP$+$MSMT (U) & $\textbf{18.3}$ & $\textbf{19.2}$ \\ \hline \end{tabularx} \small \begin{tabularx}{\hsize}{|p{2.82cm}|p{2.1cm}|YY|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{Source data}} & \multicolumn{2}{c|}{MSMT17} \\ \cline{3-4} & & \footnotesize{mAP} & \footnotesize{rank-$1$} \\ \hline\hline QAConv$^\dagger$ \cite{LiaoQAConv} & Market (L) & $8.3$ & $26.4$ \\ \hline MGN$^\dagger$ \cite{wang2018learning} & RandPerson & $3.0$ & $10.1$ \\ MGN-IBN$^\dagger$ \cite{wang2018learning} & RandPerson & $4.0$ & $12.5$ \\ OSNet-IBN$^\dagger$ \cite{zhou2019omni} & RandPerson & $12.4$ & $34.3$ \\ QAConv$^\S$ \cite{LiaoQAConv} & RandPerson & $10.7$ & $34.3$ \\ QAConv-IBN$^\S$ \cite{LiaoQAConv} & RandPerson & $12.1$ & $\textbf{36.6}$ \\ Baseline$^\ast$ & RandPerson & $7.9$ & $23.0$ \\ Baseline-IBN$^\ast$ & RandPerson & $10.9$ & $30.6$ \\ \hline DomainMix & RP$+$Market (U) & $9.3$ & $25.3$ \\ \footnotesize{DomainMix-OSNet-IBN} & RP$+$Market (U) & $\textbf{13.6}$ & $36.2$ \\ DomainMix-IBN & RP$+$Market (U) & $12.1$ & $33.1$ \\ \hline \end{tabularx} \label{SOTA2}
\end{table}
\section{Introduction} \subsection{Formulation of the main result} Let $M$ be a compact orientable surface. To a holomorphic one-form $\boldsymbol \omega$ on $M$ one can assign the corresponding {\it vertical} flow $h_t^+$ on $M$, i.e., the flow at unit speed along the leaves of the foliation $\Re(\boldsymbol \omega)=0$. The vertical flow preserves the measure ${\mathfrak m}=i(\boldsymbol \omega\wedge {\overline \boldsymbol \omega})/2$, the area form induced by $\boldsymbol \omega$. By a theorem of Katok \cite{katok}, the flow $h_t^+$ is never mixing. The moduli space of abelian differentials carries a natural volume measure, called the Masur-Veech measure \cite{masur}, \cite{veech}. For almost every abelian differential with respect to the Masur-Veech measure, Masur \cite{masur} and Veech \cite{veech} independently and simultaneously proved that the flow $h_t^+$ is uniquely ergodic. Weak mixing for almost all translation flows has been established by Veech in \cite{veechamj} under additional assumptions on the combinatorics of the abelian differentials, and by Avila and Forni \cite{AF} in full generality. The spectrum of translation flows is therefore almost surely continuous and always has a singular component. \thispagestyle{empty} Sinai [personal communication] raised the question of finding the local asymptotics of the spectral measures of translation flows. In \cite{BuSo18a,BuSo19} we developed an approach to this problem and succeeded in obtaining H\"older estimates for spectral measures in the case of surfaces of genus 2. The proof proceeds via uniform estimates of twisted Birkhoff integrals in the symbolic framework of random Markov compacta and arguments of Diophantine nature in the spirit of Salem, Erd\H{o}s and Kahane. Recently, Forni \cite{Forni2} obtained H\"older estimates for spectral measures in the case of surfaces of arbitrary genus. While Forni does not use the symbolic formalism, the main idea of his approach can also be formulated in symbolic terms: namely, that instead of the {\it scalar} estimates of \cite{BuSo18a,BuSo19}, we can use the Erd\H{o}s-Kahane argument in {\it vector} form, cf. \eqref{lattice} et seq. Following the idea of Forni and directly using the vector form of the Erd\H{o}s-Kahane argument yields a considerable simplification of our initial proof and allows us to prove the H\"older property for a general class of random Markov compacta, cf. \cite{Buf-umn}, and, in particular, for almost all translation flows on surfaces of arbitrary genus. Let ${\mathcal H}$ be a stratum of abelian differentials on a surface of genus $g\ge 2$. The natural smooth Masur-Veech measure on the stratum ${\mathcal H}$ is denoted by $\mu_{\mathcal H}$. Our main result is that for almost all abelian differentials in ${\mathcal H}$, the spectral measures of Lipschitz functions with respect to the corresponding translation flows have the H{\"o}lder property. Recall that for a square-integrable test function $f$, the spectral measure $\sigma_f$ is defined by $$ \widehat{\sigma}_f(-t) = \langle f\circ h_t^+, f \rangle,\ \ \ t\in {\mathbb R}, $$ see Section~\ref{sec-twist}. A point mass of the spectral measure corresponds to an eigenvalue, so H\"older estimates for spectral measures quantify weak mixing for our systems. \begin{theorem}\label{main-moduli} There exists $\gamma>0$ such that for $\mu_{\mathcal H}$-almost every abelian differential $(M, \boldsymbol \omega)\in {\mathcal H}$ the following holds.
For any $B>1$ there exist constants $C=C(\boldsymbol \omega,B)$ and $r_0=r_0(\boldsymbol \omega,B)$ such that for any Lipschitz function $f$ on $M$, for all $\lambda\in [B^{-1},B]$ we have \begin{equation} \label{eq-moduli} \sigma_f([\lambda-r, \lambda+r])\le C\|f\|_L\cdot r^\gamma\ \ \mbox{for all} \ r\in (0, r_0). \end{equation} \end{theorem} This theorem is analogous to Forni \cite[Corollary 1.7]{Forni2}. \begin{remark} \label{remark-main} {\em Our argument, as well as Forni's, see \cite[Remark 1.9]{Forni2}, remains valid for almost every translation flow under a more general class of measures. Let $\mu$ be a Borel probability measure invariant and ergodic under the Teichm{\"u}ller flow. Let $\kappa$ be the number of positive Lyapunov exponents for the Kontsevich-Zorich cocycle under the measure $\mu$. To formulate our condition precisely, recall that, by the Hubbard-Masur theorem on the existence of cohomological coordinates, the moduli space of abelian differentials with prescribed singularities can be locally identified with the space ${\widetilde H}$ of relative cohomology, with complex coefficients, of the underlying surface with respect to the singularities. Consider the subspace $H\subset {\widetilde H}$ corresponding to the absolute cohomology, the corresponding fibration of ${\widetilde H}$ into translates of $H$, and its image, a fibration $\overline {\mathcal F}$ on the moduli space of abelian differentials with prescribed singularities. Each fibre is locally isomorphic to $H$ and thus has dimension equal to $2g$, where $g$ is the genus of the underlying surface. We now restrict ourselves to the subspace of abelian differentials of area $1$, and let the fibration $\mathcal F$ be the restriction of the fibration $\overline {\mathcal F}$; the dimension of each fibre of the fibration $\mathcal F$ is equal to $2g-1$. Almost every fibre of $\mathcal F$ carries a conditional measure, defined up to multiplication by a constant, of the measure $\mu$. If there exists $\delta>0$ such that the conditional measure of $\mu$ on almost every fibre of $\mathcal F$ has Hausdorff dimension at least $2g-\kappa+\delta$, then Theorem \ref{main-moduli} holds for $\mu$-almost every abelian differential. In the case of the Masur-Veech measure $\mu_{\mathcal H}$, it is well-known that the conditional measure on almost every fibre is mutually absolutely continuous with the Lebesgue measure, hence has Hausdorff dimension $2g-1$. By the celebrated result of Forni \cite{Forni}, there are $\kappa=g$ positive Lyapunov exponents for the Kontsevich-Zorich cocycle under the measure $\mu_{\mathcal H}$, so Theorem~\ref{main-moduli} follows by taking any $\delta\in (0,1)$. } \end{remark} \medskip The proof of the H\"older property for spectral measures proceeds via upper bounds on the growth of twisted Birkhoff integrals $$ S_R^{(x)}(f,\lambda) := \int_0^R e^{-2\pi i \lambda \tau} f\circ h^+_\tau(x)\,d\tau. $$ \begin{theorem} \label{th-twisted} There exists $\alpha\in (0,1)$ such that for $\mu_{\mathcal H}$-almost every abelian differential $(M, \boldsymbol \omega)\in {\mathcal H}$ and any $B>1$ there exist $C'=C'(\boldsymbol \omega,B)$ and $R_0 = R_0(\boldsymbol \omega,B)$ such that for any Lipschitz function $f$ on $M$, for all $\lambda\in [B^{-1},B]$ and all $x\in M$, $$ \left|S_R^{(x)}(f,\lambda)\right| \le C' R^\alpha\ \ \mbox{for all}\ R\ge R_0. $$ \end{theorem} This theorem is analogous to Forni \cite[Theorem 1.6]{Forni2}.
The derivation of Theorem~\ref{main-moduli} from Theorem~\ref{th-twisted} is standard, with $\gamma = 2(1-\alpha)$; see Lemma~\ref{lem-varr}. In fact, in order to obtain (\ref{eq-moduli}), $L^2$-estimates (with respect to the area measure on $M$) of $S_R^{(x)}(f,\lambda)$ suffice; we obtain bounds that are uniform in $x\in M$, which is of independent interest. \subsection{Quantitative weak mixing} There is a close relation between H\"older regularity of spectral measures and quantitative rates of weak mixing, see \cite{BuSo18b,Forni2}. One can also note a connection of our arguments with the proofs of weak mixing, via Veech's criterion \cite{veechamj}. A translation flow can be represented, by considering its return map to a transverse interval, as a special flow over an interval exchange transformation (IET) with a roof function constant on each sub-interval. The roof function is determined by a vector $\vec s\in H\subset {\mathbb R}^m$ with positive coordinates, where $m$ is the number of sub-intervals of the IET and $H$ is a subspace of dimension $2g$ corresponding to the space of absolute cohomology from Remark~\ref{remark-main}. Let ${\mathbb A}(n,{\bf a})$ be the Zorich acceleration of the Masur-Veech cocycle on ${\mathbb R}^m$, corresponding to returns to a ``good set'', where ${\bf a}$ encodes the IET. Veech's criterion \cite[\S 7]{veechamj} says that if $$ \limsup_{n\to \infty} \|{\mathbb A}(n,{\bf a})\cdot \omega \vec{s}\|_{{\mathbb R}^m/{\mathbb Z}^m}>0\ \ \mbox{for all}\ \omega\ne 0, $$ then the translation flow corresponding to $\vec s$ is weakly mixing. This was used by Avila and Forni \cite[Theorem A.2]{AF} to show that for almost every ${\bf a}$ the set of $\vec s\in H$, such that the special flow is not weakly mixing, has Hausdorff dimension at most $g+1$. On the other hand, our Proposition~\ref{prop-quant} says that if the set $$ \{n\in {\mathbb N}:\ \|{\mathbb A}(n,{\bf a})\cdot \omega \vec{s}\|_{{\mathbb R}^m/{\mathbb Z}^m} \ge \varrho\} $$ has positive lower density in ${\mathbb N}$ for some $\varrho>0$, uniformly in $\omega\ne 0$ bounded away from zero and from infinity, then the spectral measures corresponding to Lipschitz functions have the H\"older property. The Erd\H{o}s-Kahane argument is used to estimate the dimension of those $\vec s$ for which this fails. In Forni's recent work, a version of the weak stable space for the Kontsevich-Zorich cocycle, denoted $W_K^s(h)$ and defined in \cite[Section 6]{Forni2}, seems analogous to our exceptional set ${\mathfrak E}$ defined in (\ref{def-excep}). The Hausdorff dimension of this set is estimated in \cite[Theorem 6.3]{Forni2}, with the help of \cite[Lemma 6.2]{Forni2}, which plays a r\^ole similar to our Erd\H{o}s-Kahane argument. \subsection{Organization of the paper, comparison with \cite{BuSo18a} and \cite{BuSo19}} A large part of the paper \cite{BuSo18a} was written in complete generality, without the genus 2 assumptions, and is directly used here; for example, we do not reproduce the estimates of twisted Birkhoff integrals using generalized matrix Riesz products, but refer the reader to \cite[Section 3]{BuSo18a} instead. Sharper estimates of matrix Riesz products were obtained in \cite{BuSo19}, where the matrix Riesz products are interpreted in terms of the {\em spectral cocycle}. In particular, we established a formula relating the local upper Lyapunov exponents of the cocycle with the pointwise lower dimension of spectral measures. Note nonetheless that the cocycle structure is not used in this paper.
Section 3, parallel to \cite[Section 4]{BuSo18a}, contains the main result on H\"older regularity of spectral measures in the setting of random $S$-adic, or, equivalently, random Bratteli-Vershik systems. The main novelty is that here we only require that the second Lyapunov exponent $\theta_2$ of the Kontsevich-Zorich cocycle be positive, while in \cite{BuSo18a} the assumption was that $\theta_1 > \theta_2 >0$ are the {\em only} non-negative exponents. The preliminary Section 4 closely follows \cite{BuSo18a}. The crucial changes occur in Sections 5 and 6, where, in contrast with \cite{BuSo18a}, Diophantine approximation is established {\it in vector form}, cf. Lemma~\ref{lem-lattice}. The exceptional set is defined in (\ref{def-excep}), and the Hausdorff dimension of the exceptional set is estimated in Proposition~\ref{prop-EK}. Although the general strategy of the ``Erd\H{o}s-Kahane argument'' remains, the implementation is now significantly simplified. In \cite{BuSo18a} we worked with scalar parameters, the coordinates of the vector of heights with respect to the Oseledets basis, but here we simply consider the vector parameter and work with the projection of the vector of heights to the strong unstable subspace. In particular, the cumbersome estimates of \cite[Section 8]{BuSo18a} are no longer needed. Section 7, devoted to the derivation of the main theorem on translation flows from its symbolic counterpart, parallels \cite[Section 11]{BuSo18a} with some changes. The most significant one is that we require a stronger property of the good returns, which is achieved in Lemma~\ref{lem-combi1}. On the other hand, the large deviation estimate for the Teichm\"uller flow required in Theorem~\ref{th-main1} remains unchanged, and we directly use \cite[Prop.\,11.3]{BuSo18a}. \noindent {\bf{Acknowledgements.}} We are deeply grateful to Giovanni Forni for generously sharing his ideas with us and for sending us his manuscript containing his proof of the H\"older property in arbitrary genus. B. S. would like to thank Corinna Ulcigrai for her hospitality in Bristol and Z\"urich, and for many helpful discussions. A. B.'s research is supported by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme, grant 647133 (ICHAOS), by the Agence Nationale de Recherche, project ANR-18-CE40-0035, and by the Russian Foundation for Basic Research, grant 18-31-20031. B. S.'s research is supported by the Israel Science Foundation (ISF), grant 396/15. \section{Preliminaries} \subsection{Translation flows and their symbolic representation} The translation flow on the surface can be realized as a special flow over an interval exchange transformation; for a detailed exposition, see e.g. Viana's survey \cite{viana2}. Veech \cite{veech} constructed, for any connected component of a stratum ${\mathcal H}$, a measurable map from the space ${\mathcal V}({\mathcal R})$ of zippered rectangles corresponding to the Rauzy class ${\mathcal R}$ to ${\mathcal H}$, which intertwines the Teichm\"uller flow on ${\mathcal H}$ with a renormalization flow $P_t$ that Veech defined on ${\mathcal V}({\mathcal R})$. Our convention here follows that of \cite{Buf-umn}.
Section 4.3 of \cite{Buf-umn} gives a symbolic coding of the flow $P_t$ on ${\mathcal V}({\mathcal R})$, namely, a map \begin{equation} \label{map-veech} \Xi_{{\mathcal R}}: ({\mathcal V}({\mathcal R}),\wtil\nu) \to ({\Omega},\Prob) \end{equation} defined almost everywhere, where $\wtil\nu$ is the pull-back of the Masur-Veech measure from ${\mathcal H}$ and $(\Omega,\Prob)$ is a space of Markov compacta. The first return map of the flow $P_t$ for an appropriate Poincar\'e section is mapped by $\Xi_{\mathcal R}$ to the shift map $\sigma$ on $(\Omega,\Prob)$. This correspondence maps the Rauzy-Veech cocycle over the Teichm\"uller flow into the renormalization cocycle for the Markov compacta. Moreover, the map $\Xi_{\mathcal R}$ induces a map, defined for a.e.\ ${\mathcal X}\in {\mathcal V}({\mathcal R})$, from the corresponding Riemann surface $M({\mathcal X})$ to a 2-sided Markov compactum $X\in {\Omega}$, intertwining their vertical and horizontal flows. For the theory of Markov compacta and Bratteli-Vershik transformations, see the original papers \cite{Vershik1,Vershik2,Vershik-Livshits}; for their spectral theory see \cite{solomyak}. The framework of 2-sided Markov compacta and their applications to translation flows was developed in \cite{Buf-umn}. It is shown in \cite{Buf-umn} that the ``vertical flow'' on a 2-sided Markov compactum $X$ is isomorphic to a special flow over the Vershik map, defined over the positive ``half'' $X_+$ of the compactum $X$. The roof function of this special flow is piecewise-constant and depends only on the first symbol. An equivalent framework of random $S$-adic flows, cf. \cite{BuSo18a} and references therein, is used in \cite{BuSo19} and in this paper. \subsection{Substitutions and $S$-adic systems} The reader is referred to \cite{Fogg,Queff} for the background on substitution systems. Let ${\mathcal A}=\{1,\ldots,m\}$ be a finite alphabet; we denote by ${\mathcal A}^+$ the set of finite (non-empty) words in ${\mathcal A}$. A {\em substitution} is a map $\zeta:\, {\mathcal A} \to {\mathcal A}^+$, which is extended to an action on ${\mathcal A}^+$ and ${\mathcal A}^{\mathbb N}$ by concatenation. The {\em substitution matrix} is defined by \begin{equation} \label{sub-mat} {\sf S}_\zeta (i,j) = \mbox{number of symbols}\ i\ \mbox{in the word}\ \zeta(j). \end{equation} Denote by ${\mathfrak A}$ the set of substitutions $\zeta$ on ${\mathcal A}$ with the property that all letters appear in the set of words $\{\zeta(a):\,a\in {\mathcal A}\}$ and there exists $a$ such that $|\zeta(a)|>1$. Below we identify $\Omega$ with a space of 2-sided sequences of substitutions from ${\mathfrak A}$ and write $\Omega_+$ for the set of 1-sided sequences of substitutions. An element of $\Omega_+$ provides a symbolic coding for almost every interval exchange. The ``roof vector'' is determined by the left side of a sequence in $\Omega$. An element of $\Omega_+$ will be denoted by ${\bf a}^+ = \{\zeta_j\}_{j\ge 1}$. The Rauzy-Veech, or renormalization, cocycle in this notation becomes a matrix cocycle on the space ${\mathbb R}^m$ over the shift map on $\Omega_+$, given by the (transpose) substitution matrix of $\zeta_1$: $$ {\mathbb A}({\bf a}):= {\sf S}_{\zeta_1}^t;\ \ {\mathbb A}({\bf a},n) := {\mathbb A}(\sigma^{n-1}{\bf a})\cdot \ldots \cdot {\mathbb A}({\bf a}). $$ Let ${\bf a}^+ =\{\zeta_n\}_{n\ge 1}$ be a 1-sided sequence of substitutions on ${\mathcal A}$. Denote $$ \zeta^{[n]} := \zeta_1\circ \cdots\circ\zeta_n,\ \ n\ge 1. $$
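For a concrete illustration of (\ref{sub-mat}), consider the Fibonacci substitution $\zeta(1)=12$, $\zeta(2)=1$ on ${\mathcal A}=\{1,2\}$ (a standard example, used here only for orientation). Counting occurrences of each symbol gives $$ {\sf S}_\zeta = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}, $$ and for the composition $\zeta\circ\zeta:\ 1\mapsto 121,\ 2\mapsto 12$ one checks directly that ${\sf S}_{\zeta\circ\zeta} = {\sf S}_\zeta^2$, in accordance with the multiplicativity recalled below.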
Recall that ${\sf S}_{\zeta_1\circ \zeta_2} = {\sf S}_{\zeta_1}{\sf S}_{\zeta_2}$. We will sometimes write $$ {\sf S}_j:= {\sf S}_{\zeta_j}\ \ \mbox{and}\ \ {\sf S}^{[n]}:= {\sf S}_{\zeta^{[n]}}. $$ We will also consider subwords of the sequence ${\bf a}$ and the corresponding substitutions obtained by composition. Denote \begin{equation} \label{notation1} {\sf S}_{\bf q} = {\sf S}_n\cdots {\sf S}_\ell\ \ \mbox{and}\ \ A({\bf q}) = {\sf S}_{\bf q}^t\ \ \mbox{for}\ \ {\bf q} = \zeta_{n}\ldots\zeta_{\ell}. \end{equation} Given ${\bf a}^+$, denote by $X_{{\bf a}^+}\subset {\mathcal A}^{\mathbb Z}$ the subspace of all two-sided sequences whose every subword appears as a subword of $\zeta^{[n]}(b)$ for some $b\in {\mathcal A}$ and $n\ge 1$. Let $T$ be the left shift on ${\mathcal A}^{\mathbb Z}$; then $(X_{{\bf a}^+},T)$ is the (topological) $S$-adic dynamical system. We refer to \cite{BD} for the background on $S$-adic shifts. A sequence of substitutions is called (weakly) primitive if for any $n\in {\mathbb N}$ there exists $k\in{\mathbb N}$ such that ${\sf S}_n\cdots{\sf S}_{n+k}$ is a matrix with strictly positive entries. This implies minimality and unique ergodicity of the $S$-adic shift, see \cite{BD}. Weak primitivity is known to hold for almost every ${\bf a}^+\in \Omega_+$. In fact, it will be convenient for us to deal with an abstract framework of a sequence of substitutions ${\bf a}^+\in {\mathfrak A}^{\mathbb N}$, with the following standing assumptions: \medskip {\bf (A1)} {\em There is a word ${\bf q}$ which appears in ${\bf a}^+$ infinitely often, for which ${\sf S}_{\bf q}$ has all entries strictly positive.} \medskip {\bf (A2)} {\em Every substitution matrix is non-singular: $\det({\sf S}_j) \ne 0$ for all $j\ge 1$.} \medskip Condition (A1) implies that $(X_{{\bf a}^+},T)$ is minimal and uniquely ergodic, and we denote the unique invariant probability measure by $\mu_{{\bf a}^+}$ (condition (A2) will be needed below). We further let $({\mathfrak X}_{{\bf a}^+}^{\vec{s}}, h_t, \wtil{\mu}_{{\bf a}^+})$ be the special flow over $(X_{{\bf a}^+},\mu_{{\bf a}^+}, T)$, corresponding to the piecewise-constant roof function $\phi$ defined by a vector $\vec{s}\in {\mathbb R}^m_+$, that is, $$ \phi(x) = s_{x_0},\ \ x\in X_{{\bf a}^+}. $$ The measure $\wtil\mu_{{\bf a}^+}$ is induced by the product of $\mu_{{\bf a}^+}$ and the Lebesgue measure on ${\mathbb R}$. By definition, we have a union, disjoint in measure: $$ {\mathfrak X}_{{\bf a}^+}^{\vec{s}} = \bigcup_{a\in {\mathcal A}} [a]\times [0,s_a], $$ where $X_{{\bf a}^+} = \bigsqcup_{a\in {\mathcal A}} [a]$ is the partition into cylinder sets according to the value of $x_0$. It is convenient to use the normalization $$ \vec{s} \in \Delta^{m-1}_{\bf a}:= \Bigl\{\vec s\in {\mathbb R}^m_+: \ \sum_{a\in {\mathcal A}} \mu_{{\bf a}^+}([a]) \cdot s_a = 1\Bigr\}, $$ so that $\wtil \mu_{{\bf a}^+}$ is a probability measure on ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$. Below we often omit the subscript and write $\mu = \mu_{{\bf a}^+}$, $\wtil\mu = \wtil\mu_{{\bf a}^+}$, when it is clear from the context. \subsection{Cylindrical functions} We define {\em bounded cylindrical functions} (or {\em cylindrical functions of level zero}) by the formula \begin{equation} \label{fcyl2} f(x,t)=\sum_{a\in {\mathcal A}} {1\!\!1}_{[a]}(x) \cdot \psi_a(t),\ \ \mbox{with}\ \ \psi_a\in L^\infty[0,s_a]. \end{equation} Cylindrical functions of level zero do not suffice to describe the spectral type of the flow; rather, we need functions depending on an arbitrary fixed number of symbols.
Cylindrical functions of level $\ell\ge 1$ depend on the first $\ell$ edges of the path representing a point in the Bratteli-Vershik representation. In the $S$-adic framework, we say that $f$ is a {\em bounded cylindrical function of level} $\ell$ if \begin{equation} \label{fcyl3} f(x,t)=\sum_{a\in {\mathcal A}} {1\!\!1}_{\zeta^{[\ell]}[a]}(x) \cdot \psi^{(\ell)}_a(t),\ \ \mbox{with}\ \ \psi^{(\ell)}_a\in L^\infty [0,s^{(\ell)}_a], \end{equation} where $$ \vec{s}^{\,(\ell)}= (s^{(\ell)}_a)_{a\in {\mathcal A}}:= ({\sf S}^{[\ell]})^t \vec{s}. $$ This definition depends on the notion of {\em recognizability} for the sequence of substitutions, see \cite{BSTY}, which generalizes the {\em bilateral recognizability} of B. Moss\'e \cite{Mosse} for a single substitution; see also Sections 5.5 and 5.6 in \cite{Queff}. By the definition of the space $X_{{\bf a}^+}$, for every $n\ge 1$, every $x\in X_{{\bf a}^+}$ has a representation of the form \begin{equation} \label{recog} x = T^k\bigl(\zeta^{[n]}(x')\bigr),\ \ \mbox{where}\ \ x'\in X_{\sigma^n {\bf a}^+},\ \ 0\le k < |\zeta^{[n]}(x_0)|. \end{equation} Here $\sigma$ denotes the left shift, and we recall that a substitution $\zeta$ acts on ${\mathcal A}^{\mathbb Z}$ by $$ \zeta(\ldots a_{-1}.a_0 a_1\ldots) = \ldots \zeta(a_{-1}).\zeta(a_0)\zeta(a_1)\ldots $$ We say that the sequence of substitutions is {\em recognizable at level $n$} if the representation (\ref{recog}) is unique. In view of condition (A2), by \cite[Theorem 3.1]{BSTY}, the sequence of substitutions ${\bf a}^+$ is recognizable at all levels, because property (A1) implies minimality of the $S$-adic system. It follows that \begin{equation} \label{KR} {\mathcal P}_n = \{T^i(\zeta^{[n]}[a]):\ a\in {\mathcal A},\ 0 \le i < |\zeta^{[n]}(a)|\} \end{equation} is a sequence of Kakutani-Rokhlin partitions for $n\ge n_0({\bf a})$, which generates the Borel $\sigma$-algebra on the space $X_{{\bf a}^+}$. We emphasize that, in general, $\zeta^{[n]}[a]$ may be a proper subset of $[\zeta^{[n]}(a)]$. Using the uniqueness of the representation (\ref{recog}) and the Kakutani-Rokhlin partitions (\ref{KR}), we obtain for $n\ge n_0$: $$ \mu([a]) = \sum_{b\in {\mathcal A}}{\sf S}^{[n]}(a,b)\,\mu(\zeta^{[n]}[b]),\ \ a\in {\mathcal A}, $$ hence \begin{equation} \label{eq-measure} \vec{\mu}_0 = {\sf S}^{[n]}\vec{\mu}_n,\ \ \mbox{where}\ \ \vec{\mu}_n = \bigl(\mu(\zeta^{[n]}[b])\bigr)_{b\in {\mathcal A}} \end{equation} is a column-vector. Similarly to (\ref{eq-measure}), we have \begin{equation} \label{eq-measure2} \vec{\mu}_n = {\sf S}_{n+1}\vec{\mu}_{n+1},\ \ n\ge n_0. \end{equation} It follows from the above that, for any $\ell\ge 1$, the special flow $({\mathfrak X}_{{\bf a}^+}^{\vec{s}},\wtil{\mu},h_t)$ is measurably isomorphic to the special flow over the system $(\zeta^{[\ell]}(X_{\sigma^\ell{\bf a}^+}), \zeta^{[\ell]}\circ T\circ (\zeta^{[\ell]})^{-1})$, with the induced measure, and a piecewise-constant roof function given by the vector $$ \vec{s}^{\,(\ell)}= (s^{(\ell)}_a)_{a\in {\mathcal A}}:= ({\sf S}^{[\ell]})^t \vec{s}. $$ We have a union, disjoint in measure: \begin{equation} \label{ell-decom} {\mathfrak X}_{{\bf a}^+}^{\vec{s}} = \bigcup_{a\in {\mathcal A}} \zeta^{[\ell]}[a]\times [0, s_a^{(\ell)}], \end{equation} and so bounded cylindrical functions of level $\ell$ are well-defined by (\ref{fcyl3}).
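Before passing to function spaces, we illustrate (\ref{eq-measure2}) in the simplest stationary case, which satisfies the standing assumptions (A1)--(A2): take $\zeta_n\equiv \zeta$, the Fibonacci substitution $\zeta(1)=12$, $\zeta(2)=1$ from the example in the previous subsection. The letter frequencies of the Fibonacci word give $\vec\mu_0 = (\varphi^{-1}, \varphi^{-2})^t$, where $\varphi=(1+\sqrt 5)/2$ is the golden ratio; this is the Perron eigenvector of ${\sf S}_\zeta$, so that $$ \vec{\mu}_n = \varphi^{-n}\,\vec{\mu}_0 = (\varphi^{-n-1}, \varphi^{-n-2})^t, $$ and the relation ${\sf S}_{n+1}\vec\mu_{n+1} = \vec\mu_n$ reduces to the identity $\varphi^2 = \varphi+1$.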
\subsection{Weakly Lipschitz functions} \label{sec-weakl} Following Bufetov \cite{Bufetov1, Buf-umn}, we consider the space of {\em weakly Lipschitz functions} on the space ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$, except that here we do everything in the $S$-adic framework. This is the class of functions that we obtain from Lipschitz functions on the translation surface $M$ under the symbolic representation of the translation flow, for almost every abelian differential. \begin{defi} Suppose that ${\bf a}^+\in {\mathfrak A}^{\mathbb N}$ is such that the $S$-adic system $(X_{{\bf a}^+},T)$ is uniquely ergodic, with the invariant probability measure $\mu$. We say that a bounded function $f:{\mathfrak X}_{{\bf a}^+}^{\vec{s}}\to {\mathbb C}$ is weakly Lipschitz, and write $f\in {\rm Lip}_w ({\mathfrak X}_{{\bf a}^+}^{\vec{s}})$, if there exists $C>0$ such that for all $a\in {\mathcal A}$ and $\ell\in {\mathbb N}$, for any $x,x'\in \zeta^{[\ell]}[a]$ and all $t\in [0, s_a^{(\ell)}]$, we have \begin{equation} \label{LL2} |f(x,t)-f(x',t)|\le C \mu(\zeta^{[\ell]}[a]). \end{equation} Here we are using the decomposition of ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$ from (\ref{ell-decom}). The norm in ${\rm Lip}_w ({\mathfrak X}_{{\bf a}^+}^{\vec{s}})$ is defined by \begin{equation} \label{Lip-norm} {\|f\|}_L:= {\|f\|}_\infty + \widetilde{C}, \end{equation} where $\widetilde{C}$ is the infimum of the constants $C$ in (\ref{LL2}). \end{defi} Note that a weakly Lipschitz function is not assumed to be Lipschitz in the $t$-direction. This direction corresponds to the ``past'' in the 2-sided Markov compactum and to the vertical direction in the space of the special flow under the symbolic representation, and the reason is that any symbolic representation of a flow on a manifold unavoidably has discontinuities. \begin{lemma} \label{lem-approx} Let $f:{\mathfrak X}_{{\bf a}^+}^{\vec{s}}\to {\mathbb C}$ be a weakly Lipschitz function. Then for any $\ell\in {\mathbb N}$ there exists a bounded cylindrical function of level $\ell$, denote it $f^{(\ell)}$, such that $\|f^{(\ell)}\|_\infty \le \|f\|_\infty$ and $$ {\|f - f^{(\ell)}\|}_\infty \le {\|f\|}_L\cdot \max_{a\in {\mathcal A}} \mu(\zeta^{[\ell]}[a]). $$ \end{lemma} \begin{proof} We use the decomposition (\ref{ell-decom}). For each $a\in {\mathcal A}$ and $\ell$ choose $x_{a,\ell}\in \zeta^{[\ell]}[a]$ arbitrarily, and let $$ f^{(\ell)}(x,t):= f(x_{a,\ell},t),\ \ \mbox{where}\ \ x\in \zeta^{[\ell]}[a],\ t\in [0, s_a^{(\ell)}]. $$ By definition, the function $f^{(\ell)}$ has all the required properties. \end{proof} \subsection{Spectral measures and twisted Birkhoff integrals} \label{sec-twist} We use the following convention for the Fourier transform of functions and measures: given $\psi\in L^1({\mathbb R})$ we set $\widehat{\psi}(t) = \int_{\mathbb R} e^{-2\pi i \omega t} \psi(\omega)\,d\omega$, and for a probability measure $\nu$ on ${\mathbb R}$ we let $\widehat{\nu}(t) = \int_{\mathbb R} e^{-2\pi i \omega t}\,d\nu(\omega)$. Given a measure-preserving flow $(Y, h_t,\mu)_{t\in {\mathbb R}}$ and a test function $f\in L^2(Y,\mu)$, there is a finite positive Borel measure $\sigma_f$ on ${\mathbb R}$ such that $$ \widehat{\sigma}_f(-\tau)=\int_{-\infty}^\infty e^{2 \pi i\omega \tau}\,d\sigma_f(\omega) = \langle f\circ h_\tau, f\rangle\ \ \ \mbox{for}\ \tau\in {\mathbb R}. $$
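For orientation: if $f$ is an eigenfunction of the flow with eigenvalue $\lambda$, i.e.\ $f\circ h_\tau = e^{2\pi i \lambda \tau} f$, then $\widehat{\sigma}_f(-\tau) = e^{2\pi i \lambda \tau}\,\|f\|_2^2$, so that $\sigma_f = \|f\|_2^2\,\delta_\lambda$. Point masses of $\sigma_f$ thus correspond precisely to eigenvalues, and H\"older bounds such as (\ref{eq-moduli}) quantify their absence on the given range of frequencies.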
In order to obtain local bounds on the spectral measure, we use growth estimates of the twisted Birkhoff integral \begin{equation} \label{twist1} S_R^{(y)}(f,\omega) := \int_0^R e^{-2\pi i \omega \tau} f\circ h_\tau(y)\,d\tau. \end{equation} The following lemma is standard; a proof may be found in \cite[Lemma 4.3]{BuSol2}. \begin{lemma} \label{lem-varr} Suppose that for some fixed $\omega \in {\mathbb R}$, $R_0>0$, and $\alpha \in (0,1)$ we have \begin{equation} \label{L2est} \left\|S_R^{(y)}(f,\omega)\right\|_{L^2(Y,\mu)}\le C_1R^\alpha\ \ \mbox{for all}\ R\ge R_0. \end{equation} Then \begin{equation} \label{locest} \sigma_f([\omega-r,\omega+r]) \le \pi^2 2^{-2\alpha} C_1^2 r^{2(1-\alpha)}\ \ \mbox{for all}\ r \le (2R_0)^{-1}. \end{equation} \end{lemma} \section{Random $S$-adic systems: statement of the theorem} Here we consider dynamical systems generated by a {\em random} $S$-adic system. In order to state our result, we need some preparation; specifically, the Oseledets Theorem. Recall that ${\mathfrak A}$ denotes the set of substitutions $\zeta$ on ${\mathcal A}$ with the property that all letters appear in the set of words $\{\zeta(a):\,a\in {\mathcal A}\}$ and there exists $a$ such that $|\zeta(a)|>1$. Let $\Omega$ be the 2-sided space of sequences of substitutions: $$ \Omega = \{{\bf a} = \ldots \zeta_{-n}\ldots\zeta_0{\mbox{\bf .}}\zeta_1\ldots \zeta_n\ldots;\ \zeta_i \in {\mathfrak A},\ i\in {\mathbb Z}\}, $$ equipped with the left shift $\sigma$. For ${\bf a}\in\Omega$ we denote by $X_{{\bf a}^+}$ the $S$-adic system corresponding to ${\bf a}^+=\{\zeta_n\}_{n\ge 1}$. We will sometimes write $\zeta(q)$ for the substitution corresponding to $q\in {\mathfrak A}$. For a word ${\bf q} = q_1\ldots q_k\in {\mathfrak A}^k$ we can compose the substitutions to obtain $\zeta({\bf q}) = \zeta(q_1)\ldots \zeta(q_k)$. We will also need a ``2-sided cylinder set'': $$ [{\bf q}.{\bf q}] = \{{\bf a}\in \Omega:\ \zeta_{-k+1}\ldots\zeta_0 = \zeta_1\ldots\zeta_k = {\bf q}\}. $$ Following \cite{BufGur}, we say that the word ${\bf q} = q_1\ldots q_k$ is ``simple'' if for all $2 \le i \le k$ we have $q_i\ldots q_k \ne q_1\ldots q_{k-i+1}$. For instance, ${\bf q}=112$ is simple, whereas ${\bf q}=121$ is not (take $i=3$). If the word ${\bf q}$ is simple, two occurrences of ${\bf q}$ in the sequence ${\bf a}$ cannot overlap. \begin{defi} A word $v\in {\mathcal A}^*$ is a {\em return word} for a substitution $\zeta$ if $v$ starts with some letter $c$ and $vc$ occurs in the substitution space $X_\zeta$. The return word is called ``good'' if $vc$ occurs in the substitution $\zeta(j)$ of every letter $j$. We denote by $GR(\zeta)$ the set of good return words for $\zeta$. \end{defi} Recall that ${\mathbb A}({\bf a}) := {\sf S}^t_{\zeta_1}$. Let $\Prob$ be an ergodic $\sigma$-invariant probability measure on $\Omega$ satisfying the following \medskip \noindent {\bf Conditions:} {\bf (C1)} The matrices ${\mathbb A}({\bf a})$ are almost surely invertible with respect to $\Prob$. {\bf (C2)} The functions ${\bf a}\mapsto \log(1+ \|{\mathbb A}^{\pm 1}({\bf a})\|)$ are integrable. \noindent We can use any matrix norm, but it will be convenient to use the $\ell^\infty$ operator norm, so $\|A\|=\|A\|_\infty$ unless otherwise stated. \medskip We obtain a measurable cocycle ${\mathbb A}: \Omega\to GL(m,{\mathbb R})$, called the {\em renormalization cocycle}.
Denote \begin{equation}\label{renormcoc} {\mathbb A}(n,{\bf a}) = \left\{ \begin{array}{lr} {\mathbb A}(\sigma^{n-1}{\bf a})\cdot \ldots \cdot {\mathbb A}({\bf a}), & n> 0; \\ Id, & n=0; \\ {\mathbb A}^{-1}(\sigma^{n}{\bf a}) \cdot \ldots \cdot {\mathbb A}^{-1}(\sigma^{-1}{\bf a}), & n<0,\end{array} \right. \end{equation} so that $${\mathbb A}(n,{\bf a}) = {\sf S}^t({\bf a}_n\ldots {\bf a}_1)= {\sf S}^t_{\zeta_1\ldots\zeta_n},\ n\ge 1.$$ By the Oseledets Theorem \cite{oseledets} (for a detailed survey and refinements, see Barreira-Pesin \cite[Theorem 3.5.5]{barpes}), there exist Lyapunov exponents $\theta_1> \theta_2 > \ldots > \theta_r$ and, for $\Prob$-a.e.\ ${\bf a}\in \Omega$, a direct-sum decomposition \begin{equation} \label{os1} {\mathbb R}^m = E^1_{\bf a} \oplus \cdots \oplus E^r_{\bf a} \end{equation} that depends measurably on ${\bf a}\in \Omega$ and satisfies the following: (i) for $\Prob$-a.e.\ ${\bf a}\in \Omega$, any $n\in{\mathbb Z}$, and any $i=1,\ldots,r$ we have $$ {\mathbb A}(n,{\bf a}) E^i_{\bf a} = E^i_{\sigma^n{\bf a}}; $$ (ii) for any $v\in E^i_{\bf a},\ v\ne 0$, we have $$ \lim_{|n|\to \infty} \frac{\log\|{\mathbb A}(n,{\bf a})v\|}{n} = \theta_i, $$ where the convergence is uniform on the unit sphere $\{u\in E^i_{\bf a}:\, \|u\|=1\}$; \smallskip (iii) $\lim_{|n|\to \infty} \frac{1}{n}\log\angle\left(\bigoplus_{i\in I} E^i_{\sigma^n{\bf a}}, \bigoplus_{j\in J} E^j_{\sigma^n {\bf a}}\right) =0$ whenever $I\cap J = \emptyset$. \medskip We will denote by $E_{\bf a}^{u}=\bigoplus\{E^i_{\bf a}:\theta_i>0\}$ and $E_{\bf a}^{st}=\bigoplus\{E^i_{\bf a}:\theta_i<0\}$ respectively the strong unstable and stable subspaces corresponding to ${\bf a}$. Any subspace of the form $E_{\bf a}^J:=\bigoplus_{j\in J} E^j_{\bf a}$ will be called an Oseledets subspace corresponding to ${\bf a}$. \medskip Let $\sigma_f$ be the spectral measure for the system $({\mathfrak X}_{{\bf a}^+}^{\vec{s}},h_t,\wtil\mu_{{\bf a}^+})$ with the test function $f$ (assuming the system is uniquely ergodic). Now we can state our theorem. \begin{theorem} \label{th-main1} Let $(\Omega,\Prob,\sigma)$ be an invertible ergodic measure-preserving system satisfying conditions {\bf (C1)-(C2)} above. Consider the cocycle ${\mathbb A}(n,{\bf a})$ defined by (\ref{renormcoc}). Assume that \begin{enumerate} \item[(a)] there are $\kappa\ge 2$ positive Lyapunov exponents and the top exponent is simple; \item[(b)] there exists a simple admissible word ${\bf q}\in {\mathfrak A}^k$ for some $k\in {\mathbb N}$, such that all the entries of the matrix ${\sf S}_{\bf q}$ are strictly positive and $\Prob([\bq.\bq])>0$; \item[(c)] the set of vectors $\{\vec\ell(v):\ v\ \mbox{is a good return word for}\ \zeta({\bf q})\}$ generates ${\mathbb Z}^m$ as a free Abelian group; \item[(d)] Let $\ell_{\bf q}({\bf a})$ be the ``negative'' waiting time until the first appearance of ${\bf q}.{\bf q}$, i.e. $$ \ell_{\bf q}({\bf a}) = \min\{n\ge 1:\, \sigma^{-n}{\bf a} \in [\bq.\bq]\}. $$ Let $\Prob({\bf a}|{\bf a}^+)$ be the conditional distribution on the set of ${\bf a}$'s conditioned on the future ${\bf a}^+ = {\bf a}_1{\bf a}_2\ldots$ We assume that there exist ${\varepsilon}>0$ and $1<C<\infty$ such that \begin{equation} \label{eq-star1} \int_{[\bq.\bq]} \left\|{\mathbb A}(\ell_{\bf q}({\bf a}), \sigma^{-\ell_{\bf q}({\bf a})}{\bf a})\right\|^{\varepsilon}\,d\Prob({\bf a}|{\bf a}^+) \le C\ \ \ \mbox{for all}\ \ {\bf a}^+\ \mbox{starting with}\ {\bf q}.
\end{equation} \end{enumerate} Then there exists $\gamma>0$ such that for $\Prob$-a.e.\ ${\bf a} \in \Omega$ the following holds: Let $(X_{{\bf a}^+},T,\mu_{{\bf a}^+})$ be the $S$-adic system corresponding to ${\bf a}^+$, which is uniquely ergodic. Let $({\mathfrak X}_{{\bf a}^+}^{\vec{s}},h_t,\wtil \mu_{{\bf a}^+})$ be the special flow over $(X_{{\bf a}^+},T,\mu_{{\bf a}^+})$ under the piecewise-constant roof function determined by $\vec{s}$. Let $H^J_{\bf a}$ be an Oseledets subspace corresponding to ${\bf a}$, such that $E^u_{\bf a}\subset H^J_{\bf a}$. Then for Lebesgue-a.e.\ $\vec{s}\in H^J_{\bf a}\cap \Delta_{\bf a}^{m-1}$, for all $B>1$ there exist $R_0 = R_0({\bf a},\vec s, B)>1$ and $r_0=r_0({\bf a}, \vec s, B)>0$ such that for any $f\in {\rm Lip}_w^+({\mathfrak X}_{{\bf a}^+}^{\vec{s}})$, \begin{equation} \label{int-growth1} |S^{(x,t)}_R(f,\omega)| \le \widetilde{C}({\bf a},\|f\|_L)\cdot R^{1-\gamma/2},\ \ \mbox{for all}\ \omega\in[B^{-1},B]\ \mbox{and}\ R\ge R_0, \end{equation} uniformly in $(x,t)\in {\mathfrak X}_{{\bf a}^+}^{\vec{s}}$, and \begin{equation} \label{main-Hoeld} \sigma_f(B(\omega,r))\le C({\bf a},\|f\|_L)\cdot r^\gamma,\ \ \ \mbox{for all}\ \ \omega\in[B^{-1},B]\ \mbox{and}\ 0<r< r_0, \end{equation} with the constants depending only on ${\bf a}$ and $\|f\|_L$. More precisely, for any $\epsilon_1>0$ there exists $\gamma(\epsilon_1)>0$ such that for $\Prob$-a.e.\ ${\bf a} \in \Omega$ there is an exceptional set ${\mathfrak E}({\bf a},\epsilon_1)\subset H_{\bf a}^J\cap \Delta_{\bf a}^{m-1}$, satisfying \begin{equation} \label{dimesta} \dim_H({\mathfrak E}({\bf a},\epsilon_1)) < \dim(H^J_{\bf a})-\kappa+ \epsilon_1, \end{equation} such that (\ref{int-growth1}) and (\ref{main-Hoeld}) hold for all $\vec s\in H^J_{\bf a} \cap \Delta_{\bf a}^{m-1}\setminus {\mathfrak E}({\bf a},\epsilon_1)$ with $\gamma = \gamma(\epsilon_1)$. \end{theorem} \noindent {\bf Remarks.} 1. Note that $\dim( H^J_{\bf a} \cap \Delta_{\bf a}^{m-1}) = \dim(H^J_{\bf a})-1$ and $\kappa\ge 2$, so (\ref{dimesta}) indeed implies that ${\mathfrak E}({\bf a},\epsilon_1)$ has zero Lebesgue measure in $H^J_{\bf a} \cap \Delta_{\bf a}^{m-1}$. 2. The assumption that ${\bf q}$ is a simple word ensures that occurrences of ${\bf q}$ do not overlap. Then we have, in view of (\ref{notation1}): \begin{equation} \label{egstar} {\mathbb A}(\ell_{\bf q}({\bf a}), \sigma^{-\ell_{\bf q}({\bf a})}{\bf a}) = A({\bf q}) A({\bf p}) A({\bf q}), \end{equation} for some word ${\bf p} \in {\mathfrak A}^*$ (possibly empty). For our application, it will be easy to make sure that ${\bf q}$ is simple, as we show in Section~\ref{sec-deriv}, unlike in the paper \cite{BufGur}, where additional efforts were needed for this purpose. \section{Beginning of the proof of Theorem~\ref{th-main11} } \subsection{Reduction to an induced system} Here we show that Theorem~\ref{th-main1} reduces to the case when \begin{equation} \label{induce} {\bf a}_n = {\bf q} {\bf p}_n {\bf q}\ \ \ \mbox{for all}\ \ n\in {\mathbb Z}, \end{equation} where ${\bf q}$ is a fixed word in ${\mathfrak A}^k$ for some $k\in {\mathbb N}$, such that its incidence matrix is strictly positive, and ${\bf p}_n$ is arbitrary. In the next theorem we use the same notation as in Theorem~\ref{th-main1}. \begin{theorem} \label{th-main11} Let $(\Omega_{\bf q},\Prob,\sigma)$ be an invertible ergodic measure-preserving system as in the previous section, satisfying conditions {\bf (C1)-(C2)}, (\ref{induce}).
Consider the cocycle ${\mathbb A}(n,{\bf a})$ defined by (\ref{renormcoc}). Assume that \begin{enumerate} \item[(a$'$)] there are $\kappa\ge 2$ positive Lyapunov exponents and the top exponent is simple; \item[(b$'$)] the substitution $\zeta = \zeta({\bf q})$ is such that its substitution matrix ${\sf S}_\zeta=Q$ has strictly positive entries; \item[(c$'$)] there exist ``good return words'' $\{u_j\}_{j=1}^k$ for $\zeta=\zeta({\bf q})$, such that $\{\vec{\ell}(u_j)\}_{j=1}^k$ generates ${\mathbb Z}^m$ as a free Abelian group; \item[(d$'$)] there exist ${\varepsilon}>0$ and $1<C<\infty$ such that \begin{equation} \label{eq-star11} \int_{\Omega_{\bf q}} \left\|A({\bf a}_0)\right\|^{\varepsilon}\,d\Prob({\bf a}|{\bf a}^+) \le C\ \ \ \mbox{for all}\ \ {\bf a}^+. \end{equation} \end{enumerate} Then the same conclusions hold as in Theorem~\ref{th-main1}. \end{theorem} \begin{proof}[Proof of Theorem~\ref{th-main1} assuming Theorem~\ref{th-main11}] Given an ergodic system $(\Omega,\Prob,\sigma)$ from the statement of Theorem~\ref{th-main1}, we consider the induced system on the cylinder set $\Omega_{\bf q}:= [{\bf q}.{\bf q}]$. Then symbolically we can represent elements of $\Omega_{\bf q}$ as sequences satisfying (\ref{induce}). Denote by $\Prob_{\!\!{\bf q}}$ the induced (conditional) measure on $\Omega_{\bf q}$. Since $\Prob([{\bf q}.{\bf q}])>0$, standard results in Ergodic Theory imply that the resulting induced system $(\Omega_{\bf q},\Prob_{\!\!{\bf q}},\sigma)$ is also ergodic and the associated cocycle has the same Lyapunov spectrum properties (with the values of the Lyapunov exponents multiplied by $1/\Prob([\bq.\bq])$); that is, (a$'$) holds. The properties (b$'$) and (c$'$) follow from (b) and (c) automatically. Finally, note that (\ref{eq-star11}) is identical to (\ref{eq-star1}). On the level of Bratteli-Vershik diagrams this corresponds to the ``aggregation-telescoping procedure'', which results in a naturally isomorphic Bratteli-Vershik system. Observe also that a weakly-Lipschitz function on $\Omega$ induces a weakly-Lipschitz function on $\Omega_{\bf q}$ without increase of the norm $\|f\|_L$, see Section~\ref{sec-weakl}. Thus, Theorem~\ref{th-main11} applies, and the reduction is complete. \end{proof} \subsection{Exponential tails} For ${\bf a}\in \Omega_{\bf q}$ we consider the sequence of substitutions $\zeta({\bf a}_n)$, $n\ge 1$. In view of (\ref{induce}), we have $$ \zeta({\bf a}_n) = \zeta({\bf q}) \zeta({\bf p}_n)\zeta({\bf q}). $$ Recall that $$ A({\bf p}_n) = {\sf S}_{\zeta({\bf p}_n)}^t. $$ Denote \begin{equation} \label{def-W} W_n = W_n({\bf a}):=\log\|A({\bf a}_n)\| = \log\|Q^t A({\bf p}_n) Q^t\|. \end{equation} It is clear that $W_n\ge 0$ for $n\ge 1$. \begin{prop}[Prop.\,6.1 in \cite{BuSo18a}] \label{prop-proba} Under the assumptions of Theorem~\ref{th-main11}, there exists a positive constant $L_1$ such that for $\Prob$-a.e.\ ${\bf a}$, the following holds: for any $\delta>0$, for all $N$ sufficiently large ($N\ge N_0({\bf a},\delta)$), \begin{equation} \label{w-cond2} \max\left\{\sum_{n\in \Psi} W_{n+1}:\ \Psi\subset \ \{1,\ldots,N\},\ |\Psi| \le \delta N\right\} \le L_1\cdot \log(1/\delta) \cdot \delta N. \end{equation} \end{prop} The following is an immediate consequence. \begin{corollary} \label{cor-immed} In the setting of Proposition~\ref{prop-proba}, we have for any $C>0$: \begin{equation} \label{eq-immed} {\rm card}\left\{n\le N:\ W_{n+1} > C L_1\cdot \log(1/\delta)\right\} \le \frac{\delta N}{C}\,.
\end{equation} \end{corollary} \subsection{Estimating twisted Birkhoff integrals} We will use the following notation: $\|y\|_{{\mathbb R}/{\mathbb Z}}$ is the distance from $y\in {\mathbb R}$ to the nearest integer. For a word $v$ in the alphabet ${\mathcal A}$ denote by $\vec{\ell}(v)\in {\mathbb Z}^m$ its ``population vector'' whose $j$-th entry is the number of $j$'s in $v$, for $j\le m$. We will need the ``tiling length'' of $v$ defined, for $\vec s\in {\mathbb R}^m_+$, by \begin{equation} \label{tilength} |v|_{\vec{s}}:= \langle\vec{\ell}(v), \vec{s}\rangle. \end{equation} Below we denote by $O_{{\bf a},Q}(1), O_{{\bf a},B}(1)$, etc., generic constants which depend only on the parameters indicated and which may be different from line to line. Recall that the roof function is normalized by $$ \vec{s}\in \Delta_{{\bf a}}^{m-1}:= \{\vec{s}\in {\mathbb R}^m_+:\ \sum_{a\in {\mathcal A}} \mu_{{\bf a}^+}([a]) s_a =1\}. $$ \begin{prop} \label{prop-Dioph4} Suppose that the conditions of Theorem~\ref{th-main11} are satisfied. Then for $\Prob$-a.e.\ ${\bf a}\in \Omega$, for any $\eta\in (0,1)$, there exists $\ell_\eta=\ell_\eta({\bf a})\in {\mathbb N}$, such that for all $\ell\ge \ell_\eta$ and any bounded cylindrical function $f^{(\ell)}$ of level $\ell$, for any $(x,t)\in {\mathfrak X}_{{\bf a}^+}^{\vec{s}}$, $\vec s\in \Delta_{\bf a}^{m-1}$, and $\omega\in {\mathbb R}$, \begin{equation} \label{eq-Dioph3} |S^{(x,t)}_R(f^{(\ell)},\omega)| \le O_{{\bf a},Q}(1)\cdot \|f^{(\ell)}\|_{_\infty} \Bigl(R^{1/2}+ R^{1+\eta}\!\!\!\!\!\prod_{\ell+1\le n < \frac{\log R}{4\theta_1}} \Bigl( 1 - c_1\cdot \!\!\!\! \max_{v\in GR(\zeta)}\bigl\| \omega|\zeta^{[n]}(v)|_{\vec{s}}\bigr\|^2_{{\mathbb R}/{\mathbb Z}}\Bigr)\Bigr), \end{equation} for all $ R\ge e^{8\theta_1 \ell}. $ Here $c_1\in (0,1)$ is a constant depending only on $Q$. \end{prop} The proposition was proved in \cite[Prop. 7.1]{BuSo18a}, in the equivalent setting of Bratteli-Vershik transformations, and we do not repeat the proof here. The proof proceeds in several steps, which already appeared, in one way or another, in our previous work \cite{BuSol2,BuSo18a,BuSo19}. In short, given a cylindrical function of the form (\ref{fcyl2}), it suffices to consider $f(x,t)={1\!\!1}_{[a]}(x) \cdot \psi_a(t)$ for $a\in {\mathcal A}$. A calculation shows that its twisted Birkhoff sum, up to a small error, equals $\widehat{\psi}_a$ times an exponential sum corresponding to appearances of $a$ in an initial word of a sequence $x\in X_{{\bf a}^+}$. Using the prefix-suffix decomposition of $S$-adic sequences, the latter may be reduced to estimating exponential sums corresponding to the substituted symbols $\zeta^{[n]}(b)$, $b\in {\mathcal A}$. These together (over all $a$ and $b$ in ${\mathcal A}$) form a matrix of trigonometric polynomials, to which we give the name of a matrix Riesz product in \cite[Section 3.2]{BuSo18a} and whose cocycle structure is studied in \cite{BuSo19}. The next step is estimating the norm of a matrix product from above by the absolute value of a scalar product, which was done in \cite[Prop.\,3.4]{BuSo18a}. Passing from cylindrical functions of level zero to those of level $\ell$ follows by a simple shifting of indices, see \cite[Section 3.5]{BuSo18a}. The term $R^{1/2}$ (which can be replaced by any positive power of $R$ at the cost of a change in the range of $n$ in the product) absorbs several error terms.
One tiny difference with \cite[Prop. 7.1]{BuSo18a} is that there we assumed a different normalization: ${\|s\|}_1=1$, hence ${\|s\|}_\infty\le 1$, which was used in the proof. Here we have ${\|s\|}_\infty \le \bigl(\min_{a\in {\mathcal A}} \mu_{{\bf a}^+}([a])\bigr)^{-1}$, which is absorbed into the constant $O_{{\bf a},Q}(1)$. \subsection{Reduction to the case of cylindrical functions} \label{sec-reduc2} Our goal is to prove that for all $B>1$, for ``typical'' $\vec{s}$ in the appropriate set, and for any weakly Lipschitz function $f$ on ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$, the following holds: \begin{equation} \label{Hoeld1} \sigma_f(B(\omega,r)) \le C({\bf a},\|f\|_L)\cdot r^\gamma\ \mbox{for}\ \omega\in [B^{-1},B]\ \mbox{and} \ 0 < r \le r_0({\bf a},\vec{s}, B), \end{equation} for some $\gamma\in (0,1)$, uniformly in $(x,t)\in {\mathfrak X}_{{\bf a}^+}^{\vec{s}}$. We will specify $\gamma$ at the end of the proof, see (\ref{def-gamma}) and (\ref{defK}). The dependence on $\vec{s}$ in the estimate is ``hidden'' in $\sigma_f$, the spectral measure of the special flow corresponding to the roof function given by $\vec{s}$. In view of Lemma~\ref{lem-varr}, the estimate (\ref{Hoeld1}) will follow, once we show \begin{equation} \label{wts1} |S^{(x,t)}_R(f,\omega)| \le \widetilde{C}({\bf a},\|f\|_L)\cdot R^{1-\gamma/2},\ \mbox{for}\ \omega\in [B^{-1},B]\ \mbox{and}\ R\ge R_0({\bf a},\vec s, B). \end{equation} \begin{lemma} \label{lem-reduc2} Fix $B>1$. Let ${\bf a}$ be Oseledets regular for the renormalization cocycle ${\mathbb A}(n)$, let $\vec s\in \Delta_{\bf a}^{m-1}$, and suppose that for all $\ell\ge \ell_0({\bf a},\vec s,B)$ we have for any bounded cylindrical function $f^{(\ell)}$ of level $\ell$: \begin{equation} \label{eq-wts111} |S^{(x,t)}_R(f^{(\ell)},\omega)| \le O_{{\bf a},\|f^{(\ell)}\|_\infty}(1)\cdot R^{1-\gamma/2},\ \mbox{for}\ \omega\in [B^{-1},B]\ \mbox{and}\ R\ge e^{\gamma^{-1}\theta_1 \ell}. \end{equation} Then (\ref{wts1}) holds for any weakly Lipschitz function $f$ on ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$. \end{lemma} \begin{proof} Let $f$ be a weakly Lipschitz function on ${\mathfrak X}_{{\bf a}^+}^{\vec{s}}$. By Lemma~\ref{lem-approx}, we have $$ {\|f - f^{(\ell)}\|}_\infty \le {\|f\|}_L\cdot \max_{a\in {\mathcal A}} \mu_{{\bf a}^+}(\zeta^{[\ell]}[a]), $$ for some cylindrical function $f^{(\ell)}$ of level $\ell$, with $\|f^{(\ell)}\|_\infty \le \|f\|_\infty$. By (\ref{eq-measure2}) and the Oseledets Theorem, since ${\bf a}$ is Oseledets regular, we have $$ \lim _{n\to \infty} \frac{\log \mu_{{\bf a}^+}(\zeta^{[n]}[a])}{n} = -\theta_1,\ \ \mbox{for all}\ a\in {\mathcal A}, $$ hence for $\ell\ge \ell_1({\bf a})$, \begin{equation} \label{approx} \|f-f^{(\ell)}\|_\infty \le \|f\|_L\cdot e^{-\frac{1}{2}\theta_1\ell}. \end{equation} Recall that $S_R^{(x,t)}(f,\omega) = \int_0^R e^{-2\pi i \omega \tau} f\circ h_\tau(x,t)\,d\tau$. Let $\ell_2=\ell_2({\bf a},\vec s,B):= \max\{\ell_1({\bf a}), \ell_0({\bf a},\vec s,B)\}$ and $$ R_0=R_0({\bf a},\vec s,B) := \exp\bigl[\gamma^{-1}\theta_1 \ell_2\bigr]. $$ For $R\ge R_0$ let \begin{equation}\label{def-ell} \ell := \left\lfloor \frac{\gamma \log R}{\theta_1}\right\rfloor, \end{equation} so that $\ell\ge \ell_2$ and both (\ref{eq-wts111}) and (\ref{approx}) hold. We obtain $$ |S_R^{(x,t)}(f,\omega) - S_R^{(x,t)}(f^{(\ell)},\omega)| \le R\cdot \|f\|_L \cdot e^{-\frac{1}{2}\theta_1 \ell} \le e^{\theta_1/2}\cdot \|f\|_L\cdot R^{1-\gamma/2}, $$ which together with (\ref{eq-wts111}) imply (\ref{wts1}).
\end{proof} \section{Quantitative Veech criterion and the exceptional set} By the definition of the tiling length (\ref{tilength}) and of the population vector, we have $$ \|\omega|\zeta^{[n]}(v)|_{\vec{s}}\|_{{\mathbb R}/{\mathbb Z}} = \|\langle \vec\ell(v), \omega ({\sf S}^{[n]})^t \vec{s} \rangle \|_{{\mathbb R}/{\mathbb Z}} = \|\langle \vec\ell(v), {\mathbb A}(n,{\bf a}) (\omega\vec{s} )\rangle \|_{{\mathbb R}/{\mathbb Z}}. $$ For $\vec{x} = (x_1,\ldots,x_m)$ denote by $$ \|\vec{x}\|_{{\mathbb R}^m/{\mathbb Z}^m} = \max_j \|x_j\|_{{\mathbb R}/{\mathbb Z}}, $$ the distance from $\vec x$ to the nearest integer lattice point in the $\ell^\infty$ metric. \begin{lemma} \label{lem-lattice} Let $\{v_j\}_{j=1}^k$ be good return words for the substitution $\zeta$, such that $\{\vec\ell(v_j)\}_{j=1}^k$ generate ${\mathbb Z}^m$ as a free Abelian group. Then there exists a constant $C_\zeta>1$ such that \begin{equation} \label{ineq-lattice} C_\zeta^{-1} \|\vec x\|_{{\mathbb R}^m/{\mathbb Z}^m} \le \max_{j\le k} \|\langle \vec\ell(v_j), \vec{x}\rangle \|_{{\mathbb R}/{\mathbb Z}} \le C_\zeta \|\vec x\|_{{\mathbb R}^m/{\mathbb Z}^m}. \end{equation} \end{lemma} \begin{proof} Let $\vec{x} = \vec{n} + \vec{\epsilon}$, where $\vec n\in {\mathbb Z}^m$ is the nearest lattice point to $\vec x$. Then $\|\vec x\|_{{\mathbb R}^m/{\mathbb Z}^m}= \|\vec{\epsilon}\|_\infty$, and $$ \|\langle \vec\ell(v_j), \vec{x}\rangle \|_{{\mathbb R}/{\mathbb Z}} = \|\langle \vec\ell(v_j), \vec{\epsilon}\rangle \|_{{\mathbb R}/{\mathbb Z}} \le \|\vec\ell(v_j)\|_1\cdot \|\vec{\epsilon}\|_\infty, $$ proving the right inequality in (\ref{ineq-lattice}). On the other hand, the assumption that $\{\vec\ell(v_j)\}_{j=1}^k$ generate ${\mathbb Z}^m$ as a free Abelian group means that for each $i\le m$ there exist $a_{i,j}\in {\mathbb Z}$, $j\le k$, such that $\sum_{j=1}^k a_{i,j} \vec\ell(v_j) = {\bf e}_i$, the $i$-th unit vector. Then $$ \|\vec{x}\|_{{\mathbb R}^m/{\mathbb Z}^m} = \max_i \|x_i\|_{{\mathbb R}/{\mathbb Z}} = \max_i {\left\|\Bigl\langle\sum_{j=1}^k a_{i,j} \vec\ell(v_j), \vec x \Bigr\rangle \right\|}_{{\mathbb R}/{\mathbb Z}} \!\!\!\!\le\ \ \max_{i\le m} \sum_{j=1}^k |a_{i,j}| \cdot \max_{j\le k} \|\langle \vec\ell(v_j), \vec{x}\rangle \|_{{\mathbb R}/{\mathbb Z}}, $$ finishing the proof. \end{proof} In view of (\ref{ineq-lattice}), the product in (\ref{eq-Dioph3}) can be estimated as follows: \begin{equation} \label{lattice} \prod_{\ell+1\le n < \frac{\log R}{4\theta_1}} \Bigl( 1 - c_1\cdot \!\!\!\! \max_{v\in GR(\zeta)}{\bigl\| \omega|\zeta^{[n]}(v)|_{\vec{s}}\bigr\|}^2_{{\mathbb R}/{\mathbb Z}}\Bigr)\le \prod_{\ell+1 \le n < \frac{\log R}{4\theta_1}} \left( 1 - \wtil c_1\cdot \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|^2_{{\mathbb R}^m/{\mathbb Z}^m} \right), \end{equation} where $\wtil c_1\in (0,1)$ is a constant depending only on $\zeta$. \begin{prop}[Quantitative Veech criterion] \label{prop-quant} Let ${\bf a}$ be Oseledets regular, $\vec s\in \Delta_{{\bf a}}^{m-1}$, $B>1$, and suppose that there exists $\varrho\in (0,\frac{1}{2})$ such that the set $ \bigl\{n\in {\mathbb N}:\ \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|_{{\mathbb R}^m/{\mathbb Z}^m} \ge \varrho\bigr\} $ has lower density greater than $\delta>0$ uniformly in $\omega\in [B^{-1},B]$, that is, \begin{equation} \label{eq-densi} {\rm card} \left\{n\le N:\ \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|_{{\mathbb R}^m/{\mathbb Z}^m} \ge \varrho\right\} \ge \delta N\ \ \mbox{for all}\ \omega\in [B^{-1},B]\ \mbox{and}\ N \ge N_0({\bf a},\vec s, B).
\end{equation} Then the H\"older property (\ref{Hoeld1}) holds with \begin{equation} \label{def-gamma} \gamma= \min\left\{\frac{\delta}{16},\frac{-\delta\log(1-\wtil c_1\varrho^2)}{8\theta_1}\right\}. \end{equation} \end{prop} \begin{proof} By Lemma~\ref{lem-reduc2}, it is enough to verify (\ref{eq-wts111}) for a bounded cylindrical function $f^{(\ell)}$, with $\ell \ge \ell_0=\ell_0({\bf a},\vec s, B)$. We use (\ref{eq-Dioph3}) and (\ref{lattice}), with $\eta = \gamma/2$, to obtain: \begin{equation} \label{kuku1} |S^{(x,t)}_R(f^{(\ell)},\omega)| \le O_{{\bf a},Q}(1)\cdot \|f^{(\ell)}\|_{_\infty} \left(R^{1/2}+ R^{1+\gamma/2}\!\!\!\!\!\!\prod_{\ell+1\le n < \frac{\log R}{4\theta_1}} \left( 1 - \wtil c_1\cdot \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|^2_{{\mathbb R}^m/{\mathbb Z}^m} \right) \right), \end{equation} for $\ell\ge \ell_{\gamma/2}$ and $R\ge e^{8\theta_1\ell}$. Since our goal is (\ref{eq-wts111}), we can discard the $R^{1/2}$ term immediately. Let $N_0 = N_0({\bf a},\vec s, B)$ and $$ \ell_0 =\max\left\{ \ell_{\gamma/2}, \lceil 4\gamma (N_0+1)\rceil\right\}. $$ For $\ell\ge \ell_0$ take $R\ge e^{\gamma^{-1}\theta_1 \ell}$, as required by Lemma~\ref{lem-reduc2}. Then $R\ge e^{8\theta_1\ell}$, since $\gamma\le 1/16$, so (\ref{kuku1}) applies. Let $$ N = \left\lfloor\frac{\log R}{4\theta_1}\right\rfloor, $$ then the choice of $R$ and $\ell_0$ implies that $N\ge N_0$. Thus we have by (\ref{eq-densi}): \begin{eqnarray} \prod_{\ell+1\le n < \frac{\log R}{4\theta_1}} \left( 1 - \wtil c_1\cdot \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|^2_{{\mathbb R}^m/{\mathbb Z}^m} \right) & \le & (1-\wtil c_1 \varrho^2)^{\delta N-\ell-2} \nonumber \\ & \le & 3 (1-\wtil c_1 \varrho^2)^{(\delta \log R)/(8\theta_1)} \nonumber \\ & \le & 3 R^{-\gamma}, \label{kuku2} \end{eqnarray} where in the second line we used $$ \delta N - \ell - 2 \ge \delta \Bigl[\frac{\log R}{4\theta_1} - 1\Bigr] -\ell-2 \ge \frac{\delta\log R}{4\theta_1} - \ell- 3 \ge \frac{\delta\log R}{8\theta_1} - 3, $$ which follows from $N\ge \frac{\log R}{4\theta_1}-1$ and $\ell \le \gamma\log R/\theta_1 \le \delta \log R/(16\theta_1)$, together with the trivial estimate $(1-\wtil c_1 \varrho^2)^{-3} \le 3$ for $\wtil c_1\in (0,1),\ \varrho\in (0,\frac{1}{2})$. In the last line we used (\ref{def-gamma}). Combining (\ref{kuku1}) with (\ref{kuku2}) yields the desired (\ref{eq-wts111}). \end{proof} For $\omega >0$ and $\vec s\in \Delta_{\bf a}^{m-1}$ let $\vec K_n(\omega\vec s)\in {\mathbb Z}^m$ be the nearest integer lattice point to ${\mathbb A}(n,{\bf a}) (\omega\vec{s})$, that is, \begin{equation} \label{def-eps} {\mathbb A}(n,{\bf a}) (\omega\vec{s}) = \vec{K}_n(\omega\vec s) + \vec{{\varepsilon}}_n(\omega\vec s),\ \ \|\vec{{\varepsilon}}_n(\omega\vec s)\|_\infty = \bigl\|{\mathbb A}(n,{\bf a}) (\omega\vec{s})\bigr\|_{{\mathbb R}^m/{\mathbb Z}^m}\,. \end{equation} \begin{defi}[Definition of the exceptional set] Given $\varrho,\delta>0$ and $B>1$, define $$ E_N(\varrho,\delta,B) :=\left\{\omega\vec{s}\in {\mathbb R}^m_+:\ \vec s\in \Delta_{\bf a}^{m-1},\ \omega\in [B^{-1},B],\ {\rm card}\{n\le N: \|\vec{\varepsilon}_{n}(\omega\vec s)\|_\infty\ge \varrho\} < \delta N\right\}, $$ $$ {\mathcal E}_N(\varrho,\delta,B):= \left\{\vec s\in \Delta_{\bf a}^{m-1}:\ \exists\,\omega\in [B^{-1},B],\ \omega\vec s\in E_N(\varrho,\delta,B)\right\}, $$ and \begin{equation} \label{def-excep} {\mathfrak E}={\mathfrak E}(\varrho,\delta,B):= \bigcap_{N_0=1}^\infty \bigcup_{N=N_0}^\infty {\mathcal E}_N(\varrho,\delta,B).
\end{equation} \end{defi} The definition of the exceptional set is related to that in \cite[Section 9]{BuSo18a}; however, here we added an extra step --- the set $E_N$ of exceptional vectors $\omega\vec s$ at scale $N$. The reason is that the dimension estimates will focus on the sets $P_{\bf a}^u(E_N)$. On the other hand, it is crucial that the ``final'' exceptional set ${\mathfrak E}$ be expressed in terms of $\vec s$, in order to obtain uniform H\"older estimates for all $\omega\in [B^{-1},B]$, for $\vec s\not\in {\mathfrak E}$. \begin{prop} \label{prop-EK} There exists $\varrho>0$ such that for $\Prob$-a.e.\ ${\bf a}\in \Omega_{\bf q}$ and any $\epsilon_1>0$ there exists $\delta_0$, such that for all $\delta\in (0,\delta_0)$, for all $B>1$, and every Oseledets subspace $H_{\bf a}^J$ corresponding to ${\bf a}$, containing the unstable subspace $E^u_{\bf a}$, \begin{equation} \label{eq-dimba} \dim_H({\mathfrak E}(\varrho,\delta,B)\cap H_{\bf a}^J) \le \dim(H_{\bf a}^J) -\kappa + \epsilon_1, \end{equation} where $\kappa=\dim(E^u_{\bf a})$. \end{prop} Now we derive Theorem~\ref{th-main11} from Proposition~\ref{prop-EK}, and in the next section we prove the proposition. \begin{proof}[Proof of Theorem~\ref{th-main11} assuming Proposition~\ref{prop-EK}] Fix $\epsilon_1>0$ and choose $\varrho>0$ and $\delta>0$ such that (\ref{eq-dimba}) holds. It is enough to verify (\ref{Hoeld1}) and (\ref{wts1}) for all $\vec s\in \Delta_{\bf a}^{m-1}\setminus {\mathfrak E}(\varrho,\delta,B)$. By definition, $\vec{s} \in \Delta_{{\bf a}}^{m-1}\setminus {\mathfrak E}(\varrho,\delta,B)$ means that there exists $N_0=N_0({\bf a},\vec s, B)\in {\mathbb N}$ such that $\vec s\not\in {\mathcal E}_N(\varrho,\delta,B)$ for all $N\ge N_0$. This, in turn, means that for all $\omega\in[B^{-1},B]$ and all $N\ge N_0$, we have $\omega\vec s\not \in E_N(\varrho,\delta,B)$. However, this is exactly the quantitative Veech criterion (\ref{eq-densi}), and the proof is finished by an application of Proposition~\ref{prop-quant}. \end{proof} \section{The Erd\H{o}s-Kahane method: proof of Proposition~\ref{prop-EK} } We now present the Erd\H{o}s-Kahane argument in {\it vector form}. The argument was introduced by Erd\H{o}s \cite{Erd} and Kahane \cite{Kahane} for proving Fourier decay for Bernoulli convolutions; see \cite{sixty} for a historical review. Scalar versions of the argument were used in \cite{BuSol2,BuSo18a} to prove H\"older regularity of spectral measures in genus $2$. In this section we fix a $\Prob$-generic 2-sided sequence ${\bf a}\in \Omega_{\bf q}$. Under the assumptions of Theorem~\ref{th-main11}, for $\Prob$-a.e.\ ${\bf a}$, the sequence of substitutions $\zeta({\bf a}_n)$, $n\in {\mathbb Z}$, satisfies several conditions. To begin with, we assume that the point ${\bf a}$ is generic for the Oseledets Theorem; that is, assertions (i)-(iii) from Section 3 hold. We further assume the validity of the conclusions of Proposition~\ref{prop-proba}, and, when necessary, we impose additional conditions on ${\bf a}$ which hold $\Prob$-almost surely. The symbol $P^u_{\bf a}$ denotes the projection to the unstable Oseledets subspace corresponding to ${\bf a}$. We defined $$ {\mathbb A}(n,{\bf a}) (\omega\vec s) = \vec{K}_n(\omega\vec s) + \vec{{\varepsilon}}_n(\omega\vec s), $$ in (\ref{def-eps}), where $\vec K_n(\omega\vec s)\in {\mathbb Z}^m$ is the nearest integer lattice point to ${\mathbb A}(n,{\bf a}) (\omega\vec s)$.
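Before fixing notation, we give a small numerical sketch (toy matrices and a made-up vector, purely for illustration) of the decomposition (\ref{def-eps}): it builds the products ${\mathbb A}(n,{\bf a})(\omega\vec s)$ step by step, records the nearest lattice points $\vec K_n$ and the errors $\vec\varepsilon_n$, and tags the steps where the recursion $\vec K_{n+1}=A({\bf a}_{n+1})\vec K_n$ holds, which is the ``good'' case quantified in Lemma~\ref{lem-vspom2}(ii) below:
\begin{verbatim}
import numpy as np

# toy renormalization matrices A(a_n): integer, invertible over Z
A_seq = [np.array([[2, 1], [1, 1]]), np.array([[1, 1], [1, 0]])] * 10
omega_s = np.array([0.3, 0.7])   # a made-up vector (omega s)

v, K_prev = omega_s.copy(), None
for n, A in enumerate(A_seq, start=1):
    v = A @ v                    # A(n, a)(omega s), built up step by step
    K = np.rint(v)               # nearest integer lattice point K_n
    eps = v - K                  # error vector eps_n, with |eps_n| <= 1/2
    good = K_prev is not None and np.array_equal(K, A @ K_prev)
    print(n, K.astype(int), round(np.abs(eps).max(), 4),
          "K_{n+1} = A K_n" if good else "")
    K_prev = K
\end{verbatim}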
Below we write $$ \vec K_n = \vec{K}_n(\omega\vec s),\ \ \vec{\varepsilon}_n = \vec{{\varepsilon}}_n(\omega\vec s), $$ and $\|\vec{\varepsilon}_n\| = \|\vec{\varepsilon}_n\|_\infty$, to simplify notation. The idea is that the knowledge of $\vec K_n$ for large $n$ provides a good estimate for the projection of $\omega\vec s$ onto the unstable subspace. Indeed, we have $$ {\mathbb A}(n,{\bf a}) P_{\bf a}^u(\omega\vec s)=P_{\sigma^n{\bf a}}^u {\mathbb A}(n,{\bf a}) (\omega\vec s)= P_{\sigma^n{\bf a}}^u \vec{K}_n + P_{\sigma^n{\bf a}}^u \vec{{\varepsilon}}_n, $$ hence $$ P^u_{\bf a}(\omega\vec s) = {\mathbb A}(n,{\bf a})^{-1} P_{\sigma^n{\bf a}}^u \vec{K}_n + {\mathbb A}(n,{\bf a})^{-1} P_{\sigma^n{\bf a}}^u \vec{{\varepsilon}}_n. $$ By the Oseledets Theorem, we have for $\Prob$-a.e.\ ${\bf a}$, for any $\epsilon>0$, for all $n$ sufficiently large, $$ \|{\mathbb A}(n,{\bf a})^{-1}P^u_{\sigma^n{\bf a}}\|\le e^{-(\theta_\kappa-\epsilon)n}, $$ where $\theta_\kappa>0$ is the smallest positive Lyapunov exponent of the cocycle ${\mathbb A}$. By definition \begin{equation} \label{up1} \|\vec{{\varepsilon}}_n\|\le 1/2<1,\ \ n\ge 0, \end{equation} whence \begin{equation} \label{good-est} \|P^u_{\bf a}(\omega\vec s) -{\mathbb A}(n,{\bf a})^{-1} P_{\sigma^n{\bf a}}^u \vec{K}_n\| < e^{-(\theta_\kappa-\epsilon)n}, \end{equation} for $n$ sufficiently large. Recall that we defined $W_n = \log\|A({\bf a}_n)\|$ in (\ref{def-W}). Let \begin{equation} \label{def-Mn} M_n:= \bigl(2+\exp(W_{n+1})\bigr)^m\ \ \mbox{and}\ \ \rho_n:= \frac{1/2}{1+\exp(W_{n+1})}\,. \end{equation} \begin{lemma} \label{lem-vspom2} For all $n\ge 0$, we have the following, independent of $\omega\vec s\in {\mathbb R}_+^m$: {\bf (i)} Given $\vec{K}_{n}\in{\mathbb Z}^m$, there are at most $M_n$ possibilities for $\vec K_{n+1}\in{\mathbb Z}^m$; {\bf (ii)} if $\max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\| \}< \rho_n$, then $\vec K_{{n+1}}$ is uniquely determined by $\vec K_{n}$. \end{lemma} \begin{proof} We have by (\ref{def-eps}), $$ {\mathbb A}(n,{\bf a}) (\omega\vec s) = \vec{K}_n + \vec{{\varepsilon}}_n,\ \ A({\bf a}_{n+1}) {\mathbb A}(n,{\bf a}) (\omega\vec s) = \vec{K}_{n+1} + \vec{{\varepsilon}}_{n+1}, $$ hence $$ \vec{K}_{n+1} - A({\bf a}_{n+1})\vec{K}_n = -\vec{{\varepsilon}}_{n+1} + A({\bf a}_{n+1})\vec{{\varepsilon}}_n. $$ It follows that $$ \left\|\vec{K}_{n+1} - A({\bf a}_{n+1})\vec{K}_n\right\| \le (1 + \exp (W_{n+1}))\max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}. $$ Now both parts of the lemma follow easily. (i) We have by (\ref{up1}), $$ \left\|\vec{K}_{n+1} - A({\bf a}_{n+1})\vec{K}_n\right\| \le (1 + \exp (W_{n+1}))/2=:\Upsilon, $$ and it remains to note that the $\ell^\infty$ ball of radius $\Upsilon$ centered at $A({\bf a}_{n+1})\vec{K}_n$ contains at most $(2\Upsilon+1)^m$ points of the lattice ${\mathbb Z}^m$. (ii) If $\max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}< \rho_n$, then the radius of the ball is less than $\frac{1}{2}$, and it contains at most one point of ${\mathbb Z}^m$, thus $\vec{K}_{n+1} = A({\bf a}_{n+1})\vec{K}_n$. We are using here that ${\mathbb Z}^m$ is invariant under $A({\bf a}_{n+1})$ since it is an integer matrix.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop-EK}] Let $\wtil E_N(\delta,B)$ be defined by \begin{eqnarray*} \wtil E_N(\delta,B) & := & \bigl\{\omega\vec s\in {\mathbb R}^m_+:\ \vec s\in \Delta_{\bf a}^{m-1},\ \omega\in [B^{-1},B],\\[1.2ex] & & {\rm card}\{n\le N: \max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}\ge \rho_n\} < \delta N\bigr\}. \end{eqnarray*} We recall that $\vec{\varepsilon}_n =\vec{\varepsilon}_n(\omega\vec s)$, but use the shortened notation for simplicity. First we claim that $\Prob$-almost surely, \begin{equation} \label{claima} \wtil E_N(\delta,B) \supset E_N(\varrho, \delta/4, B) \end{equation} for $N\ge N_0({\bf a})$, where \begin{equation} \label{defK} \varrho = \frac{1/2}{1+e^K}, \ \ \mbox{with}\ \ K =2 L_1 \log(1/\delta). \end{equation} Here $L_1$ is from Proposition~\ref{prop-proba}. Suppose $\omega\vec{s} \not\in \wtil E_N(\delta,B)$. Then there exists a subset $\Gamma_N \subset \{1,\ldots,N\}$ of cardinality $\ge \delta N$ such that $$ \max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}\ge \rho_n\ \ \ \mbox{for all}\ n \in \Gamma_N. $$ Note that $\rho_n < \varrho$ is equivalent to $W_{n+1} >K$ by (\ref{defK}) and (\ref{def-Mn}). Observe that there are fewer than $\delta N/2$ integers $n\le N$ for which $W_{n+1}>K$, for $N\ge N_0({\bf a})$, by Corollary~\ref{cor-immed}. Thus $$ {\rm card}\bigl\{n\le N:\ \max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}\ge \varrho \bigr\}\ge \delta N/2, $$ hence there are at least $\delta N/4$ integers $n\le N$ with $\|\vec{\varepsilon}_n\|\ge \varrho$, and so $\omega\vec{s}\not \in E_N(\varrho, \delta/4, B)$, which confirms (\ref{claima}). It follows that it is enough to show that, if $\delta>0$ is sufficiently small, then for every Oseledets subspace $H_{\bf a}^J\supset E^u_{\bf a}$, $$ \dim_H(\widetilde{\mathfrak E}(\delta,B)\cap H_{\bf a}^J) \le \dim(H_{\bf a}^J) - \kappa + \epsilon_1=:\beta, $$ where $$ \widetilde{\mathfrak E}(\delta,B):=\bigcap_{N_0=1}^\infty \bigcup_{N=N_0}^\infty \widetilde {\mathcal E}_N(\delta,B), $$ $$ \widetilde {\mathcal E}_N(\delta,B) := \left\{\vec s\in \Delta_{\bf a}^{m-1}:\ \exists\,\omega\in[B^{-1},B],\ \omega\vec s\in \widetilde E_N(\delta,B)\right\}. $$ Let ${\mathcal H}^\beta$ denote the $\beta$-dimensional Hausdorff measure. A standard method to prove \begin{equation} \label{dimen} {\mathcal H}^\beta(\widetilde{\mathfrak E}(\delta,B)\cap H_{\bf a}^J) <\infty, \end{equation} whence $\dim_H(\widetilde{\mathfrak E}(\delta,B)\cap H_{\bf a}^J)\le \beta$, is to show that $\widetilde {\mathcal E}_N(\delta,B)\cap H_{\bf a}^J$ may be covered by $\approx e^{\beta\cdot N\alpha }$ balls of radius $\approx e^{-N\alpha}$ for $N$ sufficiently large, for some $\alpha>0$. Observe that the map from $\widetilde E_N(\delta,B)$ to $\widetilde {\mathcal E}_N(\delta,B)$ is simply $\omega\vec s\mapsto \vec s$ for $\omega\in [B^{-1},B]$ and $\vec s\in \Delta_{\bf a}^{m-1}$, which is Lipschitz, with a Lipschitz constant $O_{{\bf a},B}(1)$; thus it is enough to produce a covering of $\widetilde E_N(\delta,B)\cap H_{\bf a}^J$. By assumption, $H_{\bf a}^J = E^u_{\bf a} \oplus (H_{\bf a}^J \ominus E_{\bf a}^u)$, hence \begin{equation} \label{eq-prod} \widetilde E_N(\delta,B)\cap H_{\bf a}^J \subset P_{\bf a}^u\left(\widetilde E_N(\delta,B)\right) \times \Bigl\{\vec y\in H_{\bf a}^J \ominus E_{\bf a}^u:\ \|\vec y\|_1 \le \frac{B}{\min_a\mu_{{\bf a}^+}([a])}\Bigr\}=: F_1\times F_2.
\end{equation} Observe that $\dim(H_{\bf a}^J \ominus E_{\bf a}^u)=\dim(H_{\bf a}^J)-\kappa$, so $F_2$ may be covered by \begin{equation} \label{cover1} \exp\left[\alpha N (\dim(H_{\bf a}^J)-\kappa)\right]\ \ \mbox{balls of radius}\ \ e^{-\alpha N},\ \ \mbox{for any}\ \alpha>0. \end{equation} Thus it remains to produce a covering of $F_1=P_{\bf a}^u\left(\widetilde E_N(\delta,B)\right)$. Suppose $\omega\vec{s} \in \wtil E_N(\delta,B)$ and find the corresponding sequence $\vec K_{n}, \vec{\varepsilon}_{n}$ from (\ref{def-eps}). We have from (\ref{good-est}) that for $N$ sufficiently large, \begin{equation} \label{eqrad} P_{\bf a}^u(\omega\vec s)\ \ \mbox{is in the ball centered at}\ \ {\mathbb A}(N,{\bf a})^{-1} P_{\sigma^N{\bf a}}^u \vec{K}_N\ \ \mbox{of radius}\ \ e^{-(\theta_\kappa-\epsilon)N}. \end{equation} Since ${\bf a}$ is fixed, it is enough to estimate the number of sequences $\vec K_{n}$, $n\le N$, which may arise. Let $\Psi_N$ be the set of $n\in \{1,\ldots,N\}$ for which we have $\max\{\|\vec{\varepsilon}_{n}\|,\|\vec{\varepsilon}_{{n+1}}\|\}\ge \rho_n$. By the definition of $\wtil E_N(\delta,B)$ we have $|\Psi_N| <\delta N$. There are $\sum_{i< \delta N} {N\choose i}$ such sets. For a fixed $\Psi_N$ the number of possible sequences $\{\vec K_{n}\}$ is at most $$ {\mathcal B}_N:= \prod_{n\in \Psi_N} M_n, $$ times the number of ``beginnings'' $\vec K_{0}$, by Lemma~\ref{lem-vspom2}. The number of possible $\vec K_{0}$ is bounded, independently of $N$, by a constant depending on $B$, since $\omega\in [B^{-1},B]$ and $\vec s\in \Delta_{\bf a}^{m-1}$. In view of $M_n \le 3^m e^{mW_{n+1}}$, see (\ref{def-Mn}), we have by (\ref{w-cond2}), for $N$ sufficiently large, $$ {\mathcal B}_N \le O_{{\bf a},B}(1) \cdot 3^{m\delta N} \exp\left(m \sum_{n\in \Psi_N}W_{n+1}\right)\le O_{{\bf a},B}(1) \cdot 3^{m\delta N} \exp\left[m\cdot L_1\log(1/\delta)(\delta N)\right]. $$ Thus, by (\ref{eqrad}), the number of balls of radius $e^{-(\theta_\kappa-\epsilon)N}$ needed to cover $P_{\bf a}^u(\widetilde E_N(\delta,B))$ is at most \begin{equation} \label{lasta1} O_{{\bf a},B}(1)\cdot \sum_{i<\delta N} {N\choose i} \cdot 3^{m\delta N} \exp\left[m\cdot L_1\log(1/\delta)(\delta N)\right] \le O_{{\bf a},B}(1)\cdot \exp\left[L_2 \log(1/\delta)(\delta N)\right], \end{equation} for some $L_2>0$, using the standard entropy estimate (or Stirling's formula) for the binomial coefficients. Since $\delta\log(1/\delta)\to 0$ as $\delta\to 0$, we can choose $\delta_0>0$ so small that $\delta< \delta_0$ implies $$ L_2\log(1/\delta)\,\delta N < \epsilon_1 (\theta_\kappa-\epsilon)N. $$ Combining this with (\ref{eq-prod}) and (\ref{cover1}), we obtain that $\wtil E_N(\delta,B)\cap H_{\bf a}^J$ may be covered by $$ \exp\left[(\theta_\kappa-\epsilon)N\beta\right]\ \ \mbox{balls of radius}\ \ e^{-(\theta_\kappa-\epsilon)N} $$ for $N$ sufficiently large, where $\beta = \dim(H_{\bf a}^J) - \kappa + \epsilon_1$. This confirms (\ref{dimen}). The proof of Proposition~\ref{prop-EK}, and hence of Theorem~\ref{th-main11}, is now complete. \end{proof} \section{Derivation of Theorem~\ref{main-moduli} and Theorem~\ref{th-twisted} from Theorem~\ref{th-main1} } \label{sec-deriv} This section is parallel to \cite[Section 11]{BuSo18a}; however, we need to make a number of changes, in view of the requirements on the word ${\bf q}$. Recall the discussion in Section 2.1 and Remark~\ref{remark-main}. Consider our surface $M$ of genus $g\ge 2$.
By the results of \cite[Section 4]{Buf-umn} there is a correspondence between almost every translation flow with respect to the Masur-Veech measure and a natural flow on a ``random'' 2-sided Markov compactum, which is, in turn, measurably isomorphic to the special flow over a one-sided Markov compactum, or, equivalently, an $S$-adic system $X_{{\bf a}^+}$ for ${\bf a}^+\in {\mathfrak A}^{\mathbb N}$. The roof function of the special flow is piecewise constant, depending only on the first symbol, and we can express it as a vector of ``heights''. This symbolic realization uses Veech's construction \cite{veech} of the space of zippered rectangles which corresponds to a connected component of a stratum ${\mathcal H}$. Given a Rauzy class ${\mathcal R}$, we get a space of $S$-adic systems on $m\ge 2g$ symbols, which provide a symbolic realization of the interval exchange transformations (IET's) from ${\mathcal R}$. It was shown by Veech \cite{veechamj} that the ``vector of heights'' obtained in this construction necessarily belongs to a subspace $H(\pi)$, which is invariant under the Rauzy-Veech cocycle and depends only on the permutation of the IET. In fact, the subspace $H(\pi)$ has dimension $2g$ and is the sum of the stable and unstable subspaces for the Rauzy-Veech cocycle. By the result of Forni \cite{Forni}, there are $g$ positive Lyapunov exponents; thus $\dim(E^u_{\bf a})=g\ge 2$. In the setting of Theorem~\ref{th-main1} we will take $H(\pi) = H_{\bf a}(\pi)$ to be our Oseledets subspace $H^J_{\bf a}$, containing the strong unstable subspace $E^u_{\bf a}$. Theorem~\ref{main-moduli} (in the expanded form) will follow from Theorem~\ref{th-main1} by the argument given in Remark~\ref{remark-main}. Indeed, by the assumption, for $\mu$-a.e.\ Abelian differential, the induced conditional measure on a.e.\ fibre has Hausdorff dimension $\ge 2g-\kappa + \delta$; hence, taking $\epsilon_1=\delta$ and using the estimate (\ref{dimesta}) for the exceptional set, we see that the exceptional set has zero $\mu$-measure. Thus it remains to check that the conditions of Theorem~\ref{th-main1} are satisfied. The property (C1) holds because the renormalization matrices in the Rauzy-Veech induction all have determinant $\pm 1$. Condition (C2) holds by a theorem of Zorich \cite{Zorich}. As already mentioned, property (a) (on Lyapunov exponents) in Theorem~\ref{th-main1} holds by a theorem of Forni \cite{Forni}. Next we explain how to achieve the combinatorial properties (b) and (c) for a word ${\bf q}\in {\mathfrak A}^k$. To this end, we need to recall the construction of the Markov compactum and the Bratteli-Vershik realization of the translation flow from \cite{Buf-umn}. The symbolic representation of the translation flow on the 2-sided Markov compactum is obtained as the natural extension of the 1-sided symbolic representation for the IET, which we now describe. An interval exchange is denoted by $(\lambda,\pi)$, where $\pi$ is the permutation of the $m$ subintervals and $\lambda$ is the vector of their lengths. The well-known Rauzy induction (operations ``a'' and ``b'') proceeds by inducing on a smaller interval, so that the first return map is again an exchange of $m$ intervals. The Rauzy graph is a directed labeled graph whose vertices are the permutations of IET's, and whose edges lead to the permutations obtained by applying one of the operations. Moreover, the edges are labeled by the type of the operation (``a'' or ``b'').
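The following Python sketch (our illustration; the two-row formulation and the labelling of the moves follow one common convention, which may differ from other sources) performs the Rauzy--Veech induction on an example IET, records the corresponding path in the Rauzy graph, and accumulates the elementary renormalization matrices, which are unimodular, in agreement with condition (C1):
\begin{verbatim}
import numpy as np

def rauzy_step(top, bot, lam):
    # one step of Rauzy-Veech induction in the two-row (labelled) formulation;
    # top, bot: the two rows of the permutation; lam: dict symbol -> length.
    # Returns the move label and the elementary matrix C with lam_old = C lam_new.
    syms = sorted(lam)
    idx = {s: i for i, s in enumerate(syms)}
    t, b = top[-1], bot[-1]
    C = np.eye(len(syms), dtype=int)
    if lam[t] > lam[b]:                   # top symbol wins: move "a"
        lam[t] -= lam[b]
        bot.remove(b)
        bot.insert(bot.index(t) + 1, b)   # loser goes right after the winner
        C[idx[t], idx[b]] = 1
        return "a", C
    else:                                 # bottom symbol wins: move "b"
        lam[b] -= lam[t]
        top.remove(t)
        top.insert(top.index(b) + 1, t)
        C[idx[b], idx[t]] = 1
        return "b", C

# example: a 3-interval IET with permutation (3 2 1) and generic lengths
top, bot = ["A", "B", "C"], ["C", "B", "A"]
lam = {"A": 0.379, "B": 0.248, "C": 0.373}

path, M = [], np.eye(3, dtype=int)
for _ in range(12):
    move, C = rauzy_step(top, bot, lam)
    path.append(move)
    M = M @ C                             # accumulated renormalization matrix
print("".join(path))                      # word in {a, b}: path in the Rauzy graph
print(M)
print("det =", round(float(np.linalg.det(M))))   # unimodular: +-1
\end{verbatim}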
As is well-known, for almost every IET, there is a corresponding infinite path in the Rauzy graph, and the length of the interval on which we induce tends to zero. For any finite ``block'' of this path, we have a pair of intervals $J\subset I$ and IET's on them, denoted $T_I$ and $T_J$, such that both are exchanges of $m$ intervals and $T_J$ is the first return map of $T_I$ to $J$. Let $I_1, \dots, I_m$ be the subintervals of the exchange $T_I$ and $J_1, \dots, J_m$ the subintervals of the exchange $T_J$. Let $r_i$ be the return time for the interval $J_i$ into $J$ under $T_I$, that is, $ r_i=\min\{ k>0: T_I^kJ_i\subset J\}. $ Represent $I$ as a Rokhlin tower over the subset $J$ and its induced map $T_J$, and write $$I=\bigsqcup\limits_{i=1,\dots, m,\ k=0, \dots, r_i-1} T_I^{k}J_i.$$ By construction, each of the ``floors'' of our tower, that is, each of the subintervals $T_I^{k}J_i$, is a subset of a unique subinterval of the initial exchange, and we define an integer $n(i,k)$ by the formula $$ T_I^{k}J_i\subset I_{n(i,k)}. $$ To the pair $I,J$ we now assign a substitution $\zeta_{IJ}$ on the alphabet $\{1, \dots, m\}$ by the formula \begin{equation} \label{zeta-def} \zeta_{IJ}: i\to n(i,0)n(i,1)\dots n(i, r_i-1). \end{equation} Words obtained from finite paths in the Rauzy graph will be called {\em admissible} (this agrees with the notion of ``admissible word'' in Section 3). By results of Veech, every admissible word occurs with positive probability, and hence appears infinitely often in a typical infinite path. Condition (c) of Theorem~\ref{th-main1} and the simplicity of the admissible word from (b) are verified in the next lemma. \begin{lemma} \label{lem-combi1} There exists an admissible word ${\bf q}$, which is {\bf simple}, whose associated matrix $A(\q)$ has strictly positive entries, and such that the corresponding substitution $\zeta$, with $Q = {\sf S}_\zeta=A(\q)^t$, admits {\bf good return words} $u_1,\ldots,u_m\in GR(\zeta)$ for which $\{\vec{\ell}(u_j):\ j\le m\}$ generate the entire ${\mathbb Z}^m$ as a free Abelian group. \end{lemma} \begin{proof} Recall that by a ``word'' here we mean a sequence of substitutions corresponding to a finite path in the Rauzy graph. Denote by $\zeta_V$ the substitution corresponding to a path $V$. The alphabet ${\mathfrak A}$ may be identified with the set of edges. By construction, if we concatenate two paths $V_1 V_2$, we obtain $ \zeta_{V_1 V_2} = \zeta_{V_2} \zeta_{V_1}. $ First we claim that there exists a loop $V$ in the Rauzy graph such that $A(V)$ is strictly positive and $\zeta_V(j)$ starts with the same letter $c=1$ for every $j\le m$. Indeed, start with an arbitrary loop $V$ in the Rauzy graph such that the corresponding renormalization matrix has all entries positive. Consider the interval exchange transformation with periodic Rauzy-Veech expansion obtained by going along the loop repeatedly (it is known from \cite{veech} that such an IET exists). As the number of passages through the loop grows, the length of the interval forming the phase space of the new interval exchange (the result of the induction process) goes to zero. In particular, after sufficiently many moves, this interval will be completely contained in the first subinterval of the initial interval exchange --- but this means, in view of (\ref{zeta-def}), that $n(i,0)=1$ for all $i$, and hence the resulting substitution $\zeta_{V^n}$ has the property that $\zeta_{V^n}(j)$ starts with $c=1$ for all $j$.
Next, observe that for any loop $W$ starting at the same vertex, the substitution $\zeta= \zeta_{WV^{2n}} = \zeta_{V^n}\zeta_{V^n}\zeta_W$ has the property that every $\zeta_{V^n}(j)$ is a return word for it. Indeed, applying $\zeta$ to any sequence we obtain a concatenation of words of the form $\zeta_{V^n}(j)$ for $j\le m$, in some order, and every one of them starts with $c=1$. Therefore, they are all return words. Moreover, every letter $j$ appears in every word $\zeta_{V^n}(i)$, since the substitution matrix of $\zeta_V$ is strictly positive. Thus, every word $u_j:= \zeta_{V^n}(j)$ appears in every $\zeta(i)$, which means that all these words are good return words for $\zeta$. The corresponding population vectors $\vec\ell(u_j)$ are the columns of the substitution matrix of $\zeta_{V^n}$. As is well-known, the matrices corresponding to the Rauzy operations are invertible and unimodular, which means that the columns of ${\sf S}_{\zeta_{V^n}}$ are linearly independent and generate ${\mathbb Z}^m$ as a free Abelian group. It remains to choose $W$ so as to make sure that the word $WV^{2n}$ is simple. It is known that in the Rauzy graph there are $a$- and $b$-cycles starting at every vertex. Assume that the loop $V$ ends with an edge labelled by $a$ (the other case is treated similarly). Then first fix an $a$-loop $W_2$ starting at the same vertex as $V$, so that $W_2 V^{2n}$ starts and ends with an $a$-edge. Then choose a $b$-loop $W_1$ starting at the same vertex, with the property that $|W_1|> |W_2 V^{2n}|$. We will then consider the admissible word $W_1 W_2 V^{2n}$, and we claim that it is simple. Indeed, the word in the alphabet $\{a,b\}$ corresponding to it has the form $b^k a\ldots a$, and it is simple, because its length is less than $2k$. The proof is complete. \end{proof} Condition (\ref{eq-star1}), a variant of the exponential estimate for return times of the Teichm\"uller flow into compact sets, is proved for all genera in \cite[Prop.\,11.3]{BuSo18a} by modifying an argument from \cite{buf-jams}. Theorem~\ref{main-moduli} and Theorem~\ref{th-twisted} are now completely proved.
\section{Introduction} A medium of deconfined gluons and light quarks, called the quark-gluon plasma (QGP)~\cite{Bazavov:2011nk}, can be produced in ultrarelativistic heavy-ion collisions (URHICs). Charmonium, a bound state of a charm ($c$) and anti-charm ($\bar{c}$) quark pair, has been proposed as a clean probe of the formation of the QGP in heavy-ion collisions~\cite{Matsui:1986dk}. Different charmonium states, bound by the $c\bar{c}$ potential subject to color screening from in-medium light partons, are sequentially melted as the temperature of the QGP medium increases~\cite{Satz:2005hx}. Besides, charmonium states suffer direct dissociation through inelastic scattering with in-medium partons~\cite{Peskin:1979va,Bhanot:1979vb,Grandchamp:2001pf,Brambilla:2011sg,Brambilla:2013dpa}, which corresponds to an additional imaginary part of the $c\bar{c}$ potential~\cite{Laine:2006ns,Brambilla:2008cx}. In nucleus-nucleus (AA) collisions at the LHC, the initial temperature of the QGP can be far above the dissociation temperature $T_d$ of the charmonium ground state $J/\psi$~\cite{Liu:2012zw,Liu:2017qah}. Most of the primordially produced charmonia are dissociated in the medium, and the final production is dominated by the coalescence of abundant charm and anti-charm quarks in the regions where the local temperature of the medium drops below the dissociation temperatures of charmonium~\cite{Thews:2000rj,Yan:2006ve,Grandchamp:2002wp}. The final spectrum of the charmonia is affected by the charm quark diffusion in the medium and the coalescence probability below the dissociation temperature~\cite{Zhao:2007hh,Du:2015wha,Zhao:2017yan,Zhao:2018jlw,He:2021zej}. In p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV, a small deconfined medium is also believed to be generated~\cite{Zhao:2020wcd}, where the medium temperature is slightly above the critical temperature $T_c$ but still below the dissociation temperature $T_d\simeq 2T_c$ of $J/\psi$, which is on the order of its binding energy, $T_d\simeq E_{b}$. Considering that the mass and typical momentum of the charm quark are larger than the charmonium binding energy, $m > p > 3T >T_d \simeq E_b$, one can integrate out the hard scale $m$ and the soft scale $p$ and work with a non-relativistic potential description~\cite{Pineda:1997bj}. Besides, the recombination contribution to charmonium production becomes negligible in pA collisions~\cite{Chen:2016dke,Du:2018wsj} due to the small $c\bar{c}$ production. This excludes the contamination from the coalescence of a $c\bar{c}$ pair and from correlations between different $c\bar{c}$ pairs, which would lead to a recombination contribution to charmonium production. These features make the Schr\"odinger equation, which evolves only one $c\bar{c}$ pair in a potential, viable as a quantum description of charmonium in pA collisions. With similar considerations, the recombination of bottomonia in URHICs is negligible~\cite{Grandchamp:2005yw,Liu:2010ej,Emerick:2011xu,Du:2017qkv,Yao:2020xzw}, and open quantum system (OQS) descriptions of bottomonium evolution in the QCD medium, which track one bottom/anti-bottom ($b\bar{b}$) pair, have been carried out with various models in recent years~\footnote{Another reason for the extensive discussion of bottomonium is its heavy mass, which makes relativistic effects negligible (NRQCD)~\cite{Caswell:1985ui,Bodwin:1994jh}, and its typical momentum, relatively large compared to the binding energy, which renders a potential description viable (pNRQCD)~\cite{Pineda:1997bj}.}.
The QCD medium environment can be encoded in the Hamiltonian of the heavy quark/antiquark ($Q\bar{Q}$) subsystem as additional terms equivalent to real and imaginary parts of the potential. These studies range from solving the Schr\"odinger equation with a complex potential~\cite{Strickland:2011mw,Krouppa:2015yoa,Boyd:2019arx} to a stochastic potential~\cite{Akamatsu:2011se} and a Schr\"odinger-Langevin type equation~\cite{Katz:2015qja}. More involved calculations tend to evolve the density matrix of the $Q\bar{Q}$ subsystem with the Lindblad formalism~\cite{Brambilla:2016wgg}, whose different terms represent the color screening, primordial dissociation and recombination of one pair, respectively~\cite{Yao:2018nmy}. One such calculation, incorporating the quantum trajectory method, can be found in~\cite{Brambilla:2020qwo}. In this paper, we do not pursue the more involved quantum treatments discussed for bottomonium, but rather solve the Schr\"odinger equation with a complex potential. We parametrize the temperature-dependent in-medium complex potential of the $c\bar{c}$ dipole according to lattice QCD data. Then we evolve the $c\bar{c}$ pair wave function by solving the time-dependent Schr\"odinger equation in position space with the Crank-Nicolson implicit method~\cite{crank_nicolson_1947}. The final production of $J/\psi$ and $\psi(2S)$ is obtained by projecting the wave functions of the $c\bar c$ dipoles onto the charmonium wave functions (obtained by solving the time-independent Schr\"odinger equation with the vacuum potential) after they leave the hot medium along different trajectories. Since the geometric sizes of different charmonium wave functions are different, $J/\psi$ and $\psi(2S)$ experience different magnitudes of the screening effect and of the inelastic collisions with thermal partons. This results in different dissociations of $J/\psi$ and $\psi(2S)$. Since the suppressions of $J/\psi$ and $\psi(2S)$ are clearly distinguished in the experimental data, it is essential to employ different scenarios of the potential in the Schr\"odinger equation and to understand the potential at play behind the data, especially the overall suppression and the relative suppression. In order to understand the role color screening plays, we implement a strong and a weak screening scenario, with and without the imaginary part of the potential. This paper is organized as follows. In Sec.~\ref{sec.2}, the Schr\"odinger equation model and the parametrizations of the heavy quark potential are introduced. The medium evolution, which provides the space-time dependent temperature profile, is described by hydrodynamic equations. In Sec.~\ref{sec.3}, we discuss the application of the model to p-Pb collisions at the LHC energy; the $R_{pA}$ of $J/\psi$ and $\psi(2S)$ are calculated with different in-medium potentials and compared with the experimental data. We conclude in Sec.~\ref{sec.4}. \section{Schr\"odinger equation model} \label{sec.2} \subsection{Initial distributions} Heavy quark dipoles are produced in initial parton hard scatterings and then evolve into charmonium eigenstates. The momentum distribution of $c\bar c$ dipoles is approximated by the $J/\psi$ momentum distribution in proton-proton (pp) collisions.
Therefore, in p-Pb collisions, the initial distribution of primordially produced $c\bar c$ dipoles can be obtained through a superposition of the effective pp collisions~\cite{Chen:2016dke}, \begin{align} \label{eq-initial} f_{\Psi}({\bf p},{\bf x}|{\bf b}) &= (2\pi)^3\delta(z)T_\text{p}({\bf x}_T)T_\text{A}({\bf x}_T-{\bf b}) \nonumber \\ &\times {\cal R}_{\text g}(x_g,\mu_{\text F},{\bf x}_{\text T}-{\bf b}) {d \bar \sigma^\Psi_{pp}\over d^3{\bf p}}, \end{align} where $\bf b$ is the impact parameter, $\bf x_T$ is the transverse coordinate, and $T_A({\bf x_T})=\int dz\rho_A({\bf x_T},z)$ is the nuclear thickness function, with the nuclear density taken as a Woods-Saxon distribution. $T_p({\bf x_T})$ is the proton thickness function, where the proton density is taken as a Gaussian distribution~\cite{Chen:2016dke}. The width of the Gaussian is determined by the proton charge radius $\langle r\rangle_p=0.9$ fm~\cite{Gao:2021sml}. The shadowing effect is included via the inhomogeneous modification factor ${\cal R}_\text{g}$~\cite{Vogt:2004dh} for gluons with the longitudinal momentum fraction $x_g=e^{y}\ E_\text{T}/\sqrt{s_{\text{NN}}}$ and the factorization scale $\mu_\text{F}=E_\text{T}$. The transverse energy and the momentum rapidity are defined as $E_\text{T}=\sqrt{m_\Psi^2+{\bf p}_\text{T}^2}$ and $y=1/2\ln((E+p_\text{z})/(E-p_\text{z}))$, respectively. The values of the gluon shadowing factor $\mathcal{R}_g$ are obtained with the EPS09 model~\cite{Eskola:2009uj}. The effective initial momentum distribution ${d \bar \sigma^\Psi_{pp}\over d^3{\bf p}}$ of charmonium in p-Pb collisions includes the Cronin effect~\cite{Cronin:1974zm}. Before two gluons fuse into a heavy quark dipole, they acquire additional transverse momentum via multiple scatterings with the surrounding nucleons. This extra momentum is inherited by the produced $c\bar c$ dipole or charmonium states. With the random walk approximation, the Cronin effect is included via a modification of the momentum-differential cross section measured in pp collisions, \begin{align} \label{eq-cronin} {d\bar \sigma_{pp}^{\Psi}\over d^3{\bf p}} = {1\over \pi a_{gN}l}\int d^2{\bf q}_T e^{-{\bf q}_T^2\over a_{gN}l} {d\sigma_{pp}^{\Psi}\over d^3{\bf p}} \end{align} where $l({\bf x}_T)=0.5T_A({\bf x}_T)/\rho_A({\bf x}_T,z=0)$ is the average path length the gluon travels through the nucleus before scattering with the other gluon from the proton to produce a heavy quark dipole at the position ${\bf x}_T$. $a_{gN}$ represents the extra transverse momentum squared acquired per unit path length in nuclear matter before the fusion process. Its value is taken to be $a_{gN}=0.15\ \rm{GeV^2/fm}$~\cite{Chen:2018kfo}. The charmonium distribution in pp collisions has been measured by the ALICE Collaboration at 2.76 TeV and 7 TeV~\cite{ALICE:2012vup,ALICE:2011zqe}. Based on these data, we parametrize the normalized $p_T$ distribution of charmonium at $\sqrt{s_{NN}}=5.02$ TeV as \begin{align} {dN_{J/\psi}\over 2\pi p_T d{ p_T}} = {(n-1)\over \pi (n-2) \langle p_T^2\rangle_{pp}}[1+{p_T^2\over (n-2)\langle p_T^2\rangle_{pp}}]^{-n} \end{align} where $n=3.2$, and the mean transverse momentum squared of charmonium is parametrized as $\langle p_T^2\rangle_{pp}(y)=12.5\times [1- (y/y_{\rm max})^2] \rm{(GeV/c)^2}$, with the maximum rapidity of charmonium defined by $y_{\rm max}=\ln(\sqrt{s_{NN}}/m_\Psi)$~\cite{Chen:2015iga}. $m_\Psi$ is the charmonium mass.
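As a numerical cross-check of this parametrization (a small sketch we add for illustration; it is not part of the original text), one can verify that the distribution integrates to unity and reproduces the input $\langle p_T^2\rangle_{pp}$; the mass value $m_\Psi=3.1$ GeV used for $y_{\rm max}$ is the standard $J/\psi$ mass:
\begin{verbatim}
import numpy as np

def dN_d2pt(pt, y, n=3.2, sqrt_s=5020.0, m_psi=3.1):
    # normalized J/psi spectrum dN / (2 pi pT dpT), momenta in GeV/c
    y_max = np.log(sqrt_s / m_psi)
    mean_pt2 = 12.5 * (1.0 - (y / y_max) ** 2)   # <pT^2>_pp(y)
    return (n - 1) / (np.pi * (n - 2) * mean_pt2) \
        * (1.0 + pt ** 2 / ((n - 2) * mean_pt2)) ** (-n)

pt = np.linspace(0.0, 100.0, 200001)
w = dN_d2pt(pt, y=0.0) * 2.0 * np.pi * pt        # dN/dpT at midrapidity
print("normalization:", np.trapz(w, pt))         # ~ 1.0
print("<pT^2>:", np.trapz(w * pt ** 2, pt))      # ~ 12.5 (GeV/c)^2
\end{verbatim}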
\subsection{Evolution of $c\bar c$ dipoles in the medium} The heavy quark potential of the $c\bar c$ dipole is modified by the hot medium, which affects the evolution of the charmonium wave functions~\cite{Kajimoto:2017rel,Guo:2015nsa, Chen:2016vha}. Hot medium effects can be included in the Hamiltonian of the $c\bar c$ dipoles. As the charm quark is heavy compared with the scale of the internal motion inside charmonium bound states, relativistic effects are neglected when considering the internal structure of a charmonium. We employ the time-dependent Schr\"odinger equation to describe the evolution of the $c\bar c$ dipole wave functions with in-medium complex potentials. Assuming the heavy quark-medium interaction is spherically symmetric, without angular dependence, there is no mixing between charmonium eigenstates with different angular momenta in the wave function of the $c\bar c$ dipole. The radial part of the wave function of the $c\bar c$ dipole in the center-of-mass frame then satisfies \begin{align} \label{fun-rad-sch} i\hbar {\partial \over \partial t}\psi( r, t) = \Big[-{\hbar^2\over 2m_\mu}{\partial ^2\over \partial r^2} +V( r, T) + {L(L+1)\hbar^2\over 2 m_\mu r^2}\Big]\psi(r,t) \end{align} where $r$ is the relative distance between the charm and anti-charm quarks and $t$ is the proper time in the center-of-mass frame. $m_\mu=m_1m_2/(m_1+m_2)=m_c/2$ is the reduced mass, and $m_c$ is the charm quark mass. $\psi(r,t)$ is defined as $\psi(r,t)=r R(r,t)$, where $R(r,t)$ is the radial part of the $c\bar c$ dipole wave function. The complete wave function of the $c\bar c$ dipole can be expanded in the eigenstates of the vacuum Cornell potential, $\Psi(r,\theta, \phi)= \sum_{nlm}c_{nlm}R_{nl}(r)Y_{lm}(\theta, \phi)$, where $Y_{lm}$ is the spherical harmonic and $L=(0,1,...)$ is the quantum number of the angular momentum. In an ideal fluid with zero viscosity, the heavy quark potential $V(r,T)$ is radial, and there are no transitions between charmonium eigenstates with different angular momenta $L$. The potential depends on the local temperature of the medium, which is given by the hydrodynamic model in the next section. The radial Schr\"odinger equation Eq.~(\ref{fun-rad-sch}) is solved numerically with the Crank--Nicolson method (we take natural units $\hbar=c=1$). The numerical form of the Schr\"odinger equation is \begin{align} \label{eq-sch-num} {\bf T}_{j,k}^{n+1}\psi_{k}^{n+1} = \mathcal{V}_{j}^{n}, \end{align} where $j$ and $k$ are the row and column indices of the matrix $\bf T$, respectively, and summation over $k$ is implied. The non-zero elements of the matrix are \begin{align} \label{eq-sim-cn} &{\bf T}^{n+1}_{j,j}= 2+2a+bV_j^{n+1}, \nonumber \\ &{\bf T}^{n+1}_{j,j+1}={\bf T}^{n+1}_{j+1,j}= -a, \nonumber \\ &\mathcal{V}_j^n= a\psi_{j-1}^n +(2-2a-bV_j^n)\psi_j^n +a\psi_{j+1}^n , \end{align} where $i$ is the imaginary unit, $a= i\Delta t/(2m_\mu (\Delta r)^2)$, and $b=i\Delta t$. The subscript $j$ and the superscript $n$ in $\psi_j^n$ represent the coordinate $r_j=r_0 +j\cdot \Delta r$ and the time $t^n=t_0 +n\cdot\Delta t$, respectively. $\Delta r$ and $\Delta t$ are the radial and time steps of the numerical simulation; their values are taken to be $\Delta r=0.03$ fm and $\Delta t=0.001$ fm/c, respectively. $t_0$ is the starting time of the Schr\"odinger equation. The matrix ${\bf T}^{n}$ at each time step depends on the in-medium heavy quark potential $V(r,T)$, which will be given later. The Schr\"odinger equation Eq.~(\ref{fun-rad-sch}) describes the evolution of the wave function of the $c\bar c $ dipole for $t\ge t_0$.
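For concreteness, here is a minimal Python sketch of the scheme (\ref{eq-sch-num})-(\ref{eq-sim-cn}), solving the tridiagonal system at each step with the Thomas algorithm; the charm mass $m_c=1.5$ GeV and the use of the vacuum Cornell potential are illustrative assumptions (they are not quoted above), and the fm-to-$\rm{GeV}^{-1}$ conversion uses $\hbar c=0.1973$ GeV$\,$fm:
\begin{verbatim}
import numpy as np

hbarc = 0.1973            # GeV fm
m_c = 1.5                 # GeV, assumed charm quark mass
m_mu = m_c / 2.0          # reduced mass
dt = 0.001 / hbarc        # time step (0.001 fm/c) in GeV^{-1}
dr = 0.03 / hbarc         # radial step (0.03 fm) in GeV^{-1}

r = dr * np.arange(1, 1001)            # radial grid, r = 0 excluded
V = -np.pi / 12.0 / r + 0.2 * r        # vacuum Cornell potential (GeV)

a = 1j * dt / (2.0 * m_mu * dr ** 2)
b = 1j * dt

def cn_step(psi, V_old, V_new):
    # one Crank-Nicolson step: solve T psi^{n+1} = V^n (Thomas algorithm,
    # tridiagonal matrix with off-diagonal -a and Dirichlet boundaries)
    rhs = (2.0 - 2.0 * a - b * V_old) * psi
    rhs[:-1] += a * psi[1:]
    rhs[1:] += a * psi[:-1]
    diag = 2.0 + 2.0 * a + b * V_new
    n = len(psi)
    cp = np.zeros(n, dtype=complex)
    dp = np.zeros(n, dtype=complex)
    cp[0], dp[0] = -a / diag[0], rhs[0] / diag[0]
    for j in range(1, n):              # forward sweep
        den = diag[j] + a * cp[j - 1]
        cp[j] = -a / den
        dp[j] = (rhs[j] + a * dp[j - 1]) / den
    out = np.zeros(n, dtype=complex)
    out[-1] = dp[-1]
    for j in range(n - 2, -1, -1):     # back substitution
        out[j] = dp[j] - cp[j] * out[j + 1]
    return out

psi = r * np.exp(-r ** 2 / 2.0)        # toy initial wave packet psi = r R(r)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dr)
for _ in range(1000):                  # evolve for 1 fm/c
    psi = cn_step(psi, V, V)
print("norm:", np.sum(np.abs(psi) ** 2) * dr)   # ~ 1 for a real potential
\end{verbatim}
For a real (Hermitian) potential the Crank--Nicolson step is unitary, so the conserved norm provides a basic check of the implementation; with a complex potential the norm decays, reflecting the dissociation described above.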
The initial wave function of the $c\bar c$ dipole is taken to be one of the charmonium eigenstates. After traveling through the hot medium, the fractions $|c_{nl}(t)|^2$ of each charmonium eigenstate (1S, 1P, 2S, etc.) in the $c\bar c$ dipole change with time. $c_{nl}(t)$ is defined as \begin{align} c_{nl}(t) &= \int R_{nl}(r)e^{-iE_{nl} t} \psi(r,t) rdr \end{align} where the radial wave function $\psi(r,t)$ is given by Eq.~(\ref{eq-sch-num}). The ratio of the final and initial fractions of a certain charmonium state in one $c\bar c$ dipole is written as $R^{\rm direct}(t) ={|c_{nl}(t)|^2\over |c_{nl}(t_0)|^2}$. In p-Pb collisions, the initial spatial and momentum distributions of the primordially produced $c\bar c$ dipoles are given by Eq.~(\ref{eq-initial}). After averaging over the positions and momentum bins of the different $c\bar c$ dipoles in p-Pb collisions, one obtains the ensemble-averaged fraction of a certain charmonium state in the $c\bar c$ dipole, $\langle |c_{nl}(t)|^2\rangle_{\rm en}$. The direct nuclear modification factor of the charmonium eigenstate ($n,l$) is written as \begin{align} \label{eq-directRAA} R_{pA}^{\rm direct}(nl) &={\langle |c_{nl}(t)|^2\rangle_{\rm en}\over \langle |c_{nl}(t_0)|^2\rangle_{\rm en}}\nonumber \\ &={\int d{\bf x}_{\Psi}d{\bf p}_{\Psi} |c_{nl}(t, {\bf x}_{\Psi}, {\bf p}_{\Psi})|^2{{dN^{\Psi}_{pA}}\over d{\bf x}_{\Psi} d{\bf p}_{\Psi}} \over \int d{\bf x}_{\Psi}d{\bf p}_{\Psi} |c_{nl}(t_0,{\bf x}_0, {\bf p}_{\Psi})|^2 {\overline {dN^{\Psi}_{pA}}\over d{\bf x}_{\Psi} d{\bf p}_{\Psi}}} \end{align} where ${\bf x}_{\Psi}$ and ${\bf p}_{\Psi}$ are the position and the total momentum of the correlated $c\bar c$ dipole. Without hot medium effects, these correlated $c\bar c$ dipoles would simply remain charmonium eigenstates without dissociation. ${dN_{pA}^{\Psi}\over d{\bf x}_{\Psi}d{\bf p}_{\Psi}}$ is the initial spatial and momentum distribution of primordially produced charmonium in p-Pb collisions, given by Eq.~(\ref{eq-initial}). Note that in the denominator, ${\overline{dN_{pA}^{\Psi}}\over d{\bf x}_{\Psi}d{\bf p}_{\Psi}}$ is calculated from Eq.~(\ref{eq-initial}) excluding the cold nuclear matter effects. After including the feed-down contributions from excited states, one obtains the nuclear modification factor of $J/\psi$ (the quantity measured experimentally), \begin{align} \label{eq-promptRAA} R_{pA}(J/\psi) = {\sum_{nl} \langle |c_{nl}(t)|^2\rangle_{\rm en} f_{pp}^{nl} \mathcal{B}_{nl\rightarrow J/\psi}\over \sum_{nl} \langle |c_{nl}(t_0)|^2\rangle_{\rm en} f_{pp}^{nl} \mathcal{B}_{nl\rightarrow J/\psi}} \end{align} where $\mathcal{B}_{nl\rightarrow J/\psi}$ is the branching ratio of the charmonium eigenstate with quantum numbers $(n,l)$ decaying into the ground state $J/\psi$. We consider the decay channels $\chi_c\rightarrow J/\psi$ and $\psi(2S)\rightarrow J/\psi$. $f_{pp}^{nl}$ is the direct production fraction of the charmonium eigenstate ($J/\psi$, $\chi_c$, $\psi(2S)$), i.e., without the feed-down process, in pp collisions. The ratios of the charmonium direct production are extracted to be $f_{pp}^{J/\psi}:f_{pp}^{\chi_c}:f_{pp}^{\psi(2S)} =0.68:1:0.19$~\cite{ParticleDataGroup:2018ovx}. \subsection{In-medium heavy quark potential} In vacuum, the heavy quark potential in quarkonium is well approximated by the Cornell potential. At finite temperature, the Cornell potential is screened by thermal light partons, and the real part of the in-medium heavy quark potential lies between the limits of the free energy $F$ and the internal energy $U$ of charmonium.
The in-medium potential has been studied by lattice QCD calculations and potential models~\cite{Kaczmarek:2005ui,Digal:2005ht,Shi:2021qri,Lafferty:2019jpr}. We parametrize the temperature and coordinate dependence of the free energy with the formula \begin{align} \label{fun-latreal} F(T,r) =& -{\alpha\over r}[e^{-\mu r}+\mu r]\nonumber \\ & -{\sigma \over 2^{3/4}\Gamma[3/4]}({r\over \mu})^{1/2} K_{1/4}[(\mu r)^2] +{\sigma\over 2^{3/2}\mu }{\Gamma[1/4]\over \Gamma[3/4]} \end{align} where $\alpha=\pi/12$ and $\sigma=0.2\ \rm{GeV^2}$ are the parameters of the Cornell potential $V_c(r)={-\alpha/r}+\sigma r$, $\Gamma$ is the Gamma function, and $K_{1/4}$ is a modified Bessel function of the second kind. The screened mass in Eq.~(\ref{fun-latreal}) is taken as~\cite{Digal:2005ht} \begin{align} {\mu(\bar T)\over \sqrt{\sigma}} = s\bar{T} +a \sigma_t \sqrt{\pi \over 2} [\mathrm{erf}({b\over \sqrt{2}\sigma_t}) - \mathrm{erf}({b-\bar{T}\over \sqrt{2}\sigma_t})] \end{align} with ${\bar T}\equiv T/T_c$, where $T_c$ is the critical temperature of the deconfinement phase transition. The other parameters are taken as $s=0.587$, $a=2.150$, $b=1.054$, $\sigma_t=0.07379$, and $\mathrm{erf}(z)$ is the error function. The internal energy of heavy quarkonium can be obtained via the relation $U(T,r)=F - T\,\partial F/\partial T$. Where the slope of the potential curve becomes flat, there is no attractive force left to confine the wave function at that distance $r$. At temperatures around $T_c$, there is a sudden shift in the screened mass $\mu(\bar T)$~\cite{Digal:2005ht}, and the internal energy may become slightly larger than the vacuum Cornell potential. This behavior can be seen in $U(T,r)$ at $r\sim 0.4$ fm in Fig.~\ref{fig-diffV} and becomes more evident as $T\rightarrow T_c$. To avoid this subtlety, we take the free energy as the limit of strong color screening and the vacuum Cornell potential as the limit of extremely weak color screening; the realistic potential lies between these two limits. The different heavy quark potentials in Fig.~\ref{fig-diffV} will be used in the Schr\"odinger equation to calculate the nuclear modification factors of $J/\psi$ and $\psi(2S)$ in the next section. \begin{figure}[!hbt] \centering \includegraphics[width=0.33\textwidth]{fig1.pdf} \caption{(Color online) Different parametrizations of the real part of the heavy quark potential as a function of $r$ at $T=1.5T_c$. The free energy $F(r,T)$, the internal energy $U(r,T)$, and the Cornell potential $V_c(r)$ are plotted with different color lines. } \hspace{-0.1mm} \label{fig-diffV} \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=0.33\textwidth]{fig2.pdf} \caption{(Color online) The imaginary part of the heavy quark potential as a function of distance. The gray band represents the $95\%$ confidence region, whereas the black curve corresponds to the maximum a posteriori parameter set. The lattice data are taken from Ref.~\cite{Burnier:2016mxc}. Symbols from purple to red correspond to results from low to high temperature. } \hspace{-0.1mm} \label{lab-fig-imagV} \end{figure} In the hot medium, quarkonium bound states can also be dissociated by inelastic scattering with thermal light partons. This process contributes an imaginary part to the potential $V(T,r)$.
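For reference, Eq.~(\ref{fun-latreal}) and the screened mass are simple to evaluate numerically. The sketch below is written in natural units ($r$ in GeV$^{-1}$; $\hbar c\simeq 0.197$ GeV\,fm can be used for conversions), an assumption on our part; the imaginary part parametrized next can be added directly on top of the returned real part:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, kv, erf

ALPHA, SIGMA = np.pi / 12.0, 0.2               # Cornell parameters, sigma in GeV^2
S, A, B, SIG_T = 0.587, 2.150, 1.054, 0.07379  # screened-mass parameters

def mu(T_bar):
    # screened mass mu(T/Tc) in GeV
    return np.sqrt(SIGMA) * (S * T_bar + A * SIG_T * np.sqrt(np.pi / 2.0)
        * (erf(B / (np.sqrt(2.0) * SIG_T))
           - erf((B - T_bar) / (np.sqrt(2.0) * SIG_T))))

def free_energy(T_bar, r):
    # F(T, r) of Eq. (fun-latreal); r in GeV^-1
    m = mu(T_bar)
    return (-ALPHA / r * (np.exp(-m * r) + m * r)
            - SIGMA / (2.0 ** 0.75 * gamma(0.75))
              * np.sqrt(r / m) * kv(0.25, (m * r) ** 2)
            + SIGMA / (2.0 ** 1.5 * m) * gamma(0.25) / gamma(0.75))
\end{verbatim}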
We parametrize the temperature and spatial dependence of the imaginary potential by \begin{align} \label{eq-imagV} &V_I(T,\bar r)= -i\,T(a_1\, {\bar r} + a_2 {\bar r}^2)\,, \end{align} where $i$ is the imaginary unit and $\bar r\equiv r/{\rm fm}$ is a dimensionless variable. The dimensionless coefficients $a_1$ and $a_2$ are obtained by Bayesian inference, fitting the lattice QCD calculations~\cite{Burnier:2016mxc}. We focus on the temperature range relevant to p-Pb collisions, $T_c <T< 1.9~T_c$. The results are shown in Fig.~\ref{lab-fig-imagV}, where the gray band represents the $95\%$ confidence interval and the black curve corresponds to the parameter set $a_1=-0.040$ and $a_2=0.50$, which maximizes the posterior distribution. The magnitude of the imaginary potential becomes smaller at smaller distances, which results in a weaker reduction of the $J/\psi$ component than of the $\psi(2S)$ component in the wave function of the $c\bar c$ dipole. As the imaginary potential in Fig.~\ref{lab-fig-imagV} is calculated in a gluonic medium, we take the same formula for the quark-gluon plasma produced in heavy-ion collisions, which introduces some uncertainty in the suppression of charmonium in p-Pb collisions~\cite{Lafferty:2019jpr,Burnier:2014ssa}. This uncertainty of the imaginary potential is partially accounted for by the theoretical band in Fig.~\ref{lab-fig-imagV}, and will be reflected in the charmonium $R_{pA}$. In the hot medium produced in p-Pb collisions, heavy quark dipoles experience different local temperatures as they move along different trajectories. The real and imaginary parts of the potential, which depend on the local temperature, therefore also change with time. The wave packet at each time step is obtained from the Schr\"odinger equation, while its normalization is reduced by the imaginary part of the Hamiltonian. As a result, the fractions of charmonium eigenstates in the wave packet change with time due to the in-medium potentials. \subsection{Hot medium evolution in p-Pb collisions} The dynamical evolution of the hot medium produced in p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV is described by the hydrodynamic equations~\cite{Zhao:2020wcd}, \begin{align} \partial_{\mu} T^{\mu\nu}=0 \end{align} where $T^{\mu\nu}=(e+p)u^\mu u^\nu-g^{\mu\nu}p$ is the energy-momentum tensor, $e$ and $p$ are the energy density and the pressure, respectively, and $u^\mu$ is the four-velocity of the medium. An equation of state is needed to close the hydrodynamic equations. The deconfined phase is treated as an ideal gas of gluons and massless $u$ and $d$ quarks, plus $s$ quarks with mass $m_s=150$ MeV. The confined phase is treated with the hadron resonance gas (HRG) model~\cite{Sollfrank:1996hd}. The two phases are connected by a first-order phase transition, whose critical temperature is determined to be $T_c=165$ MeV by choosing the mean field repulsion parameter and the bag constant to be $K=450 \ \rm{MeV\,fm^3}$ and $B^{1/4}=236$ MeV~\cite{Zhu:2004nw}. Based on the multiplicity of light hadrons measured in p-Pb collisions and on simulations with other hydrodynamic models~\cite{ALICE:2014xsp,Zhao:2020wcd}, we take the maximum initial temperature of the hot medium to be $T_0({\bf x}_T=0|b=0)=248$ MeV at forward rapidity and $289$ MeV at backward rapidity.
Event-by-event fluctuations in the hydrodynamic evolution are not yet included. The profile of the initial energy density is also consistent with the results from a multi-phase transport (AMPT) model~\cite{Liu:2013via}. The hydrodynamic evolution starts at $\tau_0=0.6$ fm/c, where the hot medium is assumed to reach local equilibrium. For the most central collisions, with impact parameter $b=0$, the time evolution of the local temperature at $x_T=0$ at forward and backward rapidity is plotted in Fig.~\ref{fig-hydro-plot}. The medium evolution at other impact parameters is obtained by scaling the initial entropy, which depends on the number of participants $N_p(b)$ and the number of binary collisions $N_{coll}(b)$. \begin{figure}[!hbt] \centering \includegraphics[width=0.33\textwidth]{fig3.pdf} \caption{(Color online) Time evolution of the temperature in the center of the medium ($x_T=0$) at forward and backward rapidity in $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. The impact parameter, defined as the distance between the centers of the proton and the nucleus, is $b=0$ fm. } \hspace{-0.1mm} \label{fig-hydro-plot} \end{figure} \section{Applications in p-Pb collisions}\label{sec.3} We apply the Schr\"odinger equation to the dynamical evolution of charmonium in $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. In Fig.~\ref{fig-RAA-forwd}, the $R_{pA}$ of $J/\psi$ and $\psi(2S)$ at forward rapidity (defined as the proton-going direction) is plotted. The shadowing effect modifies the parton densities in the colliding nucleus, which changes the gluon density and hence the charmonium production in nuclear collisions compared with pp collisions. As the shadowing effect acts before the initial production of the heavy quark pair in hard parton scatterings, it gives the same modification factor for $J/\psi$ and $\psi(2S)$, shown as the black dotted line in Fig.~\ref{fig-RAA-forwd}. However, the experimental data show different degrees of suppression of the $J/\psi$ and $\psi(2S)$ production, which indicates different strengths of the final state interactions on different charmonium states. The magnitude of the color screening effect on charmonium still deserves further investigation. In order to study the color screening effect on charmonium observables, we first test the scenario without the imaginary potential in Fig.~\ref{fig-RAA-realV}, which presents calculations for a strong screening scenario with the F potential and a weak screening scenario with the vacuum potential. In the strong color screening scenario with the F potential, the wave function of the $c\bar c$ dipole expands outward due to the weak attractive force between $c$ and $\bar c$. This reduces the overlap between the $c\bar c$ wave function and the charmonium eigenstates, which suppresses the $R_{pA}$ of $J/\psi$ and $\psi(2S)$. The color screening effect alone, however, is not strong enough to explain the strong suppression of the $\psi(2S)$ $R_{pA}$, which indicates the phenomenological necessity of including the imaginary potential. \begin{figure}[!hbt] \centering \includegraphics[width=0.35\textwidth]{fig4.pdf} \caption{(Color online) Nuclear modification factors of $J/\psi$ and $\psi(2S)$ as a function of the number of binary collisions $N_{coll}$ at forward rapidity in $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. Only the real part of the heavy quark potential is included.
The black dashed-dotted line is the calculation with only cold nuclear matter effects. The two limits of the potential are taken as the vacuum Cornell potential $V=V_c(r)$ (weak screening) and the free energy $V=F(r,T)$ (strong screening). The experimental data are from the ALICE Collaboration~\cite{ALICE:2015kgk,Leoncino:2016xwa}. Red circles and blue squares correspond to $J/\psi$ and $\psi(2S)$, respectively. } \hspace{-0.1mm} \label{fig-RAA-realV} \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=0.35\textwidth]{fig5.pdf} \caption{(Color online) Nuclear modification factors of $J/\psi$ and $\psi(2S)$ as a function of the number of binary collisions $N_{coll}$ at forward rapidity in $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. The black dashed-dotted line is the calculation with only cold nuclear matter effects. The in-medium potential is taken to be $V=V_c(r)+V_I(T,r)$ in the upper panel and $V=F(T,r)+V_I$ in the lower panel. Red and blue bands are the results for $J/\psi$ and $\psi(2S)$, respectively. The experimental data are from the ALICE Collaboration~\cite{ALICE:2015kgk,Leoncino:2016xwa}. Red circles and blue squares correspond to $J/\psi$ and $\psi(2S)$, respectively. } \hspace{-0.1mm} \label{fig-RAA-forwd} \end{figure} In Fig.~\ref{fig-RAA-forwd}, the imaginary potential is included on top of the real part of the potential. In the upper panel, the real part is taken as the vacuum Cornell potential, i.e., the imaginary potential acts without the color screening effect. The theoretical band in $R_{pA}$ represents the uncertainty in the parametrization of $V_I$. As one can see, the imaginary potential can explain both the $J/\psi$ and $\psi(2S)$ $R_{pA}$ well; the lower edge of each band corresponds to the upper limit of the $V_I$ parametrization. As the magnitude of $V_I$ increases with distance, the $\psi(2S)$ component in the $c\bar c$ dipole wave function is more strongly suppressed. In the lower panel of Fig.~\ref{fig-RAA-forwd}, corresponding to the strong screening scenario, the real part of the heavy quark potential is taken as the free energy, $V_R=F(T,r)$. The $c\bar c$ dipole is then loosely bound and its wave function expands outward, which reduces the overlap between the $c\bar c$ wave packet and the $J/\psi$ eigenstate. This results in a transition of the final yields from $J/\psi$ to $\psi(2S)$ and scattering states, and the value of $R_{pA}$ is strongly reduced with $V=F+V_I$. The feed-down process ($\chi_c,\psi(2S)\rightarrow J/\psi X$), which happens after the charmonia escape the hot medium, has been included in $R_{pA}$. Comparing the calculated $R_{pA}$ with the experimental data, the vacuum potential is favored, suggesting that the color screening effect is weak for charmonium at the temperatures reached in p-Pb collisions. The imaginary potential is essential to explain the difference between $R_{pA}^{J/\psi}$ and $R_{pA}^{\psi(2S)}$, since the vacuum real potential alone does not change the final projections of the $c\bar c$ wave function onto the different charmonium species. \begin{figure}[!hbt] \centering \includegraphics[width=0.35\textwidth]{fig6.pdf} \caption{(Color online) The $p_T$ dependence of the $J/\psi$ and $\psi(2S)$ nuclear modification factors at forward rapidity in minimum-bias $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. Other conditions are similar to Fig.~\ref{fig-RAA-forwd}. The experimental data are from the ALICE Collaboration~\cite{ALICE:2014cgk}. Red circles and blue squares correspond to $J/\psi$ and $\psi(2S)$, respectively.
} \hspace{-0.1mm} \label{fig-RAApt-forwd} \end{figure} The $p_T$ dependence of the $J/\psi$ and $\psi(2S)$ $R_{pA}$ is also studied in Fig.~\ref{fig-RAApt-forwd}. The black dashed-dotted line is the calculation with only cold nuclear matter effects. At forward rapidity in p-Pb collisions, the shadowing effect reduces the charmonium production. The $R_{pA}$ from cold nuclear matter suppression alone increases with transverse momentum due to the weaker shadowing effect at larger transverse energy; both the dashed-dotted line and the bands therefore increase with $p_T$. In addition, $c\bar c$ dipoles with large velocities quickly leave the hot medium, so their $R_{pA}$ becomes larger due to the weaker hot medium suppression. In the upper panel of Fig.~\ref{fig-RAApt-forwd}, the calculations with only the imaginary potential explain $R_{pA}^{J/\psi}$ and $R_{pA}^{\psi(2S)}$ better than the case with the strong color screening effect in the lower panel of Fig.~\ref{fig-RAApt-forwd}. The theoretical bands correspond to the uncertainty of $V_I$. At backward rapidity, defined as the Pb-going direction, the anti-shadowing effect can increase the $R_{pA}$ of $J/\psi$ and $\psi(2S)$; see the black dashed-dotted line in Fig.~\ref{fig-RAAncoll-back}. Due to the uncertainty of the anti-shadowing effect, we consider an upper-limit anti-shadowing scenario where the $R_{pA}$ is around 1.27 in the most central collisions. The $R_{pA}$ with only cold nuclear matter effects is greater than unity. After including the imaginary potential, the production of charmonium excited states is suppressed, and $R_{pA}^{\psi(2S)}$ falls below unity. Since around 40\% of the final $J/\psi$ yield comes from the decay of excited states ($\chi_c$, $\psi(2S)$) into $J/\psi$, the suppression of the excited states affects $R_{pA}^{J/\psi}$ via the feed-down process. As shown in the upper panel of Fig.~\ref{fig-RAAncoll-back}, the calculated $J/\psi$ $R_{pA}$ reproduces the experimental data in peripheral and semi-central collisions, while in central collisions ($N_{coll}\sim 12$) the theoretical band lies at the edge of the experimental data. This discrepancy between the theoretical results and the experimental data is also seen in the semi-classical transport model~\cite{Chen:2016dke} and the comover model~\cite{Ferreiro:2014bia}, where $R_{pA}^{J/\psi}\lesssim 1$ at $N_{coll}\sim 12$. In the strong color screening scenario with the F potential, the $J/\psi$ theoretical bands strongly underestimate the experimental data; this observation holds at both backward and forward rapidities. \begin{figure}[hbt] \centering \includegraphics[width=0.35\textwidth]{fig7.pdf} \caption{(Color online) Nuclear modification factors of $J/\psi$ and $\psi(2S)$ as a function of the number of binary collisions $N_{coll}$ at backward rapidity in $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. Red and blue bands are the results for $J/\psi$ and $\psi(2S)$; the bands come from the uncertainty of $V_I$. The in-medium heavy quark potentials are taken as $V=V_c(r)+V_I(T,r)$ in the upper panel and $V=F(T,r)+V_I(T,r)$ in the lower panel, respectively. The experimental data are from the ALICE Collaboration~\cite{ALICE:2015kgk,Leoncino:2016xwa}.
Red circles and blue squares correspond to $J/\psi$ and $\psi(2S)$, respectively.} \hspace{-0.1mm} \label{fig-RAAncoll-back} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.35\textwidth]{fig8.pdf} \caption{(Color online) The $p_T$ dependence of the $J/\psi$ and $\psi(2S)$ nuclear modification factors at backward rapidity in minimum-bias $\sqrt{s_{NN}}=5.02$ TeV p-Pb collisions. Other conditions are similar to Fig.~\ref{fig-RAAncoll-back}. The experimental data are from the ALICE Collaboration~\cite{ALICE:2014cgk}. Red circles and blue squares correspond to $J/\psi$ and $\psi(2S)$, respectively.} \hspace{-0.1mm} \label{fig-RAApt-back} \end{figure} The $p_T$ dependence of the charmonium $R_{pA}$ at backward rapidity is presented in Fig.~\ref{fig-RAApt-back}. The black dashed-dotted line includes only the cold nuclear matter effects. Hot medium effects reduce the $R_{pA}$ of $J/\psi$ and $\psi(2S)$ in the low-$p_T$ region. At high $p_T$, the anti-shadowing effect makes $R_{pA}^{J/\psi}$ larger than unity. When the real part of the heavy quark potential is taken as the vacuum potential, the theoretical bands describe the data well in the upper panel of Fig.~\ref{fig-RAApt-back}, while the calculations with the F potential in the lower panel give a small $J/\psi$ $R_{pA}$ due to the expansion of the $c\bar c$ wave packet. \section{Conclusion}\label{sec.4} In this work, we employ a time-dependent Schr\"odinger equation to study the hot medium effects on charmonium observables in proton-nucleus collisions at $\sqrt{s_{NN}}=5.02$~TeV. We initialize the $c\bar{c}$ distribution with the cold nuclear matter effects, including the (anti-)shadowing effect and the Cronin effect. Color screening and parton inelastic scattering are encoded in the real and imaginary parts of the potential, respectively, which are incorporated into the Hamiltonian that drives the quantum evolution. In order to probe the strength of the color screening effect, the imaginary part of the potential is constrained by a statistical fit to the lattice QCD data, while two scenarios for the real potential are considered. In the simulation, $c\bar{c}$ dipoles initialized with different positions and momenta move along different trajectories in the hydrodynamic medium, while their internal evolution is described by the Schr\"odinger equation. The comparison of the simulated results with the experimental data favors a weak screening scenario, i.e., a strong binding scenario. Meanwhile, the imaginary potential is crucial to consistently describe the suppression of the $J/\psi$ and $\psi(2S)$ states and the gap between their suppressions, which arises from the different widths of their wave functions, indicating the importance of parton scattering for different charmonium species. The essential phenomenological results from the quantum evolution presented in this paper are consistent with those of the thoroughly studied semi-classical transport approaches. In a semi-classical approach, color screening affects the in-medium binding energies of charmonium states, leading to different dissociation widths. In the potential model discussed in the present work, by contrast, color screening broadens the $c\bar{c}$ wave function, leading to a transfer from bound states to scattering states. The non-Hermitian imaginary part of the potential directly depletes the norm of the $c\bar{c}$ wave function, corresponding to the dissociation width.
Both effects lead to different suppression strengths, but the imaginary part (dissociation) is shown to be crucial for the suppression within both the potential approach and the semi-classical approaches. This approach also has limitations. Since the size of the $c\bar{c}$ pair is not much smaller than the size of the medium produced in pA collisions, the screening of the potential may vary over the extent of the pair, so a potential model may not be well defined in this case. Nevertheless, this approach offers one angle for investigating charmonium production in small systems. A potential model would be better suited to bottomonium. There are statistical extractions of the in-medium heavy quark potential within a semi-classical transport approach~\cite{Du:2019tjf}, which incorporates the potential into the binding energies and dissociation widths of bottomonium states. Within the potential approach presented here, a direct extraction of the in-medium heavy-quark potential could likewise be performed for bottomonium in AA collisions. We leave this to future publications. \vspace{1cm}
\section{INTRODUCTION} \label{S:intro} Coping with disruptions is a core requirement for autonomous systems operating in the real world. Indeed, as these complex systems leave the controlled setting of the lab, it becomes increasingly important to enable them to safely negotiate adverse situations arising from the dynamic and fast-evolving environments in which they must operate~\cite{Gunes14a, Stankovic05o}. In the context of dynamical systems and control, this issue is often addressed through the concept of \emph{robustness}. The robust approach plans for the worst so that the resulting system can achieve its objective~(e.g., state regulation) regardless of the conditions in which it operates. Techniques such as~$\ensuremath{\mathcal{H}}_\infty$ control, tube model predictive control~(MPC), and robust system-level synthesis have been developed specifically to address this issue~\cite{Dullerud13a, Li00r, Borrelli17p, Anderson19s}. In simple terms, robust systems are ``hard to break.'' Yet, the success of robustness may also be the root of its shortcomings. It is often not viable to plan for every contingency, as doing so would lead to over-conservative behaviors whose performance is deficient even under normal operating conditions. In extreme cases, the resulting control problem may simply be infeasible. Hence, the question is no longer how to operate under or deal with a certain level of disturbance, but what to do when things go so catastrophically wrong that the original equilibrium is no longer viable. In such cases, the only solution is to modify the system requirements, e.g., by removing unlikely contingencies or relaxing specifications, to find an alternative equilibrium. In ecology, this capacity of systems to adapt and recover from disruptions by modifying their underlying operation is known as \emph{resilience}~\cite{Holling73r, Holling96e}. Since its introduction in the 1970s, it has been observed in a myriad of ecosystems and incorporated in fields such as psychology and dynamical/cyber-physical systems~\cite{Werner89v, Rodin14t, Rieger09r, Zhu15game, Ramachandran19r}. Contrary to stability, characterized by the persistence of a system near an equilibrium, resilience emphasizes conditions far from steady state, where instabilities can flip a system into another behavior regime~\cite{Holling96e}. In simple terms, resilient systems are ``easy to fix.'' In dynamical systems and control, robustness and resilience are often conflated. Even when resilience is described, the sought-after behaviors are often robust in the sense of the above definitions, e.g.,~\cite{Chen18r, Tzoumas17r, Guerrero17f}. Even in his seminal works, Holling discriminates between ``engineering resilience''~(robustness) and ``ecological resilience'' by distinguishing systems with a single equilibrium from those with multiple equilibria~\cite{Holling96e}. Though resilient solutions involving adaptation to disruptions have been studied, such as in~\cite{Gunes14a, Stankovic05o, Zhu15game, Ramachandran19r}, a formal, general definition of resilient control akin to its robust counterpart is still lacking. The goal of this work is to formalize resilience in the context of optimal control. We begin by introducing the general problem of constrained control under disturbances and its robust solution~(Section~\ref{S:prob}). We then formulate the resilient optimal control problem by allowing controlled constraint violations in optimal control problems~(Section~\ref{S:resilience}).
To be useful, however, these violations must be appropriately designed, which cannot be done manually for any moderately-sized problem. To address this issue, we put forward a framework to obtain requirement modifications by trading off control performance and violation costs. We analyze this formulation to obtain inverse optimality results and quantify the effect of disturbances on the violations. By proving that robustness and resilience optimize different objectives, we show that they are complementary properties that, in many applications, may be simultaneously required~(Section~\ref{S:resilience_theory}). We conclude by deriving a practical algorithm to solve resilient control problems~(Section~\ref{S:algorithm}) and illustrating its use~(Section~\ref{S:sims}). \section{PROBLEM FORMULATION} \label{S:prob} Let~$\ensuremath{\bm{\Xi}}$ be a random variable taking values in a compact set~$\ensuremath{\mathcal{K}} \subseteq \bbR^d$ according to some measure~$\mathfrak{p}$. We assume for simplicity that~$\mathfrak{p}$ is absolutely continuous with respect to the Lebesgue measure, so that~$\ensuremath{\bm{\Xi}}$ has a probability density function~(Radon-Nikodym derivative) denoted~$f_{\ensuremath{\bm{\Xi}}}$. Its realizations~$\ensuremath{\bm{\xi}}$ denote states of the world that may be construed as disturbances to the normal operation of an autonomous system represented by the prototypical constrained optimal control problem \begin{prob}\label{P:generic} P^\star(\ensuremath{\bm{\Xi}}) = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\Xi}}) \leq 0 \text{,} \quad i = 1,\dots,m \text{,} \end{prob} where~$\ensuremath{\bm{z}}$ denotes the decision variable, e.g., actuation strength, $J$ is a control performance measure, and the~$g_i(\cdot,\ensuremath{\bm{\xi}})$ describe the control requirements under~$\ensuremath{\bm{\xi}}$. \begin{assumption}\label{A:convexity} The control performance~$J: \bbR^p \to \bbR$ is a strongly convex, continuously differentiable function, $g_i(\ensuremath{\bm{z}}, \cdot) \in L_2$ are~$L_i$-Lipschitz continuous with respect to the~$\ell_\infty$-norm for all~$\ensuremath{\bm{z}} \in \bbR^p$, and~$g_i(\cdot, \ensuremath{\bm{\xi}})$ are coercive~(radially unbounded), convex functions for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$. The requirement functions~$g_i$ have continuous derivatives with respect to~$\ensuremath{\bm{z}}$ and~$\ensuremath{\bm{\xi}}$. \end{assumption} Note that since~\eqref{P:generic} is parameterized by a random variable, its optimal solution~$\ensuremath{\bm{z}}^\star(\ensuremath{\bm{\Xi}})$ and value~$P^\star(\ensuremath{\bm{\Xi}})$ are random and depend on the \emph{a priori} unknown disturbance realization. \emph{Our goal is to obtain a deterministic~$\ensuremath{\bm{z}}^\dagger$ that is feasible for most~(if not all) realizations~$\ensuremath{\bm{\xi}}$ and whose performance~$P^\dagger = J(\ensuremath{\bm{z}}^\dagger)$ is similar to the optimal~$P^\star(\ensuremath{\bm{\xi}})$.} Though the latter objective is less critical, it is certainly desired.
To illustrate the use of~\eqref{P:generic} in control, note that it can cast the following constrained LQR problem~\cite{Borrelli17p}: \begin{prob}\label{P:lqr} \minimize_{\ensuremath{\bm{x}}_k,\,\ensuremath{\bm{u}}_k}& &&\ensuremath{\bm{x}}_N^T \ensuremath{\bm{P}} \ensuremath{\bm{x}}_N + \sum_{k = 0}^{N-1} \ensuremath{\bm{x}}_k^T \ensuremath{\bm{Q}} \ensuremath{\bm{x}}_k + \ensuremath{\bm{u}}_k^T \ensuremath{\bm{R}} \ensuremath{\bm{u}}_k \\ \subjectto& &&\abs{\ensuremath{\bm{x}}_k} \leq \ensuremath{\bm{{\bar{x}}}} \text{,} \quad \abs{\ensuremath{\bm{u}}_k} \leq \ensuremath{\bm{{\bar{u}}}} - \ensuremath{\bm{\Xi}}_{u,k} \text{,} \\ &&&\ensuremath{\bm{x}}_{k+1} = \ensuremath{\bm{A}} \ensuremath{\bm{x}}_k + \ensuremath{\bm{B}} \ensuremath{\bm{u}}_k + \ensuremath{\bm{\Xi}}_{d,k} \text{,} \end{prob} where~$\ensuremath{\bm{x}}_k$ and~$\ensuremath{\bm{u}}_k$ are the state and control action at time~$k$, respectively, of a linear dynamical system described by the state-space matrices~$\ensuremath{\bm{A}}$ and~$\ensuremath{\bm{B}}$, $\ensuremath{\bm{{\bar{x}}}}$ and~$\ensuremath{\bm{{\bar{u}}}}$ are bounds on the state and actions, and the initial state~$\ensuremath{\bm{x}}_0$ is given. Here, $\ensuremath{\bm{z}}$ collects the~$\{\ensuremath{\bm{x}}_k,\ensuremath{\bm{u}}_{k-1}\}$ for~$k = 1,\dots,N$. The disturbances in~\eqref{P:lqr} model changes in the dynamics~($\ensuremath{\bm{\Xi}}_{d,k}$) and/or disruptions to the system's actuation capabilities~($\ensuremath{\bm{\Xi}}_{u,k}$). Namely, a realization~$[\ensuremath{\bm{\xi}}_{u,k}]_i = [\ensuremath{\bm{{\bar{u}}}}]_i$ is equivalent to actuator~$i$ being unavailable at instant~$k$. Hence, while the abstract~\eqref{P:generic} is the object of study of this paper, we are ultimately interested in the control problems it represents, e.g., \eqref{P:lqr}. In the control literature, a common approach to obtaining the desired~$\ensuremath{\bm{z}}^\dagger$ is to use the robust formulation of~\eqref{P:generic} \begin{prob}[\textup{P-RO}]\label{P:robust} P^\star_\text{Ro} = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&\Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \geq 1-\delta \text{,} \end{prob} where the probability is taken with respect to the distribution of~$\ensuremath{\bm{\Xi}}$ and the requirements~$g_i$ are collected in the vector-valued function~$\ensuremath{\bm{g}}$ for conciseness. The probability of violation parameter~$\delta \in [0,1]$ trades off feasibility for control performance~\cite{Schwarm99c, Li00r, Borrelli17p}. From the additivity of measures, it follows that reducing~$\delta$ reduces the feasibility set of~\eqref{P:robust}, which may increase the control cost. For~$\delta = 0$, the constraints in~\eqref{P:robust} reduce to the classical worst-case formulation of robustness, enforcing that~$\max_{\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}}\ g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq 0$, i.e., that the solution is feasible for all possible conditions~$\ensuremath{\bm{\xi}}$~\cite{Dullerud13a}. Yet, these conditions can render the control problem infeasible or lead to solutions with impractical levels of performance. These issues are sometimes overcome by the statistical formulation in~\eqref{P:robust}. Under mild conditions, feasible solutions of~\eqref{P:robust} can be obtained using a deterministic optimization problem~\cite{Li00r, Borrelli17p}.
\begin{proposition} Let~$\ensuremath{\bm{\Xi}}$ be a sub-Gaussian random vector~(e.g., Gaussian or Bernoulli), i.e., $\E\left[ e^{\nu \ensuremath{\bm{u}}^T \left( \ensuremath{\bm{\Xi}} - \E[\ensuremath{\bm{\Xi}}] \right)} \right] \leq e^{\nu^2 \sigma^2/2}$ for all~$\nu \in \bbR$ and~$\ensuremath{\bm{u}} \in \bbR^d$ such that~$\norm{\ensuremath{\bm{u}}} = 1$. Then, under Assumption~\ref{A:convexity}, the unique \begin{prob}[$\widehat{\textup{P}}\textup{-RO}$]\label{P:equivalentRobust} \ensuremath{\bm{{\hat{z}}}}_\text{Ro} = \argmin_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \E[\ensuremath{\bm{\Xi}}]) \leq -\epsilon \text{,} \end{prob} with~$\epsilon = L \sigma \sqrt{2 \log(2 m d/\delta)}$, for~$L = \max_i L_i$, is \eqref{P:robust}-feasible. In particular, if~$\ensuremath{\mathcal{K}} \subseteq [0,\bar{\xi}]^d$, then~$\sigma \leq \bar{\xi}/2$. \end{proposition} \begin{proof} Recall that since~$J$ is strongly convex, the solution of~\eqref{P:equivalentRobust} is unique~\cite{Bertsekas09c}. The proof then follows by bounding~$\Pr \left[ \max_i g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \ensuremath{\bm{\Xi}}) \leq 0 \right]$ using concentration of measure~\cite{Ledoux01t}. From the Lipschitz continuity of~$g_i$ we get \begin{align*} g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \ensuremath{\bm{\xi}}) &\leq g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \E[\ensuremath{\bm{\Xi}}]) + L_i \norm{\ensuremath{\bm{\xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \\ {}&\leq -\epsilon + L_i \norm{\ensuremath{\bm{\xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \text{.} \end{align*} Note that since~$g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \E[\ensuremath{\bm{\Xi}}]) \leq 0$ we care only about the positive tail of the Lipschitz inequality. To proceed, use the union bound and Hoeffding's inequality to obtain that \begin{equation}\label{E:hoeffding} \Pr \left[ \max_i L_i \norm{\ensuremath{\bm{\Xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \leq \epsilon \right] \geq 1 - 2 d \sum_{i = 1}^m \exp\left( \frac{-\epsilon^2}{2 L_i^2 \sigma^2} \right) \text{.} \end{equation} Using~$\epsilon$ as in the hypothesis ensures that the right-hand side of~\eqref{E:hoeffding} is at least~$1-\delta$, thus concluding the proof. \end{proof} Robust controllers are often deployed in critical applications, such as industrial process control and security-constrained power allocation~\cite{Borrelli17p, Capitanescu11s}. Nevertheless, their worst case approach has two shortcomings. First, too stringent requirements on the probability of failure~$\delta$ can result in an infeasible problem or render the solution of~\eqref{P:robust} useless in practice due to its poor performance even in favorable conditions. What is more, sensitive requirements~(i.e., large~$L_i$) lead to a large~$\epsilon$ in~\eqref{P:equivalentRobust}, considerably reducing its feasible set. Though~\eqref{P:robust} may be feasible even if~\eqref{P:equivalentRobust} is not, obtaining a solution of the former is challenging without the latter except in special cases~\cite{Dullerud13a, Schwarm99c, Li00r, Borrelli17p}. Second, even if~\eqref{P:robust} is feasible and its solution has reasonable performance, the issue remains of what happens in the~$\delta$ portion of the realizations in which a stronger-than-anticipated disturbance occurs. Indeed, though robust autonomous systems make failures unlikely, they do not account for how the system fails once it does. Hence, though unlikely, failures can be catastrophic.
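To make the surrogate~$\widehat{\textup{P}}\textup{-RO}$ concrete, the following is a minimal sketch instantiating it for the constrained LQR example~\eqref{P:lqr} with \texttt{cvxpy}. Tightening the box constraints by~$\epsilon$ is one possible reading of the margin for these requirements, and all problem data are illustrative placeholders rather than quantities from the text:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def robust_margin(L, sigma, m, d, delta):
    # eps = L * sigma * sqrt(2 log(2 m d / delta)) from the proposition
    return L * sigma * np.sqrt(2.0 * np.log(2.0 * m * d / delta))

def tightened_lqr(A, B, Q, R, P, x0, x_bar, u_bar,
                  xi_d_mean, xi_u_mean, eps, N):
    n, m = B.shape
    x, u = cp.Variable((n, N + 1)), cp.Variable((m, N))
    cost = cp.quad_form(x[:, N], P)
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [
            x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + xi_d_mean,
            cp.abs(x[:, k + 1]) <= x_bar - eps,          # tightened state box
            cp.abs(u[:, k]) <= u_bar - xi_u_mean - eps,  # tightened input box
        ]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    return u.value, prob.status  # status reveals an empty tightened set
\end{verbatim}
For small~$\delta$ or large Lipschitz constants, the margin can empty the tightened feasible set, which is exactly the conservatism just discussed.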
Resilience overcomes these limitations by adapting the underlying optimal control problem to disruptions. \section{RESILIENT CONTROL} \label{S:resilience} In a parallel to the ecology literature, we define resilience in autonomous systems as \emph{the ability to adapt to, and possibly recover from, disruptions}. In particular, we are interested in dealing with disturbances so extreme that the original control problem becomes ineffective or infeasible. Where robust control would declare failure, resilient control attempts to remain operational by modifying the underlying control problem, reverting to an alternative trajectory that violates requirements in a controlled manner. In practice, this means that when a resilient system suffers a disastrous shock that jeopardizes its ability to solve its original task, it will adapt and modify its requirements in an attempt to at least partially salvage its mission. Resilience is therefore not a replacement for robustness, which may be the only sensible course of action for critical requirements, but a complementary set of behaviors that a control system can display. \subsection{Resilient optimal control} \label{S:resilient_control} To operationalize the above definition of a resilient dynamical system, we must endow the optimal control problem~\eqref{P:generic} with the ability to modify its requirements depending on the disruption suffered by the system. A natural way to do so is by associating a disturbance-dependent relaxation~$s_i: \ensuremath{\mathcal{K}} \to \bbR_+$, $s_i \in L_2$, to the $i$-th~requirement as in \begin{prob}[\textup{P-RE}]\label{P:parametrized} P^\star_\text{Re}(\ensuremath{\bm{s}}) = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&\ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \text{,} \quad \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{,} \end{prob} where the vector-valued function~$\ensuremath{\bm{s}}$ collects the slacks~$s_i$. Depending on~$\ensuremath{\mathcal{K}}$, \eqref{P:parametrized} may have a finite or infinite number of constraints. The latter case can be tackled using semi-infinite programming algorithms~\cite{Reemtsen98s, Bonnans00p}. The violations~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}})$ in~\eqref{P:parametrized} determine how the underlying control problem is modified to adapt to the operational conditions~$\ensuremath{\bm{\xi}}$. In~\eqref{P:lqr}, for instance, it could correspond to relaxing the state constraints and allowing the system to visit higher risk regions of the state space. If damage to the actuators renders the original control problem infeasible, this may be the only course of action to remain operational. Observe that for~$\ensuremath{\bm{s}} \equiv \ensuremath{\bm{0}}$, \eqref{P:parametrized} solves the worst-case robust control problem~\eqref{P:robust} for~$\delta = 0$. Indeed, if~$\ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{0}}$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$, then~$\Pr[\ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}}] = 1$. This formulation is often found in settings where controllers must abide by requirements under specific contingencies, such as security-constrained power allocation~\cite{Capitanescu11s}.
In the case of resilience, however, the goal is not to obtain solutions for vanishing slacks, but to adjust~$\ensuremath{\bm{s}}$ to allow constraint violations for disruptions under which the requirements become too stringent for a robust controller to satisfy. Hence, we are typically interested in solving~\eqref{P:parametrized} with~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \succ \ensuremath{\bm{0}}$ for some, if not all, disruptions~$\ensuremath{\bm{\xi}}$. For any predetermined~$\ensuremath{\bm{s}}$, \eqref{P:parametrized} is a smooth convex problem that can be solved using any of a myriad of existing methods~\cite{Bertsekas15c}. Yet, designing~$\ensuremath{\bm{s}}$, which ultimately determines the resilient behavior of the controller, can be quite challenging. Even for a moderate number of contingencies~(cardinality of~$\ensuremath{\mathcal{K}}$), finding the right requirement to violate and determining by how much to do so for each state of the world is intricate. This problem is only exacerbated as the number of requirements and/or contingencies grows. In Section~\ref{S:resilience_theory}, we propose a principled approach to designing resilient behavior based on trading off the control performance~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ and a measure of violation. Before proceeding, however, we derive the dual problem of~\eqref{P:parametrized} and introduce the results from duality theory needed in the remainder of the paper. \subsection{Dual resilient control} \label{S:duality} Start by associating the dual variable~$\lambda_i \in L_2^+$ with the~$i$-th requirement, where~$L_2^+ = \{\lambda \in L_2 \mid \lambda \geq 0 \text{ a.e.}\}$. Depending on~$\ensuremath{\mathcal{K}}$, $\lambda_i$ may be a function or reduce to a finite- or infinite-dimensional vector. For conciseness, we collect the~$\lambda_i$ in a vector-valued function~$\ensuremath{\bm{\lambda}}$ taking values in~$\bbR_+^m$. Then, define the Lagrangian of~\eqref{P:parametrized} as \begin{equation}\label{E:lagrangian} \begin{aligned} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}, \ensuremath{\bm{s}}) &= J(\ensuremath{\bm{z}}) + \int_\ensuremath{\mathcal{K}} \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})^T \big[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] d\ensuremath{\bm{\xi}} \text{.} \end{aligned} \end{equation} From the Lagrangian~\eqref{E:lagrangian}, we obtain the dual problem \begin{prob}[\textup{D-RE}]\label{P:dual_parametrized} D^\star_\text{Re}(\ensuremath{\bm{s}}) = \max_{[\ensuremath{\bm{\lambda}}]_i \in L_2^+} \min_{\ensuremath{\bm{z}} \in \bbR^p}\ \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}, \ensuremath{\bm{s}}) \text{.} \end{prob} Under mild conditions, $D^\star_\text{Re}(\ensuremath{\bm{s}})$ attains~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ and solving~\eqref{P:dual_parametrized} becomes equivalent to solving~\eqref{P:parametrized}. This fact together with the convexity of~\eqref{P:parametrized} imply that the well-known KKT necessary conditions are also sufficient. In these cases, we obtain a direct relation between the solutions of~\eqref{P:dual_parametrized} and the sensitivity of~$P^\star_\text{Re}$ with respect to~$\ensuremath{\bm{s}}$. These facts are formalized in Propositions~\ref{T:zdg} and~\ref{T:diff_P}.
\begin{assumption}\label{A:slater} There exists~$\ensuremath{\bm{{\bar{z}}}}$ such that~$\ensuremath{\bm{g}}(\ensuremath{\bm{{\bar{z}}}},\ensuremath{\bm{\xi}}) < \ensuremath{\bm{0}}$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$. \end{assumption} \begin{proposition}[{\cite[Prop.~5.3.4]{Bertsekas09c}}]\label{T:zdg} Under Assumptions~\ref{A:convexity} and~\ref{A:slater}, strong duality holds for~\eqref{P:parametrized}, i.e., $P^\star_\text{Re}(\ensuremath{\bm{s}}) = D^\star_\text{Re}(\ensuremath{\bm{s}})$. Moreover, \begin{enumerate}[(i)] \item if~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}})$ is a solution of~\eqref{P:dual_parametrized}, then~$\ensuremath{\bm{z}}_\text{Re}^\star(\ensuremath{\bm{s}}) = \argmin_{\ensuremath{\bm{z}} \in \bbR^p}\ \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}), \ensuremath{\bm{s}})$ is a solution of~\eqref{P:parametrized}; \item if~$\ensuremath{\bm{z}}^\prime$ is a feasible point of~\eqref{P:parametrized} and~$[\ensuremath{\bm{\lambda}}^\prime]_i \in L_2^+$, then~$\ensuremath{\bm{z}}^\prime$ is the solution of~\eqref{P:parametrized} and~$\ensuremath{\bm{\lambda}}^\prime$ is a solution of~\eqref{P:dual_parametrized} if and only if \begin{subequations}\label{E:kkt} \begin{align} \nabla \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}^\prime,\ensuremath{\bm{\lambda}}^\prime,\ensuremath{\bm{s}}) &= \ensuremath{\bm{0}} \label{E:kkt_grad} \\ [\ensuremath{\bm{\lambda}}^\prime(\ensuremath{\bm{\xi}})]_i \left[ g_i(\ensuremath{\bm{z}}^\prime,\ensuremath{\bm{\xi}}) - s_i(\ensuremath{\bm{\xi}}) \right] &= 0 \text{, for all } \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \label{E:kkt_slackness} \text{.} \end{align} \end{subequations} \end{enumerate} \end{proposition} \begin{proposition}\label{T:diff_P} Let~$\ensuremath{\bm{\lambda}}^\star$ be a solution of~\eqref{P:dual_parametrized}. Under Assumptions~\ref{A:convexity} and~\ref{A:slater}, it holds that~$\nabla_{\ensuremath{\bm{s}}} P^\star_\text{Re}(\ensuremath{\bm{s}}) \big\vert_{\ensuremath{\bm{\xi}}} = -\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{\xi}})$. \end{proposition} \begin{proof} This is a direct consequence of~\cite[Thm.~3.2]{Shapiro95d}. The only non-trivial condition is that the solution set of~\eqref{P:parametrized} is \emph{inf-compact}. This stems from the fact that the~$g_i$ are radially unbounded and continuous, in which case the feasible set of~\eqref{P:parametrized} is respectively bounded and closed. \end{proof} Having established these duality results, we now introduce a method to design resilient behavior based on compromising between control performance and requirement violations. \section{RESILIENCE BY COMPROMISE} \label{S:resilience_theory} While straightforward and tractable, the resilient optimal control problem~\eqref{P:parametrized} can lead to a multitude of behaviors, not all of them useful, depending on the choice of slacks. In this section, we take a compromise approach to designing resilient behavior by balancing the control performance~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ resulting from the violations~$\ensuremath{\bm{s}}$ and a measure of the magnitude of these violations. The rationale behind this compromise is that even after adapting to a disruption, the behavior of the resilient system should remain similar to that of the undisturbed one in at least some aspects. If the specifications of the original problem must be completely replaced, then the original problem was most likely ill-posed to begin with.
Still, regardless of the disruption caused by~$\ensuremath{\bm{\xi}}$, increasing violations always improves the control performance. Indeed, $P^\star_\text{Re}$ is a non-increasing function of~$\ensuremath{\bm{s}}$ in the sense that since the feasible set of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}$ is contained in that of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}^\prime \succeq \ensuremath{\bm{s}}$, it immediately holds that~$P^\star_\text{Re}(\ensuremath{\bm{s}}^\prime) \leq P^\star_\text{Re}(\ensuremath{\bm{s}})$. Hence, all resilient systems must strike a balance between violating requirements to remain operational~(or improve their performance) and staying close to the original specifications. This balance is naturally mediated by the likelihood of the violation occurring, i.e., by the probability of the operating conditions~$\ensuremath{\bm{\xi}}$, in the sense that larger deviations from the original problem are allowed for less likely disruptions. Explicitly, associate to each relaxation~$\ensuremath{\bm{s}}$ a scalar violation cost~$h(\ensuremath{\bm{s}})$. Then, the specification~$\ensuremath{\bm{s}}^\star$ is compromise-resilient if any further requirement violations would improve performance~(reduce control cost) as much as they would increase the violation cost, i.e., \begin{equation}\label{E:resilient} \nabla P^\star_\text{Re}(\ensuremath{\bm{s}}) \big\vert_{\ensuremath{\bm{s}}^\star,\, \ensuremath{\bm{\xi}}} = -\nabla h(\ensuremath{\bm{s}}^\star(\ensuremath{\bm{\xi}})) f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}}) \text{,} \end{equation} where~$\nabla h$ is the gradient of~$h$. Without loss of generality, we assume~$h(\ensuremath{\bm{0}}) = 0$. The existence of the derivative of the optimal value function~$P^\star_\text{Re}$ follows from Proposition~\ref{T:diff_P}. \begin{assumption}\label{A:h} The cost~$h$ is a twice differentiable, strongly convex function. \end{assumption} Observe that~$\ensuremath{\bm{s}}^\star$ need not vanish even if~\eqref{P:parametrized} is feasible for~$\ensuremath{\bm{s}} \equiv \ensuremath{\bm{0}}$. Hence, contrary to robustness from~\eqref{P:robust}, a compromise-resilient system may violate the original requirements even for mild disturbances that would not, in principle, warrant it. Nevertheless, whenever it does, it does so in a controlled and parsimonious manner. Though obtaining a solution of~\eqref{P:parametrized} under the resilient equilibrium~\eqref{E:resilient} may appear challenging, it is in fact straightforward since it is equivalent to a convex optimization problem~(Section~\ref{S:equivalent_problem}). Hence, the balance~\eqref{E:resilient} induces relaxations that explicitly minimize the expected violation cost. Still, this does not characterize the resilient behavior resulting from~\eqref{E:resilient}. We therefore proceed to quantify the effect of the operational conditions~$\ensuremath{\bm{\xi}}$ on resilient behavior~$\ensuremath{\bm{s}}$, showing that resilience identifies and relaxes requirements that are harder to satisfy under each disruption. To conclude, we construct a cost such that the resilience-by-compromise solution from~\eqref{E:resilient} is also a solution of the robust control problem~\eqref{P:robust}. Hence, resilience and robustness effectively optimize different objectives and may, in many applications, both be desired properties.
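Before proceeding, note that the convex program alluded to above is easy to sketch. The following is a minimal illustration for a finite contingency set with probabilities~$p_k$, a quadratic control cost, affine requirements, and a quadratic violation cost; all of these modeling choices are illustrative assumptions rather than constructions from the text:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def resilient_by_compromise(G, b_of_xi, xi_list, p, Q, Gamma):
    # min_{z,s}  z' Q z + sum_k p_k s_k' Gamma s_k
    #   s.t.     G z + b(xi_k) <= s_k,  s_k >= 0  for each contingency xi_k
    n, m, K = Q.shape[0], G.shape[0], len(xi_list)
    z = cp.Variable(n)
    s = cp.Variable((m, K), nonneg=True)  # one slack vector per contingency
    cost = cp.quad_form(z, Q)
    constr = []
    for k, xi in enumerate(xi_list):
        constr.append(G @ z + b_of_xi(xi) <= s[:, k])
        cost += p[k] * cp.quad_form(s[:, k], Gamma)  # E[h(s(Xi))]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    duals = [c.dual_value for c in constr]  # estimates of lambda*(xi_k)
    return z.value, s.value, duals
\end{verbatim}
The returned dual variables tie the computed slacks back to the sensitivity interpretation developed below: harder requirements carry larger multipliers and are relaxed more.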
\subsection{Inverse optimality of resilience by compromise} \label{S:equivalent_problem} Consider the optimization problem \begin{prob}\label{P:resilient} P^\star_\text{Re} = \min_{\substack{\ensuremath{\bm{z}} \in \bbR^p\\ s_i \in L_2^+}}& &&J(\ensuremath{\bm{z}}) + \E\left[ h\big( \ensuremath{\bm{s}}(\ensuremath{\bm{\Xi}}) \big) \right] \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq s_i(\ensuremath{\bm{\xi}}) \text{,} \quad \text{for all } \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{,} \\ &&&\qquad i = 1,\dots,m \text{,} \end{prob} where the expectation is taken with respect to the distribution of the random variable~$\ensuremath{\bm{\Xi}}$. The solution of~\eqref{P:resilient} is the same as that of the modified problem~\eqref{P:parametrized} with slacks satisfying the resilient equilibrium~\eqref{E:resilient}. \begin{proposition}\label{T:equivalent_problem} Let~$(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star)$ be the solution of~\eqref{P:resilient}. Then, $P^\star_\text{Re} = P^\star_\text{Re}(\ensuremath{\bm{s}}^\star)$ and~$\ensuremath{\bm{s}}^\star$ are the unique slacks that satisfy the equilibrium~\eqref{E:resilient}. \end{proposition} \begin{proof} To show~\eqref{P:resilient} is equivalent to solving~\eqref{P:parametrized} subject to the compromise~\eqref{E:resilient}, we leverage the fact that the KKT conditions in Proposition~\ref{T:zdg}(ii) are necessary and sufficient for convex programs under Assumption~\ref{A:slater}. Start by defining the Lagrangian of~\eqref{P:resilient} as \begin{equation}\label{E:lagrangian_resilient} \begin{aligned} \ensuremath{\mathcal{L}}^\prime((\ensuremath{\bm{z}},\ensuremath{\bm{s}}), \ensuremath{\bm{\mu}}) &= J(\ensuremath{\bm{z}}) + \E\left[ h\big( \ensuremath{\bm{s}}(\ensuremath{\bm{\Xi}}) \big) \right] \\ {}&+ \int_\ensuremath{\mathcal{K}} \ensuremath{\bm{\mu}}(\ensuremath{\bm{\xi}})^T \big[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] d\ensuremath{\bm{\xi}} \text{,} \end{aligned} \end{equation} where we write~$(\ensuremath{\bm{z}},\ensuremath{\bm{s}})$ to emphasize that they are both primal variables of~\eqref{P:resilient} as opposed to~\eqref{P:parametrized} in which~$\ensuremath{\bm{z}}$ is an optimization variable and~$\ensuremath{\bm{s}}$ is a parameter. From Proposition~\ref{T:zdg}(ii), if~$(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star)$ is a solution of~\eqref{P:resilient}, then there exists~$\ensuremath{\bm{\mu}}^\star$ such that~$\nabla \ensuremath{\mathcal{L}}^\prime((\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star), \ensuremath{\bm{\mu}}^\star) = \ensuremath{\bm{0}}$ and~$[\ensuremath{\bm{\mu}}^\star(\ensuremath{\bm{\xi}})]_i \left[ g_i(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{\xi}}) - s_i^\star(\ensuremath{\bm{\xi}}) \right] = 0$, for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$.
Separating the components of the gradient of~\eqref{E:lagrangian_resilient} with respect to~$\ensuremath{\bm{z}}$ and~$\ensuremath{\bm{s}}$, the KKT conditions become \begin{equation}\label{E:kkt_resilient} \nabla_{\ensuremath{\bm{z}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}_\text{Re}^\star, \ensuremath{\bm{\mu}}^\star, \ensuremath{\bm{s}}^\star) = \ensuremath{\bm{0}} \text{ and } \nabla h \big( \ensuremath{\bm{s}}^\star(\ensuremath{\bm{\xi}}) \big) - \ensuremath{\bm{\mu}}^\star(\ensuremath{\bm{\xi}}) = \ensuremath{\bm{0}} \text{,} \end{equation} where~$\ensuremath{\mathcal{L}}$ is the Lagrangian~\eqref{E:lagrangian} of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}^\star$. The first equation in~\eqref{E:kkt_resilient} shows that~$\ensuremath{\bm{z}}_\text{Re}^\star$ is also a solution of~\eqref{P:parametrized} for the slacks~$\ensuremath{\bm{s}}^\star$. Using Proposition~\ref{T:diff_P}, the second equation shows that~$\ensuremath{\bm{s}}^\star$ satisfies the equilibrium~\eqref{E:resilient}. The reverse relation holds directly, since the KKT conditions of both problems are actually identical. \end{proof} Proposition~\ref{T:equivalent_problem} shows that under the resilience equilibrium~\eqref{E:resilient}, \eqref{P:parametrized} optimizes both the control performance function~$J$ and the expected requirement violation cost. In other words, though the resilient formulation may violate the requirements for most states of the world, it does so in a parsimonious manner. It is worth noting that relaxing constraints as in~\eqref{P:resilient} is common in convex programming and is used, for instance, in phase~1 solvers for interior-point methods~\cite{Bertsekas15c}. The goal in~\eqref{P:resilient}, however, is notably different. Indeed, resilience does not seek a solution~$\ensuremath{\bm{z}}^\dagger$ for which the slacks~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}})$ vanish for all~$\ensuremath{\bm{\xi}}$. Its aim is to adapt to situations in which disruptions are so extreme that only by modifying the underlying control problem is it possible to remain operational. Hence, it seeks~$\ensuremath{\bm{s}} \succ \ensuremath{\bm{0}}$ for some, if not all, disruptions~$\ensuremath{\bm{\xi}}$. Another consequence of Proposition~\ref{T:equivalent_problem} is that the compromise-resilient control problem~\eqref{P:parametrized}--\eqref{E:resilient} has a straightforward solution since it is equivalent to a convex optimization program, namely~\eqref{P:resilient}. Nevertheless, it turns out that a more efficient algorithm can be obtained by understanding how resilience violates the requirements to respond to disruptions. That is the topic of the next section. \subsection{Quantifying the effect of disturbances} \label{S:counterfactual} Proposition~\ref{T:equivalent_problem} shows that resilient control minimizes the problem modifications through the cost~$h$. In contrast, the following proposition explicitly describes the effect of a disturbance~$\ensuremath{\bm{\xi}}$ on the violations~$\ensuremath{\bm{s}}$. \begin{proposition}\label{T:slacks} Let~$\ensuremath{\bm{z}}_\text{Re}^\star(\ensuremath{\bm{s}}^\star)$ be the solution of~\eqref{P:parametrized} for the resilient slacks~$\ensuremath{\bm{s}}^\star$ from~\eqref{E:resilient} and~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)$ be the solution of its dual problem~\eqref{P:dual_parametrized}.
Then, \begin{equation}\label{E:slacks} \ensuremath{\bm{s}}^\star = \left( \nabla h \right)^{-1} \left[ \frac{\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)}{f_{\ensuremath{\bm{\Xi}}}} \right] \text{.} \end{equation} \vspace{-4pt} \end{proposition} \begin{proof} The result follows by applying Proposition~\ref{T:diff_P} to the equilibrium~\eqref{E:resilient} to obtain~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star) = \nabla h(\ensuremath{\bm{s}}^\star) f_{\ensuremath{\bm{\Xi}}}$. Recall that the Jacobian of the gradient~$\nabla h$ is the Hessian~$\nabla^2 h$ and that, since~$h$ is strongly convex~(Assumption~\ref{A:h}), it holds that~$\nabla^2 h \succ 0$. Hence, the inverse of the gradient exists by the inverse function theorem, yielding~\eqref{E:slacks}. \end{proof} Proposition~\ref{T:slacks} establishes a fixed-point relation between the resilient slacks~$\ensuremath{\bm{s}}^\star$ and the optimal dual variables~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)$. This is not surprising in view of the well-known sensitivity interpretation of dual variables for convex programs. Indeed, dual variables represent how much the objective stands to change if a constraint were relaxed or tightened. Given the monotone increasing nature of~$\nabla h$~(due to the strong convexity of~$h$, Assumption~\ref{A:h}), it is clear from~\eqref{E:slacks} that the resilient formulation identifies and relaxes constraints that are harder to satisfy. Hence, if a disruption~$\ensuremath{\bm{\xi}}$ makes it difficult for the resilient system to meet a requirement, it will modify that requirement according to its difficulty. This change is mediated by the variation in the resilience cost~$h$ and the likelihood of the disruption~$f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}})$, which determine the amount by which the requirement is relaxed. The choice of~$h$ therefore plays an important role in the resulting resilient behavior. For instance, if the violation cost is linear, i.e., $h(\ensuremath{\bm{s}}) = \ensuremath{\bm{\gamma}}^T \ensuremath{\bm{s}}$, $\ensuremath{\bm{\gamma}} \in \bbR_+^m$, the equilibrium~\eqref{E:resilient} occurs for~$[\ensuremath{\bm{s}}^\star]_i = [\ensuremath{\bm{\gamma}}]_i^{-1}$. Hence, the violations are independent of the disruptions and the solution is the same as if~\eqref{P:parametrized} were solved for predetermined slacks. A more interesting phenomenon occurs for quadratic cost structures, e.g., $h(\ensuremath{\bm{s}}) = \tfrac{1}{2} \ensuremath{\bm{s}}^T \ensuremath{\bm{\Gamma}} \ensuremath{\bm{s}}$, for symmetric~$\ensuremath{\bm{\Gamma}} \succ 0$. Then, the violations are proportional to the dual variables as in~$\ensuremath{\bm{s}}^\star = \ensuremath{\bm{\Gamma}}^{-1} \ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star) / f_{\ensuremath{\bm{\Xi}}}$. In this case, the resilient violations are proportional to the requirement difficulty and inversely proportional to the likelihood of the disruption. Given this wide range of resilient behaviors, a question that arises is how they relate to those induced by the robust formulation. We explore this question in the sequel by relating the resilient control problem~\eqref{P:resilient} to its robust counterpart~\eqref{P:robust}. \subsection{Resilience vs.\ robustness} \label{S:resilient_vs_robust} On the surface, the robust~\eqref{P:robust} and resilient~\eqref{P:resilient} control problems are strikingly different.
And in fact, it is clear from the discussion in the previous section that depending on the choice of~$h$, their behaviors can be quite dissimilar. Yet, it turns out that~\eqref{P:robust} and~\eqref{P:resilient} are equivalent under mild conditions for an appropriate choice of~$h$, as shown in the following proposition. \begin{proposition}\label{T:resilient_v_robust} Let~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ be a solution of~\eqref{P:resilient} with~$h_\text{Ro}(\ensuremath{\bm{s}}) = -\gamma \prod_{i=1}^m \big( 1 - \mathbb{H}(s_i) \big)$, where~$\mathbb{H}$ is the Heaviside function, i.e., $\mathbb{H}(x) = 1$ if~$x > 0$ and zero otherwise. For each~$\gamma \geq 0$ there exists a~$\delta^\dagger \in [0,1]$ such that~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ is a solution of~\eqref{P:robust} with probability of failure~$\delta^\dagger$. \end{proposition} \begin{proof} Fix~$\gamma$ in the violation cost~$h_\text{Ro}$ defined in the hypothesis and let~$(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{s}}_\text{Re}^\dagger)$ be a solution pair of the resilience-by-compromise problem~\eqref{P:resilient} and~$\ensuremath{\bm{z}}_\text{Ro}^\star$ be a solution of the robust~\eqref{P:robust} with~$1-\delta^\dagger = \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right]$. Immediately, the value of~\eqref{P:resilient} is achieved for~$\ensuremath{\bm{z}} = \ensuremath{\bm{z}}_\text{Re}^\dagger$ and~$\ensuremath{\bm{s}} = \ensuremath{\bm{s}}_\text{Re}^\dagger$. What is more, note that the pair~$(\ensuremath{\bm{z}},\ensuremath{\bm{s}}) = (\ensuremath{\bm{z}}_\text{Ro}^\star, [\ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Ro}^\star,\cdot)]_+)$, where~$[\cdot]_+$ denotes the entrywise positive part, is trivially feasible for~\eqref{P:resilient} and, since~$\mathbb{H}([x]_+) = \mathbb{H}(x)$, can therefore be used to upper bound its value as in \begin{equation}\label{E:rvr1} \begin{aligned} P_\text{Re}^\star &= J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \E\left[ \prod_{i = 1}^m \left( 1-\mathbb{H} \left( \big[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \big]_i \right) \right) \right] \\ &\leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma \E\left[ \prod_{i = 1}^m \left( 1-\mathbb{H} \left( g_i(\ensuremath{\bm{z}}_\text{Ro}^\star, \ensuremath{\bm{\Xi}}) \right) \right) \right] \text{.} \end{aligned} \end{equation} Due to the form of~$\mathbb{H}$, the expectations in~\eqref{E:rvr1} reduce to probabilities.
We then obtain \begin{multline}\label{E:rvr2} J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \\ {}\leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Ro}^\star, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \text{.} \end{multline} Since~$\ensuremath{\bm{z}}_\text{Ro}^\star$ is a solution of~\eqref{P:robust} with probability of failure~$\delta^\dagger$, \eqref{E:rvr2} becomes \begin{equation}\label{E:rvr3} J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma (1-\delta^\dagger) \text{.} \end{equation} To conclude, recall from~\eqref{P:resilient} that~$\ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\xi}})$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$, which by monotonicity of the Lebesgue integral implies that \begin{equation*} \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \leq \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] = 1-\delta^\dagger \text{.} \end{equation*} Hence, we obtain from~\eqref{E:rvr3} that~$J(\ensuremath{\bm{z}}_\text{Re}^\dagger) \leq P_\text{Ro}^\star$. Since~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ is a feasible point of~\eqref{P:robust} with probability of failure~$\delta^\dagger$ by design and its control performance achieves the optimal value~$P_\text{Ro}^\star$, it must be a solution of~\eqref{P:robust}. \end{proof} Proposition~\ref{T:resilient_v_robust} gives conditions on the violation cost~$h$ such that a resilience-by-compromise controller behaves as a robust one. In particular, it states that there exists a fixed, strict violation cost, i.e., one that charges a fixed price only if some requirement is violated, such that resilience by compromise reduces to robustness. This cost essentially determines the level of control performance~$J$ above which the controller chooses to pay~$\gamma$ to give up on satisfying the requirements altogether. Notice that Proposition~\ref{T:resilient_v_robust} holds even though the resulting problem is not convex. In that sense, resilience can be thought of as a soft version of robustness: whereas the violation magnitude matters for the former, only whether the requirement is violated impacts the latter. For certain critical requirements, this all-or-nothing behavior may be the only acceptable one. In these cases, constraints should be treated as robust with appropriate satisfaction levels. Other engineering requirements, however, are nominal in nature and can be relaxed as long as violations are small and short-lived. Treating these constraints as resilient enables the system to continue operating under disruptions while remaining robust with respect to critical specifications. For instance, if a set of essential requirements needs a level of satisfaction so high that the control problem becomes infeasible, nominal constraints can be adapted to recover a useful level of operation. By leveraging Proposition~\ref{T:resilient_v_robust}, this can be achieved by posing a control problem that is both robust and resilient.
To do so, let~$\ensuremath{\mathcal{S}} \subseteq [m]$ be the set of soft~(nominal) requirements, i.e., those that can withstand relaxation, and~$\ensuremath{\mathcal{H}} \subseteq [m]$ be the set of hard~(critical) requirements, i.e., those that cannot be violated under any circumstances. Naturally, $\ensuremath{\mathcal{S}} \cap \ensuremath{\mathcal{H}} = \emptyset$ and~$\ensuremath{\mathcal{S}} \cup \ensuremath{\mathcal{H}} = [m]$. We can then combine~\eqref{P:robust} and~\eqref{P:resilient} into a single problem, namely \begin{prob}\label{P:complete} \minimize_{\substack{\ensuremath{\bm{z}} \in \bbR^p,\\s_i \in L_2^+}}& &&J(\ensuremath{\bm{z}}) + \E\left[ \sum_{i \in \ensuremath{\mathcal{S}}} h_i\big( s_i(\ensuremath{\bm{\Xi}}) \big) + \sum_{i \in \ensuremath{\mathcal{H}}} h_\text{Ro}\big( s_i(\ensuremath{\bm{\Xi}}) \big)\right] \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq s_i(\ensuremath{\bm{\xi}}) \text{,} \ \ \forall \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{, } i = 1,\dots,m \text{.} \end{prob} While~\eqref{P:complete} provides a complete approach to designing robust/resilient systems, it is worth noting that it is not a convex optimization problem. What is more, the non-smooth nature of~$\mathbb{H}$ poses a definite challenge to even approximating its solution. Enabling the solution of this general problem is therefore beyond the scope of this paper. Nevertheless, we describe in the sequel an efficient algorithm to tackle resilience-by-compromise by directly solving~\eqref{P:parametrized} for the resilient equilibrium~\eqref{E:resilient}. \section{A MODIFIED ARROW-HURWICZ ALGORITHM} \label{S:algorithm} \begin{figure}[tb] \centering \includesvg[width=\columnwidth]{fig1} \vspace{-14pt} \caption{Robust and resilient solution to the shepherd problem: (a)~Shepherd plans; (b)~distribution of maximum distance between shepherd and sheep.} \label{F:shepherd} \vspace{-10pt} \end{figure} \begin{figure*} \centering \includesvg{fig2} \vspace{-14pt} \caption{Robust and resilient controllers for the quadrotor navigation problem. The radii of the markers are proportional to the actuation strength.} \label{F:navigation} \vspace{-10pt} \end{figure*} In view of Proposition~\ref{T:equivalent_problem}, solving the resilient control problem~\eqref{P:parametrized} subject to the equilibrium~\eqref{E:resilient} reduces to obtaining a solution of~\eqref{P:resilient}. Given its smooth, convex nature, this can be done using any of a myriad of methods~\cite{Bertsekas15c}. One approach that is particularly promising is to use a modified primal-dual algorithm that takes into account the results in Proposition~\ref{T:slacks}. Explicitly, consider the classical Arrow-Hurwicz algorithm for solving~\eqref{P:parametrized}~\cite{Arrow58s}. This method seeks points that satisfy the KKT conditions~[Proposition~\ref{T:zdg}(ii)] by updating the primal and dual variables using gradients of the Lagrangian~\eqref{E:lagrangian}.
Explicitly, $\ensuremath{\bm{z}}$ is updated by \emph{descending} along the negative gradient of the Lagrangian, i.e., \begin{subequations}\label{E:arrow} \begin{equation}\label{E:primal_arrow} \begin{aligned} \dot{\ensuremath{\bm{z}}} &= -\nabla_{\ensuremath{\bm{z}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}}) \\ {}&= - \nabla_{\ensuremath{\bm{z}}} J(\ensuremath{\bm{z}}) - \int_{\ensuremath{\mathcal{K}}} \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})^T \nabla_{\ensuremath{\bm{z}}} \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) d\ensuremath{\bm{\xi}} \text{,} \end{aligned} \end{equation} and the dual variables~$\ensuremath{\bm{\lambda}}$ are updated by \emph{ascending} along the gradient of the Lagrangian~$\nabla_{\ensuremath{\bm{\lambda}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}})$ using the projected dynamics \begin{equation}\label{E:dual_arrow} \begin{aligned} \dot{\ensuremath{\bm{\lambda}}}(\ensuremath{\bm{\xi}}) &= \Pi_+\big[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}), \nabla_{\ensuremath{\bm{\lambda}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}}) \vert_{\ensuremath{\bm{\xi}}} \big] \\ {}&= \Pi_+\big[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}),\, \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] \text{.} \end{aligned} \end{equation} \end{subequations} The projection~$\Pi_+$ is introduced to ensure that the Lagrange multipliers remain non-negative and is defined as \begin{equation}\label{E:projection} \Pi_+(\ensuremath{\bm{x}},\, \ensuremath{\bm{v}}) = \lim_{a \to 0} \frac{[\ensuremath{\bm{x}} + a \ensuremath{\bm{v}}]_+ - \ensuremath{\bm{x}}}{a} \text{,} \end{equation} where~$[\ensuremath{\bm{x}}]_+ = \argmin_{\ensuremath{\bm{y}} \in \bbR_+^m} \norm{\ensuremath{\bm{y}} - \ensuremath{\bm{x}}}$ is the projection onto the non-negative orthant~\cite{Nagurney12p}. The main drawback of~\eqref{E:arrow} is that it solves~\eqref{P:parametrized} for a fixed slack~$\ensuremath{\bm{s}}$, while the desired compromise~$\ensuremath{\bm{s}}^\star$ in~\eqref{E:resilient} is not known \emph{a priori}. To overcome this limitation, we can use Proposition~\ref{T:slacks} and replace~\eqref{E:dual_arrow} by \begin{equation}\label{E:alt_dual_arrow} \dot{\ensuremath{\bm{\lambda}}}(\ensuremath{\bm{\xi}}) = \Pi_+\left[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}),\, \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) - \left( \nabla h \right)^{-1}\left( \frac{\ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})}{f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}})} \right) \right] \text{.} \end{equation} The dynamics~\eqref{E:primal_arrow}--\eqref{E:alt_dual_arrow} can be shown to converge to a point that satisfies the KKT conditions in Proposition~\ref{T:zdg}(ii) as well as the equilibrium~\eqref{E:resilient} using an argument similar to~\cite{Cherukuri16a} that relies on classical results on projected dynamical systems~\cite[Thm.~2.5]{Nagurney12p} and the invariance principle for Carath\'{e}odory systems~\cite[Prop.~3]{Bacciotti06n}.
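To make these dynamics concrete, the following Python sketch runs a simple Euler discretization of~\eqref{E:primal_arrow} and~\eqref{E:alt_dual_arrow} on a toy instance. All data here (the scalar decision variable, the single requirement, the uniform density, and the quadratic cost~$h$) are illustrative assumptions, not the systems of Section~\ref{S:sims}.
\begin{verbatim}
import numpy as np

# Euler discretization of the modified Arrow-Hurwicz dynamics on a toy
# instance: scalar decision z, requirement g(z, xi) = xi - z <= s(xi),
# J(z) = z^2 / 2, and quadratic violation cost h(s) = (gamma / 2) s^2,
# for which (grad h)^{-1}(y) = y / gamma.
xi = np.linspace(0.0, 2.0, 50)    # grid over the disruption set K = [0, 2]
f_xi = np.full_like(xi, 0.5)      # uniform density f_Xi on [0, 2]
w = xi[1] - xi[0]                 # quadrature weight for integrals over K
gamma, eta = 10.0, 1e-2           # curvature of h and step size

z, lam = 0.0, np.zeros_like(xi)   # primal and dual initial conditions
for _ in range(50000):
    # primal descent: zdot = -grad J(z) - int lam(xi) grad_z g(z, xi) dxi
    z -= eta * (z - w * lam.sum())
    # dual ascent with the slack eliminated via the fixed-point relation:
    # s(xi) = (grad h)^{-1}(lam(xi) / f_Xi(xi)) = lam(xi) / (gamma f_xi)
    lam = np.maximum(lam + eta * ((xi - z) - lam / (gamma * f_xi)), 0.0)

s_star = lam / (gamma * f_xi)     # recovered resilient slacks
print(z, s_star.max())
\end{verbatim}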
These dynamics thus simultaneously solve three problems by obtaining (i)~requirement violations~$\ensuremath{\bm{s}}^\star$ that satisfy~\eqref{E:resilient}, (ii)~the solution~$\ensuremath{\bm{z}}^\star(\ensuremath{\bm{s}}^\star)$ of~\eqref{P:parametrized} for the violations~$\ensuremath{\bm{s}}^\star$, and~(iii)~dual variables~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)$ that solve~\eqref{P:dual_parametrized} for~$\ensuremath{\bm{s}}^\star$. Due to space constraints, details of this proof are left for a future version of this work. \section{NUMERICAL EXPERIMENTS} \label{S:sims} In this section, we illustrate the use of resilient optimal control in two applications: \emph{the shepherd problem}, in which we plan a configuration in order to surveil targets~(Section~\ref{S:shepherd}), and \emph{navigation in partially known environments}, in which a quadrotor must follow waypoints to a target that is behind an obstruction of unknown mass~(Section~\ref{S:navigation}). We also illustrate an online extension of our resilience framework in which a quadrotor adapts to wind gusts~(Section~\ref{S:online}). Due to space constraints, we only provide brief problem descriptions in the sequel. Details can be found in~\cite{extended}. \subsection{The shepherd problem} \label{S:shepherd} We begin by illustrating the differences between robustness and resilience in a static surveillance planning problem. Suppose an agent~(\emph{the shepherd}) must position itself to supervise a set of targets~(\emph{the sheep}). Without prior knowledge of their position, the shepherd assumes the sheep are distributed uniformly at random within a perimeter of radius~$R$. The surveillance radius~$r$ of the shepherd is enough to cover only~$90\%$ of that area. The shepherd also seeks to minimize its displacement from its home situated at~$\ensuremath{\bm{x}}^o$. If we let~$\ensuremath{\bm{\Xi}}_i$ denote the position of the~$i$-th sheep and~$\ensuremath{\mathcal{K}}$ be the ball described by the radius-$R$ perimeter, the robust formulation~\eqref{P:robust} becomes \begin{prob} \minimize_{\ensuremath{\bm{x}}}& &&\norm{\ensuremath{\bm{x}} - \ensuremath{\bm{x}}^o}^2 \\ \subjectto& &&\Pr\left[ \norm{\ensuremath{\bm{x}}- \ensuremath{\bm{\Xi}}_i}^2 \leq r^2 \right] \geq 1-\delta \text{,} \quad i = 1,\dots,m \end{prob} and the resilient problem~\eqref{P:resilient} yields \begin{prob} \minimize_{\ensuremath{\bm{x}}}& &&\norm{\ensuremath{\bm{x}} - \ensuremath{\bm{x}}^o}^2 + \E \left[ \sum_{i = 1}^m s_i^2(\ensuremath{\bm{\Xi}}_i) \right] \\ \subjectto& &&\norm{\ensuremath{\bm{x}}- \ensuremath{\bm{\xi}}_i}^2 \leq r^2 + s_i(\ensuremath{\bm{\xi}}_i) \text{,} \quad \ensuremath{\bm{\xi}}_i \in \ensuremath{\mathcal{K}} \text{,} \quad i = 1,\dots,m \text{.} \end{prob} Fig.~\ref{F:shepherd} shows results for~$\delta = 0.2$. In order to meet the set probability of failure, the robust solution moves away from its home only as much as necessary, leading to a plan with lower cost than the resilient one. The resilient solution, on the other hand, is willing to pay the extra cost to move to the center of the perimeter so that when a sheep steps out of its surveillance radius, it does not go too far~(Fig.~\ref{F:shepherd}b). This example illustrates the difference between robust and resilient planning. While the robust system saves on cost by minimally meeting the specified violation probability, the resilient system takes into account the magnitude of the violations. Hence, it is willing to pay the extra cost in order to reduce future violations.
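A scenario-sampled version of the resilient shepherd problem above can be prototyped in a few lines. The sketch below, written with the \texttt{cvxpy} modeling package, replaces the semi-infinite constraint over~$\ensuremath{\mathcal{K}}$ with~$N$ sampled sheep positions; the radii, home position, and sample size are illustrative placeholders rather than the values behind Fig.~\ref{F:shepherd}.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Scenario approximation of the resilient shepherd problem: the
# constraint over K is enforced at N sheep positions sampled
# uniformly on the disk of radius R.
rng = np.random.default_rng(0)
N, R, r = 200, 1.0, 0.9
x_home = np.array([1.5, 0.0])

theta = rng.uniform(0.0, 2.0 * np.pi, N)
rad = R * np.sqrt(rng.uniform(0.0, 1.0, N))  # uniform on the disk
sheep = np.stack([rad * np.cos(theta), rad * np.sin(theta)], axis=1)

x = cp.Variable(2)               # shepherd position
s = cp.Variable(N, nonneg=True)  # one slack per sampled disruption

cost = cp.sum_squares(x - x_home) + cp.sum(cp.square(s)) / N
constraints = [cp.sum_squares(x - sheep[i]) <= r**2 + s[i]
               for i in range(N)]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(x.value, s.value.max())
\end{verbatim}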
\subsection{Waypoint navigation in a partially known environment} \label{S:navigation} A quadrotor of mass~$m$ must plan control actions to navigate the hallway shown in Fig.~\ref{F:navigation} by going close to the waypoints~(stars) at specific instants while remaining within a safe distance of the walls and limiting the maximum input thrust. Between the quadrotor and its target, however, there may exist an obstruction of \emph{a priori} unknown mass~(brown box). This box modifies the dynamics of the quadrotor in a predictable way depending on its mass, i.e., the quadrotor can push the box by applying additional thrust but the magnitude of this thrust is not known beforehand. Since it is not possible to find a set of control actions that is feasible for all obstruction masses, we set~$\delta = 0.1$ for the robust controller. On the other hand, the resilient controller is allowed to relax both thrust limits and the terminal set. Hence, it can choose between actuating harder to push the box or deem it too heavy and stop before entering the room. Notice that while the robust plan reaches the terminal set for the light obstruction, it is unable to do so in the other two cases. This is to be expected given that it was not designed to do so. The resilient controller, however, displays a smoother degradation as the weight of the obstruction increases. Notice that it chooses which requirement to violate by compromising between their satisfaction and the control objective~(LQR). While it violates the maximum thrust constraint enough to push the medium box almost into the terminal set, it deems the heavy box to not be worth the effort and relaxes the terminal set instead. This leads to a more graceful behavior degradation than the one induced by the robust controller. Moreover, observe that the resilient controller also uses additional actuation in the beginning to more quickly approach the wall and reduce the distance traveled. This is an example of the ``unnecessary yet beneficial'' requirement violations that resilient control may perform in order to improve the control performance. Naturally, if thrust requirements are imposed by hardware limitations, then the robust solution is the only practical one. \begin{figure} \centering \includesvg{fig3} \caption{Online resilient control using MPC: quadrotor under wind disruption} \label{F:online} \vspace{-12pt} \end{figure} \subsection{Online extension: adapting to wind gusts} \label{S:online} The previous examples illustrated the behavior of the resilient optimal control problem formulation introduced in this paper. Another aspect of resilience, beyond planning to mitigate disruptions, is the ability to adapt to disturbances as they occur. This can be achieved by using the resilient optimal control problem in an MPC fashion. We show the result of doing so in Fig.~\ref{F:online}. Here, a quadrotor navigates towards its target~(grey zone) by planning over a~$10$-step horizon, but executing only the first control action. During this execution, the quadrotor may be hit by an unpredictable wind gust that pushes it towards a wall~(left of the diagram). The quadrotor takes the wind gust it has suffered into account in its future plan by assuming that the wind will continue to blow at that speed. The resilient controller is allowed to modify the safety set and maximum thrust requirements. Similar behaviors to those in Fig.~\ref{F:navigation} can be observed. The resilient controller chooses to violate the thrust constraint in order to pick up speed initially.
It does so because the price of using extra actuation is compensated by the improvement in control performance~(LQR). When a gust of wind pushes the quadrotor close to the left boundary of the safety set, it again violates the actuation constraints to stay within the safe region. It does so in full knowledge that it must now overshoot the safety region on the right. Notice that the resilient behavior of the quadrotor is adaptive: as disruptions occur, the controller plans which requirements should be violated to remain operational. Without these violations, such intense wind gusts would crash the quadrotor into the wall. \section{CONCLUSION AND FUTURE WORK} We defined resilient control by embedding control problems with the ability to violate requirements and proposed a method to automatically design these violations by compromising between the control objective and a constraint violation cost. We showed that such a compromise explicitly minimizes changes to the original control problem and that, for properly selected costs, robust behaviors can be induced. These results are the first steps toward a resilient control solution capable of adapting to disruptions online. Such behavior can be achieved by combining~\eqref{P:resilient} and MPC as shown in Section~\ref{S:online}. Future work involves analyzing the stability of such solutions and leveraging system-level synthesis techniques~\cite{Anderson19s} to directly design resilient controllers. \bibliographystyle{aux_files/IEEEbib} \section{INTRODUCTION} \label{S:intro} Coping with disruptions is a core requirement for autonomous systems operating in the real world. Indeed, as these complex systems leave the controlled setting of the lab, it becomes increasingly important to enable them to safely negotiate adverse situations arising from the dynamic and fast-evolving environments in which they must operate~\cite{Gunes14a, Stankovic05o}. In the context of dynamical systems and control, this issue is often addressed through the concept of \emph{robustness}. The robust approach plans for the worst so that the resulting system can achieve its objective~(e.g., state regulation) regardless of the conditions in which it operates. Techniques such as~$\ensuremath{\mathcal{H}}_\infty$ control, tube model predictive control~(MPC), and robust system-level synthesis have been developed specifically to address this issue~\cite{Dullerud13a, Li00r, Borrelli17p, Anderson19s}. In simple terms, robust systems are ``hard to break.'' Yet, the success of robustness may also be the root of its shortcomings. It is often not viable to plan for every contingency as it would lead to over-conservative behaviors whose performance is deficient even under normal operating conditions. In extreme cases, the resulting control problem may simply be infeasible. Hence, the question is no longer how to operate under or deal with a certain level of disturbance, but what to do when things go so catastrophically wrong that the original equilibrium is no longer viable. In such cases, the only solution is to modify the system requirements, e.g., by removing unlikely contingencies or relaxing specifications, to find an alternative equilibrium. In ecology, this capacity of systems to adapt and recover from disruptions by modifying their underlying operation is known as \emph{resilience}~\cite{Holling73r, Holling96e}.
Since its introduction in the 1970s, it has been observed in a myriad of ecosystems and incorporated into fields such as psychology and dynamical/cyber-physical systems~\cite{Werner89v, Rodin14t, Rieger09r, Zhu15game, Ramachandran19r}. Contrary to stability, characterized by the persistence of a system near an equilibrium, resilience emphasizes conditions far from steady state, where instabilities can flip a system into another behavior regime~\cite{Holling96e}. In simple terms, resilient systems are ``easy to fix.'' In dynamical systems and control, robustness and resilience are often conflated. Even when resilience is described, the sought-after behaviors are often robust in the sense of the above definitions, e.g.,~\cite{Chen18r, Tzoumas17r, Guerrero17f}. Even in his seminal works, Holling discriminates between ``engineering resilience''~(robustness) and ``ecological resilience'' by distinguishing systems with a single equilibrium from those with multiple equilibria~\cite{Holling96e}. Though resilient solutions involving adaptation to disruptions have been studied, such as in~\cite{Gunes14a, Stankovic05o, Zhu15game, Ramachandran19r}, a formal, general definition of resilient control akin to its robust counterpart is still lacking. The goal of this work is to formalize resilience in the context of optimal control. We begin by introducing the general problem of constrained control under disturbances and its robust solution~(Section~\ref{S:prob}). We then formulate the resilient optimal control problem by allowing controlled constraint violations in optimal control problems~(Section~\ref{S:resilience}). To be useful, however, these violations must be appropriately designed, which cannot be done manually for any moderately sized problem. To address this issue, we put forward a framework to obtain requirement modifications by trading off control performance and violation costs. We analyze this formulation to obtain inverse optimality results and quantify the effect of disturbances on the violations. By proving that robustness and resilience optimize different objectives, we show that they are complementary properties that, in many applications, may be simultaneously required~(Section~\ref{S:resilience_theory}). We conclude by deriving a practical algorithm to solve resilient control problems~(Section~\ref{S:algorithm}) and illustrating its use~(Section~\ref{S:sims}). \section{PROBLEM FORMULATION} \label{S:prob} Let~$\ensuremath{\bm{\Xi}}$ be a random variable taking values in a compact set~$\ensuremath{\mathcal{K}} \subseteq \bbR^d$ according to some measure~$\mathfrak{p}$. We assume for simplicity that~$\mathfrak{p}$ is absolutely continuous with respect to the Lebesgue measure, so that~$\ensuremath{\bm{\Xi}}$ has a probability density function~(Radon-Nikodym derivative) denoted~$f_{\ensuremath{\bm{\Xi}}}$.
Its realizations~$\ensuremath{\bm{\xi}}$ denote states of the world that may be construed as disturbances to the normal operation of an autonomous system represented by the prototypical constrained optimal control problem \begin{prob}\label{P:generic} P^\star(\ensuremath{\bm{\Xi}}) = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\Xi}}) \leq 0 \text{,} \quad i = 1,\dots,m \text{,} \end{prob} where~$\ensuremath{\bm{z}}$ denotes the decision variable, e.g., actuation strength, $J$ is a control performance measure, and the~$g_i(\cdot,\ensuremath{\bm{\xi}})$ describe the control requirements under~$\ensuremath{\bm{\xi}}$. \begin{assumption}\label{A:convexity} The control performance~$J: \bbR^p \to \bbR$ is a strongly convex, continuously differentiable function, $g_i(\ensuremath{\bm{z}}, \cdot) \in L_2$ are~$L_i$-Lipschitz continuous with respect to the~$\ell_\infty$-norm for all~$\ensuremath{\bm{z}} \in \bbR^p$, and~$g_i(\cdot, \ensuremath{\bm{\xi}})$ are coercive~(radially unbounded), convex functions for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$. The requirement functions~$g_i$ have continuous derivatives with respect to~$\ensuremath{\bm{z}}$ and~$\ensuremath{\bm{\xi}}$. \end{assumption} Note that since~\eqref{P:generic} is parameterized by a random variable, its optimal solution~$\ensuremath{\bm{z}}^\star(\ensuremath{\bm{\Xi}})$ and value~$P^\star(\ensuremath{\bm{\Xi}})$ are random and depend on the \emph{a priori} unknown disturbance realization. \emph{Our goal is to obtain a deterministic~$\ensuremath{\bm{z}}^\dagger$ that is feasible for most~(if not all) realizations~$\ensuremath{\bm{\xi}}$ and whose performance~$P^\dagger = J(\ensuremath{\bm{z}}^\dagger)$ is similar to the optimal~$P^\star(\ensuremath{\bm{\xi}})$.} Though the latter objective is less critical, it is certainly desired. To illustrate the use of~\eqref{P:generic} in control, note that it can be used to cast the following constrained LQR problem~\cite{Borrelli17p}: \begin{prob}\label{P:lqr} \minimize_{\ensuremath{\bm{x}}_k,\,\ensuremath{\bm{u}}_k}& &&\ensuremath{\bm{x}}_N^T \ensuremath{\bm{P}} \ensuremath{\bm{x}}_N + \sum_{k = 0}^{N-1} \ensuremath{\bm{x}}_k^T \ensuremath{\bm{Q}} \ensuremath{\bm{x}}_k + \ensuremath{\bm{u}}_k^T \ensuremath{\bm{R}} \ensuremath{\bm{u}}_k \\ \subjectto& &&\abs{\ensuremath{\bm{x}}_k} \leq \ensuremath{\bm{{\bar{x}}}} \text{,} \quad \abs{\ensuremath{\bm{u}}_k} \leq \ensuremath{\bm{{\bar{u}}}} - \ensuremath{\bm{\Xi}}_{u,k} \text{,} \\ &&&\ensuremath{\bm{x}}_{k+1} = \ensuremath{\bm{A}} \ensuremath{\bm{x}}_k + \ensuremath{\bm{B}} \ensuremath{\bm{u}}_k + \ensuremath{\bm{\Xi}}_{d,k} \text{,} \end{prob} where~$\ensuremath{\bm{x}}_k$ and~$\ensuremath{\bm{u}}_k$ are the state and control action at time~$k$, respectively, of a linear dynamical system described by the state-space matrices~$\ensuremath{\bm{A}}$ and~$\ensuremath{\bm{B}}$, $\ensuremath{\bm{{\bar{x}}}}$ and~$\ensuremath{\bm{{\bar{u}}}}$ are bounds on the state and actions, and the initial state~$\ensuremath{\bm{x}}_0$ is given. Here, $\ensuremath{\bm{z}}$ collects the~$\{\ensuremath{\bm{x}}_k,\ensuremath{\bm{u}}_{k-1}\}$ for~$k = 1,\dots,N$. The disturbances in~\eqref{P:lqr} model changes in the dynamics~($\ensuremath{\bm{\Xi}}_{d,k}$) and/or disruptions to the system's actuation capabilities~($\ensuremath{\bm{\Xi}}_{u,k}$).
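To make this reduction concrete, the sketch below instantiates~\eqref{P:lqr} for a single sampled disturbance realization using the \texttt{cvxpy} modeling package. The double-integrator dynamics, horizon, bounds, and sampled disturbances are all illustrative assumptions, not data from the experiments of this paper.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# One realization of the constrained LQR (P:lqr), written as an
# instance of (P:generic) with z = {x_k, u_{k-1}}.
N, dt = 10, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator
B = np.array([[0.0], [dt]])
Q, Rm, P = np.eye(2), 0.1 * np.eye(1), 10.0 * np.eye(2)
x_bar, u_bar = 5.0, 1.0
x0 = np.array([1.0, 0.0])
xi_u = 0.3 * np.ones((1, N))   # sampled actuation disruption Xi_u
xi_d = np.zeros((2, N))        # sampled dynamics disturbance Xi_d

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = cp.quad_form(x[:, N], P) + sum(
    cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], Rm)
    for k in range(N))
constr = [x[:, 0] == x0]
for k in range(N):
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] + xi_d[:, k],
               cp.abs(x[:, k + 1]) <= x_bar,
               cp.abs(u[:, k]) <= u_bar - xi_u[:, k]]
cp.Problem(cp.Minimize(cost), constr).solve()
print(cost.value)
\end{verbatim}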
In particular, a realization~$[\ensuremath{\bm{\xi}}_{u,k}]_i = [\ensuremath{\bm{{\bar{u}}}}]_i$ is equivalent to actuator~$i$ being unavailable at instant~$k$. Hence, while the abstract~\eqref{P:generic} is the object of study of this paper, we are ultimately interested in the control problems it represents, e.g., \eqref{P:lqr}. In the control literature, a common approach to obtaining the desired~$\ensuremath{\bm{z}}^\dagger$ is to use the robust formulation of~\eqref{P:generic} \begin{prob}[\textup{P-RO}]\label{P:robust} P^\star_\text{Ro} = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&\Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \geq 1-\delta \text{,} \end{prob} where the probability is taken with respect to the distribution of~$\ensuremath{\bm{\Xi}}$ and the requirements~$g_i$ are collected in the vector-valued function~$\ensuremath{\bm{g}}$ for conciseness. The probability of violation parameter~$\delta \in [0,1]$ trades off feasibility for control performance~\cite{Schwarm99c, Li00r, Borrelli17p}. From the monotonicity of measures, it is straightforward that reducing~$\delta$ shrinks the feasible set of~\eqref{P:robust}, which may increase the control cost. For~$\delta = 0$, the constraints in~\eqref{P:robust} reduce to the classical worst-case formulation of robustness, enforcing that~$\max_{\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}}\ g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq 0$, i.e., that the solution is feasible for all possible conditions~$\ensuremath{\bm{\xi}}$~\cite{Dullerud13a}. Yet, these conditions can render the control problem infeasible or lead to solutions with impractical levels of performance. These issues are sometimes overcome by the statistical formulation in~\eqref{P:robust}. Under mild conditions, feasible solutions of~\eqref{P:robust} can be obtained using a deterministic optimization problem~\cite{Li00r, Borrelli17p}. \begin{proposition} Let~$\ensuremath{\bm{\Xi}}$ be a sub-Gaussian random vector~(e.g., Gaussian or Bernoulli), i.e., $\E\left[ e^{\nu \ensuremath{\bm{u}}^T \left( \ensuremath{\bm{\Xi}} - \E[\ensuremath{\bm{\Xi}}] \right)} \right] \leq e^{\nu^2 \sigma^2/2}$ for all~$\nu \in \bbR$ and~$\ensuremath{\bm{u}} \in \bbR^d$ such that~$\norm{\ensuremath{\bm{u}}} = 1$. Then, under Assumption~\ref{A:convexity}, the unique point \begin{prob}[$\widehat{\textup{P}}\textup{-RO}$]\label{P:equivalentRobust} \ensuremath{\bm{{\hat{z}}}}_\text{Ro} = \argmin_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \E[\ensuremath{\bm{\Xi}}]) \leq -\epsilon \text{,} \end{prob} with~$\epsilon = L \sigma \sqrt{2 \log(2 m d/\delta)}$, for~$L = \max_i L_i$, is \eqref{P:robust}-feasible. In particular, if~$\ensuremath{\mathcal{K}} \subseteq [0,\bar{\xi}]^d$, then~$\sigma \leq \bar{\xi}/2$. \end{proposition} \begin{proof} Recall that since~$J$ is strongly convex, the solution of~\eqref{P:equivalentRobust} is unique~\cite{Bertsekas09c}. The proof then follows by bounding~$\Pr \left[ \max_i g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \ensuremath{\bm{\Xi}}) \leq 0 \right]$ using concentration of measure~\cite{Ledoux01t}.
From the Lipschitz continuity of~$g_i$ we get \begin{align*} g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \ensuremath{\bm{\xi}}) &\leq g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \E[\ensuremath{\bm{\Xi}}]) + L_i \norm{\ensuremath{\bm{\xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \\ {}&\leq -\epsilon + L_i \norm{\ensuremath{\bm{\xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \text{.} \end{align*} Note that since~$g_i(\ensuremath{\bm{{\hat{z}}}}_\text{Ro}, \E[\ensuremath{\bm{\Xi}}]) \leq 0$ we care only about the positive tail of the Lipschitz inequality. To proceed, use the union bound and Hoeffding's inequality to obtain that \begin{equation}\label{E:hoeffding} \Pr \left[ \max_i L_i \norm{\ensuremath{\bm{\Xi}} - \E[\ensuremath{\bm{\Xi}}]}_\infty \leq \epsilon \right] \geq 1 - 2 d \sum_{i = 1}^m \exp\left( \frac{-\epsilon^2}{2 L_i^2 \sigma^2} \right) \text{.} \end{equation} Using~$\epsilon$ as in the hypothesis ensures that the right-hand side of~\eqref{E:hoeffding} is at least~$1-\delta$, thus concluding the proof. \end{proof} Robust controllers are often deployed in critical applications, such as industrial process control and security constrained power allocation~\cite{Borrelli17p, Capitanescu11s}. Nevertheless, their worst-case approach has two shortcomings. First, too stringent requirements on the probability of failure~$\delta$ can result in an infeasible problem or render the solution of~\eqref{P:robust} useless in practice due to its poor performance even in favorable conditions. What is more, sensitive requirements~(i.e., large~$L_i$) lead to a large~$\epsilon$ in~\eqref{P:equivalentRobust}, considerably reducing its feasible set. Though~\eqref{P:robust} may be feasible even if~\eqref{P:equivalentRobust} is not, obtaining a solution of the former is challenging without the latter except in special cases~\cite{Dullerud13a, Schwarm99c, Li00r, Borrelli17p}. Second, even if~\eqref{P:robust} is feasible and its solution has reasonable performance, the issue remains of what happens in the~$\delta$ portion of the realizations in which a stronger than anticipated disturbance occurs. Indeed, though robust autonomous systems make failures unlikely, they do not account for how the system fails once it does. Hence, though unlikely, failures can be catastrophic. Resilience overcomes these limitations by adapting the underlying optimal control problem to disruptions. \section{RESILIENT CONTROL} \label{S:resilience} In a parallel to the ecology literature, we define resilience in autonomous systems as \emph{the ability to adapt to, and possibly recover from, disruptions}. In particular, we are interested in dealing with disturbances so extreme that the original control problem becomes ineffective or infeasible. Where robust control would declare failure, resilient control attempts to remain operational by modifying the underlying control problem, reverting to an alternative trajectory that violates requirements in a controlled manner. In practice, this means that when a resilient system suffers a disastrous shock that jeopardizes its ability to solve its original task, it will adapt and modify its requirements in an attempt to at least partially salvage its mission. Resilience is therefore not a replacement for robustness, which may be the only sensible course of action for critical requirements, but a complementary set of behaviors that a control system can display.
\subsection{Resilient optimal control} \label{S:resilient_control} To operationalize the above definition of a resilient dynamical system, we must embed the optimal control problem~\eqref{P:generic} with the ability to modify its requirements depending on the disruption suffered by the system. A natural way to do so is by associating a disturbance-dependent relaxation~$s_i: \ensuremath{\mathcal{K}} \to \bbR_+$, $s_i \in L_2$, to the $i$-th~requirement as in \begin{prob}[\textup{P-RE}]\label{P:parametrized} P^\star_\text{Re}(\ensuremath{\bm{s}}) = \min_{\ensuremath{\bm{z}} \in \bbR^p}& &&J(\ensuremath{\bm{z}}) \\ \subjectto& &&\ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \text{,} \quad \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{,} \end{prob} where the vector-valued function~$\ensuremath{\bm{s}}$ collects the slacks~$s_i$. Depending on~$\ensuremath{\mathcal{K}}$, \eqref{P:parametrized} may have a finite or infinite number of constraints. The latter case can be tackled using semi-infinite programming algorithms~\cite{Reemtsen98s, Bonnans00p}. The violations~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}})$ in~\eqref{P:parametrized} determine how the underlying control problem is modified to adapt to the operational conditions~$\ensuremath{\bm{\xi}}$. In~\eqref{P:lqr}, for instance, they could correspond to relaxing the state constraints and allowing the system to visit higher risk regions of the state space. If damage to the actuators renders the original control problem infeasible, this may be the only course of action to remain operational. Observe that for~$\ensuremath{\bm{s}} \equiv \ensuremath{\bm{0}}$, \eqref{P:parametrized} solves the worst-case robust control problem~\eqref{P:robust} for~$\delta = 0$. Indeed, if~$\ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{0}}$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$, then~$\Pr[\ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}}] = 1$. This formulation is often found in settings where controllers must abide by requirements under specific contingencies, such as security constrained power allocation~\cite{Capitanescu11s}. In the case of resilience, however, the goal is not to obtain solutions for vanishing slacks, but to adjust~$\ensuremath{\bm{s}}$ to allow constraint violations for disruptions under which the requirements become too stringent for a robust controller to satisfy. Hence, we are typically interested in solving~\eqref{P:parametrized} with~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \succ \ensuremath{\bm{0}}$ for some, if not all, disruptions~$\ensuremath{\bm{\xi}}$. For any predetermined~$\ensuremath{\bm{s}}$, \eqref{P:parametrized} is a smooth convex problem that can be solved using any of a myriad of existing methods~\cite{Bertsekas15c}, as illustrated by the sketch below. Yet, designing~$\ensuremath{\bm{s}}$, which ultimately determines the resilient behavior of the controller, can be quite challenging. Even for a moderate number of contingencies~(cardinality of~$\ensuremath{\mathcal{K}}$), finding the right requirement to violate and determining by how much to do so for each state of the world is intricate. This problem is only exacerbated as the number of requirements and/or contingencies grows.
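Concretely, the following minimal example solves~\eqref{P:parametrized} for a predetermined slack profile, enforcing the semi-infinite constraint on a finite grid over~$\ensuremath{\mathcal{K}}$~(a simple discretization; the semi-infinite programming methods cited above offer more refined alternatives). The requirement, grid, and slack profile are illustrative assumptions.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# (P-RE) for a fixed, predetermined slack function, with the
# semi-infinite constraint enforced on a grid over K = [0, 2].
# Illustrative data: g(z, xi) = xi - z and s(xi) = 0.1 xi.
xi = np.linspace(0.0, 2.0, 100)
s = 0.1 * xi                  # predetermined slacks, s(xi) >= 0

z = cp.Variable()
prob = cp.Problem(cp.Minimize(cp.square(z)),  # J(z) = z^2
                  [xi - z <= s])              # g(z, xi) <= s(xi)
prob.solve()
print(z.value)   # here z* = max(0, max_xi (xi - s(xi)))
\end{verbatim}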
In Section~\ref{S:resilience_theory}, we propose a principled approach to designing resilient behavior based on trading off the control performance~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ and a measure of violation. Before proceeding, however, we derive the dual problem of~\eqref{P:parametrized} and introduce the results from duality theory needed in the remainder of the paper. \subsection{Dual resilient control} \label{S:duality} Start by associating the dual variable~$\lambda_i \in L_2^+$ with the~$i$-th requirement, where~$L_2^+ = \{\lambda \in L_2 \mid \lambda \geq 0 \text{ a.e.}\}$. Depending on~$\ensuremath{\mathcal{K}}$, $\lambda_i$ may be a function or reduce to an (in)finite-dimensional vector. For conciseness, we collect the~$\lambda_i$ in the function~$\ensuremath{\bm{\lambda}}$ taking values in~$\bbR_+^m$. Then, define the Lagrangian of~\eqref{P:parametrized} as \begin{equation}\label{E:lagrangian} \begin{aligned} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}, \ensuremath{\bm{s}}) &= J(\ensuremath{\bm{z}}) + \int_\ensuremath{\mathcal{K}} \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})^T \big[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] d\ensuremath{\bm{\xi}} \text{.} \end{aligned} \end{equation} From the Lagrangian~\eqref{E:lagrangian}, we obtain the dual problem \begin{prob}[\textup{D-RE}]\label{P:dual_parametrized} D^\star_\text{Re}(\ensuremath{\bm{s}}) = \max_{[\ensuremath{\bm{\lambda}}]_i \in L_2^+} \min_{\ensuremath{\bm{z}} \in \bbR^p}\ \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}, \ensuremath{\bm{s}}) \text{.} \end{prob} Under mild conditions, $D^\star_\text{Re}(\ensuremath{\bm{s}})$ attains~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ and solving~\eqref{P:dual_parametrized} becomes equivalent to solving~\eqref{P:parametrized}. This fact, together with the convexity of~\eqref{P:parametrized}, implies that the well-known KKT necessary conditions are also sufficient. In these cases, we obtain a direct relation between the solutions of~\eqref{P:dual_parametrized} and the sensitivity of~$P^\star_\text{Re}$ with respect to~$\ensuremath{\bm{s}}$. These facts are formalized in Propositions~\ref{T:zdg} and~\ref{T:diff_P}. \begin{assumption}\label{A:slater} There exists~$\ensuremath{\bm{{\bar{z}}}}$ such that~$\ensuremath{\bm{g}}(\ensuremath{\bm{{\bar{z}}}},\ensuremath{\bm{\xi}}) < \ensuremath{\bm{0}}$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$. \end{assumption} \begin{proposition}[{\cite[Prop.~5.3.4]{Bertsekas09c}}]\label{T:zdg} Under Assumptions~\ref{A:convexity} and~\ref{A:slater}, strong duality holds for~\eqref{P:parametrized}, i.e., $P^\star_\text{Re}(\ensuremath{\bm{s}}) = D^\star_\text{Re}(\ensuremath{\bm{s}})$.
Moreover, \begin{enumerate}[(i)] \item if~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}})$ is a solution of~\eqref{P:dual_parametrized}, then~$\ensuremath{\bm{z}}_\text{Re}^\star(\ensuremath{\bm{s}}) = \argmin_{\ensuremath{\bm{z}} \in \bbR^p}\ \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}), \ensuremath{\bm{s}})$ is a solution of~\eqref{P:parametrized}; \item if~$\ensuremath{\bm{z}}^\prime$ is a feasible point of~\eqref{P:parametrized} and~$[\ensuremath{\bm{\lambda}}^\prime]_i \in L_2^+$, then~$\ensuremath{\bm{z}}^\prime$ is the solution of~\eqref{P:parametrized} and~$\ensuremath{\bm{\lambda}}^\prime$ is a solution of~\eqref{P:dual_parametrized} if and only if \begin{subequations}\label{E:kkt} \begin{align} \nabla \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}^\prime,\ensuremath{\bm{\lambda}}^\prime,\ensuremath{\bm{s}}) &= \ensuremath{\bm{0}} \label{E:kkt_grad} \\ [\ensuremath{\bm{\lambda}}^\prime(\ensuremath{\bm{\xi}})]_i \left[ g_i(\ensuremath{\bm{z}}^\prime,\ensuremath{\bm{\xi}}) - s_i(\ensuremath{\bm{\xi}}) \right] &= 0 \text{, for all } \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \label{E:kkt_slackness} \text{.} \end{align} \end{subequations} \end{enumerate} \end{proposition} \begin{proposition}\label{T:diff_P} Let~$\ensuremath{\bm{\lambda}}^\star$ be a solution of~\eqref{P:dual_parametrized}. Under Assumptions~\ref{A:convexity} and~\ref{A:slater}, it holds that~$\nabla_{\ensuremath{\bm{s}}} P^\star_\text{Re}(\ensuremath{\bm{s}}) \big\vert_{\ensuremath{\bm{\xi}}} = -\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{\xi}})$. \end{proposition} \begin{proof} This is a direct consequence of~\cite[Thm.~3.2]{Shapiro95d}. The only non-trivial condition is that the solution set of~\eqref{P:parametrized} is \emph{inf-compact}. This stems from the fact that the~$g_i$ are radially unbounded and continuous, in which case the feasible set of~\eqref{P:parametrized} is respectively bounded and closed. \end{proof} Having established these duality results, we now introduce a method to design resilient behavior based on compromising between control performance and requirement violations. \section{RESILIENCE BY COMPROMISE} \label{S:resilience_theory} While straightforward and tractable, the resilient optimal control problem~\eqref{P:parametrized} can lead to a multitude of behaviors, not all of them useful, depending on the choice of slacks. In this section, we take a compromise approach to designing resilient behavior by balancing the control performance~$P^\star_\text{Re}(\ensuremath{\bm{s}})$ resulting from the violations~$\ensuremath{\bm{s}}$ and a measure of the magnitude of these violations. The rationale behind this compromise is that even after adapting to a disruption, the behavior of the resilient system should remain similar to that of the undisturbed one in at least some aspects. If the specifications of the original problem must be completely replaced, then the original problem was most likely ill-posed to begin with. Still, regardless of the disruption caused by~$\ensuremath{\bm{\xi}}$, increasing the violations can only improve the control performance.
Indeed, $P^\star_\text{Re}$ is a non-increasing function of~$\ensuremath{\bm{s}}$ in the sense that since the feasible set of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}$ is contained in that of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}^\prime \succeq \ensuremath{\bm{s}}$, it immediately holds that~$P^\star_\text{Re}(\ensuremath{\bm{s}}^\prime) \leq P^\star_\text{Re}(\ensuremath{\bm{s}})$. Hence, all resilient systems must strike a balance between violating requirements to remain operational~(or improve their performance) and staying close to the original specifications. This balance is naturally mediated by the likelihood of the violation occurring, i.e., by the probability of the operating conditions~$\ensuremath{\bm{\xi}}$, in the sense that larger deviations from the original problem are allowed for less likely disruptions. Explicitly, associate to each relaxation~$\ensuremath{\bm{s}}$ a scalar violation cost~$h(\ensuremath{\bm{s}})$. Then, the specification~$\ensuremath{\bm{s}}^\star$ is compromise-resilient if any further requirement violation would improve performance~(reduce control cost) as much as it would increase the violation cost, i.e., \begin{equation}\label{E:resilient} \nabla P^\star_\text{Re}(\ensuremath{\bm{s}}) \big\vert_{\ensuremath{\bm{s}}^\star,\, \ensuremath{\bm{\xi}}} = -\nabla h(\ensuremath{\bm{s}}^\star(\ensuremath{\bm{\xi}})) f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}}) \text{,} \end{equation} where~$\nabla h$ is the gradient of~$h$. Without loss of generality, we assume~$h(\ensuremath{\bm{0}}) = 0$. The existence of the derivative of the optimal value function~$P^\star_\text{Re}$ follows from Proposition~\ref{T:diff_P}. \begin{assumption}\label{A:h} The cost~$h$ is a twice differentiable, strongly convex function. \end{assumption} Observe that~$\ensuremath{\bm{s}}^\star$ need not vanish even if~\eqref{P:parametrized} is feasible for~$\ensuremath{\bm{s}} \equiv \ensuremath{\bm{0}}$. Hence, contrary to robustness from~\eqref{P:robust}, a compromise-resilient system may violate the original requirements even for mild disturbances that would not, in principle, warrant it. Nevertheless, whenever it does, it does so in a controlled and parsimonious manner. Though obtaining a solution of~\eqref{P:parametrized} under the resilient equilibrium~\eqref{E:resilient} may appear challenging, it is in fact straightforward since it is equivalent to a convex optimization problem~(Section~\ref{S:equivalent_problem}). Hence, the balance~\eqref{E:resilient} induces relaxations that explicitly minimize the expected violation cost. Still, this does not characterize the resilient behavior resulting from~\eqref{E:resilient}. We therefore proceed to quantify the effect of the operational conditions~$\ensuremath{\bm{\xi}}$ on the resilient behavior~$\ensuremath{\bm{s}}$, showing that it identifies and relaxes requirements that are harder to satisfy under each disruption. To conclude, we construct a cost such that the resilience-by-compromise solution from~\eqref{E:resilient} is also a solution of the robust control problem~\eqref{P:robust}. Hence, resilience and robustness effectively optimize different objectives and may, in many applications, both be desired properties.
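As a numerical sanity check of the equilibrium~\eqref{E:resilient}, one can discretize the disruption set, solve the joint convex problem~\eqref{P:resilient} in~$(\ensuremath{\bm{z}}, \ensuremath{\bm{s}})$, and compare the recovered constraint multipliers against~$\nabla h(\ensuremath{\bm{s}}^\star) f_{\ensuremath{\bm{\Xi}}}$. The sketch below does so for an illustrative one-dimensional instance with~$h(s) = \tfrac{\gamma}{2} s^2$; all data are assumptions made for the example.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Discretized (P:resilient) and a check of (E:resilient): the
# multipliers, rescaled by the quadrature weight, should match
# grad h(s) * f_Xi up to solver tolerance.
n = 50
xi = np.linspace(0.0, 2.0, n)
w = xi[1] - xi[0]            # quadrature weight
f_xi = np.full(n, 0.5)       # uniform density on K = [0, 2]
gamma = 10.0                 # h(s) = (gamma / 2) s^2

z = cp.Variable()
s = cp.Variable(n, nonneg=True)
expect_h = w * (f_xi @ (0.5 * gamma * cp.square(s)))
con = [xi - z <= s]          # g(z, xi) = xi - z <= s(xi)
cp.Problem(cp.Minimize(cp.square(z) + expect_h), con).solve()

lam = con[0].dual_value / w  # multiplier density on the grid
print(np.abs(lam - gamma * s.value * f_xi).max())  # ~ 0
\end{verbatim}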
\subsection{Inverse optimality of resilience by compromise} \label{S:equivalent_problem} Consider the optimization problem \begin{prob}\label{P:resilient} P^\star_\text{Re} = \min_{\substack{\ensuremath{\bm{z}} \in \bbR^p\\ s_i \in L_2^+}}& &&J(\ensuremath{\bm{z}}) + \E\left[ h\big( \ensuremath{\bm{s}}(\ensuremath{\bm{\Xi}}) \big) \right] \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq s_i(\ensuremath{\bm{\xi}}) \text{,} \quad \text{for all } \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{,} \\ &&&\qquad i = 1,\dots,m \text{,} \end{prob} where the expectation is taken with respect to the distribution of the random variable~$\ensuremath{\bm{\Xi}}$. The solution of~\eqref{P:resilient} is the same as the modified problem~\eqref{P:parametrized} with slacks satisfying the resilient equilibrium~\eqref{E:resilient}. \begin{proposition}\label{T:equivalent_problem} Let~$(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star)$ be the solution of~\eqref{P:resilient}. Then, $P^\star_\text{Re} = P^\star_\text{Re}(\ensuremath{\bm{s}}^\star)$ and~$\ensuremath{\bm{s}}^\star$ are the unique slacks that satisfy the equilibrium~\eqref{E:resilient}. \end{proposition} \begin{proof} To show~\eqref{P:resilient} is equivalent solving~\eqref{P:parametrized} subject to the compromise~\eqref{E:resilient}, we leverage the fact that the KKT conditions in Proposition~\ref{T:zdg}(ii) are necessary and sufficient for convex programs under Assumption~\ref{A:slater}. Start by defining the Lagrangian of~\eqref{P:resilient} as \begin{equation}\label{E:lagrangian_resilient} \begin{aligned} \ensuremath{\mathcal{L}}^\prime((\ensuremath{\bm{z}},\ensuremath{\bm{s}}), \ensuremath{\bm{\mu}}) &= f_0(\ensuremath{\bm{z}}) + \E\left[ h\big( \ensuremath{\bm{s}}(\ensuremath{\bm{\Xi}}) \big) \right] \\ {}&+ \int_\ensuremath{\mathcal{K}} \ensuremath{\bm{\mu}}(\ensuremath{\bm{\xi}})^T \big[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] d\ensuremath{\bm{\xi}} \text{,} \end{aligned} \end{equation} where we write~$(\ensuremath{\bm{z}},\ensuremath{\bm{s}})$ to emphasize that they are both primal variables of~\eqref{P:resilient} as opposed to~\eqref{P:parametrized} in which~$\ensuremath{\bm{z}}$ is an optimization variable and~$\ensuremath{\bm{s}}$ is a parameter. From Proposition~\ref{T:zdg}(ii), if~$(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star)$ is a solution of~\eqref{P:resilient}, then there exists~$\ensuremath{\bm{\mu}}^\star$ such that~$\nabla \ensuremath{\mathcal{L}}^\prime((\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{s}}^\star), \ensuremath{\bm{\mu}}^\star) = \ensuremath{\bm{0}}$ and~$[\ensuremath{\bm{\mu}}^\star(\ensuremath{\bm{\xi}})]_i \left[ g_i(\ensuremath{\bm{z}}_\text{Re}^\star,\ensuremath{\bm{\xi}}) - s_i(\ensuremath{\bm{\xi}}) \right] = 0$, for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$. 
Separating the elements of the gradient of~\eqref{E:lagrangian_resilient} for~$\ensuremath{\bm{z}}$ and~$\ensuremath{\bm{s}}$, its KKT conditions become \begin{equation}\label{E:kkt_resilient} \nabla_{\ensuremath{\bm{z}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}}_\text{Re}^\star, \ensuremath{\bm{\mu}}^\star, \ensuremath{\bm{s}}^\star) = \ensuremath{\bm{0}} \text{ and } \nabla h \big( \ensuremath{\bm{s}}^\star(\ensuremath{\bm{\xi}}) \big) - \ensuremath{\bm{\mu}}^\star(\ensuremath{\bm{\xi}}) = \ensuremath{\bm{0}} \text{,} \end{equation} where~$\ensuremath{\mathcal{L}}$ is the Lagrangian~\eqref{E:lagrangian} of~\eqref{P:parametrized} with slacks~$\ensuremath{\bm{s}}^\star$. The first equation in~\eqref{E:kkt_resilient} shows that~$\ensuremath{\bm{z}}_\text{Re}^\star$ is also a solution of~\eqref{P:parametrized} for the slacks~$\ensuremath{\bm{s}}^\star$. Using Proposition~\ref{T:diff_P}, the second equation shows that~$\ensuremath{\bm{s}}^\star$ satisfies the equilibrium~\eqref{E:resilient}. The reverse relation holds directly, since the KKT conditions of both problems are actually identical. \end{proof} Proposition~\ref{T:equivalent_problem} shows that under the resilience equilibrium~\eqref{E:resilient}, \eqref{P:parametrized} optimizes both the control performance function~$J$ and the expected requirement violation cost. In other words, though the resilient formulation may violate the requirements for most states of the world, it does so in a parsimonious manner. It is worth noting that relaxing constraints as in~\eqref{P:resilient} is common in convex programming and is used, for instance, in phase~1 solvers for interior-point methods~\cite{Bertsekas15c}. The goal in~\eqref{P:resilient}, however, is notably different. Indeed, resilience does not seek a solution~$\ensuremath{\bm{z}}^\dagger$ for which the slacks~$\ensuremath{\bm{s}}(\ensuremath{\bm{\xi}})$ vanish for all~$\ensuremath{\bm{\xi}}$. Its aim is to adapt to situations in which disruptions are so extreme that only by modifying the underlying control problem is it possible to remain operational. Hence, it seeks~$\ensuremath{\bm{s}} \succ \ensuremath{\bm{0}}$ for some, if not all, disruptions~$\ensuremath{\bm{\xi}}$. Another consequence of Proposition~\ref{T:equivalent_problem} is that the compromise-resilient control problem~\eqref{P:parametrized}--\eqref{E:resilient} has a straightforward solution since it is equivalent to a convex optimization program, namely~\eqref{P:resilient}. Nevertheless, it turns out that a more efficient algorithm can be obtained by understanding how resilience violates the requirements to respond to disruptions. That is the topic of the next section. \subsection{Quantifying the effect of disturbances} \label{S:counterfactual} Proposition~\ref{T:equivalent_problem} shows that resilient control minimizes the problem modifications through the cost~$h$. In constrast, the following proposition explicitly describes the effect of a disturbance~$\ensuremath{\bm{\xi}}$ on the violations~$\ensuremath{\bm{s}}$. \begin{proposition}\label{T:slacks} Let~$\ensuremath{\bm{z}}_\text{Re}^\star(\ensuremath{\bm{s}}^\star)$ be the solution of~\eqref{P:parametrized} for the resilient slacks~$\ensuremath{\bm{s}}^\star$ from~\eqref{E:resilient} and~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)$ be the solution of its dual problem~\eqref{P:dual_parametrized}. 
Then, % \begin{equation}\label{E:slacks} \ensuremath{\bm{s}}^\star = \left( \nabla h \right)^{-1} \left[ \frac{\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)}{f_{\ensuremath{\bm{\Xi}}}} \right] \text{.} \end{equation} % \vspace{-4pt} \end{proposition} \begin{proof} This follows by applying Proposition~\ref{T:diff_P} to the equilibrium~\eqref{E:resilient} to obtain~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star) = \nabla h(\ensuremath{\bm{s}}^\star) f_{\ensuremath{\bm{\Xi}}}$. Recall that the Jacobian of the gradient~$\nabla h$ is the Hessian~$\nabla^2 h$ and that, since~$h$ is strongly convex~(Assumption~\ref{A:h}), it holds that~$\nabla^2 h \succ 0$. Hence, the inverse of the gradient exists by the inverse function theorem, yielding~\eqref{E:slacks}. \end{proof} Proposition~\ref{T:slacks} establishes a fixed point relation between the resilient slacks~$\ensuremath{\bm{s}}^\star$ and the optimal dual variables~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}})$. This is not surprising in view of the well-known sensitivity interpretation of dual variables for convex programs. Indeed, dual variables represent how much the objective stands to change if a constraint were relaxed or tightened. Given the monotone increasing nature of~$\nabla h$~(due to the strong convexity of~$h$, Assumption~\ref{A:h}), it is clear from~\eqref{E:slacks} that the resilient formulation identifies and relaxes constraints that are harder to satisfy. Hence, if a disruption~$\ensuremath{\bm{\xi}}$ makes it difficult for the resilient system to meet a requirement, it will modify that requirement according to its difficulty. This change is mediated by the variation in the resilience cost~$h$ and the likelihood of the disruption~$f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}})$, which determine the amount by which the requirement is relaxed. The choice of~$h$ therefore plays an important role in the resulting resilient behavior. For instance, if the violation cost is linear, i.e., $h(\ensuremath{\bm{s}}) = \ensuremath{\bm{\gamma}}^T \ensuremath{\bm{s}}$, $\ensuremath{\bm{\gamma}} \in \bbR_+^m$, the equilibrium~\eqref{E:resilient} occurs for~$[\ensuremath{\bm{s}}^\star]_i = [\ensuremath{\bm{\gamma}}]_i^{-1}$. Hence, the violations are independent of the disruptions and the solution is the same as if~\eqref{P:parametrized} were solved for predetermined slacks. A more interesting phenomenon occurs for quadratic cost structures, e.g., $h(\ensuremath{\bm{s}}) = \frac{1}{2} \ensuremath{\bm{s}}^T \ensuremath{\bm{\Gamma}} \ensuremath{\bm{s}}$, for~$\ensuremath{\bm{\Gamma}} \succ 0$. Then, the violations are proportional to the dual variables as in~$\ensuremath{\bm{s}}^\star = \ensuremath{\bm{\Gamma}}^{-1} \ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star) / f_{\ensuremath{\bm{\Xi}}}$. In this case, the resilient violations are proportional to the requirement difficulty and inversely proportional to the likelihood of the disruption. Given this wide range of resilient behaviors, a question that arises is how they relate to those induced by the robust formulation. We explore this question in the sequel by relating the resilient control problem~\eqref{P:resilient} to its robust counterpart~\eqref{P:robust}.
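To make the quadratic example concrete, the following minimal Python sketch evaluates the map of Proposition~\ref{T:slacks} for~$h(\ensuremath{\bm{s}}) = \frac{1}{2}\ensuremath{\bm{s}}^T \ensuremath{\bm{\Gamma}} \ensuremath{\bm{s}}$. It is only an illustration of the fixed-point relation~\eqref{E:slacks}: the dual variables and density value are placeholders standing in for the output of a solver of~\eqref{P:dual_parametrized}, and the cost weights are hypothetical.
\begin{verbatim}
import numpy as np

# Quadratic violation cost h(s) = 0.5 * s^T Gamma s, for which
# (grad h)^{-1}(y) = Gamma^{-1} y, as used in the fixed point above.
Gamma = np.diag([2.0, 5.0])          # hypothetical cost weights

def resilient_slacks(lam_xi, f_xi):
    """Map dual variables at a disruption xi to resilient slacks."""
    return np.linalg.solve(Gamma, lam_xi / f_xi)

lam_xi = np.array([0.8, 0.1])        # placeholder duals at some xi
f_xi = 0.25                          # density of that disruption
print(resilient_slacks(lam_xi, f_xi))  # -> [1.6, 0.08]
\end{verbatim}
Note how, consistently with the sensitivity interpretation above, the constraint with the larger dual variable (the harder requirement) receives the larger slack.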
\subsection{Resilience vs.\ robustness} \label{S:resilient_vs_robust} On the surface, the robust~\eqref{P:robust} and resilient~\eqref{P:resilient} control problems are strikingly different. Indeed, it is clear from the discussion in the previous section that, depending on the choice of~$h$, their behaviors can be quite dissimilar. Yet, it turns out that~\eqref{P:robust} and~\eqref{P:resilient} are equivalent under mild conditions for an appropriate choice of~$h$, as shown in the following proposition. \begin{proposition}\label{T:resilient_v_robust} Let~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ be a solution of~\eqref{P:resilient} with~$h_\text{Ro}(\ensuremath{\bm{s}}) = -\gamma \prod_{i=1}^m \big( 1 - \mathbb{H}(s_i) \big)$, where~$\mathbb{H}$ is the Heaviside function, i.e., $\mathbb{H}(x) = 1$ if~$x > 0$ and zero otherwise. For each~$\gamma \geq 0$ there exists a~$\delta^\dagger \in [0,1]$ such that~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ is a solution of~\eqref{P:robust} with probability of failure~$\delta^\dagger$. \end{proposition} \begin{proof} Fix~$\gamma$ in the violation cost~$h_\text{Ro}$ defined in the hypothesis and let~$(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{s}}_\text{Re}^\dagger)$ be a solution pair of the resilience-by-compromise problem~\eqref{P:resilient} and~$\ensuremath{\bm{z}}_\text{Ro}^\star$ be a solution of the robust~\eqref{P:robust} with~$1-\delta^\dagger = \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\Xi}}) \leq 0 \right]$. By definition, the value of~\eqref{P:resilient} is achieved at~$\ensuremath{\bm{z}} = \ensuremath{\bm{z}}_\text{Re}^\dagger$ and~$\ensuremath{\bm{s}} = \ensuremath{\bm{s}}_\text{Re}^\dagger$. What is more, note that the pair~$(\ensuremath{\bm{z}},\ensuremath{\bm{s}}) = (\ensuremath{\bm{z}}_\text{Ro}^\star, \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Ro}^\star,\cdot))$ is trivially feasible for~\eqref{P:resilient} and can therefore be used to upper bound its value as in \begin{equation}\label{E:rvr1} \begin{aligned} P_\text{Re}^\star &= J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \E\left[ \prod_{i = 1}^m \left( 1-\mathbb{H} \left( \big[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \big]_i \right) \right) \right] \\ &\leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma \E\left[ \prod_{i = 1}^m \bigg( 1-\mathbb{H} \left( g_i(\ensuremath{\bm{z}}_\text{Ro}^\star, \ensuremath{\bm{\Xi}}) \right) \bigg) \right] \text{.} \end{aligned} \end{equation} Due to the form of~$\mathbb{H}$, the expectations in~\eqref{E:rvr1} reduce to probabilities.
We then obtain \begin{multline}\label{E:rvr2} J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \\ {}\leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Ro}^\star, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \text{.} \end{multline} Since~$\ensuremath{\bm{z}}_\text{Ro}^\star$ is a solution of~\eqref{P:robust} with probability of failure~$\delta^\dagger$, \eqref{E:rvr2} becomes \begin{equation}\label{E:rvr3} \begin{aligned} J(\ensuremath{\bm{z}}_\text{Re}^\dagger) - \gamma \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \leq J(\ensuremath{\bm{z}}_\text{Ro}^\star) - \gamma (1-\delta^\dagger) \text{.} \end{aligned} \end{equation} To conclude, recall from~\eqref{P:resilient} that~$\ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\xi}}) \leq \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\xi}})$ for all~$\ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}}$, which by monotonicity of the Lebesgue integral implies that \begin{equation*} \Pr\left[ \ensuremath{\bm{s}}_\text{Re}^\dagger(\ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] \leq \Pr\left[ \ensuremath{\bm{g}}(\ensuremath{\bm{z}}_\text{Re}^\dagger, \ensuremath{\bm{\Xi}}) \leq \ensuremath{\bm{0}} \right] = 1-\delta^\dagger \text{.} \end{equation*} Hence, we obtain from~\eqref{E:rvr3} that~$J(\ensuremath{\bm{z}}_\text{Re}^\dagger) \leq P_\text{Ro}^\star$. Since~$\ensuremath{\bm{z}}_\text{Re}^\dagger$ is by design a feasible point of~\eqref{P:robust} with probability of failure~$\delta^\dagger$, it also holds that~$J(\ensuremath{\bm{z}}_\text{Re}^\dagger) \geq P_\text{Ro}^\star$. Its control performance therefore achieves the optimal value~$P_\text{Ro}^\star$, so it must be a solution of~\eqref{P:robust}. \end{proof} Proposition~\ref{T:resilient_v_robust} gives conditions on the violation cost~$h$ such that a resilience-by-compromise controller behaves as a robust one. In particular, it states that there exists a fixed, strict violation cost, i.e., one that charges a fixed price only if some requirement is violated, such that resilience by compromise reduces to robustness. This cost essentially determines the level of control performance~$J$ above which the controller chooses to pay~$\gamma$ to give up on satisfying the requirements altogether. Notice that Proposition~\ref{T:resilient_v_robust} holds even though the resulting problem is not convex. In that sense, resilience can be thought of as a soft version of robustness: whereas the violation magnitude matters for the former, only whether the requirement is violated impacts the latter. For certain critical requirements, this all-or-nothing behavior may be the only acceptable one. In these cases, constraints should be treated as robust with appropriate satisfaction levels. Other engineering requirements, however, are nominal in nature and can be relaxed as long as violations are small and short-lived. Treating these constraints as resilient enables the system to continue operating under disruptions while remaining robust with respect to critical specifications. For instance, if a set of essential requirements needs a level of satisfaction so high that the control problem becomes infeasible, nominal constraints can be adapted to recover a useful level of operation. By leveraging Proposition~\ref{T:resilient_v_robust}, this can be achieved by posing a control problem that is both robust and resilient.
To do so, let~$\ensuremath{\mathcal{S}} \subseteq [m]$ be the set of soft~(nominal) requirements, i.e., those that can withstand relaxation, and~$\ensuremath{\mathcal{H}} \subseteq [m]$ be the set of hard~(critical) requirements, i.e., those that cannot be violated under any circumstances. Naturally, $\ensuremath{\mathcal{S}} \cap \ensuremath{\mathcal{H}} = \emptyset$ and~$\ensuremath{\mathcal{S}} \cup \ensuremath{\mathcal{H}} = [m]$. We can then combine~\eqref{P:robust} and~\eqref{P:resilient} into a single problem, namely \begin{prob}\label{P:complete} \minimize_{\substack{\ensuremath{\bm{z}} \in \bbR^p,\\s_i \in L_2^+}}& &&J(\ensuremath{\bm{z}}) + \E\left[ \sum_{i \in \ensuremath{\mathcal{S}}} h_i\big( s_i(\ensuremath{\bm{\Xi}}) \big) + \sum_{i \in \ensuremath{\mathcal{H}}} h_\text{Ro}\big( s_i(\ensuremath{\bm{\Xi}}) \big)\right] \\ \subjectto& &&g_i(\ensuremath{\bm{z}}, \ensuremath{\bm{\xi}}) \leq s_i(\ensuremath{\bm{\xi}}) \text{,} \ \ \forall \ensuremath{\bm{\xi}} \in \ensuremath{\mathcal{K}} \text{, } i = 1,\dots,m \text{.} \end{prob} While~\eqref{P:complete} provides a complete solution to designing robust/resilient systems, it is worth noting that it is not a convex optimization problem. What is more, the non-smooth nature of~$\mathbb{H}$ poses a definite challenge to even approximating its solution. Enabling the solution of this general problem is therefore beyond the scope of this paper. Nevertheless, we describe in the sequel an efficient algorithm to tackle resilience by compromise by directly solving~\eqref{P:parametrized} for the resilient equilibrium~\eqref{E:resilient}. \section{A MODIFIED ARROW-HURWICZ ALGORITHM} \label{S:algorithm} \begin{figure}[tb] \centering \includesvg[width=\columnwidth]{fig1} \vspace{-14pt} \caption{Robust and resilient solution to the shepherd problem: (a)~Shepherd plans; (b)~distribution of maximum distance between shepherd and sheep.} \label{F:shepherd} \vspace{-10pt} \end{figure} \begin{figure*} \centering \includesvg{fig2} \vspace{-14pt} \caption{Robust and resilient controllers for the quadrotor navigation problem. The radii of the markers are proportional to the actuation strength.} \label{F:navigation} \vspace{-10pt} \end{figure*} In view of Proposition~\ref{T:equivalent_problem}, solving the resilient control problem~\eqref{P:parametrized} subject to the equilibrium~\eqref{E:resilient} reduces to obtaining a solution of~\eqref{P:resilient}. Given its smooth, convex nature, this can be done using any of a myriad of methods~\cite{Bertsekas15c}. One approach that is particularly promising is to use a modified primal-dual algorithm that takes into account the results in Proposition~\ref{T:slacks}. Explicitly, consider the classical Arrow-Hurwicz algorithm for solving~\eqref{P:parametrized}~\cite{Arrow58s}. This method seeks points that satisfy the KKT conditions~[Proposition~\ref{T:zdg}(ii)] by updating the primal and dual variables using gradients of the Lagrangian~\eqref{E:lagrangian}.
Explicitly, $\ensuremath{\bm{z}}$ is updated by \emph{descending} along the negative gradient of the Lagrangian, i.e., \begin{subequations}\label{E:arrow} \begin{equation}\label{E:primal_arrow} \begin{aligned} \dot{\ensuremath{\bm{z}}} &= -\nabla_{\ensuremath{\bm{z}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}}) \\ {}&= - \nabla_{\ensuremath{\bm{z}}} J(\ensuremath{\bm{z}}) - \int_{\ensuremath{\mathcal{K}}} \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})^T \nabla_{\ensuremath{\bm{z}}} \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) d\ensuremath{\bm{\xi}} \text{,} \end{aligned} \end{equation} and the dual variables~$\ensuremath{\bm{\lambda}}$ are updated by \emph{ascending} along the gradient of the Lagrangian~$\nabla_{\ensuremath{\bm{\lambda}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}})$ using the projected dynamics \begin{equation}\label{E:dual_arrow} \begin{aligned} \dot{\ensuremath{\bm{\lambda}}}(\ensuremath{\bm{\xi}}) &= \Pi_+\big[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}), \nabla_{\ensuremath{\bm{\lambda}}} \ensuremath{\mathcal{L}}(\ensuremath{\bm{z}},\ensuremath{\bm{\lambda}},\ensuremath{\bm{s}}) \vert_{\ensuremath{\bm{\xi}}} \big] \\ {}&= \Pi_+\big[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}),\, \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) - \ensuremath{\bm{s}}(\ensuremath{\bm{\xi}}) \big] \text{.} \end{aligned} \end{equation} \end{subequations} The projection~$\Pi_+$ is introduced to ensure that the Lagrange multipliers remain non-negative and is defined as \begin{equation}\label{E:projection} \Pi_+(\ensuremath{\bm{x}},\, \ensuremath{\bm{v}}) = \lim_{a \to 0} \frac{[\ensuremath{\bm{x}} + a \ensuremath{\bm{v}}]_+ - \ensuremath{\bm{x}}}{a} \text{,} \end{equation} where~$[\ensuremath{\bm{x}}]_+ = \argmin_{\ensuremath{\bm{y}} \in \bbR_+^m} \norm{\ensuremath{\bm{y}} - \ensuremath{\bm{x}}}$ is the projection onto the non-negative orthant~\cite{Nagurney12p}. The main drawback of~\eqref{E:arrow} is that it solves~\eqref{P:parametrized} for a fixed slack~$\ensuremath{\bm{s}}$, while the desired compromise~$\ensuremath{\bm{s}}^\star$ in~\eqref{E:resilient} is not known \emph{a priori}. To overcome this limitation, we can use Proposition~\ref{T:slacks} and replace~\eqref{E:dual_arrow} by \begin{equation}\label{E:alt_dual_arrow} \dot{\ensuremath{\bm{\lambda}}}(\ensuremath{\bm{\xi}}) = \Pi_+\left[ \ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}}),\, \ensuremath{\bm{g}}(\ensuremath{\bm{z}},\ensuremath{\bm{\xi}}) - \left( \nabla h \right)^{-1}\left( \frac{\ensuremath{\bm{\lambda}}(\ensuremath{\bm{\xi}})}{f_{\ensuremath{\bm{\Xi}}}(\ensuremath{\bm{\xi}})} \right) \right] \text{.} \end{equation} The dynamics~\eqref{E:primal_arrow}--\eqref{E:alt_dual_arrow} can be shown to converge to a point that satisfies the KKT conditions in Proposition~\ref{T:zdg}(ii) as well as the equilibrium~\eqref{E:resilient} using an argument similar to~\cite{Cherukuri16a} that relies on classical results on projected dynamical systems~\cite[Thm.~2.5]{Nagurney12p} and the invariance principle for Carath\'{e}odory systems~\cite[Prop.~3]{Bacciotti06n}.
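As an illustration, the sketch below applies a forward-Euler discretization of~\eqref{E:primal_arrow} and~\eqref{E:alt_dual_arrow} to a one-dimensional toy instance; the objective, constraint, and parameter values are hypothetical choices made only so the snippet is self-contained.
\begin{verbatim}
import numpy as np

# Toy instance: scalar decision z with J(z) = z^2/2, one constraint
# g(z, xi) = xi - z <= s(xi), disruptions xi uniform on [0, 1], and the
# quadratic cost h(s) = gamma*s^2/2, so (grad h)^{-1}(y) = y/gamma.
gamma, eta, T = 4.0, 1e-2, 20000
xis = np.linspace(0.0, 1.0, 101)   # grid discretizing the set K
dxi = xis[1] - xis[0]
f_xi = np.ones_like(xis)           # uniform density of Xi
z, lam = 0.0, np.zeros_like(xis)

for _ in range(T):
    # primal descent on the Lagrangian; here dg/dz = -1 for every xi
    z += eta * (-z + np.sum(lam) * dxi)
    # modified dual ascent with the slack given by the fixed point
    s = lam / (gamma * f_xi)
    lam = np.maximum(0.0, lam + eta * ((xis - z) - s))

print(z, s.max())   # resilient decision and largest slack
\end{verbatim}
For these choices the equilibrium can be computed by hand ($z = 0.5$, with the constraints for~$\xi > z$ relaxed linearly in~$\xi$), which the discretized iterates should approach.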
These dynamics thus simultaneously solve three problems by obtaining (i)~requirement violations~$\ensuremath{\bm{s}}^\star$ that satisfy~\eqref{E:resilient}, (ii)~the solution~$\ensuremath{\bm{z}}^\star(\ensuremath{\bm{s}}^\star)$ of~\eqref{P:parametrized} for the violations~$\ensuremath{\bm{s}}^\star$, and~(iii)~dual variables~$\ensuremath{\bm{\lambda}}^\star(\ensuremath{\bm{s}}^\star)$ that solve~\eqref{P:dual_parametrized} for~$\ensuremath{\bm{s}}^\star$. Due to space constraints, details of this proof are left for a future version of this work. \section{NUMERICAL EXPERIMENTS} \label{S:sims} In this section, we illustrate the use of resilient optimal control in two applications: \emph{the shepherd problem}, in which we plan a configuration in order to surveil targets~(Section~\ref{S:shepherd}), and \emph{navigation in partially known environments}, in which a quadrotor must follow way-points to a target that is behind an obstruction of unknown mass~(Section~\ref{S:navigation}). We also illustrate an online extension of our resilience framework in which a quadrotor adapts to wind gusts~(Section~\ref{S:online}). Due to space constraints, we only provide brief problem descriptions in the sequel. Details can be found in~\cite{extended}. \subsection{The shepherd problem} \label{S:shepherd} We begin by illustrating the differences between robustness and resilience in a static surveillance planning problem. Suppose an agent~(\emph{the shepherd}) must position itself to supervise a set of targets~(\emph{the sheep}). Without prior knowledge of their position, the shepherd assumes the sheep are distributed uniformly at random within a perimeter of radius~$R$. The surveillance radius~$r$ of the shepherd is enough to cover only~$90\%$ of that area. The shepherd also seeks to minimize its displacement from its home situated at~$\ensuremath{\bm{x}}^o$. If we let~$\ensuremath{\bm{\Xi}}_i$ denote the position of the~$i$-th sheep and~$\ensuremath{\mathcal{K}}$ be the ball described by the radius-$R$ perimeter, the robust formulation~\eqref{P:robust} becomes \begin{prob} \minimize_{\ensuremath{\bm{x}}}& &&\norm{\ensuremath{\bm{x}} - \ensuremath{\bm{x}}^o}^2 \\ \subjectto& &&\Pr\left[ \norm{\ensuremath{\bm{x}}- \ensuremath{\bm{\Xi}}_i}^2 \leq r^2 \right] \geq 1-\delta \end{prob} and the resilient problem~\eqref{P:resilient} yields \begin{prob} \minimize_{\ensuremath{\bm{x}}}& &&\norm{\ensuremath{\bm{x}} - \ensuremath{\bm{x}}^o}^2 + \E \left[ \sum_{i = 1}^m s_i^2(\ensuremath{\bm{\Xi}}_i) \right] \\ \subjectto& &&\norm{\ensuremath{\bm{x}}- \ensuremath{\bm{\xi}}_i}^2 \leq r^2 + s_i(\ensuremath{\bm{\xi}}_i) \text{,} \quad \text{for all } \ensuremath{\bm{\xi}}_i \in \ensuremath{\mathcal{K}} \text{.} \end{prob} Fig.~\ref{F:shepherd} shows results for~$\delta = 0.2$. In order to meet the prescribed probability of failure, the robust solution moves away from its home only as much as necessary, leading to a plan that has a lower cost than the resilient one. The resilient solution, on the other hand, is willing to pay the extra cost to move to the center of the perimeter so that when a sheep steps out of its surveillance radius, it does not go too far~(Fig.~\ref{F:shepherd}b). This example illustrates the difference between robust and resilient planning. While the robust system saves on cost by meeting the specified failure probability only minimally, the resilient system takes into account the magnitude of the violations. Hence, it is willing to pay the extra cost in order to reduce future violations.
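As a complement to Fig.~\ref{F:shepherd}, the following Monte Carlo sketch shows one way the resilient shepherd problem could be solved numerically. It relies on the fact that, for the quadratic slack cost above, the optimal slack for a fixed plan~$\ensuremath{\bm{x}}$ is~$s_i(\ensuremath{\bm{\xi}}_i) = \max(0, \norm{\ensuremath{\bm{x}} - \ensuremath{\bm{\xi}}_i}^2 - r^2)$, which reduces the problem to an unconstrained stochastic program; all numerical values are hypothetical.
\begin{verbatim}
import numpy as np

# Sample-average gradient descent for the resilient shepherd problem.
rng = np.random.default_rng(0)
R, r, m = 1.0, 0.95, 10            # r^2 / R^2 ~ 0.9, i.e. 90% coverage
x_home = np.array([2.0, 0.0])      # hypothetical home position

def sample_sheep(n):               # sheep uniform in the radius-R disk
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = R * np.sqrt(rng.uniform(0.0, 1.0, n))
    return np.c_[rad * np.cos(phi), rad * np.sin(phi)]

x, eta = x_home.copy(), 1e-3
for _ in range(5000):
    sheep = sample_sheep(256)      # fresh samples of the sheep positions
    viol = np.maximum(0.0, np.sum((x - sheep)**2, axis=1) - r**2)
    grad = 2.0 * (x - x_home) \
         + m * 4.0 * np.mean(viol[:, None] * (x - sheep), axis=0)
    x -= eta * grad
print(x)   # the plan drifts from home toward the perimeter's center
\end{verbatim}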
\subsection{Way-point navigation in a partially known environment} \label{S:navigation} A quadrotor of mass~$m$ must plan control actions to navigate the hallway shown in Fig.~\ref{F:navigation} by going close to the way-points~(stars) at specific instants while remaining within a safe distance of the walls and limiting the maximum input thrust. Between the quadrotor and its target, however, there may exist an obstruction of \emph{a priori} unknown mass~(brown box). This box modifies the dynamics of the quadrotor in a predictable way depending on its mass, i.e., the quadrotor can push the box by applying additional thrust, but the magnitude of this thrust is not known beforehand. Since it is not possible to find a set of control actions that is feasible for all obstruction masses, we set~$\delta = 0.1$ for the robust controller. The resilient controller, on the other hand, is allowed to relax both the thrust limits and the terminal set. Hence, it can choose between actuating harder to push the box or deeming it too heavy and stopping before entering the room. Notice that while the robust plan reaches the terminal set for the light obstruction, it is unable to do so in the other two cases. This is to be expected given that it was not designed to do so. The resilient controller, however, displays a smoother degradation as the weight of the obstruction increases. Notice that it chooses which requirement to violate by compromising between their satisfaction and the control objective~(LQR). While it violates the maximum thrust constraint enough to push the medium box almost into the terminal set, it deems the heavy box to not be worth the effort and relaxes the terminal set instead. This leads to a more graceful behavior degradation than the one induced by the robust controller. Moreover, observe that the resilient controller also uses additional actuation in the beginning to more quickly approach the wall and reduce the distance traveled. This is an example of the ``unnecessary yet beneficial'' requirement violations that resilient control may perform in order to improve the control performance. Naturally, if thrust requirements are imposed by hardware limitations, then the robust solution is the only practical one. \begin{figure} \centering \includesvg{fig3} \caption{Online resilient control using MPC: quadrotor under wind disruption.} \label{F:online} \vspace{-12pt} \end{figure} \subsection{Online extension: adapting to wind gusts} \label{S:online} The previous examples illustrated the behavior of the resilient optimal control problem formulation introduced in this paper. Another aspect of resilience, beyond planning to mitigate disruptions, is the ability to adapt to disturbances as they occur. This can be achieved by using the resilient optimal control problem in an MPC fashion. We show the result of doing so in Fig.~\ref{F:online}. Here, a quadrotor navigates towards its target~(grey zone) by planning over a~$10$-step horizon, but executing only the first control action. During this execution, the quadrotor may be hit by an unpredictable wind gust that pushes it towards a wall~(left of the diagram). The quadrotor takes the wind gust it suffered into account in its future plans by assuming that the wind will continue to blow at that speed. The resilient controller is allowed to modify the safety set and maximum thrust requirements. Behaviors similar to those in Fig.~\ref{F:navigation} can be observed. The resilient controller chooses to violate the thrust constraint in order to pick up speed initially.
It does so because the price of using extra actuation is compensated by the improvement in control performance~(LQR). When a gust of wind pushes the quadrotor close to the left boundary of the safety set, it again violates the actuation constraints to stay within the safe region. It does so in full knowledge that it must now overshoot the safety region on the right. Notice that the resilient behavior of the quadrotor is adaptive: as disruptions occur, the controller plans which requirements should be violated to remain operational. Without these violations, such intense wind gusts would crash the quadrotor into the wall. \section{CONCLUSION AND FUTURE WORK} We defined resilient control by embedding control problems with the ability to violate requirements and proposed a method to automatically design these violations by compromising between the control objective and a constraint violation cost. We showed that such a compromise explicitly minimizes changes to the original control problem and that, for properly selected costs, robust behaviors can be induced. These results are the first steps toward a resilient control solution capable of adapting to disruptions online. Such behavior can be achieved by combining~\eqref{P:resilient} and MPC as shown in Section~\ref{S:online}. Future work involves analyzing the stability of such solutions and leveraging system level synthesis techniques~\cite{Anderson19s} to directly design resilient controllers. \bibliographystyle{aux_files/IEEEbib}
\section{Introduction} Real-world complex systems can often be described by interconnected structures known as multilayer networks \cite{de2013mathematical,kivela2014multilayer,boccaletti2014structure, de2015ranking}. Transportation, social or economic networks, to name just a few general examples, can have various types of connections; see Fig.~\ref{fig:aarhus_struct} for an example depiction of such a system. Each such type of connection in a network can be represented as a specific sub-system or sub-network. Railway, flight and bus connections can each be described with a network, but to have a full description of the transportation system, they need to be joined and described with a multilayer network. In reality, obtaining full information which would allow one to create a complete multilayer network is rarely possible. Moreover, in some cases, even the knowledge about all existing layers is limited. As a result, researchers often have to deal with uncertainty arising from partial information about connectivity in the analysed system. This specifically concerns one of the fundamental problems in network science, spreading processes on networks \cite{barrat2008dynamical,pastor2015epidemic,de2016physics,de2018fundamentals, paluch2020source, gomez2013diffusion, sole2013spectral}, but is also of significance for opinion dynamics \cite{chmiel2015phase, chmiel2017tricriticality, chmiel2020veritable, gajewski2021bifurcations}. In the following article we focus on the problem of detecting hidden layers based on observations of dynamical processes on graphs. By a dynamical process we mean a realisation of a spreading process described by a model of the susceptible-infected-recovered (SIR) type. Note that such models can describe not only infections, but also the spreading of information, opinions or failures. Furthermore, we assume that the observation of such a process is limited to the states of the nodes, without knowledge of the actual spreading path. In the rest of the text we will refer to a single spreading realisation as a cascade. We also propose and explore methods for finding missing connections of different types. Finally, we analyse potential limitations and difficulties of these methods, as well as beneficial settings, i.e. cases in which solving the problem is easier. The problem of detecting hidden layers has appeared recently in the literature in a non-Markovian setting \cite{lacasa2018multiplex} and on quantum graphs \cite{gajewski2020discovering}. It is also closely related to the problem of network reconstruction, which was extensively analysed in the past \cite{gomez2012inferring,abrahao2013trace,pouget2015inferring,braunstein2019network, netrapalli2012learning}, also in a partial observation setting \cite{lokhov2016reconstructing,woo2020iterative,wilinski2020scalable}. Our setting is somewhat simpler in some regards but at the same time still fairly realistic, and thus should still be viable for real-world problems. We feel that our simplifications are justified, since the general problem was proved to have fundamental limits \cite{abrahao2013trace} and previous papers often approached only limited cases anyway, such as very short cascades \cite{gripon2013reconstructing}. This is not to say that successful approximations are not possible \cite{gomez2012inferring}; however, the goal of this paper is to investigate the challenges associated with detecting hidden layers in interconnected networks in the context of spreading processes.
The reader should not confuse the problem of finding hidden layers based on observed spreading with the extensively analysed branch of network science called \textit{link prediction}, where hidden connections are estimated using only the network structure. A seminal paper in this direction is \cite{guimera2009missing}. An extension, including multilayer networks, can be found in \cite{de2017community}. The paper is structured as follows: in the next section we describe all the methods used in the analysis, starting with tools which allow for detecting hidden layers and then proposing methods for identifying unobserved connections. In the third section we analyse both synthetic and real world networks and show how our methods work under different structural circumstances. Finally, we discuss all the results and present our conclusions in the last section. \begin{figure*}[tb] \centering \includegraphics[width=0.95\textwidth]{cs-aarhus.png} \caption{Visualisation of the multilayer network representing the Aarhus data \cite{magnani2013combinatorial}. We utilise this network as a real world example of a possible application of our methods. It is quite natural to imagine we know only one of the layers presented here and would like to infer the existence and possibly the structure of the others. Note that the co-author and leisure layers have a lower density of connections in comparison with the others. Thus we do not use them as visible layers in our analyses, as that would make the task of detecting hidden connections potentially much easier. } \label{fig:aarhus_struct} \end{figure*} \section{Methods} In our analysis we focus on one of the simplest spreading models -- the Susceptible Infected (SI) model \cite{hurley2006basic}. We use it in a network version where the dynamics can be described as follows: for each node $i$, which is in the infected state $I$ at time $t$, each of its neighbors $j$ (in susceptible state $S$) will become infected at time $t+1$ with probability $\beta$. This model was used because of its simplicity on one hand and the mechanism of multiple infection opportunities (in comparison with, e.g., an Independent Cascade model) on the other. The latter makes the combinatorial analysis much more difficult, as we will see in further sections. The former is reflected by an easy-to-derive likelihood of any observed spreading, including a multilayer scenario, assuming that full knowledge about connectivity is available. We are well aware that for specific applications other, often more complex, models may be better suited for the problem. Our approach, however, can effectively be used for other spreading descriptions. As an example, we provide the full derivation and results also for the Independent Cascade model (see Appendix \ref{sec:app_ic}). Note that SI and IC are limiting versions of the SIR model, which means that the presented results give limits for the whole family of SIR models. Let us denote a multilayer graph with $G$ and let the probability of infection (spreading) on each layer $j$ be equal to $\beta_j$. We will refer to a single spreading dynamics as a cascade and denote it with $\Sigma^c$. A single cascade can be described by a set of infection times $\tau_i^c$ for each node $i$. We will also assume that a cascade ends at time $t_{max}$ and if a given node was not infected at all, its infection time will be equal to $t_{max}$. In other words, if node $i$'s activation time is equal to $t_{max}$, it was either activated at $t_{max}$ or later -- this will become clearer once the likelihood is derived. The set of all available cascades will be denoted with $\Sigma$.
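For concreteness, a minimal Python sketch of the multilayer SI dynamics just described could read as follows; layers are stored as lists of neighbor sets, and the toy graph and parameter values are illustrative only.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def si_cascade(layers, betas, n_nodes, source, t_max):
    """Multilayer SI: a node infected at t infects each susceptible
    neighbor (in layer j) at t+1 with probability betas[j]."""
    tau = np.full(n_nodes, t_max)   # t_max encodes 'not infected by t_max - 1'
    tau[source] = 0
    for t in range(t_max - 1):
        infected = np.flatnonzero(tau <= t)
        for adj, beta in zip(layers, betas):
            for i in infected:
                for k in adj[i]:
                    if tau[k] > t + 1 and rng.random() < beta:
                        tau[k] = t + 1
    return tau

# toy usage: a 6-cycle as the visible layer plus one hidden chord
ring = [{(i - 1) % 6, (i + 1) % 6} for i in range(6)]
chord = [set() for _ in range(6)]
chord[0].add(3); chord[3].add(0)
print(si_cascade([ring, chord], [0.5, 0.7], 6, source=0, t_max=5))
\end{verbatim}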
\subsection{Cascade likelihood for the Susceptible-Infected model} As mentioned before, the likelihood of a given set of cascades, for a specific and fully known multilayer network, can be derived similarly to \cite{lokhov2016reconstructing}. In short, the probability of a given data-set can be written as a product over cascades, which are independent, and nodes, because the problem can be considered locally: \begin{equation} P(\Sigma | G, \{ \beta_j \}) = \prod_{i \in V} \prod_{c \in C} P_i(\tau_i^c | \Sigma^c, G, \{ \beta_j \}), \label{eq:likelihood} \end{equation} where $V$ is the set of nodes in graph $G$. Note that we can use our knowledge about the activation times of each node and compute the probability of node $i$ not being activated by its neighbor $k$ (from layer $j$) before $\tau_i^c - 1$ under cascade $c$, which is equal to $\prod_{t=0}^{\tau_i^c - 2} (1 - \beta_j \mathbf{1}_{\tau_k^c \leq t})$, where $\mathbf{1}_{x}$ is the indicator function (it is equal to $1$ when $x$ is true). Similarly, the probability of the same node not being activated by its neighbor exactly at time $\tau_i^c - 1$ is equal to $(1 - \beta_j \mathbf{1}_{\tau_k^c \leq \tau_i^c - 1})$. Keep in mind that $\tau_i^c = t_{max}$ is equivalent to node $i$ not being activated at all. Then each element of the product in Eq. (\ref{eq:likelihood}), being the probability of node activation under a specific cascade, can be computed as follows: \begin{equation} \begin{split} &P_i(\tau_i^c | \Sigma^c, G, \{ \beta_j \}) = \Bigg(\prod_{t=0}^{\tau_i^c - 2} \prod_j \prod_{k \in \partial_j i} (1 - \beta_j \mathbf{1}_{\tau_k^c \leq t}) \Bigg) \\ &\times \Bigg( 1 - \prod_j \prod_{k \in \partial_j i} (1 - \beta_j \mathbf{1}_{\tau_k^c \leq \tau_i^c - 1}) \mathbf{1}_{\tau_i^c < t_{max}} \Bigg), \end{split} \label{eq:local_likelihood} \end{equation} where $\partial_j i$ is the set of neighbors of node $i$ in layer $j$. If we know one or more layers of the network, we can use the above equation to compute the probability of the observed dynamics, assuming that there are no other propagation channels. As seen from Eq. (\ref{eq:likelihood}), adding more cascades reduces the likelihood. Nevertheless, a larger number of cascades makes it easier to find hidden edges, as shown in Fig.~\ref{fig:scal}.
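The likelihood~(\ref{eq:likelihood})--(\ref{eq:local_likelihood}) is straightforward to evaluate numerically. A minimal sketch, working in log-space so that forbidden activations show up as $-\infty$ and treating the cascade source as given, could read:
\begin{verbatim}
import numpy as np

def cascade_log_likelihood(layers, betas, tau, t_max):
    """Log of the cascade likelihood (eq:likelihood) for one cascade;
    the source (tau = 0) is conditioned on and contributes no factor."""
    ll = 0.0
    for i in range(len(tau)):
        if tau[i] == 0:
            continue
        # survival factors for t = 0, ..., tau_i - 2
        for t in range(tau[i] - 1):
            for adj, beta in zip(layers, betas):
                ll += sum(np.log1p(-beta)
                          for k in adj[i] if tau[k] <= t)
        if tau[i] < t_max:
            # probability of activation exactly at time tau_i
            p_surv = 1.0
            for adj, beta in zip(layers, betas):
                for k in adj[i]:
                    if tau[k] <= tau[i] - 1:
                        p_surv *= 1.0 - beta
            ll += np.log(1.0 - p_surv) if p_surv < 1.0 else -np.inf
        # nodes with tau_i = t_max contribute survival factors only
    return ll
\end{verbatim}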
\subsection{Detecting the existence of a hidden layer} An unobserved layer of propagation may lead to a situation where nodes become active despite not interacting, on the observed network, with any other active node. Such a situation leads to the probability described by Eq. (\ref{eq:likelihood}) being equal to zero. In such a case, one can be sure that there exist spreading paths that are not present in the observed layers. In reality, different layers share certain similarities and may be strongly correlated, which decreases the probability of observing a forbidden activation -- in other words, it is less likely that an activation occurring through a hidden connection will have a zero probability. Real-life social networks are a good example of this because of a significant overlap between connections on different social media platforms. Moreover, social connectivity is characterised by high clustering \cite{white1976social}, which increases the probability of the observed neighborhood being active, even though the activation came through an unobserved edge. To address such a situation and still be able to evaluate whether a given set of cascades indicates the existence of an unobserved layer or not, we assume a single-layer model and utilise the Vysochanskij–Petunin (VP) inequality \cite{pukelsheim1994three} to evaluate whether a given cascade could be generated by such a model. We expect that cascades which involved spreading through unobserved edges have a significantly lower likelihood than those in the assumed null model. Assuming that the likelihood distribution is unimodal, we can quantify how distant the value of an observed cascade likelihood is from the value expected from the single-layer model using the VP inequality. It states that if $X$ is a random variable with a unimodal distribution (mean $\mu$, finite and positive variance $\sigma^2$) and $\lambda > \sqrt{8/3}$, then: \[ \mathrm{Pr}(|X-\mu|\geq \lambda\sigma)\leq\frac{4}{9\lambda^2}, \] which after the normalisation $\tilde{X}=\frac{|X-\mu|}{\sigma}$ gives: \[ \mathrm{Pr}\left(\tilde{X}\geq \lambda\right)\leq\frac{4}{9\lambda^2}. \] Inserting $\lambda=\tilde{x}$: \[ \mathrm{Pr}\left(\tilde{X}\geq \tilde{x}\right)\leq\frac{4}{9\tilde{x}^2}, \] which means that an upper bound on the probability of obtaining a result $\tilde{x}$ or greater from a normalised unimodal distribution describing $\tilde{X}$ is: \begin{equation} p(\tilde{x}) = \min\left(\frac{4}{9\tilde{x}^2},1\right) \label{eq:p_max} \end{equation} for $\tilde{x}>\sqrt{8/3}$. In our case, $x$ is the likelihood of a cascade given by Eq. (\ref{eq:likelihood}) and $X$ is a random variable describing the likelihood values for an assumed single-layer model. After performing simulations, we calculate $\mu$ and $\sigma$, and normalise the $x$ value, obtaining $\tilde{x}$. In our experience, the distributions which describe $X$ in the single-layer case are unimodal and $\tilde{x}>\sqrt{8/3}$ for a sufficient observation time ($t_{max}$), so the assumptions of the VP inequality hold. Since we are the ones producing these distributions, it is in any case trivial to inspect whether this holds before proceeding with the rest of our method. In the event that it does not, an alternative theorem analogous to the VP inequality (e.g. the Chebyshev inequality \cite{saw1984chebyshev}) can be introduced, or even more general approaches such as bootstrapping \cite{efron1994introduction} can be used to determine the confidence level of the observation. In this paper, we use $p(\tilde{x})$ to measure how surprising given cascades are, assuming they were generated by an SI process with known $\beta$ simulated on a visible layer of the network. We validate our approach in the next section, using simulations and synthetic networks.
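A minimal sketch of this test is given below, where \texttt{ll\_null} stands for the log-likelihoods of cascades simulated on the visible layer (e.g., with the routines sketched above) and \texttt{ll\_observed} for the value computed from the observed cascade:
\begin{verbatim}
import numpy as np

def single_layer_p_value(ll_observed, ll_null):
    """VP upper bound (eq:p_max) on how probable the observed cascade
    log-likelihood is under the single-layer null distribution."""
    mu, sigma = np.mean(ll_null), np.std(ll_null)
    x_tilde = abs(ll_observed - mu) / sigma
    if x_tilde <= np.sqrt(8.0 / 3.0):
        return 1.0          # bound uninformative; nothing surprising
    return min(4.0 / (9.0 * x_tilde ** 2), 1.0)
\end{verbatim}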
\subsection{Detecting the hidden edges} In the previous section, we showed how one can discard the possibility that the observed cascades were generated by the given network. Once the existence of a hidden layer is established, the same data can be used to estimate the topology of the hidden layer(s). This requires assuming some topology (the given visible layer plus an estimated hidden layer) and a value of $\beta_{hidden}$ in order to simulate the process again and calculate new likelihoods. We will try to find the topology by finding cases of node activation that could not be explained by the assumed single-layer model. Then for each such activation we identify the set of potential hidden edges. The details of the procedure are as follows: \begin{enumerate} \item Let $\mathcal{J}^c(t)=\{i\in \mathcal{N}:\tau^c_i\leq t\}$ be the set of nodes that \textit{are infected} at simulation step $t$ of cascade $c$, where $\mathcal{N}$ represents the set of nodes in graph $G$ and $\tau^c_i$ is the activation time of node $i$ in cascade $c$. $\Delta\mathcal{J}^c(t)=\left\{i\in \mathcal{N}:\tau^c_i= t\right\}$ will be the set of nodes that \textit{became infected} at simulation step $t$ of cascade $c$. \item Using the above notation we can introduce the set of nodes that became infected at simulation step $t$ of cascade $c$ but were not a neighbor of any node infected at $t-1$: \[ \mathcal{U}^c(t)=\Delta\mathcal{J}^c(t) \setminus \bigcup_{m \in \mathcal{J}^c(t-1)}\partial m, \] where $\partial m$ is the known neighborhood of node $m$. \item If the likelihood of a cascade is zero, then there is at least one hidden edge in the set: \[\mathcal{E}(i,c)=\left\{(i,k):k \in \mathcal{J}^c(\tau^c_i-1), \,\,i \in \bigcup_{t=1}^{t_{max}}\mathcal{U}^c(t)\right\}.\] The likelihood of such an edge being the one that was activated to infect $i$ is unknown and difficult to find. However, we can say heuristically that said likelihood is: \[P^c(i,k) \sim (1-\beta_{hidden})^{|\tau^c_i-\tau^c_k|}.\] \item Then we must unify the candidates amongst the cascades. Namely, if an edge $(a, b)$ was detected in $c=1$ but not in $c=2$, we need to associate it with the likelihood of \textit{not being detected}, which is similarly non-trivial. By the same heuristic argument, whatever that likelihood is, it must also scale as $\sim (1-\beta_{hidden})^{|\tau^c_i-\tau^c_k|}$. \item Finally, for each edge $e$ we multiply its likelihoods, therefore obtaining a \textit{joint likelihood} $J$: \[ J(e) = \prod_c P^c(e). \] \item Edges that maximise $J$ are most likely the hidden edges we seek. \end{enumerate} In order to evaluate the quality of our approach, we will use two metrics: \textbf{Sensitivity} -- the ratio between true positives and all positives. In our case it is the fraction of hidden edges that were detected. As we simulate our systems many times, we take the mean of the sensitivity across all simulations. \textbf{$\alpha$ - Credible Set Size} ($\alpha$-CSS) -- a measure introduced in \cite{paluch2020optimizing}. It represents the number of candidates one must investigate in order to have an $\alpha$ level of certainty of finding the sought entity. In practice, one computes the rank, i.e., the position, of the entity one wishes to find in the list of candidates, ordered in descending order according to a given measure that said entity should maximise. This is repeated many times in order to get a distribution of that rank, and finally one takes the quantile $q=\alpha$ of that distribution, thus acquiring the $\alpha$-CSS value. In our case the measure is the \textit{joint likelihood} $J$ described in point 5.\ above, and there are multiple entities -- edges -- so we have taken the liberty of adapting said measure such that we take the \textit{highest recorded rank} amongst the hidden edges and follow the rest as usual. That is, we compute $J$ for the appropriate edges (see the procedure description, points 1.--6.\ above), order these edges by their $J(e)$ (highest to lowest), find the positions of the actual hidden edges (their ranks), take the highest recorded rank, repeat this $10^4$ times, and take the quantile $q=0.95$ of these ranks.
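A compact sketch of points 1.--6.\ is given below. It scores candidate edges by the logarithm of the joint likelihood $J$, folding the detection and non-detection factors of points 3.--4.\ into a single $(1-\beta_{hidden})^{|\tau^c_i-\tau^c_k|}$ term per cascade, as described above.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def score_hidden_edges(adj, cascades, beta_hidden, t_max):
    """Log joint likelihood log J(e) of candidate hidden edges;
    adj holds visible neighborhoods, cascades holds tau arrays."""
    logJ = defaultdict(float)
    for tau in cascades:
        for i in np.flatnonzero(tau < t_max):
            if tau[i] == 0:                    # seeds need no explanation
                continue
            # skip activations explained by the visible layer (i not in U^c)
            if any(tau[k] <= tau[i] - 1 for k in adj[i]):
                continue
            # candidate set E(i, c): every node infected before tau_i
            for k in np.flatnonzero(tau <= tau[i] - 1):
                e = (min(int(i), int(k)), max(int(i), int(k)))
                logJ[e] += abs(int(tau[i]) - int(tau[k])) \
                           * np.log1p(-beta_hidden)
    # highest joint likelihood first
    return sorted(logJ.items(), key=lambda kv: -kv[1])
\end{verbatim}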
\begin{figure}[!htb] \centering \includegraphics[width=0.47\textwidth]{null_model.pdf} \caption{The ratio of edges required to check in order to have 95\% certainty, according to the null model, of testing all hidden edges, as a function of the number of hidden edges. The plot is done for a network with $N=100$ nodes, but the shape of the curve scales with the size of the network.} \label{fig:null} \end{figure} A null model would naturally be random guessing. There are ${N\choose2}$ edges to check in a system with $N$ nodes. So, for instance, with $N=100$ that is ${100\choose2} = 4950$, in which case 95\% certainty of finding 1 hidden link requires checking $0.95 \times 4950 = 4703$ (rounded up) edges. In general, the number $r$ of links required to check in order to have $\alpha$ certainty, according to the null model, can be obtained from: \begin{equation} \frac{{r \choose k}}{{{N \choose 2} \choose k}} = \frac{r!}{\left(r-k\right)!}\frac{\left({N \choose 2}-k\right)!}{{N \choose 2}!} = \alpha, \label{eq:null} \end{equation} where $k$ is the number of hidden edges and $N$ is the number of nodes. Fig. \ref{fig:null} presents the normalised $\bar{r} = \frac{2r}{N(N-1)}$ as a function of $k$ in the case of $N=100$ nodes. As we show later on, our method requires substantially fewer edges to be checked. Since Eq. (\ref{eq:null}) needs to be solved numerically, we also derive an asymptotic approximation, which can be found in Appendix \ref{sec:app_null} and from which we can see that $\bar{r} \sim \sqrt[k]{\alpha}$. \section{Experiments} We use both real and synthetic data in our experiments. In the latter case we build networks that are realistic and non-trivial, in the sense that we do not want the occurrence of an activation not explained by the visible network to be likely. To achieve that, we need a way to control the correlation between different layers of the network -- where by correlation we mean the percentage of overlapping edges. Therefore we propose our own models for generating multilayer networks. As for the cascades, these are in both cases generated using the SI model dynamics. As an initial condition for each cascade we randomly pick a node and change its state into \textit{infected}. Each cascade is generated independently. \subsection{Synthetic Networks} In the first setting, we generate a two-layer network using the Barabási-Albert algorithm \cite{barabasi1999emergence}. There are two parameters which need to be selected -- $m_{hidden}$ and $m_{observed}$ -- representing the number of edges added at each step of the algorithm, for the hidden and observed layers respectively. The two layers are independent but both have a power-law degree distribution, which is believed to resemble real social networks \cite{clauset2009power} (although lately it is seen more as an idealised approximation \cite{broido2019scale}). Since a lack of correlation makes the problem of detecting hidden layers much easier, we also propose another setting. We take a square lattice as the first layer and then apply a rewiring procedure, similar to the one introduced by Watts and Strogatz \cite{watts1998collective}, to produce another layer. We start with a square lattice as an equivalent of real relationships (affected by distance), but we also explore a scale-free network as a starting layer. The correlation between the two layers is parameterised by $p$ -- the probability of each node being rewired to any random (other) node.
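A possible implementation of this construction, using \texttt{networkx} and one plausible reading of the rewiring step (each node, with probability $p$, has one of its incident edges re-attached to a uniformly random non-neighbor), is sketched below:
\begin{verbatim}
import random
import networkx as nx

def lattice_with_rewired_layer(side, p, seed=0):
    """Visible layer: side x side periodic lattice. Hidden layer: a copy
    in which each node is, with probability p, rewired by re-attaching
    one of its incident edges to a uniformly random non-neighbor."""
    rng = random.Random(seed)
    visible = nx.convert_node_labels_to_integers(
        nx.grid_2d_graph(side, side, periodic=True))
    hidden = visible.copy()
    nodes = list(hidden.nodes)
    for u in nodes:
        if rng.random() < p and hidden.degree(u) > 0:
            old = rng.choice(list(hidden.neighbors(u)))
            candidates = [v for v in nodes
                          if v != u and not hidden.has_edge(u, v)]
            hidden.remove_edge(u, old)
            hidden.add_edge(u, rng.choice(candidates))
    return visible, hidden
\end{verbatim}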
In all described settings we keep the spreading probability of the observed layer equal to $\beta_{observed} = 0.5$. For the hidden layer, this probability takes the values of $\beta_{hidden} = 0.3$ and $\beta_{hidden} = 0.7$. \begin{figure}[!htb] \centering \includegraphics[width=0.47\textwidth]{infinite_prcnts.pdf} \caption{The percentage of log-likelihoods resulting in $-\infty$ as a function of the probability of rewiring $p$. We investigate two different cases of networks: a square lattice and the Barabási-Albert network, for different lengths of cascades. The simulations were made for networks of size $N=100$, with periodic boundary conditions in the lattice case. The inset zooms in on the curves obtained for $t_{max} > 2$.} \label{fig:infinite} \end{figure} When it comes to detecting hidden layers, any two-layer network with independent layers results in a majority of cascades having likelihood equal to $0$. This is a result of many non-overlapping connections, which activate nodes in a way that would be impossible for the visible layer. Therefore the problem becomes trivial in such a setting. The only interesting case is the model with rewiring, as there the correlations between the layers can be large. Fig. \ref{fig:infinite} shows the percentage of cascades resulting in likelihood equal to zero as a function of the rewiring probability. The higher the rewiring probability, the more independent the networks are (with a rewiring probability of $1$ being the fully independent case). We test it for local networks (represented by a square lattice) and for networks with many long-range connections (as in the case of a scale-free Barabási-Albert network). As expected, the problem is easier for short cascades and quickly becomes more difficult as the length of the cascades grows. Note that even a very small rewiring probability results in a drastic change of the discussed percentage. In practice this means that detecting an unknown transmission channel is fairly simple with our approach. A much more challenging task, however, is to find the actual unknown connections. We shall focus on that in further experiments, but first let us discuss the case when there are no prohibited activations and ask whether we can still exploit the likelihood of the observed data. \begin{figure*}[!htb] \centering \includegraphics[width=0.48\textwidth]{lattice_p_hists.pdf} \hfill \includegraphics[width=0.465\textwidth]{ba_p_hists.pdf} \caption{Histograms of $p(\tilde{x})$ -- the probability that the system has only one layer (see Eq. (\ref{eq:p_max})) -- for various combinations of $t_{max}$ and $\beta_{hidden}$. Plots on the left are generated for the square lattice with rewiring, while the ones on the right are generated for the BA network with rewiring. Parameters for all the networks are as follows: $\beta_{observed}=0.5$, $N=100$, $p=0.01$ and $m=3$ (in the case of BA networks). Each histogram was made with $10^4$ realisations. (a) $t_{max}=5$, $\beta_{hidden}=0.3$, (b) $t_{max}=5$, $\beta_{hidden}=0.7$, (c) $t_{max}=10$, $\beta_{hidden}=0.3$, (d) $t_{max}=10$, $\beta_{hidden}=0.7$, (e) $t_{max}=5$, $\beta_{hidden}=0.3$, (f) $t_{max}=5$, $\beta_{hidden}=0.7$, (g) $t_{max}=10$, $\beta_{hidden}=0.3$, (h) $t_{max}=10$, $\beta_{hidden}=0.7$.} \label{fig:p_max} \end{figure*} If the probability of the known cascades is positive, we can compare it with the empirical distribution of cascades simulated on the observed network.
This allows us to use the Vysochanskij–Petunin inequality and decide whether the observed data was generated by a process run on a graph with an additional (hidden from us) layer. As seen in Fig. \ref{fig:p_max}, using a typical significance level of $0.05$ allows us to successfully reject the single-layer hypothesis in a significant number of cases (or even all of them). Low $t_{max}$ and $\beta_{hidden}$, especially in the case of local networks like the square lattice, decrease the effectiveness of the test, but apart from the extreme case (lattice with $t_{max}=5$ and $\beta_{hidden}=0.3$) our proposed approach is an efficient tool for detecting hidden layers. \begin{figure*}[!htb] \centering \includegraphics[width=0.48\textwidth]{rank_dist_lattice_ws.pdf} \hfill \includegraphics[width=0.47\textwidth]{rank_dist_ba_ws.pdf} \caption{Distribution of ranks of hidden edges with medians as vertical lines. Left: lattice with rewiring ($N=100$, $t_{max}=10$, $p=0.01$). Right: BA network with rewiring ($N=100$, $t_{max}=10$, $m=3$, $p=0.01$). Results obtained with $10^4$ realisations per scenario where each scenario had $10$ independent cascades from different sources. These results are for those realisations where all hidden edges were detected. Solid lines show a Gaussian kernel density estimate.} \label{fig:ranks} \end{figure*} Once we know that there is a hidden layer affecting the dynamics, we aim at finding its edges. Tables \ref{tab:lattice_sensitivity} and \ref{tab:ba_sensitivity} below show the results of applying our method to both lattice and Barabási-Albert networks with hidden layers produced by rewiring (with probability $p=0.01$). When comparing the two settings, one can observe a certain interplay between sensitivity and $\alpha$-CSS. For the lattice-based network the sensitivity is significantly higher than in the Barabási-Albert case, but at the same time the scale-free case is characterised by a much lower $\alpha$-CSS for both $\alpha=0.5$ and $\alpha=0.95$. In other words, it is easier to correctly identify hidden edges when we have a locally connected network (lattice), but at the same time a scale-free network requires a smaller candidate set to find all hidden edges (despite reaching a lower sensitivity level). Note that in both cases the observed 0.5-CSS and 0.95-CSS are significantly lower than for the null model, where, depending on the number of rewired links, they would be larger than 2475 and 4703 respectively (see Eq. (\ref{eq:null}) for $N=100$ and $k=1$ -- the higher the $k$, the more links need to be checked for the null model). The full distributions of ranks, from which the $\alpha$-CSS was computed, are shown for both networks in Fig. \ref{fig:ranks}. \begin{table}[!htb] \centering \begin{tabular}{p{0.08\textwidth}p{0.08\textwidth}p{0.092\textwidth}p{0.092\textwidth}p{0.092\textwidth}} $\beta_{hidden}$ & $\beta_{observed}$ & \textit{sensitivity} & \textit{0.5-CSS} & \textit{0.95-CSS} \\ \hline 0.3 & 0.5 & 0.81 & 108 & 322 \\ 0.7 & 0.5 & 0.85 & 162 & 422 \end{tabular} \caption{Sensitivity and $\alpha$-CSS for a square lattice with rewiring ($N=100$, $t_{max} = 10$, $p=0.01$).
Results obtained for $10^4$ realisations per scenario where each scenario had $10$ independent cascades with randomly selected sources.} \label{tab:lattice_sensitivity} \end{table} \begin{table}[!htb] \centering \begin{tabular}{p{0.08\textwidth}p{0.08\textwidth}p{0.092\textwidth}p{0.092\textwidth}p{0.092\textwidth}} $\beta_{hidden}$ & $\beta_{observed}$ & \textit{sensitivity} & \textit{0.5-CSS} & \textit{0.95-CSS} \\ \hline 0.3 & 0.5 & 0.53 & 19 & 87 \\ 0.7 & 0.5 & 0.69 & 30 & 116 \end{tabular} \caption{Sensitivity and $\alpha$-CSS for a Barabási-Albert network with rewiring ($N=100$, $t_{max}=10$, $m=3$, $p=0.01$). Results obtained for $10^4$ realisations per scenario where each scenario had $10$ independent cascades with randomly selected sources.} \label{tab:ba_sensitivity} \end{table} \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{ba_sens_10.pdf} \vfill \includegraphics[width=0.5\textwidth]{ba_sens_100.pdf} \caption{The sensitivity as a function of $m_{hidden}$ and $m_{observed}$ for a two-layer Barabási-Albert network with $\beta_{hidden}=0.7$, $\beta_{observed}=0.5$ and $t_{max}=10$ after $10$ (top) and $100$ (bottom) cascades. The results are averaged over 20 independent runs.} \label{fig:heat_sens} \end{figure} As already discussed, when the layers are not correlated it is easy to identify that there is a hidden spreading channel. Nevertheless, finding the actual unobserved links may still be challenging. Both Figs. \ref{fig:heat_sens} and \ref{fig:heat_css} show that the density of the observed network is an important factor. From the sensitivity perspective it is better to have a denser observed network. Unfortunately, the 0.95-CSS also grows with the density of known connections, making it more demanding to find all the connections. Additionally, although the effect is weaker, it is beneficial for both measures if the hidden layer is sparser. This aligns with intuition, since more hidden connections make the observed dynamics much more complex. Unsurprisingly, having more data about the cascades also makes the task easier. The actual dependence between the number of cascades and the sensitivity is shown in Fig. \ref{fig:scal}. For a relatively big BA network we need around 30-40 cascades to reach a fairly satisfactory sensitivity level of around $0.8$. Depending on the specifics of the problem, such amounts of data may be considered a lot (e.g. in epidemic spreading) or easily available (e.g. information spreading on social media). In the next subsection we will examine this scaling for real-life networks. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{ba_css_10.pdf} \vfill \includegraphics[width=0.5\textwidth]{ba_css_100.pdf} \caption{The 0.95-CSS as a function of $m_{hidden}$ and $m_{observed}$ for a two-layer Barabási-Albert network with $N=1000$, $\beta_{hidden}=0.7$, $\beta_{observed}=0.5$ and $t_{max}=10$ after $10$ (top) and $100$ (bottom) cascades. The results are from 20 independent runs.} \label{fig:heat_css} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.47\textwidth]{n_casc_scal.pdf} \caption{The sensitivity as a function of the number of cascades for a) a two-layer Barabási-Albert network with $m=4$ for both layers, $\beta_{hidden}=0.7$, $\beta_{observed}=0.5$, $N=1000$ and $t_{max}=10$ (red line); b) the Aarhus data with different layers as the observed network (\textit{lunch} -- blue line, \textit{facebook} -- yellow line and \textit{work} -- green line).
The results are averaged over 20 independent runs and the error bars represent one standard deviation.} \label{fig:scal} \end{figure} \subsection{Real World Networks} On top of synthetic networks, we also use real-world data to build a multilayer network and empirically test our methods. For that purpose we choose the data collected among employees of the Department of Computer Science at Aarhus University \cite{magnani2013combinatorial}. It is a multilayer network consisting of Facebook friendships, co-authorships, work, leisure (repeated leisure activities) and a lunch layer (regularly eating lunch together). Its full structure is presented in Fig. \ref{fig:aarhus_struct}, with each layer being shown as a separate network. The whole network has $61$ nodes and $620$ edges in total. The main results for the Aarhus data are shown in Tables \ref{tab:facebook_sensitivity}, \ref{tab:work_sensitivity} and \ref{tab:lunch_sensitivity}, which use the \textit{facebook}, \textit{work} and \textit{lunch} layers, respectively, as the visible part of the graph. We omitted the other two possible cases because of their low density of connections. We treat the remaining hidden layers as a single aggregated layer, since our detection method does not depend on the exact number of hidden layers. The sensitivity and CSS values are consistent with the synthetic results in the sense that they both grow with the density of the visible layer. It is also apparent that our approach far exceeds the performance of the null model. Independently of which layer is chosen as the visible one, random guessing would require us to check all possible ${61 \choose 2} = 1830$ links, see Eq.~(\ref{eq:null}) and Appendix \ref{sec:app_null}, with $k\in\{159, 160, 229\}$ for \textit{work}, \textit{lunch} and \textit{facebook} as the visible layer respectively. Our method, on the other hand, needs to check significantly fewer links than that. Here, $k$ is the number of unique edges in the whole graph ($353$) minus the number of edges in the visible layer. Note that in order to account for the fact that our method does not always find \textit{all} the links, i.e., sensitivity < 1, one can adjust Eq.~\eqref{eq:null} by multiplying $k$ by the expected sensitivity. That, however, barely changes the result, i.e., the null model then requires at best one or two edges fewer than checking all possible ones. Moreover, our method gets more successful the more cascades we can observe. A more detailed dependence between the sensitivity and the number of cascades is shown in Fig. \ref{fig:scal}, where different colors represent different observed layers. In Fig. \ref{fig:aarhus} we show the distributions of ranks with the \textit{work} layer as the visible network; when comparing them with the synthetic experiments, the two distributions for $\beta_{hidden}=0.3$ and $\beta_{hidden}=0.7$ are much more symmetric and separated. The distributions for the other two analysed visible layers are qualitatively similar, further supporting the merit of our approach (see Appendix \ref{sec:app_aarhus}). \begin{figure}[!htb] \centering \includegraphics[width=0.47\textwidth]{aarhus_work.pdf} \caption{Distribution of ranks of hidden edges, with medians as vertical lines, for the Aarhus data, with the \textit{work} layer as the observed network. Results obtained for 10 cascades with $t_{max}=10$, $\beta_{observed}=0.5$ and two values of $\beta_{hidden}$ -- 0.3 and 0.7. Results from $10^4$ simulations per $\beta_{hidden}$.
Solid lines show a Gaussian kernel density estimate.} \label{fig:aarhus} \end{figure} \begin{table}[!htb] \centering \begin{tabular}{p{0.08\textwidth}p{0.08\textwidth}p{0.092\textwidth}p{0.092\textwidth}p{0.092\textwidth}} $\beta_{hidden}$ & $\beta_{observed}$ & \textit{sensitivity} & \textit{0.5-CSS} & \textit{0.95-CSS} \\ \hline 0.3 & 0.5 & 0.92 & 1398 & 1437 \\ 0.7 & 0.5 & 0.91 & 1424 & 1475 \end{tabular} \caption{Sensitivity and $\alpha$-CSS for the Aarhus data, with the \textit{facebook} layer as the observed network. Results obtained for 10 cascades with $t_{max}=10$. Results from $10^4$ simulations per $\beta_{hidden}$.} \label{tab:facebook_sensitivity} \end{table} \begin{table}[!htb] \centering \begin{tabular}{p{0.08\textwidth}p{0.08\textwidth}p{0.092\textwidth}p{0.092\textwidth}p{0.092\textwidth}} $\beta_{hidden}$ & $\beta_{observed}$ & \textit{sensitivity} & \textit{0.5-CSS} & \textit{0.95-CSS} \\ \hline 0.3 & 0.5 & 0.42 & 387 & 515 \\ 0.7 & 0.5 & 0.58 & 583 & 729 \end{tabular} \caption{Sensitivity and $\alpha$-CSS for the Aarhus data, with the \textit{work} layer as the observed network. Results obtained for 10 cascades with $t_{max}=10$. Results from $10^4$ simulations per $\beta_{hidden}$.} \label{tab:work_sensitivity} \end{table} \begin{table}[!htb] \centering \begin{tabular}{p{0.08\textwidth}p{0.08\textwidth}p{0.092\textwidth}p{0.092\textwidth}p{0.092\textwidth}} $\beta_{hidden}$ & $\beta_{observed}$ & \textit{sensitivity} & \textit{0.5-CSS} & \textit{0.95-CSS} \\ \hline 0.3 & 0.5 & 0.71 & 732 & 838 \\ 0.7 & 0.5 & 0.78 & 841 & 966 \end{tabular} \caption{Sensitivity and $\alpha$-CSS for the Aarhus data, with the \textit{lunch} layer as the observed network. Results obtained for 10 cascades with $t_{max}=10$. Results from $10^4$ simulations per $\beta_{hidden}$.} \label{tab:lunch_sensitivity} \end{table} \section{Discussion} Spreading processes on networks are a valuable tool for describing real-life global diffusion processes, such as epidemics, information spreading, and cascading failures. These processes may have several spreading channels, and we rarely know, or are even aware of, all of them. It is therefore crucial to identify whether the observed spreading was in fact generated only by the observed network. Furthermore, should one confirm the existence of an unobserved spreading path, finding these hidden connections can be of the utmost importance. In this paper we focused on identifying both the existence and the structure of a hidden spreading layer by observing a diffusion process unfolding on a graph. We provide methods for i) determining whether a hidden layer exists and ii) estimating which links are present in that layer. Our approach is based on an exact formula for the likelihood of an observed cascade given knowledge of the system's topology. Using this likelihood, together with the fact that its distribution can be assumed to be unimodal, we established a practical and effective way of discerning the existence of a hidden layer. Furthermore, using a series of heuristics, we obtain an algorithm for estimating the joint likelihood of a given (hidden) edge taking part in the observed cascade, thereby providing a tool, vastly superior to random guessing, for assessing which nodes are most likely to exchange information via a channel we do not know of. In short, a typical situation where our approach could be used is one where we observe a single network and a spreading process described by a known model, and we suspect that there are hidden layers through which the spread may progress.
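As a rough illustration of the detection step, the sketch below flags a hidden layer when the log-likelihood of an observed cascade falls in the lower tail of the likelihood distribution generated by the observed graph alone. It is a minimal sketch, assuming a discrete-time SI model in which every infected node independently infects each susceptible neighbour with probability $\beta$ per time step; the likelihood formula and all function names here are illustrative assumptions, not the exact expressions used in the paper.
\begin{verbatim}
import math
import random
import networkx as nx

def cascade_loglik(G, times, beta, t_max):
    # Log-likelihood of one cascade (dict: node -> infection time) on the
    # observed graph G, under a discrete-time SI model where every infected
    # node infects each susceptible neighbour with probability beta per step.
    ll = 0.0
    for i in G.nodes():
        t_i = times.get(i, math.inf)      # inf = never infected
        if t_i == 0:                      # seed nodes carry no term
            continue
        for s in range(1, int(min(t_i, t_max)) + 1):
            # number of infected neighbours of i at the start of step s
            k = sum(1 for j in G.neighbors(i)
                    if times.get(j, math.inf) < s)
            if s < t_i:
                ll += k * math.log(1.0 - beta)           # i escaped step s
            elif k > 0:
                ll += math.log(1.0 - (1.0 - beta) ** k)  # infected at step s
            else:
                return -math.inf  # impossible without additional edges
    return ll

def simulate_cascade(G, beta, t_max, source):
    # One synthetic cascade on G alone (the null hypothesis).
    times = {source: 0}
    for s in range(1, t_max + 1):
        for u in list(times):             # snapshot: no same-step chains
            for v in G.neighbors(u):
                if v not in times and random.random() < beta:
                    times[v] = s
    return times

def hidden_layer_suspected(G, observed, beta, t_max, n_sim=1000, alpha=0.05):
    # Flag a hidden layer if the observed cascade is a lower-tail outlier
    # of the likelihood distribution generated by G alone.
    nodes = list(G.nodes())
    null = sorted(cascade_loglik(G, simulate_cascade(G, beta, t_max,
                                                     random.choice(nodes)),
                                 beta, t_max)
                  for _ in range(n_sim))
    return cascade_loglik(G, observed, beta, t_max) < null[int(alpha * n_sim)]
\end{verbatim}
In the same spirit, the second task could be sketched by scoring each candidate non-edge by how much its addition to the observed graph increases the cascade log-likelihood, and ranking the candidates accordingly; this mirrors, in simplified form, the rank distributions reported above.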
Note that the observed network can already have multiple layers, and the hidden part can also consist of more than one layer. Finally, one could also use our method in a setting where only one layer exists, but at the same time some links are not observed. Data from synthetic and empirical networks alike confirm that uncovering the hidden spreading channel is a relatively simple task with our approach, especially when the layers are uncorrelated. It is, however, more difficult to identify specific hidden connections. Despite the general similarities, there are some quantitative differences between the results obtained with synthetic and real data. One of the most significant differences is how $\beta_{hidden}$ relates to the distribution of ranks of hidden edges, which influences the difficulty of reconstructing the hidden connections. This effect is much stronger in real-world networks than in synthetic ones. It can, however, be explained by the difference in density between the hidden and observed networks. In the corresponding plots for synthetic data (see Fig. \ref{fig:ranks}) both layers have the same density. Here the hidden layer is denser (it is a sum of four hidden layers), and so changing the hidden spreading probability affects the majority of connections. An important factor in being able to successfully recover the hidden connections turns out to be the density of both the hidden and the observed transmission layers. Specifically, we observe that the denser the hidden layer, the harder it is to find the exact connections. An interesting interplay takes place when it comes to the density of the observed layer. On the one hand, the sensitivity decreases with the density of the observed layer; on the other hand, the $\alpha$-CSS also decreases with the density. This observation, confirmed by both synthetic and real data, means that as the number of connections on a visible layer increases, we are able to identify fewer hidden edges on average, but we need to take into account a smaller set of potential edges in order to find all of the hidden connections. It should be pointed out that we only focus on the hidden connections which do not overlap with the observed ones. This means that for correlated layers there might be only a few unknown connections, whereas the overlapping edges also influence the dynamics. Focusing on the more general picture and including the overlapping connections is an interesting subject for future research. Another research direction would be to focus on further improving the hidden-connection identification algorithms. These improvements should include both the effectiveness and the scalability of the proposed methods. The latter is especially important, since real-world networks are often quite substantial in size. From the perspective of empirical data it would also be useful to have a way of handling a scenario where different layers have different values of $\beta$, which may or may not be known. Finally, more radical generalisations, such as including temporal networks, could also prove to be interesting research problems. While we hope to address some of the above topics in the near future, we feel that the methods presented here already provide effective and practical tools for real-world applications. \acknowledgements{Ł.G.G and J.C. were supported by National Science Centre, Poland Grant No. 2015/19/B/ST6/02612. M.W.
acknowledges support from the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20200121ER and 20210529CR.} \bibliographystyle{unsrt}
\section{Introduction} \subsection{One-variable Hankel operators} Let $\mathbb{N}_0=\{0,1,2,3,\dots\}$. Given a complex valued sequence $a=\{a(j)\}_{j\in\mathbb{N}_0}$, a Hankel operator (matrix) $H_a$ on $\ell^2(\mathbb{N}_0)$ is formally defined by \begin{equation*} \big(H_ax\big)(i)=\sum_{j\in\mathbb{N}_0}a(i+j)x(j), \ \forall i\in\mathbb{N}_0, \ \forall x=\{x(j)\}_{j\in\mathbb{N}_0}\in\ell^2(\mathbb{N}_0). \end{equation*} We will call the sequence $a$ the parameter sequence. Nehari's theorem \cite[Theorem 1.1]{peller2012hankel} gives a necessary and sufficient condition for $H_a$ to be bounded on $\ell^2(\mathbb{N}_0)$. A simple sufficient condition is given by $a(j)=O(j^{-1})$, as $j\to+\infty$. A sufficient condition for compactness is $a(j)=o(j^{-1})$, as $j\to+\infty$. Note that these two conditions are also necessary in the case of positive Hankel operators \cite[Theorems 3.1, 3.2]{widom1966hankel}. Let $\alpha>0$ and consider the Hankel operator $H_a$, where \begin{equation*} a(j)=(j+1)^{-\alpha}, \ \forall j\in\mathbb{N}_0. \end{equation*} For $\alpha\in(0,1)$, $H_a$ is not bounded. When $\alpha=1$, $H_a$ is bounded but not compact. In this case, $H_a$ is known as Hilbert's matrix. Finally, for $\alpha>1$, $a(j)=o(j^{-1})$, as $j\to+\infty$, and so $H_a$ is bounded and compact. From this discussion, it is inferred that the exponent $\alpha=1$ is the boundedness-compactness threshold. In \cite{widom1966hankel} (see the example after Theorem 3.3) it is proved that the eigenvalue asymptotics of the Hankel operators with parameter sequence $a(j)=(j+1)^{-\alpha}$, where $\alpha>1$, are described by \begin{equation*} \lambda_{n}(H_a)=\exp(-\pi \sqrt{2\alpha n}+o(\sqrt{n})), \ n\to+\infty. \end{equation*} In \cite{pushnitski2015asymptotic} A. Pushnitski and D. Yafaev studied a whole class of Hankel operators that lies between the cases $\alpha=1$ and $\alpha>1$. This was achieved by considering parameter sequences $a=\{a(j)\}_{j\in\mathbb{N}_0}$ of the following type \begin{equation*} a(j)=j^{-1}(\log j)^{-\gamma}, \ \forall j\geq 2, \end{equation*} where $\gamma>0$. They concluded that if $\{\lambda_n^+(H_a)\}_{n\in\mathbb{N}}$ is the non-increasing sequence of positive eigenvalues of $H_a$, and $\lambda_n^{-}(H_a)=\lambda_n^+(-H_a)$, then the eigenvalues of the corresponding Hankel operator $H_a$ obey the following asymptotic law \begin{equation}\label{eq0} \lambda_n^{+}(H_a)=C_\gamma n^{-\gamma}+o(n^{-\gamma}) \ \ \text{and } \ \lambda_n^-(H_a)=o(n^{-\gamma}), \ n\to+\infty, \end{equation} where \begin{equation}\label{17} C_\gamma=\left[\frac{1}{2\pi}\int_\mathbb{R}\left(\frac{\pi}{\cosh(\pi x)}\right)^{\frac{1}{\gamma}}\dd{x}\right]^\gamma=2^{-\gamma}\pi^{1-2\gamma}B\left(\frac{1}{2\gamma},\frac{1}{2}\right)^\gamma; \end{equation} here $B(\cdot,\cdot)$ is the Beta function. \subsection{Multi-variable Hankel operators} The purpose of this section is to introduce the multi-variable Hankel operators and develop the $d$-variable analogue of the asymptotics (\ref{eq0}). From now on, we will denote all the multi-variable functions and their arguments by boldface letters. So, for $d\geq 2$ consider the set $\mathbb{N}_0^d=\{\mathbf{j}=(j_1,j_2,\dots,j_d): \ j_i\in\mathbb{N}_0, \ \text{for } i=1,2,\dots,d\}$ and the space $\ell^2(\mathbb{N}^d_0)$ of $d$-variable square-summable sequences $\mathbf{x}=\{\mathbf{x}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}_0^d}$.
Let $\mathbf{a}=\{\mathbf{a}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}_0^d}$ be a complex valued sequence and define, formally, the Hankel operator $H_\mathbf{a}$ on $\ell^2(\mathbb{N}_0^d)$ by \begin{equation*} \big(H_\mathbf{a}\mathbf{x}\big)(\mathbf{i}):=\sum_{\mathbf{j}\in\mathbb{N}_{0}^{d}}\mathbf{a}(\mathbf{i}+\mathbf{j})\mathbf{x}(\mathbf{j}), \ \forall \mathbf{i}\in\mathbb{N}_0^d, \ \forall \mathbf{x}=\{\mathbf{x}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}_0^d}\in\ell^2(\mathbb{N}_0^d). \end{equation*} We will call the sequence $\mathbf{a}$ the parameter sequence. To the best of the author's knowledge, no necessary and sufficient conditions for the boundedness or compactness of $H_\mathbf{a}$ are available at present. Heuristically, $\mathbf{a}(\mathbf{j})$ can go to zero at different rates in different directions, which makes the problem more subtle than in the one-variable case. One can make progress by focusing on a subclass of sequences $\mathbf{a}(\mathbf{j})$. In this paper, we consider the following subclass. Let $a=\{a(j)\}_{j\in\mathbb{N}_0}$ be a one-variable sequence and define \begin{equation*} \mathbf{a}(\mathbf{j})=a(\left|\mathbf{j}\right|), \ \text{where } \left|\mathbf{j}\right|=\sum_{i=1}^{d}j_i, \ \forall \mathbf{j}=(j_1,j_2,\dots,j_d)\in\mathbb{N}_0^d. \end{equation*} In this case, it can be verified that $H_\mathbf{a}$ is bounded if $a(j)=O(j^{-d})$ and compact if $a(j)=o(j^{-d})$, when $j\to+\infty$ (this can be seen, for instance, via the reduction to one-variable weighted Hankel operators in \S\ref{RWHO}). Moreover, for $\alpha>0$ consider the sequence \begin{equation*} a(j)=(j+1)^{-\alpha}, \ \forall j\in\mathbb{N}_0. \end{equation*} If $\alpha\in(0,d)$, then $H_\mathbf{a}$ is unbounded. If $\alpha=d$, then $H_\mathbf{a}$ is bounded but not compact, and for $\alpha>d$ the aforementioned tests imply boundedness and compactness. Therefore, the boundedness-compactness threshold exponent, for this choice of the parameter sequence $\mathbf{a}$, is $\alpha=d$. The main result of this paper is the $d$-variable analogue of (\ref{eq0}). We first give a simple version of our result, Theorem \ref{thm}; a more complete statement is Theorem \ref{thm0} below. In order to formulate Theorem \ref{thm}, let $\mathcal{F}$ be the Fourier transform on the real line; i.e. \begin{equation}\label{F} \big(\mathcal{F}f\big)(x)=\int_{\mathbb{R}}f(y)e^{-2\pi i xy}\dd{y}, \ \forall x\in\mathbb{R}. \end{equation} The inverse Fourier transform, $\mathcal{F}^*f$, of $f$ will often be denoted by $\check{f}$. \begin{thm}\label{thm} Let $\gamma>0$ and consider the parameter sequence $\mathbf{a}(\mathbf{j})=a(\left|\mathbf{j}\right|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$, where \begin{equation}\label{0} a(j)=j^{-d}(\log j)^{-\gamma}, \ \forall j\geq2. \end{equation} Moreover, for any $j\in\mathbb{N}$, define the function \begin{equation}\label{eq2} \phi_j(x):=\frac{1}{\cosh[j](\frac{x}{2})}, \ \forall x\in\mathbb{R}. \end{equation} Then the corresponding Hankel operator $H_\mathbf{a}$ is self-adjoint and compact, and its eigenvalues obey the following power asymptotics: \begin{equation*} \lambda_n^+(H_\mathbf{a})=C_{d,\gamma}n^{-\gamma}+o(n^{-\gamma}) \ \ \text{and } \ \lambda_n^-(H_\mathbf{a})=o(n^{-\gamma}), \ n\to+\infty, \end{equation*} where \begin{equation}\label{18} C_{d,\gamma}=\frac{1}{2^d(d-1)!}\left( \int_\mathbb{R}\check{\phi}_d^{\frac{1}{\gamma}}(x)\dd{x} \right)^{\gamma}. \end{equation} \end{thm} \begin{remark} It is worth noticing that relation (\ref{18}) gives (as expected) (\ref{17}) when $d=1$. Indeed, observe that $\check{\phi}_1(x)=\frac{2\pi}{\cosh(2\pi^2 x)}$.
Then, by applying the change of variables $y=2\pi x$, we get \begin{equation*} C_{1,\gamma}=\frac{1}{2}\left(\int_\mathbb{R}\check{\phi}_1^{\frac{1}{\gamma}}(x)\dd{x}\right)^\gamma=\left[\frac{1}{2\pi}\int_\mathbb{R}\left(\frac{\pi}{\cosh(\pi y)}\right)^{\frac{1}{\gamma}}\dd{y}\right]^\gamma=C_\gamma, \end{equation*} where $C_\gamma$ is the constant defined in (\ref{17}). \end{remark} \subsection{Main result} A generalisation of Theorem \ref{thm} leads to the main result, Theorem \ref{thm0}. For any $\gamma>0$, define \begin{equation}\label{8} M(\gamma):=\begin{cases} 0, & \gamma\in(0,\frac{1}{2})\\ \left[\gamma\right]+1, & \gamma\geq\frac{1}{2} \end{cases}, \end{equation} where $\left[\gamma\right]=\max\{x\in\mathbb{Z}: \ x\leq\gamma\}$. In addition, for any sequence $a=\{a(j)\}_{j\in\mathbb{N}_0}$, define the sequence of iterated differences $a^{(m)}=\{a^{(m)}(j)\}_{j\in\mathbb{N}_0}$, where $m\in\mathbb{N}_0$, with \begin{equation*} a^{(0)}=a \ \ \text{and } \ a^{(m)}(j)=a^{(m-1)}(j+1)-a^{(m-1)}(j), \ \forall j\in\mathbb{N}_0, \ \forall m\in\mathbb{N}. \end{equation*} \begin{thm}\label{thm0} Let $\gamma>0$, $b_1,b_{-1}\in\mathbb{R}$, and $a$ be a real valued sequence on $\mathbb{N}_0$, such that \begin{equation}\label{eq5} a(j)=\big(b_1+(-1)^jb_{-1}\big)j^{-d}(\log j)^{-\gamma}+g_1(j)+(-1)^jg_{-1}(j), \ \forall j\geq2, \end{equation} where both $g_1$ and $g_{-1}$ satisfy the following condition: \begin{equation*} g_{\pm 1}^{(m)}(j)=o\big(j^{-d-m} (\log j)^{-\gamma} \big), \ j\to+\infty, \end{equation*} for $m=0,1,\dots,M(\gamma)$. If $H_\mathbf{a}$ is the Hankel operator, where $\mathbf{a}(\mathbf{j})=a\left(|\mathbf{j}|\right), \ \forall \mathbf{j}\in\mathbb{N}_0^d$, then it is a self-adjoint, compact operator and its eigenvalues satisfy the following asymptotic law \begin{equation}\label{eq1} \lambda_n^\pm(H_\mathbf{a})=C^\pm n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty. \end{equation} The leading term coefficients are given by \begin{equation}\label{eq2_12} C^{\pm}=\left((b_1)^{\frac{1}{\gamma}}_{\pm}+(b_{-1})^{\frac{1}{\gamma}}_{\pm}\right)^{\gamma}C_{d,\gamma}, \end{equation} where $C_{d,\gamma}$ is defined in (\ref{18}) and $(x)_\pm:=\max\{0,\pm x\}$, for any $x\in\mathbb{R}$. \end{thm} \noindent Note that Theorem \ref{thm} is the special case of Theorem \ref{thm0} with $b_1=1$, $b_{-1}=0$ and $g_{\pm1}=0$, for which (\ref{eq2_12}) reduces to $C^+=C_{d,\gamma}$ and $C^-=0$. \subsection{Proof outline} In order to derive the spectral asymptotics for the class of operators that were introduced in Theorem~\ref{thm0}, we follow the steps listed below. In the sequel, we give a brief description of each one of them. \begin{itemize} \item Construction of a model operator (see \S\ref{MO}), \item reduction of the model operator to pseudo-differential operators (see \S\ref{PDO}), \item use of Weyl-type spectral asymptotics of the respective pseudo-differential operators (see \S\ref{WA}), \item reduction of the error terms to one-variable weighted Hankel operators (see \S\ref{RWHO}), and \item Schatten class inclusions of the error terms (see \S\ref{SCI}). \end{itemize} The construction of the model operator aims to give the leading term in the eigenvalue asymptotics. More precisely, the model operator will be a Hankel operator which behaves ``similarly" to the initial Hankel operator but whose eigenvalue asymptotics are retrieved much more easily and explicitly. Examining, for simplicity, the case of $a$ given by (\ref{0}), the model operator will be a Hankel operator of the form $\tilde{H}:=H_{\tilde{\mathbf{a}}}$, with parameter sequence $\tilde{\mathbf{a}}(\mathbf{j})=\tilde{a}(|\mathbf{j}|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$.
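\noindent Anticipating the construction below, where $\tilde{a}$ is realised as a Laplace transform $\tilde{a}(j)=\big(\mathcal{L}w\big)(j)$ of a suitable weight $w$, the following heuristic worked computation shows why this ansatz produces the right decay. For a weight behaving like $w(\lambda)\approx\frac{1}{(d-1)!}\lambda^{d-1}\left|\log\lambda\right|^{-\gamma}$ near $\lambda=0$, the Laplace asymptotics of Lemma \ref{lem2_3} below give \begin{equation*} \int_{0}^{+\infty}w(\lambda)e^{-\lambda j}\dd{\lambda}\sim\frac{1}{(d-1)!}\cdot(d-1)!\,j^{-d}(\log j)^{-\gamma}=a(j), \ j\to+\infty, \end{equation*} so that $\tilde{a}(j)\sim a(j)$; the factor $\frac{1}{(d-1)!}$ in $w$ is precisely what cancels the $(d-1)!$ produced by the Laplace transform of $\lambda^{d-1}$.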
\begin{remark} From now on, objects related to the model operator will be decorated with the tilde symbol; e.g. $\tilde{H}$, $\tilde{\mathbf{a}}$, $\tilde{a}$, etc. \end{remark} \noindent The sequence $\tilde{a}$ will be chosen to be the Laplace transform of a suitable function $w$, i.e. \begin{equation*} \tilde{a}(j)=\big(\mathcal{L}w\big)(j)=\int_{0}^{+\infty}w(\lambda)e^{-\lambda j}\dd{\lambda}, \ \forall j\in\mathbb{N}_0. \end{equation*} The function $w$ is chosen so that $\tilde{a}(j)\sim a(j)$, as $j\to+\infty$; i.e. $\frac{\tilde{a}(j)}{a(j)}\to1$, as $j\to+\infty$. The latter is obtained via a lemma for Laplace transform asymptotics. In the sequel, the spectral analysis of the model operator, $\tilde{H}$, is reduced to that of a pseudo-differential operator. To see this, consider the inner product \begin{equation*} (\tilde{H}\mathbf{x},\mathbf{y})=\sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d}\tilde{a}(|\mathbf{i}+\mathbf{j}|)\mathbf{x}(\mathbf{j})\overline{\mathbf{y}}(\mathbf{i}). \end{equation*} By using the fact that $\tilde{a}(j)=\big(\mathcal{L}w\big)(j)$, we can swap summation and integration to obtain \begin{equation*} (\tilde{H}\mathbf{x},\mathbf{y})=\int_{0}^{+\infty} \big(L\mathbf{x}\big)(t)\overline{\big(L\mathbf{y}\big)}(t)\dd{t}, \end{equation*} where $L:\ell^2(\mathbb{N}_0^d)\to L^2(\mathbb{R}_+)$ is given by \begin{equation*} \big(L\mathbf{x}\big)(t)=\sqrt{w(t)}\sum_{\mathbf{j}\in\mathbb{N}_0^d}e^{-|\mathbf{j}|t}\mathbf{x}(\mathbf{j}), \ \forall t\in\mathbb{R}_+, \ \forall \mathbf{x}\in\ell^2(\mathbb{N}_0^d); \end{equation*} note that $\mathbb{R}_+:=(0,+\infty)$. \begin{remark} Notice that in order for $L$ to be well-defined, $w$ has to be non-negative. For the sake of simplicity, in this introduction we assume that this is the case; the general case is addressed properly in the proofs. \end{remark} \noindent Therefore, $\tilde{H}$ can be expressed as a product of two operators, $\tilde{H}=L^*L$, and we can apply the following lemma (\cite[\S8.1, Theorem 4]{birman2012spectral}). \begin{lem}\label{lem} Let $L$ be a bounded linear operator, defined on a Hilbert space $\mathscr{H}$. Then, the restrictions $L^*L\upharpoonright(\mathrm{Ker}L^*L)^\perp$ and $LL^*\upharpoonright(\mathrm{Ker}LL^*)^\perp$ are unitarily equivalent. \end{lem} \noindent\begin{remark} We will denote this equivalence by $\simeq$; e.g. $L^*L\simeq LL^*$. \end{remark} \noindent Thus, $\tilde{H}$ is unitarily equivalent (modulo kernels) to $LL^*:L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$. Finally, by an exponential change of variable, $LL^*$ is proved to be unitarily equivalent (modulo kernels) to a pseudo-differential operator $\beta(X)\alpha(D)\beta(X)$, where $D$ is the differentiation operator in $L^2(\mathbb{R})$, $Df=-if'$, and $X$ is the multiplication operator (in $L^2(\mathbb{R})$) by the function $\mathrm{id}(x)=x$. Then, by exploiting a Weyl-type spectral asymptotics formula for the operator $\beta(X)\alpha(D)\beta(X)$, we retrieve its eigenvalue asymptotics and thus, those of $\tilde{H}$. \begin{remark} The technique of considering the inner product $(H_\mathbf{a}\mathbf{x},\mathbf{y})$ and changing the order of summation and integration was also applied by Widom in \cite{widom1966hankel} for one-variable Hankel operators. In order to derive the eigenvalue asymptotics, Widom also applied Lemma \ref{lem}. This yielded the equivalence to the pseudo-differential operator that we would obtain if we followed the steps described above (for $d=1$).
The same equivalence, but in greater generality, is also obtained by Yafaev in \cite[Theorem 7.7]{yafaev2017quasi}. \end{remark} Finally, the initial Hankel operator, $H_\mathbf{a}$, can be expressed as a sum of operators, $H_\mathbf{a}=\tilde{H}+(H_\mathbf{a}-\tilde{H})$. Having obtained the eigenvalue asymptotics for $\tilde{H}$, the next step is to prove that the spectral contribution of the operator $H_\mathbf{a}-\tilde{H}$ is negligible, compared to that of $\tilde{H}$. This will be achieved by proving certain Schatten-Lorentz class inclusions for $H_\mathbf{a}-\tilde{H}$. These inclusions depend on the range of the exponent $\gamma$ in (\ref{0}) and are obtained by a combination of interpolation and reduction to one-variable weighted Hankel operators. \subsection{List of notation} For the reader's convenience, we close our introduction by summarising the introduced notation.\\ \ \\ \noindent\underline{Set notation}: Let $\mathbb{R}$ be the set of real numbers, $\mathbb{Z}$ the set of integers, $\mathbb{N}=\{1,2,3,\dots\}$ the set of natural numbers, and $\mathbb{N}_0=\mathbb{N}\cup\{0\}$. In addition, $\mathbb{R}_+=(0,+\infty)$. We denote by $\mathbb{C}$ the set of complex numbers. Then $\mathbb{D}=\{z\in\mathbb{C}: \ \left|z\right|<1\}$ and $\mathbb{T}=\{z\in\mathbb{C}: \ \left|z\right|=1\}$. Moreover, $\mathbb{T}$ can be identified with the interval $[0,1)$, via the map $t\mapsto e^{2\pi i t}$, for all $t\in[0,1)$. For any $d\geq2$, we can define $d$-fold Cartesian products of the aforementioned sets; e.g. $\mathbb{R}^d=\{\mathbf{x}=(x_1,x_2,\dots,x_d): \ x_i\in\mathbb{R}, \ \text{for } i=1,2,\dots,d\}$.\\ \ \\ \noindent\underline{Dimension notation}: We use the Roman (standard) typeface for one-dimensional/variable objects and boldface letters for $d$-dimensional/variable ones. For example, let $f(x)$ describe a function defined on $\mathbb{R}$ and $\mathbf{a}=\{\mathbf{a}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}_0^d}$ be a $d$-variable sequence.\\ \ \\ \noindent\underline{Sequence notation}: We say that two (real valued) sequences $\{a(j)\}_{j\in\mathbb{N}_0}$ and $\{b(j)\}_{j\in\mathbb{N}_0}$ present the same asymptotic behaviour at infinity, and write $a(j)\sim b(j)$, as $j\to+\infty$, when $\frac{a(j)}{b(j)}\to1$, as $j\to+\infty$. For a (complex valued) sequence $a=\{a(j)\}_{j\in\mathbb{N}_0}$, define the sequence of iterated differences $a^{(m)}=\{a^{(m)}(j)\}_{j\in\mathbb{N}_0}$, where $m\in\mathbb{N}_0$, with \begin{equation*} a^{(0)}=a \ \ \text{and } \ a^{(m)}(j)=a^{(m-1)}(j+1)-a^{(m-1)}(j), \ \forall j\in\mathbb{N}_0, \ \forall m\in\mathbb{N}. \end{equation*} \noindent\underline{Number notation}: For any real number $x$, we define its integer part $\left[x\right]=\max\{m\in\mathbb{Z}: \ m\leq x\}$ and its positive (resp. negative) part $\left(x\right)_+=\max\{0,x\}$ (resp. $(x)_-=\max\{0,-x\}$). Furthermore, let $\left<x\right>=\sqrt{1+x^2}$. For any real numbers $x$ and $y$, we write $x\lesssim y$ when there exists a positive constant $c$ such that $x\leq cy$. Finally, for any $d\geq2$, \begin{equation*} \left|\mathbf{j}\right|=\sum_{i=1}^d j_i, \ \forall \mathbf{j}=(j_1,j_2,\dots,j_d)\in\mathbb{N}_0^d. \end{equation*} \noindent\underline{Fourier transform}: For a function $\phi:\mathbb{T}\to\mathbb{C}$ the sequence of its Fourier coefficients $\{\big(\Phi\phi\big)(n)\}_{n\in\mathbb{Z}}$ is given by \begin{equation*} \big(\Phi\phi\big)(n)=\int_0^1 \phi(t) e^{-2\pi int}\dd{t}, \ \forall n\in\mathbb{Z}.
\end{equation*} The Fourier transform $\mathcal{F}f$ of a function $f:\mathbb{R}\to\mathbb{C}$ is given by (\ref{F}). We denote by $\mathcal{F}^*$ its inverse and $\check{f}=\mathcal{F}^*f$.\\ \ \\ \noindent\underline{Operator notation}: For any operator $A$, let $A\upharpoonright S$ be the restriction of $A$ to a subset $S$ of its domain. Two operators $A$ and $B$, in general defined on different spaces, will be called unitarily equivalent modulo kernels (write $A\simeq B$), when they have unitarily equivalent non-zero parts. Namely, when there exists a unitary operator $U$ such that \begin{equation*} A\upharpoonright(\mathrm{Ker}A)^{\perp}=U^*B\upharpoonright(\mathrm{Ker}B)^{\perp}U. \end{equation*} We denote by $H_\mathbf{a}$ (resp. $H_a$) all the $d$-variable (resp. one-variable) Hankel operators with parameter sequence $\mathbf{a}$ (resp. $a$). Moreover, when $H_{\mathbf{a}}$ has been defined, objects related to the model operator $\tilde{H}$ that corresponds to $H_{\mathbf{a}}$ will be indicated with the tilde symbol; e.g. $\tilde{\mathbf{a}}$ will refer to the parameter sequence of the model operator, so that $\tilde{H}=H_{\tilde{\mathbf{a}}}$. Finally, for weighted Hankel operators, we use the capital gamma; e.g. $\Gamma$, $\Gamma_a^{\alpha,\beta}$, etc. (see \S\ref{WHO} for the relevant definitions).\\ \ \\ \noindent\underline{Eigenvalue notation}: Let $A$ be an operator and $\{\lambda_n^+(A)\}_{n\in\mathbb{N}}$ be the non-increasing sequence of its positive eigenvalues. Then $\lambda_n^-(A)=\lambda_n^+(-A)$, $\forall n\in\mathbb{N}$. \section{Preliminaries} \subsection{Besov classes}\label{BC} We define Besov classes of analytic functions on the unit circle $\mathbb{T}$. Let $C^\infty_c(\mathbb{R})$ denote the set of infinitely differentiable functions on $\mathbb{R}$ with compact support, and let $v$ be a $C^{\infty}_c(\mathbb{R})$ function such that $\mathrm{supp}(v)=[2^{-1},2]$, $v(1)=1$, and $v([2^{-1},2])=[0,1]$; notice that $v(2^{-1})=v(2)=0$. Then consider a sequence of non-negative $C^{\infty}_c(\mathbb{R})$ functions $\{v_n\}_{n\in\mathbb{N}}$, such that \begin{equation*} v_n(t)=v\left( \frac{t}{2^n} \right), \ \forall n\in\mathbb{N}, \end{equation*} for any $t\in\mathbb{R}$, and \begin{equation*} \sum_{n\geq 0}v\left(\frac{t}{2^n}\right)=1, \ \forall t\geq 1. \end{equation*} Next, define the polynomials \begin{equation}\label{1'} V_0(z)=\overline{z}+1+z, \ \forall z\in\mathbb{T}, \end{equation} and, for every $n\in\mathbb{N}$, \begin{equation}\label{2} V_n(z)=\sum_{j\in\mathbb{N}}v_n(j)z^j= \sum_{j=2^{n-1}}^{2^{n+1}}v_n(j)z^j, \ \forall z\in\mathbb{T}. \end{equation} Then we say that an analytic function $f$ on $\mathbb{T}$ belongs to the Besov class $B^{p}_{q,r}$ if and only if \begin{equation*} \|f\|_{B^{p}_{q,r}}:=\left( \sum_{n\in\mathbb{N}_0}2^{npr}\|f*V_n\|^r_q \right)^{\frac{1}{r}}<+\infty. \end{equation*} \noindent The lemma below can be found in \cite[Lemma 4.6]{pushnitski2015sharp}. \begin{lem}\label{lem_2} Assume that $\gamma\geq\frac{1}{2}$ and let $M(\gamma)$ be as defined in (\ref{8}). Moreover, let $\{a(j)\}_{j\in\mathbb{N}_{0}}$ be a sequence of complex numbers which satisfies (\ref{eq_3}) and consider the function \begin{equation*} \phi(z)=\sum_{j\in\mathbb{N}_0}a(j)z^j, \ \forall z\in\mathbb{T}.
\end{equation*} If $V_n$ are as defined in (\ref{2}), then, for every $q>\frac{1}{M(\gamma)}$ and every $n\in\mathbb{N}$ such that $2^{n-1}\geq M(\gamma)$, \begin{equation}\label{eq3} \left\| \phi*V_n \right\|_{\infty}\leq \sum_{j=2^{n-1}}^{2^{n+1}}|a(j)|, \end{equation} and \begin{equation}\label{eq4} 2^n\left\| \phi*V_n \right\|_q^q\leq C_q \left( \sum_{m=0}^{M(\gamma)}\sum_{j=2^{n-1}-M(\gamma)}^{2^{n+1}}(1+j)^m \left| a^{(m)}(j) \right| \right)^q, \end{equation} for some positive constant $C_q$, depending only on $q$. \end{lem} \subsection{Schatten classes} Consider a compact operator $T$ and the sequence of its singular values $\{s_n\}_{n\in\mathbb{N}}$, arranged in non-increasing order; i.e. the sequence of (positive) eigenvalues of $\sqrt{T^*T}$. Denote by $\mathbf{S}_\infty$ the space of compact operators. For $p\in(0,+\infty)$, we define the Schatten class $\mathbf{S}_p$, the Schatten-Lorentz classes $\mathbf{S}_{p,q}$ and $\mathbf{S}_{p,\infty}$, and the class $\mathbf{S}_{p,\infty}^{0}$ by the following conditions: \begin{equation*} \begin{array}{ccc} T\in\mathbf{S}_p & \Leftrightarrow & \left\|T\right\|_{\mathbf{S}_p}:=\left(\displaystyle\sum_{n\in\mathbb{N}}s_n^p\right)^{\frac{1}{p}}<+\infty\\ \ \\ T\in\mathbf{S}_{p,q} & \Leftrightarrow & \left\|T\right\|_{\mathbf{S}_{p,q}}:=\left(\displaystyle\sum_{n\in\mathbb{N}}\frac{(n^{\frac{1}{p}}s_n)^q}{n}\right)^{\frac{1}{q}}<+\infty\\ \ \\ T\in\mathbf{S}_{p,\infty} & \Leftrightarrow & \left\|T\right\|_{\mathbf{S}_{p,\infty}}:=\displaystyle\sup_{n\in\mathbb{N}}n^{\frac{1}{p}}s_n<+\infty\\ \ \\ T\in\mathbf{S}_{p,\infty}^{0} & \Leftrightarrow & \displaystyle\lim_{n\to+\infty}n^{\frac{1}{p}}s_n=0. \end{array} \end{equation*} Recall that $\mathbf{S}_{p}\subset\mathbf{S}_{p,\infty}^0\subset\mathbf{S}_{p,\infty},$ with $\|\cdot\|_{\mathbf{S}_{p,\infty}}\leq\|\cdot\|_{\mathbf{S}_p}, \ \forall p>0$. \noindent Finally, Lemma \ref{lem2_1} \cite[Corollary 2.2]{gohberg1978introduction} shows that perturbations by $\mathbf{S}_{p,\infty}^0$ operators leave the leading eigenvalue asymptotics unaffected. \begin{lem}[K. Fan]\label{lem2_1} Let $S$ and $T$ be two compact, self-adjoint operators on a Hilbert space. If \begin{equation*} \begin{array}{rclr} \lambda_{n}^{\pm}(S)=K^{\pm}n^{-\gamma}+o(n^{-\gamma}), & \text{and} & \ s_n(T)=o(n^{-\gamma}), & \ n\to+\infty, \end{array} \end{equation*} then $$\lambda_{n}^{\pm}(S+T)=K^{\pm}n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty,$$ with the same constants $K^{\pm}$. \end{lem} \noindent For the next lemma, define $C^\infty_c(\mathbb{R}^2)$ to be the set of all infinitely differentiable, compactly supported functions $\varkappa:\mathbb{R}^2\to\mathbb{R}$. \begin{lem}\label{lem3} Let $\varkappa\in C^{\infty}_{c}(\mathbb{R}^2)$. Then the integral operator $\mathcal{K}:L^2(\mathbb{R})\to L^2(\mathbb{R})$, with \begin{equation*} \big(\mathcal{K}f\big)(x)=\int_\mathbb{R}f(y)\varkappa(x,y)\dd{y}, \ \forall x\in\mathbb{R}, \end{equation*} belongs to all Schatten classes $\mathbf{S}_p$, for $p>0$. \end{lem} \begin{remark} Lemma \ref{lem3} is a minor modification of \cite[Chapter 30.5, Theorem 13]{lax2014functional}. More precisely, Theorem~13 proves the $\mathbf{S}_1$ inclusion of $\mathcal{K}$, but the proof for the remaining $\mathbf{S}_p$ is obtained by the same argument. \end{remark} \subsection{Asymptotic orthogonality in $\mathbf{S}_{p,\infty}$} Let $A$ and $B$ be two operators that belong to the class $\mathbf{S}_{p,\infty}$, where $p>0$. Notice that $B^*A$ and $BA^*$ belong to $\mathbf{S}_{\frac{p}{2},\infty}$.
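\noindent This added justification is a consequence of the well-known singular value inequality for products, $s_{m+n-1}(TS)\leq s_m(T)\,s_n(S)$. Indeed, taking $m=n$, \begin{equation*} s_{2n-1}(B^*A)\leq s_n(B^*)\,s_n(A)=s_n(B)\,s_n(A)\leq\left\|A\right\|_{\mathbf{S}_{p,\infty}}\left\|B\right\|_{\mathbf{S}_{p,\infty}}\,n^{-\frac{2}{p}}, \ \forall n\in\mathbb{N}, \end{equation*} and similarly for $BA^*$, so that $\sup_{n\in\mathbb{N}}n^{\frac{2}{p}}s_n(B^*A)<+\infty$.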
We will call $A$ and $B$ orthogonal if $B^*A=BA^*=0$, and asymptotically orthogonal if $B^*A$ and $BA^*$ belong to $\mathbf{S}_{\frac{p}{2},\infty}^0$. Asymptotic orthogonality plays an important role when we want to obtain the spectral asymptotics of the operator $A+B$ from those of $A$ and $B$. More precisely, for compact, self-adjoint operators, we have the following lemma, which is a special case of \cite[Theorem 2.3]{pushnitski2016spectral}. \begin{lem}\label{lem1} Let $A$ and $B$ be two self-adjoint operators in $\mathbf{S}_{p,\infty}$, for some $p>0$. Assume that the asymptotics of their positive and negative eigenvalues, $\lambda^{\pm}_n(A)$ and $\lambda_n^\pm(B)$, are given by \begin{equation*} \lambda_n^\pm(A)=C^\pm_An^{-\frac{1}{p}}+o(n^{-\frac{1}{p}}), \ n\to+\infty; \end{equation*} and \begin{equation*} \lambda_n^\pm(B)=C^\pm_Bn^{-\frac{1}{p}}+o(n^{-\frac{1}{p}}), \ n\to+\infty. \end{equation*} If $A$ and $B$ are asymptotically orthogonal, then \begin{equation*} \lambda_n^\pm(A+B)=\big((C_A^\pm)^p+(C_B^\pm)^p\big)^{\frac{1}{p}}n^{-\frac{1}{p}}+o(n^{-\frac{1}{p}}), \ n\to+\infty. \end{equation*} \end{lem} \subsection{Weighted Hankel operators}\label{WHO} Let $\{w_1(j)\}_{j\in\mathbb{N}_0}$, $\{w_2(j)\}_{j\in\mathbb{N}_0}$ and $a=\{a(j)\}_{j\in\mathbb{N}_0}$ be three complex valued sequences and define, formally, the operator $\Gamma:\ell^2(\mathbb{N}_0)\to\ell^2(\mathbb{N}_0)$ by $\Gamma=\mathcal{M}_{w_1}H_a\mathcal{M}_{w_2}$, where $\mathcal{M}_{w}$ is the multiplication operator by a sequence $w=\{w(j)\}_{j\in\mathbb{N}_0}$. In addition, for any $\alpha$, $\beta>0$, define the special class of weighted Hankel operators $\Gamma_a^{\alpha,\beta}=\mathcal{M}_{w_1}H_a\mathcal{M}_{w_2}$, where $w_1(j)=(j+1)^{\alpha}$ and $w_2(j)=(j+1)^{\beta}$, for all $j\in\mathbb{N}_0$. A Schatten class criterion for this class of weighted operators is given by Theorem \ref{thm2_8} \cite[Theorem B]{aleksandrov2004distorted}. \begin{thm}\label{thm2_8} Let $p\in(0,+\infty)$ and $\alpha$, $\beta>0$, and let $\phi$ be an analytic function on $\mathbb{T}$ with Fourier coefficient sequence $\Phi\phi$. Then \begin{equation*} \|\phi\|_{B_{p}^{\frac{1}{p}+\alpha+\beta}} \lesssim \|\Gamma_{\Phi\phi}^{\alpha,\beta}\|_{\mathbf{S}_p}\lesssim \|\phi\|_{B_{p}^{\frac{1}{p}+\alpha+\beta}}, \end{equation*} where $\Gamma_{\Phi\phi}^{\alpha,\beta}$ is the weighted Hankel operator described by the matrix $[(i+1)^{\alpha}(\Phi\phi)(i+j)(j+1)^{\beta}]_{i,j\geq0}$. \end{thm} \noindent The following lemma is a combination of Theorem \ref{thm2_8} and \cite[Theorem 6.4.4]{peller2012hankel}. The reader can find a sketch of proof in Appendix \ref{A}. \begin{lem}\label{lem0} Define the measure space \begin{equation*} (\mathcal{M},\mu):=\bigoplus_{n\in\mathbb{N}_0}\left( \mathbb{T}, 2^{n}\mathbf{m} \right), \end{equation*} where $\mathbf{m}$ is the Lebesgue measure on $\mathbb{T}$. Let $p\in(0,+\infty)$, $q\in(0,+\infty]$ and $\mathcal{B}_{p,q}^{\frac{1}{p}+d-1}$ be the space of analytic functions $\phi$ on $\mathbb{D}=\{z\in\mathbb{C}: \ |z|<1\}$ such that \begin{equation}\label{4} \bigoplus_{n\in\mathbb{N}_0}2^{n(d-1)}\phi*V_n\in L^{p,q}(\mathcal{M},\mu), \end{equation} where the polynomials $V_n$ are defined in (\ref{1'}) and (\ref{2}). Then \begin{equation*} \left\|\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}\right\|_{\mathbf{S}_{p,q}}\lesssim\left\| \phi \right\|_{\mathcal{B}_{p,q}^{\frac{1}{p}+d-1}}, \end{equation*} where $\Phi\phi$ is the sequence of the Fourier coefficients of $\phi$.
\end{lem} \subsection{Laplace transform estimates} Let $\mathcal{L}:L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$ be the Laplace transform, given by \begin{equation*} \big(\mathcal{L}f\big)(t)=\int_0^{+\infty}f(\lambda)e^{-\lambda t}\dd{\lambda}, \ \forall t>0, \ \forall f\in L^2(\mathbb{R}_+). \end{equation*} In this paper, we are interested in the $t\to+\infty$ asymptotic behaviour of the Laplace transform of functions with logarithmic factors near zero, of the form $f(\lambda)=\lambda^n\left|\log\lambda\right|^{-\gamma}$, for $\gamma>0$. This asymptotic behaviour is obtained by Lemma \ref{lem2_3} \cite[Lemma 3.3]{pushnitski2015asymptotic}. \begin{lem}\label{lem2_3} Let $$I_n(t)=\int_{0}^{\lambda_0}\lambda^n|\log\lambda|^{-\gamma} e^{-\lambda t}\dd{\lambda},$$ where $\gamma>0$, $n\in\mathbb{N}_{0}$ and $\lambda_0\in(0,1)$. Then $$I_n(t)=n! \ t^{-1-n}|\log t|^{-\gamma}\left( 1+O(|\log t|^{-1}) \right), \ t\to+\infty.$$ \end{lem} \subsection{Weyl-type spectral asymptotics for pseudo-differential operators} Let $X$ and $D$ be, respectively, the multiplication and the differentiation operator in $L^2(\mathbb{R})$. They are self-adjoint operators, defined on appropriate domains, and given by \begin{equation*} \big(Xf\big)(x)=xf(x), \ \text{and } \big(Df\big)(x)=-if'(x). \end{equation*} The following lemma (\cite[Theorem 2.4]{pushnitski2015asymptotic}) deals with pseudo-differential operators of the form $\Psi=\beta(X)\alpha(D)\beta(X)$. Notice that $\alpha(D)=\mathcal{F}^*\alpha(2\pi X)\mathcal{F}$, an expression which will prove to be useful in the sequel. \begin{lem}\label{lem2_2} Let $\alpha$ be a real valued function in $C^{\infty}(\mathbb{R})$, such that \[ \alpha(x)=\left\{ \begin{array}{rl} \alpha(+\infty)x^{-\gamma} + o(x^{-\gamma}), & x\to+\infty\\ \ \\ \alpha(-\infty)|x|^{-\gamma} + o(|x|^{-\gamma}), & x\to-\infty, \end{array} \right. \] for some real constants $\alpha(+\infty)$, $\alpha(-\infty)$ and $\gamma>0$. Now let $\beta$ be a real valued function on $\mathbb{R}$ such that $$|\beta(x)|\leq M \left< x \right>^{-s}, \ \forall x\in\mathbb{R},$$ where $s>\frac{\gamma}{2}$ and $M$ is a non-negative constant. Define the pseudo-differential operator $\Psi=\beta(X)\alpha(D)\beta(X)$ on $L^2(\mathbb{R})$. Then $\Psi$ is compact and obeys the following eigenvalue asymptotic formula: $$\lambda_{n}^{\pm}(\Psi)=C^{\pm}n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty,$$ where \begin{equation*} C^{\pm}=\left[ \frac{1}{2\pi} \left( \alpha(+\infty)_{\pm}^{\frac{1}{\gamma}} + \alpha(-\infty)_{\pm}^{\frac{1}{\gamma}} \right) \int_{\mathbb{R}} |\beta(x)|^{\frac{2}{\gamma}}\dd{x} \right]^{\gamma}. \end{equation*} \end{lem} \noindent Above, $\left<x\right>:=\sqrt{1+x^2}, \ \forall x\in\mathbb{R}$. \section{Construction of the model operator}\label{MO} Consider the cut-off function $\chi_0\in C^\infty(\mathbb{R})$ such that \begin{equation}\label{eqn0'} \chi_{0}(t)=\left\{ \begin{array}{rl} 1, & 0< t\leq\frac{1}{2}\\ \ \\ 0, & t\geq \frac{3}{4} \end{array}, \right. \end{equation} and $0\leq\chi_{0}\leq1$. Let $\gamma>0$ and define the function \begin{equation}\label{eq2_11'} w(t)=\frac{1}{(d-1)!}t^{d-1}\left|\log t\right|^{-\gamma}\chi_0(t), \ \forall t>0.
\end{equation} With \begin{equation*} \big(\mathcal{L}w\big)(t)=\int_{0}^{+\infty}w(\lambda)e^{-\lambda t}\dd{\lambda}, \ \forall t>0, \end{equation*} let $b_1,b_{-1}\in\mathbb{R}$ and define the sequence $\tilde{a}=\{\tilde{a}(j)\}_{j\in\mathbb{N}_0}$ by \begin{equation}\label{eq2_4'} \tilde{a}(j)=b_1\big(\mathcal{L}w\big)(j)+(-1)^jb_{-1}\big(\mathcal{L}w\big)(j), \ \forall j\in\mathbb{N}_0. \end{equation} Then we define the model operator $\tilde{H}:=H_{\tilde{\mathbf{a}}}$, with parameter sequence $\tilde{\mathbf{a}}(\mathbf{j})=\tilde{a}(|\mathbf{j}|)$, $\forall \mathbf{j}\in\mathbb{N}_0^d$. For the sequence $\tilde{a}$, we have the following lemma. \begin{lem}\label{lem3''} Let $w$ be the function described in (\ref{eq2_11'}) and $\tilde{a}$ be the sequence defined in (\ref{eq2_4'}). Then $\tilde{a}$ satisfies the following formula: \begin{equation}\label{0''''} \tilde{a}(j)=\left(b_1+(-1)^jb_{-1}\right)j^{-d}(\log j)^{-\gamma}+\tilde{g}_1(j)+(-1)^j\tilde{g}_{-1}(j), \ \forall j\geq2, \end{equation} where the error sequences $\tilde{g}_{\pm 1}$ present the following asymptotic behaviour: \begin{equation}\label{eqc_18'} \tilde{g}^{(m)}_{\pm1}(j)=O\left( j^{-d-m}(\log j)^{-\gamma-1} \right), \ j\to+\infty, \end{equation} for all $m\in\mathbb{N}_0$. \end{lem} \begin{proof} First assume that $b_{-1}=0$ and $b_1\neq0$. Then \begin{equation*} \tilde{a}(j)=b_1\big(\mathcal{L}w\big)(j), \ \forall j\in\mathbb{N}_0, \end{equation*} and we aim to prove that \begin{equation}\label{0'} \tilde{a}(j)=b_1j^{-d}\left(\log j\right)^{-\gamma}+\tilde{g}_1(j), \ \forall j\geq 2, \end{equation} where the error term $\tilde{g}_1$ satisfies (\ref{eqc_18'}). Moreover, without loss of generality, assume that $b_1=1$, otherwise work with $\tfrac{\tilde{a}}{b_1}$. Let $\tilde{g}_1$ be the function \begin{equation}\label{0''} \tilde{g}_1(t)=\frac{1}{(d-1)!}\int\limits_{0}^{+\infty}\lambda^{d-1}|\log\lambda|^{-\gamma}\chi_0(\lambda)e^{-\lambda t}\dd{\lambda}-t^{-d}|\log t|^{-\gamma}, \ \forall t>1, \end{equation} and notice that $\tilde{g}_1\in C^{\infty}(1,+\infty)$. More precisely, for every $m\in\mathbb{N}$ and any $t>1$, \begin{multline}\label{lll'} \tilde{g}_1^{(m)}(t)=\frac{(-1)^m}{(d-1)!}\int\limits_{0}^{+\infty}\lambda^{d+m-1}|\log\lambda|^{-\gamma}\chi_0(\lambda)e^{-\lambda t}\dd{\lambda}-\sum_{n=0}^m{m \choose n}\left(\frac{\dd^nt^{-d}}{\dd{t^n}}\right)\left(\frac{\dd^{m-n}(\log t)^{-\gamma}}{\dd{t^{m-n}}}\right). \end{multline} Moreover, for every $m\in\mathbb{N}_0$ and any $t>0$, \begin{multline*} \int\limits_{0}^{+\infty}\lambda^{d+m-1}|\log\lambda|^{-\gamma}\chi_0(\lambda)e^{-\lambda t}\dd{\lambda}=\int\limits_{0}^{\frac{1}{2}}\lambda^{d+m-1}|\log\lambda|^{-\gamma}e^{-\lambda t}\dd{\lambda}\\ +\int\limits_{\frac{1}{2}}^{\frac{3}{4}}\lambda^{d+m-1}|\log\lambda|^{-\gamma}e^{-\lambda t}\chi_0(\lambda)\dd{\lambda}. \end{multline*} Notice that the second integral converges to zero exponentially fast when $t\to+\infty$. Thus Lemma \ref{lem2_3} yields \begin{equation}\label{l'} \int\limits_{0}^{+\infty}\lambda^{d+m-1}|\log\lambda|^{-\gamma}\chi_0(\lambda)e^{-\lambda t}\dd{\lambda}=(d+m-1)!\,t^{-d-m}(\log t)^{-\gamma}\left(1+O\left((\log t)^{-1}\right)\right), \end{equation} when $t\to+\infty$. Besides, notice that, for every $k\in\mathbb{N}$, \begin{equation*} \frac{\dd^k}{\dd{t^k}}(\log t)^{-\gamma}=O\left(t^{-k}(\log t)^{-\gamma-1}\right), \ t\to+\infty.
\end{equation*} Thus, it is easily verified that \begin{multline}\label{ll'} \sum_{n=0}^m{m \choose n}\left(\frac{\dd^nt^{-d}}{\dd{t^n}}\right)\left(\frac{\dd^{m-n}(\log t)^{-\gamma}}{\dd{t^{m-n}}}\right)=\frac{(-1)^m}{(d-1)!}(d+m-1)!\,t^{-d-m}(\log t)^{-\gamma}\\ +O\left(t^{-d-m}(\log t)^{-\gamma-1}\right), \ \text{when } t\to+\infty. \end{multline} Then by putting (\ref{l'}) and (\ref{ll'}) back into (\ref{lll'}), we obtain that for every $m\in\mathbb{N}_0$, \begin{multline*} \tilde{g}_1^{(m)}(t)=\frac{(-1)^m}{(d-1)!}(d+m-1)!\,t^{-d-m}(\log t)^{-\gamma}\left(1+O\left((\log t)^{-1}\right)\right)\\ -\frac{(-1)^m}{(d-1)!}(d+m-1)!\,t^{-d-m}(\log t)^{-\gamma}+O\left(t^{-d-m}(\log t)^{-\gamma-1}\right), \ \text{for } t\to+\infty. \end{multline*} Therefore, $\tilde{g}_1(t)$ satisfies the following smoothness property: \begin{equation}\label{lll} \tilde{g}_1^{(m)}(t)=O\left(t^{-d-m}(\log t)^{-\gamma-1}\right), \ \text{for } t\to+\infty, \ \forall m\in\mathbb{N}_0. \end{equation} In addition, by (\ref{0''}), the function $\tilde{a}(t):=b_1\big(\mathcal{L}w\big)(t)$, $\forall t>0$, satisfies \begin{equation*} \tilde{a}(t)=t^{-d}\left|\log t\right|^{-\gamma}+\tilde{g}_1(t), \ \forall t>1. \end{equation*} Thus, by restricting $\tilde{a}$ to the set of integers greater than or equal to $2$, we get (\ref{0'}). The relation (\ref{eqc_18'}) for $\tilde{g}_1(j)$ is obtained by noticing that $\{\tilde{g}_1(j)\}_{j\geq2}$ is the restriction of the function $\tilde{g}_1$ to the set of integers greater than one, so \begin{equation*} \tilde{g}_1(j)=O(j^{-d}\left|\log j\right|^{-\gamma-1}), \ j\to+\infty, \end{equation*} and also \begin{equation*} \tilde{g}_1^{(m)}(j)=\int_{0}^1\int_{0}^1\dots\int_{0}^1\tilde{g}^{(m)}_1(j+t_1+t_2+\dots+t_m)\dd{t_m}\dots\dd{t_2}\dd{t_1}, \ \forall j\geq 2, \ \forall m\in\mathbb{N} \end{equation*} (on the left-hand side $\tilde{g}_1^{(m)}$ denotes the iterated difference of the sequence $\{\tilde{g}_1(j)\}$, while the integrand involves the $m$-th derivative of the function $\tilde{g}_1$), where $\tilde{g}_1(t)$ satisfies (\ref{lll}). As a result, \begin{equation*} \tilde{g}_1^{(m)}(j)=O(j^{-d-m}\left|\log j\right|^{-\gamma-1}), \ j\to+\infty, \end{equation*} for every $m\in\mathbb{N}$. \noindent Finally, by repeating the same arguments when $b_1=0$ and $b_{-1}\neq0$, we obtain that \begin{equation}\label{0'''} \tilde{a}(j)=(-1)^jb_{-1}j^{-d}\left(\log j\right)^{-\gamma}+(-1)^j\tilde{g}_{-1}(j), \ \forall j\geq 2, \end{equation} where the error term $\tilde{g}_{-1}$ satisfies (\ref{eqc_18'}). By combining (\ref{0'}) and (\ref{0'''}) together we eventually obtain (\ref{0''''}). \end{proof} \section{Reduction to pseudo-differential operators}\label{PDO} Let $\tilde{a}$ (see (\ref{eq2_4'})) be the parameter sequence of the model operator $\tilde{H}$. Then \begin{equation*} \tilde{a}(j)=\tilde{a}_1(j)+\tilde{a}_{-1}(j), \end{equation*} where \begin{equation}\label{1''} \tilde{a}_{\pm1}(j)=(\pm1)^{j}b_{\pm1}\big(\mathcal{L}w\big)(j), \ \forall j\in\mathbb{N}_0, \end{equation} and $w$ is defined in (\ref{eq2_11'}). Then $\tilde{a}_{1}$ (resp. $\tilde{a}_{-1}$) defines the Hankel operator $\tilde{H}_1$ (resp. $\tilde{H}_{-1}$), with parameter sequence $\tilde{a}_1(|\mathbf{j}|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$ (resp. $\tilde{a}_{-1}(|\mathbf{j}|)$). Thus, $\tilde{H}=\tilde{H}_{1}+\tilde{H}_{-1}$. We reduce the spectral analysis of $\tilde{H}_{\pm1}$ to that of some pseudo-differential operators $\Psi_{\pm1}$.
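\noindent In outline (the notation is introduced in the proofs of Lemmas \ref{R} and \ref{lem2'} below), the chain of reductions reads \begin{equation*} \tilde{H}_{\pm1}=L_{\pm1}^*L_{\pm1}\simeq L_{\pm1}L_{\pm1}^*=:S_{\pm1}, \qquad S_{\pm1}-E_{\pm1}\simeq\Psi_{\pm1}=\beta(X)\alpha_{\pm1}(\tfrac{1}{2\pi}D)\beta(X), \end{equation*} with $E_{\pm1}\in\bigcap_{p>0}\mathbf{S}_p$; by Lemma \ref{lem2_1}, the leading eigenvalue asymptotics of $\tilde{H}_{\pm1}$ are therefore those of $\Psi_{\pm1}$.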
\begin{lem}\label{R} For $j=1,2,\dots,d-1$, let $R_j:L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$ be the integral operator \begin{equation*} \big(R_jf\big)(t)=\int_0^{+\infty}\sqrt{w(t)}\frac{f(s)}{(s+t)^j}\sqrt{w(s)}\dd{s}, \ \forall t>0, \ \forall f\in L^2(\mathbb{R}_+). \end{equation*} Then $R_j\in\bigcap_{p>0}\mathbf{S}_p$, for all $j=1,2,\dots,d-1$. \end{lem} \begin{proof} Let $U:L^2(\mathbb{R}_+)\to L^2(\mathbb{R})$ be the unitary transformation that is given by \begin{equation}\label{13} \big(Uf\big)(x)=e^{\frac{x}{2}}f(e^x), \ \forall x\in\mathbb{R}, \ \forall f\in L^2(\mathbb{R}_+). \end{equation} Therefore, by applying the change of variable $s=e^y$ and setting $x=\log t$, \begin{equation*} \big(R_jf\big)(e^x)=\int_{\mathbb{R}}\sqrt{w(e^x)}\frac{f(e^y)e^y}{(e^x+e^y)^j}\sqrt{w(e^y)}\dd{y}, \ \forall x\in\mathbb{R}, \ \forall f\in L^2(\mathbb{R}_+). \end{equation*} Moreover, observe that $e^x+e^y=2e^{\frac{x+y}{2}}\cosh(\frac{x-y}{2})$, so that, for any $f\in L^2(\mathbb{R}_+)$, \begin{equation*} \big(UR_jf\big)(x)=\int_{\mathbb{R}}\sqrt{2^{-j}e^{-(j-1)x}w(e^x)}\frac{\big(Uf\big)(y)}{\cosh[j](\frac{x-y}{2})}\sqrt{2^{-j}e^{-(j-1)y}w(e^y)}\dd{y}, \ \forall x\in\mathbb{R}, \end{equation*} for all $j=1,2,\dots,d-1$. For any $j=1,2,\dots,d-1$, define the functions \begin{equation}\label{llll} \alpha_j(x)=2^{-j}e^{-(j-1)x}w(e^x), \ \forall x\in\mathbb{R}. \end{equation} Then \begin{equation*} R_j=U^*\alpha_j^{1/2}(X)T_j\alpha_j^{1/2}(X)U, \end{equation*} where the operator $T_j:L^2(\mathbb{R})\to L^2(\mathbb{R})$ is the convolution operator with the function $\phi_j$; i.e. for any $j\in\mathbb{N}$, \begin{equation}\label{l"} \big(T_jf\big)(x)=\big(\phi_j*f\big)(x), \ \forall x\in\mathbb{R}, \ \forall f\in L^2(\mathbb{R}), \end{equation} where $\phi_j$ is given by (\ref{eq2}). In order for $R_j$ to belong to a Schatten class $\mathbf{S}_p$, it is enough to prove that $\alpha_j^{1/2}(X)T_j\alpha_j^{1/2}(X)\in\mathbf{S}_p$, for $j=1,2,\dots,d-1$. To see this, observe that $T_j=\mathcal{F}\beta_j^2(X)\mathcal{F}^*$, where \begin{equation}\label{l''} \beta_j(x)=\sqrt{\check{\phi}_j(x)}, \ \forall x\in\mathbb{R}, \ \forall j\in\mathbb{N}. \end{equation} Note that $\check{\phi}_1(x)=2\pi(\cosh(2\pi^2x))^{-1}$, and the latter is positive for any $x\in\mathbb{R}$. Since $\phi_j=\phi_1^{\,j}$, the function $\check{\phi}_j$ is the $j$-fold convolution $\check{\phi}_1*\dots*\check{\phi}_1$; as the convolution of positive functions is positive, $\check{\phi}_j>0$, and thus $\beta_j$ is well-defined. Then $$\alpha_j^{1/2}(X)T_j\alpha_j^{1/2}(X)=\alpha_j^{1/2}(X)\mathcal{F}\beta_j^2(X)\mathcal{F}^*\alpha_j^{1/2}(X),$$ and Lemma \ref{lem} implies that the latter is unitarily equivalent (modulo kernels) to the pseudo-differential operator $\beta_j(X)\alpha_j(\tfrac{1}{2\pi}D)\beta_j(X)$. Moreover, (\ref{llll}), together with the definition (\ref{eqn0'}) of $\chi_0$, implies that \begin{equation*} \alpha_j(x)=\begin{cases} 0, & x\geq\log\frac{3}{4}\\ \frac{1}{2^j(d-1)!}e^{-(d-j)|x|}\left|x\right|^{-\gamma}, & x\leq-\log 2 \end{cases}, \ \forall j=1,2,\dots,d-1. \end{equation*} Since $\alpha_j(x)$ decays exponentially fast as $x\to-\infty$ and vanishes for large positive $x$, Lemma \ref{lem2_2} indicates that the pseudo-differential operator $\beta_j(X)\alpha_j(\tfrac{1}{2\pi}D)\beta_j(X)$, and thus $\alpha_j^{1/2}(X)T_j\alpha_j^{1/2}(X)$, belongs to $\bigcap_{p>0}\mathbf{S}_p$, for all $j=1,2,\dots,d-1$.
\end{proof} \begin{lem}\label{lem2'} Let $\tilde{H}_1$ and $\tilde{H}_{-1}$ be the Hankel operators that were defined before, with parameter sequences $\tilde{a}_1(|\mathbf{j}|)$ and $\tilde{a}_{-1}(|\mathbf{j}|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$, respectively; where $\tilde{a}_{\pm1}$ have been defined in (\ref{1''}). Then there exist two pairs of operators $S_1$, $S_{-1}$ and $E_1$, $E_{-1}$, defined on $L^2(\mathbb{R}_+)$, such that \begin{enumerate}[(i)] \item $\tilde{H}_{\pm1}$ is unitarily equivalent (modulo kernels) to $S_{\pm1}$, \item $E_{\pm1}\in\bigcap_{p>0}\mathbf{S}_p$, and \item $S_{\pm1}-E_{\pm1}$ is unitarily equivalent (modulo kernels) to a pseudo-differential operator $\Psi_{\pm1}:L^2(\mathbb{R})\to L^2(\mathbb{R})$. More precisely, $\Psi_{\pm1}=\beta(X)\alpha_{\pm1}(\frac{1}{2\pi}D)\beta(X)$, where \begin{equation}\label{L17} \alpha_{\pm1}(x)=2^{-d}e^{-(d-1)x}b_{\pm1}w(e^x), \ \ \beta(x)=\sqrt{\check{\phi}_d(x)}, \ \forall x\in\mathbb{R}, \end{equation} and $\phi_d$ is defined in (\ref{eq2}). \end{enumerate} \end{lem} \begin{proof} First of all, notice that in Lemma \ref{R}, we proved that $\beta$ is well-defined, since $\beta=\beta_d$, where the latter is given by (\ref{l''}). We prove the assertion for $\tilde{H}_1$ and the proof for $\tilde{H}_{-1}$ is completely analogous. Moreover, we can assume that $b_1=1$, otherwise work with $\frac{1}{b_1}\tilde{H}_1$.\\ \ \\ \noindent\textit{(i)} Let $\mathbf{x},\mathbf{y}\in\ell^2(\mathbb{N}_0^d)$. Then \begin{equation*} \begin{split} (\tilde{H}_1\mathbf{x},\mathbf{y}) & = \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d} \tilde{a}_1(|\mathbf{i}+\mathbf{j}|)\mathbf{x}(\mathbf{j})\overline{\mathbf{y}}(\mathbf{i})\\ & = \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d}\int_{0}^{+\infty}w(t)e^{-(|\mathbf{i}+\mathbf{j}|)t}\dd{t}\mathbf{x}(\mathbf{j})\overline{\mathbf{y}}(\mathbf{i})\\ & = (L_1\mathbf{x},L_1\mathbf{y}), \end{split} \end{equation*} where $L_1:\ell^2(\mathbb{N}_0^d)\to L^2(\mathbb{R}_+)$ is defined by \begin{equation}\label{14'} \big(L_1\mathbf{x}\big)(t)=\sqrt{w(t)}\sum_{\mathbf{j}\in\mathbb{N}_0^d}e^{-|\mathbf{j}|t}\mathbf{x}(\mathbf{j}), \ \forall t\in\mathbb{R}_+, \ \forall \mathbf{x}\in\ell^2(\mathbb{N}_0^d). \end{equation} Notice that the interchange of summation and integration is justified by the uniform convergence of $\sum_{\mathbf{j}\in\mathbb{N}_0^d}e^{-|\mathbf{j}|t}$, in $\mathbb{R}_+$. Therefore, $\tilde{H}_1=L^*_1L_1$. Moreover, it is not difficult to verify that the formula for the adjoint operator $L_1^*:L^2(\mathbb{R}_+)\to\ell^2(\mathbb{N}_0^d)$ is the following: \begin{equation}\label{15'} \big(L^*_1f\big)(\mathbf{j})=\int_0^{+\infty}\sqrt{w(t)}f(t)e^{-|\mathbf{j}|t}\dd{t}, \ \forall \mathbf{j}\in\mathbb{N}_0^d, \ \forall f\in L^2(\mathbb{R}_+). \end{equation} In addition, Lemma \ref{lem} implies that the non-zero parts of $\tilde{H}_1$ and $S_1:=L_1L_1^*$ are unitarily equivalent. Now observe that $S_1:L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$ and \begin{equation}\label{16'} \begin{split} \big(S_1f\big)(t) & = \sqrt{w(t)} \sum_{\mathbf{j}\in\mathbb{N}_{0}^{d}}\int_{0}^{+\infty} f(s) \sqrt{w(s)} e^{-(t+s)|\mathbf{j}|}\dd{s}\\ & = \int_{0}^{+\infty} \sqrt{w(t)}\frac{f(s)}{\left( 1-e^{-(s+t)} \right)^d}\sqrt{w(s)}\dd{s}, \ \forall t\in\mathbb{R}_+, \ \forall f\in L^2(\mathbb{R}_+).
\end{split} \end{equation} \begin{remark} Observe that the respective formulae for $L_{-1}$ and $L_{-1}^*$, assuming that $b_{-1}=1$, will be \begin{equation}\label{14''} \big(L_{-1}\mathbf{x}\big)(t)=\sqrt{w(t)}\sum_{\mathbf{j}\in\mathbb{N}_0^d}(-1)^{|\mathbf{j}|}e^{-|\mathbf{j}|t}\mathbf{x}(\mathbf{j}), \ \forall t\in\mathbb{R}_+, \ \forall \mathbf{x}\in\ell^2(\mathbb{N}_0^d), \end{equation} and \begin{equation}\label{15''} \big(L^*_{-1}f\big)(\mathbf{j})=(-1)^{|\mathbf{j}|}\int_0^{+\infty}\sqrt{w(t)}f(t)e^{-|\mathbf{j}|t}\dd{t}, \ \forall \mathbf{j}\in\mathbb{N}_0^d, \ \forall f\in L^2(\mathbb{R}_+), \end{equation} so that $S_{-1}=S_1$. \end{remark} \ \\ \ \noindent\textit{(ii)} Recall formula (\ref{16'}) for $S_1$ and observe that \begin{equation*} \frac{1}{\left( 1-e^{-(s+t)} \right)^d}=\frac{1}{(s+t)^d}+\rho(s+t), \end{equation*} where $\rho$ is real analytic on $\mathbb{R}\setminus\{0\}$, with a pole of order $d-1$ at $0$. Now define the operator $E_1:L^2(\mathbb{R}_+)\to L^2(\mathbb{R}_+)$, with \begin{equation}\label{E} \big(E_1f\big)(t)=\int_{0}^{+\infty} \sqrt{w(t)}f(s)\rho(s+t)\sqrt{w(s)}\dd{s}, \ \forall t\in\mathbb{R}_+, \ \forall f\in L^2(\mathbb{R}_+). \end{equation} The function $\rho$ can be written as \begin{equation*} \rho(t)= \sum_{j=1}^{d-1}\frac{c_{-j}}{t^j}+\rho_{\text{an}}(t), \ \forall t\neq0, \end{equation*} where $\rho_{\text{an}}$ is real analytic and $c_{-j}$ are real constants. Now notice that the function $\sqrt{\chi_0(t)}\rho_{\text{an}}(s+t)\sqrt{\chi_0(s)}$, where $\chi_0$ is defined in (\ref{eqn0'}), belongs to $C^\infty_c(\mathbb{R}^2)$. Then, according to Lemma \ref{lem3}, the integral operator with kernel $\sqrt{\chi_0(t)}\rho_{\text{an}}(s+t)\sqrt{\chi_0(s)}$ belongs to any Schatten class $\mathbf{S}_p$. Moreover, the function $t^{d-1}|\log t|^{-\gamma}$ is bounded near $0$, so that the integral operator with kernel $\sqrt{w(t)}\rho_{\mathrm{an}}(s+t)\sqrt{w(s)}$ belongs to any Schatten class $\mathbf{S}_p$. It remains to prove the same for the integral operators $R_j$ with kernel $\sqrt{w(t)}(s+t)^{-j}\sqrt{w(s)}$, where $j=1,2,\dots,d-1$, which holds true due to Lemma \ref{R}.\\ \ \\ \noindent\textit{(iii)} By recalling the definitions of $S_1$ in (\ref{16'}) and $E_1$ in (\ref{E}), $S_1-E_1$ is also an operator on $L^2(\mathbb{R}_+)$, described by \begin{equation*} (S_1-E_1)f(t)=\int_{0}^{+\infty} \sqrt{w(t)}\frac{f(s)}{(s+t)^d}\sqrt{w(s)}\dd{s}, \ \forall t\in\mathbb{R}_+, \ \forall f\in L^2(\mathbb{R}_+). \end{equation*} Let $U:L^2(\mathbb{R}_+)\to L^2(\mathbb{R})$ be the unitary transformation that was defined in (\ref{13}). Then, by applying the change of variable $s=e^y$ and setting $x=\log t$, \begin{equation*} (S_1-E_1)f(e^x)=\int_{\mathbb{R}}\sqrt{w(e^x)}\frac{f(e^y)e^y}{(e^x+e^y)^d}\sqrt{w(e^y)}\dd{y}, \ \forall x\in\mathbb{R}. \end{equation*} As a result, for any $f\in L^2(\mathbb{R}_+)$, \begin{equation*} U(S_1-E_1)f(x)=\int_{\mathbb{R}}\sqrt{2^{-d}e^{-(d-1)x}w(e^x)}\frac{\big(Uf\big)(y)}{\cosh[d](\frac{x-y}{2})}\sqrt{2^{-d}e^{-(d-1)y}w(e^y)}\dd{y}, \ \forall x\in\mathbb{R}. \end{equation*} Then, $S_1-E_1=U^*\alpha_1^{1/2}(X)T_d\alpha_1^{1/2}(X)U$, where \begin{equation*} \alpha_1(x)=2^{-d}e^{-(d-1)x}w(e^x), \ \forall x\in\mathbb{R}, \end{equation*} and $T_d$ is defined in (\ref{l"}). Notice that $T_d=\mathcal{F}\beta^2(X)\mathcal{F}^*$, where $\beta$ is defined in (\ref{L17}).
Therefore, \begin{equation*} \begin{split} S_1-E_1 & =U^*\alpha_1^{1/2}(X)T_d\alpha_1^{1/2}(X)U\\ & = U^*\alpha_1^{1/2}(X)\mathcal{F}\beta^2(X)\mathcal{F}^*\alpha_1^{1/2}(X)U\\ & \simeq \beta(X)\mathcal{F}^*\alpha_1(X)\mathcal{F}\beta(X), \end{split} \end{equation*} where the last equivalence is obtained by Lemma \ref{lem} and the fact that $U$ is unitary. Therefore, if $\Psi_1:=\beta(X)\alpha_1(\tfrac{1}{2\pi}D)\beta(X)$, where $\alpha_1$ and $\beta$ are given by (\ref{L17}), then $S_1-E_1$ is unitarily equivalent (modulo kernels) to $\Psi_1$. \end{proof} \section{Weyl-type spectral asymptotics}\label{WA} In this section we derive Weyl-type spectral asymptotics for the operators $\tilde{H}_{\pm1}$, that were defined in \S\ref{PDO}, and for the model operator $\tilde{H}$. The latter is obtained by using the asymptotic orthogonality of $\tilde{H}_1$ and $\tilde{H}_{-1}$; see Lemma \ref{lem4'}. \begin{lem}\label{lem5'} The eigenvalue asymptotics for the operator $\tilde{H}_1$, that was defined in \S\ref{PDO}, are given by \begin{equation}\label{20'} \lambda_n^{\pm}(\tilde{H}_1)=C_1^\pm n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty, \end{equation} where the constants $C^\pm_1$ are given by a formula similar to (\ref{eq2_12}): \begin{equation}\label{19'} C^\pm_1=\frac{1}{2^d(d-1)!}\left(b_{1}\right)_{\pm}\left[ \int_{\mathbb{R}} \check{\phi}_d^{\frac{1}{\gamma}}(x) \dd{x} \right]^{\gamma}, \end{equation} where the function $\phi_d$ is defined in (\ref{eq2}) and $\left(b_{1}\right)_{\pm}=\max\{\pm b_1,0\}$. Similar asymptotics are obtained for $\tilde{H}_{-1}$ by substituting $b_1$ with $b_{-1}$ and thus, obtaining the constants $C_{-1}^\pm$. \end{lem} \begin{proof} In Lemma \ref{lem2'} we proved that $\tilde{H}_1$ is unitarily equivalent (modulo kernels) to an operator $S_1$, so that its spectral asymptotics can be retrieved from those of $S_1$. Moreover, $S_1=(S_1-E_1)+E_1$, where $E_1$ is also described in Lemma \ref{lem2'}. In order to obtain the spectral asymptotics of $S_1$, we aim to use Lemma \ref{lem2_1}. In Lemma \ref{lem2'}, it is proved that $S_1-E_1$ is unitarily equivalent (modulo kernels) to the pseudo-differential operator $\Psi_1=\beta(X)\alpha_1(\tfrac{1}{2\pi}D)\beta(X)$, where $\alpha_1$ and $\beta$ are given by (\ref{L17}). Then \begin{equation*} \alpha_1(\tfrac{x}{2\pi})=\left\{ \begin{array}{lcl} \frac{b_1(2\pi)^\gamma}{2^d(d-1)!}|x|^{-\gamma}(1+o(1)) & , & \text{when } x\to-\infty\\ \ \\ 0 &, & \text{when } x\to+\infty \end{array}. \right. \end{equation*} Moreover, $\beta^2$ belongs to the Schwartz class $\mathcal{S}(\mathbb{R})$. Indeed, by differentiating, one sees that $\phi_d=\frac{1}{\cosh[d](\frac{\cdot}{2})}\in\mathcal{S}(\mathbb{R})$, and consequently $\beta^2=\check{\phi}_d\in\mathcal{S}(\mathbb{R})$, too. Therefore, $$|\beta(x)|=O\left( \left< x \right>^{-s} \right), \ |x|\to+\infty,$$ for every $s>0$. Thus, all the conditions of Lemma \ref{lem2_2} are satisfied and therefore, the eigenvalues of $\Psi_1$, $\lambda_n^\pm(\Psi_1)$, follow the asymptotics below: \begin{equation*} \lambda_n^{\pm}(\Psi_1)=C_1^\pm n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty, \end{equation*} where the constants $C^\pm_1$ are described by (\ref{19'}). Finally, in order to apply Lemma \ref{lem2_1}, it remains to prove that $s_n(E_1)=o(n^{-\gamma})$, for $n\to+\infty$. Indeed, notice that, according to Lemma \ref{lem2'}, $E_1\in\cap_{p>0}\mathbf{S}_p$. Thus, the singular values of $E_1$ decay faster than any polynomial.
As a result, Lemma \ref{lem2_1} yields that the eigenvalue asymptotics of $\tilde{H}_1$ are given by (\ref{20'}). \end{proof} \begin{lem}\label{lem4'} Let $\tilde{H}_1$ and $\tilde{H}_{-1}$ be the operators that were defined in \S\ref{PDO}. Then $\tilde{H}_{-1}\tilde{H}_1$ and $\tilde{H}_{1}\tilde{H}_{-1}$ belong to $\mathbf{S}_p$, for any $p>0$. Therefore, $\tilde{H}_{1}$ and $\tilde{H}_{-1}$ are asymptotically orthogonal. \end{lem} \begin{proof} First assume that $b_{-1}=b_1=1$; otherwise, work with $\frac{1}{b_{-1}}\tilde{H}_{-1}$ or $\frac{1}{b_1}\tilde{H}_1$. In the proof of Lemma \ref{lem2'} we saw that $\tilde{H}_{\pm1}=L^*_{\pm1}L_{\pm1}$. Recall that $L_1$ and $L_1^*$ are given by (\ref{14'}) and (\ref{15'}), respectively, while $L_{-1}$ and $L_{-1}^*$ are defined in (\ref{14''}) and (\ref{15''}). Then \begin{equation*} \tilde{H}_{-1}\tilde{H}_1=L^*_{-1}L_{-1}L^*_{1}L_{1}. \end{equation*} Because $L_{-1}^*$ and $L_1$ are bounded, it is enough to prove that $L_{-1}L^*_{1}\in\mathbf{S}_p$, for all $p>0$. To this end, we follow the steps that yielded formula (\ref{16'}) (for $S_1$). Then, for every $f\in L^2(\mathbb{R}_+)$, \begin{equation*} \begin{split} \big(L_{-1}L^*_{1}f\big)(t) & =\int_0^{+\infty}\sqrt{w(t)}\left(\sum_{\mathbf{j}\in\mathbb{N}_0^d}(-1)^{|\mathbf{j}|}e^{-(t+s)|\mathbf{j}|}\right)f(s)\sqrt{w(s)}\dd{s}\\ & =\int_{0}^{+\infty}\sqrt{w(t)}\frac{f(s)}{\left(1+e^{-(t+s)}\right)^d}\sqrt{w(s)}\dd{s}, \ \forall t\in\mathbb{R}_+. \end{split} \end{equation*} Observe that $(1+e^{-(t+s)})^{-d}$, as a function of $(s,t)$, belongs to $C^\infty(\mathbb{R}^2)$. Moreover, by the way that the function $\chi_0$ has been defined (see (\ref{eqn0'})), $\sqrt{\chi_0(t)}(1+e^{-(t+s)})^{-d}\sqrt{\chi_0(s)}\in C^\infty_c(\mathbb{R}^2)$. Thus, Lemma \ref{lem3} implies that the integral operator with kernel $\sqrt{\chi_0(t)}(1+e^{-(t+s)})^{-d}\sqrt{\chi_0(s)}$ belongs to every Schatten class $\mathbf{S}_p$. Finally, the same holds true for the operator $L_{-1}L_1^*$, since the function $t^{d-1}|\log t|^{-\gamma}$ is bounded near $0$. In order to prove that $\tilde{H}_{1}\tilde{H}_{-1}\in\bigcap_{p>0}\mathbf{S}_p$, it is enough to notice that $\tilde{H}_{1}\tilde{H}_{-1}=(\tilde{H}_{-1}\tilde{H}_1)^*$. \noindent Regarding the asymptotic orthogonality, it is enough to notice that, due to Lemma \ref{lem5'}, both $\tilde{H}_{-1}$ and $\tilde{H}_1$ belong to $\mathbf{S}_{\frac{1}{\gamma},\infty}$, and that $\tilde{H}_{-1}\tilde{H}_1$ and $\tilde{H}_{1}\tilde{H}_{-1}$ belong to $\bigcap_{p>0}\mathbf{S}_p\subset\mathbf{S}^0_{\frac{1}{2\gamma},\infty}$. \end{proof} \begin{lem}\label{prop0} The eigenvalues of the model operator $\tilde{H}$ obey the asymptotic formula below: \begin{equation}\label{eq_9} \lambda_n^\pm(\tilde{H})=C^{\pm}n^{-\gamma}+o(n^{-\gamma}), \end{equation} where the constants $C^{\pm}$ are defined in (\ref{eq2_12}). \end{lem} \begin{proof} According to Lemma \ref{lem5'}, the eigenvalue asymptotics of $\tilde{H}_{1}$ are described by (\ref{20'}) and those of $\tilde{H}_{-1}$ by a similar formula (with constants $C_{-1}^\pm$). But, according to Lemma \ref{lem4'}, $\tilde{H}_{-1}$ and $\tilde{H}_1$ are asymptotically orthogonal. Then, since $\tilde{H}=\tilde{H}_{1}+\tilde{H}_{-1}$, Lemma \ref{lem1} yields that \begin{equation*} \lambda_n^\pm(\tilde{H})=\big( (C_1^\pm)^{\frac{1}{\gamma}}+(C_{-1}^\pm)^{\frac{1}{\gamma}} \big)^\gamma n^{-\gamma}+o(n^{-\gamma}), \ n\to+\infty, \end{equation*} which gives (\ref{eq_9}).
\end{proof} \section{Reduction to one-variable weighted Hankel operators}\label{RWHO} In this section we describe the reduction of multi-variable Hankel matrices to one-variable weighted Hankel operators. This will prove to be a useful tool for the derivation of spectral estimates for the error terms; a small numerical illustration of the reduction is given in the closing remark of the paper. Define \begin{equation*} W_d(j):=|\{\mathbf{k}\in\mathbb{N}_0^d: \ |\mathbf{k}|=j\}|= {j+d-1 \choose d-1}, \ \forall j\in\mathbb{N}_0; \end{equation*} the last equality can be checked by induction on $d$. In the sequel, consider the bounded linear operator $J: \ell^2(\mathbb{N}_0^d) \to \ell^2(\mathbb{N}_0)$, given by \begin{equation*} \big(J\mathbf{x}\big)(i)=\big(W_d(i)\big)^{-\frac{1}{2}}\sum_{\{\mathbf{k}\in\mathbb{N}_0^d: \ |\mathbf{k}|=i\}}\mathbf{x}(\mathbf{k}), \ \forall i\in\mathbb{N}_0, \ \forall \mathbf{x}\in\ell^2(\mathbb{N}_0^d). \end{equation*} Besides, it is not difficult to check that the adjoint of $J$ is given by \begin{equation*} \big( J^*x \big)(\mathbf{i})=\big(W_d(|\mathbf{i}|)\big)^{-\frac{1}{2}}x(|\mathbf{i}|), \ \forall \mathbf{i}\in\mathbb{N}_0^d, \ \forall x\in\ell^2(\mathbb{N}_0). \end{equation*} In addition, $J^*$ is an isometry. Indeed, for any $x\in\ell^2(\mathbb{N}_0)$, \begin{equation*} \left\| J^*x \right\|^2 = \sum_{\mathbf{i}\in\mathbb{N}_0^d}\big(W_d(|\mathbf{i}|)\big)^{-1}\big|x(|\mathbf{i}|)\big|^2 = \sum_{i\in\mathbb{N}_0} |x(i)|^2=\|x\|^2. \end{equation*} This shows that $J$ is a partial isometry. Furthermore, for an arbitrary Hankel operator $H_\mathbf{a}:\ell^2(\mathbb{N}_0^d)\to\ell^2(\mathbb{N}_0^d)$ with parameter sequence $\mathbf{a}(\mathbf{j})=a(|\mathbf{j}|)$, $\forall \mathbf{j}\in\mathbb{N}_0^d$, the following relation holds true: \begin{equation}\label{00} (H_\mathbf{a}\mathbf{x},\mathbf{y})=(J^*\Gamma J\mathbf{x},\mathbf{y}), \ \forall \mathbf{x},\mathbf{y}\in\ell^2(\mathbb{N}_0^d), \end{equation} where $\Gamma:\ell^2(\mathbb{N}_0)\to \ell^2(\mathbb{N}_0)$ is the weighted Hankel operator defined by \begin{equation}\label{eq14} \left( \Gamma x \right)(i) = \sum_{j\in\mathbb{N}_0}\sqrt{W_d(i)}a(i+j)\sqrt{W_d(j)}x(j), \ \forall i\in\mathbb{N}_0, \ \forall x\in\ell^2(\mathbb{N}_0). \end{equation} Indeed, it is enough to observe that, for any $\mathbf{x}$ and $\mathbf{y}$ in $\ell^2(\mathbb{N}_0^d)$, \begin{equation*} \begin{split} (H_\mathbf{a}\mathbf{x},\mathbf{y}) & = \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d}\mathbf{a}(\mathbf{i}+\mathbf{j})\mathbf{x}(\mathbf{j})\overline{\mathbf{y}(\mathbf{i})}\\ & = \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d}a(|\mathbf{i}+\mathbf{j}|)\mathbf{x}(\mathbf{j})\overline{\mathbf{y}(\mathbf{i})}\\ & = \sum_{i,j\in\mathbb{N}_0}a(i+j)\sum_{\{\mathbf{k}\in\mathbb{N}_0^d:|\mathbf{k}|=j\}}\mathbf{x}(\mathbf{k}) \overline{\sum_{\{\mathbf{k}\in\mathbb{N}_0^d:|\mathbf{k}|=i\}}\mathbf{y}(\mathbf{k})}\\ & = \left(\Gamma J\mathbf{x},J\mathbf{y}\right). \end{split} \end{equation*} This discussion leads to the following lemma. \begin{lem}\label{label2} Let $a=\{a(n)\}_{n\in\mathbb{N}_0}$ and $H_\mathbf{a}:\ell^2(\mathbb{N}^d_0)\to\ell^2(\mathbb{N}^d_0)$ be a Hankel operator with parameter sequence $\mathbf{a}(\mathbf{j})=a(\left|\mathbf{j}\right|)$, for all $\mathbf{j}\in\mathbb{N}^d_0$.
Then $H_\mathbf{a}$ belongs to any of the ideals $\mathbf{S}_p$, $\mathbf{S}_{p,q}$, $\mathbf{S}^{0}_{p,\infty}$, where $p>0$ and $q\in(0,+\infty]$, if and only if $\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}$ does, where $(\Phi\phi)(n)=a(n)$, for all $n\in\mathbb{N}_0$, and $\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}$ is the weighted Hankel operator described by the matrix $[(i+1)^{\frac{d-1}{2}}(\Phi\phi)(i+j)(j+1)^{\frac{d-1}{2}}]_{i,j\geq0}$. \end{lem} \begin{proof} In (\ref{00}), we showed that $H_\mathbf{a}$ and $\Gamma$ have unitarily equivalent non-zero parts. Thus, the operators $H_\mathbf{a}$ and $\Gamma$ have identical non-zero spectra. Besides, it is easily verified that $\Gamma=\mathcal{D}\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}\mathcal{D}$, where $\mathcal{D}=[\mathcal{D}(j)]_{j\in\mathbb{N}_0}$ is the diagonal matrix defined by $\mathcal{D}(j)=\left(\frac{W_d(j)}{(j+1)^{d-1}}\right)^{\frac{1}{2}}$, and $(\Phi\phi)(j)=a(j)$, $\forall j\in\mathbb{N}_0$. Notice that $\mathcal{D}$ is a bounded operator on $\ell^2(\mathbb{N}_0)$ with bounded inverse; indeed, $W_d(j)\sim\frac{j^{d-1}}{(d-1)!}$ as $j\to+\infty$, so the diagonal entries $\mathcal{D}(j)$ are bounded and bounded away from $0$. Therefore, since the classes $\mathbf{S}_p$, $\mathbf{S}_{p,q}$ and $\mathbf{S}^{0}_{p,\infty}$ are ideals of compact operators, and $\mathcal{D}$ is bounded with bounded inverse, $\Gamma$ belongs to any of these ideals if and only if $\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}$ does. Thus, the observation that the non-zero spectra of $H_\mathbf{a}$ and $\Gamma$ are identical gives the result. \end{proof} \section{Schatten class inclusions of the error terms}\label{SCI} In this section we present spectral estimates for Hankel matrices $H_{\mathbf{a}}$ with parameter sequence $\mathbf{a}(\mathbf{j})=a(\left|\mathbf{j}\right|)$, where $a(j)$ decays faster than $j^{-d}\left|\log j\right|^{-\gamma}$ at infinity, for some positive $\gamma$. These estimates will eventually yield the spectral estimates of the error terms $g_1(j)$ and $(-1)^{j}g_{-1}(j)$, which are defined in Theorem \ref{thm0}. Let $\mathbf{v}=\{\mathbf{v}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}^d_0}$ be a sequence of positive numbers. For any $p\in(0,+\infty)$, we define the spaces $\ell_{\mathbf{v}}^{p}(\mathbb{N}_0^d)$ and $\ell_{\mathbf{v}}^{p,\infty}(\mathbb{N}_0^d)$ as follows: \begin{equation*} \begin{array}{ccc} \mathbf{x}\in\ell_{\mathbf{v}}^{p}(\mathbb{N}_0^d) & \Leftrightarrow & \|\mathbf{x}\|_{\ell_{\mathbf{v}}^{p}}:=\displaystyle\left(\sum_{\mathbf{j}\in\mathbb{N}_{0}^{d}}|\mathbf{x}(\mathbf{j})|^p\mathbf{v}(\mathbf{j})\right)^{\frac{1}{p}}<+\infty,\\ \ \\ \mathbf{x}\in\ell_{\mathbf{v}}^{p,\infty}(\mathbb{N}_0^d) & \Leftrightarrow & \|\mathbf{x}\|_{\ell_{\mathbf{v}}^{p,\infty}}:= \displaystyle\sup_{\lambda>0}\lambda \left(\sum_{\left\{\mathbf{j}\in\mathbb{N}_{0}^{d}: \ |\mathbf{x}(\mathbf{j})|>\lambda\right\}}\mathbf{v}(\mathbf{j})\right)^{\frac{1}{p}}<+\infty. \end{array} \end{equation*} For $p=+\infty$, the space $\ell_{\mathbf{v}}^{\infty}(\mathbb{N}_0^d)$ is identified with the usual $\ell^{\infty}(\mathbb{N}_0^d)$. The case of $\gamma\in(0,\frac{1}{2})$ will be addressed by using the following interpolation lemma. \begin{lem}\label{lem_0} Let $H_{\mathbf{a}}$ be a Hankel matrix with parameter sequence $\mathbf{a}$, and let $\mathbf{v}=\{\mathbf{v}(\mathbf{j})\}_{\mathbf{j}\in\mathbb{N}^d_0}$ be given by $\mathbf{v}(\mathbf{j})=\left(|\mathbf{j}|+1\right)^{-d}$, $\forall \mathbf{j}\in\mathbb{N}_0^d$.
Then, for any $p\in[2,+\infty)$, there exists a positive constant $M_p$ such that \begin{equation}\label{label1} \|H_\mathbf{a}\|_{\mathbf{S}_{p,\infty}}\leq M_p \left\|\frac{\mathbf{a}}{\mathbf{v}}\right\|_{\ell_{\mathbf{v}}^{p,\infty}}. \end{equation} \end{lem} \begin{proof} The proof is based on the real interpolation method (cf. \cite[Chapter 3]{bergh2012interpolation}). Indeed, observe that \begin{equation*} \begin{split} \|H_\mathbf{a}\|_{\mathbf{S}_2}^{2} & = \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_0^d}|\mathbf{a}(\mathbf{i}+\mathbf{j})|^2\\ & = \sum_{i_1,j_1\geq 0}\sum_{i_2,j_2\geq 0}\dots\sum_{i_d,j_d\geq 0}|\mathbf{a}(i_1+j_1,i_2+j_2,\dots,i_d+j_d)|^2\\ & = \sum_{j_1,j_2,\dots,j_d\geq 0}(j_1+1)(j_2+1)\dots(j_d+1)|\mathbf{a}(j_1,j_2,\dots,j_d)|^2\\ & \leq \sum_{\mathbf{j}\in\mathbb{N}_0^d}(|\mathbf{j}|+1)^d|\mathbf{a}(\mathbf{j})|^2\\ & = \sum_{\mathbf{j}\in\mathbb{N}_0^d}(|\mathbf{j}|+1)^{2d}|\mathbf{a}(\mathbf{j})|^2 (|\mathbf{j}|+1)^{-d}, \end{split} \end{equation*} so that $\|H_\mathbf{a}\|_{\mathbf{S}_2}\leq\|\frac{\mathbf{a}}{\mathbf{v}}\|_{\ell_{\mathbf{v}}^2}$. In addition, if $\frac{\mathbf{a}}{\mathbf{v}}\in\ell^{\infty}$, then \begin{equation*} |\mathbf{a}(\mathbf{j})|\leq\frac{\|\frac{\mathbf{a}}{\mathbf{v}}\|_{\ell^\infty}}{(|\mathbf{j}|+1)^d} \leq \frac{\|\frac{\mathbf{a}}{\mathbf{v}}\|_{\ell^\infty}}{(j_1+1)(j_2+1)\dots(j_d+1)}, \ \forall \mathbf{j}\in\mathbb{N}_0^d. \end{equation*} Thus, \begin{equation*} \begin{split} \left| (H_\mathbf{a}\mathbf{x},\mathbf{y}) \right| & \leq \sum_{\mathbf{i},\mathbf{j}\in\mathbb{N}_{0}^{d}} \left|\mathbf{a}(\mathbf{i}+\mathbf{j})\right|\left|\mathbf{x}(\mathbf{j})\right|\left|\mathbf{y}(\mathbf{i})\right|\\ & \leq \left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\infty} \sum_{i_1,\dots,i_d,j_1,\dots,j_d\geq 0}\frac{\left|\mathbf{x}(\mathbf{j})\right|\left|\mathbf{y}(\mathbf{i})\right|}{(i_1+j_1+1)\dots(i_d+j_d+1)}\\ & \leq \pi^d \left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\infty} \left\|\mathbf{x}\right\| \left\|\mathbf{y}\right\|, \ \forall \mathbf{x},\mathbf{y}\in\ell^2(\mathbb{N}_0^d), \end{split} \end{equation*} where the last line follows from the boundedness of the tensor product of $d$ Hilbert matrices. Therefore, we have shown that there are constants $M_2=1$ and $M_\infty=\pi^d$ such that \begin{equation*} \|H_\mathbf{a}\|_{\mathbf{S}_2} \leq M_2 \left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\ell_{\mathbf{v}}^2} \ \ \text{and } \ \|H_\mathbf{a}\|\leq M_\infty \left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\ell^\infty}, \end{equation*} and real interpolation implies that, for any $p\in(2,+\infty)$, there exists a positive constant $M_p$ such that (\ref{label1}) holds true. \end{proof} \begin{lem}\label{thm_1} Let $\gamma>0$, let $M(\gamma)$ be defined in (\ref{8}), and let $\{a(j)\}_{j\in\mathbb{N}_{0}}$ be a real-valued sequence that satisfies \begin{equation}\label{eq_3} a^{(m)}(j)=O\big(j^{-d-m} (\log j)^{-\gamma} \big), \ j\to+\infty, \end{equation} for every $m=0,1,\dots,M(\gamma)$. Then the Hankel operator $H_\mathbf{a}$, with parameter sequence $\mathbf{a}(\mathbf{j})=a(|\mathbf{j}|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$, is compact and its singular values satisfy the estimate \begin{equation}\label{label} s_n(H_\mathbf{a})=O(n^{-\gamma}), \ n\to+\infty.
\end{equation} In addition, there exists a positive constant $C_{\gamma}=C(\gamma)$ such that \begin{equation}\label{eq_4} \|H_\mathbf{a}\|_{\mathbf{S}_{p,\infty}}\leq C_{\gamma} \sum_{m=0}^{M(\gamma)}\sup_{j\geq 0}(j+1)^{d+m}\big( \log(j+2) \big)^{\gamma}|a^{(m)}(j)|, \end{equation} where $p=\frac{1}{\gamma}$. \end{lem} \begin{proof} We split the proof into two steps: in the first step we prove the result for $\gamma\in(0,\frac{1}{2})$, and in the second we treat the case $\gamma\geq\frac{1}{2}$.\\ \ \\ \noindent\underline{\textbf{Step 1:}} Let $\gamma\in(0,\frac{1}{2})$. Then $p=\frac{1}{\gamma}\in(2,+\infty)$ and thus, in order to prove that $H_\mathbf{a}\in\mathbf{S}_{p,\infty}$, it is enough to apply Lemma \ref{lem_0}. To this end, it suffices to show that if $a$ satisfies (\ref{eq_3}), then $\frac{\mathbf{a}}{\mathbf{v}}\in\ell_{\mathbf{v}}^{p,\infty}$, for every $p\in(2,+\infty)$, where $\mathbf{v}$ is defined in Lemma \ref{lem_0}. For $\lambda>0$, \begin{equation*} \begin{split} \left\{ \mathbf{j}\in\mathbb{N}_0^d: \ \frac{|\mathbf{a}(\mathbf{j})|}{\mathbf{v}(\mathbf{j})}>\lambda \right\} & = \left\{ \mathbf{j}\in\mathbb{N}_0^d: \ (|\mathbf{j}|+1)^d |a(|\mathbf{j}|)|>\lambda \right\}\\ & \subset \left\{ \mathbf{j}\in\mathbb{N}_0^d: \ \log(|\mathbf{j}|+2)<\left(\frac{A_0}{\lambda}\right)^p \right\}, \end{split} \end{equation*} where $A_0:=\sup_{j\geq 0}(j+1)^d\big( \log(j+2) \big)^{\gamma}|a(j)|$. Therefore, \begin{equation*} \begin{split} \sum_{\left\{\mathbf{j}\in\mathbb{N}_{0}^{d}: \ \frac{|\mathbf{a}(\mathbf{j})|}{\mathbf{v}(\mathbf{j})}>\lambda \right\}}\frac{1}{(|\mathbf{j}|+1)^d} & \lesssim \sum_{\left\{\mathbf{j}\in\mathbb{N}_{0}^{d}: \ \log(|\mathbf{j}|+2)< \left(\frac{A_0}{\lambda}\right)^{p} \right\}}\frac{1}{(|\mathbf{j}|+2)^d}\\ & = \sum_{\left\{j\in\mathbb{N}_{0}: \ \log(j+2)< \left(\frac{A_0}{\lambda}\right)^{p} \right\}}\frac{W_d(j)}{(j+2)^d}\\ & \lesssim \sum_{\left\{ j\in\mathbb{N}_0: \ \log(j+2)<\left( \frac{A_0}{\lambda} \right)^p \right\}} \frac{(j+2)^{d-1}}{(j+2)^d}\\ & \lesssim \int_{\left\{ \log(x+2)<\left( \frac{A_0}{\lambda} \right)^p \right\}} \frac{1}{x+2}\dd{x}\\ & \lesssim\left(\frac{A_0}{\lambda}\right)^p. \end{split} \end{equation*} Thus, there exists a positive constant $C$ such that $$\lambda^p \sum_{\left\{\mathbf{j}\in\mathbb{N}_{0}^{d}: \ \frac{|\mathbf{a}(\mathbf{j})|}{\mathbf{v}(\mathbf{j})}>\lambda \right\}}\frac{1}{(|\mathbf{j}|+1)^d}\leq (CA_0)^p,$$ which implies, by taking the supremum over positive $\lambda$, that $\left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\ell_{\mathbf{v}}^{p,\infty}}\leq CA_0$. From the last relation and Lemma \ref{lem_0}, we conclude that $$\|H_\mathbf{a}\|_{\mathbf{S}_{p,\infty}}\leq M_p\left\| \frac{\mathbf{a}}{\mathbf{v}} \right\|_{\ell_{\mathbf{v}}^{p,\infty}}\leq M_p CA_0,$$ so that relation (\ref{eq_4}) follows, with $C_{\gamma}=M_pC$.\\ \ \\ \noindent\underline{\textbf{Step 2:}} Assume that $\gamma\geq\frac{1}{2}$ and let $\phi$ be given by \begin{equation*} \phi(z)=\sum_{j\in\mathbb{N}_0}a(j)z^j, \ \forall z\in\overline{\mathbb{D}}. \end{equation*} According to Lemma \ref{label2}, $H_\mathbf{a}$ and $\Gamma_{\Phi\phi}^{\frac{d-1}{2},\frac{d-1}{2}}$ satisfy the same Schatten class inclusions, where $(\Phi\phi)(j)=a(j)$.
Therefore, in order to derive (\ref{label}), Lemma \ref{lem0} shows that it is enough to prove that $\bigoplus_{n\geq 0}2^{n(d-1)}\phi*V_n\in L^{p,\infty}(\mathcal{M},\mu)$, or, in other words, that \begin{equation}\label{eq11} \sup_{s>0} s^p \sum_{n\in\mathbb{N}_0} 2^{n}\big| \{ t\in [-\pi,\pi): |2^{n(d-1)}(\phi*V_n)(e^{it})|>s \} \big|<+\infty. \end{equation} For every non-negative integer $n$ and any positive number $s$, set \begin{equation*} E_n(s):=\{ t\in[-\pi,\pi): |2^{n(d-1)}(\phi*V_n)(e^{it})|>s \}. \end{equation*} The goal is to find an estimate for $|E_n(s)|$ which proves the finiteness of (\ref{eq11}). First of all, notice that $E_n(s)=\varnothing$, for every $s\geq\|2^{n(d-1)}\phi*V_n\|_{\infty}$. An application of (\ref{eq3}) gives that $E_n(s)=\varnothing$, for every $s\geq 2^{n(d-1)} \sum_{j=2^{n-1}}^{2^{n+1}}|a(j)|$. Let $$A_m:=\sup_{j\geq 0}\big|a^{(m)}(j)\big|(j+1)^{d+m}\big( \log(j+2) \big)^{\gamma}, \ \forall m=0,1,\dots,M(\gamma).$$ Therefore, condition (\ref{eq_3}) implies that $E_n(s)=\varnothing$ when \begin{equation*} s\geq2^{n(d-1)} A_0 \sum_{j=2^{n-1}}^{2^{n+1}} (j+1)^{-d} \big( \log(j+2) \big)^{-\gamma}. \end{equation*} Besides, for every $n\geq3$, \begin{equation*} \begin{split} \sum_{j=2^{n-1}}^{2^{n+1}} (j+1)^{-d} \big( \log(j+2) \big)^{-\gamma} & \leq \int_{2^{n-1}-1}^{2^{n+1}} (t+1)^{-d} \big( \log(t+2) \big)^{-\gamma}\dd{t}\\ & \lesssim \int_{n-1}^{n+1} 2^{-s(d-1)} s^{-\gamma}\dd{s}\ (\text{change of variables} \ s=\log_2t)\\ & \lesssim 2^{-n(d-1)}n^{-\gamma}, \end{split} \end{equation*} so that, in general, $$\sum_{j=2^{n-1}}^{2^{n+1}} (j+1)^{-d} \big( \log(j+2) \big)^{-\gamma} \leq C 2^{-n(d-1)} \left< n \right>^{-\gamma}, \ \forall n\geq 0,$$ for some positive constant $C$; without loss of generality, we may assume that $C=C_q$, where $C_q$ appears in (\ref{eq4}). Therefore, $E_n(s)=\varnothing$, for every $n\geq 0$ such that $\left< n \right>\geq N(s)$, where $N(s):=(\frac{C_qA_0}{s})^p$, $\forall s>0$. Besides, by following exactly the same steps, it can be shown that \begin{equation*} \sum_{j=2^{n-1}-M(\gamma)}^{2^{n+1}}(j+1)^m|a^{(m)}(j)|\lesssim A_m 2^{-n(d-1)}\left< n \right>^{-\gamma}, \ \forall m=1,2,\dots,M(\gamma). \end{equation*} Thus, Lemma \ref{lem_2} gives that, for every $q>\frac{1}{M(\gamma)}$ and $n\in\mathbb{N}_0$ such that $M(\gamma)\leq 2^{n-1}$, \begin{equation*} 2^n\|\phi*V_n\|_q^q\lesssim C_q A^q 2^{-n(d-1)q}\left< n \right>^{-\gamma q}, \end{equation*} where $A:=\sum_{m=0}^{M(\gamma)}A_m$. Now notice that, for any positive $q$, \begin{equation*} \begin{split} s^q|E_n(s)| & = \int\limits_{\left\{t\in[-\pi,\pi): \ 2^{n(d-1)}|(\phi*V_n)(e^{it})|>s\right\}} s^q\dd{t}\\ & \leq 2^{n(d-1)q} \|\phi*V_n\|_q^q, \ \forall n\in\mathbb{N}_0. \end{split} \end{equation*} Putting all of this together yields, for every $q\in (\frac{1}{M(\gamma)},p)$ and $s>0$, \begin{equation*} \begin{split} s^p\sum_{n\in\mathbb{N}_0}2^n |E_n(s)| & =s^{p-q} \left( s^q \sum_{\left<n\right>\leq N(s)}2^n |E_n(s)| \right)\\ & \leq s^{p-q} \sum_{\left<n\right>\leq N(s)}2^n 2^{n(d-1)q}\|\phi*V_n\|_q^q\\ & \lesssim s^{p-q} C_q A^q \sum_{\left<n\right>\leq N(s)} 2^{n(d-1)q} 2^{-n(d-1)q} \left< n \right>^{-\gamma q}\\ & \lesssim C_qA^q s^{p-q} N^{1-\gamma q}(s), \ (\text{since} \ \gamma q=\frac{q}{p} \ \text{and} \ q<p).
\end{split} \end{equation*} Finally, notice that $s^{p-q}N^{1-\gamma q}(s)= s^{p-q} (C_qA_0)^{p-q}s^{-(p-q)}=(C_qA_0)^{p-q},$ so there is a positive constant $K$, independent of $s$, such that $$s^p\sum_{n\in\mathbb{N}_0}2^n |E_n(s)|\leq K^p A^p, \ \forall s>0,$$ and this proves the desired result. Moreover, Lemma \ref{lem0} shows that there is a positive constant $K_{\gamma}$ such that $$\|H_\mathbf{a}\|_{\mathbf{S}_{p,\infty}}=\|\Gamma\|_{\mathbf{S}_{p,\infty}}\leq K_{\gamma} \|\phi\|_{\mathcal{B}_{p,\infty}^{\frac{1}{p}+d-1}}\leq K_{\gamma} K A,$$ where $\Gamma$ is given by (\ref{eq14}). This gives relation (\ref{eq_4}), with $C_{\gamma}=K_{\gamma}K$. \end{proof} \begin{lem}\label{thm2_7} Let $\gamma>0$ and let $\{a(j)\}_{j\in\mathbb{N}_{0}}$ be a real-valued sequence such that \begin{equation}\label{eq12} a^{(m)}(j)=o\big(j^{-d-m} (\log j)^{-\gamma} \big), \ j\to+\infty, \end{equation} for every $m=0,1,\dots,M(\gamma)$. Then the Hankel operator $H_\mathbf{a}$, with parameter sequence $\mathbf{a}(\mathbf{j})=a(|\mathbf{j}|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$, is compact and its singular values satisfy the estimate $$s_n(H_\mathbf{a})=o(n^{-\gamma}), \ n\to+\infty.$$ \end{lem} \begin{proof} The goal is to show that $H_\mathbf{a}\in\mathbf{S}_{p,\infty}^{0}$, for $p=\frac{1}{\gamma}$. The ideal $\mathbf{S}_{p,\infty}^{0}$ is the $\left\|\cdot\right\|_{\mathbf{S}_{p,\infty}}$-closure of the finite rank operators, so it is enough to approximate $H_\mathbf{a}$ by finite rank operators in the $\left\|\cdot\right\|_{\mathbf{S}_{p,\infty}}$ quasi-norm. To this end, consider the cut-off function \begin{equation*} \chi_0(t)=\begin{cases} 1, & t\in[0,1]\\ 0, & t\geq2, \end{cases} \end{equation*} such that $\chi_0\in C^{\infty}(\mathbb{R}_+)$ and $0\leq\chi_0\leq1$. In addition, for every $N\in\mathbb{N}$, define the sequences \begin{equation*} q_N(j)=a(j)\chi_0(\tfrac{j}{N}) \ \text{ and } \ h_N(j)=a(j)-q_N(j), \ \forall j\in\mathbb{N}_0. \end{equation*} Let $H_{\mathbf{q}_N}$ and $H_{\mathbf{h}_N}$ be the Hankel operators, with parameter sequences $\mathbf{q}_N(\mathbf{j})=q_N(|\mathbf{j}|)$ and $\mathbf{h}_N(\mathbf{j})=h_N(|\mathbf{j}|)$, $\forall \mathbf{j}\in\mathbb{N}_0^d$, respectively. In other words, $H_{\mathbf{h}_N}=H_\mathbf{a}-H_{\mathbf{q}_N}$. Then, by using the Leibniz rule, \begin{equation*} \big(h_N\big)^{(m)}(j)=\sum_{n=0}^{m}{m \choose n}a^{(m-n)}(j+n)\big(1-\chi_0\big)^{(n)}(\tfrac{j}{N}), \ \forall j\in\mathbb{N}_0. \end{equation*} Therefore, for every $j\geq2$, \begin{equation}\label{1} \begin{split} \left|\big(h_N\big)^{(m)}(j)j^{d+m}(\log j)^{\gamma}\right| & \leq \sum_{n=0}^{m}{m \choose n}\left|a^{(m-n)}(j+n)\right|j^{d+m-n}(\log j)^\gamma j^n\big|\big(1-\chi_0\big)^{(n)}(\tfrac{j}{N})\big|\\ & \leq \sum_{n=0}^{m}{m \choose n}\left|a^{(m-n)}(j+n)\right|(j+n)^{d+m-n}(\log (j+n))^\gamma j^n\big|\big(1-\chi_0\big)^{(n)}(\tfrac{j}{N})\big|. \end{split} \end{equation} Moreover, observe that, for any $n\in\mathbb{N}$, \begin{equation*} t^n\big(1-\chi_{0}\big)^{(n)}\big(\tfrac{t}{N}\big)=-\big(\tfrac{t}{N}\big)^n\chi_{0}^{(n)}\big(\tfrac{t}{N}\big), \ \forall t\in\mathbb{R}_+. \end{equation*} As a result, by recalling that $\chi_0$ is compactly supported, there exist positive constants $K_n$, $n=1,2,\dots,M(\gamma)$, independent of $N$, such that \begin{equation*} \sup_{t>0}\big|t^n \big(1-\chi_{0}\big)^{(n)}\big(\tfrac{t}{N}\big)\big|\leq K_n.
\end{equation*} By considering $K:=\max\{1,K_1,\dots,K_{M(\gamma)}\}$, (\ref{1}) gives \begin{equation*} \left|\big(h_N\big)^{(m)}(j)\right|j^{d+m}(\log j)^{\gamma} \leq K \sum_{n=0}^{m}{m \choose n}\left|a^{(m-n)}(j+n)\right|(j+n)^{d+m-n}(\log (j+n))^\gamma, \ \forall j\geq 2. \end{equation*} Taking suprema, \begin{equation}\label{5} \sup_{j>N}\left|\big(h_N\big)^{(m)}(j)\right|j^{d+m}(\log j)^{\gamma} \leq K \sum_{n=0}^{m}{m \choose n}\sup_{j>N}\left|a^{(m-n)}(j+n)\right|(j+n)^{d+m-n}(\log (j+n))^\gamma. \end{equation} Under the assumption (\ref{eq12}) on $a$, we see that, for any $N\in\mathbb{N}$, $h_N$ satisfies assumption (\ref{eq_3}) of Lemma \ref{thm_1} and, consequently, $H_{\mathbf{h}_N}$ satisfies relation (\ref{eq_4}). Thus, there exists a constant $C_\gamma$ such that \begin{equation*} \begin{split} \left\| H_\mathbf{a}-H_{\mathbf{q}_N} \right\|_{\mathbf{S}_{p,\infty}} & \leq C_\gamma\sum_{m=0}^{M(\gamma)} \sup_{j\in\mathbb{N}_0} \left| \big(h_N\big)^{(m)}(j) \right| (j+1)^{d+m} \big(\log(j+2)\big)^\gamma\\ & = C_\gamma \sum_{m=0}^{M(\gamma)}\sup_{j>N} \left| \big(h_N\big)^{(m)}(j) \right| (j+1)^{d+m} \big(\log(j+2)\big)^\gamma. \end{split} \end{equation*} Then (\ref{5}) implies that \begin{equation}\label{6} \left\| H_\mathbf{a}-H_{\mathbf{q}_N} \right\|_{\mathbf{S}_{p,\infty}} \lesssim \sum_{m=0}^{M(\gamma)} \sum_{n=0}^{m}{m \choose n}\sup_{j>N}\left|a^{(m-n)}(j+n)\right|(j+n)^{d+m-n}(\log (j+n))^\gamma. \end{equation} Notice that assumption (\ref{eq12}) implies that, for any $n\in\mathbb{N}_0$, \begin{equation*} \limsup_{j\to+\infty}\left|a^{(m)}(j+n)\right|(j+n)^{d+m}(\log (j+n))^\gamma=0, \ \forall m=0,1,\dots,M(\gamma). \end{equation*} Thus, letting $N\to+\infty$ in (\ref{6}), the right-hand side converges to zero and the result follows. \end{proof} \section{Proof of Theorem \ref{thm0}} \begin{proof} Let $\tilde{a}$ be the sequence that is defined in (\ref{eq2_4'}) and generates the model operator $\tilde{H}$. Then $H_\mathbf{a}=\tilde{H}+(H_\mathbf{a}-\tilde{H})$, where $H_\mathbf{a}-\tilde{H}=H_{\mathbf{a}-\tilde{\mathbf{a}}}$ is a Hankel operator with parameter sequence $\big(\mathbf{a}-\tilde{\mathbf{a}}\big)(\mathbf{j})=\big(a-\tilde{a}\big)(|\mathbf{j}|)$, for any $\mathbf{j}\in\mathbb{N}_0^d$. By (\ref{eq5}) and Lemma \ref{lem3''}, the sequences $a$ and $\tilde{a}$ exhibit the same asymptotic behaviour, up to error terms. Thus, \begin{equation*} \big(a-\tilde{a}\big)(j)=h_1(j)+(-1)^jh_{-1}(j), \ \forall j\geq 2, \end{equation*} where $h_{\pm1}(j):=\big(g_{\pm1}-\tilde{g}_{\pm1}\big)(j)$, $\forall j\in\mathbb{N}_0$. Furthermore, notice that relation (\ref{eqc_18'}) in Lemma \ref{lem3''} implies that \begin{equation*} \tilde{g}_{\pm1}^{(m)}(j)=o\big(j^{-d-m}(\log j)^{-\gamma}\big), \ j\to+\infty. \end{equation*} By assumption, the same relation is satisfied by $g_{\pm1}$ as well. Therefore, \begin{equation*} h_{\pm1}^{(m)}(j)=o\big(j^{-d-m}(\log j)^{-\gamma}\big), \ j\to+\infty. \end{equation*} Consider the Hankel operators $H_{\mathbf{h}_{\pm 1}}:\ell^2(\mathbb{N}_0^d)\to \ell^2(\mathbb{N}_0^d)$, with parameter sequences $\mathbf{h}_{\pm1}(\mathbf{j})=h_{\pm1}(\left|\mathbf{j}\right|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$. Then Lemma \ref{thm2_7} yields that their singular values satisfy the following asymptotic law: \begin{equation}\label{eq} s_n(H_{\mathbf{h}_{\pm 1}})=o(n^{-\gamma}), \ n\to+\infty.
\end{equation} Let $\mathfrak{h}_{-1}(j):=(-1)^jh_{-1}(j)$, for all $j\in\mathbb{N}_0$, and consider the Hankel operator $H_{\boldsymbol{\mathfrak{h}}_{-1}}:\ell^2(\mathbb{N}_0^d)\to \ell^2(\mathbb{N}_0^d)$, with parameter sequence $\boldsymbol{\mathfrak{h}}_{-1}(\mathbf{j})=\mathfrak{h}_{-1}(\left|\mathbf{j}\right|)$, for all $\mathbf{j}\in\mathbb{N}_0^d$. Then $H_{\boldsymbol{\mathfrak{h}}_{-1}}$ and $H_{\mathbf{h}_{-1}}$ are unitarily equivalent. Indeed, by defining the unitary operator $Q:\ell^2(\mathbb{N}_0^d)\to\ell^2(\mathbb{N}_0^d)$, with \begin{equation*} \big(Q\mathbf{x}\big)(\mathbf{j})=(-1)^{|\mathbf{j}|}\mathbf{x}(\mathbf{j}), \ \forall \mathbf{j}\in\mathbb{N}_0^d, \ \forall \mathbf{x}\in\ell^2(\mathbb{N}_0^d), \end{equation*} it is easily checked that $H_{\boldsymbol{\mathfrak{h}}_{-1}}=Q^*H_{\mathbf{h}_{-1}}Q$. Thus, the singular values of $H_{\boldsymbol{\mathfrak{h}}_{-1}}$ satisfy (\ref{eq}). Notice that $H_{\mathbf{a}}-\tilde{H}=H_{\mathbf{h}_1}+H_{\boldsymbol{\mathfrak{h}}_{-1}}$. Therefore, since the space $\mathbf{S}_{p,\infty}^0$ is linear, the singular values of $H_\mathbf{a}-\tilde{H}$ satisfy (\ref{eq}), that is, \begin{equation*} s_n(H_\mathbf{a}-\tilde{H})=o(n^{-\gamma}), \ n\to+\infty. \end{equation*} Finally, recall that the eigenvalue asymptotics of $\tilde{H}$ are given in Lemma \ref{prop0}. Combining these two facts, Lemma \ref{lem2_1} yields the asymptotic law (\ref{eq1}). \end{proof}
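\begin{remark} We close with a small numerical sanity check of the reduction of Section \ref{RWHO}; it is purely illustrative and is not used in any of the proofs. If the truncation is performed with respect to the total degree, i.e., over $\{\mathbf{k}\in\mathbb{N}_0^d: \ |\mathbf{k}|\leq N\}$, then every level set $\{\mathbf{k}: \ |\mathbf{k}|=i\}$ with $i\leq N$ is complete, so the identity (\ref{00}) holds exactly for the truncated matrices, and the non-zero eigenvalues of the truncated $H_\mathbf{a}$ coincide with the eigenvalues of the truncated $\Gamma$ of (\ref{eq14}). The following Python sketch verifies this; the symbol $a(j)=(j+1)^{-d}$ and the parameters $d$, $N$ are arbitrary illustrative choices.
\begin{verbatim}
import itertools
import numpy as np
from math import comb

d, N = 3, 8                        # illustrative choices

def a(j):                          # any test symbol works here
    return 1.0 / (j + 1) ** d

def W(j):                          # W_d(j) = binom(j + d - 1, d - 1)
    return comb(j + d - 1, d - 1)

# multi-indices with |k| <= N; every level set {|k| = i}, i <= N, is complete
S = [k for k in itertools.product(range(N + 1), repeat=d) if sum(k) <= N]
H = np.array([[a(sum(i) + sum(j)) for j in S] for i in S])

# truncated one-variable weighted Hankel matrix Gamma
G = np.array([[(W(i) * W(j)) ** 0.5 * a(i + j) for j in range(N + 1)]
              for i in range(N + 1)])

# the non-zero spectra coincide, since H = J* Gamma J with J J* = I
eH = np.linalg.eigvalsh(H)[::-1][:N + 1]   # the N + 1 largest eigenvalues
eG = np.linalg.eigvalsh(G)[::-1]
print(np.allclose(eH, eG, atol=1e-10))     # True
\end{verbatim}
\end{remark}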
\section{Introduction} \label{sec:intro} In \cite{Gyo}, Gy\H ory et al. present the following natural question. \begin{problem} \label{P:1} Let $K$ be a field of characteristic zero. Is it true that a monic polynomial $p = \sum_{i=0}^n c_ix^{n-i}\in K[x]$ of degree $n$ with exactly $k$ distinct zeros is determined up to finitely many possibilities by any $k$ of its non-zero coefficients? \end{problem} By degrees-of-freedom considerations, at least $k$ coefficients are needed; which sets of $k$ coefficients actually suffice, however, seems to be a delicate matter. We consider the following variation of Problem \ref{P:1}. The {\em codegree} of a monomial term $Cx^k$ in a univariate polynomial $f(x)$ is $\deg(f)-k$. A set of coefficients is {\em leading} if it corresponds to codegrees $\{0,\ldots,k\}$ for some $k$; it is {\em proper leading} if it corresponds to codegrees $\{1,\ldots,k\}$ for some $k$. \begin{problem} \label{P:2} Let $K$ be a field of characteristic zero. Is it true that a monic polynomial $p = \sum_{i=0}^n c_ix^{n-i}\in K[x]$ of unknown degree $n$, with exactly $k$ distinct known zeros $r_1, r_2, \dots, r_k$, is uniquely determined by its first $k$ proper leading coefficients? \end{problem} We answer Problem \ref{P:2} in the affirmative with the following result. \begin{theorem} \label{T:1} Let $p = \sum_{i=0}^n c_ix^{n-i} \in K[x]$ be a monic polynomial with $k+1$ distinct roots, $r_0 = 0, r_1, r_2, \dots, r_k$, with multiplicities $m_0, m_1, m_2, \dots, m_k$, respectively. Then the multiplicities are uniquely determined by $c_0=1,c_1, \dots, c_k$. \end{theorem} Furthermore, $p$ may be determined by fewer than $k$ proper coefficients when $K$ is not algebraically closed. \begin{theorem} \label{T:4} Let $p = \sum_{i=0}^n c_ix^{n-i} \in K[x]$ be a monic polynomial such that $p(0) \neq 0$. Suppose $p = \prod_{i=1}^t q_i^{m_i}$ for $q_i \in K[x]$. The multiplicity vector ${\bf m} = \langle m_1, \dots, m_t \rangle^T$ is uniquely determined by the first $t$ proper coefficients if and only if $V \in \overline{K}^{t \times t}$ is non-singular, where \[ V_{i,j} = \sum_{r : q_i(r) = 0} r^j. \] \end{theorem} \begin{remark} Observe that when $q_i = x-r_i$ (i.e., $p$ splits over $K$) Theorem \ref{T:4} provides the same conclusion as Theorem \ref{T:1}. \end{remark} In Section 2 we prove both of the main results. In particular, we prove Theorem \ref{T:1} via an algorithm which allows us to compute exactly the multiplicity of each root. In Section 3 we prove that this algorithm is numerically stable in the sense that the requisite number of bits of precision to approximate each root in order to compute its multiplicity {\em exactly} is linear in $k$ and the logarithms of (a) the ratio between the largest and smallest difference of roots, (b) the largest root, and (c) the largest coefficient of codegree at most $k$. We conclude by demonstrating the utility of this algorithm by computing previously unknown characteristic polynomials of two 3-uniform hypergraphs in Section 4. \section{Proof of Main Results} We begin with a proof of Theorem \ref{T:1}. \begin{proof}[Proof of Theorem \ref{T:1}] Fix such a monic polynomial $p$ with distinct roots $r_0 = 0$, $r_1$, $\ldots$, $r_k$ with respective multiplicities $m_0$, $m_1$, $\ldots$, $m_k$. Ignoring $r_0$ for a moment, let ${\bf r} = \langle r_1, \dots, r_k\rangle^T$ and ${\bf m} = \langle m_1, \dots, m_k\rangle^T$.
We denote the Vandermonde matrix \[ V = \left( \begin{array}{cccc} 1 & 1 & \dots & 1 \\ r_1 & r_2 & \dots & r_k \\ r_1^2 & r_2^2 & \dots & r_k^2 \\ \vdots & \vdots & \vdots & \vdots \\ r_1^{k-1} & r_2^{k-1}& \dots & r_k ^{k-1}\end{array} \right) \] and consider \[ V_0 = \left( \begin{array}{cccc} r_1 & r_2 & \dots & r_k \\ r_1^2 & r_2^2 & \dots & r_k^2 \\ \vdots & \vdots & \vdots & \vdots \\ r_1^{k} & r_2^{k}& \dots & r_k^{k}\end{array} \right). \] Let ${\bf p} \in \overline{K}^k$ where \[ {\bf p}_i := \sum_{j=1}^k r_j^i m_j; \] then \[ V_0{\bf m} = {\bf p}. \] Notice that $V_0 = V \diag({\bf r})$ is non-singular as it is the product of two non-singular matrices. We have then \begin{equation} \label{E:m} {\bf m} = V_0^{-1}{\bf p}. \end{equation} We present a formula for ${\bf p}$ which is a function of only the leading $k+1$ coefficients of $p$. Let $A$ be the diagonal matrix where $r_i$ occurs $m_i$ times and note \[ p(x) = \det(xI - A). \] By the Faddeev-LeVerrier algorithm (aka the Method of Faddeev, \cite{Gan}) we have for $j \geq 1$ \begin{equation} \label{E:Faddeev} c_{j} = -\frac{1}{j}\sum_{i=1}^j c_{j-i} \tr(A^i) = -\frac{1}{j}\sum_{i=1}^j c_{j-i} {\bf p}_i. \end{equation} Let ${\bf c} = \langle c_1, c_2, \dots, c_{k} \rangle^T$, $\Lambda = -\diag(1, 1/2, \dots, 1/k)$, and \[ C = \left( \begin{array}{cccc} c_0=1 & 0 & \dots & 0 \\ c_1 & c_0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ c_{k-1} & c_{k-2}& \dots & c_0\end{array} \right). \] By Equation \ref{E:Faddeev}, ${\bf c} = \Lambda C{\bf p}$. Moreover, as $\Lambda $ and $C$ are invertible we have ${\bf p} = (\Lambda C)^{-1}{\bf c}$. It follows from Equation \ref{E:m} that \begin{equation} \label{E:formula} {\bf m} = (\Lambda CV_0)^{-1}{\bf c}. \end{equation} Furthermore, $m_0 = n - {\bf 1}\cdot{\bf m}$. \end{proof} We briefly remark on the proof of Theorem \ref{T:1}. Problem \ref{P:1} has a flavor of polynomial interpolation: given $k$ points, how many (univariate) polynomials of degree $n$ go through each of the $k$ points? If $n \leq k-1$ the polynomial is known to be unique and is relatively expensive to compute (as any standard text in numerical analysis will attest). Our proof technique mimics this approach, as the classical problem of determining $p = \sum_{i=0}^{k-1}{c_i}x^{(k-1)-i}$ from $k$ distinct points $\{(x_i,y_i)\}_{i=1}^k$ can be solved by computing \[ V^T{\bf c} = {\bf y} \] where ${\bf c} = \langle c_{k-1}, c_{k-2}, \dots, c_0\rangle^T$, $V$ is as previously defined given $r_i = x_i$, and ${\bf y} = \langle y_1, \dots, y_k\rangle^T$. Suppose for a moment that each root is distinct so that \[ p(x) = \prod_{i=1}^k (x - r_i). \] Then $c_j$, the codegree $j$ coefficient, is precisely $(-1)^j$ times the $j$th elementary symmetric polynomial in the variables $r_1, \dots, r_k$. In the case of repeated roots we have that $c_j$ can be expressed using modified symmetric polynomials in the distinct roots where $r_i$ is replaced with $\binom{m_i}{j}r_i^j$. The expression for each coefficient via these modified symmetric polynomials is given by Equation \ref{E:Faddeev}. Note that if the roots of $p$ are known, it is possible to determine $p$ with fewer coefficients than the number of distinct roots (e.g., when $p$ is a non-linear minimal polynomial). We modify Theorem \ref{T:1} to include the case when some of the roots are known to occur with the same multiplicity. We now prove Theorem \ref{T:4}. \begin{proof}[Proof of Theorem \ref{T:4}] The proof follows similarly to that of Theorem \ref{T:1}.
First suppose that $V$ is non-singular. Let ${\bf m} = \langle m_1, \dots, m_t \rangle^T$ and \[ {\bf p}_i = \sum_{j=1}^t \left(\sum_{r : q_j(r) = 0} r^i\right)m_j = (V{\bf m})_i \] so that if $V$ is non-singular, ${\bf m} = V^{-1}{\bf p}$. Let $A$ be defined as in Theorem \ref{T:1}: the diagonal matrix where the roots of $q_i$ occur $m_i$ times and note \[ p(x) = \det(xI - A). \] We have by the Faddeev-LeVerrier algorithm, for $j \geq 1$, \[ c_{j} = -\frac{1}{j}\sum_{i=1}^j c_{j-i} \tr(A^i) = -\frac{1}{j}\sum_{i=1}^j c_{j-i} {\bf p}_i \] so that for ${\bf c} = \langle c_1, \dots, c_t\rangle^T$ and $\Lambda = -\diag(1, 1/2, \dots, 1/t)$ we have \[ {\bf m} = (\Lambda CV)^{-1}{\bf c}. \] If instead ${\bf m}$ is uniquely determined by the first $t$ proper coefficients then $V{\bf m} = (\Lambda C)^{-1}{\bf c}$ has exactly one solution, hence $V$ is non-singular. \end{proof} As a non-example, consider the minimal polynomial of $\alpha = \sqrt{2}$, $q_{\alpha}(x) = x^2-2 \in \mathbb{Z}[x]$, and suppose \[ p = q_\alpha^d = x^{2d} + 0x^{2d-1} -2dx^{2d-2} + \dots\in \mathbb{Z}[x]. \] Observe that we cannot determine $d$ given $c_1 = 0$; moreover, this conclusion is unsurprising, given the hypotheses from Theorem \ref{T:4}, since \[ V = [\sqrt{2} - \sqrt{2}] = [0] \in \mathbb{C}^{1 \times 1} \] is singular. However, by inspection we could determine $d$ given $c_2 = -2d$ and in fact the matrix $[\sqrt{2}^2 + (-\sqrt{2})^2] = [4] \in \mathbb{C}^{1 \times 1}$ is non-singular. Indeed we could determine $d$ with a simple change of variable: apply Theorem \ref{T:4} to $p_0 = (y-2)^d$ where $y = x^2$. \section{The stability of computing multiplicities} We now consider the feasibility of computing ${\bf m}$. In general, the matrix $V_0$ in Theorem \ref{T:1} may be poorly conditioned, so this calculation is often difficult to carry out even for modest values of $k$. The goal of this section is to show that if each root of a monic polynomial $p(x) \in \mathbb{Z}[x]$ is approximated by a disk of radius at most $\epsilon$, a ``reasonable'' precision, then the disk approximating ${\bf m}_i$, resulting from a particular algorithm, contains exactly one integer. That is, we provide an algorithm for exactly computing ${\bf m}$ via \cite{Sag} with substantially improved numerical stability over simply following the calculations in Section \ref{sec:intro}. \begin{theorem} \label{T:2} Let $p(x) = \sum_{i=0}^t c_ix^{t-i} \in \mathbb{Z}[x]$ be a monic polynomial with distinct non-zero roots $r_1,\dots, r_n$ such that $|r_1| \geq |r_2| \geq \dots \geq |r_n| > 0$. If each root is approximated by a disk of radius $\epsilon$ such that \[ \epsilon < \frac{m^2r}{2^{2n+7}n^5} \left(\frac{m}{MRc}\right)^n = \left ( \frac{m}{MRc} \right)^{n(1+o(1))} \] where \begin{itemize} \item{$M = \max\{\max\{|r_i - r_j|\}, 1\}$ and $m = \min\{\min\{|r_i - r_j|: i \neq j\}, 1\}$} \item{$R = \max\{|r_1|,1\}$ and $r = \min\{|r_n|,1\}$} \item{$c = \max\{\max\{|c_i|\}_{i=1}^n,1\}$,} \end{itemize} then the resulting disk approximating ${\bf m}_i = \big((\Lambda CV_0)^{-1}{\bf c}\big)_i \in \mathbb{Z}$ contains exactly one integer (i.e., the computation of ${\bf m}$ is stable). \end{theorem} Notice that $MRc \geq 1$ and $MRc = 2$ for $x^n -1$ when $n$ is even. Roots of unity occur frequently in the spectra of hypergraphs; see Section 4. In particular, $k$-cylinders -- essentially $k$-colorable $k$-graphs -- have a spectrum which is invariant under multiplication by the $k$th roots of unity. Consider now $p(x) = x^n-1$.
We have $m = \sqrt{2 - 2\cos(\frac{2\pi}{n})}$ so that \[ \epsilon < \frac{(2 - 2 \cos(\frac{2\pi}{n}))^\frac{n+2}{2}}{2^{3n+7}n^5}. \] While $\epsilon$ may seem small, we are chiefly concerned with the number of bits of precision needed to approximate each root. Indeed for $x^n-1$ we need $|\ln \epsilon| = O(n \ln n)$ bits of precision by the small-angle approximation. \begin{remark} The bound on $\epsilon$ is ``reasonable'', as the number of bits required to approximate each root is proportional to the number of distinct roots of $p$ and the logarithms of the ratio of the smallest difference of the roots with the largest difference of roots, the largest root, and the largest coefficient. \end{remark} In practice, the difficulty of computing ${\bf m}$ as described in Theorem \ref{T:1} is in computing the inverse of the Vandermonde matrix, whose entries may vary widely in magnitude and which may be very poorly conditioned. The task of inverting Vandermonde matrices has been studied extensively. In \cite{Eis}, Eisenberg and Fedele provide a brief history of the topic as well as results concerning the accuracy and effectiveness of several known algorithms. However, these algorithms provide good approximations for the entries of $V^{-1}$, whereas we seek to express them exactly as elements of the field of algebraic complex numbers, since ${\bf m}$ is a vector of integers. In \cite{Sot}, Soto-Eguibar and Moya-Cessa showed that $V^{-1} = \Delta WL$ where $\Delta $ is the diagonal matrix \begin{displaymath} \Delta_{i,j} = \left\{ \begin{array}{ll} \prod_{k=1, k \neq i}^n \frac{1}{r_i - r_k}& : i=j \\ 0 & : i \neq j, \end{array} \right. \end{displaymath} $W$ is the upper triangular matrix \begin{displaymath} W_{i,j} = \left\{ \begin{array}{ll} 0 & : i> j \\ \prod_{k=j+1, k \neq i}^n (r_i - r_k) & : \text{ otherwise}, \end{array} \right. \end{displaymath} and $L$ is the lower triangular matrix \begin{displaymath} L_{i,j} = \left\{ \begin{array}{ll} 0 & : i< j \\ 1 & : i = j\\ L_{i-1, j-1} - L_{i-1, j} r_{i-1} & : i \in [2,n], j \in [2,i-1]. \end{array} \right. \end{displaymath} Using this decomposition, it is possible to compute ${\bf m}$ exactly. To prove Theorem \ref{T:2} we first provide an upper bound for the diameter of the disk approximating an entry of $\Delta $, $W$, and $L$, respectively; to do so, we extensively employ computations of \cite{Pet} found in Chapter 1.3. We present the necessary background here. Let $D(z, \epsilon)$ be the open disk in the complex plane centered at $z$ of radius $\epsilon$. For $A = D(a,r_1)$, $B= D(b,r_2)$ complex open disks, we have \begin{enumerate} \item {$A \pm B = D(a\pm b, r_1 + r_2)$} \item{$1/B = D\left(\frac{\bar{b}}{|b|^2 - r_2^2}, \frac{r_2}{|b|^2 - r_2^2}\right)$} \item{$AB = D(ab, |a|r_2 + |b|r_1 + r_1r_2)$} \end{enumerate} In particular, for the special case of $A^n$ we have \begin{equation} \label{E:power} D(a,r_1)^n = D(a^n, (|a| + r_1)^n - |a|^n). \end{equation} Moreover, given $0 < r_1 < 1 \leq |a|$, \begin{equation} \label{E:bound} (|a| + r_1)^n - |a|^n \leq r_1(2|a|)^n \end{equation} since \[ (|a| + r_1)^n - |a|^n = \sum_{k=1}^n \binom{n}{k} r_1^k |a|^{n-k} \leq r_1(2|a|)^n. \] Finally, let $d(A) = 2r_1$ denote the diameter of $A$ and let \[ |A|= |a| + r_1 \] be the absolute value of $A$.
Then for $u \in \mathbb{C}$ we have \begin{enumerate} \item{$d(A\pm B) = d(A) + d(B)$} \item{$d(uA) = |u| d(A)$} \item{$d(AB) \leq |B|d(A) + |A|d(B)$} \end{enumerate} For the remainder of this paper some numbers will be exact (e.g., rational numbers) while others will be approximated by a disk. The non-exact entries of a matrix $M \in \mathbb{C}^{n \times n}$ will be referred to as disks; this will be clear from the problem formulation or derived from the computations. With a slight abuse of notation we use $d(M_{i,j})$ and $|M_{i,j}|$ to denote the diameter and absolute value of the disk approximating the entry $M_{i,j}$. Moreover, we write \[ d(M) = \max\{d(M_{i,j}) : i,j \in [n]\} \text{ and } |M| = \max\{|M_{i,j}| : i,j \in [n]\}. \] In the case when the entry is exact, the diameter is zero and the absolute value (of the disk) is simply the modulus. \begin{theorem} \label{T:3} Assume the notation of Theorem \ref{T:2}, let $V$ denote the Vandermonde matrix from the proof of Theorem \ref{T:1}, and let $V^{-1} = \Delta WL$ by \cite{Sot}. Then \[ d(V^{-1}) \leq \frac{2^{2n+4}n}{m^2}\left(\frac{MR}{m}\right)^n\epsilon \] and \[ |V^{-1}|\leq 2n\left(\frac{RM}{m}\right)^n. \] \end{theorem} \begin{proof} Let \[ D_i := D(r_i, \epsilon) \] denote the disk centered at $r_i$ with radius $\epsilon$. By Equation \ref{E:bound} we have for $s \neq t$ \begin{align*} d(\Delta)&\leq d \left ( \left(\frac{1}{D_s - D_t}\right)^n \right ) \\ &\leq 2^n\left(\frac{2\epsilon}{m^2-(2\epsilon)^2}\right)\left(\frac{m}{m^2-(2\epsilon)^2}\right)^n \\ &\leq \frac{2^{2n+2}}{m^{n+2}} \cdot \epsilon , \end{align*} since $\epsilon < m/4$, \[ d(W) \leq d((D_s - D_t)^n) \leq 2^{n+1}M^n\epsilon, \] and \[ d(L) \leq d(D_s^n) \leq (2R)^n \epsilon. \] We first consider $d(\Delta W)$. Observe that $\Delta W$ is upper triangular and each non-zero entry of $\Delta W$ is a product of exactly one non-zero entry of $\Delta $ and $W$. In this way \[ d((\Delta W)_{i,j}) \leq |W_{i,j}|d(\Delta _{i,i}) + |\Delta _{i,i}|d(W_{i,j}) \leq\frac{2^{2n+3}}{m^2} \left(\frac{M}{m}\right)^n\epsilon \] and \[ |\Delta W| \leq 2\left(\frac{M}{m}\right)^n. \] We now determine $d(\Delta WL)$ by first computing \[ d((\Delta W)_{i,k} L_{k,j}) \leq |L_{k,j}|d(\Delta W_{i,k}) + |\Delta W_{i,k}|d(L_{k,j}) \leq \frac{2^{2n+4}}{m^2}\left(\frac{RM}{m}\right)^n\epsilon. \] Hence \[ d(V^{-1}) = d(\Delta WL) \leq \max_{i,j} \sum_{k=1}^n d((\Delta W)_{i,k}L_{k,j}) \leq\frac{ 2^{2n+4}n}{m^2}\left(\frac{RM}{m}\right)^n\epsilon \] and \[ |V^{-1}| \leq 2n\left(\frac{RM}{m}\right)^n. \] \end{proof} In our computations we are concerned with $V_0 = V \cdot \diag({\bf r})$ where $\diag({\bf r}) = \diag(r_1, \dots, r_n)$ so that \[ V_0^{-1} = \diag({\bf r})^{-1}V^{-1}. \] The following Corollary is immediate from the observation that \[ d(\diag({\bf r})^{-1}) \leq \frac{2}{r}. \] \begin{corollary} \[ d(V^{-1}_0)\leq \frac{2^{2n+6}n}{m^2r}\left(\frac{MR}{m}\right)^n\epsilon \] and \[ |V_0^{-1}| \leq \frac{2n}{r} \left(\frac{RM}{m}\right)^n. \] \end{corollary} We are now able to prove Theorem \ref{T:2}. \begin{proof}[Proof of Theorem \ref{T:2}] Recall ${\bf m} = V_0^{-1} C^{-1}\Lambda ^{-1}{\bf c}$ as defined in the proof of Theorem \ref{T:1}. Fortunately, the remainder of the computations are straightforward as $C^{-1}, \Lambda ^{-1}$, and ${\bf c}$ have integer, and thus exact, entries. As \begin{displaymath} C_{i,j}^{-1} = \left\{ \begin{array}{lr} 0 & : i < j \\ 1 & : i = j\\ -\sum_{k=1}^{i-1} c_{i-k} C_{k,j}^{-1} & : i > j \end{array} \right.
\end{displaymath} we have \[ d(V_0^{-1}C^{-1}) \leq n (nc^{n-1}) \frac{2^{2n+6}n}{m^2r}\left(\frac{MR}{m}\right)^n\epsilon = \frac{2^{2n+6}n^3}{m^2rc}\left(\frac{MRc}{m}\right)^n\epsilon. \] Further, since $\Lambda ^{-1} = -\diag(1,2,\dots,n)$ we have \[ d(V_0^{-1}C^{-1}\Lambda ^{-1}) \leq |-n| d(V_0^{-1}C^{-1}) = \frac{2^{2n+6}n^4}{m^2rc}\left(\frac{MRc}{m}\right)^n\epsilon \] and, finally, \begin{align*} d(V_0^{-1}C^{-1}\Lambda ^{-1}{\bf c}) &\leq nc \cdot d(V_0^{-1}C^{-1}\Lambda ^{-1}) \\ &\leq \frac{2^{2n+6}n^5}{m^2r}\left(\frac{MRc}{m}\right)^n\epsilon < \frac{1}{2}. \end{align*} Thus each disk has diameter less than $\frac{1}{2}$ and therefore contains exactly one integer (namely ${\bf m}_i$), as desired. \end{proof} \section{Application to Hypergraph Spectra} For the present authors, Problem \ref{P:2} arose organically in the context of spectral hypergraph theory. In short, the authors were concerned with determining high-degree polynomials when the roots (without multiplicity) are known and all but the lowest-codegree coefficients are too costly to compute. We briefly explain the context of spectral hypergraph theory for those interested in the origin of such questions. However, our presentation of the computations is self-contained: the reader who wishes to see Theorem \ref{T:1} applied immediately may skip the next few paragraphs. For $k \geq 2$, a {\em $k$-uniform hypergraph} is a pair ${\mathcal H} = (V, E)$ where $V = [n]$ is the set of {\em vertices} and $E \subseteq \binom{[n]}{k}$ is the set of {\em edges}. It is common to refer to such hypergraphs as {\em $k$-graphs} when $k > 2$ and as just {\em graphs} when $k =2$. We are particularly interested in the computation of the characteristic polynomial of a uniform hypergraph. The characteristic polynomial of the adjacency matrix of a graph is straightforward to compute; however, the same cannot be said for hypergraphs. The {\em characteristic polynomial} of the {\em (normalized) adjacency hypermatrix} $\mathcal{A}$ of ${\mathcal H}$, denoted $\phi_{{\mathcal H}}(\lambda)$, is the resultant of a family of $|V|$ homogeneous polynomials\footnote{Namely, the Lagrangians of the links of all vertices minus $\lambda$ times the $(k-1)$-st power of the corresponding vertices' variables, or, equivalently, the coordinates $f_v$ of the gradient of the $k$-form naturally associated with $\mathcal{A}-\lambda \mathcal{I}$. The (symmetric) hyperdeterminant is the unique irreducible monic polynomial in the entries of $\mathcal{A}$ whose vanishing corresponds exactly to the existence of nontrivial solutions to the system $\{f_v = 0\}_{v \in V(\mathcal{H})}$.} of degree $k-1$ in the indeterminate $\lambda$; the {\em order $k$}, {\em dimension $n$} hypermatrix $\mathcal{A} \in \mathbb{R}^{[n]^k}$, whose rows and columns are indexed by the vertices of $\mathcal{H}$ and whose $(v_1,\ldots,v_k)$ entry is $1/(k-1)!$ times the indicator of the event that $\{v_1,\ldots,v_k\}$ is an edge of $\mathcal{H}$, is also sometimes called the {\em adjacency tensor} of $\mathcal{H}$. Equivalently, one can define the characteristic polynomial to be the hyperdeterminant of $\mathcal{A}-\lambda \mathcal{I}$ (as in \cite{Gel}), where $\mathcal{I}$ is the identity hypermatrix, i.e., $\mathcal{I}(v_1,\ldots,v_k)$ is the indicator of the event that $v_1 = \cdots = v_k$. The set $\sigma({\mathcal H}) = \{r : \phi_{{\mathcal H}}(r) = 0\} \subset \mathbb{C}$ is the {\em spectrum} of ${\mathcal H}$ and each $r \in \sigma({\mathcal H})$ is an {\em eigenvalue} of ${\mathcal H}$.
It is known that $\phi_{\mathcal H}(\lambda)$ is a monic polynomial of degree $n(k-1)^{n-1}$, and many of the properties of characteristic polynomials of graphs generalize nicely to hypergraphs; we refer the interested reader to \cite{Coo} and \cite{Qi} for further exploration of the topic. Given a $k$-graph ${\mathcal H}$ we aim to compute $\phi_{\mathcal H}(\lambda)$. Unfortunately, the resultant is known to be NP-hard to compute (\cite{Gre}) despite its utility in several fields of mathematics, perhaps nowhere more so than in computational algebraic geometry. Nonetheless, one can attempt to imitate classical approaches to computing characteristic polynomials of ordinary graphs. In particular, Harary \cite{Har} (and Sachs \cite{Sac}) showed that the coefficients of $\phi_G(\lambda)$ can be expressed as a certain weighted sum of the counts of subgraphs of $G$. The authors have established an analogous result for the coefficients of $\phi_{{\mathcal H}}(\lambda)$ \cite{Cla2}. This formula allows one to compute many low codegree coefficients -- i.e., the coefficients of $x^{d-k}$ for $k$ small and $d = \deg(\phi_{\mathcal{H}})$ -- by a certain linear combination of subgraph counts in $\mathcal{H}$. Unfortunately, this computation becomes exponentially harder as the codegree increases, making computation of the entire (often extremely high degree) characteristic polynomial impossible for all but the simplest cases. A method of Lu-Man \cite{Lu}, ``$\alpha$-normal labelings'', is an alternative approach that can obtain all eigenvalues with relative efficiency, but it gives no information about their multiplicities. {\it Combining} these two techniques, however, yields a method to obtain the full characteristic polynomial: obtain a list of roots, compute a few low-codegree coefficients using subgraph counts, and then deduce the roots' multiplicities. Therefore, we arrive at the following special case of Problem \ref{P:2}. \begin{problem} \label{P:3} Let $K$ be a field of characteristic zero. Is it true that a monic polynomial $p \in K[x]$ of degree $n$ with exactly $k$ distinct, known roots is determined by its $k$ proper leading coefficients? \end{problem} Returning to our application of Theorem \ref{T:1}, we can compute $\phi_{\mathcal H}(\lambda)$ if we know $\sigma({\mathcal H})$ and the first $|\sigma({\mathcal H})|$ coefficients (note this includes coefficients which are zero as well as the leading term). In \cite{Lu}, Lu and Man introduced {\em $\alpha$-consistent incidence matrices} which can be used to find the eigenvalues of ${\mathcal H}$ whose corresponding eigenvector has all non-zero entries. These eigenvalues are referred to as {\em totally non-zero eigenvalues} and we denote the set of totally non-zero eigenvalues of ${\mathcal H}$ by $\sigma^+({\mathcal H})$. The authors showed in \cite{Cla} that for $k > 2$, \[ \sigma({\mathcal H}) \subseteq \bigcup_{H \subseteq {\mathcal H}} \sigma^+(H) \] where $H = (V_0,E_0) \subseteq {\mathcal H}$ if $V_0 \subseteq V$ and $E_0 \subseteq E$ (cf.~the Cauchy Interlacing Theorem when $k=2$). Computing $\sigma({\mathcal H})$ by way of $\sigma^+(H)$ involves solving smaller multi-linear systems than the one involved in computing $\phi_{\mathcal H}(\lambda)$. Generally speaking, $|\sigma({\mathcal H})|$ is considerably smaller than the degree of $\phi_{\mathcal H}(\lambda)$. In practice, this approach has yielded $\phi_{\mathcal H}(\lambda)$ when other approaches of computing $\phi_{\mathcal H}(\lambda)$ via the resultant have failed.
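Before presenting the examples, we sketch the computation of Equation \ref{E:formula} in exact rational arithmetic. The sketch below is only illustrative: the function name \texttt{multiplicities}, the use of Python's \texttt{Fraction} type, and the plain Gauss-Jordan elimination are expository choices of ours, and the sketch assumes the non-zero roots are rational and known exactly. The actual computations reported below were carried out in SageMath \cite{Sag} with disk approximations of the algebraic roots, as analyzed in Section 3.
\begin{verbatim}
from fractions import Fraction

def multiplicities(roots, coeffs, n):
    # roots  : the k distinct non-zero roots r_1, ..., r_k (rational here)
    # coeffs : [c_1, ..., c_k], the first k proper leading coefficients
    # n      : deg(p); the multiplicity of the root 0 is n - sum(m_i)
    k = len(roots)
    c = [Fraction(1)] + [Fraction(x) for x in coeffs]
    p = []  # power sums p_i, from the Faddeev-LeVerrier recursion
    for j in range(1, k + 1):
        p.append(-j * c[j] - sum(c[j - i] * p[i - 1] for i in range(1, j)))
    # solve V_0 m = p exactly, where (V_0)_{i,j} = r_j^i
    A = [[Fraction(r) ** i for r in roots] + [p[i - 1]]
         for i in range(1, k + 1)]
    for col in range(k):  # Gauss-Jordan elimination over the rationals
        piv = next(row for row in range(col, k) if A[row][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for row in range(k):
            if row != col and A[row][col] != 0:
                f = A[row][col] / A[col][col]
                A[row] = [x - f * y for x, y in zip(A[row], A[col])]
    m = [A[i][k] / A[i][i] for i in range(k)]
    return [n - sum(m)] + m  # [m_0, m_1, ..., m_k]
\end{verbatim}
For instance, $p = x^4(x-1)^3(x-2)^2$ has $c_1 = -7$ and $c_2 = 19$, and \texttt{multiplicities([1, 2], [-7, 19], 9)} returns the exact multiplicities $[4, 3, 2]$, i.e., $m_0 = 4$, $m_1 = 3$, and $m_2 = 2$.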
We present two examples demonstrating these computations. Consider the \emph{hummingbird hypergraph} ${\mathcal B} = ([13],E)$ where \[ E = \{ \{1,2,3\},\{1,4,5\},\{1,6,7\},\{2,8,9\},\{3,10,11\},\{3,12,13\}\}. \] We present a drawing of ${\mathcal B}$ in Figure \ref{F:1}, where the edges are drawn as shaded triangles. Note that \[ \deg(\phi_{\mathcal B}) = n(k-1)^{n-1} = 13\cdot2^{12} = 53248 \] and, since ${\mathcal B}$ is a hypertree (and thus a 3-cylinder), its spectrum is invariant under multiplication by any third root of unity \cite{Coo}. We compute the minimal polynomials of the totally non-zero eigenvalues of ${\mathcal B}$ via \cite{Mac}: \begin{align*} \phi_{\mathcal B} = & x^{m_0}(x^9-6x^6 + 8x^3-4)^{m_1}(x^9-5x^6+5x^3-2)^{m_2} \\ &\cdot (x^3-1)^{m_3}(x^6-4x^3+2)^{m_4}(x^9-4x^6+3x^3-1)^{m_5}\\ &\cdot (x^6-3x^3+1)^{m_6}(x^3-3)^{54}(x^3-2)^{m_7}. \end{align*} With the intent of applying Theorem \ref{T:1} to $\phi_{\mathcal B}$ we consider the change of variable $y = x^3$ and observe that we need to determine $c_3, c_6, \dots, c_{48}$, as there are sixteen distinct non-zero roots in the variable $y$. We compute \begin{align*} c_3 &= -18432 \\ c_6 &= 169843968 \\ c_9 &= -1043209971456 \\ c_{12} &= 4804960103034624 \\ c_{15} &=-17702435302276375440 \\ c_{18} &= 54341319772238850901668 \\ c_{21} &= -142960393819753656566577552 \\ c_{24} &= 329036832924106136747171871042 \\ c_{27} &= -673063350744784559041302787109576 \\ c_{30}&= 1238925078774563882036470496247467682 \\ c_{33}&= -2072891735870949695930286542580991559916 \\ c_{36} &= 3178738418917825954994865036362341584776658 \\ c_{39} &= -4498896549573513724009044022281523093964642496 \\ c_{42} &= 5911636016042739623328802656744094043553245557890 \\ c_{45} &= -7249053168113446546908444934275696322928768819713512 \\ c_{48} &= 8332230937213678426258491158832963153453272812465162851 \end{align*} Using Theorem \ref{T:2} we have $M < 3, m > .14, R < 4.39, r > .38,$ and $c = |c_{48}|$, so that each root of $\phi_{\mathcal B}$ needs to be approximated to at most 3091 bits of precision. Using SageMath (\cite{Sag}), we obtain \begin{align*} \phi_{\mathcal B} =& x^{20983}(x^9-6x^6 + 8x^3-4)^{729}(x^9-5x^6+5x^3-2)^{972} \\ &\cdot (x^3-1)^{1782}(x^6-4x^3+2)^{486}(x^9-4x^6+3x^3-1)^{324}\\ &\cdot (x^6-3x^3+1)^{216}(x^3-3)^{54}(x^3-2)^{119}. \end{align*} In Figure \ref{F:1} we provide a plot of $\sigma({\mathcal B})$ drawn in the complex plane, where a disk is centered at each root and each disk's area is proportional to the algebraic multiplicity of the underlying root in $\phi_{\mathcal B}$. \begin{figure}[ht] \begin{center} \includegraphics[scale = .48]{hummingbird_graph.png} \hspace{1cm} \includegraphics[scale = .25]{hummingbird.png} \end{center} \caption{The hummingbird hypergraph and its spectrum.} \label{F:1} \end{figure} Now consider the \emph{Rowling hypergraph}\footnote{The name was chosen for its resemblance to an important narrative device in \cite{Row}.} \[ {\mathcal R} = ([7], \{\{1,2,3\},\{1,4,5\},\{1,6,7\},\{2,5,6\},\{3,5,7\}\}). \] A drawing of ${\mathcal R}$ is given in Figure \ref{F:2}, where the edges are drawn as arcs and its spectrum is drawn similarly to that of ${\mathcal B}$; note that ${\mathcal R}$ is also the Fano plane minus two edges. We have \[ \deg(\phi_{\mathcal R}) = n(k-1)^{n-1} = 7\cdot 2^6 = 448. \] It is easy to verify that ${\mathcal R}$ is not a 3-cylinder; however, its spectrum is invariant under multiplication by any third root of unity (see Lemma 3.11 of \cite{Fan}).
By \cite{Lu} we have \begin{align*} \phi_{\mathcal R} = & x^{m_0}(x^3-1)^{m_1}(x^{15}-13x^{12}+65x^9-147x^6+157x^3-64)^{m_2} \\ &\cdot (x^6-x^3+2)^{m_3}(x^6-17x^3+64)^{m_4}. \end{align*} With the intent of applying Theorem \ref{T:4} we need to determine only $c_3, c_6, c_9, c_{12}$. We have \begin{align*} c_3 &= -240 \\ c_6 &= 28320 \\ c_9 &= -2190860 \\ c_{12} &= 125012034. \end{align*} By Theorem \ref{T:2} we have $M < 4.5, m > .69, R < 2.25,$ and $r = 1$, so that at most 252 bits of precision are required to approximate each root. We compute \begin{align*} \phi_{\mathcal R} = & x^{133}(x^3-1)^{27}(x^{15}-13x^{12}+65x^9-147x^6+157x^3-64)^{12} \\ &\cdot (x^6-x^3+2)^6(x^6-17x^3+64)^3. \end{align*} \begin{figure}[ht] \begin{center} \includegraphics[scale = .5]{deathly_hallows.png} \hspace{1cm} \includegraphics[scale = .5]{rowlings_spectrum.png} \end{center} \caption{The Rowling hypergraph and its spectrum.} \label{F:2} \end{figure} \section{Acknowledgments} Thanks to Alexander Duncan for helpful discussions and insights. \bibliographystyle{amsplain}
\section{Introduction} In essence, the study of propositional proof complexity started with the work of Cook and Reckhow \cite{CR79}. The first superpolynomial bound on proof size was proved in a pioneering work of Tseitin \cite{Tse68} for regular resolution. Since then, many proof systems have been studied; some of them are logic-style (working with disjunctions, conjunctions and other Boolean operations) and some of them are algebraic (working with arbitrary polynomials). In this work, we consider extensions of two systems, an algebraic one and a logic-style one. \para{Algebraic proof systems.} Lower bounds for algebraic systems started with an exponential lower bound for the $\mathsf{Nullstellensatz}$ system \cite{BeameIKPP96}. The main system considered in this paper is based on the $\mathsf{Polynomial}$ $\mathsf{Calculus}$ system \cite{CEI96}, which is a dynamic version of $\mathsf{Nullstellensatz}$. Many exponential lower bounds are known on the size of $\mathsf{Polynomial}$ $\mathsf{Calculus}$ proofs for tautologies like the Pigeonhole Principle \cite{MR1691494,IPS99} and Tseitin tautologies \cite{BussGIP01}. While most results concern the representation of Boolean values by 0 and 1, there are also exponential lower bounds over the $\{-1, +1\}$ basis \cite{Sok20}. Many extensions of $\mathsf{Polynomial}$ $\mathsf{Calculus}$ and $\mathsf{Nullstellensatz}$ have been considered before. Buss et al. \cite{BussIKPRS96} showed that there is a tight connection between the lengths of constant-depth Frege proofs with $MOD_p$ gates and the lengths of $\mathsf{Nullstellensatz}$ refutations using extension axioms. Impagliazzo, Mouli and Pitassi \cite{IMP19_new} showed that a depth-3 extension of $\mathsf{Polynomial}$ $\mathsf{Calculus}$ called $\Sigma\Pi\Sigma$-$\mathsf{PC}$ p-simulates $\mathsf{CP}^*$ (an inequalities-based system, $\mathsf{Cutting}$ $\mathsf{Planes}$ \cite{CCT87, CHVATAL1989455} with coefficients written in unary) over $\mathbb{Q}$. They also showed that a stronger extension of $\mathsf{Polynomial}$ $\mathsf{Calculus}$, called $\mathsf{Depth}$-$k$-$\mathsf{PC}$, p-simulates $\mathsf{Cutting}$ $\mathsf{Planes}$ and another inequalities-based system, $\mathsf{Sum}$-$\mathsf{of}$-$\mathsf{Squares}$; the simulations can be conducted over $\mathbb{F}_{p^m}$ for an arbitrary prime number $p$ if $m$ is sufficiently large. Very strong extensions have also been considered: Grigoriev and Hirsch \cite{GH03} considered algebraic systems over formulas, and Grochow and Pitassi \cite{GP14} introduced the Ideal Proof System, $\mathsf{IPS}$, which can be considered as the version of $\mathsf{Nullstellensatz}$ where all polynomials are written as algebraic circuits (see also \cite{Pit97, Pit98} for earlier versions of this system). \para{Logic-style systems.} While exponential lower bounds for low-depth proof systems (both algebraic and logical ones) have been known for decades, the situation with higher-depth proof systems is much worse. The present knowledge is limited to exponential bounds for constant-depth Frege systems over the de Morgan basis (that is, without xors or equivalences) \cite{Ajt94, BussIKPRS96, BeameIKPP96}. In particular, no truly exponential lower bounds are known for the size of refutations of formulas in CNF in (dag-like) systems that work over disjunctions of equations or inequalities (see \cite{Kra98-Discretely} for the first paper defining these systems and containing partial results).
$\mathsf{Res}$-$\mathsf{Lin}$ (defined in \cite{RT07}), working with disjunctions of linear equations, is the second system considered in our paper, and it can be viewed as a generalization of Resolution. Part and Tzameret \cite{PT18_new} proved an exponential lower bound for (dag-like) $\mathsf{Res}$-$\mathsf{Lin}$ refutations over $\mathbb{Q}$ of the bit-value principle $\mathsf{BVP}_n$. Although this is the first exponential lower bound for this system, the instance does not constitute a translation of a formula in CNF. Itsykson and Sokolov \cite{ITSYKSON2020102722} considered another extension of the resolution proof system, named $\mathsf{Res}(\oplus)$, that operates with disjunctions of linear equalities over $\mathbb{F}_2$, and proved an exponential lower bound on the size of tree-like $\mathsf{Res}(\oplus)$-proofs. \subsection{Our results} We extend $\mathsf{Polynomial}$ $\mathsf{Calculus}$ with two additional rules. One rule allows one to take a square root (it was introduced by Grigoriev and Hirsch \cite{GH03} in the context of transforming refutation proofs of non-Boolean formulas into derivation proofs; our motivation for taking square roots is to consider an algebraic system that is at least as strong as $\mathsf{Res}$-$\mathsf{Lin}$ even for non-Boolean formulas, see below). The other rule is an algebraic version of Tseitin's extension rule, which allows one to introduce new variables. We will denote our generalization of $\mathsf{Polynomial}$ $\mathsf{Calculus}$ by $\mathsf{Ext}$-$\mathsf{PC}^{\surd}$. In this work we give a positive answer to the question raised in \cite{IMP19_new} asking for a technique for proving size lower bounds on Polynomial Calculus without proving any degree lower bounds. We also give an answer to another question raised in \cite{IMP19_new} by proving an exponential lower bound for a system with an extension rule even stronger than that in $\Sigma\Pi\Sigma$-$\mathsf{PC}$, which is another extension of Polynomial Calculus presented in the aforementioned work. We consider the following subset-sum instance, called the $\mathsf{Binary}$ $\mathsf{Value}$ $\mathsf{Principle}$ ($\mathsf{BVP}_n$) \cite{AGHT19_new,PT18_new}: $$ 1 + x_1 + 2 x_2 + \ldots + 2^{n - 1} x_n = 0, $$ and prove an exponential lower bound on the size of $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutations of $\mathsf{BVP}_n$. Note that the $\mathsf{Binary}$ $\mathsf{Value}$ $\mathsf{Principle}$ does not correspond to the translation of any CNF formula, and thus the question of proving size lower bounds on refutations of formulas in CNF without proving degree lower bounds \textbf{remains open}. \begin{theorem} Any $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation of $\mathsf{BVP}_n$ requires size $2^{\Omega(n)}$. \end{theorem} The technique we use for proving this lower bound is similar to the technique for proving the conditional $\mathsf{IPS}$ lower bound in \cite{AGHT19_new}. However, since the $\mathsf{Ext}$-$\mathsf{PC}$ proof system is weaker than the $\mathsf{Ideal}$ $\mathsf{Proof}$ $\mathsf{System}$, we get an unconditional lower bound. The main idea of the conditional lower bound in \cite{AGHT19_new} is to prove a complexity lower bound on the free term at the end of an $\mathsf{IPS}$-refutation of $\mathsf{BVP}_n$ over $\mathbb{Z}$ and then show that $\mathsf{IPS}_\mathbb{Z}$ simulates $\mathsf{IPS}_{\mathbb{Q}}$.
One difference is that instead of concentrating on the \emph{complexity} of computing the free term of the proof, we concentrate on the \emph{prime numbers} mentioned in the proof (and thus appearing as factors of the free term). We then consider $\mathsf{Res}$-$\mathsf{Lin}$ and show that $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ simulates $\mathsf{Res}$-$\mathsf{Lin}$, thus obtaining an alternative lower bound for $\mathsf{Res}$-$\mathsf{Lin}$. \begin{corollary}[Informal] Any $\mathsf{Res}$-$\mathsf{Lin}$ refutation of $\mathsf{BVP}_n$ requires size $2^{\Omega(n)}$. \end{corollary} Note that while Part and Tzameret \cite{PT18_new} prove an exponential lower bound on the number of lines in the proof, we prove a bound on the proof size (essentially, on the bit size of the scalars appearing in the proof). \subsection{Organization of the paper} In Section~\ref{sec:prelim} we recall the definition of Polynomial Calculus ($\mathsf{PC}$) and give the definitions of Polynomial Calculus with square root ($\mathsf{PC}^{\surd}$) and Extended Polynomial Calculus with square root ($\mathsf{Ext}$-$\mathsf{PC}^{\surd}$). In Section~\ref{section-3} we prove an exponential lower bound on the size of $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutations of $\mathsf{BVP}_n$. We start by considering derivations with integer coefficients ($\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Z}}^{\surd}$) and show that the free term at the end of such a refutation of $\mathsf{BVP}_n$ is not just large but is also divisible by all primes less than $2^n$ (see Theorem~\ref{lower bound for integers depth-inf}). Then, in Theorem~\ref{lower bound q}, we convert proofs over $\mathbb{Q}$ into proofs over $\mathbb{Z}$ without changing the set of primes mentioned in the proof and thus get an $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ lower bound. In Section~\ref{section-4} we show that $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ simulates $\mathsf{Res}$-$\mathsf{Lin}$ and thus obtain an alternative lower bound on the size of $\mathsf{Res}$-$\mathsf{Lin}$ refutations of $\mathsf{BVP}_n$. \section{Preliminaries}\label{sec:prelim} In this paper we work with polynomials over the integers or the rationals. We define the size of a polynomial roughly as the total length of the bit representation of its coefficients: \begin{definition} [Size of a polynomial]\label{def:PolynomialSize} Let $f$ be an arbitrary integer or rational polynomial in the variables $\{x_1, \ldots, x_n\}$. \begin{itemize} \item If $f \in \mathbb{Z}[x_1, \ldots, x_n]$ then $Size(f) = \sum \lceil \log |a_i| \rceil$ where the $a_i$ are the coefficients of $f$. \item If $f \in \mathbb{Q}[x_1, \ldots, x_n]$ then $Size(f) = \sum \lceil \log |p_i| \rceil + \lceil \log q_i \rceil $ where $p_i \in \mathbb{Z}$, $q_i \in \mathbb{N}$ and the $\frac{p_i}{q_i}$ are the coefficients of $f$. \end{itemize} \end{definition} \begin{definition}[Polynomial Calculus]\label{def:PC} Let $\Gamma = \{P_1, \ldots, P_m\} \subset \mathbb{F}[x_1, \ldots, x_n]$ be a set of polynomials in variables $\{x_1, \ldots, x_n\}$ over a field $\mathbb{F}$ such that the system of equations $P_1 = 0, \ldots, P_m = 0$ has no solution.
A Polynomial Calculus refutation of $\Gamma$ is a sequence of polynomials $R_1, \ldots, R_s$ where $R_s = 1$ and for every $l$ in $\{1, \ldots, s\}$, $R_l \in \Gamma$ or is obtained through one of the following derivation rules for $j, k < l$: \begin{itemize} \item $R_l = \alpha R_j + \beta R_k$ for $\alpha, \beta \in \mathbb{F}$ \item $R_l = x_i R_k$ for some $i \in \{1, \ldots, n\}$ \end{itemize} The size of the refutation is $\sum_{l = 1}^s Size(R_l)$. The degree of the refutation is $\max_l deg(R_l)$. \end{definition} Now we consider a variant of the Polynomial Calculus proof system with an additional \textbf{square root derivation rule} (see \cite{GH03}). Moreover, we extend our definition from fields to \textbf{rings}. \begin{definition}[Polynomial Calculus with square root]\label{def:PCS} Let $\Gamma = \{P_1, \ldots, P_m\} \subset R[x_1, \ldots, x_n]$ be a set of polynomials in variables $\{x_1, \ldots, x_n\}$ over a ring $R$ such that the system of equations $P_1 = 0, \ldots, P_m = 0$ has no solution. A $\mathsf{PC}^{\surd}_{R}$ refutation of $\Gamma$ is a sequence of polynomials $R_1, \ldots, R_s$ where $R_s = M$ for some constant $M \in R, M \neq 0$, and for every $l$ in $\{1, \ldots, s\}$, $R_l \in \Gamma$ or is obtained through one of the following derivation rules for $j, k < l$: \begin{itemize} \item $R_l = \alpha R_j + \beta R_k$ for $\alpha, \beta \in R$ \item $R_l = x_i R_k$ for some $i \in \{1, \ldots, n\}$ \item $R_l^2 = R_k$ (the \textbf{square root rule}) \end{itemize} The size of the refutation is $\sum_{l = 1}^s Size(R_l)$, where $Size(R_l)$ is the size of the polynomial $R_l$. The degree of the refutation is $\max_l deg(R_l)$. \end{definition} \begin{note} We will consider $\mathbb{Q}$ or $\mathbb{Z}$ as the ring $R$. For both of these rings, in the \textbf{Boolean} case, where the axioms $x_i^2 - x_i = 0$ are added, our system is complete: for every system $\{f_i(\vec x) = 0\}$ that is unsatisfiable over $\{0, 1\}$ assignments there is a $\mathsf{PC}^{\surd}_{R}$ refutation. Also, note that if $R$ is a domain and $P^2 = 0$ for some $P \in R[\vec x]$, then $P = 0$. \end{note} We now define a variant of $\mathsf{PC}^{\surd}_{R}$, denoted $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{R}$, where the proof system is additionally allowed to introduce new variables $y_i$ corresponding to arbitrary polynomials in the original variables $x_i$. \begin{definition}[Extended Polynomial Calculus with square root]\label{def:Depth-inf-PC} Let $\Gamma = \{P_1, \ldots, P_m\} \subset R[x_1, \ldots, x_n]$ be a set of polynomials in variables $\{x_1, \ldots, x_n\}$ over a ring $R$ such that the system of equations $P_1 = 0, \ldots, P_m = 0$ has no solution. An $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{R}$ refutation of $\Gamma$ is a $\mathsf{PC}^{\surd}_{R}$ refutation of a set $$\Gamma' = \{P_1, \ldots, P_m, y_1 - Q_1(x_1, \ldots, x_n), y_2 - Q_2(x_1, \ldots, x_n, y_1), \ldots, y_{m'} - Q_{m'}(x_1, \ldots, x_n, y_1, \ldots, y_{m' - 1})\} $$ where the $Q_i \in R[\vec{x}, y_1, \ldots, y_{i - 1}]$ are arbitrary polynomials. The size of the $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{R}$ refutation is equal to the size of the $\mathsf{PC}^{\surd}_{R}$ refutation of $\Gamma'$. \end{definition}
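As a toy illustration of the square root rule (an example of ours, not from the literature): the system $\{x^2, \; x - 1\}$ has no solutions, and it admits the short $\mathsf{PC}^{\surd}_{\mathbb{Q}}$ refutation
\[
R_1 = x^2, \qquad R_2 = x \ \ (\text{since } R_2^2 = R_1), \qquad R_3 = x - 1, \qquad R_4 = R_2 - R_3 = 1 .
\]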
\section{Lower bound}\label{section-3} In order to prove a lower bound for the $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Q}}$ proof system, we consider the following subset-sum instance \cite{AGHT19_new, PT18_new}: \begin{definition}[Binary Value Principle $\mathsf{BVP}_n$] The \textbf{binary value principle} over the variables $x_1,\dots, x_n$, $\mathsf{BVP}_n$ for short, is the following unsatisfiable system of linear equations: $$ x_1 + 2 x_2 + \ldots + 2^{n - 1} x_n + 1 = 0, $$ $$ x_1^2 - x_1 = 0, \; x_2^2 - x_2 = 0, \; \ldots, \; x_n^2 - x_n = 0. $$ \end{definition} \begin{theorem} \label{lower bound for integers depth-inf} Any $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation of $\mathsf{BVP}_n$ requires size $\Omega(2^n)$. Moreover, the absolute value of the constant at the end of any $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation consists of at least $C \cdot 2^n$ bits for some constant $C > 0$. Also, the constant at the end of any such refutation is divisible by every prime number less than $2^n$. \end{theorem} \begin{proof} Assume that $\{R_1, \ldots, R_t\}$ is an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation of $\mathsf{BVP}_n$. Then $\{R_1, \ldots, R_t\}$ is a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation of some set $$ \Gamma' = \{G(\vec x), F_1(\vec x), \ldots, F_n(\vec x), y_1 - Q_1(\vec x), \ldots, y_m - Q_m(\vec x, y_1, \ldots, y_{m - 1})\} $$ where $G(\vec x) = 1 + \sum_{i = 1}^{n} 2^{i - 1} x_i$, $F_i(\vec x) = x_i^2 - x_i$ and $Q_i \in \mathbb{Z}[\vec x, y_1, \ldots, y_{i - 1}]$. By the definition of an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation there exists an integer constant $M \neq 0$ such that $R_t = M$. \begin{claim}\label{cla:M-div-every-prime} $M$ is divisible by every prime number less than $2^n$. \end{claim} \begin{proofclaim} Consider an arbitrary integer $0 \le k < 2^n$ with binary representation $b_{1}, \ldots, b_{n}$, and suppose that $k + 1$ is \textbf{prime}. Then $G(b_1, \ldots, b_n) = k + 1$ and $F_i(b_1, \ldots, b_n) = b_i^2 - b_i = 0$. Also consider the integers $c_1, \ldots, c_m$ defined by $c_i = Q_i(b_1, \ldots, b_n, c_1, c_2, \ldots, c_{i - 1})$. We now prove by induction that every integer $R_i(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$, which in particular shows that $M$ is divisible by every prime number less than $2^n$. \bfseries Base case: \mdseries if $i = 1$, then $R_1 \in \Gamma'$, so $R_1(b_1, \ldots, b_n, c_1, \ldots, c_m)$ equals $G(b_1, \ldots, b_n) = k + 1$, or $F_j(b_1, \ldots, b_n) = 0$ for some $j$, or $c_j - Q_j(b_1, \ldots, b_n, c_1, \ldots, c_{j - 1}) = 0$ for some $j$; in each case $R_1(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$. \bfseries Induction step: \mdseries suppose we know that $R_j$ is divisible by $k + 1$ for every $j \le i$. We now show it for $R_{i + 1}$. There are four cases: \begin{enumerate} \item If $R_{i + 1} \in \Gamma'$, then this case is identical to the base case and $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$. \item If $R_{i + 1} = \alpha R_j + \beta R_s$ for $\alpha, \beta \in \mathbb{Z}$ and $j, s \le i$, then $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$ because $R_j(b_1, \ldots, b_n, c_1, \ldots, c_m)$ and $R_s(b_1, \ldots, b_n, c_1, \ldots, c_m)$ are divisible by $k + 1$ and $\alpha$ and $\beta$ are integers.
\item If $R_{i + 1} = x_j R_s$ or $R_{i + 1} = y_j R_s$, then $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$ because $R_s(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$ and $b_j$ and $c_j$ are integers. \item If $R_{i + 1}^2 = R_s$, then we know that $R_s(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$. Suppose $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is not divisible by $k + 1$. Then $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)^2$ is not divisible by $k + 1$ since $k + 1$ is \textbf{prime}. But $R_{i + 1}(b_1, \ldots, b_n, c_1, \ldots, c_m)^2 = R_s(b_1, \ldots, b_n, c_1, \ldots, c_m)$, which leads to a contradiction. \end{enumerate} Since every $R_{i}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$, the constant $M = R_{t}(b_1, \ldots, b_n, c_1, \ldots, c_m)$ is divisible by $k + 1$ for every prime $k + 1$; that is, $M$ is divisible by every prime number less than $2^n$. \end{proofclaim} So we know that $M$ is divisible by the product of all prime numbers less than $2^n$. Then $|M| > (\pi(2^n))!$ where $\pi(2^n)$ is the number of primes less than $2^n$. By the prime number theorem $\pi(2^n) > C \frac{2^n}{n}$ for some constant $C > 0$. By Stirling's approximation we get $$ |M| > \left(C \frac{2^n}{n}\right)! > \left(\frac{C\, 2^n}{e\, n}\right)^{C \frac{2^n}{n}} > \left(2^{\frac{n}{2}}\right)^{C \frac{2^n}{n}} > 2^{C_0 2^n} $$ for sufficiently large $n$ and some constant $C_0 > 0$, which means that $M$ consists of at least $C_1 \cdot 2^n$ bits. Therefore any $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation of $\mathsf{BVP}_n$ requires size $\Omega(2^n)$. \end{proof}
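Before turning to the rational case, here is a quick numerical illustration (ours, not part of the proof) of the growth forced by Theorem~\ref{lower bound for integers depth-inf}: the final constant is divisible by every prime below $2^n$, so its bit length already grows roughly like $2^n$ for small values of $n$.

\begin{verbatim}
# Our illustration: the product of all primes below 2^n lower-bounds
# |M|, and its bit length grows roughly like 2^n (prime number theorem).
from sympy import primerange
from math import prod

for n in range(2, 14):
    lower_bound_on_M = prod(primerange(2, 2**n))
    print(n, lower_bound_on_M.bit_length())
\end{verbatim}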
In order to prove a lower bound over $\mathbb{Q}$, we need to convert an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Q}}$ proof into an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ proof. \begin{theorem}\label{lower bound q} Any $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Q}}$ refutation of $\mathsf{BVP}_n$ requires size $\Omega(2^n)$. \end{theorem} \begin{proof} Assume that $\{R_1, \ldots, R_t\}$ is an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Q}}$ refutation of $\Gamma = \mathsf{BVP}_n$ of size $S$. Then $\{R_1, \ldots, R_t\}$ is a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation of some set $\Gamma' = \Gamma \cup \{y_1 - Q_1(\vec x), \ldots, y_m - Q_m(\vec x, y_1, \ldots, y_{m - 1})\}$ where $Q_i \in \mathbb{Q}[\vec x, \vec y]$. Also, we know that $R_t = M$ for some $M \in \mathbb{Q}$. Consider the integers $M_1, \ldots, M_m$ where $M_i$ is equal to the product of the denominators of all coefficients of the polynomial $Q_i$. Also consider all polynomials $R_j(\vec x, \vec y)$ which were derived using the linear combination rule, i.e., $R_j = \alpha R_i + \beta R_k$, and consider \textbf{all} constants $\alpha$ and $\beta$ occurring in linear combination derivations in our proof; denote the set of these constants by $\{\gamma_1, \gamma_2, \ldots, \gamma_f\} \subset \mathbb{Q}$. Denote the set of all \textbf{denominators} of the constants in $\{\gamma_1, \gamma_2, \ldots, \gamma_f\}$ by $\{\delta_1, \delta_2, \ldots, \delta_l\} \subset \mathbb{N}$. Finally, for each line of the refutation consider the product of the denominators of all coefficients of that polynomial; denote the resulting integers by $\{L_1, \ldots, L_t\} \subset \mathbb{N}$. Now we will construct an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation of $\Gamma$ such that the constant at the end of this proof is equal to $M_1^{c_1} \cdot M_2^{c_2} \cdots M_m^{c_m} \cdot \delta_1^{c_{m + 1}} \cdots \delta_l^{c_{m + l}} \cdot L_1^{c_{m + l + 1}} \cdots L_t^{c_{m + l + t}} \cdot M$ for some non-negative integers $c_1, c_2, \ldots, c_{m + l + t}$. Firstly, we translate the polynomials $Q_i$ into integer polynomials $Q_i'$. Consider $Q_1'(\vec x) = M_1 \cdot Q_1(\vec x)$ where $M_1$ is equal to the product of the denominators of all coefficients of the polynomial $Q_1$. Then $Q_1' \in \mathbb{Z}[\vec x]$, and we set $T_1 = M_1$. Then consider $Q_2' (\vec x, y_1') = T_2 \cdot Q_2 (\vec x, \frac{y_1'}{T_1})$ where $T_2 = T_1^{\alpha_{1 1}} \cdot M_2$ and $\alpha_{1 1}$ is an \textbf{arbitrary} non-negative integer such that $Q_2' \in \mathbb{Z}[\vec x, y_1']$. Then for every $i$ we consider $Q_i'(\vec x, y_1', \ldots, y_{i - 1}') = T_i \cdot Q_{i}(\vec x, \frac{y_1'}{T_1}, \ldots, \frac{y_{i -1 }'}{T_{i - 1}})$ where $T_i = T_1^{\alpha_{i 1}} \cdot T_2^{\alpha_{i 2}} \cdots T_{i - 1}^{\alpha_{i\, i - 1}} \cdot M_i$ and $\alpha_{i 1}, \ldots, \alpha_{i\, i - 1}$ are \textbf{arbitrary} non-negative integers such that $Q_{i}' \in \mathbb{Z}[\vec x, y_1', \ldots, y_{i - 1}']$. Note that we are not interested in the size of the integers $\alpha_{i j}$, so they can be arbitrarily large. Now we construct a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation $\{R_1', \ldots, R_s'\}$ of the set $\Gamma'' = \Gamma \cup \{y_1' - Q_1'(\vec x), \ldots, y_m' - Q_m'(\vec x, y_1', \ldots, y_{m - 1}')\}$ of the following form: this refutation duplicates the original refutation $\{R_1, \ldots, R_t\}$ in all cases except when a polynomial $R_i$ was derived from some polynomial $R_k$ by multiplying by a variable $y_j$. In this case we multiply the corresponding polynomial by $y_j'$ and then multiply it by $\frac{1}{T_j}$. Formally, we will prove the following claim: \begin{claim}\label{cla:Q-Z trans} There is a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation $\{R_1', \ldots, R_s'\}$ of the set $\Gamma'' = \Gamma \cup \{y_1' - Q_1'(\vec x), \ldots, y_m' - Q_m'(\vec x, y_1', \ldots, y_{m - 1}')\}$ for which the following properties hold: \begin{itemize} \item For every polynomial $R_i'(\vec x, y_1', \ldots, y_m')$ one of the following equations holds: $R_i'(\vec x, y_1 \cdot T_1, \ldots, y_m \cdot T_m) = R_j(\vec x, y_1, \ldots, y_m)$ for some $j$, or $R_i'(\vec x, y_1 \cdot T_1, \ldots, y_m \cdot T_m) = T_k \cdot R_j(\vec x, y_1, \ldots, y_m)$ for some $k$ and $j$. \item If $R_i'(\vec x, y_1', \ldots, y_m')$ was derived from $R_j'(\vec x, y_1', \ldots, y_m')$ and $R_k'(\vec x, y_1', \ldots, y_m')$ by taking a linear combination with rational constants $\alpha$ and $\beta$ (that is, $R_i' = \alpha R_j' + \beta R_k'$), then $\alpha = \frac{1}{T_f}$ and $\beta = 0$ for some $f$, or there is some polynomial in the original refutation which was derived from some polynomials $R_k$ and $R_l$ by a linear combination with the same constants $\alpha$ and $\beta$. \end{itemize} \end{claim} \begin{proofclaim} The proof is an easy (but lengthy) inductive argument and is given in the \hyperref[appendix]{Appendix}.
\end{proofclaim} Now we will show that $\Gamma''$ has a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation in which the constant at the end is equal to $$ M_1^{c_1} \cdot M_2^{c_2} \cdots M_m^{c_m} \cdot \delta_1^{c_{m + 1}} \cdots \delta_l^{c_{m + l}} \cdot L_1^{c_{m + l + 1}} \cdots L_t^{c_{m + l + t}} \cdot M. $$ In order to do this we fix a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation $\{R_1', \ldots, R_s'\}$ of $\Gamma''$ with the properties from the \hyperref[cla:Q-Z trans]{Claim 3.4} and construct a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation of $\Gamma''$ by induction. Moreover, we will construct a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation $\{R_1'', \ldots, R_f''\}$ in which every polynomial $R_i''$ is equal to $M_1^{d_1} \cdot M_2^{d_2} \cdots M_m^{d_m} \cdot \delta_1^{d_{m + 1}} \cdots \delta_l^{d_{m + l}} \cdot L_1^{d_{m + l + 1}} \cdots L_t^{d_{m + l + t}} \cdot R_j'$ for some non-negative integers $d_1, \ldots, d_{m + l + t}$ and some polynomial $R_j'$. Informally, we multiply each line of our $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation by a suitable constant in order to get a correct $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation. But since we cannot divide polynomials in a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ refutation by any constant, we duplicate the original $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation multiplied by a constant of the form $M_1^{d_1} \cdot M_2^{d_2} \cdots M_m^{d_m} \cdot \delta_1^{d_{m + 1}} \cdots \delta_l^{d_{m + l}} \cdot L_1^{d_{m + l + 1}} \cdots L_t^{d_{m + l + t}}$ every time we want to simulate a derivation step of the original proof. \noindent\textbf{Induction statement:} Let $\{R_1', \ldots, R_i'\}$ be a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ derivation from $\Gamma''$ with the properties from the \hyperref[cla:Q-Z trans]{Claim 3.4}. Then there exists a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ derivation $\{R_1'', \ldots, R_f''\}$ from $\Gamma''$ such that \begin{itemize} \item $f \le 2 i^2$. \item There is some constant $F_i = M_1^{b_1} \cdot M_2^{b_2} \cdots M_m^{b_m} \cdot \delta_1^{b_{m + 1}} \cdots \delta_l^{b_{m + l}} \cdot L_1^{b_{m + l + 1}} \cdots L_t^{b_{m + l + t}} \in \mathbb{N}$ such that $$ F_i \cdot R_1' = R_{f - i + 1}'', \; F_i \cdot R_2' = R_{f - i + 2}'', \; \ldots, \; F_i \cdot R_i' = R_{f}''. $$ \end{itemize} \mbox{}\\ \noindent{\textit{Base case: }} If $i = 1$ then $R_1' \in \Gamma''$, and we can take $R_{1}'' = R_1'$ and $F_1 = 1$. \mbox{}\\ \noindent{\textit{Induction step: }} Suppose we have already constructed a $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ derivation $\{R_1'', R_2'', \ldots, R_{f}''\}$ for which the induction statement is true. Then there are four cases depending on the way $R_{i + 1}'$ is derived. \noindent{\textbf{Case 1}:} If $R_{i + 1}' \in \Gamma''$ then $F_{i + 1} = F_i$ and $$ R_{f + 1}'' = R_{i + 1}', \; R_{f + 2}'' = F_{i + 1} \cdot R_1', \; R_{f + 3}'' = F_{i + 1} \cdot R_2', \; \ldots, \; R_{f + i + 1}'' = F_{i + 1} \cdot R_i', \; R_{f + i + 2}'' = F_{i + 1} \cdot R_{i + 1}'. $$ \noindent{\textbf{Case 2}:} If $R_{i + 1}' = x_j R_l'$ or $R_{i + 1}' = y_j' R_l'$ then $F_{i + 1} = F_i$, $$ R_{f + 1}'' = F_{i + 1} \cdot R_1', \; R_{f + 2}'' = F_{i + 1} \cdot R_2', \; \ldots, \; R_{f + i}'' = F_{i + 1} \cdot R_i' $$ and $R_{f + i + 1}'' = x_j R_{f - i + l}'' = F_{i + 1} \cdot R_{i + 1}'$ or $R_{f + i + 1}'' = y_j' R_{f - i + l}'' = F_{i + 1} \cdot R_{i + 1}'$.
\noindent{\textbf{Case 3}:} Suppose $R_{i + 1}' = \alpha R_j' + \beta R_k'$ where $\alpha = \frac{p_1}{q_1}$ and $\beta = \frac{p_2}{q_2}$ with $p_1, p_2 \in \mathbb{Z}$ and $q_1, q_2 \in \mathbb{N}$. Then we can take $F_{i + 1} = q_1 q_2 F_i$, $$ R_{f + 1}'' = q_1 q_2 \cdot R_{f - i + 1}'' = F_{i + 1} \cdot R_1', \: R_{f + 2}'' = q_1 q_2 \cdot R_{f - i + 2}'' = F_{i + 1} \cdot R_2', \: \ldots, \: R_{f + i}'' = q_1 q_2 \cdot R_{f}'' = F_{i + 1} R_{i}' $$ and $R_{f + i + 1}'' = p_1 q_2 \cdot R_{f - i + j}'' + p_2 q_1 \cdot R_{f - i + k}'' = F_{i + 1} R_{i + 1}'$. From the \hyperref[cla:Q-Z trans]{Claim 3.4} we know that $\alpha = \frac{1}{T_h}$ for some $h$ and $\beta = 0$, or $q_1$ and $q_2$ are equal to some $\delta_r$ and $\delta_k$. From the induction statement we know that $$ F_i = M_1^{b_1} \cdot M_2^{b_2} \cdots M_m^{b_m} \cdot \delta_1^{b_{m + 1}} \cdots \delta_l^{b_{m + l}} \cdot L_1^{b_{m + l + 1}} \cdots L_t^{b_{m + l + t}}. $$ Then, since $T_h = M_1^{r_{1}} \cdots M_m^{r_{m}}$ for some non-negative integers $r_1, \ldots, r_m$, we know that $$ F_{i + 1} = M_1^{b_1'} \cdot M_2^{b_2'} \cdots M_m^{b_m'} \cdot \delta_1^{b_{m + 1}'} \cdots \delta_l^{b_{m + l}'} \cdot L_1^{b_{m + l + 1}'} \cdots L_t^{b_{m + l + t}'}, $$ and the induction statement stays true. \noindent{\textbf{Case 4}:} Suppose $R_{i + 1}'^2 = R_j'$. We know that $R_{i +1}'(x_1, \ldots, x_n, y_1', \ldots,y_m') = R_{k}(x_1, \ldots, x_n, \frac{y_1'}{T_1}, \ldots, \frac{y_m'}{T_m})$ or $R_{i +1}'(x_1, \ldots, x_n, y_1', \ldots,y_m') = T_h \cdot R_{k}(x_1, \ldots, x_n, \frac{y_1'}{T_1}, \ldots, \frac{y_m'}{T_m})$ for some $h$. Then we can take $M' = L_k \cdot T_1^{\alpha_1} \cdot T_2^{\alpha_2} \cdots T_m^{\alpha_m} = L_k \cdot M_1^{\alpha_1'} \cdot M_2^{\alpha_2'} \cdots M_m^{\alpha_m'}$ for some non-negative integers $\alpha_1, \ldots, \alpha_m$ such that $M' \cdot R_{i + 1}'$ is an integer polynomial. Such integers $\alpha_1, \ldots, \alpha_m$ exist since $L_k$ is the product of all denominators of the coefficients of the polynomial $R_k$. Then we can take $F_{i + 1} = M' \cdot F_i$; clearly $F_{i + 1} \cdot R_{i + 1}'$ is an integer polynomial. Then we can make the following $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ derivation: \begin{multline*} R_{f + 1}'' = F_i (M')^2 \cdot R_{f - i + j}'' = (F_i M')^2 \cdot R_j', \\ R_{f + 2}'' = M' \cdot R_{f - i + 1}'' = F_{i + 1} \cdot R_1', \: R_{f + 3}'' = M' \cdot R_{f - i + 2}'' = F_{i + 1} \cdot R_2', \: \ldots, \: R_{f + i + 1}'' = M' \cdot R_{f}'' = F_{i + 1} R_{i}'. \end{multline*} Then we can take $R_{f + i + 2}'' = F_i M' \cdot R_{i + 1}'$, and since $R_{f + 1}'' = (F_i M')^2 \cdot R_j'$ we know that $(R_{f + i + 2}'')^2 = R_{f + 1}''$ and we get a correct $\mathsf{PC}_{\mathbb{Z}}^{\surd}$ derivation. Since $M' = L_k \cdot M_1^{\alpha_1'} \cdot M_2^{\alpha_2'} \cdots M_m^{\alpha_m'}$, we know that $$ F_{i + 1} = M_1^{b_1'} \cdot M_2^{b_2'} \cdots M_m^{b_m'} \cdot \delta_1^{b_{m + 1}'} \cdots \delta_l^{b_{m + l}'} \cdot L_1^{b_{m + l + 1}'} \cdots L_t^{b_{m + l + t}'}, $$ and the induction statement stays true. So now we have an $\mathsf{Ext}$-$\mathsf{PC}^{\surd}_{\mathbb{Z}}$ refutation of $\Gamma$ such that the constant at the end of this refutation is equal to $M_1^{c_1} \cdot M_2^{c_2} \cdots M_m^{c_m} \cdot \delta_1^{c_{m + 1}} \cdots \delta_l^{c_{m + l}} \cdot L_1^{c_{m + l + 1}} \cdots L_t^{c_{m + l + t}} \cdot M$. Suppose that $M = \frac{p'}{q'}$ where $p' \in \mathbb{Z}$ and $q' \in \mathbb{N}$.
Then, from \autoref{lower bound for integers depth-inf} we know that $M_1^{c_1} \cdot M_2^{c_2} \cdots M_m^{c_m} \cdot \delta_1^{c_{m + 1}} \cdots \delta_l^{c_{m + l}} \cdot L_1^{c_{m + l + 1}} \cdots L_t^{c_{m + l + t}} \cdot p'$ is divisible by every prime number less than $2^n$. Since $M_1, \ldots, M_m$, $\delta_1, \ldots, \delta_l$, $L_1, \ldots, L_t$ are positive integers, we know that $ M_1 \cdot M_2 \cdots M_m \cdot \delta_1 \cdots \delta_l \cdot L_1 \cdots L_t \cdot p' $ is divisible by every prime number less than $2^n$. Also we know that $$ \lceil \log M_1 \rceil + \cdots + \lceil \log M_m \rceil + \lceil \log \delta_1 \rceil + \cdots + \lceil \log \delta_l \rceil + \lceil \log L_1 \rceil + \cdots + \lceil \log L_t \rceil + \lceil \log |p'| \rceil = O(S) $$ because all the constants $M_1, \ldots, M_m, L_1, \ldots, L_t$ are products of denominators in the lines of our refutation $\{R_1, \ldots, R_t\}$ and the constants $\delta_1, \ldots, \delta_l$ are denominators of rationals in the linear combinations used in our derivation. On the other hand, we know that $$ M_1 \cdot M_2 \cdots M_m \cdot \delta_1 \cdots \delta_l \cdot L_1 \cdots L_t \cdot p' \ge 2^{\Omega(2^n)} $$ since this product is divisible by every prime number less than $2^n$. Therefore $S = \Omega(2^n)$. \end{proof} \newpage \section{Connection between $\mathsf{Res}$-$\mathsf{Lin}$, $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ and $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}$}\label{section-4} Following \cite{RT07}, we define the $\mathsf{Res}$-$\mathsf{Lin}$ proof system. \begin{definition} A \textbf{disjunction of linear equations} is of the following general form: \begin{equation} (a_1^{(1)} x_1 + \ldots + a_n^{(1)} x_n = a_0^{(1)}) \vee \cdots \vee (a_1^{(t)} x_1 + \ldots + a_n^{(t)} x_n = a_0^{(t)}) \end{equation} where $t \ge 0$ and the coefficients $a_i^{(j)}$ are \textbf{integers} (for all $0 \le i \le n$, $1 \le j \le t$). The semantics of such a disjunction is the natural one: we say that an assignment of integer values to the variables $x_1, \ldots, x_n$ satisfies (1) if and only if there exists $j \in \{1, \ldots, t\}$ such that the equation $a_1^{(j)} x_1 + \ldots + a_n^{(j)} x_n = a_0^{(j)}$ holds under the given assignment. The \textbf{size} of the disjunction of linear equations is $\sum_{i = 1}^{n} \sum_{j = 1}^{t} |a_i^{(j)}|$ if all coefficients are written in \textbf{unary} notation. If all coefficients are written in \textbf{binary} notation then the \textbf{size} is equal to $\sum_{i = 1}^{n} \sum_{j = 1}^{t} \lceil \log |a_i^{(j)}| \rceil$. \end{definition} \begin{definition} Let $K := \{K_1, \ldots, K_m\}$ be a collection of disjunctions of linear equations. A $\mathsf{Res}$-$\mathsf{Lin}$ proof from $K$ of a disjunction of linear equations $D$ is a finite sequence $\pi = (D_1, \ldots, D_l)$ of disjunctions of linear equations, such that $D_l = D$ and for every $i \in \{1, \ldots, l\}$, either $D_i = K_j$ for some $j \in \{1, \ldots, m\}$, or $D_i$ is a Boolean axiom $(x_h = 0) \vee (x_h = 1)$ for some $h \in \{1, \ldots, n\}$, or $D_i$ was deduced by one of the following $\mathsf{Res}$-$\mathsf{Lin}$ inference rules, using $D_j$, $D_k$ for some $j, k < i$: \begin{itemize} \item \textbf{Resolution}: Let $A, B$ be two, possibly empty, disjunctions of linear equations and let $L_1$, $L_2$ be two linear equations. From $A \vee L_1$ and $B \vee L_2$ derive $A \vee B \vee (\alpha L_1 + \beta L_2)$, where $\alpha, \beta \in \mathbb{Z}$.
\item \textbf{Weakening}: From a (possibly empty) disjunction of linear equations $A$ derive $A \vee L$, where $L$ is an arbitrary linear equation over $\{x_1, \ldots, x_n\}$. \item \textbf{Simplification}: From $A \vee (k = 0)$ derive $A$, where $A$ is a, possibly empty, disjunction of linear equations and $k \neq 0$ is a constant. \item \textbf{Contraction}: From $A \vee L \vee L$ derive $A \vee L$, where $A$ is a, possibly empty, disjunction of linear equations and $L$ is some linear equation. \end{itemize} Note that we treat the order of equations in a disjunction as insignificant, while identical equations are merged only via the explicit contraction rule. A $\mathsf{Res}$-$\mathsf{Lin}$ \textbf{refutation} of a collection of disjunctions of linear equations $K$ is a proof of the empty disjunction from $K$. The \textbf{size} of a $\mathsf{Res}$-$\mathsf{Lin}$ proof $\pi$ is the total size of all the disjunctions of linear equations in $\pi$. If all coefficients in a $\mathsf{Res}$-$\mathsf{Lin}$ proof $\pi$ are written in \textbf{unary} notation then we call the proof a $\mathsf{Res}$-$\mathsf{Lin}_{U}$ derivation. Otherwise, if all coefficients are written in \textbf{binary} notation, we call the proof a $\mathsf{Res}$-$\mathsf{Lin}_{B}$ derivation. \end{definition} \begin{note} In the original $\mathsf{Res}$-$\mathsf{Lin}$ proof system duplicate linear equations can be discarded from a disjunction; instead, we use the \textbf{contraction} rule explicitly. It is easy to see that these two variants of the $\mathsf{Res}$-$\mathsf{Lin}$ system are equivalent. \end{note} \begin{definition} Let $D$ be a disjunction of linear equations: \begin{equation*} (a_1^{(1)} x_1 + \ldots + a_n^{(1)} x_n = a_0^{(1)}) \vee \cdots \vee (a_1^{(t)} x_1 + \ldots + a_n^{(t)} x_n = a_0^{(t)}). \end{equation*} We denote by $\widehat{D}$ its translation into the following system of polynomial equations: $$ y_1 \cdot y_2 \cdots y_t = 0, $$ $$ y_1 = a_1^{(1)} x_1 + \ldots + a_n^{(1)} x_n - a_0^{(1)}, \; y_2 = a_1^{(2)} x_1 + \ldots + a_n^{(2)} x_n - a_0^{(2)}, \; \ldots, \; y_t = a_1^{(t)} x_1 + \ldots + a_n^{(t)} x_n - a_0^{(t)}. $$ If $D$ is the empty disjunction, we define $\widehat{D}$ to be the single polynomial equation $1 = 0$. \end{definition}
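For example (a toy instance of ours): the disjunction $(x_1 + x_2 = 1) \vee (x_1 - x_2 = 0)$ is translated into the system
\[
y_1 \cdot y_2 = 0, \qquad y_1 = x_1 + x_2 - 1, \qquad y_2 = x_1 - x_2 .
\]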
Now we will prove that $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ p-simulates $\mathsf{Res}$-$\mathsf{Lin}_{B}$ and that $\Sigma \Pi \Sigma$-$\mathsf{PC}_{\mathbb{Q}}$ p-simulates $\mathsf{Res}$-$\mathsf{Lin}_{U}$. \begin{theorem} \label{Res-Lin-b simulation} Let $\pi = (D_1, \ldots, D_l)$ be a $\mathsf{Res}$-$\mathsf{Lin}_{B}$ proof sequence of $D_l$ from some collection of initial disjunctions of linear equations $Q_1, \ldots, Q_m$, and let $L_1, \ldots, L_t$ be all the affine forms occurring in the disjunctions of $\pi$. Then there exists a $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ proof of $\widehat{D}_l$ from $\widehat{Q}_1 \cup \ldots \cup \widehat{Q}_m \cup \{y_1 = L_1, y_2 = L_2, \ldots, y_t = L_t\}$ of size at most $p(Size(\pi))$ for some polynomial $p$. \end{theorem} \begin{proof} We proceed by induction on the number of lines in $\pi$. \mbox{}\\ \noindent{\textit{Base case:}} A $\mathsf{Res}$-$\mathsf{Lin}_{B}$ axiom $Q_i$ is translated into $\widehat{Q_i}$, and the $\mathsf{Res}$-$\mathsf{Lin}_{B}$ Boolean axiom $(x_i = 0) \vee (x_i = 1)$ is translated into the $\mathsf{PC}$ axiom $x_i^2 - x_i = 0$. \mbox{}\\ \noindent{\textit{Induction step: }} We now simulate each $\mathsf{Res}$-$\mathsf{Lin}_{B}$ derivation rule in the $\mathsf{PC}_{\mathbb{Q}}^{\surd}$ proof. \begin{itemize} \item \textbf{Resolution}: Assume that $D_i = A \vee B \vee (\alpha L_1 + \beta L_2)$ where $D_j = A \vee L_1$ and $D_k = B \vee L_2$. Then, we have already derived the polynomial equations $$ y_{j 1} = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}), \; \ldots, \; y_{j t_j} = (a_{j 1}^{(t_j)} x_1 + \ldots + a_{j n}^{(t_j)} x_n - a_{j 0}^{(t_j)}), $$ $$ y_{k 1} = (a_{k 1}^{(1)} x_1 + \ldots + a_{k n}^{(1)} x_n - a_{k 0}^{(1)}), \; \ldots, \; y_{k t_k} = (a_{k 1}^{(t_k)} x_1 + \ldots + a_{k n}^{(t_k)} x_n - a_{k 0}^{(t_k)}), $$ $$ y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0, \; y_{k 1} \cdot y_{k 2} \cdots y_{k t_k} = 0 $$ where $$ A = (a_{j 1}^{(2)} x_1 + \ldots + a_{j n}^{(2)} x_n = a_{j 0}^{(2)}) \vee \cdots \vee (a_{j 1}^{(t_j)} x_1 + \ldots + a_{j n}^{(t_j)} x_n = a_{j 0}^{(t_j)}), $$ $$ B = (a_{k 1}^{(2)} x_1 + \ldots + a_{k n}^{(2)} x_n = a_{k 0}^{(2)}) \vee \cdots \vee (a_{k 1}^{(t_k)} x_1 + \ldots + a_{k n}^{(t_k)} x_n = a_{k 0}^{(t_k)}), $$ $$ L_1 = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n = a_{j 0}^{(1)}), \; L_2 = (a_{k 1}^{(1)} x_1 + \ldots + a_{k n}^{(1)} x_n = a_{k 0}^{(1)}). $$ Then we can derive $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} \cdot y_{k 2} \cdots y_{k t_k} = 0$ and $y_{k 1} \cdot y_{k 2} \cdots y_{k t_k} \cdot y_{j 2} \cdots y_{j t_j} = 0$, and thus $(\alpha y_{j 1} + \beta y_{k 1}) \cdot y_{j 2} \cdots y_{j t_j} \cdot y_{k 2} \cdots y_{k t_k} = 0$. Then there is some variable $y_{i 1}$ for which $y_{i 1} = \alpha (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}) + \beta (a_{k 1}^{(1)} x_1 + \ldots + a_{k n}^{(1)} x_n - a_{k 0}^{(1)})$ holds, and we can derive $y_{i 1} = \alpha y_{j 1} + \beta y_{k 1}$. Then we can derive $y_{i 1} \cdot y_{j 2} \cdots y_{j t_j} \cdot y_{k 2} \cdots y_{k t_k} = 0$, which is part of $\widehat{D}_i$. \item \textbf{Weakening}: Assume that $D_i = D_j \vee L$ where $L$ is a linear equation. Then, we have already derived the polynomial equations $$ y_{j 1} = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}), \; \ldots, \; y_{j t_j} = (a_{j 1}^{(t_j)} x_1 + \ldots + a_{j n}^{(t_j)} x_n - a_{j 0}^{(t_j)}), $$ $$ y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0. $$ We know that there is some variable $y_0$ for which $y_0 = b_{1}x_1 + \ldots + b_n x_n - b_0$, where $L$ is the linear equation $b_{1}x_1 + \ldots + b_n x_n = b_0$. From $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0$ we can derive $y_0 \cdot y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0$, which is part of $\widehat{D}_i$. \item \textbf{Simplification}: Suppose that $D_i = A$ and $D_j = A \vee (k = 0)$ where $k \in \mathbb{Z}$, $k \neq 0$. Then, we have already derived the polynomial equations $$ y_{j 1} = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}), \; \ldots, \; y_{j t_j - 1} = (a_{j 1}^{(t_j - 1)} x_1 + \ldots + a_{j n}^{(t_j - 1)} x_n - a_{j 0}^{(t_j - 1)}), \; y_{j t_j} = k, $$ $$ y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0. $$ From the equation $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j} = 0$ we can derive the equation $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 1} \cdot k = 0$, from which we can derive $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 1} = 0$, which is part of $\widehat{D}_i$. \item \textbf{Contraction}: Assume that $D_i = A \vee L$ and $D_j = A \vee L \vee L$ where $L$ is a linear equation.
Then, we have already derived the polynomial equations $$ y_{j 1} = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}), \; \ldots, \; y_{j t_{j} - 1} = y_{j t_j} = (a_{j 1}^{(t_j)} x_1 + \ldots + a_{j n}^{(t_j)} x_n - a_{j 0}^{(t_j)}), $$ $$ y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 1} \cdot y_{j t_j} = 0. $$ Then we can derive $y_{j t_j - 1} = y_{j t_j}$ and $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot (y_{j t_j - 1}^2) = 0$. Using multiplication we can derive $y_{j 1}^2 \cdot y_{j 2}^2 \cdots y_{j t_j - 2}^2 \cdot (y_{j t_j - 1}^2) = 0$, from which we can derive the equation $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 1} = 0$ by using the square root rule. This equation is the last part of $\widehat{D}_i$; the other parts were derived earlier. \end{itemize} \end{proof} \begin{definition} Let $\Gamma = \{P_1, \ldots, P_m\} \subset R[x_1, \ldots, x_n]$ be a set of polynomials in variables $\{x_1, \ldots, x_n\}$ over a ring $R$ such that the system of equations $P_1 = 0, \ldots, P_m = 0$ has no solution. A $\Sigma \Pi \Sigma$-$\mathsf{PC}_{R}$ refutation of $\Gamma$ is a $\mathsf{PC}_{R}$ refutation of a set $\Gamma' = \{P_1, \ldots, P_m, Q_1, \ldots, Q_{m'}\}$ where the $Q_i$ are polynomials of the form $Q_i = y_i - (a_{i 0} + \sum_j a_{i j} x_j)$ for some constants $a_{i j} \in R$. The size of the $\Sigma \Pi \Sigma$-$\mathsf{PC}_{R}$ refutation is equal to the size of the $\mathsf{PC}_{R}$ refutation of $\Gamma'$. \end{definition} \begin{theorem} \label{Res-Lin-u simulation} Let $\pi = (D_1, \ldots, D_l)$ be a $\mathsf{Res}$-$\mathsf{Lin}_{U}$ proof sequence of $D_l$ from some collection of initial disjunctions of linear equations $Q_1, \ldots, Q_m$. Then there exists a $\Sigma \Pi \Sigma$-$\mathsf{PC}_{\mathbb{Q}}$ proof of $\widehat{D}_l$ from $\widehat{Q}_1 \cup \ldots \cup \widehat{Q}_m$ of size at most $p(Size(\pi))$ for some polynomial $p$. \end{theorem} \begin{proof} To prove this theorem we use the following lemma from \cite{IMP19_new}: \begin{lemma}[\cite{IMP19_new}] Let $\Gamma = \{P_1, \ldots, P_a, Q_1, \ldots, Q_b, X, Y\}$ be a set of polynomials such that $$ P_1 = x_1 - (x - 1), \; P_2 = x_2 - (x - 2), \; \ldots, P_a = x_a - (x - a), $$ $$ Q_1 = y_1 - (y - 1), \; Q_2 = y_2 - (y - 2), \; \ldots, Q_b = y_b - (y - b), $$ $$ X = x \cdot x_1 \cdot x_2 \cdots x_a, \; Y = y \cdot y_1 \cdot y_2 \cdots y_b. $$ Then we can derive $\Gamma'$ from $\Gamma$ in $\Sigma \Pi \Sigma$-$\mathsf{PC}_{\mathbb{Q}}$ with a derivation of size $poly(a b)$, where $\Gamma' = \{Z_0, Z_1, \ldots, Z_{a + b}, Z\}$ and $$ Z_0 = z - (x + y), \; Z_1 = z_1 - (x + y + 1), \; Z_2 = z_2 - (x + y + 2) , \; \ldots, Z_{a + b} = z_{a + b} - (x + y + a + b), $$ $$ Z = z \cdot z_1 \cdot z_2 \cdots z_{a + b}. $$ \end{lemma} We now prove the theorem by induction on the number of lines in $\pi$. \mbox{}\\ \noindent{\textit{Base case: }} A $\mathsf{Res}$-$\mathsf{Lin}_{U}$ axiom $Q_i$ is translated into $\widehat{Q_i}$, and the $\mathsf{Res}$-$\mathsf{Lin}_{U}$ Boolean axiom $(x_i = 0) \vee (x_i = 1)$ is translated into the $\mathsf{PC}$ axiom $x_i^2 - x_i = 0$. \mbox{}\\ \noindent{\textit{Induction step: }} We now simulate each $\mathsf{Res}$-$\mathsf{Lin}_{U}$ derivation rule in the $\Sigma \Pi \Sigma$-$\mathsf{PC}_{\mathbb{Q}}$ proof. \begin{itemize} \item The \textbf{Resolution}, \textbf{Weakening} and \textbf{Simplification} rules are simulated exactly as in \autoref{Res-Lin-b simulation}. \item \textbf{Contraction}: Assume that $D_i = A \vee L$ and $D_j = A \vee L \vee L$ where $L$ is a linear equation.
Then, we have already derived the polynomial equations $$ y_{j 1} = (a_{j 1}^{(1)} x_1 + \ldots + a_{j n}^{(1)} x_n - a_{j 0}^{(1)}), \; \ldots, \; y_{j t_{j} - 1} = y_{j t_j} = (a_{j 1}^{(t_j)} x_1 + \ldots + a_{j n}^{(t_j)} x_n - a_{j 0}^{(t_j)}), $$ $$ y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 1} \cdot y_{j t_j} = 0. $$ Then we can derive $y_{j t_j - 1} = y_{j t_j}$ and $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot (y_{j t_j - 1}^2) = 0$. Using the lemma we can introduce new variables $\{z_{-M}, \ldots, z_M\}$ and derive $$ z_{-M} = y_{j t_j - 1} + M, \; z_{-M + 1} = y_{j t_j - 1} + M - 1, \; \ldots, \; z_{0} = y_{j t_j - 1}, \; \ldots, \; z_M = y_{j t_j - 1} - M, $$ $$ z_{-M} \cdot z_{-M + 1} \cdots z_{M - 1} \cdot z_M = 0, $$ where $M = |a_{j 1}^{(t_j)}| + |a_{j 2}^{(t_j)}| + \ldots + |a_{j n}^{(t_j)}|$. Then we can substitute $y_{j t_j - 1} - k$ for each $z_{k}$ one by one and get the equation $$ f(y_{j t_j - 1}) = 0 $$ where $f(y_{j t_j - 1}) = b_1 \cdot y_{j t_j - 1} + b_2 \cdot y_{j t_j - 1}^2 + \ldots + b_{2 M + 1} \cdot y_{j t_j - 1}^{2 M + 1} $ is a polynomial from $\mathbb{Z}[y_{j t_j - 1}]$ with $b_1 = (M!)^2 \cdot (-1)^{M}$. Then we can derive the following equation by using the multiplication rule: \begin{multline*} y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot f(y_{j t_j - 1}) = b_1 \cdot y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot y_{j t_j - 1} + \\ + y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot (y_{j t_j - 1}^2) \cdot (b_2 + b_3 \cdot y_{j t_j - 1} + \ldots + b_{2 M + 1} \cdot y_{j t_j - 1}^{2 M - 1}) = 0. \end{multline*} Now, using the equation $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot (y_{j t_j - 1}^2) = 0$, we can derive $ b_1 \cdot y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot y_{j t_j - 1} = 0$, and since $b_1 \neq 0$ we can derive $y_{j 1} \cdot y_{j 2} \cdots y_{j t_j - 2} \cdot y_{j t_j - 1} = 0$. This equation is the last part of $\widehat{D}_i$; the other parts were derived earlier. \end{itemize} \end{proof} Now we show that our lower bound provides an interesting counterpart to a result from \cite{PT18_new}. \begin{theorem}[\cite{PT18_new}] Any $\mathsf{Res}$-$\mathsf{Lin}_{B}$ refutation of $1 + x_1 + 2 x_2 + \ldots + 2^{n - 1} x_n = 0$ has size $2^{\Omega(n)}$. \end{theorem} \begin{proof} From \autoref{lower bound q} we know that any $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ refutation of $\mathsf{BVP}_n$ requires size $2^{\Omega(n)}$, and thus from \autoref{Res-Lin-b simulation} we know that there is some polynomial $p$ such that for any $\mathsf{Res}$-$\mathsf{Lin}_{B}$ refutation of $\mathsf{BVP}_n$ of size $S$ the inequality $p(S) \ge C_0 \cdot 2^{C_1 \cdot n}$ holds. Then for some constant $C$ the inequality $S \ge 2^{C \cdot n}$ holds. \end{proof} \section*{Open Problems} \begin{enumerate} \item \autoref{Res-Lin-b simulation} says that $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}^{\surd}$ p-simulates any $\mathsf{Res}$-$\mathsf{Lin}_{B}$ derivation. Is the square root rule necessary here; that is, can we p-simulate $\mathsf{Res}$-$\mathsf{Lin}_{B}$ refutations in the $\mathsf{Ext}$-$\mathsf{PC}_{\mathbb{Q}}$ proof system? \item A major question is to prove an exponential lower bound on the size of $\Sigma \Pi \Sigma$-$\mathsf{PC}_{\mathbb{Q}}$ refutations of a translation of a formula in CNF. \end{enumerate} \section*{Acknowledgement} I would like to thank Edward A. Hirsch for guidance and useful discussions at various stages of this work.
I also wish to thank Dmitry Itsykson and Dmitry Sokolov for very helpful comments concerning this work. \small \bibliographystyle{plain}
\section{Introduction} Thermalization behavior of closed quantum systems has received heightened interest since the suggestion that Anderson localization could be generalized to systems of interacting particles, a phenomenon dubbed \textit{many-body localization (MBL)} \cite{Nandkishore2015,Alet}. Over the last decade and a half, an increasingly large body of evidence has spoken to the existence and complexity of this behavior. Perturbative arguments \cite{Basko2006a,Gornyi2005}, studies using exact diagonalization \cite{Pal2010,Oganesyan2007}, and further mathematical proofs \cite{Imbrie2016a} have all emerged over this period. This body of work has firmly established the existence of MBL at strong disorder in one dimension without broken time-reversal symmetry or spin-orbit coupling, while localization at weaker disorder closer to the phase transition is still the subject of exploration \cite{Vosk2015,Potter2015,Khemani2016b}. Experiments using cold atoms and trapped ions have also revealed robust MBL behavior \cite{Schreiber2015,Smith2016}. General many-body states are expressed in a Hilbert space that grows exponentially with system size; MBL systems have the added property of a description using an extensive set of \textit{$l$-bits}\cite{Huse2014,Serbyn2013,Ros2015}, which can be thought of as quasi-local generalizations of physical spins. The $l$-bit algebra implies a set of quasi-local operators: $\tau^z_i$, which measures the $l$-bit on the $i^{\text{th}}$ site, and $\tau^x_i$, which flips the $l$-bit on the $i^{\text{th}}$ site. For the spin-$1/2$ systems with which we work, the $l$-bit algebra can be thought of as akin to the Pauli spin algebra. $\tau^z_i$ returns a phase ($\pm 1$) when applied to an eigenstate. Several algorithms exist to construct these integrals of motion approximately \cite{Pekker2016,Rademaker2017,Chandran2015,Chandran2015b,Pollmann2016,Pekker2017a,Wahl,Thomson2018}, which produce operators that do not commute exactly with the Hamiltonian. Recent algorithms have also been proposed to construct integrals of motion exactly \cite{Kulshreshtha,Goihl2017}. Our focus in this paper will be on using $l$-bit algebras to construct approximate eigenstates on large MBL systems. Several methods for constructing eigenstates on large MBL systems already exist. One particular class that has shown great success is algorithms based on the density matrix renormalization group (DMRG) algorithm \cite{Schollwock2011}. While DMRG itself finds the ground state of a generic local Hamiltonian, algorithms like shift-and-invert matrix product states (SIMPS) \cite{Yu2015}, DMRG-X \cite{Khemani2016} and En-DMRG \cite{Lim2016}, reviewed further in Section \ref{subsec:current}, are able to compute excited states of MBL systems by exploiting the area-law nature of MBL eigenstates. Additionally, a class of recent tensor network algorithms \cite{Wahl,Pollmann2016,Pekker2017a,Chandran2015b} provides efficient methods of constructing matrices whose columns are approximate eigenstates of the system. These algorithms, also reviewed further in Section \ref{subsec:current}, use layers of local unitaries to generate a large unitary matrix that approximately diagonalizes the Hamiltonian. We introduce a novel type of algorithm to construct approximate eigenstates on large MBL systems, using exact $l$-bit algebras on small subsystems to approximate an $l$-bit algebra on a larger system. Using this algorithm, eigenstates can be targeted by their $l$-bit labels, allowing one to access any eigenstate in practice.
Further, we extend the algorithm to show how it can also be used to measure expectation values of products of local observables on eigenstates of large systems. We begin by describing the disordered Heisenberg spin chain and $l$-bit algebras in further detail. We then review the existing classes of methods used to construct approximate eigenstates on large MBL systems. In Section \ref{sec:methods}, we describe a new algorithm based on ideas similar to the authors' work in Ref. [\onlinecite{Kulshreshtha}] but far more efficient. The algorithm is used to construct exact $l$-bit algebras on small systems and is labelled \textit{operator localization optimization (OLO)}. We subsequently describe how we use the improved algorithm repeatedly to construct approximate eigenstates on large MBL systems, which we label the $\tau_x$ \textit{network representation} due to its similarity to a tensor network. We additionally introduce an algorithm, labeled the \textit{inchworm algorithm}, to measure products of local observables on the approximate eigenstates. In Section \ref{sec:results}, we first test the quality of our eigenstates by measuring their energy fluctuations and compare these results to the tensor network method of Ref. [\onlinecite{Wahl}]. We additionally show how the algorithm can be used to find correlations over large distances. Finally, we conclude the paper by discussing our algorithm in relation to tensor network-class and DMRG-class algorithms and considering future directions. \section{Phenomenology and other methods} \label{sec:phenom} \subsection{XXZ Spin Chain} We make use of the disordered XXZ spin chain, with the Hamiltonian \begin{equation} \label{eq:main_ham} H = \sum_{i = 1}^{L-1} \mathbf{S}_i \cdot \mathbf{S}_{i+1} + \sum_{i = 1}^L h_i S^z_i, \end{equation} where $\mathbf{S}_i = \frac{1}{2} \boldsymbol{\sigma}_i$ and the values $h_i$ are drawn randomly and independently from the uniform distribution on $[-W,W]$. The properties of this model are well studied. For small $W$, the model is known to obey the Eigenstate Thermalization Hypothesis (ETH), while for large $W$, the model is known to exhibit localized, non-ergodic behavior. This behavior can be probed in a variety of ways, including through the level statistics of the energy spectrum \cite{Pal2010}; through the entanglement characteristics of eigenstates and the mobility edge \cite{Luitz2015,Bauer2013}; through the behavior of integrals of motion \cite{Kulshreshtha}; and through diffusion characteristics \cite{Agarwal2015}. Though system behavior for large $W$ and small $W$ is well understood, the crossover between thermal and localized characteristics is still being probed. Starting from $W \ll 1$, behavior is thermal. As $W$ increases, Griffiths regions, rare insulating areas surrounded by regions with metallic behavior, begin to dominate the behavior of the system and transport of conserved quantities becomes subdiffusive \cite{Agarwal2015,Vosk2015,Potter2015}. Finally, as $W$ exceeds $W_c$, the system becomes completely localized. In the localized phase, eigenstates no longer obey ETH, level statistics display Poisson behavior \cite{Pal2010}, and system entanglement grows as the logarithm of time following a quantum quench \cite{Bardarson2012,Serbyn2013a}. Numerical simulations give estimates for the transition disorder strength at $W_c \approx 3.5$ in the thermodynamic limit\cite{Agarwal2015,Pal2010,Kulshreshtha,Luitz2015}, though this value is subject to finite-size effects.
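For concreteness, the following is a minimal exact-diagonalization sketch of the Hamiltonian in Eq.~(\ref{eq:main_ham}); this is our own illustration (the system size, disorder strength, function names, and the use of numpy are our choices, not the authors' code), and it is limited to the small $L$ accessible to dense diagonalization.

\begin{verbatim}
# Minimal sketch (ours): dense disordered Heisenberg chain, Eq. (1).
import numpy as np

def xxz_hamiltonian(L, W, rng):
    sz = np.diag([0.5, -0.5])
    sp = np.array([[0., 1.], [0., 0.]])   # S^+ ; sp.T is S^-
    def embed(op, i):                     # operator op on site i
        return np.kron(np.kron(np.eye(2**i), op), np.eye(2**(L-i-1)))
    H = np.zeros((2**L, 2**L))
    for i in range(L - 1):
        # S_i . S_{i+1} = Sz Sz + (S+ S- + S- S+)/2
        H += embed(sz, i) @ embed(sz, i + 1)
        H += 0.5 * (embed(sp, i) @ embed(sp.T, i + 1)
                    + embed(sp.T, i) @ embed(sp, i + 1))
    h = rng.uniform(-W, W, size=L)        # random fields on [-W, W]
    for i in range(L):
        H += h[i] * embed(sz, i)
    return H

rng = np.random.default_rng(0)
H = xxz_hamiltonian(L=8, W=15.0, rng=rng)   # deep in the MBL regime
energies, U = np.linalg.eigh(H)             # columns of U: eigenstates
\end{verbatim}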
The work of this paper occurs well within the localized regime. \subsection{Current eigenstate approximation methods} \label{subsec:current} Several methods exist to approximate eigenstates on large MBL systems. We provide a brief review of two of these methods here in order to provide context for the method we present in this paper. The tensor network method for approximating eigenstates, developed and presented in Refs. [\onlinecite{Pollmann2016}] and [\onlinecite{Wahl}], uses layers of small unitary matrices to represent a large unitary matrix that approximately diagonalizes the Hamiltonian. Ref. [\onlinecite{Pollmann2016}] makes use of two-site unitary matrices stacked in multiple layers. Starting from arbitrary unitary blocks, the algorithm sweeps across the system, using the conjugate gradient method to minimize the total variance of the energy of the approximate eigenstates generated by the matrices. The computational cost of this algorithm scales linearly as a function of system size and exponentially as a function of the number of layers. Closer to the MBL transition, more layers are required to accurately represent the system's eigenstates as their entanglement properties become less local. In Ref. [\onlinecite{Wahl}], the scaling of the computational cost is reduced by instead fixing the number of layers at $2$ and increasing the size of the smaller unitary blocks. Using unitary matrices that act on a larger number of sites, fewer layers are required to represent eigenstates to the same accuracy. Ref. [\onlinecite{Wahl}] also makes use of a cost function whose computational cost scales less quickly than that of finding the total eigenstate energy variance. We benchmark our algorithm against this one in Section \ref{sec:results}. The other method we highlight is DMRG-X, presented in Ref. [\onlinecite{Khemani2016}]. The original DMRG method makes use of the fact that ground states of one-dimensional systems can be represented accurately through matrix product states (MPS) \cite{Verstraete2006,Perez-Garcia2007}. The DMRG method starts from a random matrix product state, sweeping through the system and updating the constituent matrices of the MPS by minimizing an effective Hamiltonian with respect to individual parts of the MPS. The DMRG-X method of Ref. [\onlinecite{Khemani2016}] makes use of the fact that eigenstates of MBL systems can be represented efficiently through MPS \cite{Friesdorf2015}. Eigenstates can be targeted by their overlap with physical spin product states. \mbox{DMRG-X} starts from an initial physical spin product state and updates the constituent matrices of the MPS by replacing them with maximally overlapping eigenstates of an effective Hamiltonian, allowing one to target eigenstates based on proximity to a physical spin structure. Another variation of DMRG is presented in Ref. [\onlinecite{Lim2016}]. In this method, labelled En-DMRG, one can target eigenstates by energy using the DMRG and Lanczos methods. As opposed to DMRG, DMRG-X and En-DMRG allow one to target eigenstates across the energy spectrum as long as they can be accurately represented using MPS. In Section \ref{sec:concl} of this paper, we describe how our method complements the methods described above. \section{Methods} \label{sec:methods} \subsection{Introduction} As previously described, eigenstates of fully many-body localized (FMBL) systems can be expressed through an $l$-bit spin algebra, akin to the physical spin algebra.
In the absence of spin-spin interaction, eigenstates are physical spin product states and the $l$-bit algebra is simply that of the physical spins. In the presence of spin-spin interactions, the $l$-bit label of an eigenstate corresponds to a quasi-local measurement on the system. In an FMBL system, the weight of the $l$-bit measurement on a site decays exponentially with distance from the site, where weight is defined below. Where the physical spin measurement operator on site $i$ is labeled $\sigma^z_i$, the $l$-bit measurement operator on site $i$ is labeled $\tau^z_i$. The non-trivial action of the $\tau^z_i$ operator on site $j$ decays as $e^{-|i-j|/\xi}$, where $\xi$ is the localization length. Just as the $\sigma^x_i$ operator flips the spin of a state on site $i$, a quasi-local $l$-bit flip operator $\tau^x_i$ can be defined on FMBL systems. The $l$-bit flip operator on site $i$ takes one eigenstate to another whose $\tau^z$ label is flipped on site $i$ and is unchanged everywhere else. $\tau^x_i$ operators can be constructed as \begin{equation} \tau^x_i = U \sigma^x_i U^\dagger , \end{equation} where $U$ diagonalizes the Hamiltonian and contains the eigenstates of the Hamiltonian in its columns. However, there are exponentially many valid choices of $U$ for a system of size $L$; for a real Hamiltonian, these correspond to rearranging the columns of $U$ and assigning each column a phase of plus or minus one. Only certain choices of $U$ will yield quasi-local operators with exponentially decaying weight. Therefore, given the set of eigenstates of a Hamiltonian, finding the correct $U$ is a non-trivial combinatorial optimization problem. \subsection{Constructing quasilocal operators on small subsystems} \label{subsec:olo} \begin{figure} \includegraphics[width=\linewidth]{txweights.png} \caption{Sample weights of three $\tau^x_i$ operators for a system of size $L=14$ and disorder $W=15$. The weight of each operator decays exponentially as a function of distance from the primary site.}\label{fig:txweights} \end{figure} In Ref. [\onlinecite{Kulshreshtha}], the authors of this paper presented a method to construct exact quasi-local integrals of motion. In this work, we present a new method to construct optimally local $l$-bit operators, called operator localization optimization (OLO). The method presented here improves on the previous method by reducing the runtime for systems of $L=14$ by a factor of approximately 40 while also improving the operator localization length; on the computing cluster used by the authors (a 10 TFLOP Beowulf cluster of 113 multicore machines), this amounted to an absolute reduction in time from approximately ten hours to fifteen minutes. To begin, we note that any operator $O$ can be written \begin{equation} O = \sum_{\gamma \in \{0,x,y,z\}} \sigma^{\gamma}_j \otimes A^{\gamma}_{\bar{j}} \end{equation} where $\sigma^0_j$ is the identity operator and $A^{\gamma}_{\bar{j}}$ is an operator acting on all sites other than $j$. The action of the operator on a site $j$ can be quantified through the weight function \begin{equation} \begin{split} w_j (O) &= \frac{16}{\mathcal{N}} \Tr[(A^{x}_{\bar{j}})^2 + (A^{y}_{\bar{j}})^2 + (A^{z}_{\bar{j}})^2] \\ &= \frac{1}{\mathcal{N}} \sum_{\gamma \in \{x,y,z\}} \Tr[(O - \sigma^\gamma_j O \sigma^\gamma_j)^2]. \end{split} \end{equation} This non-negative function measures how much the operator $O$ affects site $j$, and it is zero if $O$ acts as the identity on site $j$.
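As a concrete check of this definition, the following sketch (hypothetical names, not drawn from our codebase) evaluates $w_j(O)$ directly from the second line above for a dense operator $O$ on $L$ sites.
\begin{verbatim}
import numpy as np

PAULI = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]]),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_at(gamma, j, L):
    """sigma^gamma on site j, identity elsewhere (2^L x 2^L matrix)."""
    out = np.eye(1, dtype=complex)
    for k in range(L):
        out = np.kron(out, PAULI[gamma] if k == j else np.eye(2))
    return out

def weight(O, j, L):
    """w_j(O) = (1/N) sum_gamma Tr[(O - sigma^gamma_j O sigma^gamma_j)^2]."""
    N = 2 ** L
    w = 0.0
    for gamma in "xyz":
        s = pauli_at(gamma, j, L)
        D = O - s @ O @ s                  # vanishes if O is trivial on j
        w += np.trace(D @ D).real
    return w / N
\end{verbatim}
Applied to an operator $\tau^x_i = U \sigma^x_i U^\dagger$ built from a correctly ordered eigenvector matrix $U$ of a small chain, this function should reproduce exponentially decaying profiles like those in Figure \ref{fig:txweights}.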
The normalization factor $\mathcal{N}$ is simply the size of the Hilbert space; in this case $\mathcal{N} = 2^L$. Consider an $l$-bit labeling on the set of eigenstates $\{\alpha\}$. Each eigenstate is uniquely labeled by the chain of eigenvalues from the set $\{\tau^z_j \}$, for example $\{ + + - + - \}$ for a system with $5$ $l$-bit operators. The function $p(\alpha,i)$ takes an eigenstate $\alpha$ to the eigenstate with the label flipped on site $i$ and identical everywhere else. For example, $p(\{+ + + +\},1) = \{- + + +\}$. We also define a function $s(\alpha,i)$ that returns the sign of the $l$-bit label of an eigenstate $\alpha$ on site $i$. For example, $s(\{- + + +\},1) = -1$. The $l$-bit measurement and flip operators can then be written \begin{align} \tau^z_i &= \sum_{\alpha} s(\alpha,i) \ket{\alpha} \bra{\alpha} \\ \tau^x_i &= \sum_{\alpha} \ket{\alpha} \bra{p(\alpha,i)}. \end{align} The functions $s$ and $p$ correspond to an $l$-bit algebra on eigenstates and, equivalently, to a choice of ordering of the columns of $U$. For these operators, the weight function can be written as follows: \begin{equation} \label{eq:tzweight} \begin{split} \mathcal{N} w_j (\tau^z_i) &= 3 \cdot 2^{L+1} \\ & \quad -2\sum_{\gamma \in \{x,y,z\}} \sum_{\alpha, \beta} s(\alpha,i) s(\beta,i) \bra{\alpha} \sigma^\gamma_j \ket{\beta} \bra{\beta} \sigma^\gamma_j \ket{\alpha} \\ &= 3 \cdot 2^{L+1}-2\sum_{\gamma} \Tr \left[ M^{\gamma,j} \sigma^z_i M^{\gamma,j} \sigma^z_i \right] \end{split} \end{equation} \begin{equation} \label{eq:txweight} \begin{split} \mathcal{N} w_j (\tau^x_i) & = 3 \cdot 2^{L+1} \\ & \quad - 2 \sum_{\gamma \in \{x,y,z\}} \sum_{\alpha,\beta} \bra{\alpha} \sigma^\gamma_j \ket{\beta} \bra{p(\beta,i)} \sigma^\gamma_j \ket{p(\alpha,i)} \\ &= 3 \cdot 2^{L+1} - 2\sum_{\gamma} \Tr \left[ M^{\gamma,j} \sigma^x_i M^{\gamma,j} \sigma^x_i \right] \end{split} \end{equation} where $M^{\gamma,j}=U^\dagger \sigma^\gamma_j U$. For a quasilocal operator on site $i$, $w_j \propto e^{-|i-j|/\xi}$, where $\xi$ is the localization length of the operator. Therefore, our goal is to maximize the sums at the end of equations (\ref{eq:tzweight}) and (\ref{eq:txweight}) away from site $i$. Each term in these sums has a maximum value of $2^L$, so maximizing them brings the total weight as close to $0$ as possible. As we expect the weight functions for the $\tau^z_i$ and $\tau^x_i$ operators to mirror one another, we choose to focus just on the sum in equation (\ref{eq:txweight}), as we find that the weight decay of the $\tau^x_i$ operator is more sensitive to the pairing structure chosen. The first insight in solving this optimization problem comes from noting that in a system without spin-spin interaction (in which the eigenstates are simply product states), the ideal ordering of $U$ is one such that $M^{\gamma,j} = \sigma^\gamma_j$. If this is the case, then for all $j \neq i$, the $\sigma^x_i$ operators in equation (\ref{eq:txweight}) commute through $M^{\gamma,j}$, yielding \begin{equation} \begin{split} \Tr \left[ M^{\gamma,j} \sigma^x_i M^{\gamma,j} \sigma^x_i \right] = \Tr \left[ \sigma^x_i \sigma^x_i M^{\gamma,j} M^{\gamma,j} \right] = 2^L . \end{split} \end{equation} Inserted back into equation (\ref{eq:txweight}), this yields $w_j(\tau^x_i) = 0$ as desired. For $j=i$, $M^{x,i}$ commutes through $\sigma^x_i$, again yielding $2^L$ for this part of the sum. For $\gamma = y,z$, $M^{\gamma,i}$ anticommutes with $\sigma^x_i$, yielding $-2^L$ for each of these parts of the sum.
Thus, we obtain $\mathcal{N} w_i(\tau^x_i) = 8 \cdot 2^L$, i.e., $w_i(\tau^x_i) = 8$, the maximum allowed weight. After turning on the spin-spin interaction, the same principle applies, though the ideal ordering becomes harder to find. We can minimize the weight where $j \neq i$ and maximize it where $j=i$ by satisfying the following two principles, which follow from equations (\ref{eq:tzweight}) and (\ref{eq:txweight}). \begin{enumerate} \item For (a) $j \neq i, \gamma=x,y,z$ or (b) $j = i,\gamma = x$: if $|\bra{\alpha} \sigma^{\gamma}_j \ket{\beta}|$ is close to unity, then $\bra{p(\beta,i)} \sigma^{\gamma}_j \ket{p(\alpha,i)} \approx \bra{\alpha} \sigma^{\gamma}_j \ket{\beta}$. In case (a), the first line of equation (\ref{eq:txweight}) shows that the sum can be brought close to $2^L$, yielding a small value for $w_j(\tau^x_i)$. Case (b), for which $j=i$, is covered below. \item For $j=i,\gamma = y,z$: if $|\bra{\alpha} \sigma^{\gamma}_i \ket{\beta}|$ is close to unity, then $\bra{p(\beta,i)} \sigma^{\gamma}_i \ket{p(\alpha,i)} \approx -\bra{\alpha} \sigma^{\gamma}_i \ket{\beta}$, or vice versa if the first quantity is negative. For $\gamma=x$ (case 1b), the sum in the first line of equation (\ref{eq:txweight}) is close to $2^L$. For $\gamma = y,z$, the sum is close to $-2^L$. When inserted into the first line of equation (\ref{eq:txweight}), the total is brought to $8\mathcal{N}$, the maximum allowed value. \end{enumerate} These two principles, if satisfied, thus ensure that the weight of an operator $\tau^x_i$ is maximized at site $i$ and minimized elsewhere. However, the correct ordering is necessary in order to satisfy both principles. Notice that (1b) and (2) are clearly satisfied by stipulating that if $\bra{\beta} \sigma^x_i \ket{\alpha} \gg 0$, then $p(\alpha,i) = \beta$. We will take this as an ansatz and numerically verify that the first condition is also satisfied. As the Hamiltonian conserves total spin in the $z$ direction, its eigenstates can be split into sectors according to total spin. There are $L+1$ sectors, which we will label $U_1 , U_2, \ldots , U_{L+1}$. Utilizing conservation of total spin serves two purposes. First, splitting the Hamiltonian into spin sectors allows us to diagonalize smaller matrices and thereby work with larger systems. Second, the sectors give us some information on the $l$-bit algebra: the $\tau^x_i$ operator changes the total spin of an eigenstate by one unit. Therefore, $\ket{p(\alpha,i)}$ lies in a spin sector adjacent to that of $\ket{\alpha}$ for all $\alpha$ and $i$. This fact narrows down the search for $p(\alpha,i)$ from the full spectrum of eigenstates to just one or two spin sectors. We now describe the pairing process inductively, starting from an eigenstate spin sector $U_i$ that we assume already has the correct $l$-bit labeling, meaning that its columns have been correctly ordered. Our goal is now to correctly order $U_{i+1}$. We start by taking the set of matrices $\{ O_j \} = \{ U_i^\dagger \sigma^+_j U_{i+1} \}$ over all $j$. The proper interpretation of each $O_j$ operator is as follows. The matrix $U_i^\dagger$ is one whose rows are eigenstates with a correct $l$-bit label. We then apply the operator $\sigma^+_j$ on the right, flipping the $j^\text{th}$ physical spin of each state from down to up or eliminating terms that are already up. Notice that $\sigma^+_j$ is a block off-diagonal matrix in the basis of total spin, with only upper triangular nonzero terms; this is what maps $U_i$ into $U_{i+1}$ rather than $U_{i-1}$.
This multiplication yields a matrix whose rows are the eigenstates from the $i^\text{th}$ spin sector with the $j^\text{th}$ physical spin flipped, keeping only the part that lives in the $(i+1)^\text{th}$ spin sector. We then label the rows, flipping the $j^\text{th}$ $l$-bit of each label. Finally, we take the product of this matrix with $U_{i+1}$, whose columns are randomly arranged eigenstates. $O_j$ is thus an overlap matrix between $U_{i+1}$ and $U_{i}$ with a physical spin flip on site $j$. We now have a set of overlap matrices whose rows contain the set of labelings for the $(i+1)^\text{th}$ sector. Because there are $L$ such matrices, labelings may be represented multiple times, corresponding to multiple approximations of the same eigenstate in the $(i+1)^\text{th}$ sector using different physical spin flips from the $i^\text{th}$ sector. If this is the case, we simply average the absolute value of the overlaps with identical row labelings. Our goal is now to match each column to a row labeling. We do this through a greedy algorithm, choosing each subsequent pairing by minimizing the second-worst available overlap of the remaining eigenstates. This greedy algorithm generally outperforms simply repeatedly choosing the maximum available overlap. The final important factor in assigning eigenstates is determining the appropriate phase. The phase, plus or minus one, of an eigenstate has a strong effect on the localization length of the $\tau^x_i$ matrix. As can be seen in equations (\ref{eq:tzweight}) and (\ref{eq:txweight}), the weight of an operator $\tau^x_i$ or $\tau^z_i$ outside of site $i$ is minimized if the terms $\bra{\alpha} \sigma^{\gamma}_i \ket{\beta}$ have the same sign, so we multiply each eigenstate in $U_{i+1}$ by the appropriate phase (plus or minus one) to make all of the overlap terms positive. We start this inductive process from $U_{1}$, which consists only of the all down physical spin eigenstate, to which we automatically assign the label $\{ - - - \ldots \}$, and iterate through spin sectors until reaching $U_{L+1}$, which consists only of the all up physical spin eigenstate. The OLO algorithm gives us a good labeling of eigenstates with which to create localized $\tau^z_i$ and $\tau^x_i$ operators. The weight profiles of sample $\tau^x_i$ operators are given in Figure \ref{fig:txweights}. \subsection{Extension of operators to large systems and measurement of local observables} The OLO algorithm allows us to construct exact $l$-bits on one-dimensional systems up to size $L=14$ using 16\,GB of RAM. Attempting to access exact eigenstates and measure observables exactly on larger systems becomes computationally infeasible. However, the quasilocal nature of the $\tau^x_j$ operators in FMBL systems presents a natural method to approximately move between eigenstates of large MBL systems. The action of an operator $\tau^x_j$ becomes exponentially close to trivial far away from the site $j$. Therefore, if we construct $\tau^x_j$ on an appropriate window, we can extend the operator to a larger FMBL system simply by padding it with the identity matrix. If the window used is large enough, the approximate operator only misses an exponentially decaying action outside the window. From a known eigenstate (such as the all up physical spin state), we can then use combinations of $\tau^x_j$ operators to approximately access other eigenstates. The $2^L$ different combinations of products of $\tau^x_j$ operators yield the $2^L$ distinct eigenstates. We call this eigenstate formulation the $\tau_x$ network representation.
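Schematically, the extension step is nothing more than a Kronecker product with identity matrices. A sketch follows (names hypothetical; \texttt{tau\_window} stands for the $2^l \times 2^l$ operator produced by OLO for the window beginning at site \texttt{j0}).
\begin{verbatim}
import numpy as np

def extend_operator(tau_window, j0, L):
    """Pad a window operator with identities to act on the full chain.

    tau_window : (2**l, 2**l) array acting on sites j0 .. j0+l-1.
    Returns the (2**L, 2**L) approximate operator; the exponentially
    small action outside the window is simply discarded.
    """
    l = int(round(np.log2(tau_window.shape[0])))
    left = np.eye(2 ** j0)
    right = np.eye(2 ** (L - j0 - l))
    return np.kron(np.kron(left, tau_window), right)
\end{verbatim}
Of course, materializing this $2^L \times 2^L$ matrix defeats the purpose for large $L$; the point of the procedure described next is to evaluate expectation values without ever forming it.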
Because the Hilbert space involved in this calculation grows exponentially with system size, it is not possible to store the full eigenstate, but it is possible to measure local observables on the eigenstates by selectively extending and tracing out parts of the system such that the working space is never too large for the calculation. Starting from a large Hamiltonian of size $L$, we break the system down into `manageable' subsystems of size $l$. Where the original Hamiltonian is \begin{equation} H = \sum_{i=1}^{L-1} \textbf{S}_i \cdot \textbf{S}_{i+1} + \sum_{i=1}^L h_i S^z_i, \end{equation} we create $L-l+1$ subsystems with Hamiltonians \begin{equation} H^{sub}_j = \sum_{i=j}^{j+l-2} \textbf{S}_i \cdot \textbf{S}_{i+1} + \sum_{i=j}^{j+l-1} h_i S^z_i. \end{equation} We use the algorithm described in the previous subsection to build a set of quasilocal $\tau^x_j$ operators on each site of each subsystem. \begin{figure} \includegraphics[width=\linewidth]{network.png} \caption{A diagram portraying the structure of our $\tau^x$ network representation of eigenstates. Starting from the reduced density matrix $\rho_\uparrow$, represented by the red circles here, we carry out the multiplication from the inside outward. In order to implement our algorithm, we rearrange commuting operators so that this multiplication can be carried out from right to left, as indicated in equations (\ref{eq:observe1}) and (\ref{eq:observe}). In this case, $\left\langle \hat{o}_i \hat{o}_j \right\rangle = \Tr(\tilde{\tau}^x_{s_1} \tilde{\tau}^x_{s_2} \rho_\uparrow \tilde{\tau}^x_{s_2} \tilde{\tau}^x_{s_1} \hat{o}_i \hat{o}_j).$ Note that $\hat{o}_j$ commutes through $\tilde{\tau}^x_{s_1}$ and can therefore be moved forward in the multiplication order, represented in this diagram by being moved inward. Operators $\hat{o}_i$ and $\tilde{\tau}^x_{s_1}$, however, do not commute. Therefore, before including $\hat{o}_i$, we must extend the working window to include $\tilde{\tau}^x_{s_1}$, the largest working window we will need.}\label{fig:network} \end{figure} \begin{figure} \begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{inchworm_1_wo.png} \subcaption{\label{fig:inchworm1}} \end{minipage} \begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{inchworm_2_wo.png} \subcaption{\label{fig:inchworm2}} \end{minipage} \begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{inchworm_3_wo.png} \subcaption{\label{fig:inchworm3}} \end{minipage} \caption{A depiction of the algorithm described in the text. The white sites are those that have yet to be reached, the solid red sites are in the current working window, and the striped blue sites have been traced out. In (\subref{fig:inchworm1}), we start with a working window of size $l$. In (\subref{fig:inchworm2}), the working window extends one site to the left by taking a tensor product with the single-site spin up density matrix. The window is now of size $l+1$. If a $\tau^x$ operator or an observable exists in the window, it is introduced in this step. In (\subref{fig:inchworm3}), the working space contracts by taking the partial trace over the site on the right. The system is traversed in this manner. }\label{fig:inchworm} \end{figure} For each site of the large system, we can choose a subsystem containing the site and a $\tilde{\tau}^x_j$ operator of size $l$ centered on the site.
Generally, there will be $l$ subsystems containing any given site of the total system, and ideally we select a $\tilde{\tau}^x_j$ operator from the center of a subsystem so as to cut off as little of the operator's action as possible. \begin{figure*} \includegraphics[width=\linewidth]{varvflips_all_inset.png} \caption{Energy fluctuations plotted as a function of the number of $l$-bit flips from the all up physical spin eigenstate for systems of size $L=32$. Three different disorder strengths are shown, with three realizations for each disorder strength and approximately fifty eigenstates per realization. If the number of bit flips is greater than $16$, the algorithm is started from the all down physical spin eigenstate instead. As expected, the fluctuation increases with an increasing number of bit flips, as the approximate operators introduce error. Also as expected, the fluctuation decreases with increasing disorder.}\label{fig:varvflips} \end{figure*} In practice, we select a $\tilde{\tau}^x_j$ operator by calculating the energy fluctuation of the eigenstate produced by applying each of the candidate operators from the $l$ subsystems containing the site $j$ to the fully polarized eigenstate. We select the candidate operator that produces the lowest fluctuation. The energy fluctuation is calculated using the process described below. In our Hamiltonian, we know two eigenstates independent of the disorder realization: the all up and all down physical spin states ($\ket{\uparrow \uparrow \uparrow \ldots}$ and $\ket{\downarrow \downarrow \downarrow \ldots}$). We label these $\ket{+ + + \ldots}$ and $\ket{- - - \ldots}$ in the $l$-bit basis, respectively. From these eigenstates, the set of $\tau^x_j$ operators that we have selected can be used to flip $l$-bits to attain any configuration, allowing us to target any eigenstate through its $l$-bit label. We now proceed with a verbal and pictorial description of the process by which we calculate the expectation value of a product of local operators on a large system while working in a Hilbert space of computationally manageable size. Our starting eigenstate here is the all up physical spin state, whose density matrix we label $\rho_\uparrow$. We additionally generate an $l$-bit flip configuration $S = \{s_1, s_2, \ldots s_n \}$ and a product over local observables $\hat{O} = \hat{o}_1 \hat{o}_2 \ldots \hat{o}_m $. The quantity to calculate is: \begin{equation} \label{eq:observe1} \begin{split} \left\langle \hat{O} \right\rangle &= \Tr (\rho \hat{O})\\ &= \Tr \left[ \left( \prod_{j \in S} \tau^x_j \right) \ket{\uparrow \cdots} \bra{\uparrow \cdots} \left( \prod_{j \in S} \tau^x_j \right)^\dagger \hat{O} \right]. \end{split} \end{equation} We are careful to determine a canonical ordering of the product over $\tilde{\tau}^x_j$ operators; while the exact $\tau^x$ operators on the full system commute, ours may not commute exactly, as they are approximate. We choose to order the operators in ascending order of $j$. Writing the products explicitly, our equation becomes \begin{equation} \label{eq:observe} \left\langle \hat{o}_1 \ldots \hat{o}_m \right\rangle = \Tr (\tilde{\tau}^x_{s_1} \ldots \tilde{\tau}^x_{s_n} \rho_\uparrow \tilde{\tau}^x_{s_n} \ldots \tilde{\tau}^x_{s_1} \hat{o}_1 \ldots \hat{o}_m). \end{equation} A visual representation is shown in Figure \ref{fig:network}. Each of the $\tilde{\tau}^x_i$ operators has non-trivial support on a window of size $l$ and is trivially the identity outside of this window.
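Before walking through the procedure, the two elementary window moves it relies on can be sketched as follows (a sketch under the convention that the leftmost site corresponds to the slowest-varying index in the Kronecker ordering; names are illustrative).
\begin{verbatim}
import numpy as np

UP = np.array([[1.0, 0.0],
               [0.0, 0.0]])               # |up><up| on a single site

def extend_left(rho):
    """Grow the working window by one spin-up site on the left."""
    return np.kron(UP, rho)

def contract_right(rho):
    """Partial-trace out the rightmost site of the working window."""
    d = rho.shape[0] // 2
    return rho.reshape(d, 2, d, 2).trace(axis1=1, axis2=3)
\end{verbatim}
Operators ($\tilde{\tau}^x_i$ factors and observables) are multiplied into the working density matrix as the window sweeps past them, as described next.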
Our goal is to never work with a reduced density matrix larger than the window. We initialize the process using the reduced density matrix of the all up physical spin eigenstate on the rightmost window. We then expand the system leftward, extending the window by taking the tensor product of an up spin with the current reduced density matrix. To keep the working space manageable, we subsequently contract the system from the right by taking the partial trace over the last site. Because the working window extends leftward and contracts rightward, as portrayed in Figure \ref{fig:inchworm}, we call this method the inchworm algorithm. The order in which we introduce the operators is important, as it pertains to the order of multiplication. We always introduce $\tilde{\tau}^x_i$ operators when we reach the operators' right edge, meaning that we introduce the $l$-bit flip operators in descending order. When we reach the right edge of an observable, we must first introduce any $\tilde{\tau}^x_i$ operator that intersects with the observable in order to maintain the order of the operations in Equation (\ref{eq:observe}). For example, when we encounter an observable $\hat{o}_j$ whose range intersects that of a bit flip operator $\tilde{\tau}^x_i$, we first include the $l$-bit flip operator even if its support does not extend as far to the right as the support of the observable. We update the density matrix by \begin{equation} \rho_n = \tilde{\tau}^x_i (\rho_{\uparrow,s} \otimes \rho_0) \tilde{\tau}^x_i \hat{o}_j, \end{equation} where $\rho_{\uparrow,s}$ is the density matrix of all up physical spins of the length required to extend $\rho_0$ to cover the supports of $\tilde{\tau}^x_i$ and $\hat{o}_j$. Therefore, if $l$ is the lattice size of the $\tilde{\tau}^x_i$ operator and $l_o$ is the lattice size of the $\hat{o}_j$ operator, the largest window we ever need to work with has lattice size $l + l_o - 1$. Progressing until we reach the leftmost site of the system and tracing over the remaining sites, we obtain the product in Equation (\ref{eq:observe}). A visual representation of this ordering is shown and explained in Figure \ref{fig:network}. \section{Results} \label{sec:results} \subsection{Energy Fluctuation} A natural first test of the quality of the algorithm described above is to calculate the energy fluctuation of approximate eigenstates produced using the method. Calculating the energy fluctuation has the benefit of indicating the quality of the eigenstates constructed by the approximate $l$-bit flip operators, thereby validating the use of the method for general products of observables. The energy fluctuation, or variance, $\Delta H^2 = \langle H^2 \rangle - \langle H \rangle ^2$, can be calculated by splitting the Hamiltonian into a sum of local operators acting over two sites. In this case, \begin{equation} \begin{split} H &= \sum_{i=1}^{L-1} \hat{h}_i , \\ \hat{h}_i &= \textbf{S}_i \cdot \textbf{S}_{i+1} + h_i S^z_i, \end{split} \end{equation} with the final field term $h_L S^z_L$ absorbed into $\hat{h}_{L-1}$ so that the sum reproduces equation (\ref{eq:main_ham}). We can then calculate the fluctuation by taking the sum over a set of products of local observable operators. Figure \ref{fig:varvflips} shows the variance of eigenstates produced by different combinations of $l$-bit flips for three disorder strengths for systems of size $L=32$. Each disorder strength contains three disorder realizations and approximately fifty eigenstates per realization. Flipping no $l$-bits whatsoever, we expect a variance of zero, as the all up and all down physical spin states are exact eigenstates.
We expect the variance to increase with the number of bit flips because the approximate $l$-bit flip operators introduce error into the constructed eigenstate. Eigenstates are selected at random; because the selected eigenstates follow a binomial distribution in the number of $l$-bit flips, the number of bit flips is clustered about $L/2$. If the number of bit flips is greater than $L/2$, we start from the all down physical spin eigenstate, meaning that we never need to flip more than $L/2$ $l$-bits. \begin{figure} \begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{mean_both.png} \end{minipage} \begin{minipage}{\linewidth} \includegraphics[width=\linewidth]{median_both.png} \end{minipage} \caption{A comparison of the $\tau^x$ network representation presented in this paper with the tensor network representation for approximate eigenstates, both for systems of size $L=32$. \textit{(Top)} The mean of the $\tau^x$ network formulation is consistently worse, likely owing to the fact that the distribution of variances is uniform on a log scale and the mean is therefore dominated by upper outliers. \textit{(Bottom)} However, a comparison of the $50^{\text{th}}$ percentile variance of each method indicates that the median $\tau^x$ network eigenstate is more accurate than the median tensor network eigenstate for higher disorder strengths, $W \gtrapprox 10$. }\label{fig:both_comp} \end{figure} As a comparison to existing methods for approximating eigenstates on large localized systems, Figure \ref{fig:both_comp} shows the median and mean variance of eigenstates produced using the $\tau^x$ network described in this paper and the tensor network method described in Ref. [\onlinecite{Wahl}], where the constituent unitaries are of size $l=8$. The data are shown over three disorder strengths for systems of size $L=32$. For each disorder strength, three realizations are generated, with one hundred eigenstates per realization calculated using the $\tau^x$ network representation and one thousand for the tensor network method. The mean for the $\tau^x$ network representation is consistently worse than that for the tensor network method, owing to the fact that the $\tau^x$ network method's mean is dominated by outlying eigenstates of high variance (see Fig. \ref{fig:varvflips}). However, the median variance for the $\tau^x$ network method becomes lower than that for the tensor network method with increasing disorder strength. This indicates that deep in the MBL phase, the typical approximation yielded by the $\tau^x$ network representation becomes better than that yielded by the tensor network method. As the disorder strength increases, the operators become more local, and the non-trivial portions of the exact $\tau^x_j$ operators that are cut off by the subsystem window become smaller. As a result, our approximate $\tilde{\tau}^x_j$ operators resemble the exact $\tau^x_j$ operators to a higher degree with increasing disorder strength, yielding more accurate eigenstates. \subsection{Correlations} Our technique also allows us to probe long-range correlations of observables measured on eigenstates of MBL systems. Though the approximation of the $l$-bit flip operator cuts off operator weight outside of some window, approximate eigenstates composed of overlapping strings of $\tilde{\tau}^x_j$ operators can display correlations outside of this operator window length.
Generally, two observables that can be continuously connected by windows of $\tilde{\tau}^x_j$ operators will display a non-trivial correlation. For systems of moderate size, such as the $L=32$ systems with which we work, this condition is fulfilled for most eigenstates. Because of the entanglement behavior of MBL eigenstates, correlations of local observables are thought to decay exponentially with distance \cite{Imbrie2016a}. Though we could feasibly measure correlations between any local observables through our method, in this case we choose to focus on the spin-spin correlation function: \begin{equation} \langle S^z_i S^z_j \rangle - \langle S^z_i \rangle \langle S^z_j \rangle. \end{equation} We expect \begin{equation} \max(\langle S^z_i S^z_j \rangle - \langle S^z_i \rangle \langle S^z_j \rangle) \propto e^{-|i-j|/\xi}, \end{equation} where $\xi$ is a localization length, indicating that the correlations of an eigenstate as a function of distance are bounded by an exponentially decaying envelope. An example of this behavior for a low disorder eigenstate is shown in Figure \ref{fig:correlation_ex}. \begin{figure} \includegraphics[width=\linewidth]{correlation_ex.png} \caption{A plot of all correlations $\langle S^z_i S^z_j \rangle - \langle S^z_i \rangle \langle S^z_j \rangle$ for an approximate eigenstate of a system of size $L=32$ and disorder strength $W=6$. An exponentially decaying line approximately bounding the correlations is provided as a guide to the eye. The correlators decay with distance until they reach machine precision. Note further that this decay can still be observed beyond the subsystem window of size $l=14$.}\label{fig:correlation_ex} \end{figure} The $\tau^x$ network description of eigenstates is not immediately expected to exhibit accurate correlations beyond the length of the subsystem. Overlapping $\tilde{\tau}^x$ operators can carry non-trivial action beyond the length of a subsystem, though it is not evident that this action is similar to that carried by products of exact $\tau^x$ operators. However, Figure \ref{fig:correlation_ex} shows that correlations continue to decay smoothly even outside of the subsystem window, which may indicate that the $\tau^x$ network eigenstate correlations are more accurate than expected. For each eigenstate calculated (approximately one hundred per disorder realization, with three realizations per disorder strength), we calculated $\xi$, the decay length of the envelope of the spin-spin correlation functions. The behavior of the mean $\xi$ as a function of disorder strength is shown in Figure \ref{fig:correlation_all}. Further, to determine the degree to which the correlator bound exhibits exponential decay, the average $R^2$ value of the exponentially decaying fit is shown in the inset. As each of the disorder strengths tested is within the MBL phase, we do not observe a breakdown of the exponential decay of the correlator as a function of distance, even for the lowest disorder strength, $W=6$. However, we do observe a gradual increase of the exponential decay length with decreasing disorder strength, as expected. \begin{figure} \includegraphics[width=\linewidth]{correlation_all.png} \caption{The average value of $\xi$ as a function of disorder strength for one hundred eigenstates per realization and three realizations per disorder strength $W$.
The value of $\xi$ is defined through the envelope bounding the correlation functions of an eigenstate: $(\langle S^z_i S^z_j \rangle - \langle S^z_i \rangle \langle S^z_j \rangle) \propto e^{-|i-j|/\xi}$. Note that the localization length decreases with increasing disorder. In the inset, the quality of an exponential decay fit of the envelope is shown. The fit is of high quality for all $W$ considered in our simulations.} \label{fig:correlation_all} \end{figure} \section{Discussion and conclusions} \label{sec:concl} We presented in this work the $\tau^x$ network representation of approximate eigenstates and the inchworm method to measure observables on those eigenstates for large MBL systems. Benchmarked against the tensor network method, we found that our algorithm does not construct eigenstates as accurately as the tensor network method close to the MBL crossover. However, the median eigenstate constructed by our algorithm outperforms that produced by the tensor network method deep in the MBL phase. In subsection \ref{subsec:current}, we outlined two current classes of methods to construct eigenstates on large MBL systems. Here, we briefly describe the advantages and disadvantages of the $\tau^x$ network representation compared to the others. Like SIMPS, DMRG-X, and En-DMRG, the $\tau^x$ network representation allows one to construct highly excited eigenstates of MBL systems. Compared to the DMRG-X algorithm, the $\tau^x$ network formulation does not produce eigenstates as accurately. For example, the DMRG-X algorithm in Ref. [\onlinecite{Khemani2016}] produces eigenstates with mean error at machine precision for systems of size up to $L=40$ with disorder as low as $W=8$. However, one of the benefits of the $\tau^x$ network representation is that it does not rely on eigenstate overlap with a physical spin product state, allowing one in principle to target any eigenstate by its $l$-bit label. By contrast, eigenstates with low overlap with physical spin product states may not be captured by DMRG-X. The tensor network algorithm of Ref. [\onlinecite{Wahl}] also allows one in principle to target any eigenstate by its $l$-bit label, making it most similar to the $\tau^x$ network representation. In terms of eigenstate accuracy, the two algorithms are similar. The median tensor network eigenstate is an order of magnitude more accurate than that of the $\tau^x$ network formulation at $W=6$. However, at $W \gtrapprox 10$, the median $\tau^x$ network eigenstate becomes an order of magnitude more accurate than that of the tensor network algorithm. A primary difference between the tensor network and $\tau^x$ network algorithms lies in computational speed. At $L=32$, the tensor network algorithm can take a long period of time (up to a week on our 113-machine, 10 TFLOP Beowulf computing cluster) to generate the unitary matrix for a given realization, but thereafter one can compute observables on eigenstates almost instantaneously. Meanwhile, the OLO algorithm presented in subsection \ref{subsec:olo} can generate an $l$-bit algebra for a system of size $L=32$ several times faster (less than three hours on our computing cluster), but the inchworm algorithm can take several hours to measure observables on a given eigenstate. Thus, the tensor network algorithm might be preferred when given a realization and a large set of eigenstates to sample.
The $\tau^x$ network representation allows for quick sampling over many realizations and, as opposed to DMRG-class methods, produces an unbiased sample of eigenstates. There are several natural next steps in studying this algorithm. One would be to test it on other models with quasi-local operators that jump between eigenstates. For example, a recent work \cite{Wortis2017} explored quasi-local integrals of motion of a two-site, disordered Hubbard model by constructing $l$-bit-like algebras that could be used in the algorithm we present in this paper. Additionally, the algorithm structure presented in Figure \ref{fig:network} suggests an analogy between this method and the tensor network algorithm. A future algorithm could use the $\tilde{\tau}^x$ operators from the OLO algorithm as a starting point, and then extend them by adding arbitrary unitary matrices on either side. These unitary matrices could then be optimized to minimize the degree to which the $\tilde{\tau}^x$ matrices fail to commute, or to minimize the energy fluctuation of the eigenstate produced by applying the $\tilde{\tau}^x$ matrices to the fully polarized eigenstate. \begin{acknowledgments} A.K.K. is supported by the Rhodes Trust. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 749150. The contents of this article reflect only the authors' views and not the views of the European Commission. S.H.S. was supported by EPSRC grant EP/N01930X/1. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data. \end{acknowledgments}
\section{Conclusion} We have shown that human-sourced partial annotations can be exploited to learn effective dependency parsers in a short period of time. The ConvexMST method we adapt from Grave and Elhadad easily combines constraints from both language universals and partial annotations, providing robust performance from the start of annotation until one runs out of budget or time. We demonstrate this with actual annotations produced for English and Spanish, using annotators with a range of experience. Overall, we present a case for working in realistic settings by paying close attention to the various sources of annotation and tracking the real costs associated with that supervision. We believe that over-reliance on creeping supervision of this type may lead to an inaccurate picture of the cross-lingual and low-resource applicability of various models, and we are encouraged by recent work on character-based models by Gillick et al. \shortcite{bytenlp} and Ballesteros et al. \shortcite{ballesteros:2015}, among others. Their work shows that viable models can be produced without relying on having annotations a priori, but rather by learning representations on the fly that need not conform to any one set of standards. \section{Annotation} \subsection{Graph Fragment Language} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth,trim={0 0 0 0}]{figures/gfl-example.png} \caption{Spanish GFL Example: Parentheses indicate a constituent-style bracket; angle brackets indicate direct dependency relations.} \label{fig:gfl-example} \end{figure} We use the Graph Fragment Language (GFL) \cite{schneider:2013} to allow for lightweight, simple annotations that our annotators can easily learn and use confidently. The choice of annotation scheme is particularly important: we seek to optimize annotation speed rather than full specification or high accuracy. In previous studies, the use of GFL has allowed for annotation rates 2-3 times those of traditional dependency annotation while still maintaining a useful level of annotation density \cite{gflweb:2014,mielens-sun-baldridge:2015}. Hwa \shortcite{hwa:1999} demonstrated that it is most effective to provide high-level sentence constituents to a parser and allow it to fill in the low-level information itself. The GFL annotation in Figure~\ref{fig:gfl-example} shows two distinct notations. Constituent brackets are specified by parentheses and direct dependencies by angle brackets. Many words and phrases are left underspecified. Allowing partial annotations dramatically increases the speed at which annotators can work, while simultaneously reducing error rates. These two effects both arise from being able to leave difficult or tedious portions of a sentence unspecified. \subsection{Filling in Partial Dependencies} A partial annotation produces a set of dependency tree fragments. Compared to an unlabeled sentence, this can substantially reduce the work a parser must do. When working with partial dependencies, there are two paths that can be taken with regard to overall model-building. In a `Fill-then-Parse' setup, the partial dependencies are first filled in to produce full dependencies that are then used to train a standard dependency parser. In a `Fill+Parse' setup, one model both fills in and parses new sentences. We use a Fill+Parse setup, while previous work focused on Fill-then-Parse. The major benefit of the former is that learning can be sensitive to the source of an arc in the training data---e.g., whether it came from an annotator or a universal rule.
Fill-then-Parse obscures this distinction, and not knowing how trustworthy an arc is can lead to additional errors. Indeed, the Fill+Parse method produces better results for our datasets than Fill-then-Parse (see Section~\ref{ssec:parsing-results}). \subsection{Simulated Cost Comparison} \label{ssec:cost} Many factors influence the cost of creating a corpus. Our goal is to minimize cost relative to the performance of a parser trained with the corpus. The actual cost of finding and paying annotators is the most obvious factor, and it will typically be higher for a low-resource language or highly specialized domain. Using a lightweight partial annotation scheme like GFL has the potential to increase the pool of qualified annotators and alleviate this challenge. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/span-fixed-budget.pdf} \caption{Equal Cost} \label{fig:equal-cost} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/span-fixed-budget-variable-cost.pdf} \caption{Variable Cost} \label{fig:variable-cost} \end{subfigure} \caption{Comparison of performance versus total cost. Equal Cost is the sum of all specified dependencies; Variable Cost weights dependencies by completion percentage. Run on Spanish data using simulated partial dependencies.} \label{fig:cost-comparison} \end{figure*} Given a partial annotation scheme like GFL, an additional cost factor is that of obtaining a particular level of completion for each sentence. Consider that for any sentence there are both `low-hanging fruit' dependencies such as determiner attachment, and more difficult dependencies such as preposition attachment and long-distance relations. Harder dependencies take longer to annotate (and thus cost more), so it is worth considering cost metrics that incorporate completion percentage. In the absence of timing/expense data, we can simulate this intuition with a variable cost model in which each additional dependency annotated in a sentence is more expensive than the previous one. Figure \ref{fig:cost-comparison} demonstrates the impact of completion cost. Parsing accuracies (for our parser introduced in the next section) are shown at different costs, under (a) a simple equal (per-arc) cost and (b) a variable cost. We simulated the construction of various corpora by deriving partial dependencies from gold standard annotations, and show the cost curves for different sentence completion rates. 100\% completion produces the best performance with equal costs, but under the more realistic variable cost model, 30\% and 50\% completion win. We show later that this pattern holds under actual timed annotation. Garrette \shortcite{garrette:2015} demonstrated the benefit of partial annotations for CCG parsing. They focused on the number of (partial) bracket annotations (as a proxy for annotation time), holding this fixed while varying the number of sentences. Strikingly, they found that having 40\% of brackets across the full dataset was better than having full brackets for 80\% of the corpus. This result uses an equal cost-per-bracket assumption, so the difference would be even more favorable to partial annotations under a variable cost. \subsection{Unsupervised vs. Partial Annotations} Without any direct annotations, we must rely on indirect supervision such as universal grammar rules, cross-lingual information transfer, and domain adaptation.
Following Grave \& Elhadad \shortcite{grave-elhadad:2015}, we use the universal grammar rules in Table \ref{tab:ug-rules}. Indirect supervision via these rules is achieved by biasing produced trees to conform to the rules. This is the only form of dependency supervision considered by Grave \& Elhadad, though they do provide additional direct supervision via gold part-of-speech tags. \subsection{Data} \label{ssec:data} We use two sources of data. To compare with prior work, we use the universal treebanks (version 2.0), which cover ten languages from a variety of language families \cite{mcdonald:2013}. We obtained GFL annotations for a subset of the English data, originally from WSJ Section 03 of the Penn Treebank, and we use simulation techniques to produce partial dependencies for the other languages. \begin{table} \centering \small \begin{tabular}{cc} \hline Verb $\mapsto$ Verb & Noun $\mapsto$ Noun \\ Verb $\mapsto$ Noun & Noun $\mapsto$ Adj \\ Verb $\mapsto$ Pron & Noun $\mapsto$ Det \\ Verb $\mapsto$ Adv & Noun $\mapsto$ Num \\ Verb $\mapsto$ Adp & Noun $\mapsto$ Conj \\ \hline Adj $\mapsto$ Adv & Adp $\mapsto$ Noun \\ \end{tabular} \caption{Universal Grammar Rules} \label{tab:ug-rules} \end{table} Our second data source is the Spanish dependency treebank from the AnCora corpus \cite{taule:2008}. For 1410 unique sentences of AnCora, we have partial dependencies specified in GFL by twelve annotators. Most sentences received a single partial annotation from a single annotator, but one section of the corpus was annotated by all annotators. As the original corpus is fully specified with gold dependencies, we can measure annotator agreement against the gold standard. The background and experience of the annotators varied considerably. Roughly one third were native Spanish speakers, with the rest ranging from fluent non-native speakers to a few with just a single year of formal study. This was done intentionally to provide a large variance in the types and quality of annotations that they were able to provide. Each annotator was trained for just 30 minutes. The nature of the annotations was explained and a small number of guidelines were provided. For instance, annotators were told that typically adjectives are dependents of nouns, nouns are dependents of verbs, and so on. These guidelines amounted to a summary of the rules in Table~\ref{tab:ug-rules}. During the annotation sessions, annotators were told to ask as many clarifying questions as needed, although in practice they needed very little guidance. Post-experiment debriefing interviews suggested that the straightforward nature of the GFL notation was very helpful and became clear within a few example sentences. Despite minimal training time, annotators were able to produce relatively consistent annotations that agreed in large part with those of other annotators. Table \ref{tab:agreement} shows both pairwise and overall agreement between annotators, computed over arcs for which both annotators in a pair provided a head. Overall agreement was high, with most pairwise figures in the 70s and 80s, and the agreement of individual annotators with the rest of the group is even higher, mostly in the 80s.
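For reference, these agreement figures are computed only over arcs for which both annotators in a pair supplied a head; a sketch of this computation (data structures hypothetical):
\begin{verbatim}
def pairwise_agreement(heads_a, heads_b):
    """Fraction of jointly-headed tokens on which two annotators agree.

    heads_a, heads_b: dicts mapping token position -> head position,
    containing only the arcs each partial annotation specifies.
    """
    shared = set(heads_a) & set(heads_b)
    if not shared:
        return float("nan")                # no jointly-annotated arcs
    agree = sum(heads_a[t] == heads_b[t] for t in shared)
    return agree / len(shared)

# Tokens 1 and 2 are jointly headed; the annotators agree only on 1.
print(pairwise_agreement({1: 2, 2: 0, 3: 2}, {1: 2, 2: 3}))  # 0.5
\end{verbatim}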
The partial annotation task proved helpful in terms of speed; our annotators were able to cover 750 tokens/hr, which compares favorably with the Penn Treebank efforts, which achieved rates of 750-1000 tokens/hr for English \cite{marcus1993} and 300-400 tokens/hr for Chinese \cite{xue2005}, both making use of initial parse suggestions from an existing parser. Efforts not using an existing parser proceed even more slowly; for instance, the Ancient Greek Dependency Treebank reported rates of 100-200 tokens/hr \cite{bamman2011}. \begin{table} \resizebox{.5\textwidth}{!}{\begin{tabular}{l|c|c|c} ~ & Partial EN & Full ES & Partial ES \\ \hline Unique Sentences & 270 & 135 & 1410 \\ Total Sentences & 270 & 135 & 2162 \\ \hline Number of Annotators & 2 & 1 & 12 \\ Total Annotation Hours & 8 & 13 & 72 \\ \end{tabular}} \caption{Training Set Statistics} \label{tab:training-stats} \end{table} \begin{table*}[t] \centering \footnotesize \resizebox{.75\textwidth}{!}{\begin{tabular}[b]{c|c|c|c|c|c|c|c|c|c|c|c|c|c} Annotator & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & Avg. \\ \hline 1 & 1 & 0.73 & 0.9 & 0.88 & 0.55 & 0.77 & 1 & 0.94 & 0.9 & 0.28 & 0.95 & 0.67 & .80 \\ 2 & 0.73 & 1 & 0.78 & 0.83 & 0.95 & 0.62 & 0.77 & 0.75 & 1 & 0.27 & 0.8 & 0.85 & .78 \\ 3 & 0.9 & 0.78 & 1 & 0.85 & 0.64 & 0.8 & 0.9 & 0.82 & 0.96 & 0.3 & 0.85 & 0.72 & .79 \\ 4 & 0.88 & 0.83 & 0.85 & 1 & 0.6 & 0.88 & 0.83 & 0.92 & 1 & 0.33 & 0.91 & 0.68 & .81 \\ 5 & 0.55 & 0.95 & 0.64 & 0.6 & 1 & 0.46 & 0.64 & 0.55 & 0.83 & 0.23 & 0.59 & 0.85 & .66 \\ 6 & 0.77 & 0.62 & 0.8 & 0.88 & 0.46 & 1 & 0.88 & 0.74 & 0.88 & 0.16 & 0.75 & 0.6 & .71 \\ 7 & 1 & 0.77 & 0.9 & 0.83 & 0.64 & 0.88 & 1 & 0.94 & 1 & 0.2 & 1 & 0.67 & .82 \\ 8 & 0.94 & 0.75 & 0.82 & 0.92 & 0.55 & 0.74 & 0.94 & 1 & 1 & 0.36 & 0.94 & 0.7 & .81 \\ 9 & 0.9 & 1 & 0.96 & 1 & 0.83 & 0.88 & 1 & 1 & 1 & 0 & 1 & 0.81 & .87 \\ 10 & 0.28 & 0.27 & 0.3 & 0.33 & 0.23 & 0.16 & 0.2 & 0.36 & 0 & 1 & 0.11 & 0.12 & .28 \\ 11 & 0.95 & 0.8 & 0.85 & 0.91 & 0.59 & 0.75 & 1 & 0.94 & 1 & 0.11 & 1 & 0.68 & .80 \\ 12 & 0.67 & 0.85 & 0.72 & 0.68 & 0.85 & 0.6 & 0.67 & 0.7 & 0.81 & 0.12 & 0.68 & 1 & .70 \\ \hline Total & .85 & .86 & .86 & .9 & .82 & .77 & .98 & .92 & .96 & .61 & .94 & .82 & ~ \\ \end{tabular}} \caption{Pairwise and total agreement by annotator. The `Total' row shows agreement with the set of all other annotators and the `Avg.' column is the average pairwise agreement.} \label{tab:agreement} \end{table*} \subsection{POS-Tagging} \label{sssec:pos} Our goal is to minimize the real-world costs associated with producing a finished parsing model. To this end, we trained our own POS taggers using type label annotations \cite{garrette:2013} rather than using gold-standard tags. We use universal POS tags rather than the finer-grained sets of the source corpora, both for simplicity and for cross-language comparison \cite{petrov:2011}. We trained taggers for all languages using a limited amount of the available gold data, ensuring that the accuracy is comparable with that of low-resource human-sourced taggers. We extract types from the corpus, rank them by frequency, and take the most frequent types to train the tagger. The cutoff on how many types to take is derived from the number of types the annotators in Garrette et al. \shortcite{garrette:2013} were able to produce in two hours. The taggers all obtain around 80\% accuracy. \section{Introduction} Unsupervised parsing solutions are an attractive yet troublesome method for handling low-data scenarios.
The performance of unsupervised parsers has increased dramatically in recent years \cite{klein:2004,naseem:2010}, making them a potentially viable option for constructing labeled corpora on limited budgets. However, their performance is often outmatched by small amounts of labeled data \cite{blunsom:2010,spitkovsky:2012}. Further, recent work using linguistically-informed error analysis on unsupervised Combinatory Categorial Grammar parsing shows that entire syntactic phenomena are outside the scope of existing unsupervised parsers \cite{bisk:2015}. Accordingly, most recent work in this area has focused on methods of providing sources of indirect annotation, whether via linguistic world-knowledge \cite{naseem:2010,grave-elhadad:2015}, partial annotations \cite{flannery:2011,mielens-sun-baldridge:2015}, or cross-lingual information transfer \cite{naseem:2012}. With unsupervised parsing, data collection is not entirely eliminated: a large amount of clean, relevant data is needed. Also, evaluations of unsupervised techniques typically rely on gold part-of-speech tags. Obtaining clean data for many languages is actually a difficult process, complicated by issues such as language identification, digitization, and varying or absent orthographies. This challenge also exists in many domain adaptation scenarios. We explore the effectiveness of creating small amounts of labeled data using the Graph Fragment Language (GFL), an annotation scheme designed for speed and ease \cite{schneider:2013,gflweb:2014}. We create 270 English and 2297 Spanish partial sentence annotations using GFL, with a mix of expert and non-expert annotators. We then adapt the minimum spanning tree based parsing technique of Grave \& Elhadad \shortcite{grave-elhadad:2015} to use these partial annotations in addition to the universal dependency rules it already exploits. Throughout this work we will refer to this parser as ConvexMST.\footnote{Code available at github.com/jmielens/convex-mst} We present parsing results with and without gold part-of-speech tags. When using predicted POS tags, our experiments show that exploiting cheap, incomplete direct supervision in addition to language universals provides large absolute performance improvements for both English and Spanish: 6.3\% for the former and 17.3\% for the latter. Furthermore, the ConvexMST parser dramatically outperforms the Gibbs sampler parser of Mielens et al. \shortcite{mielens-sun-baldridge:2015} using the same supervision (English: +5.2\%; Spanish: +14.4\%). We also show that the extra supervision provided by gold POS tags heavily influences results; in particular, it inflates performance when only using language universals. Experiments that rely on gold POS tags alone are thus not reliable indicators of performance in true low-resource settings. \section*{Acknowledgments} Supported by the U.S. Army Research Office under grant number W911NF-10-1-0533. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the U.S. Army Research Office. \nocite{*} \bibliographystyle{emnlp2016} \section{Method} \subsection{Convex-MST} This section provides a brief overview of the core parsing algorithm; for full details, see Grave \& Elhadad \shortcite{grave-elhadad:2015}. We begin by considering a binary vector $\mathbf{y}$ that encodes all of the dependencies in our corpus, such that $\mathbf{y}_{ijk} = 1$ if sentence $i$ has an arc with dependent $j$ and head $k$.
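To fix conventions, here is a toy sketch of this encoding (entirely illustrative; the feature matrix $\mathbf{X}$ used below is indexed over the same flattened arc space):
\begin{verbatim}
import numpy as np

def arc_index(sentences):
    """Map (sentence, dependent, head) triples to flat vector positions.

    Tokens are 1-based within each sentence; head 0 denotes ROOT.
    """
    index = {}
    for i, sent in enumerate(sentences):
        for j in range(1, len(sent) + 1):      # dependent token
            for k in range(0, len(sent) + 1):  # candidate head
                if j != k:
                    index[(i, j, k)] = len(index)
    return index

index = arc_index([["dogs", "bark"]])
y = np.zeros(len(index))
y[index[(0, 1, 2)]] = 1    # "dogs" attached to head "bark"
y[index[(0, 2, 0)]] = 1    # "bark" attached to ROOT
\end{verbatim}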
This representation leads to the problem formulation in Equation \ref{eq:ge-prob}, where $Y$ is the convex hull of all the valid tree assignments for $\mathbf{y}$, $n$ is the number of possible dependency arcs in the corpus, $\mathbf{u}$ is a penalty vector that penalizes potential dependency arcs that are not in the set of universal dependency rules, and $\mathbf{w}$ is a weight vector learned during training: \begin{equation} \label{eq:ge-prob} \min_{\mathbf{y} \in Y} \min_{\mathbf{w}} \frac{1}{2n} \lVert \mathbf{y} - \mathbf{Xw} \rVert^{2}_{2} + \frac{\lambda}{2} \lVert \mathbf{w} \rVert^{2}_{2} - \mu \mathbf{u}^{T} \mathbf{y} \end{equation} \noindent This problem can be solved using Algorithm \ref{alg:optimization} \cite{grave-elhadad:2015}. \begin{algorithm}[t] \caption{Optimization algorithm from Grave \& Elhadad (2015)}\label{alg:optimization} \begin{algorithmic}[1] \For{$t = 0, 1, 2, \ldots$ until convergence} \State Compute the optimal $\mathbf{w}$: $\mathbf{w}_{t} = \argmin_{\mathbf{w}} \frac{1}{2n} \lVert \mathbf{y}_{t} - \mathbf{Xw} \rVert^{2}_{2} + \frac{\lambda}{2} \lVert \mathbf{w} \rVert^{2}_{2}$ \State Compute the gradient w.r.t. $\mathbf{y}$: $\mathbf{g}_{t} = \frac{1}{n} (\mathbf{y}_{t} - \mathbf{Xw}_{t}) - \mu \mathbf{u}$ \State Solve the linear program: $\mathbf{s}_{t} = \argmin_{\mathbf{s} \in Y} \mathbf{s}^{T}\mathbf{g}_{t}$ \State Take the Frank-Wolfe step: $\mathbf{y}_{t+1} = \gamma_{t}\mathbf{s}_{t} + (1 - \gamma_{t})\mathbf{y}_{t}$ \EndFor \end{algorithmic} \end{algorithm} \subsection{Partial Dependency Features} \label{ssec:partial-features} The main modification we make is to add an additional term to penalize arcs that disagree with partial annotations. Let $\mathcal{S}$ be the set of all indices of $\mathbf{y}$ for which the corresponding head-dependent pair conforms to one of the universal rules. Then we can require that some proportion of the arcs in the corpus satisfy: \begin{equation*} \frac{1}{n}\sum_{i \in \mathcal{S}}y_i \ge c \end{equation*} \noindent This is equivalent to $\mathbf{u}^{T}\mathbf{y} \ge c$, where: \begin{equation*} u_{i}=\begin{cases} 1/n, & \text{if $i \in \mathcal{S}$}.\\ 0, & \text{otherwise}. \end{cases} \end{equation*} \noindent This is how the penalty term $\mu \mathbf{u}^{T} \mathbf{y}$ from (\ref{eq:ge-prob}) is derived, as a relaxation of the hard constraint. Similarly, we can add another penalty term that encourages a certain percentage of the arcs to conform to the arcs specified by annotators. If we let $\mathcal{G}$ be the set of all indices of $\mathbf{y}$ where the word pair conforms to the GFL annotations, then it is simple to construct an additional penalty term $\xi \mathbf{v}^{T} \mathbf{y}$. There is a slight difference between the GFL penalty term and the universal rule penalty term. Whereas the universal rule penalty is based simply on whether an arc conforms or does not conform to the rules, the GFL annotations naturally lead to a three-way distinction: the annotation can specify that an arc \textit{should} be present, \textit{should not} be present, or make no commitment. Accordingly, we split $\mathcal{G}$ into two sets, $\mathcal{G}_w$ and $\mathcal{G}_b$, where $\mathcal{G}_w$ is the set of all indices of $\mathbf{y}$ where the word pair should have an arc, and $\mathcal{G}_b$ is the set of all indices of $\mathbf{y}$ where the word pair should \textit{not} have an arc. We refer to these as the whitelist and the blacklist, accordingly.
Under this whitelist/blacklist formulation, the GFL-based penalty term $\xi \mathbf{v}^{T} \mathbf{y}$ is constructed with: \begin{equation*} v_{i}=\begin{cases} 1/n, & \text{if $i \in \mathcal{G}_w$}\\ -1/n, & \text{if $i \in \mathcal{G}_b$}\\ 0, & \text{otherwise} \end{cases} \end{equation*} \noindent This leads to the modified objective function in (\ref{eq:gfl-obj}), which now seeks a solution that minimizes violations of both the universal rules and the annotator-specified fragments. \begin{equation} \label{eq:gfl-obj} \min_{\mathbf{y} \in Y} \min_{\mathbf{w}} \frac{1}{2n} \lVert \mathbf{y} - \mathbf{Xw} \rVert^{2}_{2} + \frac{\lambda}{2} \lVert \mathbf{w} \rVert^{2}_{2} - \mu \mathbf{u}^{T} \mathbf{y} - \xi \mathbf{v}^{T} \mathbf{y} \end{equation} \noindent When no GFL annotations are specified for the corpus, the GFL penalty term goes to zero and the objective function reverts to its original formulation. Specific arcs are added to $\mathcal{G}_w$ and $\mathcal{G}_b$ in a number of ways, based on the different types of GFL annotation. Consider the GFL annotation in Figure~\ref{fig:white-black}. Here, the annotator has specified a direct dependency with `passed' as the head of `congress'. The arc `passed $\leftarrow$ congress' is added to $\mathcal{G}_w$, while all other arcs of the form `$X \leftarrow$ congress' are added to $\mathcal{G}_b$ because `congress' may only have a single head. Brackets may also result in additions to the whitelist and blacklist. In Figure~\ref{fig:white-black}, `a comprehensive plan' is bracketed. In this case, no arcs can be whitelisted, but many can be blacklisted. For instance, no word external to the bracket may be headed by a word in the bracket. This means arcs such as `plan $\leftarrow$ congress' must be in $\mathcal{G}_b$. Also, `passed' is indicated as the head of the entire bracket. We cannot whitelist any specific arcs with this information (since we do not know the head of the bracketed expression), but we know that no word internal to the bracket is headed by any word external to it, other than `passed'. Hence, arcs such as `congress $\leftarrow$ plan' must be in $\mathcal{G}_b$. \begin{figure}[t] \centering \includegraphics[width=.5\textwidth]{figures/white-black.png} \caption{GFL Whitelisting vs. Blacklisting} \label{fig:white-black} \end{figure} \section{Experiments and Discussion} \label{sec:results} We consider both simulated and actual partial annotations. Results based on actual annotation are the most important as they provide our best measure of performance under a realistic annotation setting. However, our Spanish annotators had only six hours each; there was no inter-annotator communication or creation of annotation conventions, and no attempt to have them adopt the conventions of the gold-standard AnCora dependencies we evaluate against. Because of this, we include simulation results to eliminate this source of divergence and better measure the effectiveness of different methods for filling in missing arcs in a partial annotation. It of course also allows us to measure this for all the languages in the Universal Dependencies treebanks. We consider three different supervision settings for ConvexMST: \begin{itemize} \item \textit{UG} uses just the universal grammar based features, which is equivalent to the method used by Grave \& Elhadad \shortcite{grave-elhadad:2015}. \item \textit{GFL} uses just the human specified features. \item \textit{GFL+UG} uses both.
\end{itemize} \noindent These three settings correspond to $\mu \neq 0$ (with $\xi = 0$), $\xi \neq 0$ (with $\mu = 0$), and $\xi, \mu \neq 0$ in Equation (\ref{eq:gfl-obj}), respectively. The training sets correspond to the `Partial EN' and `Partial ES' sets from Table \ref{tab:training-stats}. The set of sentences annotated with GFL is used as the training set for the \textit{GFL}, \textit{UG}, and \textit{GFL+UG} methods. \subsection{Simulated partial dependencies} \begin{table}[t] \centering \small \resizebox{.35\textwidth}{!}{\begin{tabular}{l|c|c|c} & \multicolumn{3}{c}{Degradation} \\ \hline Language & None & Light & Heavy \\ \hline DE & 69.6 & 69.5 & 68.2 \\ EN & 79.5 & 77.5 & 72.8 \\ ES & 78.0 & 76.5 & 71.5 \\ FR & 82.6 & 82.3 & 75.7 \\ IT & 82.0 & 81.8 & 77.4 \\ PT-BR & 80.6 & 80.0 & 72.4 \\ SV & 77.7 & 77.1 & 76.9 \\ \hline Average & 78.5 & 77.8 & 73.6 \\ \hline \end{tabular}} \caption {Simulated degradation results. \textit{Light} has 40\% of arcs removed and \textit{Heavy} has 70\% removed.} \label{tab:degradation} \end{table} Simulated partial dependencies are produced by removing dependencies via a stochastic process that approximates how we instructed human annotators to focus their efforts. Arcs are removed top-down, with arcs lower in the tree being more likely to be deleted. This results in trees with more high-level structure and less lower-level information. Figure~\ref{fig:degradation} demonstrates the stability of our parser under varying levels of such gold tree degradation. Missing arcs were recovered using our parse imputation scheme (using GFL+UG features), and the resulting parser was applied to the evaluation sentences. Accuracy decreases only slightly up to around 60\% arc removal, and then degrades more rapidly after that. Table~\ref{tab:degradation} provides numeric data for the simulations. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/degradation.png} \caption{Degradation Simulations} \label{fig:degradation} \end{figure} \begin{table}[t] \small \centering \resizebox{.48\textwidth}{!}{\begin{tabular}{l|l|cc|cc} \multirow{2}{*}{Parser} & \multirow{2}{*}{Features} & \multicolumn{2}{c|}{Gold Tags} & \multicolumn{2}{c}{Predicted Tags} \\ ~ & ~ & EN & ES & EN & ES \\ \hline RB & N/A & 17.1 & 28.0 & 17.1 & 28.0 \\ \hline Gibbs & GFL & 60.2 & 65.3 & 55.8 & 52.7\\ \hline \multirow{3}{*}{ConvexMST} & UG & 63.1 & 63.5 & 56.9 & 50.0 \\ & GFL & 65.9 & 70.5 & 61.2 & 67.1 \\ & UG+GFL & \textbf{68.2} & \textbf{71.3} & \textbf{63.2} & \textbf{67.3} \\ \end{tabular}} \caption {Directed dependency accuracy on English and Spanish universal treebanks using annotator provided GFL annotations, 10 or fewer words.} \label{tab:parsing-results} \end{table} \subsection{Annotator-sourced partial dependencies} \label{ssec:parsing-results} Table \ref{tab:parsing-results} gives semi-supervised parsing results on the English and Spanish treebanks for sentences with 10 or fewer words. To investigate the impact of POS taggers on parsing results, we conducted two series of experiments using POS tags produced by our own tagger as discussed in Section~\ref{sssec:pos} (\textit{Predicted Tags}) and gold POS tags extracted from the treebank (\textit{Gold Tags}). We compare against a right-branching baseline and the Gibbs parser of Mielens et al. \shortcite{mielens-sun-baldridge:2015}. All the parsing methods handily beat the right-branching baseline. ConvexMST-UG (the model of Grave and Elhadad \shortcite{grave-elhadad:2015}) beats the Gibbs parser with gold POS tags, but the ranking switches with predicted POS tags.
This shows the effectiveness of ConvexMST, but highlights its brittleness with respect to tagging errors: bad tags lead to poor guidance from language universals. ConvexMST-GFL easily beats both these approaches: it exploits partial annotations much more effectively than the Gibbs parser and learns effectively without language universals. The difference is especially marked for predicted POS tags: ConvexMST-GFL beats ConvexMST-UG by 4.3\% for English and 17.1\% for Spanish. (Recall that there were 8 hours of annotation for English and 72 hours for Spanish.) \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{figures/accuracies_over_time.pdf} \caption{Learning curves for individual annotators (thin lines) and conglomerated training sets (thick line) over annotation time. Vertical lines indicate annotation session breaks.} \label{fig:learning-curves} \end{figure} The best method of all uses both partial annotations and language universals: ConvexMST-UG+GFL improves on ConvexMST-GFL for both languages and POS conditions. The impact of the combination is greater for English, which has less GFL annotation. Overall, these results show that this combination is robust to varying amounts of partial annotations: the UG constraints provide a strong basis without annotations, they contribute when there are not many annotations available, and they eventually become less essential (but remain harmless) as more are provided. It is important to recall that the GFL annotations have no specific conformity to the gold standards of either \textit{original} corpus. Our goal was to understand the overall behavior of different methods given the same free-wheeling, diverse annotations; it is likely that higher numbers would have been achieved had we guided annotators to use corpus conventions, or used full annotations provided by our annotators as the evaluation set. The former defeats the spirit of our exercise, and we did not have sufficient budget for the latter. For Spanish, we also considered the performance of individual annotators alongside the full training set. The learning curves for individual annotators are shown in Figure \ref{fig:learning-curves}. There is substantial variation in the curves for the individual annotators; however, the curve based on the union of all annotations at each time step is smooth and is better than any individual past the three-hour mark. One way to consider this is in terms of building an accurate parser quickly with multiple, diverse annotators, where wall-clock time matters. Another way is to consider robustness with respect to possibly bad annotators. The next obvious steps would be to use active learning and to detect disagreement among annotators in order to either drop some or intervene to improve their quality. (Again, keep in mind that we are considering a ``cold start'' to this process, so there can be no gold standard for checking annotator quality.) \paragraph{Comparison to Full Annotation} To this point, all performance comparisons have been between different parse feature sets; we have demonstrated that the GFL features are complementary to the UG features, and that, standing alone, the GFL features are stronger than the UG features. The question of whether it might be more effective to simply have annotators produce full annotations is not addressed by these comparisons. To answer this question, we had our most experienced annotator fully annotate the same section that the other annotators did partially.
Producing these full annotations required roughly 13 hours of time from the single expert annotator. In comparison, the other annotators were able to partially annotate the same section in roughly two hours each, for a total of 24 hours. However, the theoretical wall-clock time of the group of annotators could be as low as two hours if the sessions were run in parallel. These different training sets were once again used to train ConvexMST models that were evaluated on a held-out test set. Table \ref{tab:full-vs-partial} contains the results of this experiment, demonstrating that the group of inexperienced annotators producing partial annotations was able to achieve similar performance levels to the single annotator producing full annotations. It should be noted that this comparison does not weight the results by the extrinsic costs associated with the production of the training data. In a real-world environment, the expert annotator would likely be more expensive than the inexperienced annotators, and possibly all of them combined (especially in a crowd-sourcing scenario). This makes the performance per unit cost for partial annotators even higher than Table \ref{tab:full-vs-partial} indicates. See Section \ref{ssec:cost} for discussion and modeling of these extrinsic cost effects. \begin{table} \begin{tabular}{l|c|c} Feature Set & Partial Annotations & Full Annotations \\ \hline UG & 56.9 & 58.8 \\ GFL & 61.2 & 62.8 \\ GFL+UG & \textbf{63.2} & \textbf{66.6} \\ \end{tabular} \caption {Comparison between full and partial annotations, 10 or fewer words, using predicted POS tags.} \label{tab:full-vs-partial} \end{table} \subsection{Longer Sentences} We also evaluated ConvexMST with longer sentences: those with 20 words or fewer. For this, the right-branching baseline is 25.8\%. When using all the annotations on the common set for all annotators, the scores for ConvexMST with UG, GFL, and GFL+UG are 47.6\%, 54.4\%, and 55.3\%, respectively. The values are worse than for shorter sentences, as expected, but the pattern observed in Table \ref{tab:parsing-results} still holds: GFL annotations best UG alone, and their combination is the best of all. \subsection{Discussion \& Error Analysis} \paragraph{POS-Tagging Impact} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/gold_1.png} \caption{Gold Tags} \label{fig:gold_parse} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{figures/pred_1.png} \caption{Predicted Tags} \label{fig:pred_parse} \end{subfigure} \caption{Differences in parsing results due to minimal POS tagging errors.} \label{fig:pos-parsing-error} \end{figure*} We thought it important to consider imperfect POS tagging because this entire framework is based on the assumption that the user is working from essentially no pre-existing resources. Assuming the availability of gold-standard POS tags is antithetical to this idea, and is one way in which direct supervision can show up in otherwise unsupervised (or indirectly supervised) systems. Many tagger errors are not likely to cause major problems during parsing; for instance, mislabeling pronouns as nouns, or adverbs as adjectives, is unlikely to lead to major structural issues. However, less likely errors can cause more dramatic effects, as shown in Figure~\ref{fig:pos-parsing-error}.
Here, the phrase `beating politically' (gold tags `\textsc{noun adv}') is mis-tagged as `\textsc{adj verb}', leading to the attachment of `politically' to the root word and the reorganization of a substantial chunk of the sentence. \paragraph{Weighting Constraint Violations} For feature sets with both GFL and UG-based constraints, a weighting factor can bias the parser towards being more likely to respect either GFL or UG constraints. We experimented with this, and found that for the datasets we considered, the best results were obtained when violations of GFL constraints were weighted as worse than violations of UG constraints. This result is not entirely unexpected given the relative performance of the constraints on their own, but it provides more evidence that even small amounts of direct supervision can beat indirect supervision.
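As a toy illustration of this weighting (our own example, not the tuned values used in the experiments), choosing $\xi > \mu$ simply makes a GFL violation cost more than a UG violation when scoring a candidate assignment:
\begin{verbatim}
import numpy as np

def constraint_reward(y, u, v, mu=0.5, xi=2.0):
    # mu * u^T y + xi * v^T y, the term subtracted from the
    # objective; xi > mu weights GFL constraints more heavily
    return mu * u @ y + xi * v @ y

# three candidate arcs: arc 0 obeys a UG rule, arc 1 is
# GFL-whitelisted, arc 2 is GFL-blacklisted
u = np.array([1/3, 0.0, 0.0])
v = np.array([0.0, 1/3, -1/3])
print(constraint_reward(np.array([1, 1, 0]), u, v))  # 0.83
print(constraint_reward(np.array([1, 0, 1]), u, v))  # -0.50
\end{verbatim}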
\section{Introduction} This article presents an ad hoc solution to the full Navier-Stokes system in the incompressible case. We begin in a didactic way by presenting a rather trivial solution to the Navier-Stokes system, then we proceed with a more complex situation. The trivial solution is spatially periodic and smooth; the more complex solution turns out to be integrable on the whole space and is also found to be smooth. The solution is obtained by considering a divergence-free vector field with Gaussian decay of amplitudes, and finally adding a suitable force field. The mathematical treatment is rather elementary, mainly vector calculus; one can refer, e.g., to \cite{VC}. For details on the Navier-Stokes equations, see, e.g., \cite{FD}. \newpage \section{Navier-Stokes equations} The incompressible Navier-Stokes equations in vector form are \begin{equation} \frac{\partial \vec{u}}{\partial t}+\nabla p=\nu\triangle \vec{u}-\vec{u}\cdot \nabla \vec{u} +\vec{f} \end{equation} \begin{equation} \nabla \cdot \vec{u}=0 \end{equation} where $\vec{u}=(u_x(x,y,z,t),u_y(x,y,z,t),u_z(x,y,z,t))$ is a vector (velocity) field $\vec{u}:\mathbb{R}^3\times [0,\infty)\longrightarrow \mathbb{R}^3$, $\vec{f}=(f_x(x,y,z,t),f_y(x,y,z,t),f_z(x,y,z,t))$ is a vector (force) field $\vec{f}:\mathbb{R}^3\times [0,\infty)\longrightarrow \mathbb{R}^3$ and $p(x,y,z,t)$ is a scalar (pressure) field $p:\mathbb{R}^3\times [0,\infty)\longrightarrow \mathbb{R}$. $\nu>0$ (viscosity) is a constant. \\ In terms of Cartesian coordinates, the equations can be stated equivalently as \begin{equation} \frac{\partial u_x}{\partial t}+\frac{\partial p}{\partial x}=\nu(\frac{\partial^2 u_x}{\partial x^2}+\frac{\partial^2 u_x}{\partial y^2}+\frac{\partial^2 u_x}{\partial z^2})-u_x\frac{\partial u_x}{\partial x}-u_y\frac{\partial u_x}{\partial y}-u_z\frac{\partial u_x}{\partial z}+f_x \end{equation} \begin{equation} \frac{\partial u_y}{\partial t}+\frac{\partial p}{\partial y}=\nu(\frac{\partial^2 u_y}{\partial x^2}+\frac{\partial^2 u_y}{\partial y^2}+\frac{\partial^2 u_y}{\partial z^2})-u_x\frac{\partial u_y}{\partial x}-u_y\frac{\partial u_y}{\partial y}-u_z\frac{\partial u_y}{\partial z} +f_y \end{equation} \begin{equation} \frac{\partial u_z}{\partial t}+\frac{\partial p}{\partial z}=\nu(\frac{\partial^2 u_z}{\partial x^2}+\frac{\partial^2 u_z}{\partial y^2}+\frac{\partial^2 u_z}{\partial z^2})-u_x\frac{\partial u_z}{\partial x}-u_y\frac{\partial u_z}{\partial y}-u_z\frac{\partial u_z}{\partial z} +f_z \end{equation} \begin{equation} \frac{\partial u_x}{\partial x}+\frac{\partial u_y}{\partial y}+\frac{\partial u_z}{\partial z}=0 \end{equation} \section{Introductory preliminaries: solution of Navier-Stokes with no external force field} Consider the functions \begin{equation} u_x(x,y,z,t)=e^{at+ i(\alpha x+\beta y+\gamma z)} \end{equation} \begin{equation} u_y(x,y,z,t)=e^{at+ i(\alpha x+\beta y+\gamma z)} \end{equation} \begin{equation} u_z(x,y,z,t)=e^{at+ i(\alpha x+\beta y+\gamma z)} \end{equation} with $x,y,z\in \mathbb{R}$, $t\geq 0$ and $a,\alpha,\beta,\gamma \in \mathbb{R}$. \\ We thus have $u_x=u_y=u_z=u$, or equivalently $\vec{u}=(u(x,y,z,t),u(x,y,z,t),u(x,y,z,t))$. We shall adopt this notation. Let us calculate the partial derivatives.
We will have the following functional equations for $u_x$ $$ \frac{\partial u_x}{\partial x}=i\alpha u_x \; ,\; \frac{\partial u_x}{\partial y}=i\beta u_x \; ,\; \frac{\partial u_x}{\partial z}=i\gamma u_x $$ $$ \frac{\partial^2u_x}{\partial x^2}=-\alpha^2 u_x \; ,\; \frac{\partial^2u_x}{\partial y^2}=-\beta^2 u_x \; ,\; \frac{\partial^2u_x}{\partial z^2}=-\gamma^2 u_x $$ Similarly for $u_y$ $$ \frac{\partial u_y}{\partial x}=i\alpha u_y \; ,\; \frac{\partial u_y}{\partial y}=i\beta u_y \; ,\; \frac{\partial u_y}{\partial z}=i\gamma u_y $$ $$ \frac{\partial^2u_y}{\partial x^2}=-\alpha^2 u_y \; ,\; \frac{\partial^2u_y}{\partial y^2}=-\beta^2 u_y \; ,\; \frac{\partial^2u_y}{\partial z^2}=-\gamma^2 u_y $$ And for $u_z$ $$ \frac{\partial u_z}{\partial x}=i\alpha u_z \; ,\; \frac{\partial u_z}{\partial y}=i\beta u_z \; ,\; \frac{\partial u_z}{\partial z}=i\gamma u_z $$ $$ \frac{\partial^2u_z}{\partial x^2}=-\alpha^2 u_z \; ,\; \frac{\partial^2u_z}{\partial y^2}=-\beta^2 u_z \; ,\; \frac{\partial^2u_z}{\partial z^2}=-\gamma^2 u_z $$ From these we find that \begin{equation} \nabla \cdot \vec{u}=\frac{\partial u_x}{\partial x}+\frac{\partial u_y}{\partial y}+\frac{\partial u_z}{\partial z}=i\alpha u_x+i\beta u_y+i\gamma u_z=0 \end{equation} As $u_x=u_y=u_z=u$, we will have \begin{equation} iu(\alpha+\beta +\gamma)=0 \end{equation} from which we conclude \begin{equation} \alpha +\beta+ \gamma=0 \end{equation} We shall call this the \textit{incompressibility condition}. \\ Let us substitute the partial derivatives into the Navier-Stokes equations. We will use the notation $u=u_x=u_y=u_z$. We will obtain \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial x}=-\nu(\alpha^2 u+\beta^2 u+\gamma^2 u)-i\alpha u^2-i\beta u^2-i\gamma u^2 \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial y}=-\nu(\alpha^2 u+\beta^2 u+\gamma^2 u)-i\alpha u^2-i\beta u^2-i\gamma u^2 \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial z}=-\nu(\alpha^2 u+\beta^2 u+\gamma^2 u)-i\alpha u^2-i\beta u^2-i\gamma u^2 \end{equation} Collecting terms, these equal \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial x}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u-iu^{2}(\alpha + \beta +\gamma) \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial y}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u-iu^{2}(\alpha + \beta +\gamma) \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial z}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u-iu^{2}(\alpha + \beta +\gamma) \end{equation} Now, since the incompressibility condition gives $\alpha + \beta +\gamma =0$, we are left with \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial x}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial y}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} \begin{equation} \frac{\partial u}{\partial t}+\frac{\partial p}{\partial z}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} This is the fundamental functional equation. Let us now calculate the time derivative.
As $u(x,y,z,t)=e^{at+ i(\alpha x+\beta y+\gamma z)}$, we will have \begin{equation} \frac{\partial u}{\partial t}=au \end{equation} from which we obtain the parametric functional equation \begin{equation} au+\frac{\partial p}{\partial x}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} \begin{equation} au+\frac{\partial p}{\partial y}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} \begin{equation} au+\frac{\partial p}{\partial z}=-\nu(\alpha^2 +\beta^2 +\gamma^2 )u \end{equation} This holds if \begin{equation} a=-\nu(\alpha^2 +\beta^2 +\gamma^2 ) \quad \mbox{and} \quad \frac{\partial p}{\partial x}=\frac{\partial p}{\partial y}=\frac{\partial p}{\partial z}=0\Longrightarrow p(x,y,z,t)=c \end{equation} $c\in\mathbb{R}$. \\ This gives the final solution \begin{equation} u(x,y,z,t)=e^{-\nu(\alpha^2 +\beta^2 +\gamma^2 )t+ i(\alpha x+\beta y+\gamma z)} \end{equation} with the restriction $\alpha+\beta+\gamma=0$. Taking the real part, we have \begin{equation} u(x,y,z,t)=e^{-\nu(\alpha^2 +\beta^2 +\gamma^2 )t}\cos(\alpha x+\beta y+\gamma z) \end{equation} with the restriction $\alpha + \beta +\gamma =0$ and \begin{equation} p(x,y,z,t)=c, \; \; c\in \mathbb{R} \end{equation} \section{Analytical solution for Navier-Stokes equations} \subsection{The 'grundfunctions'} We shall begin with the arbitrary choice of 'grundfunctions', as we shall call them. They are completely ad hoc in their nature. \begin{equation} u_x(x,y,z,t)= \alpha (Biy-y)(Ciz-z)e^{at-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_y(x,y,z,t)=\beta (Aix-x)(Ciz-z)e^{at-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_z(x,y,z,t)=\gamma (Aix-x)(Biy-y)e^{at-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} where $a,A,B,C,\alpha ,\beta , \gamma \in \mathbb{C}$ and $i$ is the imaginary unit. \\ For later use, we calculate directly \begin{equation} u_xu_y=\alpha \beta (Aix-x)(Biy-y)(Ciz-z)^2e^{2at-(x^2+y^2+z^2)+i(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_xu_z=\alpha \gamma (Aix-x)(Biy-y)^2(Ciz-z)e^{2at-(x^2+y^2+z^2)+i(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_yu_z=\beta \gamma (Aix-x)^2(Biy-y)(Ciz-z)e^{2at-(x^2+y^2+z^2)+i(Ax^2+By^2+Cz^2)} \end{equation} We note immediately that the divergence of such a vector field of grundfunctions vanishes, \begin{equation} \nabla \cdot \vec{u}=(Aix-x)u_x+(Biy-y)u_y+(Ciz-z)u_z=0 \quad \forall\, x,y,z,t \end{equation} if and only if \begin{equation} \alpha +\beta +\gamma=0 \end{equation} This is the incompressibility condition. \\ \subsection{Derivatives of grundfunctions} Let us calculate the partial derivatives.
We will have the following functional equations for $u_x$ $$ \frac{\partial u_x}{\partial x}=(Aix-x)u_x $$ $$ \frac{\partial u_x}{\partial y}=(Biy-y)u_x+\frac{u_x}{y} $$ $$ \frac{\partial u_x}{\partial z}=(Ciz-z)u_x+\frac{u_x}{z} $$ The second partial derivatives $$ \frac{\partial^2 u_x}{\partial x^2}=(Aix-x)^2u_x+(Ai-1)u_x $$ $$ \frac{\partial^2 u_x}{\partial y^2}=(Biy-y)^2u_x+3(Bi-1)u_x $$ $$ \frac{\partial^2 u_x}{\partial z^2}=(Ciz-z)^2u_x+3(Ci-1)u_x $$ The cross-terms $$ u_x\frac{\partial u_x}{\partial x}=(Aix-x)u_{x}^{2} $$ $$ u_y\frac{\partial u_x}{\partial y}=(Biy-y)u_xu_y+\frac{u_xu_y}{y} $$ $$ u_z\frac{\partial u_x}{\partial z}=(Ciz-z)u_xu_z+\frac{u_xu_z}{z} $$ So the sum of cross-terms will be \begin{equation} u_x((Aix-x)u_{x}+(Biy-y)u_y+\frac{u_y}{y}+(Ciz-z)u_z+\frac{u_z}{z})=\frac{u_xu_y}{y}+\frac{u_xu_z}{z} \end{equation} And for $u_y$ $$ \frac{\partial u_y}{\partial x}=(Aix-x)u_y+\frac{u_y}{x} $$ $$ \frac{\partial u_y}{\partial y}=(Biy-y)u_y $$ $$ \frac{\partial u_y}{\partial z}=(Ciz-z)u_y+\frac{u_y}{z} $$ The second partial derivatives $$ \frac{\partial^2 u_y}{\partial x^2}=(Aix-x)^2u_y+3(Ai-1)u_y $$ $$ \frac{\partial^2 u_y}{\partial y^2}=(Biy-y)^2u_y+(Bi-1)u_y $$ $$ \frac{\partial^2 u_y}{\partial z^2}=(Ciz-z)^2u_y+3(Ci-1)u_y $$ The cross terms $$ u_x\frac{\partial u_y}{\partial x}=u_x((Aix-x)u_y+\frac{u_y}{x}) $$ $$ u_y\frac{\partial u_y}{\partial y}=(Biy-y)u_{y}^{2} $$ $$ u_z\frac{\partial u_y}{\partial z}=u_z((Ciz-z)u_y+\frac{u_y}{z}) $$ So the sum of cross-terms will be \begin{equation} u_y((Aix-x)u_{x}+(Biy-y)u_y+\frac{u_x}{x}+(Ciz-z)u_z+\frac{u_z}{z})=\frac{u_yu_x}{x}+\frac{u_yu_z}{z} \end{equation} And for $u_z$ $$ \frac{\partial u_z}{\partial x}=(Aix-x)u_z+\frac{u_z}{x} $$ $$ \frac{\partial u_z}{\partial y}=(Biy-y)u_z+\frac{u_z}{y} $$ $$ \frac{\partial u_z}{\partial z}=(Ciz-z)u_z $$ The second partial derivatives $$ \frac{\partial^2 u_z}{\partial x^2}=(Aix-x)^2u_z+3(Ai-1)u_z $$ $$ \frac{\partial^2 u_z}{\partial y^2}=(Biy-y)^2u_z+3(Bi-1)u_z $$ $$ \frac{\partial^2 u_z}{\partial z^2}=(Ciz-z)^2u_z+(Ci-1)u_z $$ The cross-terms $$ u_x\frac{\partial u_z}{\partial x}=u_x((Aix-x)u_z+\frac{u_z}{x}) $$ $$ u_y\frac{\partial u_z}{\partial y}=u_y((Biy-y)u_z+\frac{u_z}{y}) $$ $$ u_z\frac{\partial u_z}{\partial z}=(Ciz-z)u_{z}^{2} $$ So the sum of cross-terms will be \begin{equation} u_z((Aix-x)u_{x}+(Biy-y)u_y+\frac{u_x}{x}+(Ciz-z)u_z+\frac{u_y}{y})=\frac{u_zu_x}{x}+\frac{u_zu_y}{y} \end{equation} \subsection{Navier-Stokes equations with the choice of grundfunctions} The right side of the Navier-Stokes system will become \begin{equation} \nu((Aix-x)^2+(Ai-1)+(Biy-y)^2+3(Bi-1)+(Ciz-z)^2+3(Ci-1) )u_x-(\frac{u_xu_y}{y}+\frac{u_xu_z}{z}) +f_x \end{equation} \begin{equation} \nu((Aix-x)^2+3(Ai-1)+(Biy-y)^2+(Bi-1)+(Ciz-z)^2+3(Ci-1))u_y-(\frac{u_yu_x}{x}+\frac{u_yu_z}{z}) +f_y \end{equation} \begin{equation} \nu((Aix-x)^2+3(Ai-1)+(Biy-y)^2+3(Bi-1)+(Ciz-z)^2+(Ci-1))u_z-(\frac{u_zu_x}{x}+\frac{u_zu_y}{y}) + f_z \end{equation} The time derivatives will obviously be \begin{equation} \frac{\partial u_x}{\partial t}=au_x \end{equation} \begin{equation} \frac{\partial u_y}{\partial t}=au_y \end{equation} \begin{equation} \frac{\partial u_z}{\partial t}=au_z \end{equation} Since we want $a$ to be a constant, we naturally choose \begin{equation} a=\nu((A+B+C)i-3) \end{equation} which is the part of the constant viscous terms common to all three component equations; the remainder will be absorbed into the pressure gradient.
Therefore the grundfunctions will become \begin{equation} u_x(x,y,z,t)= \alpha (Biy-y)(Ciz-z)e^{\nu((A+B+C)i-3)t-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_y(x,y,z,t)=\beta (Aix-x)(Ciz-z)e^{\nu((A+B+C)i-3)t-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} \begin{equation} u_z(x,y,z,t)=\gamma (Aix-x)(Biy-y)e^{\nu((A+B+C)i-3)t-\frac{1}{2}(x^2+y^2+z^2)+\frac{i}{2}(Ax^2+By^2+Cz^2)} \end{equation} One can see that the field is rotating with respect to time due to the factor $(A+B+C)i$. \\ Therefore the remaining pressure gradient will have components \begin{equation} \frac{\partial p}{\partial x}=\nu((Aix-x)^2+(Biy-y)^2+2(Bi-1)+(Ciz-z)^2+2(Ci-1))u_x-(\frac{u_xu_y}{y}+\frac{u_xu_z}{z}) +f_x \end{equation} \begin{equation} \frac{\partial p}{\partial y}=\nu((Aix-x)^2+2(Ai-1)+(Biy-y)^2+(Ciz-z)^2+2(Ci-1))u_y-(\frac{u_yu_x}{x}+\frac{u_yu_z}{z}) +f_y \end{equation} \begin{equation} \frac{\partial p}{\partial z}=\nu((Aix-x)^2+2(Ai-1)+(Biy-y)^2+2(Bi-1)+(Ciz-z)^2)u_z-(\frac{u_zu_x}{x}+\frac{u_zu_y}{y}) + f_z \end{equation} We do not know in general whether such a pressure field exists; instead, we construct an explicit solution. \subsection{Explicit solution with a nontrivial external force field} We now exhibit an explicit solution with the aid of an external force field \begin{equation} \vec{f}=(f_x,f_y,f_z) \end{equation} which we assume to be smooth and integrable. For simplicity we will take $A=B=C=1$, that is, the frequencies are all equal to unity. Now consider the pressure gradient. \\ \begin{equation} \frac{\partial p}{\partial x}=\nu((ix-x)^2u_x+(iy-y)^2u_x+2(i-1)u_x+(iz-z)^2u_x+2(i-1)u_x)-(\frac{u_xu_y}{y}+\frac{u_xu_z}{z})+f_x \end{equation} \begin{equation} \frac{\partial p}{\partial y}=\nu((ix-x)^2u_y+2(i-1)u_y+(iy-y)^2u_y+(iz-z)^2u_y+2(i-1)u_y)-(\frac{u_yu_x}{x}+\frac{u_yu_z}{z})+f_y \end{equation} \begin{equation} \frac{\partial p}{\partial z}=\nu((ix-x)^2u_z+2(i-1)u_z+(iy-y)^2u_z+2(i-1)u_z+(iz-z)^2u_z)-(\frac{u_zu_x}{x}+\frac{u_zu_y}{y})+f_z \end{equation} Let us multiply the first equation by $(ix-x)$, the second by $(iy-y)$ and the third by $(iz-z)$, and sum the three equations.
Using the property of zero-divergence, we will get just \begin{equation} -x(i-1)(\frac{u_xu_y}{y}+\frac{u_xu_z}{z}-f_x)-y(i-1)(\frac{u_yu_x}{x}+\frac{u_yu_z}{z}-f_y)-z(i-1)(\frac{u_zu_x}{x}+\frac{u_zu_y}{y}-f_z)= x(i-1)\frac{\partial p}{\partial x}+y(i-1)\frac{\partial p}{\partial y}+z(i-1)\frac{\partial p}{\partial z} \end{equation} Let us work backwards and choose the pressure gradient so that \begin{equation} \frac{\partial p}{\partial x}=-(\frac{u_xu_y}{y}+\frac{u_xu_z}{z})+f_x \end{equation} \begin{equation} \frac{\partial p}{\partial y}=-(\frac{u_yu_x}{x}+\frac{u_yu_z}{z})+f_y \end{equation} \begin{equation} \frac{\partial p}{\partial z}=-(\frac{u_zu_x}{x}+\frac{u_zu_y}{y})+f_z \end{equation} As we have \begin{equation} \frac{u_xu_y}{y}=-4\alpha \beta xz^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} \begin{equation} \frac{u_xu_z}{z}=-4\alpha \gamma xy^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} \begin{equation} \frac{u_zu_y}{y}=-4\beta \gamma zx^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} \begin{equation} \frac{u_yu_x}{x}=-4\beta \alpha yz^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} \begin{equation} \frac{u_zu_x}{x}=-4\alpha \gamma zy^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} \begin{equation} \frac{u_yu_z}{z}=-4\beta \gamma yx^2e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)} \end{equation} with $g=e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)}$, this gives \begin{equation} \frac{\partial p}{\partial x}=4\alpha( \beta z^2+\gamma y^2)xg +f_x \end{equation} \begin{equation} \frac{\partial p}{\partial y}=4\beta(\alpha z^2+\gamma x^2)yg +f_y \end{equation} \begin{equation} \frac{\partial p}{\partial z}=4\gamma(\alpha y^2+\beta x^2)zg+f_z \end{equation} Now suppose we have an external force field $\vec{f}$, such that \begin{equation} f_x=-4\alpha( \beta z^2+\gamma y^2)xg+4xg \end{equation} \begin{equation} f_y=-4\beta(\alpha z^2+\gamma x^2)yg+4yg \end{equation} \begin{equation} f_z=-4\gamma(\alpha y^2+\beta x^2)zg+4zg \end{equation} Then the pressure gradient will be just \begin{equation} \frac{\partial p}{\partial x}=4xg \end{equation} \begin{equation} \frac{\partial p}{\partial y}=4yg \end{equation} \begin{equation} \frac{\partial p}{\partial z}=4zg \end{equation} Since $\partial g/\partial x=2x(i-1)g$ (and similarly for $y$ and $z$), we have $\int 4xg\,dx=\frac{2}{i-1}\,g=-(1+i)g$, so this can be integrated easily to obtain \begin{equation} p(x,y,z,t)=-(1+i)e^{\nu(6i-6)t-(x^2+y^2+z^2)+i(x^2+y^2+z^2)}+ constant \end{equation} \subsection{Conclusion} We have provided an analytical solution to the full Navier-Stokes equations that is smooth and integrable, with the aid of an external force field. One can now proceed to extend this solution to more complex situations. What kinds of frequency combinations are possible? Are there nontrivial solutions without an external force field?
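As an elementary cross-check of the periodic solution of the preceding section, the following short symbolic computation (our own addition, a sketch using Python's sympy package) verifies that the velocity field satisfies the force-free system with constant pressure:
\begin{verbatim}
import sympy as sp

x, y, z, t, nu, al, be = sp.symbols('x y z t nu alpha beta',
                                    real=True)
ga = -al - be                       # incompressibility condition

u = sp.exp(-nu*(al**2 + be**2 + ga**2)*t) * sp.cos(al*x + be*y + ga*z)
ux = uy = uz = u                    # all three components are equal

div = sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uz, z)
lap = sum(sp.diff(ux, s, 2) for s in (x, y, z))
conv = ux*sp.diff(ux, x) + uy*sp.diff(ux, y) + uz*sp.diff(ux, z)
residual = sp.diff(ux, t) - nu*lap + conv   # with p = const, f = 0

print(sp.simplify(div))       # -> 0
print(sp.simplify(residual))  # -> 0
\end{verbatim}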
\section{Introduction} It has been the major goal of particle physics to discover a theoretical framework for unifying gravity with the other three known forces, {\it viz.}, electromagnetism, and the weak and strong nuclear forces. Such a theory must be compatible with quantum theory at very small scales corresponding to very high energies. Even the possibly less ambitious goal of reconciling general relativity with quantum theory has been elusive and may require new concepts to accomplish. There has been a particular interest in the possibility that quantum gravity theories will lead to Lorentz invariance violation (LIV) at the Planck scale, $\lambda_{Pl} = \sqrt{G\hbar /c^3} \sim 1.6 \times 10^{-35}$ m. This scale corresponds to a mass (energy) scale of $M_{Pl} = \hbar / (\lambda_{Pl}c) \sim 1.2 \times 10^{19}$ GeV/c$^2$. It is at the Planck scale where quantum effects are expected to play a key role in determining the effective nature of space-time that emerges as general relativity in the classical continuum limit. The idea that Lorentz invariance (LI) may indeed be only approximate has been explored within the context of a wide variety of suggested Planck-scale physics scenarios. These include the concepts of deformed relativity, loop quantum gravity, non-commutative geometry, spin foam models, and some string theory (M theory) models. Such theoretical explorations and their possible consequences, such as observable modifications in the energy-momentum dispersion relations for free particles and photons, have been discussed under the general heading of ``Planck scale phenomenology''. There is an extensive literature on this subject. (See~\cite{ma05} for a review; some recent references are Refs.~\cite{el08}--\cite{he09}. For a non-technical treatment of the present basic approaches to a quantum gravity theory, see Ref.~\cite{smolin}). One should keep in mind that in a context that is separate from quantum gravity considerations, it is important to test LI for its own sake~\cite{co98,cg99}. {\it LIV gratia LIV}. The significance of such an approach is evident when one considers the unexpected discoveries of the violation of $P$ and $CP$ symmetries. In fact, it has been shown that a violation of $CPT$ would imply LIV~\cite{gr02}. We will consider here some of the consequent searches for such effects using high energy astrophysics observations, particularly observations of high energy cosmic $\gamma$-rays and ultrahigh energy cosmic rays. \section{LIV Perturbations} We know that Lorentz invariance has been well validated in particle physics; indeed, it plays an essential role in designing machines such as the new LHC (Large Hadron Collider). Thus, any LIV extant at accelerator energies (``low energies'') must be extremely small. This consideration is reflected by adding small Lorentz-violating terms in the free particle Lagrangian. Such terms can be postulated to be independent of quantum gravity theory; see, {\it e.g.}, Refs.~\cite{co98,cg99}. Alternatively, it can be assumed that the terms are small because they are suppressed by one or more powers of $p/M_{Pl}$ (with the usual convention that $c = 1$). In the latter case, in the context of effective field theory (EFT), such terms are assumed to approximate the effects of quantum gravity at ``low energies'' when $p \ll M_{Pl}$. One result of such assumptions is a modification of the dispersion relation that relates the energy and momentum of a free particle or photon.
This, in turn, can lead to a maximum attainable velocity (MAV) of a particle different from $c$ or a variation of the velocity of a photon {\it in vacuo} with photon energy. Both effects are clear violations of relativity theory. Such modifications of kinematics can result in changes in threshold energies for particle interactions, suppression of particle interactions and decays, or allowance of particle interactions and decays that are kinematically forbidden by Lorentz invariance~\cite{cg99}. A simple formulation for breaking LI by a small first order perturbation in the electromagnetic Lagrangian which leads to a renormalizable treatment has been given by Coleman and Glashow~\cite{cg99}. The small perturbative noninvariant terms are both rotationally and translationally invariant in a preferred reference frame which one can assume to be the frame in which the cosmic background radiation is isotropic. These terms are also taken to be invariant under $SU(3)\otimes SU(2)\otimes U(1)$ gauge transformations in the standard model. Using the formalism of Ref.~\cite{cg99}, we denote the MAV of a particle of type $i$ by $c_{i}$, a quantity which is not necessarily equal to $c \equiv 1$, the low energy {\it in vacuo\/} velocity of light. We further define the difference $c_{i} - c_{j} \equiv \delta_{ij}$. These definitions can be generalized and can be used to discuss the physics implications of cosmic-ray and cosmic $\gamma$-ray observations~\cite{sg01}--\cite{st09}. \section{Electroweak Interactions} In general then, $c_e \ne c_\gamma$. The physical consequences of such a violation of LI depend on the sign of the difference between these two MAVs. Defining \begin{equation} c_{e} \equiv c_{\gamma}(1 + \delta) ~ , ~ ~~~0< |\delta| \ll 1\;, \label{delta} \end{equation} \noindent one can consider the two cases of positive and negative values of $\delta$ separately~\cite{cg99,sg01}. {\it Case I:} If $c_e<c_\gamma$ ($\delta < 0$), the decay of a photon into an electron-positron pair is kinematically allowed for photons with energies exceeding \begin{equation} E_{\rm max}= m_e\,\sqrt{2/|\delta|}\;. \end{equation} \noindent The decay would take place rapidly, so that photons with energies exceeding $E_{\rm max}$ could not be observed either in the laboratory or as cosmic rays. From the fact that photons have been observed with energies $E_{\gamma} \ge$ 50~TeV from the Crab nebula, one deduces for this case that $E_{\rm max}\ge 50\;$TeV, or that $|\delta| < 2\times 10^{-16}$. {\it Case II:} For this possibility, where $c_e>c_\gamma$ ($\delta > 0$), electrons become superluminal if their energies exceed $E_{\rm max}/2$. Electrons traveling faster than light will emit light at all frequencies by a process of `vacuum \v Cerenkov radiation.' This process occurs rapidly, so that superluminal electron energies quickly approach $E_{\rm max}/2$. However, because electrons have been seen in the cosmic radiation with energies up to $\sim\,$2~TeV, it follows that $E_{\rm max}/2 \ge 2$~TeV, {\it i.e.}, $E_{\rm max} \ge 4$~TeV, which leads to an upper limit on $\delta$ for this case of $3 \times 10^{-14}$. Note that this limit is two orders of magnitude weaker than the limit obtained for Case I. However, this limit can be considerably improved by considering constraints obtained from studying the \gray\ spectra of active galaxies~\cite{sg01}. \subsection{Constraints on LIV from AGN Spectra} A constraint on $\delta$ for $\delta > 0$ follows from a change in the threshold energy for the pair production process $\gamma + \gamma \rightarrow e^+ + e^-$.
This follows from the fact that the square of the four-momentum is changed to give the threshold condition \begin{equation} 2\epsilon E_{\gamma}(1-\cos\theta) - 2E_{\gamma}^2\delta \ge 4m_{e}^2, \label{threshold} \end{equation} \noindent where $\epsilon$ is the energy of the low energy photon and $\theta$ is the angle between the two photons. The second term on the left-hand-side comes from the fact that $c_{\gamma} = \partial E_{\gamma}/\partial p_{\gamma}$. It follows that the condition for a significant increase in the energy threshold for pair production is $E_{\gamma}\delta/2 \ge m_{e}^{2}/E_{\gamma}$, or equivalently, $\delta \ge 2m_{e}^{2}/E_{\gamma}^{2}$. The observed $\gamma$-ray spectrum of the active galaxies Mkn 501 and Mkn 421 while flaring~\cite{ah01} exhibited the high energy absorption expected from $\gamma$-ray annihilation by pair-production interactions with extragalactic infrared photons~\cite{ds02,ko03}. This led Stecker and Glashow~\cite{sg01} to point out that the Mkn 501 spectrum presents evidence for pair-production with no indication of LIV up to a photon energy of $\sim\,$20~TeV, thereby placing a quantitative constraint on LIV given by $\delta < 2m_{e}^{2}/E_{\gamma}^{2} \simeq 10^{-15}$. \section{Gamma-ray Constraints on Quantum Gravity and Extra Dimension Models} As previously mentioned, LIV has been proposed to be a consequence of quantum gravity physics at the Planck scale~\cite{ga95,al02}. In models involving large extra dimensions, the energy scale at which gravity becomes strong can occur at a quantum gravity scale, $M_{QG} \ll M_{Pl}$, even approaching a TeV~\cite{ar98}. In the most commonly considered case, the usual relativistic dispersion relations between energy and momentum of the photon and the electron are modified~\cite{al02,ac98} by a term of order $p^3/M_{QG}$. Generalizing the LIV parameter $\delta$ from equation (\ref{delta}) to an energy-dependent form, we find \begin{equation} \delta~ \equiv~ {\partial E_{e}\over{\partial p_{e}}}~ -~ {\partial E_{\gamma} \over{\partial p_{\gamma}}}~ \simeq~ {E_{\gamma}\over{M_{QG}}}~ -~{m_{e}^{2}\over{2E_{e}^{2}}}~ -~ {E_{e}\over{M_{QG}}} . \label{pcube} \end{equation} It follows that the threshold condition for pair production given by equation (\ref{threshold}) implies that $M_{QG} \ge E_{\gamma}^{3}/8m_{e}^{2}$. Since pair production occurs for energies of at least 20 TeV, we find a constraint on the quantum gravity scale~\cite{st03} of $M_{QG} \ge 0.3\,M_{Pl}$. This constraint contradicts the predictions of some proposed quantum gravity models involving large extra dimensions and smaller effective Planck masses. In a variant model of Ref.~\cite{el04}, the photon dispersion relation is changed, but not that of the electrons. In this case, we find the even stronger constraint $M_{QG} \ge 0.6\,M_{Pl}$. \section{Energy Dependent Photon Delays from GRBs and Tests of Lorentz Invariance Violation} One possible manifestation of Lorentz invariance violation, from Planck scale physics produced by quantum gravity effects, is a change in the energy-momentum dispersion relation of a free particle or a photon. If this arises from the linear Planck-suppressed term as in equation~(\ref{pcube}) above, it produces a photon velocity retardation that is of first order in $E_{\gamma}/M_{QG}$~\cite{ac98,el00}.
In a $\Lambda$CDM cosmology, where present observational data indicate that $\Omega_{\Lambda} \simeq 0.7$ and $\Omega_{m} \simeq 0.3$, the resulting difference in the propagation times of two photons having an energy difference $\Delta E_{\gamma}$ from a $\gamma$-ray burst (GRB) at a redshift $z$ will be \begin{equation} \Delta t_{LIV} = H_{0}^{-1} {{\Delta E_{\gamma}} \over M_{QG} }\int_0^z{{dz'(1+z')} \over {\sqrt{\Omega_{\Lambda} + \Omega_{m}(1+z')^3}}} \label{delay} \end{equation} \noindent for a photon dispersion of the form $c_{\gamma} = c(1 - E_{\gamma}/M_{QG})$, with $c$ being the usual low energy velocity of light~\cite{ja08}. In other words, $\delta$, as defined earlier, is given by $- E_{\gamma}/M_{QG}$. The {\it Fermi} Gamma-ray Space Telescope (see Figure \ref{glast}) covers an energy range from 8 keV to 40 MeV with its {\it Gamma-ray Burst Monitor (GBM)} and an energy range from 20 MeV to $> 300$ GeV with its {\it Large Area Telescope (LAT)}.\footnote{See the paper of Silvia Rain\`{o}, these proceedings.} It can observe and study both GRBs and flares from active galactic nuclei over a large range of both energy and distance. This was the case with GRB 090510, a short burst at a cosmological distance corresponding to a redshift of 0.9 that produced photons with energies extending from the X-ray range to a \gray\ of energy $\sim$ 31 GeV. This burst was therefore a perfect subject for the application of equation (\ref{delay}). {\it Fermi} observations of GRB 090510 have yielded the best constraint on any first order retardation of photon velocity with energy, $\Delta t \propto (E/M_{QG})$: such a retardation would require a value of $M_{QG} \gsim 1.2 M_{Pl}$~\cite{Fermi2009}.\footnote{See also the paper of Francesco de Palma, these proceedings.} In large extra dimension scenarios, one can have effective Planck masses smaller than $1.22 \times 10^{19}$ GeV, whereas in most QG scenarios, one expects the minimum size of space-time quanta to be $\lambda_{Pl}$. This implies a value for $M_{QG} \lsim M_{Pl}$ in all cases. In particular, we note the string theory inspired model of Ref.~\cite{el08}. This model envisions space-time as a gas of D-particles in a higher dimensional bulk where the observable universe is a D3 brane. The photon is represented as an open string that interacts with the D-particles, resulting in a retardation $\propto E_{\gamma}/M_{QG}$. The new {\it Fermi} data appear to rule out this model as well as other models that predict such a retardation. The dispersion effect will be smaller if the dispersion relation has a quadratic dependence on $E_{\gamma}/M_{QG}$, as suggested by effective field theory considerations~\cite{my03,ja04}; in that case, the limits on $M_{QG}$ given above do not apply. These considerations also lead to the prediction of vacuum birefringence (see the next section). \begin{figure}[h] \includegraphics[height=.25\textheight]{glast-new.eps} \caption{Schematic of the {\it Fermi} satellite, launched in June of 2008. The {\it LAT} is located at the top (yellow area) and the {\it GBM} array is located directly below.} \label{glast} \end{figure} \section{Looking for Birefringence Effects from Quantum Gravity} A possible model for quantizing space-time which has been actively investigated is {\it loop quantum gravity} (see the review given in Ref.~\cite{pe04} and references therein). A signature of this model is that the quantum nature of space-time can produce a vacuum birefringence effect. (See also the EFT treatment in Ref.~\cite{my03}.)
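Before developing the birefringence formalism, we pause to note that the delay integral in equation (\ref{delay}) is easy to evaluate numerically. The following sketch (our own illustration in Python, with representative parameter values and $H_0 = 71$ km s$^{-1}$ Mpc$^{-1}$) gives the delay expected for the $\sim 31$ GeV photon of GRB 090510 if $M_{QG} = M_{Pl}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0 = 71.0 * 1.0e5 / 3.086e24      # 71 km/s/Mpc in 1/s
OmL, Omm = 0.7, 0.3
M_Pl = 1.22e19                    # GeV
dE, z = 31.0, 0.9                 # GeV; redshift of GRB 090510

I, _ = quad(lambda zp: (1.0 + zp) /
            np.sqrt(OmL + Omm * (1.0 + zp)**3), 0.0, z)
dt = (dE / M_Pl) * I / H0
print(dt)   # ~1 s; the photon actually arrived within about a
            # second of the burst, which drives M_QG > 1.2 M_Pl
\end{verbatim}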
The birefringence arises because electromagnetic waves of opposite circular polarizations will propagate with different velocities, which leads to a rotation of the linear polarization direction through the angle \begin{equation} \theta(t)=\left[\omega_+(k)-\omega_-(k)\right]t/2=\xi k^2 t/2M_{Pl} \label{rotation} \end{equation} for a plane wave with wave-vector $k$~\cite{ga99}. Again, for simple Planck-suppressed LIV, we would expect that $\xi \simeq 1$. Some astrophysical sources emit highly polarized radiation. It can be seen from equation (\ref{rotation}) that the rotation angle is reduced by the large value of the Planck mass. However, the small rotations given by equation (\ref{rotation}) can add up over astronomical or cosmological distances to erase the polarization of the source emission. Therefore, if polarization is seen in a distant source, it places an upper bound on the parameter $\xi$. Equation (\ref{rotation}) indicates that the higher the wave number $|k|$, the stronger the rotation effect will be. Thus, the depolarizing effect of space-time induced birefringence will be most pronounced in the \gray\ energy range. It can also be seen that this effect grows linearly with propagation time. Replacing the time $t$ by the distance from the source to the detector, denoted by $d$, the difference in rotation angles for wave-vectors $k_1$ and $k_2$ is \begin{equation} \Delta\theta=\xi (k_2^2-k_1^2) d/2M_{Pl}. \label{diffrotation} \end{equation} The best secure bound on this effect, $|\xi|\lsim 10^{-9}$, was obtained using the observed 10\% polarized soft \gray\ emission from the region of the Crab Nebula~\cite{ma08}. Clearly, the best tests of birefringence would be to measure the polarization of $\gamma$-rays from GRBs. We note that linear polarization in X-ray flares from GRBs has been predicted~\cite{fa05}. Most \gray\ bursts have redshifts in the range 1-2, corresponding to distances of greater than a Gpc. Should polarization be detected from a burst at distance $d$, this would place a limit on $|\xi|$ of \begin{equation} |\xi| \lsim 5 \times 10^{-15}/d_{0.5} \end{equation} \noindent where $d_{0.5}$ is the distance to the burst in units of 0.5 Gpc~\cite{ja04}. Detectors that are dedicated to polarization measurements in the X-ray and \gray\ energy range and which can be flown in space to study the polarization from distant astronomical sources are now being designed~\cite{mi05,pr05}. \section{LIV and the Ultrahigh Energy Cosmic Ray Spectrum} \subsection{The ``GZK Effect''} Shortly after the discovery of the 3K cosmic background radiation (CBR), Greisen~\cite{gr66} and Zatsepin and Kuz'min~\cite{za66} predicted that pion-producing interactions of ultrahigh energy cosmic ray protons with the CBR should produce a spectral cutoff at $E \sim$ 50 EeV. The flux of ultrahigh energy cosmic rays (UHECR) is expected to be attenuated by such photomeson producing interactions. This effect is generally known as the ``GZK effect''. Owing to this effect, protons with energies above $\sim$100~EeV should be attenuated from distances beyond $\sim 100$ Mpc because they interact with the CBR photons via resonant photoproduction of pions~\cite{st68}. \subsection{Modification of the GZK Effect Owing to LIV} Let us consider the photomeson production process leading to the GZK effect.
Near threshold, where single pion production dominates, \begin{equation} p + \gamma \rightarrow p + \pi. \end{equation} Using the normal Lorentz invariant kinematics, the energy threshold for photomeson interactions of UHECR protons of initial laboratory energy $E$ with low energy photons of the CBR with laboratory energy $\omega$, is determined by the relativistic invariance of the square of the total four-momentum of the proton-photon system. This relation, together with the threshold inelasticity relation $E_{\pi} = m/(M + m) E$ for single pion production, yields the threshold conditions for head-on collisions in the laboratory frame \begin{equation} 4\omega E = m(2M + m) \end{equation} \noindent for the proton, and \begin{equation} 4\omega E_{\pi} = {{m^2(2M + m)} \over {M + m}} \label{pion} \end{equation} \noindent in terms of the pion energy, where $M$ is the rest mass of the proton and $m$ is the rest mass of the pion~\cite{st68}. If LI is broken so that $c_\pi > c_p$, the threshold energy for photomeson production is altered.\footnote{This requirement precludes the `quasi-vacuum \v{C}erenkov radiation' of pions {\it via} the rapid, strong-interaction pion emission process $p \rightarrow N + \pi$. This process would be allowed by LIV in the case where $\delta_{\pi p}$ is negative, producing a sharp cutoff in the UHECR proton spectrum. (For more details, see Refs.~\cite{cg99,st09,alt07}.)} Because of the small LIV perturbation term, the square of the four-momentum is shifted from its LI form so that the threshold condition in terms of the pion energy becomes \begin{equation} 4\omega E_{\pi} = {{m^2(2M + m)} \over {M + m}} + 2 \delta_{\pi p} E_{\pi}^2 \label{LIVpi} \end{equation} \noindent where $\delta_{\pi p} \equiv c_\pi - c_p$, again in units where the low energy velocity of light is unity. Equation (\ref{LIVpi}) is a quadratic equation with real roots only under the condition \begin{equation} \delta_{\pi p} \le {{2\omega^2(M + m)} \over {m^2(2M + m)}} \simeq \omega^2/m^2. \label{root} \end{equation} Defining $\omega_0 \equiv kT_{CBR} = 2.35 \times 10^{-4}$ eV with $T_{CBR} = 2.725\pm 0.02$ K, equation (\ref{root}) can be rewritten \begin{equation} \delta_{\pi p} \le 3.23 \times 10^{-24} (\omega/\omega_0)^2. \label{CG} \end{equation} \subsection{Kinematics} If LIV occurs and $\delta_{\pi p} > 0$, photomeson production can only take place for interactions of CBR photons with energies large enough to satisfy equation (\ref{CG}). This condition, together with equation (\ref{LIVpi}), implies that while photomeson interactions leading to GZK suppression can occur for ``lower energy'' UHE protons interacting with higher energy CBR photons on the Wien tail of the spectrum, other interactions involving higher energy protons and photons with smaller values of $\omega$ will be forbidden. Thus, the observed UHECR spectrum may exhibit the characteristics of GZK suppression near the normal GZK threshold, but the UHECR spectrum can ``recover'' at higher energies owing to the possibility that photomeson interactions at higher proton energies may be forbidden. We now consider a more detailed quantitative treatment of this possibility, {\it viz.}, GZK suppression coexisting with LIV. The kinematical relations governing photomeson interactions are changed in the presence of even a small violation of Lorentz invariance.
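Before examining the modified kinematics in detail, it is worth checking the numerical coefficient in equation (\ref{CG}); the following one-liner (our own check; it uses the neutral-pion mass, which reproduces the quoted value) evaluates the right-hand side of equation (\ref{root}) at $\omega = \omega_0$:
\begin{verbatim}
w0 = 2.35e-4     # omega_0 = k T_CBR in eV
M  = 938.272e6   # proton rest energy in eV
m  = 134.977e6   # neutral pion rest energy in eV
coef = 2.0 * w0**2 * (M + m) / (m**2 * (2.0*M + m))
print(coef)      # ~3.23e-24, the coefficient quoted above
\end{verbatim}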
The modified kinematical relations containing LIV have a strong effect on the amount of energy transferred from an incoming proton to the pion produced in the subsequent interaction, {\it i.e.}, the inelasticity~\cite{st09,al03,ss08}. The primary effect of LIV on photopion production is a reduction of the phase space allowed for the interaction. This results from the limits on the allowed range of interaction angles integrated over in order to obtain the total inelasticity. For real-root solutions for interactions involving higher energy protons, the range of kinematically allowed angles becomes severely restricted. The modified inelasticity that results is the key quantity in determining the effects of LIV on photopion production. The inelasticity rapidly drops for higher incident proton energies. Figure \ref{inelasticity} shows the calculated proton inelasticity modified by LIV for a value of $\delta_{\pi p} = 3 \times 10^{-23}$ as a function of both CBR photon energy and proton energy~\cite{ss08}. Other choices for $\delta_{\pi p}$ yield similar plots. The principal result of changing the value of $\delta_{\pi p}$ is to change the energy at which LIV effects become significant. For a choice of $\delta_{\pi p} = 3 \times 10^{-23}$, there is no observable effect from LIV for $E_{p}$ less than $\sim200$ EeV. Above this energy, the inelasticity drops precipitously as the LIV term in equation (\ref{LIVpi}) becomes comparable to the pion rest energy. \begin{figure}[h] \includegraphics[height=.3\textheight]{inelasticity-short.eps} \caption{The calculated proton inelasticity modified by LIV for $\delta_{\pi p} = 3 \times 10^{-23}$ as a function of CBR photon energy and proton energy \protect ~\cite{ss08}.} \label{inelasticity} \end{figure} With this modified inelasticity, the proton energy loss rate by photomeson production is given by \begin{equation} \frac{1}{E}\frac{dE}{dt} = - \frac{\omega_{0}c}{2\pi^{2} \gamma^{2}\hbar^{3}c^{3}} \int\limits_\eta^\infty d\epsilon ~ \epsilon ~ \sigma(\epsilon) K(\epsilon) \ln[1-e^{-\epsilon/2\gamma\omega_{0}}] \end{equation} \noindent where we now use $\epsilon$ to designate the energy of the photon in the cms, $\eta$ is the photon threshold energy for the interaction in the cms, $K(\epsilon)$ denotes the inelasticity, and $\sigma(\epsilon)$ is the total $\gamma$-p cross section with contributions from direct pion production, multipion production, and the $\Delta$ resonance. The corresponding proton attenuation length is given by $\ell = cE/r(E)$, where the energy loss rate $r(E) \equiv (dE/dt)$. This attenuation length is plotted in Figure \ref{attn} for various values of $\delta_{\pi p}$ along with the unmodified attenuation length from pair production interactions, $p + \gamma_{CBR} \rightarrow p + e^+ + e^-$. \begin{figure}[h] \includegraphics[height=.3\textheight]{attenuation.eps} \caption{The calculated proton attenuation lengths as a function of proton energy modified by LIV for various values of $\delta_{\pi p}$ (solid lines), shown with the attenuation length for pair production unmodified by LIV (dashed lines). From top to bottom, the curves are for $\delta_{\pi p} = 1 \times 10^{-22}, 3 \times 10^{-23} , 2 \times 10^{-23}, 1 \times 10^{-23}, 3 \times 10^{-24}, 0$ (no Lorentz violation) \protect ~\cite{ss08}.} \label{attn} \end{figure} \section{UHECR Spectra with LIV and Comparison with Present Observations} The effect of a very small amount of LIV on the UHECR spectrum was analytically calculated in Ref.~\cite{ss08} in order to determine the resulting spectral modifications.
It can be demonstrated that there is little difference between the results of using an analytic calculation {\it vs.} a Monte Carlo calculation ({\it e.g.}, see Ref.~\cite{ta09}). In order to allow for the probable redshift evolution of UHECR production in astronomical sources, they took account of the following considerations: \\ \noindent ({\it i}) The CBR photon number density increases as $(1+z)^3$ and the CBR photon energies increase linearly with $(1+z)$. The corresponding energy loss for protons at any redshift $z$ is thus given by \begin{eqnarray} r_{\gamma p}(E,z) = (1+z)^3 r[(1+z)E]. \label{eq5} \end{eqnarray} \noindent ({\it ii}) They assumed that the average UHECR volume emissivity is of the energy and redshift dependent form given by $q(E_i,z) = K(z)E_i^{-\Gamma}$ where $E_i$ is the initial energy of the proton at the source and $\Gamma = 2.55$. For the source evolution, they assumed $K(z) \propto (1 + z)^{3.6}$ with $z \le 2.5$, so that $K(z)$ is roughly proportional to the empirically determined $z$-dependence of the star formation rate. $K(z=0)$ and $\Gamma$ are normalized to fit the data below the GZK energy. Using these assumptions, one can calculate the effect of LIV on the UHECR spectrum. The results are actually insensitive to the assumed redshift dependence, because evolution does not affect the shape of the UHECR spectrum near the GZK cutoff energy~\cite{be88,st05}. At higher energies, where the attenuation length may again become large owing to an LIV effect, the effect of evolution turns out to be less than 10\%. The curves calculated in Ref.~\cite{st09}, assuming various values of $\delta_{\pi p}$, are shown in Figure \ref{Auger} along with the latest {\it Auger} data from Ref.~\cite{sch09}. They show that {\it even a very small amount of LIV that is consistent with both a GZK effect and with the present UHECR data can lead to a ``recovery'' of the UHECR spectrum at higher energies.} \begin{figure} \includegraphics[height=.3\textheight]{auger.eps} \caption{Comparison of the latest Auger data with calculated spectra for various values of $\delta_{\pi p}$, taking $\delta_p = 0$ (see text). From top to bottom, the curves give the predicted spectra for $\delta_{\pi p} = 1 \times 10^{-22}, 6 \times 10^{-23}, 4.5 \times 10^{-23}, 3 \times 10^{-23} , 2 \times 10^{-23}, 1 \times 10^{-23}, 3 \times 10^{-24}, 0$ (no Lorentz violation) \protect ~\cite{st09}.} \label{Auger} \end{figure} \subsection{Allowed Range for the LIV Parameter $\delta_{\pi p}$} Stecker and Scully~\cite{st09} have updated the comparison of the theoretically predicted UHECR spectra with various amounts of LIV to the latest {\it Auger} data from the proceedings of the 2009 International Cosmic Ray Conference~\cite{sch09},~\cite{data}. This update is shown in Figure \ref{Auger}. The amount of presently observed GZK suppression in the UHECR data is consistent with the possible existence of a small amount of LIV. They determined the value of $\delta_{\pi p}$ that results in the smallest $\chi^2$ for the modeled UHECR spectral fit to the observational data from {\it Auger}~\cite{sch09} above the GZK energy. The best-fit LIV parameter was found to be in the range given by $\delta_{\pi p}$ = $3.0^{+1.5}_{-3.0} \times 10^{-23}$, corresponding to an upper limit on $\delta_{\pi p}$ of $4.5 \times 10^{-23}$.
\footnote{The {\it HiRes} data~\cite{ab08} do not reach a high enough energy to further restrict LIV.} \footnote{We note that the overall fit of the data to the theoretically expected spectrum is somewhat imperfect, even below the GZK energy and even for the case of no LIV. The {\it Auger} spectrum appears to steepen even below the GZK energy. As a conjecture, one can assume that the derived energy may be too low by about 25\%, within the combined systematic-plus-statistical uncertainty given for the energy determination. This gives better agreement between the theoretical curves and the shifted data~\cite{st09}. The constraint on LIV would be only slightly reduced if this shift is assumed.} \subsection{Implications for Quantum Gravity Models} An effective field theory approximation for possible LIV effects induced by Planck-scale suppressed quantum gravity for $E \ll M_{Pl}$ was considered in Ref.~\cite{ma09}. These authors explored the case where a perturbation to the energy-momentum dispersion relation for free particles would be produced by a CPT-even dimension six operator suppressed by a term proportional to $M_{Pl}^{-2}$. The resulting dispersion relation for a particle of type $a$ is \begin{equation} E_a^2 = p_a^2 + m_a^2 + \eta_a \left( {{p^4}\over{M_{Pl}^2}} \right). \label{QG} \end{equation} In order to explore the implications of our constraints for quantum gravity, one can take the perturbative terms in the dispersion relations for both protons and pions to be given by the dimension six dispersion terms in equation (\ref{QG}) above. Making this identification, the LIV constraint of $\delta_{\pi p} < 4.5 \times 10^{-23}$ in the fiducial energy range around $E_f = 100$ EeV indirectly implies a powerful limit on the representation of quantum gravity effects in an effective field theory formalism with Planck suppressed dimension six operators. Equating the perturbative terms in both the proton and pion dispersion relations, one obtains the relation~\cite{st09} \begin{equation} 2\delta_{\pi p} \simeq (\eta_{\pi} - 25 \eta_{p}) \left({{0.2E_f}\over{M_{Pl}}}\right)^2 , \label{dim6} \end{equation} \noindent where the pion fiducial energy is taken to be $\sim 0.2 E_f$, as at the $\Delta$ resonance that dominates photopion production and the GZK effect~\cite{st68}. Equation (\ref{dim6}), together with the constraint $\delta_{\pi p} < 4.5 \times 10^{-23}$ (numerically, $(0.2E_f/M_{Pl})^2 \simeq 2.7 \times 10^{-18}$ for $E_f = 100$ EeV and $M_{Pl} \simeq 1.22 \times 10^{28}$ eV, so that $\eta_{\pi} - 25\eta_{p} \lesssim 3 \times 10^{-5}$), indicates that any LIV from dimension six operators is suppressed by a factor of at least ${\cal{O}}(10^{-6} M_{Pl}^{-2})$, except in the unlikely case that $\eta_{\pi}- 25 \eta_{p} \simeq 0$. These results are in agreement with those obtained independently by Maccione et al. from Monte Carlo runs~\cite{ma09}. It can thus be concluded that an effective field theory representation of quantum gravity with dimension six operators that suppresses LIV by only a factor of $M_{Pl}^2$, {\it i.e.}, $\eta_p, \eta_{\pi} \sim 1$, is effectively ruled out by the UHECR observations. \section{Beyond Constraints: Seeking LIV} As we have seen (see Figure \ref{Auger}), even a very small amount of LIV that is consistent with both a GZK effect and with the present UHECR data can lead to a ``recovery'' of the primary UHECR spectrum at higher energies. This would be the clearest and the most sensitive evidence of an LIV signature.
The ``recovery'' effect has also been deduced in Refs.~\cite{ma09} and~\cite{bi09}\footnote{In Ref.~\cite{bi09}, a recovery effect is also claimed for high proton energies in the case when $\delta_{\pi p} < 0$. However, we have noted that the `quasi-vacuum \v{C}erenkov radiation' of pions by protons in this case will cut off the proton spectrum and no ``recovery'' effect will occur.}. In order to find it (if it exists) three conditions must be met: ({\it i}) sensitive enough detectors need to be built, ({\it ii}) a primary UHECR spectrum that extends to high enough energies ($\sim$ 1000 EeV) must exist, and ({\it iii}) one must be able to distinguish the LIV signature from other possible effects. \subsection{Obtaining UHECR Data at Higher Energies} We now turn to examining the various techniques that can be used in the future in order to look for a signal of LIV using UHECR observations. As can be seen from the preceding discussion, observations of higher energy UHECRs with much better statistics than presently obtained are needed in order to search for the effects of minuscule Lorentz invariance violation on the UHECR spectrum. \subsubsection{Auger North} Such an increased number of events may be obtained using much larger ground-based detector arrays. The {\it Auger} collaboration has proposed to build an ``{\it Auger North}'' array that would be seven times larger than the present southern hemisphere Auger array ({\tt http://www.augernorth.org}). \subsubsection{Space Based Detectors} Further into the future, space-based telescopes will be designed to look downward at large areas of the Earth's atmosphere, using it as a sensitive detector system for giant air showers caused by trans-GZK cosmic rays. We look forward to these developments, which may have important implications for fundamental high energy physics. Two potential space-based missions have been proposed to extend our knowledge of UHECRs to higher energies. One is {\it JEM-EUSO} (the Extreme Universe Space Observatory)~\cite{EUSO}, a one-satellite telescope mission proposed to be placed on the Japanese Experiment Module (JEM) on the International Space Station. The other is {\it OWL} (Orbiting Wide-angle Light Collectors)~\cite{OWL}, a two-satellite mission for stereo viewing, proposed for a future free-flyer mission. Such orbiting space-based telescopes with UV sensitive cameras will have wide fields-of-view (FOVs) in order to observe and use large volumes of the Earth's atmosphere as a detecting medium. They will thus trace the atmospheric fluorescence trails of large numbers of giant air showers produced by ultrahigh energy cosmic rays and neutrinos. Their large FOVs will allow the detection of the rare giant air showers with energies higher than those presently observed by ground-based detectors such as {\it Auger}. Such missions will thus potentially open up a new window on physics at the highest possible observed energies. \section{Conclusions} The {\it Fermi} timing results for GRB090510 rule out string-inspired D-brane model predictions as well as other quantum gravity predictions of a retardation of photon velocity that is simply proportional to $E/M_{QG}$, because they would require $M_{QG} > M_{Pl}$. More indirect results from $\gamma$-ray birefringence limits, the non-decay of 50 TeV $\gamma$-rays from the Crab Nebula, and the TeV spectra of nearby AGNs also place severe limits on violations of special relativity (LIV).
Limits on Lorentz invariance violation from observations of ultrahigh energy cosmic rays provide severe constraints for other quantum gravity models, appearing to rule out retardation that is simply proportional to $(E/M_{QG})^2$. Various effective field theory frameworks lead to such energy dependences. New theoretical models of Planck scale physics and quantum gravity need to meet all of the present observational constraints. One scenario that may be considered is that gravity, {\it i.e.}, $G$, becomes weaker at high energies. We know that the strong, weak and electromagnetic interactions all have energy dependences, given by the running of the coupling constants. If $G$ decreases, then the effective $\lambda_{Pl} = \sqrt{G\hbar /c^3}$ would decrease and the effective $M_{Pl} = \hbar / (\lambda_{Pl}c)$ would increase. In that case, the space-time quantum scale would be less than the usual definition of $\lambda_{Pl}$. Such speculation is presently {\it cogitare ex arcis}, but might be plausible if a transition to a phase where the various forces are unified occurs at very high energies~\cite{st80}. At the time of the present writing, high energy astrophysics observations have led to strong constraints on LIV. Currently, we have no positive evidence for LIV. This fact, in itself, should help guide theoretical research on quantum gravity, already ruling out some models. Will this lead to a new null result comparable to Michelson-Morley? Will a totally new concept be needed to describe physics at the Planck scale? If all of the known forces are unified at the Planck scale, this would not be surprising. One thing is clear: a consideration of all empirical data will be necessary in order to finally arrive at a true theory of physics at the Planck scale.
\subsection*{1. Introduction} A detailed understanding of the range of applicability of the $\varphi^4$ field theory in $d$ dimensions is of fundamental interest to statistical and elementary particle physics \cite{ZJ}. The perturbative treatment of the critical behavior of the $\varphi^4$ field theory in $d \leq 4$ dimensions is known to be nontrivial because of the problem of infrared divergences. This problem is solved by the renormalization-group theory \cite{ZJ,BGZJ}. Above four dimensions where the critical behavior is mean-field like, no infrared problems of perturbation theory arise and no necessity exists for invoking the renormalization group. Thus the $\varphi^4$ theory above four dimensions appears to be free of essential problems. This is true, however, only for infinite systems. For the $\varphi^4$ field theory of confined systems above four dimensions \cite{ZJ,B1} there are several aspects of general interest $-$ such as the nature of the fundamental reference lengths, the range of validity of universal finite-size scaling, the relevance of inhomogeneous fluctuations, and the significance of lattice effects $-$ that have remained unresolved until recently. These issues have turned out [4-6] to be closely related to the longstanding problem regarding the verification of earlier phenomenological \cite{BNP,B2} and analytical \cite{B1} predictions for $d > 4$ and regarding the various attempts to test these predictions by means of Monte Carlo (MC) simulations for the five-dimensional Ising model [7-12]. Clarifying these issues is of substantial interest for a better understanding of finite-size effects and of the concept of finite-size scaling [13-18], not only for $d > 4$ but also for the limit $d \rightarrow 4$. Recently [4-6] we have shown, on the basis of exact results in the limit $n \rightarrow \infty$ of the $O(n)$ symmetric $\varphi^4$ theory, that finite-size effects for $d > 4$, for cubic geometry and periodic boundary conditions, are more complicated and less universal than predicted previously and that therefore the previous analyses of MC data were not conclusive. In particular we have found that lattice effects and inhomogeneous fluctuations of the order parameter play an unexpectedly important role and that two reference lengths must be employed in a finite-size scaling description. So far, however, no direct justification has been given for our conclusions to be valid also for the more relevant case of lattice systems with a {\it{finite}} number $n$ of components of the order parameter. It is the purpose of the present paper to provide this justification. We shall present a perturbative treatment of a $\varphi^4$ {\it{lattice}} model in one-loop order that leads to quantitative predictions of asymptotic finite-size effects for $d > 4$ and $n = 1$. We shall show that the previous arguments \cite{CD1} demonstrating the necessity of using two scaling variables (rather than a single scaling variable) remain valid also for finite $n$ for both the field-theoretic and the lattice model. We also confirm that the finite-size effects of the $\varphi^4$ lattice model differ fundamentally from those of the $\varphi^4$ field theory for general $n$. This implies that the Landau-Ginzburg-Wilson continuum Hamiltonian for an $n$ component order parameter does not correctly describe the finite-size effects of spin models on lattices with periodic boundary conditions above the upper critical dimension.
More specifically, we study the case of cubic geometry (volume $L^d$) with periodic boundary conditions and calculate the asymptotic finite-size scaling form of the order-parameter distribution function $P(\Phi)$ where $\Phi$ is the spatial average of the fluctuating local order parameter $\varphi$. From $P(\Phi)$ we derive the asymptotic finite-size scaling functions of the susceptibility, of the order parameter and of the Binder cumulant \cite{14}. {\it{Two}} scaling variables \begin{equation} x = t(L/\xi_0)^2 \quad , \quad t = (T-T_c)/T_c \end{equation} and \begin{equation} y = (L/l_0)^{4-d} \end{equation} are needed where $\xi_0$ and $l_0$ are reference lengths related to the bulk correlation length $\xi$, similar to the case $n \rightarrow \infty$ [4-6]. $\xi_0$ is the amplitude of $\xi$ for $T > T_c$ at vanishing external field $h$ whereas $l_0$ is (proportional to) the amplitude of $\xi$ at $T = T_c$ for small $h$ \cite{CD2}. The second length $l_0$ is connected with the four-point coupling $u_0$ according to $l_0 \sim u_0^{1/(d-4)}$. As an alternative choice of scaling variables we also employ $w$ and $y$ where \begin{equation} w \; =\; x y^{-1/2} \; = \; t (L/\tilde{\ell})^{d/2} \end{equation} with the reference length \begin{equation} \tilde{\ell} \; = \; l_0 (\xi_0/l_0)^{4/d} \quad . \end{equation} In addition to the lengths $\xi_0$, $l_0$ and $L$ there is the microscopic length $\tilde{a}$ or $\Lambda^{-1}$, i.e., the lattice spacing of the lattice model or the inverse cutoff of the field-theoretic model. For short-range interactions, $\xi_0$ and $l_0$ are expected to be of ${\it O}(\tilde{a})$ or ${\it O}(\Lambda^{-1})$. Our scaling functions presented in Section 4 are valid in the asymptotic range $L \gg \tilde{a}$, $\xi \gg \tilde{a}$ or $L\Lambda \gg 1$, $\xi \Lambda \gg 1$. The role of the length $l_0$ is twofold. Since the dangerous irrelevant character of $u_0$ \cite{F2,PF} exists already at the mean-field level, the length $l_0$ appears via $\tilde{\ell}$ in the variable $w \; \sim \; u_0^{-1/2} t L^{d/2}$ already at the level of the lowest-mode approximation which takes only homogeneous fluctuations into account \cite{B1}. The second important role of $l_0$ originates from $u_0$ being the coupling of the {\it{inhomogeneous}} higher modes. Here $u_0$ does not have a dangerous irrelevant character. In fact, these higher modes are {\it{relevant}} for $d > 4$ as has been demonstrated for $n \rightarrow \infty$ \cite{CD1} and will be shown in the present paper to be valid for general $n$, contrary to opposite statements in the literature [1,3]. The length $l_0$ sets the length scale of the finite-size effects arising from these modes. Both scaling variables $x$ and $y$ or $w$ and $y$ must be employed in general, i.e., at any finite value of $\xi/L < \infty$ in the entire asymptotic $L^{-1} - \xi^{-1}$ plane (Fig.1) to provide a complete description of asymptotic finite-size effects of the $\varphi^4$ theory. Our description is consistent with the general scaling structure in terms of $tL^2$ and $u_0 L^{4-d}$ proposed by Privman and Fisher \cite{PF} but inconsistent with the reduced structure proposed by Binder et al. [7] and with the lowest-mode approximation of Br\'{e}zin and Zinn-Justin [3] in terms of a single scaling variable $tL^{d/2}$ equivalent to $w$. We find that it is only the region between the curved dotted lines in Fig.1 where the single-variable scaling forms of Refs. [3] and [7] are justifiable for the lattice model, but not for the field-theoretic model.
The region between the curved lines corresponds to the special case $\xi/L \rightarrow \infty$ in the limit $L \rightarrow \infty$ and $|t| \rightarrow 0$ at finite $w$ where the large -$L$ behavior becomes lowest-mode like for the lattice model. For $t = 0$ ($w=0$) this was found previously [4-6] for the case of the susceptibility and of the Binder cumulant, as conjectured in Ref. \cite{15}. As a consequence we shall show that characteristic temperatures, in the sense of ''pseudocritical'' temperatures \cite{F} such as $T_{max}(L)$ of the maxima of the susceptibility or the ''effective critical temperature'' \cite{RNB} $T_c(L)$ where the magnetization has its maximum slope, scale asymptotically as $T_c - T_{max}(L) \sim L^{-d/2}$ or $T_c - T_c(L) \sim L^{-d/2}$ for the $\varphi^4$ lattice model, in agreement with previous Monte Carlo (MC) data \cite{RNB}. Correspondingly we predict $\chi_{max}\sim L^{d/2}$ asymptotically for the lattice model. On a quantitative level, our theory predicts that the large -$L$ behavior close to $T_c$ is strongly affected by nonnegligible finite-size terms $\sim L^{(4-d)/2}$ caused by the higher (inhomogeneous) modes, as demonstrated recently for the Binder cumulant at $T_c$ \cite{CD3}. In the analysis of Ref. [9] the observed ''slow approach to the scaling limit'' was considered to be the most significant discrepancy between the lowest-mode prediction \cite{B1} and the MC data. Our theory identifies the terms $\sim L^{(4-d)/2}$ as the possible origin of this discrepancy. We also show that, for the same reason, the successful method of determining bulk $T_c$ via the intersection point of the Binder cumulant \cite{14} is not accurately applicable to finite spin models of small size in $d = 5$ dimensions, as demonstrated in Ref. [6]. New MC simulations over a larger range of $L$ would be desirable for testing the predictions of our theory regarding the magnitude of the $L^{(4-d)/2}$ and $L^{4-d}$ terms. In Section 2 we derive some of the bulk properties of the lattice model for $n = 1$ above four dimensions in one-loop order. In particular we identify the amplitudes of the correlation length at $h = 0$ for $T > T_c$ as well as at $T = T_c$ for $h \neq 0$. In Section 3 we derive the effective Hamiltonian and the order-parameter distribution function in one-loop order. Applications to the asymptotic (large $L \gg \tilde{a}$, small $|t| \ll 1$) finite-size scaling functions and predictions of the lattice model for $d = 5$ are presented and discussed in Section 4. Results for the field-theoretic model are briefly presented in Section 5. We summarize the results of our paper in Section 6. \subsection*{2. Lattice model: Bulk properties for ${\bf{d > 4}}$} We consider a $\varphi^4$ lattice Hamiltonian $H$ for the one-component variables $\varphi_i$ with \mbox{$ - \infty \leq \varphi_{i} \leq \infty$} on the lattice points ${\bf{x}}_i$ of a simple-cubic lattice in a cube with volume $V = L^d$ and with periodic boundary conditions. We assume \begin{equation}\label{Hhat} H \; = \; \tilde{a}^d \; \left \{ \sum\limits_i [ \frac{r_0}{2} \varphi^2_i \; + \; u_0 (\varphi_i^2)^2 - h \varphi_i]\; + \; \sum\limits_{i,j} \frac{1}{2 \tilde{a}^2} J_{ij} (\varphi_i - \varphi_j)^2 \right \} \end{equation} where $\tilde{a}$ is the lattice spacing. The couplings $J_{ij}$ are dimensionless quantities.
The variables $\varphi_j$ have the Fourier representation \begin{equation} \varphi_j \; = \; \frac{1}{L^{d}} \; \sum\limits_{\bf{k}} \; e^{i {\bf{k}} \cdot {\bf{x}}_j} \hat{\varphi}_{\bf{k}} \quad . \end{equation} In terms of the Fourier components \begin{equation}\label{var} \hat{\varphi}_{\bf{k}} \; = \; \tilde{a}^d \sum\limits_j \; e^{-i{\bf{k \cdot x}}_j} \varphi_j \end{equation} the Hamiltonian $H$ reads \begin{eqnarray}\label{Hhat=} H & = & L^{-d} \; \sum\limits_{\bf{k}} \frac{1}{2}\; [ r_0 \; + \; \hat{J}_{\bf{k}} ] \hat{\varphi}_{\bf{k}} \; \hat{\varphi}_{-{\bf{k}}} \; - \; h \hat{\varphi}_{{\bf{0}}} \nonumber \\[0.4cm] && + \; u_0 L^{-3d} \sum\limits_{{\bf{k k' k''}}} \; (\hat{\varphi}_{{\bf{k}}} \; \hat{\varphi}_{\bf{{k'}}}) (\hat{\varphi}_{{\bf{k''}}} \; \hat{\varphi}_{{\bf{-k-k'-k''}}}) \end{eqnarray} where \begin{equation}\label{D} \hat{J}_{\bf{k}} \; = \; \frac{2}{\tilde{a}^2} \; [J(0) \; - \; J({\bf{k}})] \end{equation} with \begin{equation}\label{J} J({\bf{k}}) \; = \;(\tilde{a}/L)^{d} \; \sum\limits_{i,j} \; J_{ij} e^{-i{\bf{k}}\cdot ({\bf{x}}_i-{\bf{x}}_j)} \; . \end{equation} The summation $\sum_{\bf{k}}$ runs over discrete ${\bf{k}}$ vectors with components $k_j \; = 2 \pi m_j / L, \; m_j = 0 , \pm 1, \pm 2, \cdots, \; \; j = 1, 2, \cdots, d $ with a finite cutoff {\mbox {$-\Lambda \equiv - \pi/\tilde{a} \leq k_j $}} $ < \pi/\tilde{a} \equiv \Lambda$. We assume a finite-range pair interaction $J_{ij}$ such that its Fourier transform has the small ${\bf{k}}$ behavior \begin{equation} \hat{J}_{\bf{k}} \; = \; J_0 {\bf{k}}^2 \quad + \quad O (k_i^2 k_j^2) \end{equation} with \begin{equation}\label{J0} J_0 \; = \; \frac{1}{d} (\tilde{a}/L)^d \; \sum\limits_{i,j} \left ( J_{ij}/\tilde{a}^2 \right ) \left ( {\bf{x}}_i \; - \; {\bf{x}}_j \right )^2 \quad . \end{equation} The complete information on thermodynamic properties is contained in the Gibbs free energy per unit volume (in units of $k_B T$) \begin{equation}\label{V} f \; = \; - \frac{1} {L^d} \ln \int {\cal{D}} \varphi \exp (- H) \end{equation} where the symbol $\int {\cal{D}}\varphi$ is the usual abbreviation for the multiple integral over the real and imaginary parts of (the finite number of) the Fourier components $\hat{\varphi}_{\bf{k}}$. Recently we have found \cite{CD2} in the large-$n$ limit that the two reference lengths of the finite-size scaling functions for $d > 4$ are determined by the two {\it{bulk}} correlation-length amplitudes $\xi_0$ at $h = 0$ for $T > T_c$ and $l_0$ at $T = T_c$ for small $h$. We shall see that this property remains valid, as far as $\xi_0$ is concerned, also for $n = 1$ in one-loop order. With regard to $l_0$, an additional $n$-dependent prefactor arises that is $1$ in the large-$n$ limit and $3^{-1/2} 2^{-1/3}$ for $n = 1$ in one-loop order (see Eq. (28) below). For this purpose we need to identify the amplitudes of the bulk correlation length for $d > 4$. We also calculate the bulk amplitudes of the order parameter $M_b$ and of the susceptibilities $\chi^+_b$ and $\chi^-_b$ above and below $T_c$ as reference quantities for the corresponding finite-size effects. All of our calculations are carried out at finite cutoff $\Lambda$ (finite lattice spacing $\tilde{a}$). First we derive the asymptotic form of the bulk susceptibility $\chi_b$ at $h = 0$ above and below $T_c$ as well as at $T = T_c$ for small $h$. The bulk Gibbs free energy density is denoted by $f_b = \lim\limits_{L \rightarrow \infty} f$.
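Before turning to the bulk quantities, it is useful to check the small-${\bf{k}}$ behavior above for a concrete coupling. The following short symbolic sketch (Python with SymPy; the nearest-neighbor choice $J_{ij} = J$ is our illustrative example, not a restriction made in the text) verifies $\hat{J}_{\bf{k}} = J_0 {\bf{k}}^2 + O(k^4)$ with the value $J_0 = 2J$ that follows from Eqs. (\ref{D})-(\ref{J0}) in this case.
\begin{verbatim}
import sympy as sp

# Nearest-neighbor couplings J_ij = J (an illustrative choice of ours).
# Per site, the Fourier transform of the couplings is J(k) = 2J sum_j cos(k_j a).
a, J = sp.symbols('a J', positive=True)
d = 3                                    # the expansion is dimension by dimension
ks = sp.symbols('k1:%d' % (d + 1), real=True)

Jk = 2 * J * sum(sp.cos(ki * a) for ki in ks)
Jhat = 2 / a**2 * (Jk.subs({ki: 0 for ki in ks}) - Jk)   # J_hat(k)

# Second-moment sum: 2d neighbors at distance a give J0 = (1/d) * 2dJ = 2J
J0 = sp.Rational(1, d) * 2 * d * J
print("J0 =", J0)

# Small-k expansion (J_hat is a sum over directions, so expand one at a time)
k1 = ks[0]
term = Jhat.subs({kj: 0 for kj in ks if kj != k1})
print(sp.series(term, k1, 0, 6))
# -> 2*J*k1**2 - J*a**2*k1**4/6 + O(k1**6), i.e. J_hat = J0*k^2 + O(k^4)
\end{verbatim}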
In terms of the bulk order parameter \begin{equation} M_b \; = \; - \lim\limits_{L \rightarrow \infty} \; \partial f/\partial h \quad \end{equation} the bulk Helmholtz free energy density $\Gamma_b \; = \; f_b + M_b h$ reads in one-loop order \begin{equation} \Gamma_b (r_0, M_b) \; = \; \frac{1}{2} r_0 M^2_b + u_0 M^4_b \; + \; \frac{1}{2} \int\limits_{\bf{k}} \ln (r_0 + 12 u_0 M^2_b + \hat{J}_{\bf{k}}) \end{equation} where $\int_{\bf{k}}$ stands for $(2\pi)^{-d} \int d^dk$ with $|k_i| \leq \Lambda$. Above $T_c$ for $h = 0$, the inverse bulk susceptibility $(\chi^+_b)^{-1}$ is \begin{equation} (\chi^+_b)^{-1} \; = \; (\partial^2 \Gamma_b/\partial M^2_b)_{M_b = 0} \; = \; r_0 \; + \; 12 u_0 \int\limits_{\bf{k}} (r_0 + \hat{J}_{\bf{k}})^{-1} \; + \; O(u^2_0) \end{equation} which determines the critical value $r_{0c}$ of $r_0$ as \begin{equation} r_{0c} \; = \; - \; 12u_0 \; \int\limits_{\bf{k}} \; \hat{J}^{-1}_{\bf{k}} \; + \; O(u^2_0) \quad . \end{equation} Thus we rewrite $(\chi^+_b)^{-1}$ above $T_c$ in terms of $r_0 - r_{0c}$ as \begin{equation} (\chi^+_b)^{-1} \; = \; (r_0 - r_{0c}) \left[ 1 - 12u_0 \int\limits_{\bf{k}} \hat{J}_{\bf{k}}^{-2} \right] \; + \; O \; (u^2_0) \end{equation} where \begin{equation} r_0 - r_{0c} \; = \; a_0 t, \quad t \; = \; (T - T_c)/T_c \quad . \end{equation} Note that the integral in Eq. (18) exists only for $d > 4$ and only for finite cutoff. The spontaneous value $M_s$ of the bulk order parameter for $h \rightarrow 0$ below $T_c$ is determined by $\partial \Gamma_b / \partial M_b = 0$. This yields for $d > 4$ \begin{equation} M^2_s \, = \; (4u_0)^{-1} (r_{0c} - r_0) \left[ 1 + 24 u_0 \int\limits_{\bf{k}} \hat{J}^{-2}_{\bf{k}} \right] + O(u^2_0) \quad . \end{equation} The inverse susceptibility $(\chi^-_b)^{-1}$ for $h \rightarrow 0$ below $T_c$ is in one-loop order \begin{equation} (\chi^{-}_b)^{-1} \; = \; \left ( \frac{\partial^2 \Gamma}{\partial M^2_b} \right )_{M_b = M_s} \; = \; 2 \left(r_{0c} - r_{0} \right)\left[1 - 12 u_0 \int\limits_{\bf{k}} \hat{J}^{-2}_{\bf{k}} \right]+ O(u^2_0)\quad . \end{equation} From the equation of state at $T_c$ \begin{equation} \frac {\partial \Gamma_b}{\partial M_b} \; = \; h \; = \; 4 u_0 M^3_b \; \left[ 1 - 36 u_0 \; \int\limits_{\bf{k}} \; \hat{J}^{-2}_{\bf{k}} \right] \end{equation} we obtain the $h$ dependence of the inverse bulk susceptibility $\chi_c^{-1}$ at $T_c$ as \begin{equation} \chi_c^{-1} \; = \; \frac {\partial^2 \Gamma_b}{\partial M_b^2} \; = \; 3 h^{2/3} \left\{ 4 u_0 \left[ 1 - 36 u_0 \int\limits_{\bf{k}} \hat{J}_{\bf{k}}^{-2} \right] \right\} ^{1/3} \; + \; O(u^3_0) \; . \end{equation} For $T \geq T_c$, the bulk susceptibility at finite wave vector ${\bf{q}}$ \begin{equation}\label{W} \chi_b ({\bf{q}}) \; = \;\lim\limits_{L \rightarrow \infty} \frac {\tilde{a}^{2d}}{L^d} \sum\limits_{i,j} < \varphi_i \; \varphi_j > \; e^{-i{\bf{q}} \cdot ({\bf{x}}_i - {\bf{x}}_j)} \quad \end{equation} has the one-loop form (for both $h = 0 $ and $h \not= 0$) \begin{equation} \chi_b({\bf{q}})^{-1} \; = \; \chi_b ({\bf{0}})^{-1} \; + \; \hat{J}_{\bf{q}} \left[ 1 + O(u^2_0) \right] \quad . \end{equation} Thus the square of the bulk correlation length for $T \geq T_c$ is given by \begin{equation}\label{chide} \xi^2 \; = \; \chi_b \; ({\bf{0}}) \left [ \partial \chi_b ({\bf{k}})^{-1}/ \partial {\bf{k}}^2 \right ]_{{\bf{k}} = {\bf{0}}} \; = \; J_0 \; \chi_b ({\bf{0}}) \; + \; O(u^2_0) \; . \end{equation} Substituting Eqs. (18) and (23) into Eqs. 
(25) and (26) yields the asymptotic form for $d > 4$ \begin{equation} \xi \; = \; \xi_0 t^{-1/2}\quad , \quad t > 0 \; , \; h = 0\; , \end{equation} and \begin{equation} \xi \; = \; 3^{-1/2} 2^{-1/3} l_0 (h^2 l_0^{d+2} J_0^{-1})^{-1/6} \; , \; t = 0 \; , \; h \not= 0 \end{equation} with the lengths \begin{eqnarray} \xi_0 \; & = & \; a_0^{-1/2} \; J_0^{1/2} \left[ 1 + 12 u_0 \; \int\limits_{\bf{k}} \; \hat{J}^{-2}_{\bf{k}} \right]^{1/2} \quad + \quad O(u^2_0) \end{eqnarray} and \begin{eqnarray} l_0 \; & = & \; \left\{u_0 J^{-2}_0 \left [ 1 \; + 36 u_0 \; \int\limits_{\bf{k}} \hat{J}_{\bf{k}}^{-2} \right]^{-1} \right \}^{1/(d-4)}. \end{eqnarray} These lengths will appear also in the finite-size scaling functions in Section 4. We see that for $d > 4$ the fluctuations (that enter via the one-loop integrals) only modify the amplitudes but do not change the mean-field $t$ and $h$ dependence. The ''dangerous'' $u_0$ dependence \cite{F2} of $\xi$ at $T_c$, Eqs. (28) and (30), is clearly seen in $l_0 \sim u_0^{1/(d-4)}$. We note that both $\xi_0$ and $l_0$ are cutoff dependent via $\int_{\bf{k}} \hat{J}_{\bf{k}}^{-2}$. In rewriting $[1 - 12 u_0 \int_{\bf{k}} \hat{J}^{-2}_{\bf{k}}]^{-1/2}$ as $[1 + 12 u_0 \int_{\bf{k}} \hat{J}^{-2}_{\bf{k}}]^{1/2}$ in $\xi_0$ (and similarly in $l_0$) we have been guided by the resummed forms of $\xi_0$ and $l_0$ in the limit $n \rightarrow \infty$ (Eqs. (141) and (142) in Ref. \cite{CD1}). Corresponding results can be derived for the continuum $\varphi^4$ Hamiltonian (see Eq. (62) below) with periodic boundary conditions, similar to the case $n \rightarrow \infty$ studied previously \cite{CD1}. This amounts essentially to replacing $\hat{J}_{\bf{k}}$ by ${\bf{k}}^2$ in the equations given above. As far as bulk properties of $\chi_b^+, \chi^-_b$, $\xi$ and $M_b$ are concerned, only the nonuniversal amplitudes are modified but the $t$ and $h$ dependence remains identical for the field-theoretic and the lattice $\varphi^4$ model. For the finite system, however, lattice effects become significant as we shall see in the subsequent Sections. For the specific heat even the (finite) bulk value at $T_c$ turns out to be different for the field-theoretic and the lattice model, similar to the case $n \rightarrow \infty$ \cite{CD1}. \subsection*{3. Order-parameter distribution function for ${\bf{d > 4}}$} \subsubsection*{3.1. General form in one-loop order} A perturbation approach to finite-size effects of the $\varphi^4$ lattice model for $d > 4$ can be set up in a way similar to the previous finite-size perturbation theory for $d < 4$ above and below $T_c$ \cite{EDC}. We decompose \begin{equation} \varphi_j \; = \; \Phi \; + \; \sigma_j \end{equation} and shall derive an effective Hamiltonian $H^{eff}$ \cite{B1,EDC} for the lowest (homogeneous) mode \begin{equation} \Phi \; = \; \frac{\tilde{a}^d}{L^d} \sum\limits_j \; \varphi_j \end{equation} by a perturbative treatment of the fluctuations of the higher modes \begin{equation} \sigma_j \; = \; L^{-d} \sum\limits_{{\bf{k}} \neq {\bf{0}}} \hat{\varphi}_{\bf{k}} \; e^{i{\bf{k}} \cdot {\bf{x}}_j} \quad . \end{equation} Correspondingly we write the lattice Hamiltonian, Eq.
(5), in the form \begin{equation} H \; = \; H_0(\Phi) \; + \; H_1 + H_2 \quad , \end{equation} with the lowest-mode Hamiltonian \begin{equation} H_0 (\Phi) \; = \; L^d (\frac{1}{2} r_0 \Phi^2 \; + \; u_0 \Phi^4 - h \Phi) \quad , \end{equation} the Gaussian part \begin{equation} H_1 \; = \; L^{-d} \; \sum\limits_{{\bf{k}}\neq {\bf{0}}} \frac{1}{2} (\bar{r}_{0L} + \hat{J}_{\bf{k}}) \hat{\sigma}_{\bf{k}} \; \hat{\sigma}_{-{\bf{k}}} \; \end{equation} with \begin{equation} \bar{r}_{0L} \; = \; r_0 \; + \; 12 u_0 M^2_0 \; , \end{equation} and the perturbation part \begin{eqnarray} H_2 \; = \; \tilde{a}^d \; \sum\limits_j \left[ 6 u_0 (\Phi^2 - M^2_0) \sigma^2_j + 4 u_0 \Phi \sigma^3_j \; + u_0 \sigma^4_j \right] \quad . \end{eqnarray} Unlike the case $d < 4$, we must work, for $d > 4$, at finite cutoff. Thus we shall incorporate the finite shift $r_{0c} \sim O(u_0)$, Eq. (17), of the parameter $r_0$ whereas in the previous \cite{EDC} dimensional regularization at infinite cutoff there was no $O(u_0)$ contribution to $r_{0c}$. Thus we define \begin{equation} M^2_0 \; = \; \frac{1}{Z_0^{eff}} \; \int\limits^\infty_{-\infty} d \Phi \; \Phi^2 \; \exp (- H^{eff}_0) \end{equation} where now \begin{eqnarray} H^{eff}_0 &\; = \; & L^d \left[ \frac{1}{2} (r_0 - r_{0c}) \Phi^2 \; + \; u_0 \Phi^4 - h \Phi \right] \; \; \end{eqnarray} and \begin{equation} Z_0^{eff} \; = \; \int\limits^\infty_{-\infty} \; d \Phi \exp (- H^{eff}_0) \end{equation} contain the shifted variable $r_0 - r_{0c}$. The partition function is decomposed as \begin{equation} Z = \int {\cal{D}} \varphi \; e^{-H} \; = \; \int\limits_{-\infty}^\infty \; d \Phi \; \exp \left[- (H_0 + \Gamma) \right] \quad , \end{equation} where \begin{equation} \Gamma (\Phi) \; = \; - \ln \int {\cal{D}} \sigma \exp \left[ -(H_1 + H_2) \right] \end{equation} is determined by the higher modes within perturbation theory. We rewrite \begin{equation} H_0 (\Phi) \; + \; \Gamma (\Phi) \; = \; \Gamma (0) + H^{eff} (\Phi) \end{equation} and define the order-parameter distribution function \begin{equation} P(\Phi) \; = \; \frac{1}{Z^{eff}}\; \exp \left[ -H^{eff} (\Phi) \right] \; , \end{equation} \begin{equation} Z^{eff} \; = \; \int\limits_{-\infty}^\infty \; d \Phi \exp \left[ - H^{eff} (\Phi) \right] \; . \end{equation} In a perturbation calculation with respect to $H_2$ we obtain the effective Hamiltonian in one-loop order \begin{equation} H^{eff} (\Phi) \; = \; L^d \left[ \frac{1}{2} r_0^{eff} \Phi^2 \; + \; u^{eff}_0 \Phi^4 \; + \; O(\Phi^6)-h\Phi \right]\quad \end{equation} where \begin{eqnarray} r^{eff}_0 = r_0 - r_{0c} + 12 u_0 \left[ L^{-d} \sum\limits_{{\bf{k}} \neq {\bf{0}}} (r_{0L} + \hat{J}_{\bf{k}})^{-1} -\int_{\bf k}\hat{J}_{\bf k}^{-1}\right]\nonumber \\ + 144 u^2_0 M^2_0 L^{-d} \sum\limits_{{\bf{k}} \neq {\bf{0}}} (r_{0L} + \hat{J}_{\bf{k}})^{-2}, \end{eqnarray} \begin{equation} u^{eff}_0 \; = \; u_0 - 36 u^2_0 L^{-d} \; \sum\limits_{{\bf{k}} \neq {\bf{0}}} (r_{0L} \; + \; \hat{J}_{\bf{k}})^{-2} \quad . \end{equation} In Eq. (48) we have added and subtracted $r_{0c}$ and have replaced $\bar{r}_{0L}$, in the $O(u_0)$ terms, by \begin{equation} r_{0L} \; = \; r_0 - r_{0c} \; + \; 12 u_0 M^2_0 \quad . \end{equation} This quantity is a positive function of $r_0 - r_{0c}$ for arbitrary $- \infty \leq r_0 - r_{0c} \leq \infty$ at any finite $L$.
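The quantities in Eqs. (39)-(41) and (50) are elementary one-dimensional integrals and can be checked numerically. The following minimal Python sketch (all parameter values are arbitrary illustrative choices of ours, in units with $\tilde{a} = 1$) computes $M^2_0$ at $h = 0$ by quadrature and confirms the positivity of $r_{0L}$ stated above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Numerical check of Eqs. (39)-(41) and of the positivity of r_0L, Eq. (50),
# at h = 0.  The values of d, L and u0 below are illustrative choices.
d, L, u0 = 5, 8, 0.1
Ld = float(L**d)

def M0_squared(dr0):
    """M_0^2 of Eq. (39) for given dr0 = r_0 - r_0c, at h = 0."""
    pk = np.sqrt(max(0.0, -dr0 / (4.0 * u0)))     # saddle point of H_0^eff
    H0 = lambda p: Ld * (0.5 * dr0 * p**2 + u0 * p**4)
    w = lambda p: np.exp(-(H0(p) - H0(pk)))       # rescaled to avoid overflow
    brk = sorted({-pk, 0.0, pk})
    Z, _ = quad(w, -2, 2, points=brk, limit=200)
    m2, _ = quad(lambda p: p**2 * w(p), -2, 2, points=brk, limit=200)
    return m2 / Z

for dr0 in (-0.1, -0.01, 0.0, 0.01, 0.1):
    r0L = dr0 + 12 * u0 * M0_squared(dr0)         # Eq. (50)
    print("r0 - r0c = %+.3f  ->  r0L = %.5f" % (dr0, r0L))
# r0L comes out positive for all dr0, as stated in the text.
\end{verbatim}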
Moments of the distribution function can now be calculated as \begin{equation} < \Phi^m > \; = \; \int\limits^\infty_{-\infty} d \Phi \; \Phi^m \; P(\Phi) \end{equation} and \begin{equation} < |\Phi|^m > \; = \; \int\limits^\infty_{-\infty} d \Phi \; |\Phi|^m P(\Phi) \quad . \end{equation} Because of the (one-loop) $\Phi^4$ structure of $H^{eff}$, these averages can be expressed in terms of the well-known functions \begin{equation} \vartheta_m(Y) \; = \; \frac{\int\limits_0^\infty ds \; s^m \; \exp (-\frac{1}{2}Y s^2 \; - \; s^4)} {\int\limits_0^\infty ds \; \exp (-\frac{1}{2}Y s^2 \; - \; s^4)} \end{equation} that appear also in the finite-size theory below four dimensions \cite{EDC,CDS}. The moments determine several thermodynamic quantities such as \cite{EDC} the susceptibilities \begin{eqnarray} \chi^+ \; & = & \; L^d < \Phi^2> \quad , \\[1cm] \chi^- \; & = & \; L^d (< \Phi^2> \; - \; < |\Phi| >^2) \quad , \end{eqnarray} the ''magnetization'' $ M \; = \; < |\Phi| > $ , and the Binder cumulant \begin{equation} U \; = \; 1 \; - \; \frac{1}{3} < \Phi^4>/< \Phi^2>^2 \quad . \end{equation} In terms of the effective parameters $r^{eff}_0$ and $u^{eff}_0$ they can be expressed in one-loop order as \begin{eqnarray} \chi^+ \; & = & \; (L^d / u_0^{eff})^{1/2} \; \vartheta_2 \; (Y^{eff}) \quad , \\[1cm] \chi^- \; & = & \; (L^d / u_0^{eff})^{1/2}\; \left[ \vartheta_2 \; (Y^{eff}) \; - \; \vartheta_1 \; (Y^{eff})^2 \right] \quad , \\[1cm] M \; & = & \; (L^d u_0^{eff})^{-1/4} \; \vartheta_1 \; (Y^{eff}) \quad , \\[1cm] U \; & = & \; 1 \; - \; \frac{1}{3} \; \vartheta_4 (Y^{eff}) / \vartheta_2 (Y^{eff})^2 \quad , \end{eqnarray} with the dimensionless quantity \begin{equation} Y^{eff} \; = \; L^{d/2} \; r^{eff}_0 (u_0^{eff})^{-1/2}. \end{equation} We note that at this stage of perturbation theory these expressions do not yet represent a systematic expansion with respect to the coupling $u_0$, compare Eqs. (5.19)-(5.22) of Ref. \cite{EDC}. Corresponding formulas are obtained for the case of the Landau-Ginzburg-Wilson continuum Hamiltonian \begin{equation} H \; = \; \int\limits_V d^dx \left[ \frac{1}{2} r_0 \varphi^2 \; + \; \frac{1}{2} (\bigtriangledown \varphi)^2 \; + \; u_0 \varphi^4 \; - \; h \varphi \right] \end{equation} with the field $\varphi (x)$ by the replacement $\hat{J}_{\bf{k}} \rightarrow {\bf{k}}^2$ in the sums of the one-loop terms of the effective parameters $r^{eff}_0$ and $u^{eff}_0$. A justification of the above perturbation theory can be given in terms of the order-parameter distribution function where all higher modes are treated in a nonperturbative way \cite{CDS}. \subsubsection*{3.2 Asymptotic form of the effective parameters} In order to study the asymptotic finite-size critical behavior we shall consider the limit of $L \gg \tilde{a} $, $\xi \gg \tilde{a}$ or $L\Lambda \gg 1$, $\xi \Lambda \gg 1$. 
For this purpose we decompose the perturbation part of $r^{eff}_0$ and $u^{eff}_0$ into bulk integrals and finite-size contributions in the following way, \begin{eqnarray} r_0^{eff} & =& (r_0 - r_{0c}) \left\{ 1 - 12 u_0 \int\limits_{\bf{k}} \left[\hat{J}_{\bf{k}} (r_{0L} + \hat{J}_{\bf{k}}) \right]^{-1} \right\} \nonumber \\ & +& 144 u^2_0 M^2_0 \left\{ \int_{\bf{k}} (r_{0L} + \hat{J_{\bf{k}}})^{-2} - \int\limits_{\bf{k}} \left[\hat{J}_{\bf{k}} (r_{0L} + \hat{J}_{\bf{k}}) \right]^{-1}\right\} \nonumber \\ & - & 12 u_0 \Delta_1 (r_{0L}) \; - \; 144 u^2_0 M^2_0 \Delta_2 (r_{0L}), \end{eqnarray} \begin{equation} u_0^{eff} = u_0 - 36 u^2_0 \int\limits_{\bf{k}} (r_{0L} + \hat{J}_{\bf{k}})^{-2} + 36 u^2_0 \Delta_2 (r_{0L}) \quad , \end{equation} with \begin{equation} \Delta_m (r_{0L}) \; = \; \int\limits_{\bf{k}} (r_{0L} + \hat{J}_{\bf{k}})^{-m} \; - \; L^{-d} \sum\limits_{{\bf{k}} \neq {\bf{0}}} \; (r_{0L} + \hat{J}_{\bf{k}})^{-m} \; . \end{equation} In the lowest-mode approximation we would have simply $r^{eff}_0 \; = \; r_0 \; , \; u^{eff}_0 \; = \; u_0$. Up to this point, the determination of the effective Hamiltonian for the field-theoretic model, Eq. (62), is still parallel to that of the lattice model. The corresponding formulas are simply obtained by the replacement $\hat{J}_{\bf{k}} \rightarrow {\bf{k}}^2$. The crucial difference, however, comes from the large-$L$ behavior of $\Delta_m$, as shown recently for the special case $m = 1$ and $r_{0L} = 0$ \cite{CD1}. For general $r_{0L}$ we find, for the lattice model, the cutoff-independent large-$L$ behavior \begin{equation} \Delta_m (r_{0L}) \; = \; J_0^{-m} I_m (r_{0L} J^{-1}_0 L^2) \; L^{2m-d} + O (e^{-L/\tilde{a}}) \end{equation} with \begin{equation} I_m(x) = (2\pi)^{-2m} \int\limits_0^{\infty} dy \; y^{m-1}\; e^{-(xy/4\pi^2)}\left[ (\pi/y)^{d/2} \; - \; K(y)^d + 1 \right] \end{equation} where $K(y) = \sum\limits_{j = - \infty}^\infty \exp (-y j^2)$. For the field-theoretic model, however, the corresponding large-$L$ behavior differs significantly according to the cutoff dependent result \begin{eqnarray} \int_{\bf{k}} (r_{0L} + {\bf{k}}^2)^{-m} &-& L^{-d} \sum\limits_{{\bf{k}} \neq {\bf{0}}}(r_{0L} + {\bf{k}}^2)^{-m} = I_m(r_{0L} L^2) L^{2m-d} \nonumber \\ &+& \Lambda^{d-2m} \left\{ a_m (d, r_{0L} \Lambda^{-2})(\Lambda L)^{-2} + O\left[ (\Lambda L)^{-4}\right]\right\} \end{eqnarray} where \begin{equation} a_m (d,r_{0L} \Lambda^{-2})= \frac{d}{3(2\pi)^{d-2}} \int\limits^\infty_0 dx x^m \left[ \int\limits_{-1}^1 dy e^{-y^2x} \right]^{d-1} \exp \left[ -(1 + r_{0L} \Lambda^{-2}) x \right] . \end{equation} This leads to \begin{eqnarray} r_0^{eff} & =& (r_0 - r_{0c}) \left\{ 1 - 12 u_0 \int\limits_{\bf{k}} \left[{\bf k}^2 (r_{0L} + {\bf k}^2 ) \right]^{-1} \right\} \nonumber \\ & +& 144 u^2_0 M^2_0 \left\{ \int_{\bf{k}} (r_{0L} + {\bf k}^2)^{-2} - \int\limits_{\bf{k}}\left[ {\bf k}^2(r_{0L} + {\bf k}^2) \right]^{-1}\right\} \nonumber \\ & - & 12 u_0 \left[I_1 (r_{0L}L^2)L^{2-d}+\Lambda^{d-2} a_1 (d,r_{0L}\Lambda^{-2})(\Lambda L)^{-2}\right] \nonumber \\ & -& 144 u^2_0 M^2_0 \left[I_2 (r_{0L}L^2)L^{4-d}+\Lambda^{d-4} a_2 (d,r_{0L}\Lambda^{-2})(\Lambda L)^{-2}\right], \end{eqnarray} \begin{eqnarray} u_0^{eff} = u_0 &-& 36 u^2_0 \int\limits_{\bf{k}} (r_{0L} + {\bf k}^2 )^{-2} \nonumber \\ &+& 36 u^2_0 \left[I_2 (r_{0L}L^2)L^{4-d}+\Lambda^{d-4} a_2 (d,r_{0L}\Lambda^{-2})(\Lambda L)^{-2}\right] \end{eqnarray} for the field-theoretic model. We note that for $d < 4$ the cutoff dependent terms in Eqs. 
(70) and (71) vanish in the limit $\Lambda \rightarrow \infty$. For $d > 4$, however, some of these terms are divergent for $\Lambda \rightarrow \infty$ and cannot be dropped. In particular these terms carry the important size dependence $\sim L^{-2}$ of $r_0^{eff}$ which is not present in Eq. (63) for the lattice model and which has been overlooked previously \cite{ZJ,B1}. Employing the method of dimensional regularization \cite{ZJ} in Eq. (68) would mean that the finite results for $d < 4$ at $\Lambda = \infty$ are continued analytically to $d > 4$. This would yield the same (cutoff-independent) result for $r_0^{eff}$ and $u_0^{eff}$ of the field-theoretic case as of the lattice model. Thus dimensional regularization would omit the important analytic $L^{-2}$ dependence in Eqs. (68) and (70). This omission, however, cannot be justified $-$ unlike the omission of an analytic $t$ dependence in bulk critical phenomena. Thus the method of dimensional regularization may yield misleading results in the finite-size field theory above the upper critical dimension. The remaining bulk integrals in $r^{eff}_0$ and $u^{eff}_0$ have finite limits for $r_{0L} \rightarrow 0$ (large $L$, small $|t|$) for both the lattice and field-theoretic model. Taking the limit $r_{0L} \rightarrow 0$ in these integrals is justified only if $|t| \ll 1$ and $(L/\tilde{a})^{-d/2}\ll 1$ or $(\Lambda L)^{-d/2}\ll 1$. This restriction should be kept in mind when applying our asymptotic scaling functions to MC data of spin models of small size. The asymptotic expressions for the lattice model for $d > 4$ read \begin{eqnarray} r^{eff}_0 = (r_0 - r_{0c})\left[ 1 - 12 u_0 \int_{\bf{k}} \hat{J}_{\bf{k}}^{-2} \right] - 12 u_0 J_0^{-1} L^{2-d} I_1 (r_{0L} J^{-1}_0 L^2) \nonumber \\ - 144 u_0^2 M^2_0 J_0^{-2} L^{4-d} I_2 (r_{0L} J^{-1}_0 L^2), \end{eqnarray} \begin{equation} u^{eff}_0 = u_0 \left[ 1 - 36 u_0 \int_{\bf{k}} \hat{J}_{\bf{k}}^{-2} + 36 u_0 J^{-2}_0 I_2 (r_{0L} J^{-1}_0 L^2) L^{4-d} \right]. \end{equation} The corresponding results of the field-theoretic model for $d > 4$ are obtained from Eqs. (70) and (71) as \begin{eqnarray} r^{eff}_0 &=& (r_0 - r_{0c}) \left[ 1 - 12 u_0 \int_{\bf{k}} {\bf{k}}^{-4}\right]- 12 u_0 \; L^{2-d} I_1 (r_{0L} L^2) \nonumber \\ &-& 144 u_0^2 M^2_0 L^{4-d} I_2 (r_{0L} L^2) - 12 u_0 \Lambda^{d-4} a_1 (d, r_{0L} \Lambda^{-2}) L^{-2} \nonumber \\ &-& 144 u_0^2 \Lambda^{d-4} M^2_0 a_2 (d, r_{0L} \Lambda^{-2}) (\Lambda L)^{-2}, \end{eqnarray} \begin{eqnarray} u^{eff}_0 \; = \; u_0 \left [ 1 \; - \; 36 u_0 \int_{\bf{k}} {\bf{k}}^{-4} \; + \; 36 u_0 L^{4-d} \; I_2 (r_{0L} L^2) \right . \nonumber \\ \left . +36 u_0 \Lambda^{d-4} a_2 (d,r_{0L}\Lambda^{-2})(\Lambda L)^{-2} \right ]. \end{eqnarray} Substituting Eqs. (72)-(75) into Eqs. (47)-(49) completes our calculation of the asymptotic form of $H^{eff}$ and of the order-parameter distribution function $P(\Phi)$, Eq. (45), in one-loop order for $d > 4$ and $n = 1$. The restriction ``asymptotic'' means that these results, Eqs. (72)-(75), are applicable to arbitrary $r_{0L}L^2$ only in the sense that $L/\tilde{a}$ must be large and $|t|$ must be small. As found already in the large-$n$ limit \cite{CD1}, the leading ``shift of $T_c$'' [3] in $r^{eff}_0$, Eq. (72), is not just a temperature independent constant $\sim L^{2-d}$ for the lattice model but a more complicated function of $r_{0L} L^2$. For the field-theoretic model the leading shift is proportional to $L^{-2}$ according to Eq. (74) and is nonuniversal, i.e., explicitly cutoff-dependent.
This result differs from the (temperature independent) shift $\sim L^{2-d}$ predicted for the field-theoretic model \cite{B1} and from the shift $\sim L^{-d/2}$ considered in previous work [7$-$9]. Our shifts are caused by the (inhomogeneous) higher modes of the order-parameter fluctuations. They cannot be neglected even for large $L/\tilde{a}$ (except for the extreme case of the bulk limit) and cannot be regarded only as ''corrections'' to the lowest-mode approximation since they lead to a two-variable finite-size scaling structure for both the field-theoretic and the lattice model, in contrast to the single-variable scaling structure of the lowest-mode approximation, as will be further discussed in Section 4.2. Our results can be generalized to $n > 1$ by means of a nonperturbative treatment of the order-parameter distribution function of the $O(n)$ symmetric $\varphi^4$ theory \cite{CDS}. It is obvious that this does not change the conclusions regarding the structural differences between the finite-size effects of the field-theoretic and lattice versions of the $\varphi^4$ model. \subsection*{4. Finite-size scaling functions of the lattice model} \subsubsection*{4.1 Analytic results} In the following we consider only the case $h = 0$. Inspection of the asymptotic expressions of $r^{eff}_0$ and $u^{eff}_0$, Eqs. (72) and (73), shows that they depend on three different lengths $\xi_0$, Eq.(29), $l_0$, Eq.(30), and $L$. Therefore there exist different ways of writing $H^{eff}$ in a finite-size scaling form. Considering the ratio $\xi/L$ as a fundamental dimensionless variable [13-15] leads to the following scaling form \begin{equation} H^{eff} \; = \; F(z,x,y) \; = \; \frac{1}{2} r^{eff} (x,y) z^2 \; + \; u^{eff} (x,y) z^4 \; , \end{equation} where $x$ and $y$ are the scaling variables given in Eqs. (1) and (2) and $z$ is the scaled order-parameter variable \begin{equation} z = \; J^{-1/2}_0 \; L^{(d-2)/2} \Phi \quad . \end{equation} The scaled effective parameters are in one-loop order \begin{eqnarray} r^{eff} (x,y) \; = \; r^{eff}_0 L^2 J^{-1}_0 \; = \; x - 12 I_1 (\bar{r})y - 144 \vartheta_2 (y_0) I_2 (\bar{r})y^{3/2} \end{eqnarray} and \begin{eqnarray} u^{eff}(x,y)\; = \; u^{eff}_0 \; L^{4-d} J^{-2}_0 \; = \; y + 36 I_2 (\bar{r})y^2 \end{eqnarray} with \begin{equation} \bar{r} \; = \; x + 12 \vartheta_2 (y_0)y^{1/2} \; \end{equation} and \begin{equation} y_0 \; = \; xy^{-1/2} \quad . \end{equation} This leads to the finite-size scaling form \begin{equation} \chi^{\pm} = L^2 P^{\pm}_\chi (x,y) \end{equation} and \begin{equation} M = L^{(2-d)/2} P_M (x,y) \end{equation} with the two-variable scaling functions \begin{eqnarray} P^+_\chi (x,y) &=& J^{-1}_0 \left[u^{eff}(x,y)\right]^{-1/2} \vartheta_2 (Y(x,y)), \\ P^-_\chi (x,y) &=& J^{-1}_0 \left[u^{eff}(x,y)\right]^{-1/2} \left\{ \vartheta_2 (Y(x,y)) - \left[ \vartheta_1 (Y(x,y))\right]^2 \right\}, \\ P_M (x,y) &=& J^{-1/2}_0 \left[ u^{eff}(x,y)\right] ^{-1/4} \vartheta_1 (Y(x,y)), \\ U(x,y) &=& 1 - \frac{1}{3} \frac{\vartheta_4 (Y(x,y))} {\left[\vartheta_2(Y(x,y))\right]^2}, \end{eqnarray} where \begin{equation} Y(x,y) \; = \; r^{eff} (x,y) \; \left[u^{eff}(x,y)\right]^{-1/2} \quad . \end{equation} The traditional finite-size scaling theories for $d < 4$ [13-18] have no asymptotic dependence on a second scaling variable $y$. 
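All ingredients of these scaling functions are explicit one-dimensional integrals, so the functions can be evaluated numerically with little effort. The following self-contained Python sketch $-$ our own illustration in the unexpanded one-loop form, with pragmatic integration cutoffs $-$ implements $\vartheta_m$, Eq. (53), and $I_m$, Eq. (67), for $d = 5$ and evaluates the Binder cumulant $U(0,y)$ at $T_c$; it reproduces the limiting value $0.2705$ of Eq. (91) below and exhibits the slow approach to this value as $y = (L/l_0)^{-1}$ decreases. The Poisson-resummed form of $K(y)$ is used at small arguments to avoid the cancellation between $(\pi/y)^{d/2}$ and $K(y)^d$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

D = 5  # dimension used for the illustrative evaluation

def theta_sum(z):
    """K(z) = sum_{j=-inf}^{inf} exp(-z j^2) for z > 0."""
    j = np.arange(1, 40)
    return 1.0 + 2.0 * np.sum(np.exp(-z * j**2))

def bracket(t):
    """(pi/t)^{d/2} - K(t)^d + 1, organized to avoid cancellation."""
    if t <= 2.0:
        # Poisson resummation: K(t) = sqrt(pi/t) * K(pi^2/t)
        return (np.pi / t)**(D / 2) * (1.0 - theta_sum(np.pi**2 / t)**D) + 1.0
    return (np.pi / t)**(D / 2) - theta_sum(t)**D + 1.0

def I(m, x):
    """I_m(x) of Eq. (67)."""
    f = lambda t: t**(m - 1) * np.exp(-x * t / (4 * np.pi**2)) * bracket(t)
    v1, _ = quad(f, 0.0, 2.0, limit=200)
    v2, _ = quad(f, 2.0, np.inf, limit=200)
    return (2.0 * np.pi)**(-2 * m) * (v1 + v2)

def theta_m(m, Y):
    """The moment ratios theta_m(Y) of Eq. (53)."""
    num, _ = quad(lambda s: s**m * np.exp(-0.5 * Y * s * s - s**4), 0, np.inf)
    den, _ = quad(lambda s: np.exp(-0.5 * Y * s * s - s**4), 0, np.inf)
    return num / den

def U_at_Tc(y):
    """U(0, y) from Eqs. (78)-(81) and (87)-(88) at x = 0, i.e. T = T_c."""
    t2 = theta_m(2, 0.0)                    # theta_2(y_0) with y_0 = 0
    rbar = 12.0 * t2 * np.sqrt(y)           # Eq. (80)
    reff = -12.0 * I(1, rbar) * y - 144.0 * t2 * I(2, rbar) * y**1.5
    ueff = y + 36.0 * I(2, rbar) * y**2
    Y = reff / np.sqrt(ueff)                # Eq. (88)
    return 1.0 - theta_m(4, Y) / (3.0 * theta_m(2, Y)**2)

print("L -> infinity limit:", 1 - theta_m(4, 0) / (3 * theta_m(2, 0)**2))
for L_over_l0 in (4, 8, 16, 32, 64):
    y = float(L_over_l0)**(4 - D)           # y = (L/l0)^{4-d}
    print("L/l0 = %3d:  U(0, y) = %.4f" % (L_over_l0, U_at_Tc(y)))
\end{verbatim}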
We see that our scaling functions do not depend on the nonuniversal model parameters $\tilde{a}, J_{ij}, a_0, u_0$, except via the length scales $\xi_0$ and $l_0$ contained in $x$ and $y$, and apart from the metric prefactors $J_0^{-1}$ and $J_0^{-1/2}$ in Eqs. (84)-(86). Thus we may consider these functions to be universal in a restricted sense, i.e., for a certain class of lattice models (rather than continuum models, see below). From the previous one-loop finite-size theory \cite{EDC} and the successful comparison with high-precision MC data in three dimensions \cite{CDT} it has become clear that careful consideration must be devoted to the appropriate form of evaluating these one-loop results. The previous analysis indicated that the prefactor $(u^{eff})^{-1/2}$ in Eqs. (84)-(88) should be further expanded with respect to the coupling $u_0$, in the spirit of a systematic perturbation approach, see Eqs. (5.45), (5.46) and footnote $1$ of Sec.$7$ of Ref. \cite{EDC}, as well as Eqs. (6.15), (6.16), (6.31) and (6.32) of Ref. \cite{EDC}. Thus the expanded forms $(u^{eff})^{-1/2}=y^{-1/2}[1+18I_2(\bar{r}) y]^{-1}$ or $(u^{eff})^{-1/2}=y^{-1/2}[1-18I_2(\bar{r})y]$ should be substituted into Eqs. (84), (85) and (88), respectively [and similarly for $(u^{eff})^{-1/4}$ in Eq. (86)]. These expanded forms should be taken into account in a future quantitative comparison of Eqs. (84)-(88) with MC data. In the present paper we confine ourselves, for simplicity, to the unexpanded form of Eqs. (84)-(88), as has been done in the result for $U(x,y)$ presented in Ref. \cite{CD3}. The same comment applies to Eqs. (99)-(103) below. At $T_c \; (x=0)$ we obtain from Eqs. (78) - (88) the large -$L$ behavior in one-loop order for $d > 4$ \begin{eqnarray} \chi_c^+ & \; \sim \; & L^2 J^{-1}_0 \; y^{-1/2} \vartheta_2(0) \sim L^{d/2} \quad , \\[1cm] M_c & \; \sim \; & L^{(2-d)/2} J^{-1/2}_0 \; y^{-1/4} \vartheta_1 (0) \sim L^{-d/4} \quad , \\[1cm] \lim\limits_{L \rightarrow \infty}U(0,y) & \; = \; & 1 \; - \; \frac{1}{3} \vartheta_4(0)/\vartheta_2(0)^2 \; = \; 0.2705 \quad . \end{eqnarray} The exponents in Eqs. (89) and (90) and the asymptotic value in Eq. (91) are identical with those obtained in the lowest-mode approximation at $T_c$ \cite{B1}. The dangerous irrelevant character of $u_0\sim y$ is clearly exhibited in Eqs. (89) and (90) in the form of $y^{-1/2}$ and $y^{-1/4}$. Alternatively we may employ, instead of $x$ and $y$, the variables $w$ and $y$ where $w$ is given by Eq. (3). This implies the following scaling form \begin{equation} H^{eff} \; = \; \tilde{F}(s,w,y) \; = \; \frac{1}{2} \tilde{r}^{eff}(w,y) s^2 \; + \; \tilde{u}^{eff} (w,y) s^4 \end{equation} with the scaled order-parameter variable \begin{equation} s\; = \; J^{-1/2}_0 \; L^{d/4} \; l_0^{(d-4)/4} \; \Phi \quad . \end{equation} The scaled effective parameters are \begin{eqnarray} \tilde{r}^{eff} (w,y) & =& r^{eff} y^{-1/2} \nonumber \\ &=& w - 12 I_1 (\tilde{r}) y^{1/2} - 144 \vartheta_2 (w) I_2 (\tilde{r}) y \end{eqnarray} and \begin{eqnarray} \tilde{u}^{eff}(w,y) \; = \; u^{eff} y^{-1} \; = \; 1 + 36 I_2 (\tilde{r}) y \end{eqnarray} with \begin{eqnarray} \tilde{r} \; = \; [w + 12 \vartheta_2 (w)]y^{1/2} \quad . 
\end{eqnarray} This leads to the finite-size scaling forms \begin{equation} \chi^{\pm} \; = \; L^{d/2} \tilde{P}^\pm_\chi (w,y) \end{equation} and \begin{equation} M = L^{-d/4} \tilde{P}_M (w,y) \end{equation} with the two-variable scaling functions \begin{eqnarray} \tilde{P}^+_\chi(w,y) & = & A \left[ \tilde{u}^{eff} (w,y) \right]^{-1/2}\vartheta_2 (\tilde{Y} (w,y)), \\ \tilde{P}^-_\chi(w,y) & = & A \left[ \tilde{u}^{eff}(w,y) \right]^{-1/2} \left\{ \vartheta_2 (\tilde{Y} (w,y)) - [ \vartheta_1 (\tilde{Y} (w,y))]^2 \right\}, \\ \tilde{P}_M(w,y) & = & \sqrt{A} \left[ \tilde{u}^{eff}(w,y) \right]^{-1/4} \vartheta_1 (\tilde{Y} (w,y)), \\ \tilde{U}(w,y) & = & 1 - \frac{1}{3} \vartheta_4 (\tilde{Y}(w,y))/ \left[\vartheta_2 (\tilde{Y}(w,y)) \right]^2, \end{eqnarray} where $A=J^{-1}_0 l_0^{(4-d)/2}$ and \begin{eqnarray} \tilde{Y}(w,y) \;& = & \;\tilde{r}^{eff} (w,y) \left[ \tilde{u}^{eff}(w,y) \right]^{-1/2} \quad . \end{eqnarray} In the lowest-mode approximation the $y$ dependence in Eqs. (92)-(103) is dropped and Eqs. (94) and (103) are replaced by $\tilde{Y} = \tilde{r}^{eff} = u_0^{-1/2} a_0 tL^{d/2}$. These results will be discussed and applied to $d = 5$ in the following Subsections. \subsubsection*{4.2 Discussion} Both sets of scaling variables $(x,y)$ and $(w,y)$ are useful in the analysis of the finite-size scaling structure. First we consider the $(x,y)$ representation. In order to elucidate the effect of the fluctuations of the (inhomogeneous) higher modes above and below $T_c$ we assume $y$ to be small (large $L/l_0$) and expand $\chi^+$ and $\chi^-$ with respect to $y$ at finite $|x| > 0$, i.e., $T \neq T_c$. This yields \begin{eqnarray} \chi^+ &=& \chi^+_b \; \left\{ 1 \; - 12 \left[ x^{-1} \; - \; I_1(x) \right] \frac{y}{x} \; + \; O(y^2/x^2) \right\},\phantom{123} x > 0 \\ \chi^- &=& \chi^-_b \; \Big\{ 1 \; + \; \left[ 15 x^{-1} + 12 I_1 (-2x) \; - \; 36 x I_2 (-2x) \right] \frac{y}{x} \nonumber \\ &&\phantom{12345}\; + \; O(y^2/x^2) \Big\} , \phantom{123} x < 0 \end{eqnarray} where $\chi^+_b$ and $\chi^-_b$ are the bulk quantities given in Eqs. (18) and (21). Similar expressions can be derived for $M$ and $U$. The terms $\sim x^{-1}$ in the square brackets can be traced back to the lowest-mode contributions whereas the terms $\sim I_1(x)$ and $\sim I_m(-2x)$ arise from the higher (inhomogeneous) modes. If the latter terms were ignored one could rewrite $\chi^\pm$ in a lowest-mode form with the single variable $x/y^{1/2}$, as noted previously in the case $n \rightarrow \infty$ [4$-$6]. For any finite $|x| = L^2/\xi^2$, however, i.e., along the straight dashed lines in the $L^{-1} - \xi^{-1}$ plane (Fig. 1), there exists no argument that would allow one to ignore the $I_m$ terms arising from the higher modes. This proves the necessity of including two separate scaling variables in general. In particular it is misleading to consider the finite-size effects of the higher modes as a ''correction'' to the lowest-mode approximation $-$ just as the changes of mean-field exponents caused by critical fluctuations for $d < 4$ should not be considered as ''corrections''. The crucial point is that the higher modes cause a new {\it{structure}} of the finite-size scaling functions that cannot be written in terms of a single variable $x/y^{1/2}$ (except for the special case $x \rightarrow 0$, $y \rightarrow 0$ at finite $x/y^{1/2}$, see below). This structural aspect is a matter of principle, regardless of how large or small the effect of the higher modes might be.
In this context we take up a nontrivial aspect in the discussion in the previous literature about the role played by the ''shift of $T_c$''. It was asserted that a term of the type $\sim L^{2-d}$ in the parameter $r^{eff}_0$ of $H^{eff}$ \cite{B1} represents a ''correction to scaling'' [9] or a ''subdominant term'' [12] that can be neglected in the large -$L$ limit compared to the lowest mode part $r_0 - r_{0c} = a_0 t$. This assertion is incorrect, however, for the reasons just given in the preceding paragraph. The term $\sim L^{2-d}$ in $r_0^{eff}$, Eq. (72), is, in fact, the origin of the terms $\sim I_1(x)$ of $\chi^{\pm}$ in Eqs. (104) and (105) which we have shown to represent nonnegligible contributions rather than ''corrections''. Similarly, the term $\sim M^2_0 L^{4-d}$ in $r^{eff}_0$, Eq. (72), is the origin of the nonnegligible higher-mode contribution $I_2(-2x)$ to $\chi^-$ in Eq. (105). The expansion in Eqs. (104) and (105) breaks down in the limit of small $|x|$, i.e., large $\xi/L$. This includes the large -$L$ limit at $T = T_c$ where the exponent of the susceptibility $\chi_c \sim L^{d/2}$ and the Binder cumulant $U_c$ have been found [4$-$6] to agree with the lowest-mode approximation for the lattice model (but not for the field-theoretic model). Here this result is seen from the representation in terms of $w$ and $y$ as given in Eqs. (92)-(103). In this representation the single-variable lowest-mode like structure appears as the leading $w$ dependence whereas the higher-mode contributions $\sim I_1$ and $\sim I_2$ are multiplied by $y^{1/2}$ and $y$. As noted previously \cite{CD1} these higher-mode contributions are not of a dangerous irrelevant character even though the dangerous irrelevant four-point coupling $u_0$ determines the length scale $l_0 \sim u_0^{1/(d-4)}$. We shall see below that the $y^{1/2}$ terms are quantitatively important. Nevertheless, at first sight it seems justified to consider the latter contributions as asymptotically negligible in the limit $y \rightarrow 0$ corresponding to $L \rightarrow \infty$. In the terminology of the renormalization group, this limit corresponds to approaching the ''Gaussian fixed point'' of the dimensionless four-point coupling $u_0 L^{4-d}$. Neglecting the $y$-dependence in this limit is justified, however, only if $w$ is kept finite, i.e., if $|t|$ vanishes sufficiently strongly. Keeping $w$ finite for $L \rightarrow \infty$ is a special case corresponding to paths in the $L^{-1} - \xi^{-1}$ plane where the ratio $\xi/L$ diverges as $L^{(d-4)/4}$. Such paths become asymptotically parallel to the vertical axis (Fig. 1). This includes the special case $T = T_c$, $L \rightarrow \infty$. The description of the entire $L^{-1} - \xi^{-1}$ plane, on the other hand, requires both $w$ and $y$ as generic scaling variables in order to correctly include the $w \rightarrow \infty$ limit which corresponds to a finite ratio $\xi/L$ for $L \rightarrow \infty$. The latter limit $w \rightarrow \infty$ cannot be taken correctly within the single-variable scaling structure such as $\tilde{P}_{\chi}^{\pm}(w,0)$ and within the reduced scaling form $\tilde{F}(s,w,0)$. This structure does not capture the complete finite-size effects presented in Eqs. (104) and (105) above. For this reason the inhomogeneous fluctuations must be considered as {\it relevant}, for finite $L$, in the sense of the renormalization group. For the same reason the reduced scaling form [equivalent to $\tilde{F}(s,w,0)$] that was proposed by Binder et al.
\cite{BNP} for general $\xi/L$ (not only for $\xi/L = \infty$), is not valid. Our order-parameter distribution function $P(\Phi) \sim \exp (-H^{eff})$ can be compared with the zero-field probability distribution function of Binder \cite{B2,14} below $T_c$ \begin{eqnarray} P_L(s) \; = \; const \left \{ \; \exp \left[ - (s - m_b)^2 L^d/2\chi_b \right] \right. \nonumber\\[0.5cm] \left. \; + \; \exp \left[ - ( s + m_b)^2 L^d /2\chi_b \right] \;\right \} \end{eqnarray} where $m_b \sim |t|^{1/2}$ and $\chi_b \sim |t|^{-1}$ are $L$ independent bulk quantities. In $P_L(s)$ the temperature dependence enters in the form $[L/l(t)]^d$ with the ''thermodynamic length'' [8,9,19] $l(t) \sim |t|^{-2/d}$. Our theory identifies the relevant length scale $\tilde{\ell}$ of the corresponding variable $w$, Eq. (3), in terms of a combination of $l_0$ and $\xi_0$ according to Eq. (4). The distribution function $P_L(s)$ has been invoked as an argument in support of the single-variable scaling structure of the free energy (at $h = 0$) proposed by Binder et al. \cite{BNP}. For finite $\xi/L$, however, our results do not agree with the structure of $P_L(s)$ which does not contain the important $y$ dependence reflected in the shift $\sim I_1 L^{2-d}$ in Eqs. (72), (78), (94) that is caused by the inhomogeneous modes. Thus the double Gaussian form of $P_L(s)$, Eq. (106), as well as the underlying theory of Gaussian thermodynamic fluctuations \cite{LL}, are not applicable to finite systems with periodic boundary conditions in the critical region above the upper critical dimension. This is remarkable in view of the fact that mean-field theory becomes exact in the bulk limit for $d > 4$. \subsubsection*{4.3 Predictions for ${\bf d = 5}$} We illustrate and further discuss our results for the lattice model for the case $d = 5$ which can be compared with MC data of the five-dimensional Ising model. In Fig. 2 we plot the order-parameter distribution function in terms of $F$, Eqs. (76)-(81), \begin{equation} P(\Phi, t,L,u_0) \; d \Phi \; = \; \frac{\exp [-F(z,x,y)]}{\int\limits_{-\infty}^\infty dz \exp \left[ - F (z,x,y)\right]} \; dz \; , \end{equation} for typical values of $x$ and $y$ above, at and below $T_c$. The shape of these functions resembles that of the MC data shown in Fig. 1 of Ref. \cite{RNB}. In order to demonstrate the effect of the higher modes on these functions we show the order parameter distribution function in Fig. 3 in terms of $\tilde{F}$, Eqs. (92)-(96), \begin{equation} P(\Phi, t,L,u_0) \; d \Phi \; = \; \frac{\exp [-\tilde{F} (s,w,y)]}{\int\limits_{-\infty}^\infty ds \exp \left[ - \tilde{F} (s,w,y)\right]} \; ds \; , \end{equation} with given $w$ but for several values of $L/l_0$ including the limiting function for $L/l_0 \rightarrow \infty$ at fixed $w$. The latter function has the structure $P_0 (\Phi)$ of the lowest-mode approximation (with one-loop expressions for the reference lengths $\xi_0$ and $l_0$). This does not mean, however, that $P_0(\Phi)$ is the exact representation of $P$ in the large -$L$ limit in general. The constraint $w = const < \infty$ restricts the validity of $P_0(\Phi)$ only to the special region $|t|L^{d/2} < \infty$ corresponding to $\xi/L \rightarrow \infty$ (region between the curved dotted lines in Fig. 1). The width of this region in the $L^{-1}-\xi^{-1}$ plane vanishes asymptotically for $L \rightarrow \infty$.
The special region of finite $w$ is of interest because it contains characteristic (``pseudocritical'' [13]) temperatures close to $T_c$, such as $T_{max}(L)$ and $T_c(L)$ to be defined below. The scaling functions $\tilde{P}_{\chi}^{\pm}$, Eqs. (99) and (100), of $\chi^+$ and $\chi^-$ corresponding to the order-parameter distribution function of Fig. 3 are shown in Fig. 4. Similar plots can be made for $M$ and $U$. For comparison with MC data we refer to Figs. 11-14 of Ref. [9]. At first sight, the changes due to the variation of $L/l_0$ appear to be small. At the level of accuracy of previous MC data \cite{RNB}, however, these changes and their disagreements with the lowest-mode predictions \cite{B1} have been clearly detected and have been considered a major discrepancy, regardless of their smallness, because of their unexplained weak $L$ dependence. Here we further elucidate the deviations from the lowest-mode predictions by plotting in Fig. 5 the scaling functions of $U$, $\chi^\pm$, and $M$ at and below $T_c$ as functions of the reduced length $L/l_0$. As originally found in Ref. \cite{CD3} for the example of the Binder cumulant at $T_c$, the slow approach to the asymptotic $(L \rightarrow \infty)$ values arises from the $y^{1/2} \sim L^{(4-d)/2}$ terms. This slow approach was observed in the previous MC data \cite{RNB} and, at that time, gave sufficient reason to doubt the correctness of the lowest-mode predictions \cite{B1}, which do not have a weak subleading $L$ dependence at $T_c$. Our theory now shows that subsequent attempts [10-12] to explain the discrepancies did not resolve the problems. In particular, the bulk form of the renormalization-group flow equations employed by Bl\"ote and Luijten \cite{LB} led to an apparent confirmation of the (incorrect) shift $\sim L^{2-d}$ predicted in Ref. \cite{B1} for the {\it field-theoretic} model. These bulk flow equations \cite{LB} do not correctly describe the finite-size effects of the $\varphi^4$ {\it lattice} model either. We believe that our theory identifies the origin of the previous discrepancy, apart from possible quantitative aspects which we shall address elsewhere, after a quantitative identification of the lengths $\xi_0$ and $l_0$. An interesting consequence of the existence of the limiting function $P_0 (\Phi)$ mentioned above is the existence of limiting scaling functions such as $\tilde{P}_\chi^- (w,0)$ of $\chi^-$ for $L/l_0 \rightarrow \infty$ at fixed $w$ [Fig. 4 (b)]. For finite $L$, $\chi^-$ exhibits a maximum $\chi_{max}$ below $T_c$ at a temperature $T_{max}(L)$. The asymptotic $L$ dependence of $T_c - T_{max}(L)$ can be inferred from the fact that $\tilde{P}_\chi^- (w,0)$ depends on temperature only through $w \sim t L^{d/2}$. This implies the large-$L$ behavior $T_c - T_{max}(L) \sim L^{-d/2}$ and correspondingly $\chi_{max}\sim L^{d/2}$. Similar arguments lead to our prediction $T_c - T_c(L) \sim L^{-d/2}$, where $T_c(L)$ is the ``effective critical temperature'' \cite{RNB} at which the magnetization has its maximum slope. The same power law is valid for the temperature at which the specific heat has its maximum. These power laws $\sim L^{-d/2}$ agree with the MC data \cite{RNB}. The true asymptotic amplitudes of these power laws, however, have not been observed in previous MC simulations in $d = 5$ dimensions because of the slow approach of the subleading terms $\sim L^{(4-d)/2}$ towards $L \rightarrow \infty$ mentioned above.
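The inference above can be made concrete with a toy numerical check. The sketch below assumes, purely for illustration, a Gaussian scaling function with a maximum at $w^{\ast} = -1$ in place of the actual one-loop function $\tilde{P}_\chi^-(w,0)$; the maximum of $\chi^-$ is then found at $t_{max}L^{d/2} = w^{\ast}$ for every $L$, i.e., $T_c - T_{max}(L) \sim L^{-d/2}$ with $\chi_{max} \sim L^{d/2}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

d = 5.0

def chi_minus(t, L):
    # toy scaling form chi^-(t,L) = L**(d/2) * Ptilde(t * L**(d/2)), with an
    # illustrative Gaussian Ptilde peaking at w* = -1 (not the one-loop result)
    w = t * L**(d / 2)
    return L**(d / 2) * np.exp(-(w + 1.0)**2)

for L in (8, 16, 32, 64):
    res = minimize_scalar(lambda t: -chi_minus(t, L),
                          bounds=(-1.0, 0.0), method="bounded")
    t_max = res.x
    print(f"L = {L:3d}: t_max*L^(d/2) = {t_max * L**(d/2):+.3f},"
          f"  chi_max/L^(d/2) = {chi_minus(t_max, L) / L**(d/2):.3f}")
\end{verbatim}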
MC simulations of larger systems would be desirable for testing the magnitude of such subleading terms predicted by our theory. As indicated already in Figs. 3 and 4 of Ref. [6], we also point to an additional interesting effect of practical importance. In Fig. 6 we have plotted the Binder cumulant as a function of $x$ for several values of $L/l_0$. Without the effect of the slowly decaying contribution $\sim y^{1/2} \sim L^{(4-d)/2}$ one would have expected \cite{RNB} a well-identifiable intersection point of these curves if $L$ is, say, larger than $10\; \tilde{a}$. For $d < 4$, this feature has been the basis of a standard and successful empirical method of determining the value of the bulk $T_c$ from MC data for finite systems. Our previous \cite{CD3} and present figures demonstrate that this method is not accurately applicable, without additional information, to systems with $d = 5$.
\subsection*{5. Finite-size scaling functions of the ${\bf{\varphi^4}}$ field theory}
For the field-theoretic model the effective parameters $r^{eff}_0$ and $u^{eff}_0$, even in their asymptotic form in Eqs. (74) and (75), depend explicitly on the length $\Lambda^{-1}$, in addition to the lengths $\xi_0$, $l_0$, and $L$, as found already in the limit $n \rightarrow \infty$ \cite{CD1}. This means that none of the original nonuniversal model parameters $a_0$, $u_0$, and $\Lambda$ becomes unimportant, even close to $T_c$ and for large $L$. In scaled form the effective parameters read
\begin{eqnarray}
r^{eff} = r^{eff}_0 \, L^2 & = & x - 12\, I_1 (\bar{r})\, y - 144\, \vartheta_2 (y_0)\, I_2 (\bar{r})\, y^{3/2} \nonumber \\
& - & 12\, u_0 \Lambda^{d-4}\, a_1 (d, \bar{r} \Lambda^{-2}) \nonumber \\
& - & 144\, (u_0 \Lambda^{d-4})^{3/2}\, \vartheta_2 (y_0)\, a_2 (d, \bar{r} \Lambda^{-2})\, (\Lambda L)^{-d/2}
\end{eqnarray}
and
\begin{eqnarray}
u^{eff} & = & u_0^{eff} \, L^{4-d} = y + 36\, I_2 (\bar{r})\, y^2 \nonumber \\
& + & 36\, (u_0 \Lambda^{d-4})^2\, a_2 (d, \bar{r} \Lambda^{-2})\, (\Lambda L)^{2-d},
\end{eqnarray}
where $\bar{r}$ and $y_0$ are given in Eqs. (80) and (81). The last term $\sim L^{2-d}$ in Eq. (110) can be neglected asymptotically. Substituting these expressions into Eqs. (76), (84)-(88), and (107) (with $J_0 = 1$) yields the finite-size scaling functions of the order-parameter distribution function and of the quantities $\chi^{\pm}$, $M$, and $U$. The two-variable finite-size scaling functions depend on $x$ and $y$ and, in addition, explicitly on the nonuniversal parameter $u_0 \Lambda^{d-4}$. Thus the scaling functions are nonuniversal for $n = 1$, and obviously also for general $n$. At $T_c$, the asymptotic power laws of $\chi$ and $M$ are found to be
\begin{eqnarray}
\chi^+_c (L) \; &=& \; L^2 P^+_\chi (0,y) \sim L^{d-2}\, ,\\
M_c(L) \; &=& \; L^{(2-d)/2}\, P_M (0,y) \sim L^{-1}
\end{eqnarray}
for the field-theoretic model, which differ from those of the lattice model in Eqs. (89) and (90). The asymptotic value of $U$ at $T_c$ for the field-theoretic model is, in one-loop order,
\begin{equation}
\lim\limits_{L \rightarrow \infty} \; U(0,y) \; = \; 2/3,
\end{equation}
which is far from that of the lattice model in Eq. (91). The significant differences between Eqs. (89)-(91) and Eqs. (111)-(113) are due to the $L$-independent but cutoff-dependent term $\sim u_0 \Lambda^{d-4}$ in Eq. (109), similar to the constant additive term in Eq. (122) of Ref. \cite{CD1}. Existing MC data for Ising models on $d = 5$ lattices (such as the MC result $U_{MC}=0.319\pm 0.017$ in Ref.
\cite{RNB}) clearly disagree with these field-theoretic results and rule out the possibility that the $\varphi^4$ field theory provides a correct description of finite lattice systems above the upper critical dimension. In particular, the prediction of a breakdown of universality for finite systems above the upper critical dimension constitutes a serious failure of the continuum approximation for lattice systems. The present results for finite $n$ confirm our earlier assertion [4-6,22] regarding the applicability of the $\varphi^4$ field theory for $d > 4$.
\subsection*{6. Summary and conclusions}
We summarize and further comment on the results of this paper as follows. On the basis of a one-loop calculation for the $\varphi^4$ model on a lattice and for the $\varphi^4$ continuum model in a cubic geometry with periodic boundary conditions above four dimensions, we have shown that our general conclusions regarding universality and finite-size scaling inferred from the large-$n$ limit [4-6] remain valid for finite $n$. In particular, $\varphi^4$ field theory based on the Landau-Ginzburg-Wilson continuum Hamiltonian \cite{ZJ} does not correctly describe the leading finite-size effects of spin systems on a lattice with $d > 4$. Although the critical exponents of mean-field theory are exact for bulk systems above four dimensions, the thermodynamic theory of Gaussian fluctuations \cite{LL} is not applicable to finite systems with periodic boundary conditions in the critical region for $d > 4$. Finite-size scaling in terms of a single scaling variable, as predicted by the phenomenological theory of Binder et al. \cite{BNP} and by the lowest-mode approximation of Br\'{e}zin and Zinn-Justin \cite{B1}, is not valid for the $\varphi^4$ field theory. For the $\varphi^4$ lattice model it is not valid for any finite $\xi/L$, where $\xi$ is the bulk correlation length. As originally conjectured in Ref. \cite{15}, lowest-mode-like large-$L$ behavior is asymptotically correct for the lattice model at $T_c$, as shown previously for the susceptibility [4$-$6] and the Binder cumulant \cite{CD3}; furthermore, it is valid in the small region of finite $|w| \sim |t| L^{d/2}$ in the large-$L$ and small-$|t|$ limit (Fig. 1). This region, corresponding to a divergent ratio $\xi/L \sim L^{(d-4)/4} \rightarrow \infty$, represents only a small part (between the two curved dotted lines of Fig. 1) of the general finite-size scaling regime (of arbitrary finite $\xi/L$) for which earlier theories \cite{B1,BNP} were originally thought to be valid. Our two-variable finite-size scaling structure is consistent with that proposed by Privman and Fisher \cite{PF} but is significantly less universal than anticipated previously [15]. The inhomogeneous higher modes have been shown to be relevant above the upper critical dimension, contrary to different statements in the previous literature [1,3,10,12,15,27-43]. We have identified the characteristic length scale $l_0$, Eq. (30), of the finite-size effects of the higher modes in terms of the amplitude of the bulk correlation length $\xi$ at $T = T_c$ for small external field $h$. The one-loop finite-size effects arising from the relevant higher modes do not represent ``corrections'' to the lowest-mode approximation but constitute a generic part of the correct finite-size scaling structure. By contrast, two-loop contributions are expected to represent only quantitative corrections that will not change the scaling structure.
The ``shift of $T_c$'' \cite{B1} in the temperature variable $r_0^{eff}$ of the exponential order-parameter distribution function is proportional to $L^{-2}$ for the field-theoretic model and proportional to $I(t,L^{-1})L^{2-d}$ for the lattice model, where the function $I(t,L^{-1})$ has a finite limit $I(0,0)$. The effects caused by these shifts remain nonnegligible at any finite ratio $\xi/L$, even in the large-$L$ limit, as demonstrated in Eqs. (104) and (105) for the example of the susceptibility above and below $T_c$. The ``shift of $T_c$'' $\sim L^{2-d}$ mentioned in the preceding paragraph must be distinguished from shifts of characteristic temperatures, in the sense of pseudocritical temperatures \cite{F}, such as the temperature $T_{max}(L)$ at which the susceptibility $\chi^-(t,L)$ has its maximum, or the ``effective critical temperature'' $T_c(L)$ \cite{RNB} where the order parameter has its maximum slope. We find that these ``shifts'' have the asymptotic (large-$L$) behavior $T_c - T_{max}(L) \sim L^{-d/2}$ and $T_c - T_c(L) \sim L^{-d/2}$ for the $\varphi^4$ lattice model. Similarly our theory implies $\chi_{max}\sim L^{d/2}$ asymptotically. This is a simple consequence of the fact that the order-parameter distribution function shown in Fig. 3 has a finite limit for $L \rightarrow \infty$ at finite $w$ and that the positions of $T_{max}(L)$ and $T_c(L)$ remain located in the temperature region of finite $w$ in the limit $L \rightarrow \infty$. Our theory identifies the possible origin of a significant discrepancy between MC data at $d = 5$ [9] and the lowest-mode prediction for the Binder cumulant at $T_c$ \cite{B1} in terms of slowly decaying finite-size terms $\sim L^{(4-d)/2}$ (Fig. 5). These terms also mask the true asymptotic amplitudes of the power laws $\chi_c \sim L^{d/2}$ and $M_c \sim L^{-d/4}$ at $T_c$. For the same reason the method of determining the bulk $T_c$ (from MC data via the intersection point of the Binder cumulant) is demonstrated in Fig. 6 to become quantitatively inaccurate at $d = 5$, as found originally in Ref. \cite{CD3}. Quantitative predictions for various asymptotic finite-size scaling functions have been made for $d = 5$ and $n = 1$ (Figs. 2$-$6). These predictions are expected to be valid for sufficiently large $L/\tilde{a}$ and small $|t|$. The true range of applicability remains to be explored by quantitative comparisons with MC data, after an appropriate identification of the nonuniversal lengths $\xi_0$ and $l_0$. As noted previously \cite{CD1}, it is not yet established whether the $\varphi^4$ model on a finite lattice is fully equivalent to finite spin models as regards the leading and subleading finite-size effects. Our results indicate that in the limit $d-4 \rightarrow 0^+$, for systems with periodic boundary conditions, different amplitudes of finite-size effects at $d=4$ are obtained depending on whether a lattice model or a continuum model is considered. In view of this possible ambiguity at $d=4$, the limiting behavior for $4-d \rightarrow 0^+$ (i.e., $\epsilon \rightarrow 0^+$ in the standard $\epsilon=4-d$ expansion) should also be reexamined for lattice models at finite lattice spacing and for continuum models at finite cutoff. \vspace{5mm} {\bf{Note added}} After completion of the present work we received a preprint ``Finite-size scaling above the upper critical dimension revisited: The case of the five-dimensional Ising model'' by E. Luijten, K. Binder, and H.W.J.
Bl\"ote where the authors compare the asymptotic result for $U(x,y)$ in the (unexpanded) form of Eqs. (87) and (88) with their Monte Carlo data of the five-dimensional Ising model. The authors confirm the '' occurrence of spurious cumulant intersections'' predicted in Figs. 3 and 4 of Ref. \cite{CD3} and agree with the slow convergence of finite-size effects for $L \rightarrow \infty$ found in Ref. \cite{CD3} which essentially resolves the longstanding discrepancies noted in the MC studies in Refs. [7-9] regarding the Binder cumulant. On a much more quantitative level than considered previously [3-12], however, the authors estimate the length $l_0$ as $l_0 = 0.603 \; (13) $ and claim to find {\it{new}} ''significant discrepancies'' between their MC data for small $L$ and our asymptotic (large $L$) one-loop result for $\chi^+$ in the (unexpanded) form of Eq. (84). We doubt the significance of the quantitative deviations for small system sizes $L = 4$ and $8$ shown in their Figs. 7(a) and (b), except for the {\it{sign and curvature}} of the deviations from the large-$L$ behavior of the susceptibility shown in their Figs. 8 and 9. We propose that the latter issue can be resolved essentially on the basis of our complete {\it non-asymptotic} one-loop expression for $H^{eff}$ presented in Eqs. (47)-(49) of this paper or on the basis of the underlying order-parameter distribution function \cite{CDS}, rather than by an asymptotic two-loop calculation suggested by Luijten et al. We note that similar non-asymptotic effects are well known for small spin systems at $T_c$ in three dimensions as discussed in the context of Fig. 14 of Ref. [18]. We doubt the reliability of the estimate of $l_0=0.603 \; (13)$ by Luijten et al. since it was found by applying the asymptotic ($L \rightarrow \infty$) expression for $\chi_c^{+}$ in their Eq. (31) to non-asymptotic MC data. Our Fig. 5b indicates that the apparent (6 percent) mismatch between theory and MC data for $L \leq 22$ in Fig. 9 of Luijten et al. is due to this inadequate estimate of $l_0$. Part of the remarks by Luijten et al. regarding the limiting case $t \rightarrow 0$, $L \rightarrow \infty$ at fixed $t L^{d/2}$ agree with our earlier and present independent findings. We disagree, however, with their claim that ''there is no contradiction at all'' between our finite-size theory and the ideas of Ref. \cite{BNP}. First, we note that the ideas of Ref. [7] fail for the continuum $\varphi^4$ model. Second, we maintain that the single-variable scaling structure (for $h=0$) proposed in Ref. \cite{BNP} does not capture the correct structure of finite-size effects of the lattice model at any finite value of $\xi/L$ (see Fig.1). In particular the ideas of Ref. [7] do not lead to a correct description of the finite-size departures from bulk critical behavior at small but fixed $|t|$ in the asymptotic range $0 < |t| \ll 1$ [compare Eqs. (104) and (105)] to which an acceptable finite-size scaling structure should be applicable (such as the general structure proposed in Ref. \cite{PF}). \vspace{5mm} {\bf{Acknowledgment}} We thank K. Binder, H.W.J. Bl\"ote, and E. Luijten for sending us their preprint. Support by Sonderforschungsbereich 341 der Deutschen Forschungsgemeinschaft and by NASA under contract number 960838 and 100G7E094 is acknowledged. One of the authors (X.S.C.) thanks the National Natural Science Foundation of China for support under Grant No. 19704005. We also thank the referee for useful comments. \newpage
\section{Introduction}
In his 1905-1906 papers on Brownian motion for suspensions of hard spheres, Einstein obtained the now famous relation for the self-diffusion coefficient $D_{0}$ of noninteracting hard spheres of radius $R$ immersed in a solvent at temperature $T$ [1]:
\begin{equation}
D_{0}=\frac{k_{B}T}{\gamma }=\frac{k_{B}T}{6\pi R\eta _{0}}. \tag{1.1}
\end{equation}
In this formula $\eta _{0}$ is the viscosity of the pure solvent and $k_{B}$ is Boltzmann's constant. This result is valid only in the infinite dilution limit. In another paper [2] Einstein took into account the effects of finite concentrations and obtained the first nonvanishing correction to $\eta _{0}$ for small but finite concentrations. It is given by
\begin{equation}
\eta /\eta _{0}=1+2.5\,\varphi +O\left( \varphi ^{2}\right) , \tag{1.2}
\end{equation}
with $\varphi$ being the volume fraction $\varphi :=\frac{n}{V}\frac{4}{3}\pi R^{3}$. In this formula $n$ is the number of monodisperse hard spheres in the volume $V$. If we formally replace $\eta _{0}$ by $\eta$ in Eq. (1.1), the resulting expression can be cautiously used as a definition of the cooperative diffusion coefficient $D$, i.e.
\begin{equation}
D=\frac{k_{B}T}{6\pi R\eta }. \tag{1.3}
\end{equation}
Below, we use the symbol $D_{0}$ for the self-diffusion coefficient and $D$ for the cooperative diffusion coefficient. By combining Eqs. (1.1)-(1.3), we also obtain:
\begin{equation}
D/D_{0}=1-2.5\varphi +O\left( \varphi ^{2}\right) . \tag{1.4}
\end{equation}
Eq. (1.4) compares well with experimental results, e.g. those discussed in Ref. [3]\footnote{The data on page 5 and in Table 2 of this reference support our conjecture.}. Numerous attempts have been made to obtain results like Eq. (1.4) systematically. The above results are restricted by the observation that Stokes' formula for the friction $\gamma$ is applicable only for time scales longer than the characteristic relaxation time $\tau _{r}$ of the solvent, $\tau _{r}:=\rho R^{2}/\pi \eta _{0}$, e.g. see [4]. In this formula $\rho$ is the density of the pure solvent. This requirement provides the typical cut-off time scale, while the parameter $R$ serves as a typical space cut-off for the problems we are going to study in this work. By analogy with the theory of nonideal gases, the expansion in Eq. (1.2) is referred to as a ``virial'' expansion. Unlike the theory of nonideal gases, where the virial coefficients are known exactly to a very high order [5], the values of the coefficients in the virial expansion for $\eta$ have remained an active area of research to date, even in the low concentration regime. Considerable progress has been made in obtaining closed-form approximations describing the rheological properties of suspensions of hard spheres in a broad range of concentrations [6-8]. Similar results for particles of other geometries are much less complete [7,9]. An extension of these results to solutions of polymers has taken place in parallel with these developments [10]. Noticeable advancements have been made in our understanding of the rheology of dilute and semidilute polymer solutions for fully flexible polymers and rigid rods. It should be noted, though, that polymers add further complexities because the connectivity of the polymer chain backbone plays an essential role in calculations of the rheological properties of polymer solutions.
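Before discussing polymer-specific complications further, a simple numerical illustration of Eqs. (1.1), (1.2), and (1.4) may be useful. The parameter values below (a water-like solvent and micron-sized spheres) are our own illustrative choices and are not taken from Refs. [1-3].
\begin{verbatim}
import math

k_B  = 1.380649e-23   # Boltzmann's constant, J/K
T    = 298.0          # temperature, K
eta0 = 8.9e-4         # pure-solvent viscosity, Pa*s (water-like, illustrative)
R    = 0.5e-6         # hard-sphere radius, m (illustrative)

# Eq. (1.1): self-diffusion coefficient in the infinite dilution limit
D0 = k_B * T / (6.0 * math.pi * R * eta0)
print(f"D0 = {D0:.3e} m^2/s")

# Eqs. (1.2) and (1.4): first-order virial corrections
for phi in (0.01, 0.05, 0.10):
    print(f"phi = {phi:.2f}:  eta/eta0 = {1 + 2.5*phi:.3f},"
          f"  D/D0 = {1 - 2.5*phi:.3f}")
\end{verbatim}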
The effect of chain connectivity on the viscoelastic properties of polymer solutions has been an object of extensive discussion, and many of the theoretical difficulties encountered in describing these solutions are shared by suspensions of hard spheres. In particular, it is known [11] that particles immersed in a viscous fluid affect the motion of each other both hydrodynamically and by direct interaction (hard core, etc.). Since the motion of particles in a fluid is correlated, it contributes to the distribution of local velocities within the fluid. The behavior of many systems other than hard sphere suspensions (e.g. those listed in Section 6) happens to be closely related or even isomorphic to that noticed in suspensions. This observation makes the study of suspensions important in many areas of physics, chemistry, and biology. For reasons which will become apparent upon reading, in this work we shall mention only physical applications. In polymer solutions, when the polymer concentration $\varphi$ increases, it is believed that the hydrodynamic interactions become unimportant due to the effects of hydrodynamic screening [12]. To our knowledge, screening has not been established in the theory of hard sphere suspensions. If it were to occur in suspensions, the screened particle motion could be affected only by thermal fluctuations (truly Brownian motion!). Such Brownian hard spheres can be described by the short-range interacting random walk model [13]. For finite concentrations, we expect the longer-range hydrodynamic interactions to be very important. That this is indeed the case is the central theme of our paper. In what follows, we provide theoretical arguments in favor of hydrodynamic interparticle interactions and screening, which must be present in solutions at non-vanishing concentrations. By exploiting analogies between electrodynamics and fluid mechanics we shall demonstrate that hydrodynamic screening occurs in much the same way as screening of the magnetic field in superconductors. Therefore, mathematically, the description of screening in suspensions is analogous to that of the Meissner effect in superconductors. This observation will allow us to account for a number of interesting properties of suspensions. For instance, the viscosity of hard sphere suspensions is known to diverge beyond some critical concentration $\varphi ^{\ast }$. This phenomenon has been observed experimentally and is well documented, e.g. see Refs. [14-18]. All these references are concerned with changes in the rheological properties of suspensions occurring with changes in concentration $\varphi$. Using scaling-type arguments, Brady, Ref. [19], and, independently, Bicerano \textit{et al.}, Ref. [20], found that near $\varphi =\varphi ^{\ast }$ the relative viscosity $\eta /\eta _{0}$ diverges as $\eta /\eta _{0}=C(1-\varphi /\varphi ^{\ast })^{-2}$, with $C$ being some constant. Furthermore, as shown by Bicerano \textit{et al.}, such an analytical dependence of the relative viscosity on concentration $\varphi$ actually works extremely well for all concentrations. In view of (1.3), it is reasonable to expect vanishing of $D$ for $\varphi \rightarrow \varphi ^{\ast }$. This phenomenon was indeed observed in Ref. [21]. Theoretically, the result for the relative viscosity was obtained as the result of a combined nontrivial use of topological and combinatorial arguments. Such arguments can also be used, for instance, for the description of the onset of turbulence in fluids or gases.
As described in Ref. [22], such a regime in these substances is characterized by a sharp increase in the viscosity (just as in suspensions). According to Chorin, Ref. [22], Section 6.8, one can think of such an increase as analogous to the processes which take place in superfluid $^{4}$He when one goes in temperature from below to above the $\lambda$-transition, that is, from the superfluid to the normal fluid state. Such a transition is believed to be associated with the uninhibited proliferation of tangled vortices on all scales. In this work we demonstrate that Chorin's conjecture is indeed correct. This interpretation is possible only if both topological and combinatorial arguments are rigorously and carefully taken into account. Surprisingly, when this is done, the emerging description becomes isomorphic to that known for the Bose-Einstein condensation transition. Because of this, in addition to turbulence, in the concluding section of this work we briefly discuss a number of apparently different physical systems whose behavior under certain conditions resembles that found in colloidal suspensions. The rest of this paper is organized as follows. In Section 2, we introduce notation, discuss experimental data with the help of the previously found generalized Stokes-Einstein relation [23], and make conjectures about how these results should be interpreted if hydrodynamic screening does exist. Some familiarity with the Ginzburg-Landau (G-L) theory of superconductivity is expected for a proper understanding of this and the following sections. In Section 3 we study in detail how many-particle diffusion processes are affected when hydrodynamic interactions are taken into account. The major new results of this section are given in Section 3.3, where we rigorously demonstrate that accounting for hydrodynamic interactions modifies Fick's laws of diffusion in the same way as the presence of an electromagnetic field modifies the Schr\"odinger equation for charged particles. The gauge fields emerging in the modified Fick's equations are of zero curvature, implying involvement of the Chern-Simons topological field theory. The following Section 4 considers in detail the implications of the results obtained in Section 3. The major new result of this section is given in Section 4.4, where we adopt the logic of the groundbreaking paper by London and London [24] in order to demonstrate the existence of hydrodynamic screening. Thus, the phenomenon of screening in suspensions is analogous to the Meissner effect in superconductors [25]. In Section 5 we follow the logic of the Ginzburg-Landau paper [26], which elaborated the work of the London brothers, and develop a similar G-L-type theory for suspensions. The major new result of this section is presented in Section 5.5, in which, by using combinatorial and topological methods, we reproduce the scaling results of Brady [19] and Bicerano et al., Ref. [20]. In Section 6 we place the obtained results in a much broader context. This is done with the help of two key concepts: helicity and force-free fields. They have been in use for some time in areas such as magnetohydrodynamics; fluid, plasma, and gas turbulence; and classical mechanics written in hydrodynamic formalism, but not in superconductivity or colloidal suspensions. In this section we mention as well other uses of these concepts in disciplines such as high-temperature superconductivity, quantum chromodynamics, string theory, non-Abelian fluids, etc.
The paper also contains three appendices, which are made sufficiently self-contained. They are not only very helpful in providing details supporting the results of the main text but are also of independent interest.
\section{Stokes-Einstein Virial Expansions for a Broad Concentration Range}
\subsection{General Results}
In 1976, Batchelor obtained the following general result for the cooperative diffusion coefficient [27]:
\begin{equation}
D\left( \varphi \right) =\frac{K\left( \varphi \right) }{6\pi \eta r}\frac{\varphi }{1-\varphi }\left( \frac{\partial \mu }{\partial \varphi }\right) _{p,T}, \tag{2.1}
\end{equation}
where $K\left( \varphi \right)$ is the sedimentation coefficient of the particles in suspension and $\mu$ is the chemical potential. Batchelor obtained for $K\left( \varphi \right)$ the following result:
\begin{equation}
K\left( \varphi \right) =1-6.55\varphi +O\left( \varphi ^{2}\right) , \tag{2.2}
\end{equation}
so that Eq. (2.1), with the first-order result for $K\left( \varphi \right)$ thus obtained, can be used only for low concentrations. In Ref. [3] an attempt was made to extend Batchelor's results to higher concentrations. This was achieved in view of the fact that
\begin{equation}
\frac{\varphi }{1-\varphi }\left( \frac{\partial \mu }{\partial \varphi }\right) _{p,T}=\left( \frac{\partial \Pi }{\partial n}\right) _{p,T}, \tag{2.3}
\end{equation}
where $\Pi$ is the osmotic pressure. Use of this result in (2.1) produces:
\begin{equation}
D\left( \varphi \right) =\frac{K\left( \varphi \right) }{6\pi \eta r}\left( \frac{\partial \Pi }{\partial n}\right) _{p,T}. \tag{2.4}
\end{equation}
The Carnahan-Starling equation of state for hard spheres can be used to obtain the following result for the compressibility
\begin{equation}
\left( \frac{\partial \Pi }{\partial n}\right) _{p,T}=k_{B}T\frac{\left[ \left( 1+2\varphi \right) ^{2}+\left( \varphi -4\right) \varphi ^{3}\right] }{\left( 1-\varphi \right) ^{4}}, \tag{2.5}
\end{equation}
thus converting Eq. (2.4) into
\begin{equation}
D\left( \varphi \right) =\frac{K\left( \varphi \right) }{6\pi \eta r}k_{B}T\frac{\left[ \left( 1+2\varphi \right) ^{2}+\left( \varphi -4\right) \varphi ^{3}\right] }{\left( 1-\varphi \right) ^{4}}. \tag{2.6}
\end{equation}
To be in accord with Batchelor's result (2.2) at low concentrations, the authors of Ref. [3] suggested replacing Eq. (2.2) by
\begin{equation}
K\left( \varphi \right) \approx \left( 1-\varphi \right) ^{6.55}, \tag{2.7}
\end{equation}
which allows us to rewrite (2.6) in the following final form
\begin{equation}
D/D_{0}=\left( 1-\varphi \right) ^{6.55}\frac{\left[ \left( 1+2\varphi \right) ^{2}+\left( \varphi -4\right) \varphi ^{3}\right] }{\left( 1-\varphi \right) ^{4}}, \tag{2.8}
\end{equation}
convenient for comparison with experimental data. Such a comparison can be found in Fig. 12 of Ref. [3], where this result is plotted against the authors' own experimental data for the cooperative diffusion coefficient. The experimental data, within error margins, appear to agree extremely well with the theoretical curve obtained from Eq. (2.8). However, it should be kept in mind that originally Eq. (2.2) was determined only to first order in $\varphi$ (and, therefore, only for volume fractions less than about 0.05). Therefore, formally, Eq. (2.8) is in accord with Eq. (2.2) only for volume fractions less than about 0.03.
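For readers who wish to reproduce the theoretical curve of Fig. 12 of Ref. [3], the sketch below simply tabulates Eq. (2.8) over a range of volume fractions; no assumptions beyond Eq. (2.8) itself enter.
\begin{verbatim}
import numpy as np

def D_ratio(phi):
    # Eq. (2.8): D/D0 = (1 - phi)**6.55
    #            * [(1 + 2*phi)**2 + (phi - 4)*phi**3] / (1 - phi)**4
    return ((1.0 - phi)**6.55
            * ((1.0 + 2.0*phi)**2 + (phi - 4.0)*phi**3)
            / (1.0 - phi)**4)

for phi in np.arange(0.0, 0.45, 0.05):
    print(f"phi = {phi:.2f}:  D/D0 = {D_ratio(phi):.3f}")
\end{verbatim}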
Given this limitation, it is clear from Fig. 12 of Ref. [3] that, to improve the agreement over the whole range of concentrations, knowledge of the second-order term in $\varphi$ in Eq. (2.2) is desirable. This problem can be bypassed as follows. In Ref. [3] viscosity data from the same experiments were obtained, so that the data can be fitted to the following second-order expansion:
\begin{equation}
\eta /\eta _{0}=1+2.5\varphi +6.54\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.9}
\end{equation}
To obtain this result, the authors constrained the first-order coefficient to 2.5 to comply with Einstein's result (1.2) for the viscosity. If one considers these data without such a constraint, then one obtains
\begin{equation}
\eta /\eta _{0}=1+2.4\varphi +7.1\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.10}
\end{equation}
In the paper by Kholodenko and Douglas [23] the following result for the cooperative diffusion coefficient was derived (the generalized Stokes-Einstein relation):
\begin{equation}
D/D_{0}=\frac{1}{\left( \eta /\eta _{0}\right) }\left[ \frac{S\left( \mathbf{0},0\right) }{S_{0}\left( \mathbf{0},0\right) }\right] ^{-1/2}, \tag{2.11}
\end{equation}
where $S\left( \mathbf{0},0\right)$ is the $\mathbf{k}=\mathbf{0}$, zero-angle static scattering form factor. The thermodynamic sum rule for the hard sphere gas produces the following result for this form factor:
\begin{equation}
\left[ \frac{S\left( \mathbf{0},0\right) }{S_{0}\left( \mathbf{0},0\right) }\right] ^{-1/2}=1+4\varphi +7\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.12}
\end{equation}
By combining Eqs. (2.9)-(2.12), the result for the cooperative diffusion coefficient is obtained:
\begin{equation}
D/D_{0}=1+1.6\varphi -3.9\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.13}
\end{equation}
For the sake of comparison with experiment, we made a numerical fit to the experimental data for higher concentrations obtained in Ref. [3] by a polynomial (up to second order in $\varphi$), with the result\footnote{The correlation coefficient obtained for this fit is 0.97.}:
\begin{equation}
D/D_{0}=1+1.5505\varphi -5.3663\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.14}
\end{equation}
Comparison between Eqs. (2.13) and (2.14) shows that the theoretically obtained result, Eq. (2.13), is in good agreement with the experimental data, Eq. (2.14), within error margins. Alternatively, we can use the reciprocal of the empirical expression, Eq. (2.14), in (2.11) to obtain
\begin{equation}
\eta /\eta _{0}=1+2.45\varphi +8.5\varphi ^{2}+O\left( \varphi ^{3}\right) , \tag{2.15}
\end{equation}
which also compares well with the experimental data, Eq. (2.10).
\subsection{The generalized Stokes-Einstein relation and the role of hydrodynamic screening}
We return now to Eq. (2.11) for further discussion. Based on the results of the introductory section, especially Eqs. (1.1) and (1.3), we can formally write:
\begin{equation}
D/D_{0}=\frac{R}{R^{\ast }}\frac{\eta _{0}}{\eta }=1+a_{1}\varphi +a_{2}\varphi ^{2}+O\left( \varphi ^{3}\right) . \tag{2.17}
\end{equation}
The actual values of $a_{i}$, $i=1,2,...,$ can be determined using Eq. (2.11) written in the following form
\begin{equation}
D/D_{0}=\frac{\left( \xi /\xi _{0}\right) ^{-1}}{\left( \eta /\eta _{0}\right) }=\frac{1}{\left( \eta /\eta _{0}\right) }\left[ \frac{S\left( \mathbf{0},0\right) }{S_{0}\left( \mathbf{0},0\right) }\right] ^{-1/2}, \tag{2.18}
\end{equation}
where $\xi$ is the correlation length and $\xi _{0}\sim R$ is its value in the infinite dilution limit.
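The series manipulations leading from Eqs. (2.10)-(2.12) to Eqs. (2.13) and (2.15) are elementary but error-prone by hand; the symbolic sketch below checks them to $O(\varphi ^{2})$.
\begin{verbatim}
import sympy as sp

phi = sp.symbols("phi")
S_factor  = 1 + 4*phi + 7*phi**2          # Eq. (2.12): [S/S0]**(-1/2)
eta_ratio = 1 + 2.4*phi + 7.1*phi**2      # Eq. (2.10), unconstrained fit

# Eq. (2.11): D/D0 = [S/S0]**(-1/2) / (eta/eta0), expanded to O(phi**2)
D_ratio = sp.series(S_factor / eta_ratio, phi, 0, 3).removeO()
print(sp.expand(D_ratio))    # -> 1 + 1.6*phi - 3.94*phi**2, cf. Eq. (2.13)

# inverting Eq. (2.11) with the empirical fit of Eq. (2.14)
D_fit = 1 + 1.5505*phi - 5.3663*phi**2
eta_back = sp.series(S_factor / D_fit, phi, 0, 3).removeO()
print(sp.expand(eta_back))   # -> 1 + 2.45*phi + 8.57*phi**2, cf. Eq. (2.15)
\end{verbatim}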
To justify the move made in Eqs. (2.17) and (2.18), we need to remind our readers of some facts from the dynamical theory of linear response. To do so, we borrow some results from our previous work [23].

Generally, both $D$ and $D_{0}$ are measured by light scattering experiments. In these experiments the Fourier transform of the density-density correlator
\begin{equation}
S(\mathbf{R},\tau )=\langle \delta n(\mathbf{r},t)\delta n(\mathbf{r}^{\prime },t^{\prime })\rangle \tag{2.19}
\end{equation}
is measured. The form factor, Eq. (2.19), is written taking into account translational invariance, which requires the above correlator to be a function of the relative distance $\left\vert \mathbf{r}-\mathbf{r}^{\prime }\right\vert \equiv \left\vert \mathbf{R}\right\vert$ only. Time homogeneity makes it, in addition, a function of $\left\vert t-t^{\prime }\right\vert \equiv \tau$ only. In this expression, $\langle ...\rangle$ represents an equilibrium thermal average, while the density fluctuations are given by $\delta n(\mathbf{r},t)=n(\mathbf{r},t)-\langle n\rangle$. By definition, the Fourier transform of Eq. (2.19) is given by
\begin{equation}
S(\mathbf{q},\omega )=\int d\mathbf{R}\int d\tau \,S(\mathbf{R},\tau )e^{i(\mathbf{q}\cdot \mathbf{R}-\omega \tau )}. \tag{2.20}
\end{equation}
Using this expression, we obtain the initial decay rate $\Gamma _{\mathbf{q}}^{(0)}$ as follows [10,23]:
\begin{equation}
\Gamma _{\mathbf{q}}^{(0)}=-\frac{\partial }{\partial \tau }\ln \left[ \int_{-\infty }^{\infty }\frac{d\omega }{2\pi }e^{i\omega \tau }S(\mathbf{q},\omega )\right] _{\tau \rightarrow 0^{+}}. \tag{2.21}
\end{equation}
With the help of this result, the cooperative diffusion coefficient is obtained as
\begin{equation}
D=\frac{\partial }{\partial q^{2}}\Gamma _{\mathbf{q}}^{(0)}|_{\mathbf{q}=0}. \tag{2.22}
\end{equation}
In the limit of vanishingly low concentrations the self-diffusion coefficient is known to be [10]
\begin{equation}
D_{0}=\frac{1}{6}\lim_{t\rightarrow \infty }\frac{1}{t}\left\langle \left\{ \mathbf{r}(t)-\mathbf{r}(0)\right\} ^{2}\right\rangle , \tag{2.23}
\end{equation}
where $\left\langle ...\right\rangle$ denotes the Gaussian-type average. Following Lovesey [28], it is convenient to rewrite this result as
\begin{equation}
D_{0}=\frac{1}{3}\int_{0}^{\infty }dt\left\langle \mathbf{v}\cdot \mathbf{v}(t)\right\rangle \tag{2.24}
\end{equation}
in view of the fact that if
\begin{equation}
\mathbf{r}(t)-\mathbf{r}(0)=\int\limits_{0}^{t}\mathbf{v}(\tau )d\tau \tag{2.25a}
\end{equation}
then
\begin{equation}
\left\langle \left( \mathbf{r}(t)-\mathbf{r}(0)\right) ^{2}\right\rangle =\left\langle \left\{ \int\limits_{0}^{t}\mathbf{v}(\tau )d\tau \right\} ^{2}\right\rangle =2\int\limits_{0}^{t}d\tau \int\limits_{0}^{\tau }d\bar{\tau}\left\langle \mathbf{v}(\tau )\cdot \mathbf{v}(\bar{\tau})\right\rangle =2\int\limits_{0}^{t}d\tau \,(t-\tau )\left\langle \mathbf{v}(\tau )\cdot \mathbf{v}(0)\right\rangle , \tag{2.25b}
\end{equation}
while, by definition, $D_{0}=\frac{1}{6}\lim_{t\rightarrow \infty }\frac{1}{t}\left\langle \left( \mathbf{r}(t)-\mathbf{r}(0)\right) ^{2}\right\rangle$. With these definitions in place and taking into account Eq. (2.18), we would like now to discuss in more detail the relationship between $D$ and $D_{0}$. Using Eq. (2.29) of Ref.
[23] we obtain
\begin{equation}
D=\lim_{\tau \rightarrow 0^{+},\mathbf{k}=0}\frac{1}{3}\int\limits_{0}^{\infty }dt^{\prime \prime }\frac{<\mathbf{j}(\mathbf{0},t)\cdot \mathbf{j}(\mathbf{0},t^{\prime \prime })>}{S(\mathbf{0},0)}, \tag{2.26}
\end{equation}
where the current $\mathbf{j}$ is given as $\mathbf{j}=\delta n(\mathbf{r},t)\mathbf{v}(\mathbf{r},t)$, provided that the no-slip boundary condition
\begin{equation}
\mathbf{v}_{f}(\mathbf{r},t)=\frac{d\mathbf{r}}{dt}\equiv \mathbf{v}(t) \tag{2.27}
\end{equation}
is applied. Here $\mathbf{v}_{f}(\mathbf{r},t)$ is the velocity of the fluid and $\mathbf{r}(t)$ is the position of the center of mass of the hard sphere with respect to the chosen frame of reference. Eq. (2.26) is in agreement with Eq. (2.24) in view of the fact that in the limit of zero concentration $S(\mathbf{0},0)=1$, so that $<\mathbf{j}(\mathbf{0},t)\cdot \mathbf{j}(\mathbf{0},t^{\prime \prime })>\rightarrow \left\langle \mathbf{v}\cdot \mathbf{v}(t)\right\rangle$, as we would like to demonstrate now. For this purpose, in view of Eq. (2.22) it is convenient to rewrite the result Eq. (2.26) in the equivalent form
\begin{equation}
S(\mathbf{q},0)\Gamma _{\mathbf{q}}^{(0)}=\int\limits_{0}^{\infty }dt^{\prime \prime }\mathbf{q}\cdot <\mathbf{j}(\mathbf{q},t)\mathbf{j}(-\mathbf{q},t^{\prime \prime })>\cdot \mathbf{q}\mid _{\tau \rightarrow 0^{+}} \tag{2.28}
\end{equation}
in accord with Eq. (2.15) of Ref. [23]. This relation is very convenient for theoretical analysis. For instance, it is straightforward to obtain $D$ in the decoupling approximation suggested by Ferrell [21]. It is given by
\begin{equation}
\mathbf{q}\cdot <\mathbf{j}(\mathbf{q},t)\mathbf{j}(-\mathbf{q},t^{\prime })>\cdot \mathbf{q}\,\dot{=}\,\mathbf{q}\cdot <\mathbf{v}(\mathbf{q}^{\prime },t)\mathbf{v}(\mathbf{q-q}^{\prime },t^{\prime })>\cdot \mathbf{q}\,\langle \delta n(\mathbf{q-q}^{\prime },t)\delta n(\mathbf{q}^{\prime },t^{\prime })\rangle . \tag{2.29}
\end{equation}
In Section 5.4 we provide a proof that the above decoupling is in fact exact. This explains why it works so well in real experiments. In the meantime, we shall consider this decoupling as an approximation. Once such an approximation is made, the problem of calculating $D$ is reduced to the evaluation of the correlators defined in Eq. (2.29). For the velocity-velocity correlator, the following expression was obtained before (e.g. see Ref. [23], Eq. (2.18)):
\begin{eqnarray}
&<&v_{i\mathbf{k}}(t)v_{j\mathbf{k}^{\prime }}(t^{\prime })>=(2\pi )^{3}\delta (\mathbf{k}+\mathbf{k}^{\prime })\{\delta _{ij}-\frac{k_{i}k_{j}}{k^{2}}\}\frac{2k_{B}T}{\eta k^{2}}\delta (t-t^{\prime }) \nonumber \\
&\equiv &2k_{B}T\mathcal{H}_{ij}(\mathbf{k})\delta (t-t^{\prime }), \tag{2.30}
\end{eqnarray}
with $i,j=1,2,3$. This expression defines the Oseen tensor $\mathcal{H}_{ij}(\mathbf{k})$, to be discussed in detail in the next section. The presence of the delta function $\delta (t-t^{\prime })$ in Eq. (2.30) makes it possible to look only at the equal-time density-density correlator in the decomposition of the $\mathbf{j}-\mathbf{j}$ correlator given by Eq. (2.29). Such a correlator was also discussed in Ref. [23], where it is shown to be
\begin{equation}
\langle \delta n(\mathbf{k-k}^{\prime },t)\delta n(\mathbf{k}^{\prime },t)\rangle =k_{B}T<n>\left[ \frac{\partial }{\partial \Pi }<n>\right] _{T}. \tag{2.31a}
\end{equation}
Actually, it is both time and $k$-independent since, as is well known, it is the thermodynamic sum rule.
That is,
\begin{equation}
S(\mathbf{0},0)=k_{B}T<n>\left[ \frac{\partial }{\partial \Pi }<n>\right] _{T}. \tag{2.31b}
\end{equation}
It is convenient to rewrite this result as follows
\begin{equation}
k_{B}T<n>\left[ \frac{\partial }{\partial \Pi }<n>\right] _{T}=\int d\mathbf{R}\,S(\mathbf{R},0)\equiv S(\mathbf{0},0), \tag{2.31c}
\end{equation}
implying that
\begin{equation}
S(\mathbf{R},0)=k_{B}T<n>\left[ \frac{\partial }{\partial \Pi }<n>\right] _{T}\delta (\mathbf{R}). \tag{2.32}
\end{equation}
In view of Eq. (2.12), we notice that in the limit of vanishing concentrations $S(\mathbf{0},0)=1$. In such an extreme case the decoupling made in Eq. (2.29), superimposed with the definition, Eq. (2.22), and the fact that
\[
\frac{\partial }{\partial q^{2}}\cdots =\frac{1}{3}\,\mathrm{tr}\Big(\sum\limits_{i,j}\frac{\partial }{\partial q_{i}}\frac{\partial }{\partial q_{j}}\cdots \Big),
\]
produces the anticipated result, Eq. (2.24), as required. Next, following Ferrell [21], we regularize the delta function in Eq. (2.32). Using the identity $1=\int\limits_{0}^{\infty }dx\,xe^{-x}$, the regularized expression for $S(\mathbf{R},0)$ is obtained as follows
\begin{equation}
S(r,0)=\frac{k_{B}T}{4\pi \xi ^{2}}<n>\left[ \frac{\partial }{\partial \Pi }<n>\right] _{T}\frac{1}{r}e^{-\frac{r}{\xi }}, \tag{2.33}
\end{equation}
where $r=\left\vert \mathbf{R}\right\vert$ and the parameter $\xi$ is proportional to the \textsl{static} correlation length\footnote{For more details, see our work, Ref. [23].}. To use this expression for the calculation of $D$ by employing Eq. (2.26), we have to transform the hydrodynamic correlator, Eq. (2.30), into coordinate form as well. Such a form is given in Eq. (2.33) of Ref. [23] as
\begin{equation}
<\mathbf{v}(\mathbf{r},t)\cdot \mathbf{v}(\mathbf{r}^{\prime },t^{\prime })>=\frac{k_{B}T}{\pi \eta }\frac{1}{\left\vert \mathbf{r}-\mathbf{r}^{\prime }\right\vert }\delta (t-t^{\prime }). \tag{2.34}
\end{equation}
This expression is written with total disregard of possible effects of hydrodynamic screening, though. The combined use of Eqs. (2.33) and (2.34) in Eq. (2.26) produces the anticipated result
\begin{equation}
D=\frac{k_{B}T}{3\pi \eta \xi } \tag{2.35}
\end{equation}
in accord with that obtained by Ferrell, Ref. [21], Eq. (11), provided that we redefine the (still arbitrary) parameter $\xi$ as $2\check{\xi}$. The result (2.35) also coincides with Eq. (1.3) if we identify $\check{\xi}$ with $R^{\ast }$. Furthermore, by looking at Eq. (2.18) we realize that in the infinite dilution limit we have to replace $\check{\xi}$ by $\xi _{0}$ and, accordingly, $\eta$ by $\eta _{0}$. Such an identification leads to the generalized Stokes-Einstein relation in the form given by Eq. (2.18), implying that
\begin{equation}
\check{\xi}/\xi _{0}=\left[ \frac{S\left( \mathbf{0},0\right) }{S_{0}\left( \mathbf{0},0\right) }\right] ^{1/2}. \tag{2.36a}
\end{equation}
Since we have noticed before that $S_{0}\left( \mathbf{0},0\right) =1$, this result can be rewritten as
\begin{equation}
\check{\xi}=\sqrt{S\left( \mathbf{0},0\right) }\,\xi _{0}. \tag{2.36b}
\end{equation}
Suppose now that hydrodynamic interactions are screened to some extent. In such a case the result Eq. (2.34) should be modified accordingly.
Thus, we obtain
\begin{equation}
<\mathbf{v}(\mathbf{r},t)\cdot \mathbf{v}(\mathbf{r}^{\prime },t^{\prime })>=\frac{k_{B}T}{\pi \eta }\frac{\exp (-\dfrac{r}{\xi _{H}})}{r}\delta (t-t^{\prime }), \tag{2.37}
\end{equation}
where we have introduced the \textsl{hydrodynamic} correlation length $\xi _{H}$. If, as we shall demonstrate below, the analogy between hydrodynamics and superconductivity makes sense under some conditions, then, using this assumed analogy, we introduce the Ginzburg parameter $\kappa _{G}$ for this problem via the known relation [25]:
\begin{equation}
\xi _{H}=\kappa _{G}\check{\xi}. \tag{2.38}
\end{equation}
Using Eqs. (2.33), (2.37), and (2.38) in (2.26), the result for $D$, Eq. (2.35), acquires the following form:
\begin{equation}
D=\frac{k_{B}T}{6\pi \Sigma \eta }(1+\frac{1}{\kappa _{G}})^{-1}. \tag{2.39}
\end{equation}
Since the adjustable parameter $\Sigma$ is introduced in Eq. (2.39) quite arbitrarily, we can, following Ferrell, Ref. [21], take full advantage of this fact now. To do so, we notice that, from the point of view of the observer, the relation (2.36) holds irrespective of the absence or presence of hydrodynamic screening. Because of this, we write
\begin{equation}
\Sigma (1+\frac{1}{\kappa _{G}})=\check{\xi}, \tag{2.40}
\end{equation}
so that Eq. (2.36), used in the generalized Stokes-Einstein relation, remains unchanged. By combining Eqs. (2.36b), (2.38), and (2.40) we obtain:
\begin{equation}
\kappa _{G}=\frac{\xi _{H}}{\xi _{0}}\frac{1}{\sqrt{S\left( \mathbf{0},0\right) }}=\frac{\Sigma }{\xi _{0}}\frac{\kappa _{G}}{\sqrt{S\left( \mathbf{0},0\right) }}(1+\frac{1}{\kappa _{G}}) \tag{2.41a}
\end{equation}
or, equivalently,
\begin{equation}
\frac{\xi _{0}}{\Sigma }=\frac{1}{\sqrt{S\left( \mathbf{0},0\right) }}(1+\frac{1}{\kappa _{G}}). \tag{2.41b}
\end{equation}
In this equation the parameter $\Sigma$ is still undefined. We can define this parameter now based on physical arguments. In particular, let us set $\Sigma =S\left( \mathbf{0},0\right) \xi _{0}$. Then we end up with the equation
\begin{equation}
1+\frac{1}{\kappa _{G}}=\frac{1}{\sqrt{S\left( \mathbf{0},0\right) }}, \tag{2.42}
\end{equation}
leading to
\begin{equation}
\kappa _{G}=\frac{1}{\frac{1}{\sqrt{S\left( \mathbf{0},0\right) }}-1}. \tag{2.43}
\end{equation}
To reveal the physical meaning of this equation we use Eqs. (2.36b), (2.38), and (2.43) in order to obtain
\begin{equation}
\xi _{H}=\frac{\sqrt{S\left( \mathbf{0},0\right) }\,\xi _{0}}{\dfrac{1}{\sqrt{S\left( \mathbf{0},0\right) }}-1}. \tag{2.44}
\end{equation}
From Eq. (2.12) we notice that in the infinite dilution limit $\varphi \rightarrow 0$ we obtain $\xi _{H}\rightarrow \infty$, implying the absence of hydrodynamic screening. Consider the opposite case: $\varphi \rightarrow \infty$ (that is, in practice, $\varphi$ being large). Looking at Eq. (2.12) we notice that in this case $\xi _{H}\rightarrow 0$, indicating complete screening, as expected. Using Eq. (2.42), these results allow us to rewrite the generalized Stokes-Einstein relation, Eq. (2.18), in an equivalent form emphasizing the role of hydrodynamic screening. Thus, we obtain
\begin{equation}
D/D_{0}=\frac{1}{\left( \eta /\eta _{0}\right) }(1+\frac{1}{\kappa _{G}}).
\tag{2.45}
\end{equation}

\section{Diffusion processes in the presence of hydrodynamic interactions}
\subsection{Some facts from the diffusion theory}
If $n$ is the local density, then the flux $\mathbf{j}=n\mathbf{v}$ obeys Fick's first law:
\begin{equation}
\mathbf{j}=-D\mathbf{\nabla }n, \tag{3.1}
\end{equation}
where $D$ is the (in general, cooperative) diffusion coefficient and $\mathbf{v}$ is the local velocity. Upon substitution of this expression into the continuity equation
\begin{equation}
\frac{\partial n}{\partial t}+\mathbf{\nabla }\cdot \mathbf{j}=0 \tag{3.2}
\end{equation}
we obtain the diffusion equation commonly known as Fick's second law
\begin{equation}
\frac{\partial n}{\partial t}=D\nabla ^{2}n. \tag{3.3}
\end{equation}
In the presence of some external forces, i.e.
\begin{equation}
\mathbf{F}=-\mathbf{\nabla }U, \tag{3.4}
\end{equation}
the diffusion laws must be modified. This is achieved by assuming the existence of some kind of friction, i.e. by assuming that there exists a relation
\begin{equation}
\gamma \mathbf{v}=\mathbf{F} \tag{3.5}
\end{equation}
between the local velocity $\mathbf{v}$ and the force $\mathbf{F}$, with the coefficient of proportionality $\gamma$ being, for instance (in the case of hard spheres), of the type given in Eq. (1.1). With such an assumption, the diffusion current, Eq. (3.1), is modified as follows:
\begin{equation}
\mathbf{j}=-D\mathbf{\nabla }n-\frac{n}{\gamma }\mathbf{\nabla }U. \tag{3.6}
\end{equation}
Such a definition makes sense. Indeed, in the case of equilibrium, when the concentration $n_{eq}$ obeys Boltzmann's law
\begin{equation}
n_{eq}=n_{0}\exp (-\frac{U}{k_{B}T}), \tag{3.7}
\end{equation}
vanishing of the current in Eq. (3.6) is assured by substitution of Eq. (3.7) into Eq. (3.6), thus leading to the already cited Einstein result, Eq. (1.1), for $D_{0}$. As in the case of Eq. (1.3), we shall assume that for finite concentrations one can still use the Einstein-like result for the diffusion coefficient. With such an assumption, the current $\mathbf{j}$ in Eq. (3.6) acquires the following form [12]:
\begin{equation}
\mathbf{j}=-\frac{n}{\gamma }\mathbf{\nabla }(k_{B}T\ln n+U)\equiv -\frac{n}{\gamma }\mathbf{\nabla }\mu , \tag{3.8}
\end{equation}
where the last equality defines the nonequilibrium chemical potential $\mu$, e.g. like that given in Eq. (2.1). Alternatively, the modified flux velocity $\mathbf{v}_{f}$ is now given by $-\frac{1}{\gamma }\mathbf{\nabla }\mu$, so that the continuity Eq. (3.2) reads
\begin{equation}
\frac{\partial n}{\partial t}+\mathbf{\nabla }\cdot (n\mathbf{v}_{f})=0. \tag{3.9}
\end{equation}
Exactly the same equation can be written for the probability density $\Psi$ if we formally replace $n$ by $\Psi$ in the above equation [10]. Such an interpretation of diffusion is convenient since it allows one to talk about diffusion in terms of the trajectories of the Brownian motion of individual particles whose positions $\mathbf{x}_{n}(t),\ n=1,2,...,$ are considered to be random variables.
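The consistency argument around Eqs. (3.6) and (3.7), namely that the current vanishes for the Boltzmann profile when $D$ takes the Einstein value of Eq. (1.1), can be verified symbolically. The one-dimensional sketch below, with an arbitrary potential $U(x)$, is a direct check of this statement.
\begin{verbatim}
import sympy as sp

x = sp.symbols("x")
kT, gamma, n0 = sp.symbols("kT gamma n0", positive=True)
U = sp.Function("U")(x)

n_eq = n0 * sp.exp(-U / kT)          # Boltzmann profile, Eq. (3.7)
D = kT / gamma                       # Einstein value, Eq. (1.1)

# one-dimensional version of the current, Eq. (3.6)
j = -D * sp.diff(n_eq, x) - (n_eq / gamma) * sp.diff(U, x)
print(sp.simplify(j))                # prints 0: equilibrium current vanishes
\end{verbatim}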
The probability density $\Psi$ for such a collective Brownian motion process then obeys the following Schr\"odinger-like equation
\begin{equation}
\frac{\partial }{\partial t}\Psi =-\sum\limits_{n}\frac{\partial }{\partial \mathbf{x}_{n}}(\Psi \mathbf{v}_{fn}), \tag{3.10}
\end{equation}
in which the velocity $\mathbf{v}_{fn}$ is given by
\begin{equation}
\mathbf{v}_{fn}=-\sum\limits_{m}L_{nm}\frac{\partial }{\partial \mathbf{x}_{m}}(k_{B}T\ln \Psi +U). \tag{3.11}
\end{equation}
Thus, we obtain our final result
\begin{equation}
\frac{\partial }{\partial t}\Psi =\sum\limits_{m,n}\frac{\partial }{\partial \mathbf{x}_{n}}L_{nm}(k_{B}T\frac{\partial }{\partial \mathbf{x}_{m}}\Psi +\frac{\partial U}{\partial \mathbf{x}_{m}}\Psi ), \tag{3.12}
\end{equation}
adaptable for hydrodynamic extension. For this purpose, we need to remind our readers of some basic facts from hydrodynamics.
\subsection{Hydrodynamic fluctuations and Oseen tensor}
The analog of Newton's equation for fluids is the Navier-Stokes equation. It is given by [29]
\begin{equation}
\frac{\partial }{\partial t}\mathbf{v}+(\mathbf{v}\cdot \mathbf{\nabla })\mathbf{v}=-\frac{1}{\rho }\mathbf{\nabla }P+\Gamma \nabla ^{2}\mathbf{v}, \tag{3.13}
\end{equation}
where $P$ is the hydrodynamic pressure, $\Gamma =\eta _{0}/\rho _{0}$ is the kinematic viscosity, and $\rho _{0}$ is the density of the pure solvent. At low Reynolds numbers, the convective term $(\mathbf{v}\cdot \mathbf{\nabla })\mathbf{v}$ can be neglected [29], p. 63. We shall also assume that the fluid is incompressible, i.e.
\begin{equation}
\text{div }\mathbf{v}=0. \tag{3.14}
\end{equation}
Under such conditions the Fourier-transformed Navier-Stokes equation can be written as
\begin{equation}
\frac{\partial }{\partial t}\mathbf{v}_{\mathbf{k}}=-\Gamma \mathbf{k}^{2}\mathbf{v}_{\mathbf{k}}-\frac{i\mathbf{k}}{\rho }P_{\mathbf{k}}. \tag{3.15}
\end{equation}
Let us add a fluctuating source term $\mathbf{f}_{\mathbf{k}}$ to the right-hand side of Eq. (3.15). Then, using the incompressibility condition, Eq. (3.14), we obtain:
\begin{equation}
P_{\mathbf{k}}=-i\rho \frac{\mathbf{k}\cdot \mathbf{f}_{\mathbf{k}}(t)}{k^{2}}. \tag{3.16}
\end{equation}
Introducing the transverse tensor $T_{ij}(\mathbf{k})=\delta _{ij}-\frac{k_{i}k_{j}}{k^{2}}$ and decomposing the random force as
\begin{equation}
f_{i\mathbf{k}}^{T}\left( t\right) =\sum\limits_{j}T_{ij}(\mathbf{k})f_{j\mathbf{k}}\left( t\right) \tag{3.17}
\end{equation}
eventually replaces the Navier-Stokes equation by a Langevin-type equation for the transverse velocity fluctuations:
\begin{equation}
\frac{\partial }{\partial t}\mathbf{v}_{\mathbf{k}}+\mathbf{\Gamma k}^{2}\mathbf{v}_{\mathbf{k}}=\mathbf{f}_{\mathbf{k}}^{T}\left( t\right) . \tag{3.18}
\end{equation}
As is usually done for equations of this type, we shall assume that the random fluctuating forces are Gaussianly distributed. This assumption is equivalent to the statement that
\begin{equation}
\left\langle f_{i\mathbf{k}}^{T}(t)f_{j\mathbf{k}^{\prime }}^{T}(t^{\prime })\right\rangle =T_{ij}(\mathbf{k})(2\pi )^{3}\delta (\mathbf{k}+\mathbf{k}^{\prime })\tilde{D}\delta (t-t^{\prime }), \tag{3.19}
\end{equation}
with the parameter $\tilde{D}$ to be determined.
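Before solving Eq. (3.18) formally, it may help to see its content for a single mode. The sketch below integrates a scalar analogue of Eq. (3.18) by the Euler-Maruyama method; the noise strength is our own choice, fixed (anticipating the determination of $\tilde{D}$ below) so that the stationary variance equals $k_{B}T/\rho$ as in Eq. (3.22). The measured autocorrelation then decays as $\exp (-\tilde{\Gamma}\tau )$, cf. Eq. (3.23).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# scalar analogue of Eq. (3.18):  dv/dt = -Gamma_t * v + f(t), with the
# noise chosen so that the stationary variance is kT/rho, cf. Eq. (3.22)
kT_over_rho = 1.0
Gamma_t     = 2.0
dt, nsteps  = 1.0e-3, 200_000

v = np.empty(nsteps)
v[0] = 0.0
sigma = np.sqrt(2.0 * kT_over_rho * Gamma_t * dt)
for i in range(1, nsteps):
    v[i] = v[i-1] - Gamma_t * v[i-1] * dt + sigma * rng.standard_normal()

burn = nsteps // 10                      # discard the initial transient
print("<v^2> =", np.var(v[burn:]), "(expected", kT_over_rho, ")")

tau = 0.5                                # exponential decay, cf. Eq. (3.23)
lag = int(tau / dt)
c = np.mean(v[burn:-lag] * v[burn+lag:]) / np.var(v[burn:])
print("C(tau)/C(0) =", c, "(expected", np.exp(-Gamma_t * tau), ")")
\end{verbatim}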
A formal solution of the Langevin-type Eq. (3.18) is given by
\begin{equation}
\mathbf{v}_{\mathbf{k}}(t)=\mathbf{v}_{\mathbf{k}}(0)e^{-\tilde{\Gamma}t}+\int\limits_{0}^{t}dt^{\prime }\mathbf{f}_{\mathbf{k}}^{T}\left( t^{\prime }\right) e^{-\tilde{\Gamma}(t-t^{\prime })}, \tag{3.20}
\end{equation}
with $\tilde{\Gamma}=\mathbf{k}^{2}\Gamma$. Introducing $\mathbf{v}_{\mathbf{k}}(t)-\mathbf{v}_{\mathbf{k}}(0)e^{-\tilde{\Gamma}t}=\mathbf{\hat{v}}_{\mathbf{k}}(t)$, we obtain
\begin{equation}
\left\langle \hat{v}_{i\mathbf{k}}(t)\hat{v}_{j\mathbf{k}^{\prime }}(t^{\prime })\right\rangle =\left\langle \int\limits_{0}^{t}dt^{\prime \prime }\mathbf{f}_{\mathbf{k}}^{T}\left( t^{\prime \prime }\right) e^{-\tilde{\Gamma}(t-t^{\prime \prime })}\int\limits_{0}^{t^{\prime }}dt^{\prime \prime \prime }\mathbf{f}_{\mathbf{k}^{\prime }}^{T}(t^{\prime \prime \prime })e^{-\tilde{\Gamma}(t^{\prime }-t^{\prime \prime \prime })}\right\rangle . \tag{3.21}
\end{equation}
To calculate this correlator, and to determine the parameter $\tilde{D}$, we consider the equal-time correlator first. In such a case the equipartition theorem produces the following result:
\begin{equation}
\left\langle \hat{v}_{i\mathbf{k}}(t)\hat{v}_{j\mathbf{k}^{\prime }}(t)\right\rangle =T_{ij}(\mathbf{k})(2\pi )^{3}\delta (\mathbf{k}+\mathbf{k}^{\prime })\frac{k_{B}T}{\rho }. \tag{3.22}
\end{equation}
Taking into account Eqs. (3.19) and (3.22) we obtain for the velocity-velocity correlator, Eq. (3.21), the following result:
\begin{eqnarray}
\left\langle \hat{v}_{i\mathbf{k}}(t)\hat{v}_{j\mathbf{k}^{\prime }}(t^{\prime })\right\rangle &=&T_{ij}(\mathbf{k})(2\pi )^{3}\delta (\mathbf{k}+\mathbf{k}^{\prime })\frac{2k_{B}T}{\rho \tilde{\Gamma}}\frac{\tilde{\Gamma}}{2}\exp (-\tilde{\Gamma}\left\vert t-t^{\prime }\right\vert ) \nonumber \\
&=&2k_{B}T\mathcal{H}_{ij}(\mathbf{k})\frac{\tilde{\Gamma}}{2}\exp (-\tilde{\Gamma}\left\vert t-t^{\prime }\right\vert ). \tag{3.23}
\end{eqnarray}
In the limit $\tilde{\Gamma}\rightarrow \infty$ the combination $\frac{\tilde{\Gamma}}{2}\exp (-\tilde{\Gamma}\left\vert t-t^{\prime }\right\vert )$ can be replaced by $\delta (t-t^{\prime })$. In this limit the obtained expression coincides with the already cited Eq. (2.30). Furthermore, the constant $\tilde{D}$ can be chosen as $\frac{k_{B}T}{\rho }$. To prove the correctness of these assumptions, we take a Fourier transform (in the time variable) in order to obtain
\begin{equation}
\left\langle \hat{v}_{i\mathbf{k}}(\omega )\ \hat{v}_{j(-\mathbf{k)}}(-\omega )\right\rangle \dot{=}\frac{2k_{B}T}{\rho }\frac{\tilde{\Gamma}}{\omega ^{2}+\tilde{\Gamma}^{2}}T_{ij}(\mathbf{k}). \tag{3.24}
\end{equation}
This result coincides with Eq. (89.17) of Ref. [25], as required. Here the sign $\dot{=}$ means ``up to a delta-function prefactor''. Incidentally, these prefactors were preserved in another volume of Landau and Lifshitz, e.g. see Ref. [30], Eq. (122.12). Since in the limit $\omega \rightarrow 0$ we reobtain (upon inverse Fourier transform in time) Eq. (2.30), this fact provides the needed justification for the replacement of the factor $\frac{\tilde{\Gamma}}{2}\exp (-\tilde{\Gamma}\left\vert t-t^{\prime }\right\vert )$ by $\delta (t-t^{\prime })$. In polymer physics, Ref. [10], typically only this $\omega \rightarrow 0$ limit is considered, which is equivalent to considering physical processes at time scales much larger than the characteristic time scale $\tau _{r}=\rho R^{2}/\pi \eta _{0}$ mentioned in the Introduction. Although this fact could cause some inconsistencies (e.g.
see discussion below), we shall follow the traditional pathway by considering mainly this limit, which causes us to drop the time-dependence in Eq.(3.15) altogether, thus bringing it to the form considered in the book by Doi and Edwards, Ref.[10], Eq.(3.III.2). Following these authors, this approximation allows us to specify the random force $\mathbf{f}(\mathbf{r})$ as
\begin{equation}
\mathbf{f}(\mathbf{r})=\sum\limits_{n}\mathbf{F}_{n}\delta (\mathbf{r}-\mathbf{R}_{n}), \tag{3.25}
\end{equation}
implying that the particle (hard sphere) locations are at the points $\mathbf{R}_{n}$, so that the fluctuating component of the fluid velocity $\mathbf{v}(\mathbf{r})$ at $\mathbf{r}$ is given by
\begin{equation}
\mathbf{v}(\mathbf{r})=\sum\limits_{n}\mathbf{H}(\mathbf{r}-\mathbf{R}_{n})\cdot \mathbf{F}_{n} \tag{3.26}
\end{equation}
with the Oseen tensor $\mathbf{H}_{ij}(\mathbf{r})$ in the coordinate representation given by
\begin{equation}
\mathbf{H}_{ij}(\mathbf{r})=\frac{1}{8\pi \eta \left\vert \mathbf{r}\right\vert }(\delta _{ij}+\hat{r}_{i}\hat{r}_{j}). \tag{3.27}
\end{equation}
In this expression $\hat{r}_{i}=\dfrac{r_{i}}{\left\vert \mathbf{r}\right\vert }.$ In view of Eq.(2.27), we can rewrite Eq.(3.26) in the following suggestive form
\begin{equation}
\mathbf{v}(\mathbf{R}_{n})=\sum\limits_{m(m\neq n)}\mathbf{H}(\mathbf{R}_{n}-\mathbf{R}_{m})\cdot \mathbf{F}_{m} \tag{3.28}
\end{equation}
for the velocity $\mathbf{v}(\mathbf{R}_{n})$ of the particle located at $\mathbf{R}_{n}.$

\subsection{Fick's laws in the presence of hydrodynamic interactions. Emergence of gauge fields}

\bigskip

By comparing Eq.s (3.12) and (3.28) we could write Fick's first law explicitly, should the Oseen tensor also be defined for $m=n$. But it is not defined in this case. As in electrostatics, self-interactions must be excluded from consideration. In view of the results of Section 2, the situation can be repaired if we assume that at some concentrations the hydrodynamic interactions are totally screened. In such a case only the usual Brownian motion of the individual particles is expected to survive. With these remarks, Fick's first law for such hydrodynamically interacting suspensions of spheres can now be written as follows:
\begin{equation}
\mathbf{v}_{f}(\mathbf{R}_{n})=-\sum\limits_{m}\mathbf{\tilde{H}}(\mathbf{R}_{n}-\mathbf{R}_{m})\cdot \frac{\partial }{\partial \mathbf{R}_{m}}(k_{B}T\ln \Psi +U), \tag{3.29}
\end{equation}
where the redefined Oseen tensor $\mathbf{\tilde{H}}_{ij}(\mathbf{R})$ has the diagonal part $\mathbf{\tilde{H}}_{ii}(\mathbf{R})=\frac{1}{\gamma }$ in accord with Eq.(1.1). The potential $U$ comes from the short-range non-hydrodynamic interactions between particles, which are always present. Using this result and Eq.(3.10), we finally arrive at Fick's second law
\begin{equation}
\frac{\partial \Psi }{\partial t}=\sum\limits_{n,m}\frac{\partial }{\partial \mathbf{R}_{n}}\cdot \mathbf{\tilde{H}}(\mathbf{R}_{n}-\mathbf{R}_{m})\cdot (k_{B}T\frac{\partial \Psi }{\partial \mathbf{R}_{m}}+\frac{\partial U}{\partial \mathbf{R}_{m}}\Psi ) \tag{3.30}
\end{equation}
in accord with Eq.(3.110) of Ref.[10]. Since this equation contains both diagonal and nondiagonal terms, the question arises about its mathematical meaning. That is, we should inquire: under what conditions does the solution to this equation exist? The solution will exist if and only if the above equation can be brought to diagonal form.
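Whether Eq.(3.30) can be brought to diagonal form is controlled by the spectral properties of the redefined mobility tensor. As a purely numerical illustration (a Python sketch; the viscosity, sphere radius and random particle positions are placeholders of our own choosing, not values taken from the cited references), one can assemble the $3N\times 3N$ matrix built from Eq.(3.29) and inspect its spectrum:

\begin{verbatim}
import numpy as np

# Assemble the 3N x 3N mobility matrix of Eq.(3.29): Oseen blocks, Eq.(3.27),
# for m != n and the Stokes value 1/gamma = 1/(6 pi eta R0) on the diagonal.
# All parameter values below are illustrative placeholders.
rng = np.random.default_rng(1)
eta, r0, n = 1.0, 0.1, 20
pos = rng.uniform(0.0, 5.0, size=(n, 3))

def oseen(r):
    d = np.linalg.norm(r)
    rhat = r / d
    return (np.eye(3) + np.outer(rhat, rhat)) / (8.0 * np.pi * eta * d)

h = np.zeros((3 * n, 3 * n))
for a in range(n):
    h[3*a:3*a+3, 3*a:3*a+3] = np.eye(3) / (6.0 * np.pi * eta * r0)
    for b in range(a + 1, n):
        blk = oseen(pos[a] - pos[b])
        h[3*a:3*a+3, 3*b:3*b+3] = blk
        h[3*b:3*b+3, 3*a:3*a+3] = blk

w = np.linalg.eigvalsh(h)
print("symmetric:", np.allclose(h, h.T), " smallest eigenvalue:", w.min())
\end{verbatim}

A nonpositive eigenvalue, which can occur once particles come sufficiently close, signals that the naive Oseen approximation has broken down; this is one concrete way of seeing that the existence question just raised is not an empty one.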
To bring Eq.(3.30) to diagonal form, as is usually done in mathematics, we have to find generalized coordinates in which the above equation acquires the diagonal form. Although attempts to do so were made by several authors, most notably by Kirkwood, e.g. see Ref.[10], Chapter 3, and references therein, in this work we would like to extend their results to account for the effects of gauge invariance. We begin with the following auxiliary problem: since $\nabla ^{2}=\text{div}\,\mathbf{\nabla }\equiv \mathbf{\nabla }\cdot \mathbf{\nabla },$ we are interested in finding how this result changes if we transform it from the flat Euclidean space to the space described in terms of generalized coordinates. This task is easy if we take into account that in the Euclidean space
\begin{equation}
\mathbf{\nabla }\cdot \mathbf{\nabla }=\sum\limits_{i,j}\frac{\partial }{\partial x_{i}}h^{ij}\frac{\partial }{\partial x_{j}}, \tag{3.31}
\end{equation}
with $h^{ij}$ being a diagonal matrix with unit entries. We notice that the above expression is a scalar and, hence, it is covariant. This means that we can replace the usual derivatives by covariant derivatives and the metric tensor $h^{ij}$ by the metric tensor $g^{ij}$ of the curved space, so that in this, the most general, case we obtain
\begin{equation}
D_{i}g^{ij}D_{j}f(\mathbf{x})=\frac{\partial }{\partial x_{i}}g^{ij}\frac{\partial }{\partial x_{j}}f(\mathbf{x})+g^{kj}\Gamma _{ik}^{i}\frac{\partial }{\partial x_{j}}f(\mathbf{x}), \tag{3.32}
\end{equation}
where summation over repeated indices is assumed, as usual. The covariant derivative $D_{i}$ is defined for a scalar $f$ as $D_{i}f=\frac{\partial }{\partial x_{i}}f$ and for a contravariant vector $X^{i}$ as
\begin{equation}
D_{j}X^{i}=\frac{\partial X^{i}}{\partial x_{j}}+\Gamma _{jk}^{i}X^{k}, \tag{3.33}
\end{equation}
with the Christoffel symbol $\Gamma _{jk}^{i}$ defined in the usual way of Riemannian geometry. A precise definition of this symbol will be given below. Since $\Gamma _{ik}^{i}=\frac{\partial }{\partial x_{k}}\ln \sqrt{g}$, we can rewrite Eq.(3.32) in the following alternative final form
\begin{equation}
\nabla ^{2}f=D_{i}g^{ij}D_{j}f(\mathbf{x})=\frac{1}{\sqrt{g}}\frac{\partial }{\partial x_{i}}[g^{ij}\sqrt{g}\frac{\partial }{\partial x_{j}}f], \tag{3.34}
\end{equation}
so that in Eq.(3.3) the operator $\nabla ^{2}$ is now replaced by that given by Eq.(3.34). To make this presentation complete, we have to include the relation
\begin{equation}
g_{ij}=\frac{\partial r^{k}}{\partial q^{i}}\frac{\partial r^{l}}{\partial q^{j}}h_{kl}. \tag{3.35}
\end{equation}
In the simplest case, when we are dealing with 3 dimensional vectors, so that $\mathbf{r}=\mathbf{r}(q_{1},q_{2},q_{3}),$ it is sometimes convenient to introduce the vectors
\begin{equation}
\mathbf{e}_{i}=\frac{\partial \mathbf{r}}{\partial q^{i}} \tag{3.36}
\end{equation}
and the metric tensor
\begin{equation}
g_{ij}=\mathbf{e}_{i}\cdot \mathbf{e}_{j}, \tag{3.37}
\end{equation}
with ``$\cdot $'' being the usual Euclidean scalar product sign. Definitions (3.35) and (3.37) are obviously equivalent in the present case. Because of this, it is clear that upon transformation to the curvilinear coordinates the Riemann curvature tensor written in terms of $g_{ij}$ is still zero, since it is obviously zero for $h_{kl}.$ The curvature tensor will be introduced and discussed below. Before doing so, using the example we have just described, we need to rewrite Eq.(3.30) in terms of generalized coordinates.
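As a consistency check, Eq.(3.34) is easy to test independently in any familiar curvilinear system. The following sketch (in Python with the \texttt{sympy} library; the choice of spherical coordinates and of the test function $f=r^{2}$ is ours and purely illustrative) verifies that the Laplace-Beltrami form reproduces the ordinary Laplacian, for which $\nabla ^{2}r^{2}=6$:

\begin{verbatim}
import sympy as sp

# Sanity check of Eq.(3.34) in spherical coordinates, where the metric is
# g = diag(1, r^2, r^2 sin^2(theta)).  Purely illustrative example.
r, th, ph = sp.symbols('r theta phi', positive=True)
q = (r, th, ph)
g = sp.diag(1, r**2, r**2 * sp.sin(th)**2)
ginv, sqrtg = g.inv(), sp.sqrt(g.det())

f = r**2
lap = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, q[j]), q[i])
          for i in range(3) for j in range(3)) / sqrtg
print(sp.simplify(lap))   # prints 6, the flat-space Laplacian of r**2
\end{verbatim}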
In the present case, we must have $3N$ generalized coordinates and the tensor $h_{kl}$ is not a unit tensor anymore. Our arguments will not change if we replace Eq.(3.30) by that in which the potential $U=0$. Furthermore, we shall absorb the factor $k_{B}T$ into the tensor $\mathbf{\tilde{H}}$ and use this redefined tensor instead of $h_{kl}.$ Evidently, the final result for the Laplacian, Eq.(3.34), will remain unchanged. The question arises: if in the first example the Riemann curvature tensor remains zero \textsl{after} a coordinate transformation (since the tensor $h_{kl}$ is the tensor describing the flat Euclidean space), what can be said about the Riemann tensor in the present case? To answer this question consider once again Eq.(3.35), this time with the tensor $\mathbf{\tilde{H}}_{kl}$ instead of $h_{kl}.$ For the sake of argument, let us ignore for a moment the fact that each of the generalized coordinates is 3-dimensional. Then we obtain
\begin{equation}
g_{\alpha \beta }=\frac{\partial R^{k}}{\partial q^{\alpha }}\text{\~{H}}_{kl}\frac{\partial R^{l}}{\partial q^{\beta }}, \tag{3.38a}
\end{equation}
where we introduced a set of new generalized coordinates $\{Q\}=\{q_{1},...,q_{N}\}$ so that $R_{l}=R_{l}(\{Q\})$. We shall use Greek indices for the new coordinates and Latin for the old. In the case of 3 dimensions the above result becomes
\begin{equation}
g_{\mathbf{\alpha \beta }}=\frac{\partial \mathbf{R}^{\mathbf{k}}}{\partial \mathbf{q}^{\mathbf{\alpha }}}\cdot \text{\~{H}}_{\mathbf{kl}}\cdot \frac{\partial \mathbf{R}^{\mathbf{l}}}{\partial \mathbf{q}^{\mathbf{\beta }}} \tag{3.38b}
\end{equation}
with ``$\cdot $'' being the Euclidean scalar product sign as before. The indices $\mathbf{k}$, $\mathbf{l}$, $\mathbf{\alpha }$ and $\mathbf{\beta }$ now have 3 components each. We are interested in generalized coordinates which make the metric tensor $g_{\mathbf{\alpha \beta }}$ diagonal. By analogy with Eq.(3.36), we introduce now a scalar product
\begin{equation}
<\mathbf{R}\cdot \mathbf{R}>\equiv \mathbf{R}^{\mathbf{k}}\cdot \text{\~{H}}_{\mathbf{kl}}\cdot \mathbf{R}^{\mathbf{l}} \tag{3.39}
\end{equation}
so that instead of the vectors $\mathbf{e}_{i}$ we now obtain
\begin{equation}
\mathbf{e}_{\mathbf{\alpha }}=\frac{\partial \mathbf{R}}{\partial \mathbf{q}^{\mathbf{\alpha }}} \tag{3.40}
\end{equation}
and, accordingly,
\begin{equation}
g_{\mathbf{\alpha \beta }}=<\mathbf{e}_{\mathbf{\alpha }}\cdot \mathbf{e}_{\mathbf{\beta }}>. \tag{3.41}
\end{equation}
The Christoffel symbol can now be defined as
\begin{equation}
\frac{\partial \mathbf{e}_{\mathbf{\alpha }}}{\partial \mathbf{q}^{\mathbf{\beta }}}=\Gamma _{\mathbf{\alpha \beta }}^{\mathbf{\gamma }}\mathbf{e}_{\mathbf{\gamma }}. \tag{3.42}
\end{equation}
To find the needed generalized coordinates, we impose an additional constraint
\begin{equation}
\frac{\partial \mathbf{e}_{\mathbf{\alpha }}}{\partial \mathbf{q}^{\mathbf{\beta }}}=\frac{\partial \mathbf{e}_{\mathbf{\beta }}}{\partial \mathbf{q}^{\mathbf{\alpha }}} \tag{3.43}
\end{equation}
compatible with the symmetry of the tensor \~{H}$_{\mathbf{kl}}.$ By combining Eq.s (3.42) and (3.43) we obtain
\begin{equation}
\Gamma _{\mathbf{\alpha \beta }}^{\mathbf{\gamma }}\mathbf{e}_{\mathbf{\gamma }}=\Gamma _{\mathbf{\beta \alpha }}^{\mathbf{\gamma }}\mathbf{e}_{\mathbf{\gamma }}, \tag{3.44}
\end{equation}
implying that $\Gamma _{\mathbf{\alpha \beta }}^{\mathbf{\gamma }}=\Gamma _{\mathbf{\beta \alpha }}^{\mathbf{\gamma }}$.
That is, the imposition of the constraint, Eq.(3.43), is equivalent to requiring that our new generalized space is Riemannian (that is, without torsion). In such a space we would like to consider the following combination \begin{equation} R_{\mathbf{\alpha \beta }}\equiv \frac{\partial ^{2}\mathbf{e}_{\mathbf{% \alpha }}}{\partial \mathbf{q}^{\mathbf{\alpha }}\partial \mathbf{q}^{% \mathbf{\beta }}}-\frac{\partial ^{2}\mathbf{e}_{\mathbf{\beta }}}{\partial \mathbf{q}^{\mathbf{\beta }}\partial \mathbf{q}^{\mathbf{\alpha }}}. \tag{3.45} \end{equation}% Again, using Eq.(3.42) we obtain \begin{equation} \frac{\partial }{\partial \mathbf{q}^{\mathbf{\alpha }}}(\Gamma _{\mathbf{% \alpha \beta }}^{\mathbf{\gamma }}\mathbf{e}_{\mathbf{\gamma }})=\left( \frac{\partial }{\partial \mathbf{q}^{\mathbf{\alpha }}}\Gamma _{\mathbf{% \alpha \beta }}^{\mathbf{\gamma }}\right) \mathbf{e}_{\mathbf{\gamma }% }+\left( \frac{\partial }{\partial \mathbf{q}^{\mathbf{\alpha }}}\mathbf{e}_{% \mathbf{\gamma }}\right) \Gamma _{\mathbf{\alpha \beta }}^{\mathbf{\gamma }}. \tag{3.46} \end{equation}% Analogously, we obtain% \begin{equation} \frac{\partial }{\partial \mathbf{q}^{\mathbf{\beta }}}(\Gamma _{\mathbf{% \beta \alpha }}^{\mathbf{\gamma }}\mathbf{e}_{\mathbf{\gamma }})=\left( \frac{\partial }{\partial \mathbf{q}^{\mathbf{\beta }}}\Gamma _{\mathbf{% \beta \alpha }}^{\mathbf{\gamma }}\right) \mathbf{e}_{\mathbf{\gamma }% }+\left( \frac{\partial }{\partial \mathbf{q}^{\mathbf{\beta }}}\mathbf{e}_{% \mathbf{\gamma }}\right) \Gamma _{\mathbf{\beta \alpha }}^{\mathbf{\gamma }}. \tag{3.47} \end{equation}% Finally, we use Eq.(3.42) in Eq.(3.46) and (3.47) in order to obtain the following result for $R_{\mathbf{\alpha \beta }}$% \begin{eqnarray} R_{\mathbf{\alpha \beta }} &=&\left( \frac{\partial }{\partial \mathbf{q}^{% \mathbf{\alpha }}}\Gamma _{\mathbf{\alpha \beta }}^{\mathbf{\gamma }}\right) \mathbf{e}_{\mathbf{\gamma }}-\left( \frac{\partial }{\partial \mathbf{q}^{% \mathbf{\beta }}}\Gamma _{\mathbf{\beta \alpha }}^{\mathbf{\gamma }}\right) \mathbf{e}_{\mathbf{\gamma }}+\Gamma _{\mathbf{\alpha \beta }}^{\mathbf{% \omega }}\Gamma _{\mathbf{\omega \alpha }}^{\mathbf{\gamma }}\mathbf{e}_{% \mathbf{\gamma }}-\Gamma _{\mathbf{\omega \beta }}^{\mathbf{\gamma }}\Gamma _{\mathbf{\beta \alpha }}^{\mathbf{\omega }}\mathbf{e}_{\mathbf{\gamma }} \nonumber \\ &\equiv &R_{\mathbf{\alpha \alpha \beta }}^{\gamma }\mathbf{e}_{\mathbf{% \gamma }} \TCItag{3.48} \end{eqnarray}% The second line defines the Riemann curvature tensor. In the most general case it is given by $R_{\mathbf{\alpha \delta \beta }}^{\mathbf{\gamma }}.$ By combining Eq.s(3.40), (3.43), (3.45) and (3.48) we conclude that \begin{equation} \frac{\partial ^{2}\mathbf{e}_{\mathbf{\alpha }}}{\partial \mathbf{q}^{% \mathbf{\alpha }}\partial \mathbf{q}^{\mathbf{\beta }}}=\frac{\partial ^{2}% \mathbf{e}_{\mathbf{\beta }}}{\partial \mathbf{q}^{\mathbf{\beta }}\partial \mathbf{q}^{\mathbf{\alpha }}} \tag{3.49} \end{equation}% implying that the Riemann tensor is zero so that the connection $\Gamma _{% \mathbf{\alpha \beta }}^{\mathbf{\gamma }}$ is flat. For such a case we can replace the covariant derivative $D_{i}$ by $\nabla _{i}$ +$A_{i}$ [31]. The vector field $A_{i}$ is defined as follows. 
Introduce a 1-form $A$ via $A=A_{i}dx^{i}$, $A_{i}=A_{i}^{\alpha }T^{\alpha },$ where in the non-Abelian case $T^{\alpha }$ is one of the infinitesimal generators of some Lie group $G$ obeying the commutation relations $[T^{\alpha },T^{\beta }]=if^{\alpha \beta \gamma }T^{\gamma }$ of the associated Lie algebra. In addition, $tr[T^{\alpha }T^{\beta }]=\frac{1}{2}\delta ^{\alpha \beta }$. The Chern-Simons (C-S) functional $CS(A)$ producing upon minimization the needed flat connections is given by [31,32]
\begin{eqnarray}
CS(\mathbf{A}) &=&\frac{k}{4\pi }\int\limits_{M}tr(\mathbf{A}\wedge d\mathbf{A}+\frac{2}{3}\mathbf{A}\wedge \mathbf{A}\wedge \mathbf{A})  \nonumber \\
&=&\frac{k}{8\pi }\int\limits_{M}\varepsilon ^{ijk}tr(A_{i}(\partial _{j}A_{k}-\partial _{k}A_{j})+\frac{2}{3}A_{i}[A_{j},A_{k}]) \TCItag{3.50}
\end{eqnarray}
with $k$ being some integer. Minimization of this functional produces an equation for the flat connections. Indeed, to first order in $\mathbf{B}$ we have
\begin{eqnarray}
\frac{8\pi }{k}CS(\mathbf{A}+\mathbf{B}) &=&\int\limits_{M}tr(\mathbf{B}\wedge d\mathbf{A}+\mathbf{A}\wedge d\mathbf{B}+2\mathbf{B}\wedge \mathbf{A}\wedge \mathbf{A})  \nonumber \\
&=&2\int\limits_{M}tr(\mathbf{B}\wedge (d\mathbf{A}+\mathbf{A}\wedge \mathbf{A})), \TCItag{3.51}
\end{eqnarray}
where we took into account that
\[
\int\limits_{M}tr(A_{i}dx^{i}\wedge \frac{\partial B_{k}}{\partial x^{j}}dx^{j}\wedge dx^{k})=\int\limits_{M}tr(B_{k}dx^{k}\wedge \frac{\partial A_{i}}{\partial x^{j}}dx^{j}\wedge dx^{i}).
\]
From here, by requiring
\begin{equation}
\frac{\delta }{\delta B}CS(\mathbf{A}+\mathbf{B})=0 \tag{3.52}
\end{equation}
we obtain our final result:
\begin{equation}
d\mathbf{A}+\mathbf{A}\wedge \mathbf{A}\equiv (\frac{\partial A_{i}}{\partial x_{j}}-\frac{\partial A_{j}}{\partial x_{i}}+[A_{i},A_{j}])dx^{i}\wedge dx^{j}\equiv F(\mathbf{A})dx^{i}\wedge dx^{j}=0. \tag{3.53}
\end{equation}
In the last equality we have taken into account that both in the C-S and in the Yang-Mills theory $F(\mathbf{A})$ is the curvature associated with the connection $\mathbf{A}$; vanishing of this curvature produces Eq.(3.53) for the field $\mathbf{A}$. Irrespective of the explicit form of the field $\mathbf{A}$, we have just demonstrated that, at least in the case when the potential $U$ in Eq.(3.30) is zero, this equation can be brought into diagonal form provided that the operator $\nabla _{i}$ is replaced by $\nabla _{i}+A_{i}$, with the field $A_{i}$ to be specified in the next section.

\section{An interplay between topology and randomness: connections with the vortex model of superfluid $^{4}$He}

\subsection{General comments}

\bigskip

The C-S functional, Eq.(3.50), whose minimization produces Eq.(3.53) for the field $\mathbf{A}$, was introduced into physics by Witten [32] and was discussed in the context of polymer physics in our previous works summarized in Ref.[33]. Since the polymer physics of fully flexible polymer chains involves diffusion-type equations [10], the connections between polymer and colloidal physics are apparent. For this reason, we follow Ref.[32] in our exposition and use it as a general source of information. Specifically, as explained by Witten [32], theories based on the C-S functional are known as topological field theories. The averages in these theories produce all kinds of topological invariants (depending upon the generators $T^{\alpha }$ in the non-Abelian case) which are the observables for such theories.
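For readers who prefer a computational handle on such invariants, the Gauss linking number (written out explicitly in Eq.(4.6) below) can be evaluated directly. The following sketch (in Python; the two circles forming a Hopf link and the discretization are our own illustrative choices) reproduces the expected value $\pm 1$, the sign depending on the chosen orientations:

\begin{verbatim}
import numpy as np

# Discretized Gauss double integral for two circles forming a Hopf link:
# circle 1 is the unit circle in the xy-plane, circle 2 the unit circle in
# the xz-plane centered at (1,0,0).  Expected linking number: +-1.
n = 400
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = s[1] - s[0]

c1 = np.stack([np.cos(s), np.sin(s), np.zeros(n)], axis=1)
t1 = np.stack([-np.sin(s), np.cos(s), np.zeros(n)], axis=1)
c2 = np.stack([1.0 + np.cos(s), np.zeros(n), np.sin(s)], axis=1)
t2 = np.stack([-np.sin(s), np.zeros(n), np.cos(s)], axis=1)

lk = 0.0
for i in range(n):
    d = c1[i] - c2                       # r(s_i) - r(s_j) for all j
    num = np.einsum('ij,ij->i', np.cross(t1[i], t2), d)
    lk += np.sum(num / np.linalg.norm(d, axis=1) ** 3)
print("lk ~", lk * ds * ds / (4.0 * np.pi))   # close to +-1
\end{verbatim}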
In the present case the question arises: should we use the non-Abelian version of the C-S field theory or is it sufficient to use only its Abelian version, to be defined shortly? Since both versions of the C-S theory were discussed in the context of polymer physics in Ref.[33], we would like to argue that, for the purposes of this work, the Abelian version of the C-S theory is sufficient. We shall provide the proof of this fact in this section. The action functional for the Abelian C-S field theory is given by\footnote{E.g. see Eq.(4.12) of Ref.[33].}
\begin{equation}
S_{C-S}^{A}[\mathbf{A}]=\frac{k}{8\pi }\int\limits_{M}d^{3}x\varepsilon ^{\mu \nu \rho }A_{\mu }\partial _{\nu }A_{\rho }. \tag{4.1}
\end{equation}
With the functional so defined, one calculates the (topological) averages with the help of the C-S probability measure
\begin{equation}
<\cdots >_{C-S}\equiv \hat{N}\int D[\mathbf{A}]\exp \{iS_{C-S}^{A}[\mathbf{A}]\}\cdots . \tag{4.2}
\end{equation}
The random objects which are subject to averaging are the Abelian Wilson loops $W(C)$ defined by
\begin{equation}
W(\text{C})=\exp \{ie\oint\limits_{\text{C}}d\mathbf{r}\cdot \mathbf{A}\}, \tag{4.3}
\end{equation}
where C is some closed contour in 3 dimensional space (normally, without self-intersections), and $e$ is some constant (``charge'') whose exact value is of no interest to us at this moment. The averages of products of Wilson loops (perhaps forming a link $L$)
\begin{equation}
W(L)=\prod\limits_{i=1}^{n}W(\text{C}_{i}) \tag{4.4}
\end{equation}
are the main objects of study in such a topological field theory. Substitution of $W(L)$ into Eq.(4.2) produces the following result [32]
\begin{equation}
<W(L)>_{C-S}=\exp \{i\left( \frac{2\pi }{k}\right) \sum\limits_{i,j}e_{i}e_{j}lk(i,j)\} \tag{4.5}
\end{equation}
with the (Gauss) linking number $lk(i,j)$ defined as
\begin{equation}
lk(i,j)=\frac{1}{4\pi }\oint\limits_{\text{C}_{i}}\oint\limits_{\text{C}_{j}}\left[ d\mathbf{r}_{i}\times d\mathbf{r}_{j}\right] \cdot \dfrac{\left( \mathbf{r}_{i}-\mathbf{r}_{j}\right) }{\left\vert \mathbf{r}_{i}-\mathbf{r}_{j}\right\vert ^{3}}\equiv \frac{1}{4\pi }\int\limits_{0}^{\text{T}_{i}}\int\limits_{0}^{\text{T}_{j}}ds_{i}ds_{j}[\mathbf{v}(s_{i})\times \mathbf{v}(s_{j})]\cdot \dfrac{\left( \mathbf{r}(s_{i})-\mathbf{r}(s_{j})\right) }{\left\vert \mathbf{r}(s_{i})-\mathbf{r}(s_{j})\right\vert ^{3}}. \tag{4.6}
\end{equation}
Here T$_{i}$ and T$_{j}$ are respectively the contour lengths of the contours C$_{i}$ and C$_{j}$, and $\mathbf{v}(s)=\frac{d}{ds}\mathbf{r}(s).$ With the Gauss linking number defined in such a way, in view of Eq.(4.5) it should be clear that we must also consider the self-linking numbers $lk(i,i).$ Such a technicality requires us to think about the so-called framing operation discussed in some detail in both Ref.s [32] and [33]. We shall ignore this technicality until Section 6 for reasons which will become apparent.

\subsection{An interplay between the topology and randomness in hydrodynamics}

Following Tanaka and Ferrari, Ref.s [34,35], we rewrite the Gauss linking number in a more physically suggestive form.
For this purpose, we introduce the ``magnetic'' field $\mathbf{B}(\mathbf{r})$ via
\begin{equation}
\mathbf{B}(\mathbf{r})=\frac{1}{4\pi }\oint\limits_{\text{C}_{j}}d\mathbf{r}_{j}\times \dfrac{\left( \mathbf{r}-\mathbf{r}(s_{j})\right) }{\left\vert \mathbf{r}-\mathbf{r}(s_{j})\right\vert ^{3}}, \tag{4.7}
\end{equation}
allowing us to rewrite the linking number $lk(i,j)$ as
\begin{equation}
lk(i,j)=\oint\limits_{\text{C}_{i}}d\mathbf{r}_{i}\cdot \mathbf{B}(\mathbf{r}_{i}). \tag{4.8}
\end{equation}
Eq.(4.7) for the field $\mathbf{B}(\mathbf{r})$ is known in magnetostatics as the Biot-Savart law, e.g. see Ref.[36], Eq.(30.14). Because of this, we recognize that
\begin{equation}
\mathbf{\nabla }\cdot \mathbf{B}=0 \tag{4.9}
\end{equation}
and
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}=\mathbf{j}, \tag{4.10}
\end{equation}
where $\mathbf{j}(\mathbf{r})=\oint\limits_{C}ds\,\mathbf{v}(s)\delta (\mathbf{r}-\mathbf{r}(s)).$ To connect these results with hydrodynamics, we introduce the vector potential $\mathbf{A}$ in such a way that
\begin{equation}
\mathbf{\nabla }\times \mathbf{A}=\mathbf{B}. \tag{4.11}
\end{equation}
Using this result in Eq.(4.10) we also obtain
\begin{equation}
\nabla ^{2}\mathbf{A}=-\mathbf{j} \tag{4.12}
\end{equation}
in view of the fact that $\mathbf{\nabla }\cdot \mathbf{A}=0.$ In hydrodynamics we can represent the local fluid velocity, following Ref.[37], page 86, as
\begin{equation}
\mathbf{v}=\mathbf{\nabla }\times \mathbf{A} \tag{4.13}
\end{equation}
and define the vorticity $\mathbf{\vec{\omega}}$ as
\begin{equation}
\mathbf{\vec{\omega}}=\mathbf{\nabla }\times \mathbf{v}. \tag{4.14}
\end{equation}
By analogy with Eq.s (4.10) and (4.12) we now obtain
\begin{equation}
\nabla ^{2}\mathbf{A}=-\mathbf{\vec{\omega}}. \tag{4.15}
\end{equation}
Hence, to apply the previous results to hydrodynamics the following identifications should be made:
\begin{equation}
\mathbf{\vec{\omega}}\rightleftarrows \mathbf{j},\qquad \mathbf{v}\rightleftarrows \mathbf{B}. \tag{4.16}
\end{equation}
The kinetic energy $\mathcal{E}$ of a fluid in a volume $M$ is given by
\begin{equation}
\mathcal{E}=\frac{\rho }{2}\int\limits_{M}\mathbf{v}^{2}d^{3}\mathbf{r}. \tag{4.17}
\end{equation}
We would now like to explain how this energy is related to the above defined linking numbers. For this purpose, we introduce the following auxiliary functional:
\begin{equation}
\mathcal{F}[\mathbf{A}]_{i}=\oint\limits_{\text{C}_{i}}d\mathbf{r}_{i}\cdot \mathbf{A}(\mathbf{r}_{i}).
\tag{4.18}
\end{equation}
Use of Stokes' theorem produces
\begin{equation}
\mathcal{F}[\mathbf{A}]_{i}=\oint\limits_{\text{C}_{i}}d\mathbf{r}_{i}\cdot \mathbf{A}(\mathbf{r}_{i})=\iint\limits_{S_{i}}d\mathbf{S}_{i}\cdot \left( \mathbf{\nabla }\times \mathbf{A}\right) =\iint\limits_{S_{i}}d\mathbf{S}_{i}\cdot \mathbf{v}=\iint\limits_{S_{i}}d\mathbf{S}_{i}\oint\limits_{\text{C}_{j}}ds_{j}\mathbf{v}(s_{j})\delta (\mathbf{r}_{i}-\mathbf{r}(s_{j})). \tag{4.19}
\end{equation}
At the same time, for the linking number, Eq.(4.8), an analogous procedure leads to the following chain of equalities
\begin{equation}
lk(i,j)=\oint\limits_{\text{C}_{i}}d\mathbf{r}_{i}\cdot \mathbf{B}(\mathbf{r}_{i})=\oint\limits_{\text{C}_{i}}d\mathbf{r}_{i}\cdot \mathbf{v}(\mathbf{r}_{i})=\iint\limits_{S_{i}}d\mathbf{S}_{i}\cdot \left( \mathbf{\nabla }\times \mathbf{v}\right) =\iint\limits_{S_{i}}d\mathbf{S}_{i}\cdot \mathbf{\vec{\omega}}. \tag{4.20a}
\end{equation}
Since the same vector potential was used in both Eq.s (4.11) and (4.13), we notice that Eq.s (4.12) and (4.15) also imply that
\begin{equation}
\mathbf{\vec{\omega}}=e\mathbf{j}, \tag{4.20b}
\end{equation}
where $e$ is some constant to be determined. Because of this, we obtain
\begin{equation}
e\mathcal{F}[\mathbf{A}]_{i}=lk(i,j)=e\iint\limits_{S_{i}}d\mathbf{S}_{i}\oint\limits_{\text{C}_{j}}ds_{j}\mathbf{v}(s_{j})\delta (\mathbf{r}_{i}-\mathbf{r}(s_{j})). \tag{4.21}
\end{equation}
Since the obtained equivalence is of central importance for the entire work, we would like to discuss a few additional details of immediate relevance. In particular, from Eq.(4.20b), which from now on we shall call the London equation (e.g. see Subsection 4.4 below), it should be clear that the as yet unknown constant $e$ must have the dimensionality of inverse length, L$^{-1}.$ This fact should be taken into account when we consider the following dimensionless\footnote{In view of the fact that the dimensionality of $e$ is fixed, we have introduced a factor $f$ which makes the functional $\mathcal{W}[\mathbf{A}]$ dimensionless. This factor will be determined shortly below.} functional
\begin{equation}
\mathcal{W}[\mathbf{A}]=\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}(\mathbf{\nabla }\times \mathbf{A})^{2}+i\frac{e}{f}\sum\limits_{j}\oint\limits_{\text{C}_{j}}d\mathbf{r}_{j}\cdot \mathbf{A}(\mathbf{r}_{j}) \tag{4.22}
\end{equation}
and the path integral associated with it, i.e.
\begin{equation}
\check{N}\int D[\mathbf{A}]\delta (\mathbf{\nabla }\cdot \mathbf{A})\exp \{-\mathcal{W}[\mathbf{A}]\}\equiv <W(L)>_{T}, \tag{4.23a}
\end{equation}
to be compared with Eq.s (4.2) and (4.5). Here the thermal average $<\cdots >_{T}$ is defined by
\begin{equation}
<\cdots >_{T}=\check{N}\int D[\mathbf{A}]\delta (\mathbf{\nabla }\cdot \mathbf{A})\exp \{-\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}(\mathbf{\nabla }\times \mathbf{A})^{2}\}\cdots . \tag{4.23b}
\end{equation}
Calculation of this Gaussian path integral is complicated by the presence of a delta constraint (the Coulomb gauge) in the path integral measure. Fortunately, this path integral can be found in the paper by Brereton and Shah [38].
Without providing the details, these authors presented the following final result, in notations adapted to this work:
\begin{equation}
<W(L)>_{T}=\exp \{-\frac{1}{2\rho }\left( \frac{e}{f}\right) ^{2}\sum\nolimits_{i,j=1}^{\prime }\int\limits_{0}^{t}\int\limits_{0}^{t}ds_{i}ds_{j}\mathbf{\dot{r}}(s_{i})\cdot \mathbf{\tilde{H}}[\mathbf{r}(s_{i})-\mathbf{r}(s_{j})]\cdot \mathbf{\dot{r}}(s_{j})\}. \tag{4.24}
\end{equation}
The Oseen tensor $\mathbf{\tilde{H}}(\mathbf{R})$ in this expression was previously defined in Eq.(3.27), and the prime on the summation sign means that the diagonal part of this tensor should be excluded. Even though the calculations leading to this result are not given in Ref.[38], they can be easily understood field-theoretically. For this purpose, we have to regularize the delta function constraint in the path integral measure in Eq.(4.23) in very much the same way as Ferrell, Ref.[21], did in the case of hydrodynamics, as we discussed in Section 2. Specifically, we write
\begin{eqnarray}
&&\check{N}\int D[\sigma (\mathbf{r})]\exp (-\frac{1}{2\xi }\int d^{3}\mathbf{r}\sigma ^{2}(\mathbf{r}))\int D[\mathbf{A}]\delta (\mathbf{\nabla }\cdot \mathbf{A}-\sigma (\mathbf{r}))\exp \{-\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}(\mathbf{\nabla }\times \mathbf{A})^{2}\}\cdots  \nonumber \\
&=&\check{N}\int D[\mathbf{A}]\exp \{-\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}(\mathbf{\nabla }\times \mathbf{A})^{2}-\frac{1}{2\xi }\int d^{3}\mathbf{r}(\mathbf{\nabla }\cdot \mathbf{A})^{2}\}\cdots  \nonumber \\
&=&\check{N}\int D[\mathbf{A}]\exp \{-\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}A_{\mu }[-\delta _{\mu \nu }\mathbf{\nabla }^{2}-(1-\frac{1}{\tilde{\xi}})\partial _{\mu }\partial _{\nu }]A_{\nu }\}\cdots \TCItag{4.25}
\end{eqnarray}
with some adjustable regularizing parameter $\tilde{\xi}.$ Also, for the quadratic form (in $\mathbf{A}$) in the exponent of the last expression we obtain
\begin{equation}
\int\limits_{M}d^{3}\mathbf{r}A_{\mu }[-\delta _{\mu \nu }\mathbf{\nabla }^{2}-(1-\frac{1}{\tilde{\xi}})\partial _{\mu }\partial _{\nu }]A_{\nu }=\int d^{3}\mathbf{k}A_{\mu }(\mathbf{k})[\delta _{\mu \nu }\mathbf{k}^{2}-(1-\frac{1}{\tilde{\xi}})k_{\mu }k_{\nu }]A_{\nu }. \tag{4.26}
\end{equation}
The inverse of the matrix $[\delta _{\mu \nu }\mathbf{k}^{2}-(1-\frac{1}{\tilde{\xi}})k_{\mu }k_{\nu }]$ is easy to find following Ramond [39]. Indeed, we write
\begin{equation}
[\delta _{\mu \nu }\mathbf{k}^{2}-(1-\frac{1}{\tilde{\xi}})k_{\mu }k_{\nu }][X(\mathbf{k})\delta _{\nu \rho }+Y(\mathbf{k})k_{\nu }k_{\rho }]=\delta _{\mu \rho }. \tag{4.27}
\end{equation}
From here the unknown functions $X(\mathbf{k})$ and $Y(\mathbf{k})$ can be determined, so that the inverse matrix is given explicitly by
\begin{equation}
\lbrack X(\mathbf{k})\delta _{\nu \rho }+Y(\mathbf{k})k_{\nu }k_{\rho }]=\frac{1}{\mathbf{k}^{2}}[\delta _{\nu \rho }-(1-\tilde{\xi})\frac{k_{\nu }k_{\rho }}{\mathbf{k}^{2}}]. \tag{4.28}
\end{equation}
In the limit $\tilde{\xi}\rightarrow 0$ we recover the Oseen tensor (up to a constant $1/\eta $) in the $k$-space representation, in accord with Ref.[10]. These results explain why in the average, Eq.(4.24), there are no diagonal terms. Now we are ready to determine the constant $e$ introduced in Eq.(4.22).
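The inversion just quoted is elementary to confirm numerically. The sketch below (in Python; the wave vector and the gauge parameter, written \texttt{xi\_t} for $\tilde{\xi}$, are arbitrary test values) checks Eq.s (4.27) and (4.28) directly:

\begin{verbatim}
import numpy as np

# Numerical check of Eq.s (4.27)-(4.28) for an arbitrary wave vector k
# and gauge-fixing parameter xi_t (the tilde-xi of the text).
rng = np.random.default_rng(2)
k = rng.normal(size=3)
k2 = k @ k
xi_t = 0.37

m = k2 * np.eye(3) - (1.0 - 1.0 / xi_t) * np.outer(k, k)
m_inv = (np.eye(3) - (1.0 - xi_t) * np.outer(k, k) / k2) / k2

print(np.allclose(m @ m_inv, np.eye(3)))   # True
# As xi_t -> 0 the inverse tends to the transverse (Oseen-type) form
# (delta_ij - k_i k_j / k^2) / k^2, as stated in the text.
\end{verbatim}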
\subsection{Reparametrization invariance and vortex-vortex interactions}

\bigskip

The important result for $<W(L)>_{T}$ contains the random velocities $\mathbf{\dot{r}}(s)$ and thus, seemingly, an additional averaging is required. The task now lies in finding the explicit form of this averaging. To do so, several steps are required. To begin, we notice that in the absence of hydrodynamic interactions Eq.(3.30) acquires the following form
\begin{equation}
\frac{\partial \Psi }{\partial t}=D_{0}\sum\limits_{n}\frac{\partial ^{2}}{\partial \mathbf{R}_{n}^{2}}\Psi \tag{4.29}
\end{equation}
with the diffusion coefficient $D_{0}$ defined in Eq.(1.1). If in Eq.(4.29) we treat $\Psi $ as a Green's function (e.g. see Appendix A for details), then it can be formally represented in the path integral form as
\begin{equation}
\Psi (t;\mathbf{R}_{1},\ldots ,\mathbf{R}_{\text{N}})=\int D[\{\mathbf{R}(\tau )\}]\exp (-\frac{1}{4D_{0}}\int\limits_{0}^{t}\sum\nolimits_{i=1}^{\text{N}}\left[ \mathbf{\dot{r}}(\tau _{i})\right] ^{2}d\tau _{i}). \tag{4.30}
\end{equation}
In this expression we have suppressed the explicit $\mathbf{R}$-dependence of the path integral to avoid excessive notation. Hydrodynamic interactions can now be accounted for as follows:
\begin{eqnarray}
\mathcal{F} &=&\check{N}\int D[\mathbf{A}]\exp \{-\frac{\rho }{2k_{B}T}\int\limits_{M}d^{3}\mathbf{r}A_{\mu }[-\delta _{\mu \nu }\mathbf{\nabla }^{2}-(1-\frac{1}{\tilde{\xi}})\partial _{\mu }\partial _{\nu }]A_{\nu }\}  \nonumber \\
&&\times \int D[\{\mathbf{r}(\tau )\}]\exp (-\frac{1}{4D_{0}}\int\limits_{0}^{t}\sum\nolimits_{j=1}^{\text{N}}\left[ \mathbf{\dot{r}}(\tau _{j})\right] ^{2}d\tau _{j})\exp \{i\frac{e}{f}\int\limits_{0}^{t}\sum\nolimits_{j=1}^{\text{N}}\mathbf{\dot{r}}(\tau _{j})\cdot \mathbf{A}[\mathbf{r}(\tau _{j})]d\tau _{j}\}  \nonumber \\
&\equiv &<\prod\limits_{j=1}^{\text{N}}\int D[\{\mathbf{r}(\tau _{j})\}]\exp (-\frac{1}{4D_{0}}\int\limits_{0}^{t}\left[ \mathbf{\dot{r}}(\tau _{j})\right] ^{2}d\tau _{j})\exp \{i\frac{e}{f}\int\limits_{0}^{t}\mathbf{\dot{r}}(\tau _{j})\cdot \mathbf{A}[\mathbf{r}(\tau _{j})]d\tau _{j}\}>_{T}. \TCItag{4.31}
\end{eqnarray}
Perturbative calculation of path integrals of the type
\begin{equation}
\text{I}[\mathbf{A};t]=\int D[\{\mathbf{r}(\tau _{j})\}]\exp (-\frac{1}{4D_{0}}\int\limits_{0}^{t}\left[ \mathbf{\dot{r}}(\tau _{j})\right] ^{2}d\tau _{j})\exp \{i\frac{e}{f}\int\limits_{0}^{t}\mathbf{\dot{r}}(\tau _{j})\cdot \mathbf{A}[\mathbf{r}(\tau _{j})]d\tau _{j}\} \tag{4.32}
\end{equation}
was considered by Feynman long ago, Ref.[40]. From this paper it follows that the most obvious way to do such a calculation is to write the usual Schr\"{o}dinger-like equation
\begin{equation}
\left( \frac{\partial }{\partial t}-D_{0}(\mathbf{P}-ie\mathbf{A})^{2}\right) G(t,\mathbf{r};t^{\prime },\mathbf{r}^{\prime })=0,\quad \mathbf{r}\neq \mathbf{r}^{\prime }, \tag{4.33}
\end{equation}
and to take into account that $(\mathbf{P}-ie\mathbf{A})^{2}=\mathbf{P}^{2}-ie\mathbf{A}\cdot \mathbf{P}-ie\mathbf{P}\cdot \mathbf{A}-e^{2}\mathbf{A}^{2}\simeq \mathbf{P}^{2}-2ie\mathbf{A}\cdot \mathbf{P}+O(\mathbf{A}^{2})$ (since $\mathbf{\nabla }\cdot \mathbf{A}=0$ implies $\mathbf{P}\cdot \mathbf{A}=\mathbf{A}\cdot \mathbf{P}$). This result is useful to compare with Eq.(3.32) in order to recognize that the field $\mathbf{A}$ is indeed a connection.
To use these results, we would like to rewrite Eq.(3.30) in an alternative form which (for $U=0$) is given by
\begin{equation}
\frac{\partial \Psi }{\partial t}=D_{0}\sum\limits_{n}\frac{\partial ^{2}}{\partial \mathbf{R}_{n}^{2}}\Psi +k_{B}T\sum\nolimits_{m,n,i,j}^{\prime }\mathbf{\tilde{H}}_{ij}(\mathbf{R}_{n}-\mathbf{R}_{m})\frac{\partial }{\partial R_{in}}\frac{\partial }{\partial R_{jm}}\Psi . \tag{4.34}
\end{equation}
In arriving at this equation we took into account Eq.(3.14). Consider such an equation for $n=2$. In this case, we rewrite Eq.(4.34) in the style of quantum mechanics, i.e.
\begin{equation}
\left( \frac{\partial }{\partial t}-H_{1}-H_{2}-V_{12}\right) \Psi =0, \tag{4.35}
\end{equation}
in which, as in quantum mechanics, we shall treat $V_{12}$ as a perturbation. The best way of dealing with such problems is to use the method of Green's functions. For our reader's convenience we present some facts about this method in Appendix A. Eq.(A.10) of this Appendix provides an equation for the effective potential $\mathcal{V}.$ A similar type of equation was obtained in the book by Doi and Edwards, Ref.[10], Section 5.7.3, by means of the effective medium theory. Using this theory they were able to prove the existence of screening for the case of polymer solutions. We shall reach an analogous conclusion about screening in colloidal suspensions using different arguments, to be discussed in the next subsection. In the meantime, we would like to provide arguments justifying our previously made approximation: $(\mathbf{P}-ie\mathbf{A})^{2}\simeq \mathbf{P}^{2}-2ie\mathbf{A}\cdot \mathbf{P}+O(\mathbf{A}^{2}).$ Using the results of Appendix A, we introduce the one-particle Green's function $G_{0}$ as a solution to the equation
\begin{equation}
\left( \frac{\partial }{\partial t}-D_{0}\frac{\partial ^{2}}{\partial \mathbf{R}^{2}}\right) G_{0}(\mathbf{R},t;\mathbf{R}^{\prime },t^{\prime })=\delta (\mathbf{R}-\mathbf{R}^{\prime })\delta (t-t^{\prime }). \tag{4.36}
\end{equation}
Having in mind the determination of the previously introduced factor $f$ (in Eq.(4.22)), it is convenient to rescale the variables in this equation to convert it into dimensionless form. Evidently, the most convenient choice is $t=\tau /(D_{0}/R_{0}^{2})$ and $\mathbf{R}=R_{0}\mathbf{\tilde{R}}$, with $R_{0}$ the hard sphere radius introduced in Eq.(1.1) and $\tau $ and $\mathbf{\tilde{R}}$ being the dimensionless time and space coordinates. Below, we shall avoid the use of tildes for $\mathbf{\tilde{R}}$ and shall still write $t$ instead of $\tau .$ The original symbols can be restored whenever they are required. Having this in mind, next we consider the two-particle Green's function. In the absence of interactions, it is just the product of two Green's functions of the type given by Eq.(4.36).
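As a quick independent check of Eq.(4.36), the standard free kernel can be verified symbolically. The sketch below (in Python with \texttt{sympy}, written in radial form; taking the usual heat kernel as the solution is our assumption, since Appendix A is not reproduced here) confirms that $(\partial _{t}-D_{0}\nabla ^{2})G_{0}=0$ away from the source point:

\begin{verbatim}
import sympy as sp

# The kernel G0 = (4 pi D0 t)^(-3/2) exp(-R^2 / (4 D0 t)) solves
# Eq.(4.36) for t > 0 and R != 0 (radial part of the 3d Laplacian used).
t, D0, R = sp.symbols('t D_0 R', positive=True)
g0 = (4 * sp.pi * D0 * t) ** sp.Rational(-3, 2) * sp.exp(-R**2 / (4 * D0 * t))

radial_lap = sp.diff(R**2 * sp.diff(g0, R), R) / R**2
print(sp.simplify(sp.diff(g0, t) - D0 * radial_lap))   # prints 0
\end{verbatim}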
The Dyson-type equation for the full Green's function for Eq.(4.34) (the $n=2$ case) is then given by
\begin{eqnarray}
G(\mathbf{R}_{1},\mathbf{R}_{2},t;\mathbf{R}_{1}^{\prime \prime },\mathbf{R}_{2}^{\prime \prime },t^{\prime \prime }) &=&G_{0}(\mathbf{R}_{1},t;\mathbf{R}_{1}^{\prime },t^{\prime })G_{0}(\mathbf{R}_{2},t;\mathbf{R}_{2}^{\prime },t^{\prime })  \nonumber \\
&&+\int G_{0}(\mathbf{R}_{1},t;\mathbf{R}_{1}^{\prime },t_{1}^{\prime })G_{0}(\mathbf{R}_{2},t;\mathbf{R}_{2}^{\prime },t_{1}^{\prime })V(\mathbf{R}_{1}^{\prime },\mathbf{R}_{2}^{\prime })G(\mathbf{R}_{1}^{\prime },\mathbf{R}_{2}^{\prime },t_{1}^{\prime };\mathbf{R}_{1}^{\prime \prime },\mathbf{R}_{2}^{\prime \prime },t^{\prime \prime })d\mathbf{R}_{1}^{\prime }d\mathbf{R}_{2}^{\prime }dt_{1}^{\prime }  \nonumber \\
&& \TCItag{4.37}
\end{eqnarray}
in which the potential $V(\mathbf{R}_{1}^{\prime },\mathbf{R}_{2}^{\prime })=k_{B}T\mathbf{\tilde{H}}_{ij}(\mathbf{R}_{1}^{\prime }-\mathbf{R}_{2}^{\prime })\frac{\partial }{\partial R_{1i}^{\prime }}\frac{\partial }{\partial R_{2j}^{\prime }}.$ As before, summation over repeated indices is assumed. Using the results of Appendix A and Eq.(4.37), it is possible to write down the equation for the effective potential. In view of the results to be discussed in the next subsection, this is actually unnecessary. Hence, we proceed with other tasks at this point. Specifically, taking into account Eq.(3.27), in which the explicit form of the Oseen tensor is given, we conclude that the nondiagonal part of this tensor can be discarded in the Dyson Eq.(4.37). This is so because of the following obvious identity: $\left[ (\mathbf{r}_{1}-\mathbf{r}_{2})\cdot \mathbf{r}_{1}\right] \left[ (\mathbf{r}_{1}-\mathbf{r}_{2})\cdot \mathbf{r}_{2}\right] +\left[ (\mathbf{r}_{2}-\mathbf{r}_{1})\cdot \mathbf{r}_{1}\right] \left[ (\mathbf{r}_{2}-\mathbf{r}_{1})\cdot \mathbf{r}_{2}\right] =0$ associated with the scalar products of unit vectors in Eq.(3.27). Evidently, it is always possible to select a coordinate system centered, say, at $\mathbf{r}_{1}$. Alternatively, this result can be easily proven in $k$-space taking into account the incompressibility constraint. Furthermore, these observations cause us to write the potential $V(\mathbf{R}_{1},\mathbf{R}_{2})$ in the following dimensionful form
\begin{equation}
V(\mathbf{R}_{1}^{\prime },\mathbf{R}_{2}^{\prime })=\frac{k_{B}T}{4\pi \eta }\frac{1}{\left\vert \mathbf{R}_{1}-\mathbf{R}_{2}\right\vert }\frac{\partial }{\partial \mathbf{R}_{1}}\cdot \frac{\partial }{\partial \mathbf{R}_{2}}. \tag{4.38a}
\end{equation}
Using the dimensional analysis of Eq.(4.36), this result can easily be rewritten in dimensionless form as well. Explicitly, it is given by
\begin{equation}
V(\mathbf{R}_{1}^{\prime },\mathbf{R}_{2}^{\prime })=\frac{k_{B}T}{4\pi \eta R_{0}^{2}D_{0}}\frac{1}{\left\vert \mathbf{R}_{1}-\mathbf{R}_{2}\right\vert }\frac{\partial }{\partial \mathbf{R}_{1}}\cdot \frac{\partial }{\partial \mathbf{R}_{2}}, \tag{4.38b}
\end{equation}
in which the scalar product can be of either sign. This fact is of importance because of the following. Using Eq.(4.31) and proceeding with the calculation of the path integral following Feynman's prescriptions [40], we obtain exactly the same equation as that given by Eq.(4.37). This observation allows us to determine the constants $e$ and $f$ explicitly.
In view of the results just obtained, the constant $e$ can be determined only with accuracy up to a sign. Taking this into account, the value of $e$ is determined as $e=\pm \dfrac{1}{R_{0}}\sqrt{\dfrac{D_{0}\rho }{4\pi \eta }},$ while the constant $f$ is given by $D_{0}$, in view of the fact that the field $\mathbf{A}$ in Eq.(4.22) has dimensionality $L^{2}/t$, i.e. that of the diffusion coefficient, while the dimensionality of $e$ is fixed by Eq.(4.20b), so that the combination $e\,ds\,\mathbf{\dot{r}}(s)$ is dimensionless. Using these results and Eq.(4.38), we can rewrite $<W(L)>_{T}$ defined by Eq.(4.24) in the following manifestly dimensionless and physically suggestive form
\begin{equation}
<W(L)>_{T}=\exp (-\frac{k_{B}T}{D_{0}8\pi \eta }\sum\nolimits_{i,j=1}^{\prime }\frac{s_{i}}{R_{0}}\frac{s_{j}}{R_{0}}\oint \oint \frac{\left\vert d\mathbf{r}(\tau _{i})\cdot d\mathbf{r}(\tau _{j})\right\vert }{\left\vert \mathbf{r}(\tau _{i})-\mathbf{r}(\tau _{j})\right\vert }), \tag{4.39}
\end{equation}
where we have introduced the dimensionless Ising spin-like variables $s_{i}$ playing the role of charges accounting for the sign of the product $\frac{\partial }{\partial \mathbf{R}_{1}}\cdot \frac{\partial }{\partial \mathbf{R}_{2}}.$ Since the whole system must be ``electrically neutral'', at this point it would be possible to develop a Debye-H\"{u}ckel-type theory of hydrodynamic screening by analogy with that developed for Coulombic systems, e.g. see Ref.[41]. Nevertheless, below we choose another, more elegant pathway to arrive at the same conclusions. Before doing so, we notice that there is an important difference between the double integral, Eq.(4.39), and the integral $\frac{1}{4D_{0}}\int\limits_{0}^{t}\left[ \mathbf{\dot{r}}(\tau _{j})\right] ^{2}d\tau _{j}$ present in the exponent in Eq.(4.31). While the double integral, Eq.(4.39), is manifestly reparametrization invariant, the diffusion integral is not. This means that we can always reparametrize time in this diffusion integral so that the coefficient $\left( 4D\right) ^{-1}$ can be made equal to any preassigned nonnegative value. This was effectively done already when we introduced the dimensionless variables in Eq.(4.36). Such inequivalence between these two types of integrals can be eliminated if we replace the diffusion-type integral by one which is manifestly reparametrization-invariant. In such a case the total action is given by
\begin{equation}
S=m_{0}\sum\limits_{i}\oint d\tau _{i}\sqrt{\mathbf{r}^{2}(\tau _{i})}+\frac{k_{B}T}{D_{0}8\pi \eta }\sum\nolimits_{i,j=1}^{\prime }s_{i}s_{j}\oint \oint \frac{\left\vert d\mathbf{r}(\tau _{i})\cdot d\mathbf{r}(\tau _{j})\right\vert }{\left\vert \mathbf{r}(\tau _{i})-\mathbf{r}(\tau _{j})\right\vert }. \tag{4.40}
\end{equation}
It should be noted that the use of the symbol $\oint $ instead of $\int $ in Eq.(4.40) is a delicate matter. In [33] we demonstrated that in the limit of long times (that is, in the limit $\omega \rightarrow 0$ used in this work) all random walks are asymptotically closed (that is, the Brownian trajectory in this limit becomes very much the same as that known for ring polymers)\footnote{Additional mathematical results on this property are discussed in Section 6.2.}. Since the result, Eq.(4.40), is manifestly reparametrization invariant, such a replacement is permissible. Additional explanations are given in Appendix B, which we recommend reading only after Section 5. The constant $m_{0}$ in Eq.(4.40) will be determined in the next section.
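The contrast invoked above between the two types of integrals can be made concrete. In the following sketch (in Python; the unit circle and the particular reparametrization $\tau \rightarrow \tau ^{2}$ are arbitrary illustrative choices) the length-type term of Eq.(4.40) is unchanged under reparametrization, while the diffusion-type term is not:

\begin{verbatim}
import numpy as np

# A closed unit circle traversed with two parametrizations of [0,1].
# The arclength integral (the m0 term of Eq.(4.40)) is invariant;
# the diffusion-type integral int rdot^2 dtau is not.
tau = np.linspace(0.0, 1.0, 20_000, endpoint=False)
dt = tau[1] - tau[0]

def terms(phase):
    x, y = np.cos(2 * np.pi * phase), np.sin(2 * np.pi * phase)
    xd, yd = np.gradient(x, dt), np.gradient(y, dt)
    speed2 = xd**2 + yd**2
    return np.sum(np.sqrt(speed2)) * dt, np.sum(speed2) * dt

len_a, diff_a = terms(tau)        # uniform parametrization
len_b, diff_b = terms(tau**2)     # reparametrized, same curve
print("length terms   :", len_a, len_b)    # both ~ 2 pi
print("diffusion terms:", diff_a, diff_b)  # ~ 39.5 vs ~ 52.6
\end{verbatim}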
The form of the action given by Eq.(4.40) is almost identical to that of the action for the superfluid liquid $^{4}$He as discussed in the book by Kleinert [42], page 300. From the same book it also follows that the Ginzburg-Landau theory of superconductivity can be recast in the same form. We said ``almost identical'' because in these two theories (of superfluidity and superconductivity) the self-interaction of vortices is also allowed, so that if the above expression represented the dual (vortex) description of colloidal suspension dynamics (e.g. see Appendix B), then the prime in the double summation above could be removed, since the vortices are allowed to intersect with themselves. In the direct case, when the focus of attention is on the particles, removal of the prime in the double summation in Eq.(4.40) would imply that the Oseen tensor is defined for particles hydrodynamically interacting with themselves. This assumption is not present in the original Doi-Edwards formulation, Ref.[10]. As we noticed already in connection with Eq.(3.29), the diagonal part of the Oseen tensor is associated with self-diffusion. The question therefore arises: can this ``almost equivalence'' be converted into a full equivalence? The main feature of superconductors is the existence of the Meissner effect (for hard spheres) and of the dual Meissner effect (for vortices). In the present case such an effect is equivalent to the existence of hydrodynamic screening. Hence, to prove such an equivalence requires us to prove the existence of hydrodynamic screening for suspensions. Evidently, we cannot immediately use Eq.(4.40) for such a proof. Therefore, in the next subsection we use London-style arguments to arrive at the desired conclusion.

\subsection{London-style theory of hydrodynamic screening}

\bigskip

We begin our proof by taking into account the no-slip boundary condition, Eq.(2.27):
\begin{equation}
\mathbf{v}(\mathbf{r},t)=\frac{d\mathbf{r}}{dt}=\mathbf{v}(t). \tag{2.27}
\end{equation}
Within the approximations made, we also have to impose the incompressibility requirement
\begin{equation}
\mathbf{\nabla }\cdot \mathbf{v}(\mathbf{r},t)=0. \tag{3.14}
\end{equation}
Because of this requirement, the current $\mathbf{j}=\rho \mathbf{v}$ becomes $\mathbf{j}=n_{0}\mathbf{v}$ with the density $n_{0}$ being a constant. Since $\mathbf{j}$ is a vector, we can always represent it as
\begin{equation}
\mathbf{j}=\alpha \mathbf{\nabla }\psi \tag{4.41}
\end{equation}
with a suitably chosen scalar $\psi $ and some proportionality constant $\alpha .$ To choose such a scalar we take into account that in the present case
\begin{equation}
\mathbf{\nabla }\cdot \mathbf{j}=0, \tag{4.42}
\end{equation}
implying
\begin{equation}
\nabla ^{2}\psi =0. \tag{4.43}
\end{equation}
The vector $\mathbf{j}$ given by Eq.(4.41) is not uniquely defined. It will still obey Eq.(4.42) if we write
\begin{equation}
\mathbf{j}=\alpha \mathbf{\nabla }\psi \pm g\mathbf{A} \tag{4.44}
\end{equation}
for a vector $\mathbf{A}$ such that $\mathbf{\nabla }\cdot \mathbf{A}=0.$ Evidently, a vector obeying Eq.(4.13) possesses this property by construction. The choice of the sign ``$+$'' or ``$-$'' in the above equation can be determined based on the following arguments. Since $\mathbf{j}=n_{0}\mathbf{v}$ and since $n_{0}$ is constant, we can replace Eq.(4.44) by
\begin{equation}
\mathbf{v}=\alpha \mathbf{\nabla }\psi \pm g\mathbf{A}, \tag{4.45}
\end{equation}
by suitably redefining the constants $\alpha $ and $g$.
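The decomposition (4.44)-(4.45) is the familiar Helmholtz splitting, and it can be realized concretely with the transverse projector $T_{ij}(\mathbf{k})$ of Eq.(3.17). The sketch below (in Python; a periodic random field on a $32^{3}$ grid, with all values illustrative) extracts the divergence-free part which plays the role of $\mathbf{A}$:

\begin{verbatim}
import numpy as np

# Helmholtz splitting of a random periodic field via the projector
# T_ij = delta_ij - k_i k_j / k^2 of Eq.(3.17).  Illustrative grid.
rng = np.random.default_rng(3)
n = 32
v = rng.normal(size=(3, n, n, n))
vk = np.fft.fftn(v, axes=(1, 2, 3))

k = np.fft.fftfreq(n) * 2.0 * np.pi
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
kk = np.stack([kx, ky, kz])
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                 # avoid 0/0; the zero mode projects to 0

long_k = kk * (np.sum(kk * vk, axis=0) / k2)   # gradient-like part
trans_k = vk - long_k                          # divergence-free part

div_k = np.sum(kk * trans_k, axis=0)           # k . v_T (up to a factor i)
print("max |k . v_T| :", np.abs(div_k).max())  # ~ 1e-12: transverse indeed
\end{verbatim}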
Next, we assume that $\mathbf{v}$ is a random variable such that on average $<\mathbf{v}>=0$, thus implying
\begin{equation}
<\alpha \mathbf{\nabla }\psi >\pm g<\mathbf{A}>=0. \tag{4.46}
\end{equation}
This equation causes us to choose the sign ``$-$''. After this, we can write for the correlator
\begin{equation}
<\mathbf{v}\cdot \mathbf{v}>=\alpha ^{2}<\mathbf{\nabla }\psi \cdot \mathbf{\nabla }\psi >+g^{2}<\mathbf{A}\cdot \mathbf{A}>=2g^{2}<\mathbf{A}\cdot \mathbf{A}>. \tag{4.47}
\end{equation}
In view of our choice of $\mathbf{A}$, the $<\mathbf{A}\cdot \mathbf{A}>$ correlator coincides with that given in the exponent of Eq.(4.24). Now we take into account Eq.(4.20b) where, of course, we replace $\mathbf{j}$ by $\mathbf{v}$, so that using the dictionary, Eq.(4.16), we arrive at
\begin{equation}
\mathbf{\vec{\omega}}=e\mathbf{v}\qquad \text{(London equation)} \tag{4.48}
\end{equation}
supplemented with
\begin{equation}
\mathbf{\vec{\omega}}=\mathbf{\nabla }\times \mathbf{v}\qquad \text{(Maxwell equation).} \tag{4.14}
\end{equation}
Such an identification becomes apparent because of the following arguments. Let us use Eq.(4.45) in Eq.(4.48) in order to obtain
\begin{equation}
\mathbf{\vec{\omega}}=e(\alpha \mathbf{\nabla }\psi -e\mathbf{A}). \tag{4.49}
\end{equation}
In this equation we replaced the constant $g$ by $e$. Furthermore, since Eq.(4.49) formally looks like Fick's first law, we can as well rewrite this result as
\begin{equation}
\mathbf{\vec{\omega}}=e(\frac{D_{0}}{2\pi }\mathbf{\nabla }\psi -e\mathbf{A}). \tag{4.50}
\end{equation}
By applying the curl operator to both sides of this equation and taking into account Eq.(4.13), we obtain
\begin{equation}
\mathbf{\nabla }\times \mathbf{\vec{\omega}}=-e^{2}\mathbf{v}. \tag{4.51}
\end{equation}
Taking into account Maxwell's Eq.(4.14) and using it in Eq.(4.51), we also obtain
\begin{equation}
\nabla ^{2}\mathbf{v}=e^{2}\mathbf{v}. \tag{4.52a}
\end{equation}
Equivalently, we obtain
\begin{equation}
\nabla ^{2}\mathbf{A}=e^{2}\mathbf{A}. \tag{4.52b}
\end{equation}
Using Eq.s (4.47), (4.52a) and following the same steps as in Appendix A of our previous work, Ref.[23], we obtain
\begin{equation}
\left\langle \mathbf{v}(\mathbf{r})\cdot \mathbf{v}(0)\right\rangle =\frac{const}{r}\exp (-\frac{r}{\xi _{H}}), \tag{4.53}
\end{equation}
where $\xi _{H}=e^{-1}=\left( \dfrac{1}{R_{0}}\sqrt{\dfrac{D_{0}\rho }{4\pi \eta }}\right) ^{-1}$ and the constant in Eq.(4.53) can be obtained from a comparison between this equation and Eq.(2.37). An analogous result is also obtained for the $<\mathbf{A}\cdot \mathbf{A}>$ correlator. In accord with Eq.(2.44) we obtain the result of central importance: $\xi _{H}\rightarrow \infty $ when $\rho \rightarrow 0,$ implying the absence of screening in the infinite dilution limit. Our derivation explains the rationale behind the identification of Eq.s (4.14) and (4.48) with the Maxwell and London equations in the theory of superconductivity, Ref.[25], pages 174, 175. Evidently, such an identification becomes possible only in view of the topological nature of the London equation, Eq.(4.48), which comes from the identification of Eq.(4.19) with (4.20a).

\section{Exotic superconductivity of colloidal suspensions}

\subsection{General Remarks}

In the previous section we developed a theory of hydrodynamic screening following the ideas of the London brothers, Ref.[24]. As is well known, their seminal work found its most notable application in the theory of ordinary superconductors [25].
At the same time, Eq.(4.40) was originally used in the theory of superfluid $^{4}$He. In the book by Kleinert [42] it is shown that Eq.(4.40) can be rewritten in such a way that it acquires the same form as used in the phenomenological Ginzburg-Landau (G-L) theory of superconductivity [25]. We would like to arrive at the same conclusions differently. In doing so we also would like to determine both the physical and the mathematical meaning of the parameter $m_{0}$ which was left undetermined in Eq.(4.40). We shall develop our arguments mainly following the original G-L pathway. It should be said, though, that in the present case the connections with superconductivity lie only in the structure of the equations to be derived. The underlying physics is similar but not identical to that for superconductors. Indeed, in the case of superconductors one is typically talking about the superconducting-to-normal transition controlled by temperature. Also, one is talking about the temperature-dependent ``critical'' magnetic field (the upper and the lower critical magnetic fields in the case of superconductors of the second kind) which destroys the superconductivity. In the present case of colloidal suspensions there is no explicit temperature dependence: \textsl{the same} phenomena can take place at \textsl{any} temperature at which the solvent is not frozen. If we account for short range forces, then, of course, one can study a situation in which such a colloidal suspension undergoes a temperature-controlled phase transition. Such a case requires a separate treatment and will not be considered in this work. In the present case the phase diagram can be qualitatively described as follows. The infinite dilution limit corresponds to the \textsl{normal} state. The regime of finite concentrations corresponds to a \textsl{mixed} state, typical for superconductors of the \textsl{second kind}, and the dramatic jump in viscosity discussed in the Introduction and in Section 2 corresponds to the transition to the ``fully superconducting'' state. Such a difference from the usual superconductors brings some new physics into play which may be useful in other disciplines, e.g. in high energy physics, turbulence, etc.\footnote{E.g. see Section 6.}

\subsection{G-L style derivation of equations of superconductivity for colloidal suspensions}

\bigskip

We begin with one of Maxwell's equations in its conventional form, e.g. as given in Ref.[25], page 181,
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}=\frac{4\pi }{c}\mathbf{j}. \tag{5.1}
\end{equation}
In the G-L theory we have for the current $\mathbf{j}$ the following result:
\begin{equation}
\mathbf{j}=-\frac{i\tilde{e}\hbar }{2m}(\varphi ^{\ast }\nabla \varphi -\varphi \nabla \varphi ^{\ast })-\frac{2\tilde{e}^{2}}{mc}\left\vert \varphi \right\vert ^{2}\mathbf{A}. \tag{5.2}
\end{equation}
Both equations can be obtained by minimization of the following (\textsl{truncated}) G-L functional\footnote{This truncation is known in the literature as the ``\textsl{London limit}''.}
\begin{equation}
\mathcal{F}[\mathbf{A},\varphi ]=\int d^{3}r\{\frac{\left( \mathbf{\nabla }\times \mathbf{A}\right) ^{2}}{8\pi }+\frac{\hbar ^{2}}{4m}\left\vert (\mathbf{\nabla }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A})\varphi \right\vert ^{2}\} \tag{5.3}
\end{equation}
with respect to $\mathbf{A}$.
Substitution of the ansatz $\varphi =\dfrac{\sqrt{n_{s}}}{2}\exp (i\psi )$ into Eq.(5.2) leads to the current
\begin{equation}
\mathbf{j}=\frac{\tilde{e}\hbar }{2m}n_{s}(\mathbf{\nabla }\psi -\frac{2\tilde{e}}{\hbar c}\mathbf{A}), \tag{5.4a}
\end{equation}
to be compared with our Eq.(4.50). Evidently, this result is equivalent to the postulated London equation for superconductors
\begin{equation}
\mathbf{\nabla }\times \mathbf{j}=-\dfrac{en_{s}}{mc}\mathbf{B}. \tag{5.4b}
\end{equation}
At the same time, a comparison of Eq.(5.4a) with Eq.(4.50) leads to the following chain of identifications: $\dfrac{\tilde{e}\hbar }{2m}n_{s}\rightleftarrows eD_{0}$ and $\dfrac{\tilde{e}^{2}}{mc}n_{s}\rightleftarrows e^{2}.$ Consequently, we also obtain: $\dfrac{\hbar ^{2}}{2m}\rightleftarrows D_{0},\ \dfrac{\tilde{e}n_{s}}{\hbar }\rightarrow e$; $\dfrac{\tilde{e}^{2}n_{s}}{m}\rightarrow e^{2},\ c\rightleftarrows 4\pi \rightarrow \dfrac{1}{2\pi },\ \dfrac{2\tilde{e}}{\hbar }\rightleftarrows \dfrac{e}{D_{0}}\rightleftarrows \dfrac{2e}{n_{s}}.$ Using these identifications, we can rewrite the functional $\mathcal{F}[\mathbf{A},\varphi ]$ as follows:
\begin{equation}
\mathcal{F}[\mathbf{A},\varphi ]=\frac{\rho }{2}\int d^{3}r\{\left( \mathbf{\nabla }\times \mathbf{A}\right) ^{2}+D_{0}\left\vert (\mathbf{\nabla }-i\frac{2\pi e}{D_{0}}\mathbf{A})\varphi \right\vert ^{2}\}. \tag{5.5}
\end{equation}
In the traditional setting, the superconducting density $n_{s}$ is determined from the full G-L functional
\begin{equation}
\mathcal{F}[\mathbf{A},\varphi ]=\int d^{3}r\{\frac{\left( \mathbf{\nabla }\times \mathbf{A}\right) ^{2}}{8\pi }+\frac{\hbar ^{2}}{4m}\left\vert (\mathbf{\nabla }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A})\varphi \right\vert ^{2}+a\left\vert \varphi \right\vert ^{2}+\frac{b}{2}[\left\vert \varphi \right\vert ^{2}]^{2}\}, \tag{5.6}
\end{equation}
e.g. by minimization with respect to $\varphi ^{\ast }$. In fact, to obtain $n_{s}$ it is formally sufficient to treat only the case when $\mathbf{A}=0$. Indeed, under this condition we obtain
\begin{equation}
a\varphi _{c}+b\left\vert \varphi _{c}\right\vert ^{2}\varphi _{c}=0, \tag{5.7}
\end{equation}
which has a nontrivial solution only for $a<0.$ In this case we get $n_{s}=\frac{\left\vert a\right\vert }{b},$ provided that $b>0,$ as usual. If we use this result back in Eq.(5.6), that is, if we use $\varphi _{c}=\dfrac{\sqrt{n_{s}}}{2}\exp (i\psi )$ in Eq.(5.6), then the polynomial (in $\varphi $) part of the functional becomes a constant. This constant is divergent when the volume of the system goes to infinity. To prevent this from happening, another constant term is typically added to the functional $\mathcal{F}[\mathbf{A},\varphi ]$ so that it acquires the following canonical form:
\begin{equation}
\mathcal{F}[\mathbf{A},\varphi ]=\int d^{3}\mathbf{r}\{\frac{\left( \mathbf{\nabla }\times \mathbf{A}\right) ^{2}}{8\pi }+\frac{\hbar ^{2}}{4m}\left\vert (\mathbf{\nabla }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A})\varphi \right\vert ^{2}+\frac{b}{2}(\left\vert \varphi \right\vert ^{2}-n_{s})^{2}\}. \tag{5.8}
\end{equation}
Then, when $\varphi _{c}=\dfrac{\sqrt{n_{s}}}{2}\exp (i\psi ),$ the polynomial (in $\varphi $) part of the functional vanishes and, accordingly, in this limit we require
\begin{equation}
\int d^{3}r\left\vert (\mathbf{\nabla }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A})\varphi _{c}\right\vert ^{2}\rightarrow 0 \tag{5.9}
\end{equation}
as well.
This leads us to the equation
\begin{equation}
\frac{\hbar c}{i2\tilde{e}}\frac{1}{\varphi _{c}}\mathbf{\nabla }\varphi _{c}=\mathbf{A} \tag{5.10a}
\end{equation}
or to
\begin{equation}
\frac{\hbar c}{2\tilde{e}}\mathbf{\nabla }\psi =\mathbf{A}. \tag{5.10b}
\end{equation}
This equation coincides (on average) with the previously obtained Eq.(4.46) (with the redefinitions described above), as required, and will be treated further in Section 5.4. It should be noted, though, that originally, in London's theory, Ref.[24], $n_{s}$ was left as an adjustable parameter and, hence, remained microscopically undefined. This is important in our case since the phenomenon of superconductivity can be looked upon (as in thermodynamics) without any reference to spontaneous symmetry breaking, the Higgs effect, etc. At the level of the G-L theory, the London equations are reproduced with the help of the truncated G-L functional. Hence, in principle, in the present case the use of the truncated functional, Eq.(5.5), is also sufficient. At the macroscopic mean field level the presence of the polynomial terms in the full G-L functional, Eq.s (5.6) and (5.8), seems somewhat artificial. They do not reveal their microscopic origin and are introduced just to fit the data. We would like to use some known facts from the path integral treatments of superconductivity/superfluidity in order to reveal their physical meaning. Such information is also useful for the development of the hydrodynamic theory of colloidal suspensions.

\subsection{Path integrals associated with the G-L functional}

\bigskip

In view of Eq.(4.40), we begin our discussion with the simplest case of the path integral for a single ``relativistic'' scalar particle. Following Polyakov, Ref.[43], the Euclidean version of the propagator for such a (Klein-Gordon) particle is given by
\begin{equation}
G(x,x^{\prime })=\int \left( \frac{\mathfrak{D}\mathbf{x}(\tau )}{\mathfrak{D}f(\tau )}\right) \exp (-m_{0}\int\limits_{0}^{1}d\tau \sqrt{\mathbf{\dot{x}}^{2}(\tau )}), \tag{5.11a}
\end{equation}
where in the most general case
\begin{equation}
\mathbf{\dot{x}}^{2}(\tau )=g_{\mu \nu }(\mathbf{x})\frac{dx^{\mu }}{d\tau }\frac{dx^{\nu }}{d\tau }. \tag{5.11b}
\end{equation}
This propagator is of interest in string theory since it represents a reduced form of the propagator for the bosonic string. As in the case of a string, the action in this path integral is manifestly reparametrization-invariant, i.e. invariant under changes of the type $\mathbf{x}(\tau )\rightarrow \mathbf{x}(f(\tau ))$ (with $f(\tau )$ being some nonnegative monotonically increasing function). The path integral measure is designed to absorb this redundancy. The full account of this absorption is cumbersome. Because of this, instead of copying Polyakov's treatment of such a path integral, we shall adopt a simplified treatment allowing us to recover Polyakov's final results. We begin with an obvious well-known identity
\begin{equation}
\left( \frac{1}{4\pi t}\right) ^{\frac{d}{2}}\exp (-\frac{1}{4}\frac{\mathbf{x}^{2}}{t})=\int\limits_{\mathbf{x}(0)=0}^{\mathbf{x}(t)=\mathbf{x}}\mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}. \tag{5.12}
\end{equation}
This identity is used below as follows. Consider the propagator for the Klein-Gordon (K-G) field given by
\begin{equation}
G(\mathbf{x})=\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\frac{\exp (i\mathbf{k\cdot x})}{\mathbf{k}^{2}+m^{2}}.
By employing the identity
\begin{equation}
\frac{1}{a}=\int\limits_{0}^{\infty }dx\exp (-ax), \tag{5.14}
\end{equation}
Eq.(5.13) can be rewritten as follows:
\begin{eqnarray}
G(\mathbf{x}) &=&\int\limits_{0}^{\infty }dt\exp (-tm^{2})\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\exp (i\mathbf{k\cdot x-}t\mathbf{k}^{2}) \nonumber \\
&=&\frac{1}{\mathcal{E}}\int\limits_{0}^{\infty }dt\exp (-\mathcal{E}tm^{2})\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\exp (i\mathbf{k\cdot x-}\mathcal{E}t\mathbf{k}^{2}) \nonumber \\
&=&\frac{1}{\mathcal{E}}\int\limits_{0}^{\infty }dt\exp (-t\mathcal{E}m^{2})\left( \frac{1}{4\pi \mathcal{E}t}\right) ^{\frac{d}{2}}\exp (-\frac{1}{4}\frac{\mathbf{x}^{2}}{\mathcal{E}t}) \nonumber \\
&=&\frac{1}{\mathcal{E}}\int\limits_{0}^{\infty }dt\exp (-t\mathcal{E}m^{2})\int\limits_{\mathbf{x}(0)=0}^{\mathbf{x}(t)=\mathbf{x}}\mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4\mathcal{E}}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}, \TCItag{5.15}
\end{eqnarray}
where we used the identity, Eq.(5.12), to obtain the last line and introduced an arbitrary nonnegative parameter $\mathcal{E}$ for comparison with the results by Polyakov. Specifically, using page 163 of the book by Polyakov (and comparing our 3rd line above with the 3rd line of his Eq.(9.63)) we can make the following identifications: $\mathcal{E\rightleftarrows \varepsilon }$, $m^{2}\rightleftarrows \mu .$ Since, according to Polyakov, $\mu =\varepsilon ^{-1}(m_{0}-\dfrac{c}{\sqrt{\varepsilon }})$ with $c$ being some constant, we obtain $m_{0}=\mathcal{E}m^{2}+\dfrac{c}{\sqrt{\varepsilon }}$. That is, the physical mass $m$ entering the K-G equation is obtained as the limit ($\varepsilon \rightarrow 0$)
\begin{equation}
m=\lim_{m_{0}\rightarrow m_{cr}}\varepsilon ^{-\frac{1}{2}}(m_{0}-m_{cr})^{\frac{1}{2}}. \tag{5.16}
\end{equation}
Clearly, such an expression is nonnegative by construction. From the last line of Eq.(5.15) it follows that the propagator for the K-G field is just the direct Laplace transform of the nonrelativistic "diffusion" propagator, Eq.(5.12), with the Laplace variable $m^{2}$ playing the role of a (squared) mass for such a field. In the Euclidean version of the K-G propagator this mass cannot be negative, since in such a case the identity, Eq.(5.14), cannot be used, so that the connection between the nonrelativistic and the K-G propagators is lost. However, Eq.(5.2) is seemingly for the \textsl{quantum} current, while the propagator in Eq.(5.12) describes Brownian motion, not quantum diffusion. To fix the problem we have to replace the time $t$ in Eq.(5.12) by $it$ and, accordingly, to make changes in Eq.(5.15). This then converts the Laplace transform into the Fourier transform, provided that the nonrelativistic propagator describes the \textsl{retarded} Green's function. To use the full strength of the apparatus of quantum field theory one needs to use the causal Green's functions. This is required by relativistic covariance, which treats the space and time coordinates on the same footing. Once all of these requirements are met, it becomes possible to treat the case of a negative mass. It should be emphasized at this point that the London-style derivation given in the previous section formally \textsl{does not} require such a quantum mechanical analogy.
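As a numerical cross-check of the last line of Eq.(5.15) (a sketch in $d=3$ with $\mathcal{E}=1$; the values of $m$ and $\left\vert \mathbf{x}\right\vert $ below are arbitrary), the proper-time integral reproduces the closed-form Euclidean K-G (Yukawa) propagator $e^{-m\left\vert \mathbf{x}\right\vert }/4\pi \left\vert \mathbf{x}\right\vert $:
\begin{verbatim}
# Sketch: check, in d = 3, that
#   int_0^inf dt e^{-m^2 t} (4 pi t)^{-3/2} e^{-x^2/(4t)}
# equals exp(-m|x|)/(4 pi |x|); m and |x| are illustrative.
import numpy as np
from scipy.integrate import quad

m, r = 1.7, 0.9
integrand = lambda t: np.exp(-m**2*t) * (4*np.pi*t)**-1.5 * np.exp(-r**2/(4*t))
val, _ = quad(integrand, 0, np.inf)
yukawa = np.exp(-m*r) / (4*np.pi*r)
print(val, yukawa)                    # agree to quadrature accuracy
assert np.isclose(val, yukawa, rtol=1e-6)
\end{verbatim}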
In view of the above observations, the following problem emerges: is it possible to reproduce the functional $\mathcal{F}$ defined by Eq.(4.31) using the truncated G-L functional for superconductivity in the exponent of the associated path integral? We would like to provide an affirmative answer to this question now. We begin with the partition function $Z$ for the two-component scalar K-G-type field:
\begin{equation}
\ln Z=-\ln [\det (-\nabla ^{2}+m^{2})]. \tag{5.17}
\end{equation}
Since
\begin{equation}
\ln [\det (-\nabla ^{2}+m^{2})]=tr\left[ \ln (-\nabla ^{2}+m^{2})\right] \tag{5.18}
\end{equation}
and
\begin{equation}
tr\left[ \ln (-\nabla ^{2}+m^{2})\right] =\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\ln (\mathbf{k}^{2}+m^{2}), \tag{5.19}
\end{equation}
we can use the results of our previous work, Ref.[44], for the evaluation of the last expression. Thus, we obtain
\begin{eqnarray}
tr\left[ \ln \frac{(-\nabla ^{2}+m^{2})}{(-\nabla ^{2})}\right] &=&\int\limits_{0}^{m^{2}}dy\frac{d}{dy}\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\ln (\mathbf{k}^{2}+y) \nonumber \\
&=&\int\limits_{0}^{m^{2}}dy\int \frac{d^{d}\mathbf{k}}{\left( 2\pi \right) ^{d}}\frac{1}{\mathbf{k}^{2}+y}=\int\limits_{0}^{m^{2}}dyG(\mathbf{0};y) \nonumber \\
&=&\int\limits_{0}^{\infty }dt\int\limits_{0}^{m^{2}}dy\exp (-ty)\int\limits_{\mathbf{x}(0)=\mathbf{0}}^{\mathbf{x}(t)=\mathbf{0}}\mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\} \nonumber \\
&=&\int\limits_{0}^{\infty }\frac{dt}{t}(1-\exp (-m^{2}t))\int\limits_{\mathbf{x}(0)=\mathbf{0}}^{\mathbf{x}(t)=\mathbf{0}}\mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}. \TCItag{5.20}
\end{eqnarray}
Following the usual practice, we shall write $\oint $ instead of $\int\limits_{\mathbf{x}(0)=\mathbf{0}}^{\mathbf{x}(t)=\mathbf{0}}$ in the path integral and consider a formal (that is, divergent!) expression for the free energy $\mathcal{F}_{0}$:
\begin{equation}
-\mathcal{F}_{0}=\ln Z_{0}=-\ln [\det (-\nabla ^{2}+m^{2})]=\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}, \tag{5.21}
\end{equation}
keeping in mind that this result makes sense mathematically only when the same expression with $m^{2}=0$ is subtracted from it, as required by Eq.(5.20). Inclusion of the electromagnetic field into this scheme can be readily accomplished now. For this purpose we replace the $\mathbf{\nabla }$ operator by its covariant counterpart, $\mathbf{\nabla \rightarrow D\equiv \nabla -}ie\mathbf{A}$ (we put $D_{0}=1$ in view of the developments presented in Eq.(5.15)). Using $\mathbf{D}$ instead of $\mathbf{\nabla }$ in Eq.(5.20), we have to evaluate the following path integral\footnote{For $m^{2}=0$ this is just part of the truncated G-L functional.}:
\begin{equation}
\lbrack \det (-\mathbf{D}^{2}+m^{2})]^{-1}=\int D[\bar{\varphi},\varphi ]\exp (-\frac{1}{2}\int d^{3}r\{\bar{\varphi}(-\mathbf{D}^{2}+m^{2})\varphi \}). \tag{5.22}
\end{equation}
For $\mathbf{A}=0$ we did this already, while for $\mathbf{A}\neq 0$ we can treat the terms containing $\mathbf{A}$ as a perturbation. We can do the same for the path integral in Eq.(4.32).
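The chain of identities, Eqs.(5.18)-(5.20), is easy to test in a finite-dimensional caricature, with $-\nabla ^{2}$ replaced by a small lattice Laplacian (an assumption made purely for illustration; any positive-definite matrix would do):
\begin{verbatim}
# Sketch: finite-dimensional check of Eqs. (5.18)-(5.20).
import numpy as np
from scipy.integrate import quad

n, m2 = 8, 2.5
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # Dirichlet -nabla^2
lam = np.linalg.eigvalsh(K)

# ln det (K + m^2) = tr ln (K + m^2)                 -- Eq. (5.18)
lhs = np.linalg.slogdet(K + m2*np.eye(n))[1]
assert np.isclose(lhs, np.sum(np.log(lam + m2)))

# tr ln((K+m^2)/K) = int_0^{m^2} dy tr (K+y)^{-1}    -- Eq. (5.20)
tr_log_ratio = np.sum(np.log((lam + m2)/lam))
integral, _ = quad(lambda y: np.sum(1.0/(lam + y)), 0.0, m2)
print(tr_log_ratio, integral)
assert np.isclose(tr_log_ratio, integral)
\end{verbatim}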
That such a perturbative treatment is possible is easy to understand if we realize that
\begin{equation}
\int\limits_{0}^{\infty }dt\exp (-m^{2}t)\text{I}[\mathbf{A};t]\mid _{\mathbf{A}=0}=-\frac{d}{dm^{2}}\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}. \tag{5.23}
\end{equation}
Therefore, the final answer reads as follows:
\begin{eqnarray}
-\mathcal{F} &=&\ln Z=-\ln [\det (-\mathbf{D}^{2}+m^{2})]=\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\int\limits_{0}^{t}d\tau \lbrack \frac{1}{4}\left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}+i\frac{e}{f}\mathbf{\dot{x}}\cdot \mathbf{A}[\mathbf{x}(\tau )]]\} \nonumber \\
&=&\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}\exp \{-i\frac{e}{f}\oint d\mathbf{r}\cdot \mathbf{A}[\mathbf{x}(\tau )]\}. \TCItag{5.24}
\end{eqnarray}
This result demonstrates that applying the operator $\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)$ to I$[\mathbf{A};t]$ defined in Eq.(4.32) makes it equivalent to the "matter" part of the truncated G-L functional for superconductivity, as needed. This raises the question of comparing the full G-L functional with the "diffusion" path integrals of Section 4. Evidently, this can be done only if in the original diffusion Eq.(3.30) we do not discard the potential $U$. If we do not discard the potential and if, instead, we ignore the hydrodynamic interactions completely, we end up with the following path integral for interacting Brownian particles in the canonical ensemble:
\begin{equation}
\Xi =\int \prod\limits_{l=1}^{N}\mathcal{D}[\mathbf{x}(\tau _{l})]\exp \{-\frac{1}{4D_{0}}\sum\limits_{i=1}^{N}\int\limits_{0}^{t}d\tau _{i}\left( \frac{d\mathbf{x}}{d\tau _{i}}\right) ^{2}-\sum\limits_{i<j}^{N}\int\limits_{0}^{t}d\tau _{i}\int\limits_{0}^{t}d\tau _{j}V[\mathbf{x}(\tau _{i})-\mathbf{x}(\tau _{j})]\}. \tag{5.25}
\end{equation}
It is essential that this expression \textsl{does not} contain the self-interactions typical for problems involving polymer chains with excluded volume-type interactions. The situation here resembles that encountered when, following Doi and Edwards, Ref.[10], we redefined the Oseen tensor in Eqs.(3.29) and (3.30) so that it acquired a diagonal part. In the present case we must require the diagonal part to be zero at the end of the calculations. These results, correct for colloidal particles, may become incorrect in the present case for the following reason. From looking either at Eq.(5.24) or Eq.(4.39), we recognize that in these cases we are dealing with assemblies of loops (vortices) which are in one-to-one correspondence with diffusing particles. While this topic is studied in detail in the next subsection and Appendix B, here we notice that if Eq.(5.25) is written for such loops, then the excluded volume requirement becomes essential, even for a single loop. Indeed, the existence of such a loop is possible only if the field $\mathbf{A}$ associated with these loops is uniquely defined. This is possible only if the loop contour does not have self-intersections. This is the origin of the excluded volume constraint requirement.
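The binary interaction in Eq.(5.25) will be decoupled below by a Hubbard-Stratonovich transformation, Eq.(5.28). As a minimal sketch (a single-variable caricature of that Gaussian identity; the values of $v$ and $\rho $ below are arbitrary), the identity $\exp (-v\rho ^{2}/2)=(2\pi v)^{-1/2}\int d\psi \exp (-\psi ^{2}/2v)\exp (i\psi \rho )$ can be checked numerically:
\begin{verbatim}
# Sketch: one-variable Hubbard-Stratonovich identity; the imaginary
# part of the integral vanishes by symmetry, so only cos is kept.
import numpy as np
from scipy.integrate import quad

v, rho = 0.8, 1.3
integral, _ = quad(lambda p: np.exp(-p**2/(2*v)) * np.cos(p*rho),
                   -np.inf, np.inf)
lhs = np.exp(-v * rho**2 / 2)
rhs = integral / np.sqrt(2*np.pi*v)
print(lhs, rhs)
assert np.isclose(lhs, rhs)
\end{verbatim}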
With this restriction imposed, we introduce a density
\begin{equation}
\rho (\mathbf{r})=\sum\limits_{i=1}^{N}\int\limits_{0}^{t}d\tau _{i}\delta (\mathbf{r}-\mathbf{x}(\tau _{i})), \tag{5.26}
\end{equation}
so that the binary potential in Eq.(5.25) can be written as
\begin{equation}
\frac{1}{2}\sum\limits_{i,j}^{N}\int\limits_{0}^{t}d\tau _{i}\int\limits_{0}^{t}d\tau _{j}V[\mathbf{x}(\tau _{i})-\mathbf{x}(\tau _{j})]=\frac{1}{2}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\rho (\mathbf{r})V[\mathbf{r}-\mathbf{r}^{\prime }]\rho (\mathbf{r}^{\prime }). \tag{5.27}
\end{equation}
Then, using the Hubbard-Stratonovich (H-S) identity we obtain
\begin{equation}
\exp (-\frac{1}{2}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\rho (\mathbf{r})V[\mathbf{r}-\mathbf{r}^{\prime }]\rho (\mathbf{r}^{\prime }))=\mathfrak{N}\int D[\psi (\mathbf{r})]\exp (-\frac{1}{2}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\psi (\mathbf{r})V^{-1}[\mathbf{r}-\mathbf{r}^{\prime }]\psi (\mathbf{r}^{\prime }))\exp (i\int d\mathbf{r}\,\psi (\mathbf{r})\rho (\mathbf{r})), \tag{5.28}
\end{equation}
with $\mathfrak{N}$ being a normalization constant (bringing the above identity to the statement $1=1$ for $\rho =0$). Use of this result in Eq.(5.25), in which self-exclusion is allowed, converts this partition function into the following form (written for the loop ensemble):
\begin{equation}
\Xi =\mathfrak{N}\int D[\psi (\mathbf{r})]\exp (-\frac{1}{2}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\psi (\mathbf{r})V^{-1}[\mathbf{r}-\mathbf{r}^{\prime }]\psi (\mathbf{r}^{\prime }))\prod\limits_{i=1}^{N}G_{i}(0;t\mid \psi ), \tag{5.29}
\end{equation}
where
\begin{equation}
G_{i}(0;t\mid \psi )=\oint \mathfrak{D}[\mathbf{x}(\tau _{i})]\exp \{-\int\limits_{0}^{t}d\tau _{i}[\frac{1}{4}\left( \frac{d\mathbf{x}}{d\tau _{i}}\right) ^{2}+ie\psi \lbrack \mathbf{x}(\tau _{i})]]\}. \tag{5.30}
\end{equation}
In the case of polymers one typically uses a delta function-type potential for the description of the interactions. This observation is helpful in the present case as well, because of the following. Consider the G-L functional, Eq.(5.6), and use the H-S identity for the interaction term:
\begin{equation}
\exp (-\frac{b}{2}[\left\vert \varphi \right\vert ^{2}]^{2})=\mathfrak{N}\int D[\psi (\mathbf{r})]\exp (-\frac{1}{2b}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\psi (\mathbf{r})\psi (\mathbf{r}^{\prime }))\exp (i\int d\mathbf{r}\,\psi (\mathbf{r})\left\vert \varphi \right\vert ^{2}). \tag{5.31}
\end{equation}
This allows us to replace the determinant, Eq.(5.22), by the following (more general) determinant,
\begin{equation}
\lbrack \det (-\mathbf{D}^{2}+m^{2}+i\psi )]^{-1}=\int D[\bar{\varphi},\varphi ]\exp (-\frac{1}{2}\int d^{3}r\{\bar{\varphi}(-\mathbf{D}^{2}+m^{2}+i\psi \lbrack \mathbf{r}])\varphi \}), \tag{5.32}
\end{equation}
which, in view of Eq.(5.24), can be equivalently rewritten as
\begin{eqnarray}
-\mathcal{F} &=&\ln Z=-\ln [\det (-\mathbf{D}^{2}+m^{2}+i\psi )]=\mathfrak{N}\int D[\psi (\mathbf{r})]\exp (-\frac{1}{2b}\int d\mathbf{r}\int d\mathbf{r}^{\prime }\psi (\mathbf{r})\psi (\mathbf{r}^{\prime })) \nonumber \\
&&\times \int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\}\exp \{-ie\oint d\mathbf{x}\cdot \mathbf{A}[\mathbf{x}(\tau )]+i\oint d\tau \,\psi \lbrack \mathbf{x}(\tau )]\} \nonumber \\
&=&\int\limits_{0}^{\infty }\frac{dt}{t}\exp (-m^{2}t)\oint \mathfrak{D}[\mathbf{x}(\tau )]\exp \{-\frac{1}{4}\int\limits_{0}^{t}d\tau \left( \frac{d\mathbf{x}}{d\tau }\right) ^{2}\} \nonumber \\
&&\times \exp \{-i\frac{e}{f}\oint d\mathbf{x}\cdot \mathbf{A}[\mathbf{x}(\tau )]-\frac{b}{2}\oint d\tau \oint d\tau ^{\prime }\delta (\mathbf{x}(\tau )-\mathbf{x}(\tau ^{\prime }))\}. \TCItag{5.33}
\end{eqnarray}
Alternatively, this result can be rewritten as a grand canonical ensemble of self-avoiding loops:
\begin{eqnarray}
Z[\mathbf{A;}b]-1 &=&\sum\limits_{n=1}^{\infty }\frac{1}{n!}\prod\limits_{l=1}^{n}\left[ \int\limits_{0}^{\infty }\frac{dt_{l}}{t_{l}}\exp (-m^{2}t_{l})\oint \mathfrak{D}[\mathbf{x}(\tau _{l})]\right] \exp \{-\frac{1}{4}\sum\limits_{l=1}^{n}\int\limits_{0}^{t_{l}}d\tau _{l}\left( \frac{d\mathbf{x}}{d\tau _{l}}\right) ^{2}-i\frac{e}{f}\sum\limits_{l=1}^{n}\oint d\mathbf{x}\cdot \mathbf{A}[\mathbf{x}(\tau _{l})] \nonumber \\
&&-\frac{b}{2}\sum\limits_{l,m=1}^{n}\oint d\tau _{l}\oint d\tau _{m}^{\prime }\delta (\mathbf{x}(\tau _{l})-\mathbf{x}(\tau _{m}^{\prime }))\}. \TCItag{5.34}
\end{eqnarray}
This result is useful to compare with Eq.(4.31). From such a comparison it is evident that Eq.(5.34) is compatible with that obtained previously. It accounts for the effects of non-hydrodynamic-type interactions, which can be incorporated, in principle, in the diffusion Eq.(3.30), in which the potential $U$ must be specified. Clearly, the use of path integrals makes such a task much simpler. However, even though the above derivation is intuitively appealing, strictly speaking, it cannot be used for a number of reasons. Unlike the G-L functional, Eq.(5.8), which is convenient for studying topological and nonperturbative effects, Eq.(5.34) makes sense only in perturbative calculations. This means that phenomena such as screening (caused by the Higgs effect) cannot be captured with such a formalism alone. These observations explain why screening effects were found in solutions of polymers but not in colloidal suspensions, Refs.[10,12]. Furthermore, Eq.(5.34) contains a mixture of reparametrization-invariant and noninvariant terms. This is questionable mathematically. It would be more logical to have the entire action reparametrization-invariant. We study these issues in some detail in the next subsection.

\subsection{Reparametrization invariance and its consequences. London-style analysis}
Since path integrals can seldom be given a mathematically rigorous definition, we would like in this subsection to extend the analysis of Sections 4.2-4.4 avoiding the use of path integrals. We start with a discussion of the result, Eq.(5.10b), which is the superconducting analog of Eq.(4.46) for suspensions. Since $\mathbf{B}=\mathbf{\nabla }\times \mathbf{A}$ (or $\mathbf{v}=\mathbf{\nabla }\times \mathbf{A}$ in the case of suspensions), we conclude that Eq.(5.10b) causes the first term in the G-L functional Eq.(5.6) (or (5.5)) to vanish in the bulk. Nevertheless, it is perfectly permissible to write
\begin{equation}
\frac{\hbar c}{2\tilde{e}}\oint d\mathbf{r}\cdot \mathbf{\nabla }\psi =\oint d\mathbf{r}\cdot \mathbf{A} \tag{5.35}
\end{equation}
and to use Stokes' theorem:
\begin{equation}
\oint\limits_{C}d\mathbf{r}\cdot \mathbf{A=}\iint d\mathbf{S\cdot (\nabla }\times \mathbf{A)=}\iint d\mathbf{S\cdot B=}n\frac{hc}{2\tilde{e}}, \tag{5.36}
\end{equation}
with $n=0,\pm 1,\pm 2,...$ An analogous result for suspensions reads as follows:
\begin{equation}
\oint\limits_{C}d\mathbf{r}\cdot \mathbf{A=}\iint d\mathbf{S\cdot (\nabla }\times \mathbf{A)=}\iint d\mathbf{S\cdot v=}n\frac{D_{0}}{e}. \tag{5.37a}
\end{equation}
In view of Eqs.(4.19) and (4.20), this result also leads to
\begin{equation}
\frac{1}{D_{0}}\iint d\mathbf{S\cdot \tilde{\omega}=}n=\frac{e}{D_{0}}\oint\limits_{C}d\mathbf{r}\cdot \mathbf{A}, \tag{5.37b}
\end{equation}
which is the same as Eq.(4.21). These results can be interpreted in a number of ways. For the sake of argument, we would like to explore the more established case of superconductivity first. Following Lund and Regge, Ref.[45], we suppose that the vector potential $\mathbf{A}$ can be presented as follows:
\begin{equation}
\mathbf{A(r)=}\frac{k}{4\pi }\oint\limits_{C}\frac{1}{\left\vert \mathbf{r}-\mathbf{r}(\sigma )\right\vert }\left( \frac{\partial \mathbf{r}}{\partial \sigma }\right) d\sigma \equiv \frac{k}{4\pi }\oint\limits_{C}\frac{1}{\left\vert \mathbf{r}-\mathbf{r}(\sigma )\right\vert }\mathbf{v}(\sigma )d\sigma, \tag{5.38}
\end{equation}
with an appropriately chosen constant $k.$ This result easily follows from Eq.(4.15) under the assumption that $\tilde{\omega}(\mathbf{r})=k\oint\limits_{C}d\sigma \mathbf{v}(\sigma )\delta (\mathbf{r}-\mathbf{r}(\sigma ))$ (which is the same as our Eq.(4.20b)). Substitution of this result into Eq.(5.36) produces
\begin{equation}
\oint\limits_{C_{1}}d\mathbf{r}\cdot \mathbf{A=}\frac{k}{4\pi }\oint\limits_{C_{1}}\oint\limits_{C_{2}}d\sigma d\sigma ^{\prime }\frac{\mathbf{v}(\sigma )\cdot \mathbf{v}(\sigma ^{\prime })}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert }=n\frac{hc}{2\tilde{e}}. \tag{5.39}
\end{equation}
The obtained result allows us to determine the constant $k.$ To do so we need to demonstrate that the above double integral is a linking number, e.g. see Eq.(4.6).
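Before presenting the proof, it is reassuring to evaluate the linking-number integral of Eq.(4.6) directly. The following numerical sketch (our own illustrative geometry: two singly linked unit circles forming a Hopf link) returns $lk=\pm 1$:
\begin{verbatim}
# Sketch: Gauss double integral (1/4pi) oint oint
# (dr1 x dr2).(r1-r2)/|r1-r2|^3 for a Hopf link; result is +-1.
import numpy as np

N = 400
s = np.linspace(0, 2*np.pi, N, endpoint=False)
# C1: unit circle in the x-y plane; C2: unit circle in the x-z
# plane centered at (1,0,0), so the two circles are linked once.
r1 = np.stack([np.cos(s), np.sin(s), 0*s], axis=1)
t1 = np.stack([-np.sin(s), np.cos(s), 0*s], axis=1)
r2 = np.stack([1 + np.cos(s), 0*s, np.sin(s)], axis=1)
t2 = np.stack([-np.sin(s), 0*s, np.cos(s)], axis=1)

d = r1[:, None, :] - r2[None, :, :]
cr = np.cross(t1[:, None, :], t2[None, :, :])
num = np.einsum('ijk,ijk->ij', cr, d)
lk = (num / np.linalg.norm(d, axis=2)**3).sum() * (2*np.pi/N)**2 / (4*np.pi)
print(lk)                              # ~ +-1 up to discretization error
assert np.isclose(abs(lk), 1.0, atol=1e-3)
\end{verbatim}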
The proof of this result depends upon the correctness of the following statement:
\begin{eqnarray}
\oint\limits_{C}d\mathbf{r}\cdot \mathbf{A} &\mathbf{=}&\varkappa \oint\limits_{C}d\mathbf{r}\cdot \mathbf{B=}\varkappa \oint\limits_{C}d\mathbf{r}\cdot \mathbf{\nabla }\times \mathbf{A} \nonumber \\
&=&\frac{\varkappa k}{4\pi }\oint\limits_{C_{1}}d\sigma \mathbf{v}(\sigma )\cdot \oint\limits_{C_{2}}d\sigma ^{\prime }\mathbf{v}(\sigma ^{\prime })\times \frac{(\mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime }))}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert ^{3}} \nonumber \\
&=&\frac{\varkappa k}{4\pi }\oint\limits_{C_{1}}\oint\limits_{C_{2}}d\sigma d\sigma ^{\prime }\left[ \mathbf{v}(\sigma )\times \mathbf{v}(\sigma ^{\prime })\right] \cdot \frac{(\mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime }))}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert ^{3}} \nonumber \\
&=&\varkappa k\,lk(1,2), \TCItag{5.40}
\end{eqnarray}
with the linking number $lk(1,2)$ defined in Eq.(4.6). If the above result is correct and the constant $\varkappa $ can be found, then the constant $k$ can be determined from Eq.(5.39). Hence, the task lies in demonstrating that a nonzero constant $\varkappa $ does exist. To do so we shall use the standard London analysis. Thus, we write
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}=\frac{4\pi }{c}\mathbf{j}\text{ \ \ \ \ (Maxwell equation)} \tag{5.1}
\end{equation}
and
\begin{equation}
\mathbf{\nabla }\times \mathbf{j=-}\dfrac{en_{s}}{mc}\mathbf{B}\text{ \ \ (London equation).} \tag{5.4b}
\end{equation}
Since $\mathbf{B}=\mathbf{\nabla }\times \mathbf{A}$ and $\mathbf{\nabla }\cdot \mathbf{B}=\mathbf{\nabla }\cdot \mathbf{A=}0,$ we can set $\varkappa \mathbf{B=A}$, so that we obtain
\begin{equation}
\mathbf{\nabla }\times \mathbf{A=}\varkappa \frac{4\pi }{c}\mathbf{j} \tag{5.41}
\end{equation}
and, from here,
\begin{equation}
-\mathbf{\nabla }^{2}\mathbf{A}=\mathbf{\nabla }\times \mathbf{\nabla }\times \mathbf{A=}\varkappa \frac{4\pi }{c}\left( \mathbf{\nabla }\times \mathbf{j}\right) =-\varkappa \dfrac{4\pi en_{s}}{mc^{2}}\mathbf{B=-}\varkappa ^{2}\dfrac{4\pi en_{s}}{mc^{2}}\mathbf{A}, \tag{5.42}
\end{equation}
which is the familiar screening-type equation, e.g. see Eq.(4.51b). Since in the conventional setting the penetration depth $\delta $ is known to satisfy $\delta ^{2}=\left( \dfrac{4\pi en_{s}}{mc^{2}}\right) ^{-1},$ we can choose $\varkappa ^{2}\mathbf{=}1$, implying that $k=\dfrac{hc}{2\tilde{e}}.$ The choice $\varkappa \mathbf{=}1$ does not mean, of course, that the constant $\varkappa $ is dimensionless. Because of this, we obtain
\begin{equation}
\frac{1}{4\pi \varkappa }\oint\limits_{C_{1}}\oint\limits_{C_{2}}d\sigma d\sigma ^{\prime }\frac{\mathbf{v}(\sigma )\cdot \mathbf{v}(\sigma ^{\prime })}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert }=lk(1,2), \tag{5.43}
\end{equation}
in accord with Eq.(4.21). Next, if we take into account screening effects, the conclusions we have reached will remain the same due to the reparametrization invariance of both sides of Eq.(5.43). Indeed, consider one loop, say $C_{1},$ going from $-\infty $ to $+\infty $ in the z-direction. If we compactify $\mathbf{R}^{3}$ by adding one point at infinity, so that $\mathbf{R}^{3}$ becomes $\mathbf{S}^{3}$, then such a loop will be closed. Another loop can stay mainly in the x-y plane so that the linking number becomes the winding number, e.g. see Ref.[46], page 134.
Under these conditions the screening factor $\exp (-\dfrac{r}{\delta })$\footnote{The emergence of such a screening factor can be easily understood if we replace Eq.(4.15) by Eq.(5.42) with the right hand side given by $\tilde{\omega}(\mathbf{r})=k\oint\limits_{C}\mathbf{v}(\sigma )\delta (\mathbf{r}-\mathbf{r}(\sigma ))$, in accord with Eq.(5.38).} under the integral on the left hand side of Eq.(5.43) is unimportant, since we can always arrange our windings in such a way that $r\ll \delta $ for any preassigned nonzero $\delta $. The above analysis can be extended to the case of colloidal suspensions in view of the results of Sections 4.2 and 4.4, implying that in both superconductivity and colloidal suspensions the phase transition is topological in nature (e.g. in the colloidal case Eq.(4.39) is a topological invariant, to be considered in the next subsection). Evidently, such a conclusion cannot be reached by perturbatively calculating the Green's function in Eq.(4.37). In Section 5.1 we discussed similarities and differences between superconductors and colloidal suspensions. It is appropriate now to add a few more details to the emerging picture. In the case of superconductivity the correctness of the topological picture depends upon the existence of nontrivial solutions of Eq.(5.42). These are possible only when the parameter $n_{s}$ is nonzero. When it becomes zero the above picture breaks down. In the case of suspensions the role of the parameter $\varkappa ^{-1}$ is played by the density-dependent parameter $e$. This can be easily seen if we take into account that dimensional analysis requires us to replace Eq.(5.38) by
\begin{equation}
\mathbf{A(r)=}\frac{D_{0}}{4\pi }\oint\limits_{C}\frac{1}{\left\vert \mathbf{r}-\mathbf{r}(\sigma )\right\vert }\mathbf{v}(\sigma )d\sigma, \tag{5.44}
\end{equation}
so that by employing Eq.(5.37b) we obtain
\begin{equation}
\frac{e}{4\pi }\oint\limits_{C_{1}}\oint\limits_{C_{2}}d\sigma d\sigma ^{\prime }\frac{\mathbf{v}(\sigma )\cdot \mathbf{v}(\sigma ^{\prime })}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert }=n=lk(1,2), \tag{5.45}
\end{equation}
as expected.

\subsection{Bose-Einstein-type transition in a system of linked loops}

In the Introduction we noted that Chorin, Ref.[22], conjectured that the superfluid-to-normal transition in $^{4}$He is associated with vortices causing a sharp increase in viscosity. In this subsection we would like to demonstrate that, at least for colloidal suspensions, his conjecture is correct: the sharp increase in viscosity is associated with a lambda-type transition. Instead of treating this problem in full generality, i.e. for the nonideal Bose gas, we simplify matters and consider a Bose-condensation-type transition typical of the ideal Bose gas. It should be noted though that our simplified treatment is motivated only by the fact that it happens to be sufficient for comparison with experimental data. In other cases, such a restriction can be lifted. To develop such a theory we use the information obtained in the previous subsection, augmented by some additional facts needed for the completion of our task. In particular, we are interested in the expression for the kinetic energy. Up to a constant it is given by
\begin{equation}
E\dot{=}\frac{1}{2}\int d^{3}\mathbf{r(\nabla \times A)}^{2} \tag{5.46}
\end{equation}
and is manifestly nonnegative.
Using known facts from vector analysis this expression can be rewritten as follows:
\begin{equation}
E\dot{=}\frac{1}{2}\int d^{3}\mathbf{r(\nabla \times A)}\cdot \mathbf{v=}\frac{1}{2}\int d^{3}\mathbf{r[A\cdot \tilde{\omega}+}div\mathbf{[A\times v]]=}\frac{1}{2}\int d^{3}\mathbf{rA\cdot \tilde{\omega}.} \tag{5.47}
\end{equation}
In view of Eq.(5.38) we can rewrite this result as
\begin{equation}
E\dot{=}\frac{k^{2}}{2}\oint\limits_{C_{1}}\oint\limits_{C_{2}}d\sigma d\sigma ^{\prime }\frac{\mathbf{v}(\sigma )\cdot \mathbf{v}(\sigma ^{\prime })}{\left\vert \mathbf{r(\sigma )}-\mathbf{r}(\sigma ^{\prime })\right\vert }, \tag{5.48}
\end{equation}
to be compared with Eq.(5.45). Using such a comparison we arrive at an apparent contradiction: while the expression for $E$ must be nonnegative, the linking number $lk(1,2)$ can be either positive or negative. If we make the replacement $\mathbf{r}\rightarrow -\mathbf{r}$ in Eq.(5.48) nothing changes, but if we do the same for $lk(1,2)$ it changes sign. Thus, if we want to use $lk(1,2)$ in Eq.(5.48) we have to use $\left\vert lk(1,2)\right\vert $. This number was introduced by Arnold and is known in the literature as the \textsl{entanglement complexity}\footnote{More details about this number and its many applications can be found in our works, Refs.[47,48].}. Evidently, in view of this remark, $n$ in Eq.(5.45) can only be nonnegative. If we require our system to be invariant with respect to rotations of the coordinate frame, Eq.(4.39) should be rewritten according to the procedure developed in our work, Ref.[49]. This means that we introduce a set of linking numbers $n_{1},n_{2},...,n_{i},...$ so that for a given $n$\footnote{E.g. see Eq.(4.4).} the set of $\frac{1}{2}n(n-1)\equiv N$ possible linking numbers can be characterized by the total linking number $L$, i.e. we have
\begin{equation}
\sum\limits_{i=1}^{N}n_{i}=L. \tag{5.49}
\end{equation}
This result can be rewritten alternatively as follows. Let $C_{1}$ be the number of links with linking number 1, $C_{2}$ the number of links with linking number 2, and so on. Then we obtain
\begin{equation}
\sum\limits_{i=1}^{L}iC_{i}=L. \tag{5.50}
\end{equation}
Furthermore, we also must require
\begin{equation}
\sum\limits_{i=1}^{L}C_{i}=N. \tag{5.51}
\end{equation}
Define the Stirling-type number $\tilde{S}(L,N)$ via the following generating function\footnote{The true Stirling number of the first kind $S(L,N)$ is defined as follows: $S(L,N):=(-1)^{L-N}\tilde{S}(L,N).$}:
\begin{equation}
\sum\limits_{N=0}^{L}\tilde{S}(L,N)x^{N}=x(x+1)\cdot \cdot \cdot (x+L-1). \tag{5.52}
\end{equation}
Setting $x=1$ in this definition gives $\sum\nolimits_{N}\tilde{S}(L,N)=L!$, which allows us to introduce the probability $p(L,N)=\tilde{S}(L,N)/L!$ The number $\tilde{S}(L,N)$ can be easily obtained\footnote{E.g. see Ref.[49].}: the contribution of a given set $\{C_{i}\}$ (subject to Eqs.(5.50) and (5.51)) is
\begin{equation}
\frac{L!}{\prod\limits_{i}i^{C_{i}}C_{i}!}, \tag{5.53}
\end{equation}
and $\tilde{S}(L,N)$ is the sum of such contributions over all admissible sets $\{C_{i}\}$. With these results we are now ready to return to Eq.(4.39), in which we make a rescaling $\mathbf{r}(\tau )\rightarrow R_{0}\mathbf{\tilde{r}}(\tau )$, with $\mathbf{\tilde{r}}(\tau )$ being dimensionless. After this, using Eq.(5.45) we can rewrite Eq.(4.39) as follows:
\begin{equation}
<W(L)>_{T}=\exp (-\frac{3\eta _{0}}{\eta }L). \tag{5.54}
\end{equation}
Evidently, the numerical factor of 3 in the exponent is non-essential and can be safely dropped upon rescaling of $L$.
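The combinatorics of Eqs.(5.50)-(5.53) is easy to verify by brute force for small $L$ (a sketch; $L=5$ is an arbitrary illustrative choice):
\begin{verbatim}
# Sketch: S~(L,N) counts permutations of L letters with N cycles;
# compare with the coefficients of x(x+1)...(x+L-1), Eq. (5.52).
from itertools import permutations
from collections import Counter
from sympy import symbols, expand, Poly

def cycle_count(perm):
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

L = 5
counts = Counter(cycle_count(p) for p in permutations(range(L)))

x = symbols('x')
prod = 1
for k in range(L):
    prod *= (x + k)
coeffs = Poly(expand(prod), x).all_coeffs()[::-1]  # coeff of x^N at index N
for N in range(1, L + 1):
    assert counts[N] == coeffs[N]
print(dict(counts))   # {1: 24, 2: 50, 3: 35, 4: 10, 5: 1}; sum = 5! = 120
\end{verbatim}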
To use the expression (5.54) we combine it with Eq.(5.34), in which we have to make some adjustments following Feynman, Ref.[50], pages 62-64. On these pages Feynman discusses a partition function for the ideal Bose gas written in the path integral form. We would like to rewrite his result in the notation of our paper. For this purpose we use Eq.(4.30), in which the path integral is written for a loop and is in discrete form. We obtain
\begin{eqnarray}
h(\nu ) &=&\left( \frac{1}{4\pi D_{0}}\right) ^{\dfrac{3\nu }{2}}\int \prod\limits_{i=1}^{\nu }d^{3}\mathbf{r}_{i}\exp \{-\frac{1}{4D_{0}}[(\mathbf{r}_{1}-\mathbf{r}_{2})^{2}+(\mathbf{r}_{2}-\mathbf{r}_{3})^{2}+...+(\mathbf{r}_{\nu -1}-\mathbf{r}_{\nu })^{2}+(\mathbf{r}_{\nu }-\mathbf{r}_{1})^{2}]\} \nonumber \\
&=&V\left( \frac{1}{4\pi \nu D_{0}}\right) ^{\dfrac{3}{2}}, \TCItag{5.55}
\end{eqnarray}
with $V=\int d^{3}\mathbf{r}_{1}.$ Under such circumstances the Brownian ring is made out of $\nu $ links (segments), so that we can identify its length with $\nu .$ In the present case each such ring is linked with another ring, thus forming a link with a linking number $iC_{i}$, $i=0,1,2,...$ Since the linking number is independent of the lengths of the rings from which it is made, we can take advantage of this fact by identifying the index $i$ with $\nu .$ By combining Eqs.(5.50)-(5.55) and repeating the same steps as given in Feynman's lectures, we assemble the following dimensionless grand partition function $\mathcal{F}$:
\begin{eqnarray}
e^{-\mathcal{F}} &=&\sum\limits_{C_{1},...,C_{q},....}\prod\limits_{\nu }\frac{h(\nu )^{C_{\nu }}}{C_{\nu }!\nu ^{C_{\nu }}}\exp (-\frac{\eta _{0}}{\eta }\nu C_{\nu })=\sum\limits_{C_{1},...,C_{q},....}\prod\limits_{\nu }\frac{1}{C_{\nu }!}(h(\nu )\frac{z^{\nu }}{\nu })^{C_{\nu }} \nonumber \\
&=&\prod\limits_{\nu }\sum\limits_{C_{\nu }=0}^{\infty }\frac{1}{C_{\nu }!}(h(\nu )\frac{z^{\nu }}{\nu })^{C_{\nu }}=\exp (\sum\limits_{\nu }h(\nu )\frac{z^{\nu }}{\nu }). \TCItag{5.56}
\end{eqnarray}
Here the "chemical" potential (fugacity) $z=\exp (-\frac{\eta _{0}}{\eta }).$ Taking the logarithm of both sides of the above equation we obtain the partition function for the ideal Bose gas. Written per unit volume it reads
\begin{equation}
\mathcal{F=-}\left( \frac{1}{4\pi D_{0}}\right) ^{\dfrac{3}{2}}\zeta _{5/2}(z). \tag{5.57}
\end{equation}
In this expression $\zeta _{\alpha }(z)$ is the polylogarithm
\begin{equation}
\zeta _{\alpha }(z)=\sum\limits_{n=1}^{\infty }\frac{z^{n}}{n^{\alpha }}, \tag{5.58}
\end{equation}
which reduces to Riemann's zeta function at $z=1$. This function is well defined for $z<1$, i.e. for $\dfrac{\eta }{\eta _{0}}<\infty $, and is divergent for $z>1,$ thus indicating a Bose condensation whose onset is determined by the value $z=1$ (i.e. $\eta =\infty $), for which $\zeta _{5/2}(1)=1.341.$ If we follow standard treatments, then we obtain for the critical density $\rho _{c}$
\begin{equation}
\rho _{c}=\left( \frac{1}{4\pi D_{0}}\right) ^{\dfrac{3}{2}}2.612. \tag{5.59}
\end{equation}
In view of Eq.(5.55), the obtained result for the density has the correct dimensionality. From here the critical volume fraction is $\varphi _{c}=\rho _{c}\frac{4}{3}\pi R_{0}^{3}$. The number $2.612$ is just the value of $\zeta _{3/2}(1).$ This means that we can write in the general case
\begin{equation}
\rho (z)=\left( \frac{1}{4\pi D_{0}}\right) ^{\dfrac{3}{2}}\zeta _{3/2}(z), \tag{5.60}
\end{equation}
thus giving us the equation
\begin{equation}
\frac{\rho _{c}-\rho }{\rho _{c}}=1-\frac{\zeta _{3/2}(z)}{\zeta _{3/2}(1)}. \tag{5.61}
\end{equation}
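The numerical constants quoted here, as well as those in the expansion Eq.(5.62) below, are easy to reproduce (a sketch using mpmath; note that the coefficient $3.545$ of the leading singular term is $2\sqrt{\pi }=3.5449...$):
\begin{verbatim}
# Sketch: values of zeta_alpha(z) = sum_n z^n/n^alpha (polylogarithm)
# entering Eqs. (5.57)-(5.61), and its leading behaviour near z = 1.
from mpmath import polylog, zeta, exp, sqrt, pi, mpf

print(zeta(mpf(5)/2))     # 1.34148... , cf. zeta_{5/2}(1) = 1.341
print(zeta(mpf(3)/2))     # 2.61238... , cf. zeta_{3/2}(1) = 2.612
print(2*sqrt(pi))         # 3.54490... , cf. the coefficient 3.545

alpha = mpf('1e-4')       # small alpha = -ln z
lhs = polylog(mpf(3)/2, exp(-alpha))
rhs = zeta(mpf(3)/2) - 2*sqrt(pi*alpha)
print(lhs, rhs)           # agree to O(alpha)
\end{verbatim}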
In the book by London, Ref.[51], we found the following expansion for $\zeta _{3/2}(z)$ in the vicinity of $z=1$ $(z<1)$:
\begin{equation}
\zeta _{3/2}(z)=-3.545\alpha ^{\frac{1}{2}}+2.612+1.460\alpha -0.104\alpha ^{2}+...., \tag{5.62}
\end{equation}
where $\alpha =-\ln z.$ Use of this result in Eq.(5.61) produces
\begin{equation}
\frac{\eta }{\eta _{0}}=\left( \frac{1}{3.545}\right) ^{2}(1-\frac{\rho }{\rho _{c}})^{-2}, \tag{5.63}
\end{equation}
in accord with the scaling predictions by Brady, Ref.[19], and Bicerano \textit{et al.}, Ref.[20]. It should be noted though that, in view of Eq.(5.54), the actual value of the constant prefactor in Eq.(5.63) is quite arbitrary and can be adjusted with the help of experimental data. For instance, by making this prefactor of order unity, Bicerano \textit{et al.} obtained a very good agreement with experimental data in the whole range of concentrations, e.g. see Ref.[20], Fig.4.

\section{Discussion and outlook}

\subsection{General comments}

With the exception of the work by De Gennes [52] on the phase transition in smectics A, the superconductivity and superfluidity phenomena are typically associated with the domain of low temperature condensed matter physics\footnote{Lately, however, these ideas have begun to be popular in color superconductivity, which deals with quark matter [53].}. This fact remains true even taking into account cuprate superconductors, Ref.[54]. The results obtained in this work cause us to look at these phenomena differently. For instance, the previously mentioned relation $\tilde{\omega}(\mathbf{r})=k\oint\limits_{C}\mathbf{v}(\sigma )\delta (\mathbf{r}-\mathbf{r}(\sigma ))$, used in the work by Lund and Regge, Ref.[45], for fluids, coincides with our Eq.(4.20b) for colloids. The work of Lund and Regge is based on previous work by Rasetti and Regge, Ref.[55], on superfluid He and, therefore, their results are apparently valid only in the domain of low temperatures. This conclusion is, however, incorrect, as shown in a series of papers by Berdichevsky, Refs.[56,57]. Any ideal (that is, Euler-type) incompressible fluid can be treated this way. Furthermore, as the results by Chorin, Ref.[22], indicate, the same methods should be applicable to the description of the onset of fluid/gas turbulence. In our work the fluid is manifestly nonideal. Nevertheless, in the long time (zero frequency) limit it can still be treated as if it were ideal. The most spectacular departure from the traditional view of the results by Lund and Regge was recently made in a series of papers by Schief and collaborators, Refs.[58,59]. The latest results elaborating on his work can be found in Ref.[60]. Schief demonstrated that the results of Lund and Regge work well in the case of magnetohydrodynamics, that is, ultimately, in the plasma installations designed for controlled thermonuclear synthesis. The basic underlying physics of all these phenomena can be summarized as follows. In every system which supports knotted structures, a decoupling of the topological properties from the conformational (statistical) properties of the flux tubes from which these knots/links are made should be possible. Since this statement is not restricted to the simple Abelian C-S field theory describing the knots/links existing in G-L theory, in full generality the theory should include the G-L theory as a special case (as demonstrated above).
Accordingly, the minimization of the corresponding truncated G-L functional may or may not lead to London-type equations. We would like to illustrate these general statements by specific examples. This is accomplished below.

\subsection{Helicity and force-free fields imply knotting and linking but not necessarily superconductivity via the London mechanism}

The concept of helicity has its origin in the theory of the neutrino, Ref.[61]. The expression $\mathbf{\sigma }\cdot \mathbf{p/}\left\vert \mathbf{p}\right\vert $ is called the \textsl{helicity}. Here $\mathbf{\sigma }\cdot \mathbf{p=\sigma }_{x}p_{x}+\mathbf{\sigma }_{y}p_{y}+\mathbf{\sigma }_{z}p_{z}$, and $p_{i}$ and $\sigma _{i}$, $i=1,2,3,$ are, respectively, the components of the momentum and the Pauli matrices. The eigenvalue equation
\begin{equation}
\left[ \mathbf{\sigma }\cdot \mathbf{p/}\left\vert \mathbf{p}\right\vert \right] \Psi =\lambda \Psi \tag{6.1}
\end{equation}
produces eigenvalues $\lambda $ which can only be $\pm 1.$ Moffatt, Ref.[62], designed a classical analog of the helicity operator. He proposed to use the product $\mathbf{v}\cdot \mathbf{\nabla }\times \mathbf{v\equiv v}\cdot \mathbf{\tilde{\omega}}$ for this classical analog. In it, as before, e.g. see Eq.(4.14), the vorticity field $\mathbf{\tilde{\omega}}$ is used. Moffatt constructed an integral (over the volume $M$)
\begin{equation}
I=\int\limits_{M}\mathbf{v}\cdot \mathbf{\tilde{\omega}}dV \tag{6.2}
\end{equation}
along with two other integrals: the kinematic kinetic energy
\begin{equation}
E\equiv \frac{2T}{\rho }=\int\limits_{M}\mathbf{v}^{2}dV \tag{6.3}
\end{equation}
and the rotational kinetic energy
\begin{equation}
\Omega =\int\limits_{M}\mathbf{\tilde{\omega}}^{2}dV. \tag{6.4}
\end{equation}
Then, he used the Schwarz inequality
\begin{equation}
I^{2}\leq E\Omega \text{ \ or }\Omega \geq \frac{I^{2}}{E} \tag{6.5}
\end{equation}
in order to demonstrate that the equality is achieved only if $\mathbf{\tilde{\omega}=\alpha v}$, where $\alpha $ is a constant. Since this requirement coincides exactly with our Eq.(4.20b), it is of interest to study this condition further. In particular, under this condition we obtain $I=\alpha E$, which would coincide with our Eq.(5.43) (see also Eq.(5.48)) should $I$ be associated with the linking number. Fortunately, this is indeed the case. The proof was given by Arnold and is outlined in Ref.[63], pages 141-146. In view of its physical significance, we would like to discuss it in some detail. Before doing so, we notice that the condition $\mathbf{\tilde{\omega}=\alpha v}$ is known in the literature as the \textsl{force-free condition} for the following reason. In electrodynamics, the motion of an electron in a magnetic field is given by (in the system of units in which $m=c=e=1$)
\begin{equation}
\frac{d\mathbf{v}}{dt}=\mathbf{v}\times \mathbf{B}, \tag{6.6a}
\end{equation}
while the use of Maxwell's equation, our Eq.(4.10), produces as well
\begin{equation}
\mathbf{v}=\mathbf{\nabla }\times \mathbf{B}=\alpha \mathbf{B}. \tag{6.6b}
\end{equation}
Using the previously established equivalence $\mathbf{v}\rightleftarrows \mathbf{B}$ and substituting Eq.(6.6b) into Eq.(6.6a) explains why the force-free condition is given by $\mathbf{\tilde{\omega}=\alpha v.}$ This equation can be looked upon as an eigenvalue equation for the operator $\nabla \times (\cdot \cdot \cdot ).$ From this point of view the force-free equation is totally analogous to its quantum counterpart, Eq.(6.1). Details can be found in Ref.[64].
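As an elementary check of Eq.(6.1) (a numerical sketch with a randomly chosen momentum), the helicity operator indeed has eigenvalues $\pm 1$:
\begin{verbatim}
# Sketch: eigenvalues of sigma.p/|p| for a random p are +-1.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

p = np.random.default_rng(0).normal(size=3)
h = (p[0]*sx + p[1]*sy + p[2]*sz) / np.linalg.norm(p)
ev = np.sort(np.linalg.eigvalsh(h))
print(ev)                              # [-1.  1.]
assert np.allclose(ev, [-1.0, 1.0])
\end{verbatim}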
Going back to Arnold's proof, we note that according to Moffatt, Ref.[62], page 119,
\begin{equation}
I=\int\limits_{V}\mathbf{v}\cdot \mathbf{\tilde{\omega}}dV=\frac{1}{4\pi }\int\limits_{V(1)}\int\limits_{V(2)}\frac{\mathbf{R}_{12}\cdot \lbrack \tilde{\omega}(1)\times \tilde{\omega}(2)]}{\left\Vert \mathbf{R}_{12}\right\Vert ^{3}}dV(1)dV(2). \tag{6.7}
\end{equation}
Clearly, if, as is done by Moffatt and others in the physics literature (e.g. Lund and Regge, etc.), we assume that the vector potential $\mathbf{A}$ can be given in the form of Eq.(5.44), then $I$ indeed becomes the linking number, Eq.(4.6). If, however, we do not make such an assumption, then much more sophisticated methods are required for the proof of this result. Use of these methods is not of academic interest only, as we would like to explain now. According to Kozlov, Ref.[65], the force-free case $\mathbf{\tilde{\omega}=\alpha v}$ belongs to the category of so-called vortex motions in the \textsl{weak sense}. There are many other vortex motions for which $\mathbf{v}\times \nabla \times \mathbf{v\neq 0.}$ These are vortex motions in the \textsl{strong sense}. Evidently, any relation with superconductivity or superfluidity (which is actually only hinted at at this stage, in view of the results obtained in the previous sections) is lost in this (strong) case. But even with the vorticity present in the weak sense this connection is not immediately clear. This is so because of the multitude of solutions of the force-free equation as discussed, for example, in Refs.[66,67]. We would like to discuss only those solutions which are suitable for use in Arnold's theorem. These solutions can be obtained as follows. Taking the curl of the equation
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}=\alpha \mathbf{B,} \tag{6.8}
\end{equation}
provided that $\mathbf{\nabla }\cdot \mathbf{B}=0,$ produces
\begin{equation}
(\nabla ^{2}+\alpha ^{2})\mathbf{B}=0, \tag{6.9}
\end{equation}
to be compared with our result, Eq.(4.52a). Unlike our case, which is motivated by analogies with superconductivity and superfluidity, in the present case there are many solutions of this equation. We choose only the solution which illustrates the theorem by Arnold. It is given by $\mathbf{v}=(A\sin z+C\cos y,\,B\sin x+A\cos z,\,C\sin y+B\cos x),$ where $ABC\neq 0$ and $A,B,C\in \mathbf{R}.$ This solution is obtained for $\alpha =1.$ Following Arnold, we introduce the asymptotic linking number $\Lambda (x_{1},x_{2})$ via
\begin{equation}
\Lambda (x_{1},x_{2})=\lim_{T_{1},T_{2}\rightarrow \infty }\frac{1}{4\pi T_{1}T_{2}}\int\limits_{0}^{T_{1}}\int\limits_{0}^{T_{2}}dt_{1}dt_{2}\frac{(\mathbf{\dot{x}}_{1}(t_{1})\times \mathbf{\dot{x}}_{2}(t_{2}))\cdot (\mathbf{x}_{1}(t_{1})-\mathbf{x}_{2}(t_{2}))}{\left\Vert \mathbf{x}_{1}(t_{1})-\mathbf{x}_{2}(t_{2})\right\Vert ^{3}}. \tag{6.10a}
\end{equation}
The theorem proven by Arnold states that if the motion described by the trajectories $\mathbf{x}_{1}(t_{1})$ and $\mathbf{x}_{2}(t_{2})$ is ergodic, then
\begin{equation}
\frac{1}{4\pi }\int\limits_{V(1)}\int\limits_{V(2)}\frac{\mathbf{R}_{12}\cdot \lbrack \tilde{\omega}(1)\times \tilde{\omega}(2)]}{\left\Vert \mathbf{R}_{12}\right\Vert ^{3}}dV(1)dV(2)=\frac{1}{V^{2}}\int\limits_{V(1)}\int\limits_{V(2)}\Lambda (x_{1},x_{2})dV(1)dV(2)=lk(1,2). \tag{6.10b}
\end{equation}
That is, the function $\Lambda (x_{1},x_{2})$ on ergodic trajectories is almost everywhere constant. This theorem as such does not imply that this constant is an integer.
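As a quick symbolic check (a sketch), the field quoted above is indeed force-free (Beltrami) with $\alpha =1$, i.e. it satisfies Eq.(6.8):
\begin{verbatim}
# Sketch: verify curl v = v for
# v = (A sin z + C cos y, B sin x + A cos z, C sin y + B cos x).
from sympy import symbols, sin, cos, simplify, Matrix, diff

x, y, z, A, B, C = symbols('x y z A B C')
v = Matrix([A*sin(z) + C*cos(y),
            B*sin(x) + A*cos(z),
            C*sin(y) + B*cos(x)])
curl = Matrix([diff(v[2], y) - diff(v[1], z),
               diff(v[0], z) - diff(v[2], x),
               diff(v[1], x) - diff(v[0], y)])
assert simplify(curl - v) == Matrix([0, 0, 0])
print("curl v = v: force-free with alpha = 1")
\end{verbatim}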
For us it is important to realize that \textsl{both} Eq.(4.52a) and Eq.(6.9) can produce trajectories saturating the Schwarz inequality, thus leading to the condition $I=\alpha E$, with $I$ being either a linking number (in the case of suspensions) or a self-linking number (depending upon the problem in question) or a combination of both. Because both Eq.(4.52a) and Eq.(6.9) cause the formation of links, the choice between them should be made on a case-by-case basis. In particular, the existence of the Meissner effect in superconductors leaves us with no freedom of choice between these two equations. In the case of magnetohydrodynamics/plasma physics the situation is less obvious. In the next subsection we shall argue in favour of the superconducting/superfluid choice between these equations. To our knowledge, such a choice was left unused in the plasma physics literature.

\subsection{Ideal magnetohydrodynamics and superfluidity/superconductivity}

In order to discuss the work by Schief, Ref.[58], we would like to remind our readers of some facts from the work by Lund and Regge (originally meant to describe superfluid $^{4}$He), since these facts nicely supplement those presented in the previous sections. We already mentioned that Berdichevsky adopted these results for normal fluids, including those which are turbulent. Lund and Regge assumed that the vortex has a finite thickness, so that the no-slip boundary condition, Eq.(2.27), should now be amended to account for the finite thickness. The amended equation is given by
\begin{equation}
v_{i}(t)=\frac{\partial x_{i}}{\partial t}+\frac{\partial x_{i}}{\partial \sigma }\frac{\partial \sigma }{\partial t}, \tag{6.11}
\end{equation}
where $\sigma $ parametrizes the coordinate along the vortex line. Eq.(5.38), taken from the work by Lund and Regge, then implies
\begin{equation}
\varepsilon _{ijk}\frac{\partial x_{j}}{\partial \sigma }(\frac{\partial x_{k}}{\partial t}-v_{k})=0. \tag{6.12}
\end{equation}
This equation is treated as an equation of motion by Lund and Regge, obtained with the help of the following Lagrangian:
\begin{equation}
\mathcal{L}=\frac{k\rho }{3}\int\limits_{C}\varepsilon _{ijk}x_{i}\frac{\partial x_{j}}{\partial \sigma }\frac{\partial x_{k}}{\partial t}d\sigma -\frac{\rho }{2}\int\limits_{V}\mathbf{v}^{2}d^{3}V. \tag{6.13}
\end{equation}
The zero thickness limit of the action for this Lagrangian is given by our Eq.(4.22), which upon integration over the A-field leads to the result, Eq.(4.24); the same can be done in the present case. Accordingly, by analogy with the action, Eq.(4.22), which was extended, e.g. see Eq.(4.40), the action in the present case can be extended as well, so that the final result for the action of the Nambu-Goto bosonic string interacting with an electromagnetic-type field reads (using the same signature of space-time as used in Ref.[45])
\begin{equation}
S=-m\int d\sigma d\tau \sqrt{-g}+f\int A_{\mu \nu }\frac{\partial x_{\mu }}{\partial \sigma }\frac{\partial x_{\nu }}{\partial t}d\sigma d\tau -\frac{1}{4}\int \mathbf{F}^{2}dvol, \tag{6.14}
\end{equation}
with
\begin{equation}
\sqrt{-g}=[-\left( \frac{\partial x^{\nu }}{\partial \sigma }\frac{\partial x_{\nu }}{\partial \sigma }\right) \cdot \left( \frac{\partial x^{\mu }}{\partial \tau }\frac{\partial x_{\mu }}{\partial \tau }\right) +\left( \frac{\partial x^{\nu }}{\partial \sigma }\frac{\partial x_{\nu }}{\partial \tau }\right) ^{2}]^{\frac{1}{2}} \tag{6.15}
\end{equation}
and $m$ and $f$ being some coupling constants.
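As a consistency check of Eq.(6.15) (a symbolic sketch; the flat Minkowski metric $\mathrm{diag}(-1,1,1,1)$ is an assumption made purely for illustration), the expression under the square root is minus the determinant of the induced world-sheet metric $h_{ab}=\partial _{a}x\cdot \partial _{b}x$, as it should be for a Nambu-Goto string:
\begin{verbatim}
# Sketch: -det h = (x_s . x_t)^2 - (x_s . x_s)(x_t . x_t), cf. Eq. (6.15).
from sympy import symbols, Matrix, diag, expand, simplify

eta = diag(-1, 1, 1, 1)                   # flat Minkowski metric
xs = Matrix(symbols('s0 s1 s2 s3'))       # d x / d sigma
xt = Matrix(symbols('t0 t1 t2 t3'))       # d x / d tau

dot = lambda a, b: (a.T * eta * b)[0, 0]
h = Matrix([[dot(xt, xt), dot(xt, xs)],
            [dot(xs, xt), dot(xs, xs)]])  # induced metric

lhs = expand(-h.det())
rhs = expand(dot(xs, xt)**2 - dot(xs, xs)*dot(xt, xt))
assert simplify(lhs - rhs) == 0
print("Eq. (6.15) reproduced")
\end{verbatim}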
The metric of the surface enclosing the vortex can always be brought to a diagonal form by some conformal transformation\footnote{For more details, please see Ref.[68].}. In such coordinates, variation of the action $S$ produces the following set of equations:
\begin{equation}
m(\frac{\partial ^{2}}{\partial \tau ^{2}}-\frac{\partial ^{2}}{\partial \sigma ^{2}})x_{\mu }=f\varepsilon ^{\mu \nu \lambda \rho }F_{\nu }\frac{\partial x_{\rho }}{\partial \tau }\frac{\partial x_{\lambda }}{\partial \sigma } \tag{6.16a}
\end{equation}
and
\begin{equation}
\partial ^{\mu }\partial _{\mu }A^{\alpha \beta }=-2f\int d\sigma d\tau (\frac{\partial x^{\alpha }}{\partial \sigma }\frac{\partial x^{\beta }}{\partial \tau }-\frac{\partial x^{\alpha }}{\partial \tau }\frac{\partial x^{\beta }}{\partial \sigma })\delta ^{\left( 4\right) }(x(\sigma ,\tau )-y), \tag{6.16b}
\end{equation}
provided that $\partial _{\mu }A^{\mu \nu }=0.$ Since the last equation is just the wave equation with an external source, the equation of motion for the vortex is Eq.(6.16a). In such a form it was obtained in Ref.[58], describing vortices in ideal magnetohydrodynamics. Under some physically plausible conditions it was reduced in the same reference to the equation of motion for the one-dimensional Heisenberg ferromagnet. This result will be discussed further below from a somewhat different perspective. It should be noted though that Eq.(6.16a) emerges in Ref.[58] under somewhat broader conditions than those allowed by the force-free equation. In view of the content of the next subsection, we would like to reproduce this more general case now. For this purpose, we recall that Euler's equation for fluids can be written in the form, Ref.[29],
\begin{equation}
\frac{\partial }{\partial t}\mathbf{\tilde{\omega}=\nabla \times (v\times \tilde{\omega}).} \tag{6.17}
\end{equation}
In the case when $\mathbf{\tilde{\omega}}$ is time-independent, it is sufficient to require only that
\begin{equation}
\mathbf{v\times \tilde{\omega}=\nabla }\Phi, \tag{6.18}
\end{equation}
with $\Phi $ being some (potential) scalar function, since the curl of a gradient vanishes. In the case of hydrodynamics the equation $\Phi =const$ is the famous Bernoulli equation. Thus, the force-free condition in this case is equivalent to the Bernoulli condition/equation. In magnetohydrodynamics there is an analog of the Bernoulli equation, as explained in Ref.[69]. So, again, the equation $\Phi =const$ is equivalent to the force-free condition. In the case of magnetohydrodynamics the vortex Eq.(6.16a) is obtained under the condition $\Phi =const.$ Since Eq.(6.16a) describes the vortex filament, the helicity integral, Eq.(6.7), describes either linking, self-linking or both. In the case of self-linking it is known, e.g. see Refs.[48,63], that $lk(1,1)=Tw+Wr.$ Analytically, the writhe $Wr$ term is expressible as in Eq.(4.6) but with $C_{1}$ and $C_{2}$ now representing the same closed curve. The need for $Tw$ disappears if the closed curve can be considered to have zero thickness. More accurately, the closed curve should be a ribbon in order to have a nonzero $Tw$. This is explained in Ref.[63]. With the exception of Appendix C, in this work we have ignored such complications.
\subsection{Classical mechanics in the vortex formalism, inertial dynamics of nonrigid bodies and G-L theory of high temperature superconductors}

Euler's Eq.(6.17) can be rewritten in the equivalent form
\begin{equation}
\frac{\partial }{\partial t}\mathbf{v}=\mathbf{v}\times \mathbf{\tilde{\omega}-\nabla }\Phi . \tag{6.19}
\end{equation}
Following Kozlov, Ref.[65], in the case of Hamiltonian mechanics it is convenient to consider a very similar (Lamb) equation given by
\begin{equation}
\frac{\partial }{\partial t}\mathbf{u+}\left( \mathbf{\nabla \times u}\right) \times \mathbf{v=-\nabla }\Phi , \tag{6.20}
\end{equation}
in which the vector $\mathbf{u}$ is such that $\nabla \cdot \mathbf{u}=0.$ It can be demonstrated that Hamiltonian dynamics is isomorphic to the dynamics described by the above Lamb equation, provided that we make the following identifications. Let $\Sigma _{t}^{n}$ be a manifold in the phase space $P=T^{\ast }M$ admitting a single-valued projection onto the configuration space $M$. In canonical coordinates $x$ and $y$ this manifold is defined by the equation
\begin{equation}
\mathbf{y}=\mathbf{u}(\mathbf{x},t). \tag{6.21}
\end{equation}
It is not difficult to demonstrate that the manifold $\Sigma _{t}^{n}$ is an invariant manifold for a canonical Hamiltonian $H(\mathbf{x},\mathbf{y},t)$ if and only if the field $\mathbf{u}$ satisfies Lamb's Eq.(6.20), and that $\Phi (\mathbf{x},t)=H(\mathbf{x},\mathbf{y}(\mathbf{x},t),t)$ is a function on $M$ parametrized by the time $t$ in such a way that
\begin{equation}
\mathbf{v}=\frac{\partial H}{\partial \mathbf{y}}\mid _{\mathbf{y}=\mathbf{u}} \tag{6.22}
\end{equation}
and
\begin{equation}
\mathbf{\dot{y}=-}\frac{\partial H}{\partial \mathbf{x}}\mid _{\mathbf{y}=\mathbf{u}}=\frac{\partial \mathbf{u}}{\partial t}+\frac{\partial \mathbf{u}}{\partial \mathbf{x}}\cdot \mathbf{v.} \tag{6.23}
\end{equation}
The relevance of these results to our discussion can be seen when Eq.(6.23) is compared with Eq.(6.11) of Lund and Regge. This comparison shows their near equivalence. In view of this, we would like to exploit this equivalence further by employing it for the analysis of the truncated G-L functional analogous to our Eq.(5.3), typically used for the phenomenological description of high temperature superconductors [54]. In this case the functional $\mathcal{F}[\mathbf{A},\varphi ]$ should be replaced by
\begin{equation}
\mathcal{\tilde{F}[}\mathbf{A},\varphi ]=\int d^{3}r\{\frac{\left( \mathbf{\nabla }\times \mathbf{A}\right) ^{2}}{8\pi }+\frac{\hbar ^{2}}{4m_{\perp }}\left\vert (\mathbf{\nabla }_{\perp }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A}_{\perp }\mathbf{)}\varphi \right\vert ^{2}+\frac{\hbar ^{2}}{4m_{\parallel }}\left\vert (\mathbf{\nabla }_{\parallel }-\frac{2i\tilde{e}}{\hbar c}\mathbf{A}_{\parallel }\mathbf{)}\varphi \right\vert ^{2}\}, \tag{6.24}
\end{equation}
with the $\perp $ components lying in the x-y (cuprate) plane and the $\parallel $ components along the z-axis perpendicular to it.
By varying this functional with respect to $\mathbf{A}_{\perp }$ and $\mathbf{A}_{\parallel }$ separately we obtain, respectively, the following components of Maxwell's equation:
\begin{equation}
\mathbf{\nabla }\times \mathbf{B}_{i}=\frac{4\pi }{c}\mathbf{j}_{i}\text{ \ }(i=\perp \text{ and }\parallel ), \tag{6.25}
\end{equation}
where
\begin{equation}
\mathbf{j}_{\perp }=-\frac{ie\hbar }{2m_{\perp }}(\varphi ^{\ast }\mathbf{\nabla }_{\perp }\varphi -\varphi \mathbf{\nabla }_{\perp }\varphi ^{\ast })-\frac{2\tilde{e}^{2}}{m_{\perp }c}\left\vert \varphi \right\vert ^{2}\mathbf{A}_{\perp }\quad \text{and}\quad \mathbf{j}_{\parallel }=-\frac{ie\hbar }{2m_{\parallel }}(\varphi ^{\ast }\frac{d}{dz}\varphi -\varphi \frac{d}{dz}\varphi ^{\ast })-\frac{2\tilde{e}^{2}}{m_{\parallel }c}\left\vert \varphi \right\vert ^{2}\mathbf{A}_{\parallel }. \tag{6.26}
\end{equation}
From here we obtain the phenomenological London-type equations
\begin{equation}
\mathbf{\nabla }\times \mathbf{j}_{\perp }\mathbf{=-}\dfrac{en_{s}}{m_{\perp }c}\mathbf{B}_{\perp }\text{ \ and \ }\mathbf{\nabla }\times \mathbf{j}_{\parallel }\mathbf{=-}\dfrac{en_{s}}{m_{\parallel }c}\mathbf{B}_{\parallel }. \tag{6.27}
\end{equation}
By combining Eqs.(6.25) and (6.27) and using the results of our Sections 4.2 and 4.4 we can rewrite these equations in the following suggestive (London-type) form:
\begin{equation}
\mathbf{\tilde{\omega}}_{\perp }=e_{\perp }\mathbf{v}_{\perp }\text{ \ and \ }\mathbf{\tilde{\omega}}_{\parallel }=e_{\parallel }\mathbf{v}_{\parallel }. \tag{6.28}
\end{equation}
This form allows us to make a connection with the inertial dynamics of a nonrigid body. Following Kozlov, Ref.[65], we consider the motion of a nonrigid body in which particles can move relative to each other due to internal forces. Let the inertia axes of the body be the axes of the moving frame. Let $\mathbf{K}$ be the angular momentum of the body relative to a fixed point and $\mathbf{\omega }$ the angular velocity of the moving trihedron, while the inertia matrix $\mathbf{I}$ is diag$(I_{\perp },I_{\perp },I_{\parallel })$\footnote{For the sake of comparison with superconductors, we deliberately choose the matrix in this form.}. The angular momentum and the angular velocity are related by
\begin{equation}
\mathbf{K}=\mathbf{I\omega }+\mathbf{\lambda ,} \tag{6.29}
\end{equation}
where $\mathbf{\lambda =}(\lambda _{\perp },\lambda _{\perp },\lambda _{\parallel })$ is the gyroscopic torque originating from the motion of particles inside the body. From here we obtain the Euler equation
\begin{equation}
\mathbf{\dot{K}}+\mathbf{\omega }\times \mathbf{K=}0, \tag{6.30}
\end{equation}
which is a simple consequence of Eq.(6.29) and of the conservation of the angular momentum in the fixed frame. In view of Eqs.(4.45) and (4.48) we can identify Eqs.(6.28) with (6.29), thus formally making Eq.(6.29) of London type. The hydrodynamic analogy can in fact be extended, so that a hydrodynamic-looking Lamb-type equation can be easily obtained and analyzed. Details are given in Ref.[65], page 148.

\subsection{Dirac monopoles, dual Meissner effect, Abelian projection for QCD and string models}

At this point our readers may have already noticed the following. 1. In our derivation of Eq.(5.63) we made screening effects seemingly disappear, while the title of our work involves screening. 2. In Eq.(6.14) we introduced the Nambu-Goto string normally used in hadron physics, associated with non-Abelian Yang-Mills (Y-M) gauge fields.
Quantum chromodynamics (QCD) of hadrons and mesons is definitely not the same thing as the scalar electrodynamics (that is, the G-L model) discussed in our work. 3. Variation of the action $S$ in Eq.(6.14) leads to the string equation of motion, Eq.(6.16a), which under some conditions reduces to the equation of motion for the Heisenberg (anti)ferromagnetic chain, which indeed describes the motion of the vortex filaments [59]. From this reference it follows that such an equation of motion can, in principle, be obtained quite independently of the Nambu-Goto string, QCD, etc. In this subsection we demonstrate that the above loose ends are in fact indicative of the very deep underlying mathematics and physics needed for a unified description of all of these phenomena.

The formalism developed thus far in this work suffers from a kind of asymmetry. On the one hand, we started with a solution of hard spheres and then we noticed that these hard spheres in solution act as currents (if one is using the magnetic analogy). The famous Biot-Savart law in magnetostatics causes two currents to be entangled with each other, thus creating the Gauss linking number, Eq.(4.8). Thus, it appears that in solution two particles (currents) are always linked (entangled) with each other. That this is indeed the case was noticed long ago, as mentioned in the Introduction, e.g. see Ref.[11]. We can treat the vortices causing such linkages as independent objects. This is reflected in the fact that we introduced the vorticity $\mathbf{\tilde{\omega}}(\mathbf{r})$ as $\mathbf{\tilde{\omega}}=k\oint\limits_{C}d\sigma \mathbf{v}(\sigma )\delta (\mathbf{r}-\mathbf{r}(\sigma ))$, e.g. read the comments after Eq.(5.38). In view of our major equation $\mathbf{\tilde{\omega}}(\mathbf{r})=e\mathbf{v,}$ we can think either about the velocity (or vorticity) of a particular hard sphere or about the velocity of a particular vortex. Because of this, it is possible to treat both particles and vortices on the same footing. In such a picture (sketched in Appendix B) one can either eliminate vortices and think about effective interactions between hard spheres, or vice versa. In this sense we can talk about the \textsl{duality} of descriptions and, hence, about the \textsl{dual Meissner effect}, for loops instead of particles\footnote{It should be noted that in the case of usual superconductors one should distinguish between the constant magnetic fields penetrating superconductors and the fields made by vortices. In the case of colloidal suspensions it is also possible to create some steady velocity current and to consider the velocity at a given point in the fluid as made of both steady and fluctuating parts.}. Before describing the emerging picture in more detail, we note the following. Consider the expression for the vorticity $\mathbf{\tilde{\omega}}=k\oint\limits_{C}d\sigma \mathbf{v}(\sigma )\delta (\mathbf{r}-\mathbf{r}(\sigma ))$ from the point of view of reparametrization invariance. In particular, since we have a closed contour, we can always choose it as going from infinity to minus infinity (it is easy topologically to wrap it onto a closed contour of any size).
For the function $y=\exp(\sigma)$ we have evidently $0<y<\infty$ when $\sigma$ varies from $-\infty$ to $\infty.$ This means that we can replace $\sigma$ by $\ln y$ in the expression for the vorticity in order to obtain
\begin{equation}
\mathbf{\tilde{\omega}}=k\int\limits_{-\infty}^{0}dz\,\mathbf{v}(z)\delta(\mathbf{r}-\mathbf{r}(z)),  \tag{6.31}
\end{equation}
which in a nutshell is the same thing as a Dirac monopole, Ref.[70], with charge strength $k$, so that the vortices can be treated as Dirac monopoles. In Appendix C we provide some facts about Dirac monopoles in relation to vortices. According to Dirac [70], the string attached to such a monopole can either go to infinity (as in the present case) or to another monopole of equal and opposite strength. In our case this means only that when two hard spheres become hydrodynamically entangled, they cannot escape the linkage they formed. This is the (topological) essence of quark confinement in QCD known as \textsl{monopole condensation}\footnote{That is, the Bose-Einstein-type condensation in view of the results of Section 5. This is explained further in Appendix C.}. But we are not dealing with QCD in this work! How, then, can we talk about QCD? The rationale for this was put forward first by Nambu, Ref.[71]\footnote{We discuss his work briefly in Appendix C.}. In his work he superimposed the G-L and Dirac monopole theories to demonstrate quark confinement for mesons (these are made of just two quarks: a quark and an antiquark). For this qualitative picture to make sense, there should be some way of reducing QCD to a G-L type theory. The feasibility of such an \textsl{Abelian reduction (projection)} was investigated first by 't Hooft in Ref.[72]. Recent numerical studies have provided unmistakable evidence supporting the idea of quark confinement through monopole condensation, Ref.s [73,74]. Theoretical advancements made since the publication of 't Hooft's paper took place along two different (opposite) directions. In one direction, recently, Faddeev and Niemi found knot-like topological solitons using a Skyrme-type nonlinear sigma model and conjectured that such a model can correctly represent QCD in the low energy limit [75,76]. That this is indeed the case was established in a series of papers by Cho [77,78] and, more recently, by Kondo, Ref.[79]. In another direction, in view of the fact that, while macroscopically the Meissner effect is triggered by the effective mass of the vector field, microscopically this mass is generated by Cooper pairs [25], it makes sense to look at the detection of the excited states of such Cooper-like pairs experimentally. The famous variational BCS treatment of superconductivity contains at its heart the gap equation responsible for the formation of Cooper pairs. The BCS treatment of superconductivity was substantially improved by Richardson, Ref.[80], who solved the microscopic model exactly. His model is known in the literature as the Richardson model. Closely related to this model is a model proposed by Gaudin. It is also exactly solvable (by Bethe ansatz methods) [81]. The Gaudin model(s) describes various properties of one-dimensional spin chains in the semiclassical limit. Energy spectra of the Gaudin and Richardson models are very similar. In particular, under some conditions they are equidistant, like those for bosonic string models\footnote{Also, for the monopole models discussed in Appendix C.}. Recently, we were able to find new models associated with Veneziano amplitudes, e.g.
see Ref.s [82,83], describing meson-meson scattering processes. In particular, we demonstrated that the Richardson-Gaudin spin chain model producing equidistant spectra can be obtained directly from Veneziano amplitudes. Since the Veneziano amplitudes describe extremely well the meson mass spectrum, and since we demonstrated that the Richardson-Gaudin model (originally used in superconductivity and nuclear physics) can be recovered from combinatorial and analytical properties of these amplitudes, this means that the Abelian reduction can be considered as confirmed (at least for mesons) not only numerically but also experimentally.

\subsection{Miscellaneous}

In Section 3.3 we demonstrated that for colloidal suspensions it is sufficient to use only the Abelian version of the Chern-Simons theory for the description of emerging entanglements. There could be other instances where such an Abelian treatment might fail. Examples of more sophisticated non-Abelian fluids were considered in several recent excellent reviews [84,85]. These papers might serve as points of departure for the treatment of more elaborate hydrodynamical problems involving non-Abelian entanglements. Finally, the force-free equation $\mathbf{\tilde{\omega}}=\alpha\mathbf{v}$, which is used in our work, is known to possess interesting new physical properties when, instead of treating $\alpha$ as a constant, one treats $\alpha$ as some function of the coordinates. Such a treatment can be found in Ref.[86] and involves the use of conformal transformations and invariants recently considered in our work on the Yamabe problem, Ref.[87], and the Poincar\'{e} conjecture, Ref.[68].

\bigskip

\textbf{Acknowledgement}. Both authors gratefully acknowledge useful technical correspondence and conversations with Dr. Jack Douglas (NIST). This work would look very different, or even would not be written at all, without his input and his impatience to see this work completed.

\bigskip

\textbf{Appendix A. Some facts from the theory of Green's functions}

\bigskip

Consider an equation
\begin{equation}
\left(\frac{\partial}{\partial t}-H\right)\Phi=0.  \tag{A.1}
\end{equation}
Such an equation can be rewritten in integral form as
\begin{equation}
\Phi(\mathbf{x},t)=\int G_{0}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})\Phi_{0}(\mathbf{x}^{\prime},t^{\prime})d\mathbf{x}^{\prime}dt^{\prime}  \tag{A.2}
\end{equation}
so that
\begin{equation}
\Phi(\mathbf{x},t\rightarrow t^{\prime})=\Phi_{0}(\mathbf{x},t^{\prime}).  \tag{A.3}
\end{equation}
Under such conditions, the Green's function $G_{0}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})$ must obey the following equation
\begin{equation}
\left(\frac{\partial}{\partial t}-H\right)G_{0}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=\delta(\mathbf{x}-\mathbf{x}^{\prime})\delta(t-t^{\prime}),  \tag{A.4}
\end{equation}
provided that $G_{0}=0$ for $t<t^{\prime}.$ In a more complicated situation, when
\begin{equation}
\left(\frac{\partial}{\partial t}-H-V\right)G(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=\delta(\mathbf{x}-\mathbf{x}^{\prime})\delta(t-t^{\prime}),  \tag{A.5}
\end{equation}
we can write a formal solution for $G$ in the form of the integral (Dyson's) equation
\begin{equation}
G(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})=G_{0}(\mathbf{x},t;\mathbf{x}^{\prime},t^{\prime})+\int G_{0}(\mathbf{x},t;\mathbf{x}^{\prime\prime},t^{\prime\prime})V(\mathbf{x}^{\prime\prime},t^{\prime\prime})G(\mathbf{x}^{\prime\prime},t^{\prime\prime};\mathbf{x}^{\prime},t^{\prime})d\mathbf{x}^{\prime\prime}dt^{\prime\prime}  \tag{A.6}
\end{equation}
or, symbolically, $G=G_{0}+G_{0}VG$. In the case of Eq.(4.35) of the main text, we have to replace Eq.(A.5) by
\begin{equation}
\left(\frac{\partial}{\partial t}-H_{1}-H_{2}-V_{12}\right)G(\mathbf{x}_{1},\mathbf{x}_{2},t;\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},t^{\prime})=\delta(\mathbf{x}_{1}-\mathbf{x}_{1}^{\prime})\delta(\mathbf{x}_{2}-\mathbf{x}_{2}^{\prime})\delta(t-t^{\prime})  \tag{A.7}
\end{equation}
and, accordingly, the Dyson-type Eq.(A.6) is now replaced by the analogous equation in which we must have $G_{0}(\mathbf{x}_{1},\mathbf{x}_{2},t;\mathbf{x}_{1}^{\prime},\mathbf{x}_{2}^{\prime},t^{\prime})=G_{0}(\mathbf{x}_{1},t;\mathbf{x}_{1}^{\prime},t^{\prime})G_{0}(\mathbf{x}_{2},t;\mathbf{x}_{2}^{\prime},t^{\prime}).$ To check the correctness of such a decomposition we note that for $\mathbf{x}\neq\mathbf{x}^{\prime}$ Eq.s (A.1) and (A.4) coincide, while for $t\rightarrow t^{\prime}$ integration of Eq.(A.4) over a small time domain around $t=t^{\prime}$, taking into account that $G_{0}=0$ for $t<t^{\prime}$, produces $G_{0}(\mathbf{x},t\rightarrow t^{\prime};\mathbf{x}^{\prime},t^{\prime})=\delta(\mathbf{x}-\mathbf{x}^{\prime}).$ Repeating these arguments for the two-particle Green's function and using Eq.(A.7) (with $V_{12}=0$) provides the needed proof of the decomposition of $G_{0}$ in the two-particle case. Define now formally the renormalized potential $\mathcal{V}$ via
\begin{equation}
G=G_{0}+G_{0}\mathcal{V}G_{0}.  \tag{A.8}
\end{equation}
Then, by comparing this equation with the original Dyson's equation for $G$ we obtain
\begin{equation}
G-G_{0}=G_{0}\mathcal{V}G_{0}=G_{0}VG=G_{0}V(G_{0}+G_{0}\mathcal{V}G_{0}).  \tag{A.9}
\end{equation}
This allows us to write the integral equation for the effective potential $\mathcal{V}$ as
\begin{equation}
\mathcal{V}=V+VG_{0}\mathcal{V}.  \tag{A.10}
\end{equation}
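Eq.s (A.6), (A.8) and (A.10) are straightforward to illustrate numerically: in a discretized setting $G_{0}$ and $V$ become matrices, and the Dyson and effective-potential equations can be solved either directly or by iterating the Born series. The following minimal Python sketch checks this algebra; all matrices are random illustrative stand-ins, not outputs of the theory in the main text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # grid points (illustrative)
G0 = 0.1 * rng.normal(size=(n, n))      # free Green's function as a matrix
V = np.diag(0.1 * rng.normal(size=n))   # local potential, diagonal in x

# Dyson equation G = G0 + G0 V G, solved directly ...
G = np.linalg.solve(np.eye(n) - G0 @ V, G0)

# ... and by iterating the Born series G -> G0 + G0 V G
G_it = G0.copy()
for _ in range(200):
    G_it = G0 + G0 @ V @ G_it
print(np.allclose(G, G_it))             # True: the series converged

# Effective potential of Eq.(A.10): V_eff = V + V G0 V_eff,
# consistent with the definition Eq.(A.8): G - G0 = G0 V_eff G0
V_eff = np.linalg.solve(np.eye(n) - V @ G0, V)
print(np.allclose(G - G0, G0 @ V_eff @ G0))   # True
\end{verbatim}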
\bigskip

\textbf{Appendix B. Dual treatment of the dynamics of colloidal suspensions and hydrodynamic screening}

\bigskip

We begin by first considering screening. The path integral for the functional, Eq.(5.5), can be conveniently rewritten as follows
\begin{eqnarray}
\mathcal{F}[\mathbf{A},\varphi] &=&\frac{\rho}{2}\int d^{3}r\{\left(\mathbf{\nabla}\times\mathbf{A}\right)^{2}+D_{0}\left\vert(\mathbf{\nabla}-i\frac{2\pi e}{D_{0}}\mathbf{A})\varphi\right\vert^{2}\}  \nonumber \\
&=&\frac{\rho}{2}\int d^{3}r\{\left(\mathbf{\nabla}\times\mathbf{A}\right)^{2}+\left(\frac{D_{0}}{\pi}\right)^{2}(\mathbf{\nabla}\psi-\frac{2\pi e}{D_{0}}\mathbf{A})^{2}\}  \TCItag{B.1}
\end{eqnarray}
upon substitution of the ansatz $\varphi=\dfrac{\sqrt{2D_{0}}}{2\pi}\exp(i\psi)$ into the first line of Eq.(B.1). Such a substitution is consistent with the current defined in Eq.(4.50). Since $\mathbf{\nabla}\cdot\mathbf{A}=0,$ we obtain
\begin{equation}
(\mathbf{\nabla}\psi-\frac{2\pi e}{D_{0}}\mathbf{A})^{2}=(\mathbf{\nabla}\psi)^{2}+\left(\frac{2\pi e}{D_{0}}\mathbf{A}\right)^{2}-\frac{4\pi e}{D_{0}}\mathbf{A}\cdot\mathbf{\nabla}\psi.  \tag{B.2}
\end{equation}
Consider now the following path integral
\begin{equation}
Z=\int D\{\psi\}\exp[-\frac{1}{2}\left(\frac{D_{0}}{\pi}\right)^{2}\int d^{3}\mathbf{r}((\mathbf{\nabla}\psi)^{2}-\frac{4\pi e}{D_{0}}\mathbf{A}\cdot\mathbf{\nabla}\psi)].  \tag{B.3a}
\end{equation}
Since it is of Gaussian type, it can be straightforwardly calculated, with the result
\begin{equation}
Z=N\exp(-\frac{e^{2}}{2}A_{\mu}\frac{\partial_{\mu}\partial_{\nu}}{\nabla^{2}}A_{\nu}).  \tag{B.3b}
\end{equation}
Here $N$ is some (normalization) constant. Using this result and Eq.(B.1) we obtain the following final expression for the partition function of the vector $\mathbf{A}$-field with account of the constraints
\begin{equation}
\Xi=\int D[\mathbf{A}]\exp\{-\frac{\rho}{2k_{B}T}\int d^{3}\mathbf{r}\{A_{\mu}[-\delta_{\mu\nu}\mathbf{\nabla}^{2}-(1-\frac{1}{\tilde{\xi}})\partial_{\mu}\partial_{\nu}]A_{\nu}+e^{2}A_{\mu}(\delta_{\mu\nu}-\frac{\partial_{\mu}\partial_{\nu}}{\nabla^{2}})A_{\nu}\}\}.  \tag{B.4a}
\end{equation}
This result is in complete accord with Eq.(4.51b), where for the mass $m$ of the vector field $\mathbf{A}$ we obtained $m=e$. The above derivation was made \textsl{without the use of Higgs-type calculations}, Ref.[88]. It is, of course, in accord with these calculations. We would like now to rewrite the obtained result in a somewhat formal (simplified) form as follows:
\begin{equation}
\Xi=\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{\rho}{2k_{B}T}\int d^{3}\mathbf{r}[\left(\mathbf{\nabla}\times\mathbf{A}\right)^{2}+e^{2}\mathbf{A}^{2}]\}.  \tag{B.4b}
\end{equation}
This will be used below in such simplified form. To avoid extra notation, we also set $\dfrac{\rho}{k_{B}T}=1.$ This factor can be restored if needed.
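The Gaussian integration step leading from Eq.(B.3a) to Eq.(B.3b) amounts to completing the square, i.e., to the finite-dimensional identity $\int d^{n}\psi\,\exp(-\frac{1}{2}\psi^{T}M\psi+J^{T}\psi)=(2\pi)^{n/2}(\det M)^{-1/2}\exp(\frac{1}{2}J^{T}M^{-1}J)$. The following minimal Python sketch verifies this identity by brute-force quadrature; the matrix $M$ and the source $J$ (which plays the role of $\mathbf{A}$) are illustrative.
\begin{verbatim}
import numpy as np

M = np.array([[2.0, 0.5],
              [0.5, 1.5]])        # positive-definite "kinetic" matrix
J = np.array([0.3, -0.7])         # source term

# brute-force quadrature of exp(-psi.M.psi/2 + J.psi) on a 2d grid
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
P = np.stack([X, Y])
expo = (-0.5 * np.einsum("i...,ij,j...->...", P, M, P)
        + np.einsum("i,i...->...", J, P))
Z_num = np.exp(expo).sum() * dx**2

# closed form: (2 pi)^{n/2} det(M)^{-1/2} exp(J.M^{-1}.J/2), n = 2
Z_exact = (2.0 * np.pi / np.sqrt(np.linalg.det(M))
           * np.exp(0.5 * J @ np.linalg.solve(M, J)))
print(Z_num, Z_exact)             # agree to quadrature accuracy
\end{verbatim}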
Now we are ready for the dual treatment, which can be done in several ways. For instance, following the logic of Dirac's paper [70], we replace $\Xi$ by
\begin{equation}
\Xi=\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}[(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})^{2}+e^{2}\mathbf{A}^{2}]\},  \tag{B.5}
\end{equation}
where $\mathbf{v}=\dfrac{\mathbf{\tilde{\omega}}}{e}=\oint\limits_{C}d\sigma\mathbf{v}(\sigma)\delta(\mathbf{r}-\mathbf{r}(\sigma)).$ Next, we use the Hubbard-Stratonovich-type identity allowing us to make a linearization, e.g.
\begin{equation}
\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}\,(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})^{2}\}=\int D[\mathbf{\Psi}]\exp[-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{\Psi}^{2}+i\int d^{3}\mathbf{r}\,(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})\cdot\mathbf{\Psi}].  \tag{B.6}
\end{equation}
Then, we take advantage of the fact that $\left(\mathbf{\nabla}\times\mathbf{A}\right)\cdot\mathbf{\Psi}=\left(\mathbf{\nabla}\times\mathbf{\Psi}\right)\cdot\mathbf{A}+\mathbf{\nabla}\cdot(\mathbf{A}\times\mathbf{\Psi})$. By ignoring surface terms, this allows us to rewrite the above result as follows
\begin{eqnarray}
&&\int D[\mathbf{\Psi}]\exp[-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{\Psi}^{2}+i\int d^{3}\mathbf{r}\,(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})\cdot\mathbf{\Psi}]  \nonumber \\
&=&\int D[\mathbf{\Psi}]\exp[-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{\Psi}^{2}+i\int d^{3}\mathbf{r}\,(\left(\mathbf{\nabla}\times\mathbf{\Psi}\right)\cdot\mathbf{A}+\mathbf{v}\cdot\mathbf{\Psi})].  \TCItag{B.7}
\end{eqnarray}
Using this result in Eq.(B.5) and using the Hubbard-Stratonovich transformation again we obtain:
\begin{eqnarray}
\Xi &=&\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}[(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})^{2}+e^{2}\mathbf{A}^{2}]\}  \nonumber \\
&=&\int D[\mathbf{\Psi}]\delta(\mathbf{\nabla}\cdot\mathbf{\Psi})\exp[-\frac{1}{2}\int d^{3}\mathbf{r}[\mathbf{\Psi}^{2}+\frac{1}{e^{2}}\left(\mathbf{\nabla}\times\mathbf{\Psi}\right)^{2}]+i\int d^{3}\mathbf{r}\,\mathbf{\Psi}\cdot\mathbf{v}].  \TCItag{B.8}
\end{eqnarray}
Since $\exp(i\int d^{3}\mathbf{r}\,\mathbf{\Psi}\cdot\mathbf{v})=\exp(i\oint\limits_{C}d\sigma\mathbf{v}(\sigma)\cdot\mathbf{\Psi}(\sigma))$, we can use this expression in Eq.(5.34) in order eventually to arrive at the functional of G-L type (analogous to Eq.(5.6), with obviously redefined constants). The vector field $\mathbf{\Psi}$ is now massive. It is convenient to make the replacement $\mathbf{\Psi}\rightleftarrows e\mathbf{\Psi}$ to make the functional for the $\mathbf{\Psi}$ field look exactly as in Eq.(B.5) (with $\mathbf{v}=0$). The above transformations provide a manifestly dual formulation of the colloidal suspension problem. These transformations can, nevertheless, be made differently. Such an alternative treatment is useful since the end result has relevance to string theory and to the problem of quark confinement in QCD, as was first noticed by Nambu, Ref.[71]. This topic is discussed briefly in the next appendix.
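The physical content of the dual formulation is transparent: the rescaled $\mathbf{\Psi}$ field is massive, and therefore the interaction it mediates between loop elements is screened, with the screening length set by the inverse mass. This is the origin of the exponential factor $\exp(-r/\xi_{H})$ appearing in Eq.(C.1) of the next appendix. A short Python check of the underlying Fourier transform, $\int\frac{d^{3}k}{(2\pi)^{3}}\frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{k^{2}+m^{2}}=\frac{e^{-mr}}{4\pi r}$, follows; the numerical values of the mass (inverse screening length) and the distance are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m, r = 1.0, 2.5   # illustrative mass and separation

# radial reduction of the 3d Fourier transform of 1/(k^2 + m^2):
# I(r) = (1/(2 pi^2 r)) * int_0^inf dk k sin(k r)/(k^2 + m^2)
val, err = quad(lambda k: k / (k**2 + m**2), 0.0, np.inf,
                weight="sin", wvar=r)
print(val / (2.0 * np.pi**2 * r))          # numerical value
print(np.exp(-m * r) / (4.0 * np.pi * r))  # Yukawa form: they agree
\end{verbatim}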
\bigskip

\textbf{Appendix C. Nambu string and colloidal suspensions: Some unusual uses of Dirac monopoles}

\bigskip

We begin with Eq.(B.5), but this time we treat it differently. In particular, we have
\begin{eqnarray}
\Xi &=&\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}[(\left(\mathbf{\nabla}\times\mathbf{A}\right)+\mathbf{v})^{2}+e^{2}\mathbf{A}^{2}]\}  \nonumber \\
&=&\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{v}^{2}-\int d^{3}\mathbf{r}\left(\mathbf{\nabla}\times\mathbf{A}\right)\cdot\mathbf{v}-\frac{1}{2}\int d^{3}\mathbf{r}\left(\mathbf{\nabla}\times\mathbf{A}\right)^{2}-\frac{e^{2}}{2}\int d^{3}\mathbf{r}\,\mathbf{A}^{2}\}  \nonumber \\
&=&\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{v}^{2}-\int d^{3}\mathbf{r}\left(\mathbf{\nabla}\times\mathbf{v}\right)\cdot\mathbf{A}-\frac{1}{2}\int d^{3}\mathbf{r}\left(\mathbf{\nabla}\times\mathbf{A}\right)^{2}-\frac{e^{2}}{2}\int d^{3}\mathbf{r}\,\mathbf{A}^{2}\}  \nonumber \\
&=&\int D[\mathbf{A}]\delta(\mathbf{\nabla}\cdot\mathbf{A})\exp\{-\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{v}^{2}+e^{2}\sum\limits_{i<j}\oint\limits_{C_{i}}\oint\limits_{C_{j}}\frac{d\mathbf{l}(\sigma_{i})\cdot d\mathbf{l}(\sigma_{j})}{\left\vert\mathbf{r}(\sigma_{i})-\mathbf{r}(\sigma_{j})\right\vert}\exp(-\frac{\left\vert\mathbf{r}(\sigma_{i})-\mathbf{r}(\sigma_{j})\right\vert}{\xi_{H}})\}.  \TCItag{C.1}
\end{eqnarray}
The exponent in Eq.(C.1) is useful for comparison with that given in Eq.(4.40). Such a comparison suggests that while the second (linking) term is essentially the same as in Eq.(4.40)\footnote{We have mentioned already that the screening is not affecting the topological nature of this term.}, the first term in the exponent of Eq.(C.1) might be analogous to the ``kinetic'' string-like term in Eq.(4.40). This line of reasoning can be found in the paper by Nambu [71]. If one ignores quark masses, as is usually done in the string-theoretic literature, then Eq.(13) of Nambu's paper looks very much like our Eq.(C.1), provided that we identify the first term with the stringy Nambu-Goto term\footnote{E.g. see Eq.(6.14).}. To do so, we formally need to use the results of our Sections 5.5 and 6.2. This time, however, we have to allow for self-linking. Also, we have to take into account that for this case the energy and the helicity become the same (up to a constant). Thus, one can consider the helicity instead of the energy. A very detailed treatment of helicity was made in the paper by Ricca and Moffatt, Ref.[89], from which it follows that the helicity is ideally suited for the description of self-linking. In such a case we have to deal with closed curves of finite thickness. In fact, it is sufficient to have a closed tube instead of a closed infinitely thin curve. On such a tube one can perform a Dehn surgery by cutting the tube at some section, twisting the free ends through a relative angle $2\pi n_{0},$ where $n_{0}$ is some integer, and reconnecting the ends. This operation makes a self-linking proportional to $n_{0}$. If we agree that the Dehn twists are made only in increments of $\pm 2\pi,$ we obtain a ``spectrum'' which is equidistant and, hence, string-like. This intuitive picture can be made more quantitative as follows.
Taking into account Eq.(5.48), the kinetic term in the exponent of Eq.(C.1) can be tentatively written as follows
\begin{equation}
\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{v}^{2}=\frac{e^{2}}{2}\sum\limits_{i}\oint\limits_{C_{i}}\oint\limits_{C_{i}}d\sigma d\sigma^{\prime}\frac{\mathbf{v}(\sigma)\cdot\mathbf{v}(\sigma^{\prime})}{\left\vert\mathbf{r}(\sigma)-\mathbf{r}(\sigma^{\prime})\right\vert}.  \tag{C.2}
\end{equation}
This expression suffers from two apparent deficiencies. First, while the second term in the exponent of Eq.(C.1) accounts for screening effects, Eq.(C.2) is written without such an account. Second, since energy and helicity are proportional to each other and since the Dehn surgery can be made only for surfaces, Eq.(C.2) should be modified by replacing infinitely thin contours by tubes. To repair the first problem we follow the book by Pismen, Ref.[90], where on page 186 we find the following information. Consider our Eq.s (4.52a) or (4.52b) and take into account Eq.(4.15). Then, we can write
\begin{equation}
\nabla^{2}\mathbf{A}-e^{2}\mathbf{A}=-e\oint\limits_{C}d\sigma\mathbf{v}(\sigma)\delta(\mathbf{r}-\mathbf{r}(\sigma)).  \tag{C.3}
\end{equation}
The solution of the equation for the vector potential $\mathbf{A}$, Eq.(5.38), should now be modified to account for screening and boundary effects. The result for the energy, Eq.(5.47), will then be changed accordingly, so that the screening exponent will emerge in Eq.(C.1). To account for surface effects we recognize that the self-linking expression, Eq.(C.2), is reparametrization invariant. If, instead of infinitely thin contours, we consider fluctuating tubes, the reparametrization invariance should survive. The surface analog of the expression $\oint\limits_{C}d\sigma\mathbf{v}(\sigma)\delta(\mathbf{r}-\mathbf{r}(\sigma))$ is given in Eq.(6.16b). By introducing the notation
\begin{equation}
S^{\alpha\beta}=\frac{\partial x^{\alpha}}{\partial\sigma}\frac{\partial x^{\beta}}{\partial\tau}-\frac{\partial x^{\alpha}}{\partial\tau}\frac{\partial x^{\beta}}{\partial\sigma}  \tag{C.4}
\end{equation}
the self-linking term can be brought into the following final form (for just one loop, for brevity)
\begin{equation}
\frac{1}{2}\int d^{3}\mathbf{r}\,\mathbf{v}^{2}=\frac{e^{2}}{2}\int d\sigma d\tau\int d\sigma^{\prime}d\tau^{\prime}\,S^{\alpha\beta}(\sigma,\tau)\frac{\exp(-\frac{\left\vert\mathbf{r}(\sigma,\tau)-\mathbf{r}(\sigma^{\prime},\tau^{\prime})\right\vert}{\xi_{H}})}{\left\vert\mathbf{r}(\sigma,\tau)-\mathbf{r}(\sigma^{\prime},\tau^{\prime})\right\vert}S^{\alpha\beta}(\sigma^{\prime},\tau^{\prime}),  \tag{C.5}
\end{equation}
which is just what Nambu obtained. He further demonstrated that such a term can be transformed into $-m\int d\sigma d\tau\sqrt{-g}$ (e.g. see our Eq.(6.14)), with the constant $m$ (the string tension) being related to the coupling constant(s) of the theory. Since in the limit of infinitely thin tubes the results just obtained match those discussed in our Section 5.5, we would like to take advantage of this observation. In Section 5.5 we considered fully flexible (Brownian) loops. From the theory of polymer solutions it is known that such loops can be made of the so-called semiflexible polymers whose rigidity is rather weak.
Following our work, Ref.[91], the path integrals describing semiflexible polymer chains are given by
\begin{equation}
I=\int D[\mathbf{u}(\tau)]\exp(-S[\mathbf{u}(\tau)])  \tag{C.6}
\end{equation}
with the action $S[\mathbf{u}(\tau)]$ given by
\begin{equation}
S[\mathbf{u}(\tau)]=\frac{\kappa}{2}\int\limits_{0}^{N}d\tau\left(\frac{d\mathbf{u}}{d\tau}\right)^{2}+\int\limits_{0}^{N}d\tau\,\lambda(\tau)(\mathbf{u}^{2}(\tau)-1).  \tag{C.7}
\end{equation}
The rigidity constant is $\kappa.$ For brevity it will be put equal to one. The Lagrange multiplier $\lambda$ takes care of the fact that the ``motion'' is taking place on the surface of a 2-sphere. Minimization of the action $S$ produces
\begin{equation}
\frac{d^{2}}{d\tau^{2}}\mathbf{u}=\lambda\mathbf{u},  \tag{C.8}
\end{equation}
with the Lagrange multiplier being determined by the constraint $\frac{d}{d\tau}\mathbf{u}^{2}=0$, thus producing instead of Eq.(C.8) the following result:
\begin{equation}
\mathbf{\ddot{u}}=-(\mathbf{\dot{u}}\cdot\mathbf{\dot{u}})\mathbf{u},  \tag{C.9}
\end{equation}
where $\mathbf{\dot{u}}=\frac{d}{d\tau}\mathbf{u},$ etc. (A simple numerical check of this constrained motion is given at the end of this appendix.) In view of the results of this subsection, consider now an immediate extension of the obtained results known as the Neumann model\footnote{Some useful details related to Neumann's model can be found in our work, Ref.[92].}
\begin{equation}
\mathbf{\ddot{u}}+\mathbf{Gu}=\lambda\mathbf{u},\text{ \ }\mathbf{u}^{2}=1  \tag{C.10}
\end{equation}
for some matrix $\mathbf{G}$, which can always be brought to diagonal form. By analogy with Eq.(6.8), we can rewrite Eq.(C.10) in the following equivalent form
\begin{equation}
\mathbf{u}\times[\mathbf{\ddot{u}}+\mathbf{Gu}]=0,  \tag{C.11}
\end{equation}
since $\mathbf{u}\times\lambda\mathbf{u}=0$. The above equation is just a special case of the Landau-Lifshitz (L-L) equation describing the dynamics of Heisenberg (anti)ferromagnets. In one space and one time dimension the L-L equation reads
\begin{equation}
\frac{\partial}{\partial t}\mathbf{u}=\{\mathbf{u}\times[\mathbf{\ddot{u}}+\mathbf{Gu}]\},  \tag{C.12}
\end{equation}
where now $\mathbf{\dot{u}}=\frac{d}{dx}\mathbf{u},$ etc. In Sections 6.3 and 6.5 we mentioned already that the L-L equation describes the dynamics of vortex filaments in fluids, plasmas, etc., and is also obtainable from the Lund-Regge theory. Following Veselov, Ref.[93], consider a special solution of the L-L equation obtained by inserting the ansatz $\mathbf{u}(x,t)=\mathbf{u}(x-i\theta t)$ into Eq.(C.12). Such a substitution produces:
\begin{equation}
-i\theta\mathbf{\dot{u}}=\{\mathbf{u}\times[\mathbf{\ddot{u}}+\mathbf{Gu}]\},\text{ \ }\mathbf{u}^{2}=1.  \tag{C.13}
\end{equation}
Taking the vector product of both sides of this equation with $\mathbf{u}$ produces
\begin{equation}
\mathbf{\ddot{u}}+\mathbf{Gu}=\lambda\mathbf{u}+i\theta[\mathbf{\dot{u}}\times\mathbf{u}].  \tag{C.14}
\end{equation}
This equation describes the classical motion of a charged particle in the presence of a Dirac monopole. At the quantum level such a problem was studied in detail by Dunne, Ref.[94], who demonstrated that in the limit $\theta\rightarrow\infty$ the monopole spectrum is equidistant. This result is compatible with the result of Ricca and Moffatt [89] and explains the role of monopoles in quark confinement (in view of the results of our Section 5.5). Furthermore, it corroborates the results of our recent work, Ref.[83], briefly mentioned in Section 6.5, where the spectrum of the 1-d Heisenberg XXX spin chain was recovered directly from the combinatorics of scattering data supplied by uses of Veneziano amplitudes in scattering experiments.
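Finally, the constrained motion of Eq.(C.9) announced above is easy to probe numerically: it generates constant-speed great-circle motion on the unit sphere, exactly as enforced by the Lagrange multiplier. A minimal Python sketch (initial data are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Eq.(C.9): motion on the unit sphere, u'' = -(u'.u') u
def rhs(t, y):
    u, v = y[:3], y[3:]
    return np.concatenate([v, -(v @ v) * u])

u0 = np.array([1.0, 0.0, 0.0])         # on the sphere
v0 = np.array([0.0, 0.7, 0.0])         # tangent velocity, u0.v0 = 0
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([u0, v0]),
                rtol=1e-10, atol=1e-12)

u, v = sol.y[:3], sol.y[3:]
print(np.max(np.abs(np.sum(u * u, axis=0) - 1.0)))  # |u| = 1 preserved
print(np.ptp(np.sum(v * v, axis=0)))                # speed stays constant
\end{verbatim}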
\bigskip

\textbf{References}

\bigskip

[1] A. Einstein, Ann. der Physik 17 (1905) 549.

[2] A. Einstein, Ann. der Physik 19 (1906) 289.

[3] M.S. Selim, M.A. Al-Naafa, M.C. Jones, AIChE Journal 39 (1993) 3.

[4] W. Russel, D. Saville, W. Schowalter, Colloidal Dispersions, Cambridge University Press, Cambridge, 1989.

[5] W. Hoover, F. Ree, J. Chem. Phys. 40 (1964) 2048.

[6] R. Roscoe, British J. Appl. Phys. 3 (1952) 267.

[7] J. Rallison, J. Fluid Mech. 186 (1988) 471.

[8] T. Tadros, Adv. Colloid and Interface Sci. 12 (1980) 141.

[9] H. Brenner, Int. J. Multiphase Flow 1 (1974) 195.

[10] M. Doi, S.F. Edwards, The Theory of Polymer Dynamics, Oxford University Press, Oxford, 1986.

[11] P. Hawksley, British J. Appl. Phys. 5 (1954) S1-S5.

[12] S. Edwards, M. Muthukumar, Macromolecules 17 (1984) 586.

[13] N. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1981.

[14] S. Meeker, W. Poon, P. Pusey, Phys. Rev. E 55 (1997) 5718.

[15] P. Segre, S. Meeker, P. Pusey, W. Poon, Phys. Rev. Lett. 75 (1995) 958.

[16] H. Hooper, J. Yu, A. Sassi, D. Soane, J. Appl. Polym. Sci. 63 (1997) 1369.

[17] G. Phillies, J. Colloid Interface Sci. 248 (2002) 528.

[18] S. Phan, W. Russel, Z. Cheng, J. Zhu, P. Chaikin, J. Dunsmuir, R. Ottewill, Phys. Rev. E 54 (1996) 6633.

[19] J. Brady, J. Chem. Phys. 99 (1993) 567.

[20] J. Bicerano, J. Douglas, D. Brune, Rev. Macromol. Chem. Phys. C39 (1999) 561.

[21] R. Ferrell, Phys. Rev. Lett. 24 (1970) 1169.

[22] A.J. Chorin, Vorticity and Turbulence, Springer-Verlag, Berlin, 1994.

[23] A. Kholodenko, J. Douglas, Phys. Rev. E 51 (1995) 1081.

[24] F. London, H. London, Proc. Roy. Soc. London, Ser. A 149 (1935) 71.

[25] E. Lifshitz, L. Pitaevskii, Statistical Physics, Part 2, Landau and Lifshitz Course of Theoretical Physics, Volume 9, Pergamon Press, London, 1980.

[26] V. Ginzburg, L. Landau, Zh. Exp. Theor. Phys. 20 (1950) 1064.

[27] G. Batchelor, J. Fluid Mech. 74 (1976) 1.

[28] S. Lovesey, Theory of Neutron Scattering From Condensed Matter, Volume 1, Oxford University Press, Oxford, 1984.

[29] L. Landau, E. Lifshitz, Fluid Mechanics, Landau and Lifshitz Course of Theoretical Physics, Volume 6, Pergamon Press, London, 1959.

[30] L. Landau, E. Lifshitz, Statistical Physics, Part 1, Landau and Lifshitz Course of Theoretical Physics, Volume 5, Pergamon Press, London, 1982.

[31] J. Jost, Riemannian Geometry and Geometric Analysis, Springer-Verlag, Berlin, 2005.

[32] E. Witten, Comm. Math. Phys. 121 (1989) 351.

[33] A. Kholodenko, T. Vilgis, Phys. Reports 298 (1998) 251.

[34] F. Tanaka, Prog. Theor. Phys. 68 (1982) 148.

[35] F. Ferrari, I. Lazzizzera, Nucl. Phys. B 559 (1999) 673.

[36] L. Landau, E. Lifshitz, Electrodynamics of Continuous Media, Landau and Lifshitz Course of Theoretical Physics, Volume 8, Pergamon Press, London, 1984.

[37] G. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press, Cambridge, 1967.

[38] M. Brereton, S. Shah, J. Phys. A: Math. Gen. 13 (1980) 2751.
[39] P. Ramond, Field Theory: A Modern Primer, Addison-Wesley Publ. Co., New York, 1989.

[40] R. Feynman, Phys. Rev. 80 (1950) 440.

[41] A. Kholodenko, A. Beyerlein, Phys. Rev. A 34 (1986) 3309.

[42] H. Kleinert, Gauge Fields in Condensed Matter, Volume 2, World Scientific, Singapore, 1989.

[43] A. Polyakov, Gauge Fields and Strings, Harwood Academic Publishers, New York, 1987.

[44] A. Kholodenko, A. Beyerlein, Phys. Rev. A 34 (1986) 3309.

[45] F. Lund, T. Regge, Phys. Rev. D 14 (1976) 1524.

[46] B. Dubrovin, A. Fomenko, S. Novikov, Modern Geometry - Methods and Applications, Volume 2, Springer-Verlag, Berlin, 1985.

[47] A. Kholodenko, D. Rolfsen, J. Phys. A 29 (1996) 5677.

[48] A. Kholodenko, T. Vilgis, Phys. Reports 298 (1998) 251.

[49] A. Kholodenko, Landau's last paper and its impact on mathematics, physics and other disciplines in new millennium, arXiv: 0806.1064.

[50] R. Feynman, Statistical Mechanics, Addison-Wesley Publ. Co., New York, 1972.

[51] F. London, Superfluids, Volume 2, J. Wiley \& Sons Co., New York, 1954.

[52] P.-G. De Gennes, Solid State Communications 10 (1972) 753.

[53] K. Iida, G. Baym, Phys. Rev. D 63 (2001) 074018.

[54] A. Leggett, Quantum Liquids, Oxford University Press, Oxford, 2006.

[55] M. Rasetti, T. Regge, Physica 80A (1975) 217.

[56] V. Berdichevsky, Phys. Rev. E 57 (1998) 2885.

[57] V. Berdichevsky, Continuum Mech. Thermodyn. 19 (2007) 135.

[58] W. Schief, Phys. Plasmas 10 (2003) 2677.

[59] C. Rogers, W. Schief, J. Math. Phys. 44 (2003) 3341.

[60] L. Garcia de Andrade, Phys. Scr. 73 (2006) 484.

[61] N. Bogoliubov, D. Shirkov, Introduction to the Theory of Quantized Fields, Wiley Interscience, New York, 1959.

[62] H. Moffatt, J. Fluid Mech. 35 (1969) 117.

[63] V. Arnold, B. Khesin, Topological Methods in Hydrodynamics, Springer-Verlag, Berlin, 1998.

[64] K. Brownstein, Phys. Rev. A 35 (1987) 4856.

[65] V. Kozlov, General Theory of Vortices, Springer-Verlag, Berlin, 1998.

[66] H. Zaghloul, O. Barajas, Am. J. Phys. 58 (1990) 783.

[67] F. Gonzalez-Gascon, D. Peralta-Salas, Phys. Lett. A 292 (2001) 75.

[68] A. Kholodenko, J. Geom. Phys. 58 (2008) 259.

[69] V. Ferraro, C. Plumpton, Magneto-Fluid Mechanics, Oxford University Press, Oxford, 1961.

[70] P. Dirac, Phys. Rev. 74 (1948) 817.

[71] Y. Nambu, Phys. Rev. D 10 (1974) 4262.

[72] G. 't Hooft, Nucl. Phys. B 190 (1981) 455.

[73] T. Suzuki, I. Yotsuyanagi, Phys. Rev. D 42 (1990) 4257.

[74] J. Stack, S. Neiman, R. Wensley, Phys. Rev. D 50 (1994) 3399.

[75] L. Faddeev, A. Niemi, Spin-charge separation, conformal covariance and the SU(2) Yang-Mills theory, arXiv: hep-th/0608111.

[76] L. Faddeev, Knots as possible excitations of the quantum Yang-Mills field, arXiv: 0805.1624.

[77] Y. Cho, Phys. Rev. D 21 (1980) 1080.

[78] Y. Cho, D. Pak, Phys. Rev. D 65 (2002) 074027.

[79] K.-I. Kondo, Phys. Rev. D 74 (2006) 125003.

[80] R. Richardson, J. Math. Phys. 9 (1968) 1327.

[81] M. Gaudin, La Fonction d'Onde de Bethe, Masson, Paris, 1983.

[82] A. Kholodenko, J. Geom. Phys. 56 (2006) 1387.

[83] A. Kholodenko, New strings for old Veneziano amplitudes IV. Connections with spin chains, arXiv: 0805.0113.

[84] R. Jackiw, V. Nair, S.-Y. Pi, A. Polychronakos, J. Phys. A 37 (2004) R327.

[85] A. Polychronakos, Noncommutative Fluids, arXiv: 0706.1095.

[86] I. Benn, J. Kress, J. Phys. A 29 (1996) 6295.
[87] A. Kholodenko, E. Ballard, Physica A 380 (2007) 115.

[88] W. Cottingham, D. Greenwood, An Introduction to the Standard Model of Particle Physics, Cambridge University Press, Cambridge, 2007.

[89] H. Moffatt, R. Ricca, Proc. R. Soc. London A 439 (1992) 411.

[90] L. Pismen, Vortices in Nonlinear Fields, Clarendon Press, Oxford, 1999.

[91] A. Kholodenko, Th. Vilgis, Phys. Rev. E 52 (1995) 3973.

[92] A. Kholodenko, Quantum signatures of solar system dynamics, arXiv: 0707.3992.

[93] A. Veselov, Sov. Phys. Dokl. 28 (1983) 458.

[94] G. Dunne, Ann. Phys. 215 (1992) 233.

\end{document}
\section{Introduction}

Increasing the efficiency of photon-to-electron energy conversion in nanomaterials has been under active investigation in recent years. For instance, one hopes that the efficiency of nanomaterial-based solar cells can be increased due to carrier multiplication, or the multiple exciton generation (MEG) process, where absorption of a single energetic photon results in the generation of several excitons \cite{10.1063/1.1736034,ISI:000229120900009,AJ2002115}. In the course of MEG the excess photon energy is channeled into creating additional charge carriers instead of generating vibrations of the nuclei \cite{AJ2002115}. Indeed, phonon-mediated electron relaxation is a major time evolution channel competing with MEG. A conclusion about MEG efficiency in a nanoparticle can only be made by simultaneously including MEG, phonon-mediated carrier relaxation, and, possibly, other processes, such as charge and energy transfer \cite{PhysRevB.88.155304,doi:10.1021/jz4004334}.

In bulk semiconductor materials MEG in the solar photon energy range is inefficient \cite{5144200,5014421,10.1063/1.370658}. In contrast, in nanomaterials MEG is expected to be enhanced by spatial confinement, which increases electrostatic interactions between electrons \cite{doi:10.1146/annurev.physchem.52.1.193,AJ2002115,doi:10.1021/nl0502672,doi:10.1021/nl100177c,doi:10.1021/ar300189j}. A potent measure of MEG efficiency is the average number of excitons generated from an absorbed photon -- the internal quantum efficiency (QE) -- which can be measured in experiments \cite{Semonin16122011}.

MEG has been observed in single-wall carbon nanotubes (SWCNTs) using transient absorption spectroscopy \cite{doi:10.1021/nl100343j} and photocurrent spectroscopy \cite{Gabor11092009}; $QE=1.3$ at the photon energy $\hbar \omega = 3E_g,$ where $E_g$ is the electronic gap, was found in the (6,5) SWCNT. Theoretically, MEG in SWCNTs has been studied using the tight-binding approximation, with QE up to $1.5$ predicted in the (17,0) zigzag SWCNT \cite{PhysRevB.74.121410,PhysRevLett.108.227401}. It has been demonstrated that in semiconductor nanostructures MEG is dominated by the impact ionization process \cite{PhysRevLett.106.207401,PhysRevB.86.165319}. Therefore, evaluating the MEG QE requires calculations of the exciton-to-biexciton decay rate (${\rm R}_{1\to2}$) and of the biexciton-to-exciton recombination rate (${\rm R}_{2\to1}$), the direct Auger process, and, of course, inclusion of carrier-phonon relaxation. In SWCNTs accurate description of these processes requires inclusion of the electron-hole bound state effects -- excitons \cite{doi:10.1021/acs.chemrev.5b00012}.

Recently, Density Functional Theory (DFT) combined with many-body perturbation theory (MBPT) techniques has been used to calculate the ${\rm R}_{1\to2}$ and ${\rm R}_{2\to1}$ rates, and the photon-to-bi-exciton, ${\rm R}_2$, and photon-to-exciton, ${\rm R}_1$, rates in the two chiral (6,2) and (10,5) SWCNTs with different diameters, including exciton effects \cite{doi:10.1063/1.4963735}. QE was then estimated as $QE=({\rm R}_1+2 {\rm R}_2)/({\rm R}_1+{\rm R}_2);$ for example, ${\rm R}_2={\rm R}_1$ corresponds to $QE=1.5.$ The results suggested that efficient MEG in chiral SWCNTs might be present within the solar spectrum range with ${\rm R}_{1\to2}\sim 10^{14}~s^{-1}$, while ${\rm R}_{2\to1}/{\rm R}_{1\to2}\leq 10^{-2};$ it was found that $QE\simeq 1.2-1.6.$ However, MEG strength in these SWCNTs was found to vary strongly with the excitation energy due to the highly non-uniform density of states.
It was suggested that MEG efficiency in these systems could be enhanced by altering the low-energy electronic spectrum via surface functionalization, or simply by mixing SWCNTs of different chiralities.

Another aspect of MEG dynamics has to do with the spin structure of the final bi-exciton state. So far, mostly the simplest possibility of a high-energy spin singlet exciton decaying into two spin-zero excitons has been considered in the literature. However, in recent years another possibility for the bi-exciton state, where a singlet exciton decays into a pair of spin-one exciton states which are in the total spin singlet state -- the singlet fission (SF) -- has received considerable attention. (See \cite{doi:10.1021/cr1002613,doi:10.1021/ar300288e} for reviews.) This is because triplet excitons tend to have lower energies compared to the singlets and have much longer radiative recombination lifetimes, which may be beneficial for energy conversion applications \cite{doi:10.1021/nl070355h}. Also, it has been observed that in some organic molecular crystals, such as various acene and rubrene configurations, there is resonant energy level alignment between the singlet and the double triplet exciton states which enhances SF \cite{C6CE00873A}. Properties and dynamics of triplet excitons in SWCNTs have been studied, both experimentally and theoretically \cite{doi:10.1021/nl070355h,PhysRevB.74.121410,nat_phot_2014_s1cnt}. But, to the best of our knowledge, investigation of SF in SWCNTs using DFT-based MBPT has not been attempted.

In this work we develop and apply a DFT-based MBPT technique to explore the possibility of SF in chiral SWCNTs. We calculate the ${\rm R}_{1\to 2}$ and ${\rm R}_{2\to 1}$ rates for SF for the (6,2), (6,5), and (10,5) SWCNTs, and also for the (6,2) SWCNT functionalized with Cl atoms. This work aims to provide further insights into the elementary processes contributing to MEG in SWCNTs, its dependence on the chirality and excitation energy, and its sensitivity to the surface functionalization.

The paper is organized as follows. Section \ref{sec:method} contains description of the methods and approximations employed in this work. Section \ref{sec:compdetail} contains description of the atomistic models studied in this work and of the DFT simulation details. Section \ref{sec:results} contains discussion of the results obtained. Conclusions and Outlook are presented in Section \ref{sec:conclusions}.

\section{Theoretical Methods and Approximations}
\label{sec:method}

\subsection{Electron Hamiltonian in the KS basis}
\label{sec:H}

The electron field operator $\psi_{\alpha}({\bf x})$ is related to the annihilation operator of the $i^{th}$ KS state, ${\rm a}_{i\alpha},$ as
\begin{eqnarray}
\psi_{\alpha}({\bf x})=\sum_i\phi_{i\alpha}({\bf x}){\rm a}_{i\alpha},
\label{psi_to_a}
\end{eqnarray}
where $\phi_{i\alpha}({\bf x})$ is the $i^{th}$ KS orbital, and $\alpha$ is the electron spin index \cite{FW,Mahan}. Here we only consider spin non-polarized states with $\phi_{i\uparrow}=\phi_{i\downarrow}\equiv \phi_i;$ also $\{{\rm a}_{i\alpha},~{\rm a}_{j\beta}^{\+}\}=\delta_{ij}\delta_{\alpha\beta},~\{{\rm a}_{i\alpha},~{\rm a}_{j\beta}\}=0.$ In the Kohn-Sham (KS) state representation the Hamiltonian of electrons in a CNT is (see, {\it e.g.}, \cite{molphysKK,doi:10.1063/1.4963735})
\begin{eqnarray}
{\rm H}= \sum_{i\alpha}\epsilon_{i} {\rm a}_{i\alpha}^{\+}{\rm a}_{i\alpha} +{\rm H}_{C}-{\rm H}_{V}+{\rm H}_{e-exciton},
\label{H}
\end{eqnarray}
where $\epsilon_{i\uparrow}=\epsilon_{i\downarrow}\equiv \epsilon_i$ is the $i^{th}$ KS energy eigenvalue. Typically, in a periodic structure $i={\{}n,{\bf k}{\}},$ where $n$ is the band number and ${\bf k}$ is the lattice wavevector. However, for the reasons explained in Section \ref{sec:compdetail}, here the KS states are labeled by just integers. The second term is the (microscopic) Coulomb interaction operator
\begin{eqnarray}
{\rm H}_C=\frac12\sum_{ijkl~\alpha,\beta}{\rm V}_{ijkl}{\rm a}^{\dagger}_{i\alpha}{\rm a}^{\dagger}_{j\beta}{\rm a}_{k\beta}{\rm a}_{l\alpha},
~{\rm V}_{ijkl}=\int{\rm d}{\bf x}{\rm d}{\bf y}~\phi^{*}_i({\bf x})\phi^{*}_j({\bf y})\frac{e^2}{|{\bf x}-{\bf y}|}\phi_k({\bf y}) \phi_l({\bf x}).
\label{HC}
\end{eqnarray}
The ${\rm H}_{V}$ term is the compensating potential which prevents double-counting of electron interactions
\begin{eqnarray}
{\rm H}_{V}=\sum_{ij\alpha}{\rm a}_{i\alpha}^{\+}\left(\int{\rm d}{\bf x}{\rm d}{\bf y}~\phi^*_i({\bf x}){V_{KS}({\bf x},{\bf y})}\phi_j({\bf y})\right){\rm a}_{j\alpha},
\label{HV}
\end{eqnarray}
where $V_{KS}({\bf x},{\bf y})$ is the KS potential consisting of the Hartree and exchange-correlation terms (see, {\it e.g.}, \cite{RevModPhys.74.601,RevModPhys.80.3}). Photon and electron-photon coupling terms are not directly relevant to this work and so are not shown, for brevity. Before discussing ${\rm H}_{e-exciton},$ the last term in the Hamiltonian (\ref{H}), let us recall that in the Tamm-Dancoff approximation a spin-zero exciton state can be represented as \cite{PhysRevB.62.4927,PhysRevB.29.5718}
\begin{eqnarray}
\ket{{\alpha}}_{0}={\rm B}^{\alpha\dagger}\ket{g.s.}=\sum_{e h}\sum_{\sigma=\uparrow,\downarrow}\frac{1}{\sqrt{2}} {\rm \Psi}^{\alpha}_{eh} a^{\dagger}_{e\sigma} a_{h\sigma} \ket{g.s.},
\label{Psialpha}
\end{eqnarray}
where ${\rm \Psi}^{\alpha}_{eh}$ is the spin-zero exciton wavefunction and ${\rm B}^{\alpha\dagger}$ is the $\alpha^{th}$ singlet exciton state creation operator; the index ranges are $e>HO,~h\leq HO,$ where HO is the highest occupied KS level and $LU=HO+1$ is the lowest unoccupied KS level.
For a spin-one exciton we have
\begin{eqnarray}
\ket{{\alpha}}_{1M}={\rm B}_{M}^{\alpha\dagger}\ket{g.s.}=\sum_{e h}\sum_{\mu,\nu} {\rm \Phi}^{\alpha}_{eh} a^{\dagger}_{e\mu} a_{h\nu} {\rm F}^{\mu\nu}_M\ket{g.s.}, ~\mu,\nu=\uparrow,\downarrow,
\label{Psialpha1}
\end{eqnarray}
where ${\rm F}^{\mu\nu}_1=\delta_{\mu\uparrow}\delta_{\nu\downarrow},~{\rm F}^{\mu\nu}_{0}=-{(\sigma_3)_{\mu\nu}}/{\sqrt{2}},~{\rm F}^{\mu\nu}_{-1}=-\delta_{\mu\downarrow}\delta_{\nu\uparrow};$ $\sigma_i,~i=1,2,3,$ is a Pauli matrix; ${\rm \Phi}^{\alpha}_{eh}$ is the spin-one exciton wavefunction, and ${\rm B}_{M}^{\alpha\dagger}$ is the triplet exciton creation operator for the state $\alpha$ with spin label $M,~M=-1,0,1.$ Then
\begin{eqnarray}
{\rm H}_{e-exciton}&=& \sum_{e h \alpha}\sum_{\sigma} \frac{1}{\sqrt{2}}\left(\left[\epsilon_{eh}-E^{\alpha}\right]{\rm \Psi}^{\alpha}_{eh}a_{h\sigma}a^{\dagger}_{e\sigma}({\rm B}^{\alpha}+{\rm B}^{\alpha\dagger})+h.c.\right)+\nonumber \\
&&\sum_{e h \alpha}\sum_{\mu \nu}\sum_{M=-1,0,1} \left(\left[\epsilon_{eh}-{\cal E}^{\alpha}\right]{\rm \Phi}^{\alpha}_{eh}a_{h\nu}a^{\dagger}_{e\mu}{\rm F}^{\mu\nu}_M({\rm B}_{M}^{\alpha}+{\rm B}_{M}^{\alpha\dagger})+h.c.\right)+\nonumber \\
&+&\sum_{\alpha}\left(E^{\alpha}{\rm B}^{\alpha\dagger}{\rm B}^{\alpha}+ {\cal E}^{\alpha}\left[\sum_{M=-1,0,1}{\rm B}_{M}^{\alpha\dagger}{\rm B}_{M}^{\alpha}\right]\right),~\epsilon_{eh}=\epsilon_{e}-\epsilon_{h},
\label{Heexc}
\end{eqnarray}
where ${\rm B}^{\alpha\dagger},~E^{\alpha}$ and ${\rm B}_{M}^{\alpha\dagger},~{\cal E}^{\alpha}$ are the singlet and triplet exciton creation operators and energies, respectively. The ${\rm H}_{e-exciton}$ term can be seen as the result of, {\it e.g.}, re-summation of perturbative corrections to the electron-hole correlation function (see, {\it e.g.}, \cite{Berestetskii:1979aa,Beane:2000fx}); it describes coupling of excitons, both singlets and triplets, to electrons and holes, which allows systematic inclusion of excitons in perturbative calculations \cite{PhysRevLett.92.077402,PhysRevLett.92.257402,PhysRevLett.95.247402,Beane:2000fx}. To avoid double-counting one chooses the appropriate degrees of freedom, {\it i.e.}, ${\rm a},~{\rm a}^{\dagger}$ or ${\rm B},~{\rm B}^{\dagger},$ depending on the quantity of interest. To determine the exciton wave functions and energies one solves the Bethe-Salpeter equation (BSE) \cite{PhysRevB.62.4927,PhysRevB.29.5718}.
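Structurally, once a finite electron-hole basis is chosen, the BSE below becomes a Hermitian eigenvalue problem: the diagonal carries the KS transition energies $\epsilon_{e}-\epsilon_{h}$, and the kernel adds the exchange and direct terms. The following minimal Python sketch shows only this structure; the random symmetric kernel is an illustrative stand-in for the physical kernel built from the transition densities defined below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_e, n_h = 4, 4                  # electron/hole states kept (illustrative)
eps_e = np.sort(rng.uniform(1.0, 3.0, n_e))    # KS energies above gap, eV
eps_h = np.sort(rng.uniform(-2.0, 0.0, n_h))   # KS energies below gap, eV

n = n_e * n_h
H = np.diag((eps_e[:, None] - eps_h[None, :]).ravel())  # eps_e - eps_h
K = 0.05 * rng.normal(size=(n, n))
H = H + 0.5 * (K + K.T)          # Hermitian BSE kernel (model stand-in)

E, C = np.linalg.eigh(H)         # exciton energies E^alpha ...
Psi = C.T.reshape(-1, n_e, n_h)  # ... and wavefunctions Psi^alpha_{eh}
print(E[0])                      # lowest exciton energy
\end{verbatim}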
In the static screening approximation commonly used for semiconductor nanostructures (see, {\it e.g.}, \cite{PhysRevLett.90.127401,PhysRevB.68.085310,PhysRevB.79.245106}) the BSE is \cite{PhysRevB.68.085310}
\begin{eqnarray}
&&\left([\epsilon_e-\epsilon_h]-E^{\alpha}\right){\rm \Psi}^{\alpha}_{eh}+ \sum_{e^{'}h^{'}} ({\rm c}{\rm K}_{Coul}+{\rm K}_{dir})(e,h;e^{'},h^{'}){\rm \Psi}^{\alpha}_{e^{'},h^{'}}=0,\nonumber \\
&&{\rm K}_{Coul} = \sum_{{\bf q}\neq 0} \frac{8\pi e^2{{\rho}}_{eh}({\bf q}){{\rho}}^{*}_{e^{'}h^{'}}({\bf q})}{V|{\bf q}|^2}, ~ {\rm K}_{dir} = -\frac{1}{V}\sum_{{\bf q}\neq 0} \frac{4\pi e^2{{\rho}}_{e e^{'}}({\bf q}){{\rho}}^{*}_{h h^{'}}({\bf q})}{|{\bf q}|^2-\Pi(0,-{\bf q},{\bf q})},
\label{BSEspin0}
\end{eqnarray}
where
\begin{eqnarray}
{{\rho}}_{ji}({\bf p})=\sum_{{\bf k}}\phi_j^{*}({\bf k}-{\bf p})\phi_i({\bf k})
\label{rhoij}
\end{eqnarray}
is the transition density, and
\begin{eqnarray}
\Pi(\omega,{\bf k},{\bf p})&=&\frac{8 \pi e^2}{V\hbar}\sum_{ij}\rho_{ij}({\bf k})\rho_{ji}({\bf p})\left(\frac{\theta_{-j}\theta_{i}}{\omega-\omega_{ij}+i\gamma}-\frac{\theta_{j}\theta_{-i}}{\omega-\omega_{ij}-i\gamma}\right),\nonumber \\
\sum_{i}\theta_i&=&\sum_{i > HO},~\sum_{i}\theta_{-i}=\sum_{i \leq HO},
\label{Piwkp}
\end{eqnarray}
is the RPA polarization insertion (see, {\it e.g.}, \cite{FW}). The additional screening approximation used in the ${\rm K}_{dir}$ term will be discussed in Section II.~B. For the triplet excitons only the direct term contributes, so ${\rm c}=0$ in Eq. (\ref{BSEspin0}) \cite{PhysRevLett.80.3320}. The BSE in terms of Feynman diagrams is shown in Fig. \ref{fig:BSE}.
\begin{figure}
\includegraphics[width=0.96\hsize]{bethesalpeter2.eps}
\caption{Feynman diagrams representing the BSE. Thin solid lines represent KS state propagators, thick solid lines are excitons, zigzag lines -- the Coulomb potential; ${\Pi}$ is the polarization insertion, Eq.~(\ref{Piwkp}).}
\label{fig:BSE}
\end{figure}
In our DFT simulations we have used the hybrid Heyd-Scuseria-Ernzerhof (HSE06) exchange-correlation functional \cite{vydrov:074106,heyd:219906}, which has been successful in reproducing electronic gaps in various semiconductor nanostructures ({\it e.g.}, \cite{RevModPhys.80.3,doi:10.1021/ct500958p}). (See, however, \cite{PhysRevLett.107.216806}.) Here, using the HSE06 functional substitutes for the $GW$ corrections to the KS energies, {\it i.e.}, for the first step in the standard three-step procedure \cite{PhysRevB.34.5390,PhysRevB.62.4927}. Therefore, the single-particle energy levels and wave functions are approximated by the KS $\epsilon_i$ and $\phi_i({\bf x})$ from the HSE06 DFT output. While the $GW$ technique would improve the accuracy of our calculations, it is unlikely to alter our results and conclusions qualitatively. Now one is to apply standard perturbative many-body quantum mechanics techniques ({\it e.g.}, \cite{AGD,FW}) to compute the SF decay rates, {\it i.e.}, the exciton-to-bi-exciton and bi-exciton-to-exciton rates with the two triplet excitons in the total spin-zero state, working to the second order in the screened Coulomb interaction. As noted above, phonon-mediated electron energy relaxation is an important process competing with MEG.
A suitable approach to describe the time evolution of a photo-excited nanosystem is the Boltzmann transport equation, which includes phonon emission/absorption terms together with the terms describing exciton-to-bi-exciton decay and recombination, along with the charge and energy transfer contributions, {\it etc.} This challenging task is work in progress. In this work electron-phonon interaction effects are only included by adding small imaginary parts to the KS energies, $\epsilon_i\rightarrow \epsilon_i - i \gamma_i$, which results in the non-zero line-widths in the expressions below. In this work all $\gamma$ will be set to 0.025 eV, corresponding to room temperature.

The KS orbital Fourier transformation conventions used in this work are
\begin{eqnarray}
&&\phi_i({\bf k}) = \frac{1}{\sqrt{V}}\int_{V} {\rm d}{\bf x}~\phi_i({\bf x}) {\rm e}^{-i{\bf k}\cdot{\bf x}}, ~\phi_i({\bf x}) =\frac{1}{\sqrt{V}}\sum_{{\bf k}}\phi_i({\bf k}) {\rm e}^{i{\bf k}\cdot{\bf x}},\nonumber \\
&&{\bf k} = 2\pi \left( \frac{n_x}{L_x}, \frac{n_y}{L_y}, \frac{n_z}{L_z} \right),~n_x, n_y, n_z=0,\pm 1, \pm 2,...
\label{phiKS}
\end{eqnarray}
with $V=L_x L_y L_z$ being the simulation cell volume.

\subsection{Medium Screening Approximation}

For completeness, let us outline the main idea of the simplified treatment of medium screening used in this work \cite{doi:10.1080/00268976.2015.1076580,doi:10.1063/1.4963735}. The standard random phase approximation (RPA) Coulomb potential is
\begin{eqnarray}
{\rm W}(\omega,{\bf k},{\bf p})=\frac{4\pi e^2}{V}\left[k^2\delta_{{\bf k},-{\bf p}}-\Pi(\omega,{\bf k},{\bf p})\right]^{-1}.
\label{Wwkp}
\end{eqnarray}
In the static limit $\Pi(\omega,{\bf k},{\bf p})\simeq \Pi(\omega=0,{\bf k},{\bf p}).$ Evaluating ${\rm W}(0,{\bf k},{\bf p})$ requires matrix inversion, which can severely limit the applicability of the MBPT techniques \cite{Deslippe20121269,PhysRevB.79.245106}. (See \cite{doi:10.1021/ct500958p} for recent advances.) In order to be able to simulate the nanosystems of interest one is forced to sacrifice some accuracy. With this in mind, a significant technical simplification is to retain only the {\it diagonal} matrix elements in $\Pi(0,{\bf k},{\bf p}),$ {\it i.e.}, to approximate $\Pi(0,{\bf k},{\bf p})\simeq\Pi(0,-{\bf k},{\bf k})\delta_{{\bf k},-{\bf p}}$ as implemented in Eqs. (\ref{BSEspin0},\ref{VC}). In position space this corresponds to $\Pi(0,{\bf x},{\bf x^{'}})\simeq\Pi(0,{\bf x}-{\bf x^{'}}),$ {\it i.e.}, to approximating the system as a uniform medium. One rationale for this approximation is that in quasi one-dimensional systems, such as CNTs, one can expect $\Pi({\bf x},{\bf x^{'}})\simeq\Pi(z-z^{'}),$ where $z,z^{'}$ are the axial positions. Previously, we have checked the quality of our computational approach, including this screening approximation, for chiral SWCNTs \cite{doi:10.1063/1.4963735}. We have computed low-energy absorption spectra for the (6,2) and (10,5) SWCNTs and found that our predictions for $E_{11}$ and $E_{22}$ -- the energies of the first two absorption peaks corresponding to transitions between the van Hove peaks in the CNT density of states -- reproduce the results of Weisman and Bachilo \cite{doi:10.1021/nl034428i} within 5--13\% error. Additionally, we have simulated the (6,5) SWCNT and found $E_{11}=1.1~eV,~E_{22}=2.05~eV$ {\it vs.} $E_{11}=1.27~eV,~E_{22}=2.19~eV$ from \cite{doi:10.1021/nl034428i}.
This suggests that our approach is adequate for a semi-quantitative description of these systems. The accuracy could be improved by using the full interaction ${\rm W}(0,{\bf k},{\bf p})$ or ${\rm W}(\omega,{\bf k},{\bf p})$, and $GW$, which would be much more computationally expensive. However, this would not change the overall conclusions of this work.

\subsection{Expressions for the Rates}

Within our approximations the exciton-to-bi-exciton decay rate from the impact ionization process is given by
\begin{eqnarray}
{\rm R}_{1{\rightarrow}2}=-2{\rm Im}\Sigma_{\gamma}(\omega_{\gamma}),
\label{R1to2}
\end{eqnarray}
where $\Sigma_{\gamma}(\omega)$ are the exciton-to-bi-exciton decay contributions to the self-energy function of the exciton state $\gamma$ with energy $E^{\gamma}=\hbar\omega_{\gamma}.$ The relevant self-energy Feynman diagrams are shown in Fig. \ref{fig:R1to2}. For completeness, let us quote the expressions for the all-singlet exciton-to-bi-exciton rates \cite{doi:10.1063/1.4963735}
\begin{eqnarray}
R_{1{\to}2}(\omega_{\gamma})&=&R^p+R^h+{\tilde R}^p+{\tilde R}^h,\nonumber \\
R^p(\omega_{\gamma})&=&2\frac{2 \pi}{\hbar^2}\sum_{\alpha\beta}\delta(\omega_{\gamma}-\omega_{\alpha} -\omega_{\beta}) \abs*{\sum_{ijkln}W_{{jlnk}} \theta_l \theta_{-n} (\Psi_{ln }^{\beta }){}^* \theta_i \theta_{-j} \theta_{-k} \Psi_{{ij}}^{\gamma} \left(\Psi_{{ik}}^{\alpha }\right){}^* }^2,\nonumber \\
R^h(\omega_{\gamma})&=&2\frac{2 \pi}{\hbar^2}\sum_{\alpha\beta}\delta(\omega_{\gamma}-\omega_{\alpha} -\omega_{\beta}) \abs*{\sum_{ijkln}W_{{jlnk}} \theta_{-l} \theta_n \Psi_{{nl}}^{\beta } \theta _{-i} \theta_j \theta _k (\Psi_{{ji}}^{\gamma }){}^* \Psi_{{ki}}^{\alpha} }^2.
\label{R1to2all}
\end{eqnarray}
The expressions for ${\tilde R}^h$ and ${\tilde R}^p$ are the same as the ones for $R^h,~R^p$ with $W_{jlnk}$ replaced by $W_{jlkn}$ and divided by 2.
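All of the rate expressions in this subsection share the Fermi-Golden-Rule structure: a sum of squared screened matrix elements weighted by a broadened energy-conserving $\delta$-function (defined below, with $\gamma=0.025~eV$). A minimal Python sketch of this structure only follows; the effective matrix elements and final-state energies are taken as given illustrative numbers rather than assembled from $W_{jlnk}$ and the exciton wavefunctions.
\begin{verbatim}
import numpy as np

hbar = 6.582e-16                 # eV s
gamma = 0.025                    # eV, Lorentzian width used in this work

def lorentz_delta(x):
    # Lorentzian representation of delta(x), x in eV, result in 1/eV
    return (gamma / np.pi) / (x**2 + gamma**2)

def rate_1to2(E_gamma, E_a, E_b, M):
    # sum_{a,b} delta(E_gamma - E_a - E_b) |M_ab|^2, golden-rule form;
    # the energy-argument delta carries 1/eV, hence a single 1/hbar
    dE = E_gamma - (E_a[:, None] + E_b[None, :])
    return (2.0 * np.pi / hbar) * np.sum(lorentz_delta(dE) * np.abs(M)**2)

rng = np.random.default_rng(2)
E_t = rng.uniform(0.7, 1.5, 30)          # final exciton energies, eV
M = 1e-3 * rng.normal(size=(30, 30))     # effective couplings, eV
print(rate_1to2(2.4, E_t, E_t, M))       # rate in 1/s
\end{verbatim}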
A spin-singlet state composed of two noninteracting spin-one excitons is ({\it cf.} Eq. 5 of \cite{doi:10.1063/1.4794425})
\begin{eqnarray}
\ket{\alpha\beta}_{TT;0}&=&\frac{1}{\sqrt{3}}\left({\rm B}_{1}^{\alpha\dagger}{\rm B}_{-1}^{\beta\dagger}-{\rm B}_{0}^{\alpha\dagger}{\rm B}_{0}^{\beta\dagger}+{\rm B}_{-1}^{\alpha\dagger} {\rm B}_{1}^{\beta\dagger}\right)\ket{g.s.}=\nonumber \\
&=&\sum_{e,h,e^{'},h^{'}} \sum_{\mu,\nu,\lambda,\sigma}{\rm T}^{\mu\nu\lambda\sigma}{\rm \Phi}^{\alpha}_{eh}{\rm \Phi}^{\beta}_{e^{'}h^{'}} a^{\dagger}_{e\mu} a_{h\nu} a^{\dagger}_{e^{'}\lambda} a_{h^{'}\sigma}\ket{g.s.},\nonumber \\
{\rm T}^{\mu\nu\lambda\sigma}&=&-\frac{1}{\sqrt{3}}\left(\delta_{\mu\sigma}\delta_{\nu\lambda}-\frac{1}{2}\delta_{\mu\nu}\delta_{\lambda\sigma}\right).
\label{2t00}
\end{eqnarray}
The expressions for the singlet fission rate, {\it i.e.}, the rate for the singlet-to-two-triplets process, are
\begin{eqnarray}
R^{SF}_{1{\to}2}(\omega_{\gamma})&=&{\rm R}^p+{\rm R}^h,\nonumber \\
{\rm R}^p(\omega_{\gamma})&=&\frac{2 \pi}{\hbar^2}\frac{3}{2}\sum_{\alpha\beta}\delta(\omega_{\gamma}-\omega_{1,\alpha} -\omega_{1,\beta}) \abs*{\sum_{ijkln}W_{{jlkn}} \theta_l \theta_{-n} (\Phi_{ln }^{\beta }){}^* \theta_i \theta_{-j} \theta_{-k} \Psi_{{ij}}^{\gamma} \left(\Phi_{{ik}}^{\alpha }\right){}^* }^2,\nonumber \\
{\rm R}^h(\omega_{\gamma})&=&\frac{2 \pi}{\hbar^2}\frac{3}{2}\sum_{\alpha\beta}\delta(\omega_{\gamma}-\omega_{1,\alpha} -\omega_{1,\beta}) \abs*{\sum_{ijkln}W_{{jlkn}} \theta_{-l} \theta_n \Phi_{{nl}}^{\beta } \theta _{-i} \theta_j \theta _k (\Psi_{{ji}}^{\gamma }){}^* \Phi_{{ki}}^{\alpha} }^2,
\label{R1to2triplet}
\end{eqnarray}
where $\hbar\omega_{1,\alpha}={\cal E}^{\alpha}$ denotes a triplet exciton energy. In the above
\begin{eqnarray}
W_{jlnk}&=&\sum_{{\bf q}\neq 0}\frac{4 \pi e^2}{V}\frac{{{\rho}}_{kj}^{*}({\bf q}){{\rho}}_{ln}({\bf q})}{\left(q^2-\Pi(0,-{\bf q},{\bf q})\right)}
\label{VC}
\end{eqnarray}
is the (approximate) screened Coulomb matrix element, and
\begin{eqnarray}
\delta(x)=\frac1\pi\frac{\gamma}{x^2+\gamma^2}
\label{delta_L}
\end{eqnarray}
is the Lorentzian representation of the $\delta$-function. Only the direct channel diagram (Fig. \ref{fig:R1to2}, on the right) contributes to SF. In the above expressions only the terms leading in the ratio of the typical exciton binding energy to the HO-LU gap, $\epsilon_{binding}/E_g < 1,$ are shown, for brevity. The rate as a function of energy is given by averaging over the initial exciton states within a given energy range with the $\gamma=0.025~eV$ resolution, {\it i.e.},
\begin{eqnarray}
R(\epsilon)=\frac{1}{N(\epsilon)}\sum_{\alpha}R(E^{\alpha}),
\label{R1to2ave}
\end{eqnarray}
where the sum is over the exciton states within the $(\epsilon,\epsilon+\gamma)$ energy range and $N(\epsilon)$ is the number of such states. The above expressions have the overall structure of the Fermi Golden Rule. The bi-exciton-to-exciton rate expressions are given by similar expressions with the initial and final states reversed.
\begin{figure}[!t]
\center
\includegraphics*[width=0.995\textwidth]{R1to2_v1.eps}
\caption{Exciton self-energy Feynman diagrams for the exciton$\to$bi-exciton process. Thin solid lines stand for the KS state propagators, thick solid lines depict excitons, zigzag lines -- the screened Coulomb potential. The diagrams on the left and the right correspond to the exchange and direct channels, respectively. Not shown, for brevity, are the similar diagrams with all the Fermion arrows reversed. Only the direct channel diagram contributes to SF. For SF the final bi-exciton state is understood to be the singlet.}
\vspace{-0.45ex}
\label{fig:R1to2}
\end{figure}

\section{Computational Details}
\label{sec:compdetail}

The optimized geometries, KS orbitals, and KS energy eigenvalues of the chiral SWCNTs studied here have been obtained using the {\it ab initio} total energy and molecular dynamics program VASP (Vienna Ab initio Simulation Package) with the hybrid Heyd-Scuseria-Ernzerhof (HSE06) exchange-correlation functional \cite{vydrov:074106,heyd:219906}, using the projector augmented-wave (PAW) pseudopotentials \cite{PhysRevB.50.17953,PhysRevB.59.1758}.
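The DFT output enters the expressions of Section \ref{sec:method} through the plane-wave coefficients $\phi_{i}({\bf k})$ of the KS orbitals, from which the transition densities and the screened matrix elements $W_{jlnk}$ are assembled. A schematic one-dimensional Python sketch of this assembly follows; the orbital coefficients and the model polarization are illustrative stand-ins for the actual VASP output and for Eq.(\ref{Piwkp}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n_orb, n_k = 8, 64          # orbitals and plane waves kept (illustrative)
phi = rng.normal(size=(n_orb, n_k)) + 1j * rng.normal(size=(n_orb, n_k))
phi /= np.linalg.norm(phi, axis=1, keepdims=True)

def rho(j, i, q):
    # rho_ji(q) = sum_k phi_j*(k - q) phi_i(k); np.roll shifts k -> k - q
    return np.sum(np.conj(np.roll(phi[j], q)) * phi[i])

def W_jlnk(j, l, n, k, qs, Pi, pref=1.0):
    # W = sum_{q != 0} pref rho_kj(q)^* rho_ln(q) / (q^2 - Pi(q));
    # pref stands for 4 pi e^2 / V in the units used here
    return sum(pref * np.conj(rho(k, j, q)) * rho(l, n, q)
               / (q**2 - Pi(q)) for q in qs if q != 0)

Pi = lambda q: -0.5 * q**2 / (1.0 + q**2)   # model static polarization
print(W_jlnk(0, 1, 2, 3, range(-8, 9), Pi))
\end{verbatim}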
{\section{Computational Details}}
\label{sec:compdetail}
The optimized geometries, KS orbitals and KS energy eigenvalues of the chiral SWCNTs studied here have been obtained using the {\it{ab initio}} total energy and molecular dynamics program VASP (Vienna ab initio simulation program) with the hybrid Heyd-Scuseria-Ernzerhof (HSE06) exchange-correlation functional \cite{vydrov:074106,heyd:219906} and the projector augmented-wave (PAW) pseudopotentials \cite{PhysRevB.50.17953,PhysRevB.59.1758}.
\begin{figure}[!t]
\center
\vspace{-8.5cm}
\includegraphics*[width=0.995\textwidth]{CNT623cellCl2para3_CNT651cell_CNT1053cell_v1.eps}
\vspace{-7cm}
\caption{Atomistic models of chiral SWCNTs. Shown in a) is (6,2) with two chlorine atoms adsorbed to the surface in a para configuration. In order to keep the doping concentration low, three unit cells have been included in the simulations. In b) is SWCNT (6,5). Only one unit cell is included due to computational cost restrictions. In c) is (10,5) with three unit cells. }
\vspace{-0.45ex}
\label{fig:optimized_structures}
\end{figure}
Using the conjugate gradient method for ion position relaxation, the structures were relaxed until the residual forces on the ions were no greater than $0.05~eV/\AA.$ The momentum cutoff defined by
\begin{eqnarray}
\frac{\hbar^2{k}^2}{2 m}\leq {\cal E}_{max},
\label{Ecutoff}
\end{eqnarray}
where $m$ is the electron mass, was set to ${\cal E}_{max}=400~eV.$ The number of KS orbitals included in the simulations, which sets the effective energy window, was chosen so that $\epsilon_{i_{max}}-\epsilon_{HO}\simeq\epsilon_{LU}-\epsilon_{i_{min}}\geq 3~eV,$ where $i_{max},~i_{min}$ are the highest and the lowest KS labels included in the simulations. The SWCNT atomistic models were placed in various finite-volume simulation boxes with periodic boundary conditions, where in the axial direction the length of the box has been chosen to accommodate an integer number of unit cells, while in the other two directions the SWCNTs have been kept separated by about $1~nm$ of vacuum surface-to-surface, thus excluding spurious interactions between their periodic images. Previously, we have found reasonably small (about 10\%) variation in the single-particle energies over the Brillouin zone when three unit cells were included in the DFT simulations \cite{doi:10.1063/1.4963735}. So, the simulations have been done including three unit cells of the (6,2) and (10,5) SWCNTs at the $\Gamma$ point; in this approximation, the lattice momenta of the KS states, which are suppressed by the reduced Brillouin zone size, have been neglected. For the (6,5) SWCNT, due to the high computational cost, only one unit cell was included. However, as mentioned above, simulations based on this size-reduced model reproduce the absorption spectrum features with the same accuracy as for the other SWCNTs (see Table I). The rationale for including more unit cells instead of the standard sampling of the Brillouin zone by including more $K$-points in the DFT simulations is that the surfaces of these SWCNTs are to be functionalized. Inclusion of several unit cells allows us to keep the concentration of surface dopants reasonably low. So, here we have simulated the (6,2) SWCNT doped with chlorine, where two $Cl$ atoms are attached to the same carbon ring in the para configuration, which has been found to be the preferred arrangement.\footnote{Private communication with S.~Kilina.} The atomistic models of the optimized nanotubes are shown in Fig. (\ref{fig:optimized_structures}). In this work, all the DFT simulations have been done in vacuum, which should be adequate to describe the properties of these SWCNTs dispersed in a non-polar solvent.
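The orbital-window criterion above can be stated compactly in code. The sketch below is illustrative only; \texttt{eps} is a hypothetical sorted array of KS eigenvalues and \texttt{i\_HO} the index of the highest occupied orbital.
\begin{verbatim}
import numpy as np

def orbital_window(eps, i_HO, width=3.0):
    # choose i_min, i_max so that eps[i_max] - eps[i_HO] >= width (eV) and
    # eps[i_LU] - eps[i_min] >= width, with i_LU = i_HO + 1
    i_LU = i_HO + 1
    i_max = int(np.searchsorted(eps, eps[i_HO] + width))
    i_min = int(np.searchsorted(eps, eps[i_LU] - width, side='right')) - 1
    return i_min, i_max
\end{verbatim}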
\section{Results and Discussion}
\label{sec:results}
\begin{figure}[!t]
\center
\begin{tabular}{cc}
\vspace{-5.25ex}
\raisebox{8.175\totalheight}{\hspace{-4.25ex} (a)} \raisebox{0.185\totalheight}{\includegraphics*[width=0.465\textwidth] {DOS_exc_2texc_CNT62_3cell_eV_v1.eps}} & \raisebox{8.175\totalheight}{(b)}\raisebox{0.175\totalheight}{\includegraphics*[width=0.495\textwidth] {R12_vs_hweV_t_s_CNT62_3cell_g025_v1.eps}}\\
\vspace{-2.25ex}
\raisebox{9.175\totalheight}{\hspace{-3.25ex} (c)} \raisebox{0.195\totalheight}{\includegraphics*[width=0.465\textwidth] {DOS_exc_2texc_CNT105_3cell_v1.eps}}& \raisebox{9.175\totalheight}{\hspace{-2.25ex} (d)} \raisebox{0.195\totalheight}{\includegraphics*[width=0.485\textwidth] {R12_vs_hweV_t_s_CNT105_3cell_g025_v1.eps}}\\
\vspace{-2.25ex}
\raisebox{9.175\totalheight}{\hspace{-3.25ex} (e)} \raisebox{0.195\totalheight}{\includegraphics*[width=0.465\textwidth] {DOS_exc_2texc_CNT65_1cell_eV_v1.eps}}& \raisebox{9.175\totalheight}{\hspace{-2.25ex} (f)} \raisebox{0.195\totalheight}{\includegraphics*[width=0.485\textwidth] {R12_vs_hweV_t_s_CNT65_1cell_g025_v1.eps}}\\
\end{tabular}
\caption{Singlet exciton and triplet biexciton densities of states (DOS) and the MEG $R_{1\to 2}$ rates, all-singlet and SF, for the (6,2) ((a) and (b)), (10,5) ((c) and (d)) and (6,5) ((e) and (f)) CNTs. The rates for (6,2) and (10,5) are from \cite{doi:10.1063/1.4963735} and shown here for comparison. (Color on-line only.)}
\vspace{-0.25ex}
\label{fig:excDOS_R12_62_105_65}
\end{figure}
The main results are shown in Table I and in Figs. (\ref{fig:excDOS_R12_62_105_65}) and (\ref{fig:R_62Cl}). We have found (see Table I) that in all cases the lowest triplet exciton energy is red-shifted compared to the singlet one, as expected, since the repulsive exchange contribution to the BSE kernel is absent for the triplets \cite{doi:10.1021/nl070355h}. As a result, the energy threshold for SF is somewhat lower compared to the all-singlet MEG. The SF and all-singlet MEG rates for the pristine (6,2), (10,5) and (6,5) SWCNTs are shown in Fig. (\ref{fig:excDOS_R12_62_105_65}); the all-singlet rates for (6,2) and (10,5) from \cite{doi:10.1063/1.4963735} are shown for comparison.
{\begin{table}
\raisebox{0.00001\totalheight}{\begin{tabular}{|c|c|c|c|c|}\hline \textbf{} & $(6,2)$ & $(6,2)+Cl_2$ & $(6,5)$ & $(10,5)$ \\ \hline \textbf{$E_g,~eV$} & 1.33 & 0.96 & 1.22 & 0.91 \\ \hline \textbf{$E_g^{BSE}~s=0,~eV$} & 0.98 & 0.74 & 1.09 & 0.835 \\ \hline \textbf{$E_g^{BSE}~s=1,~eV$} & 0.73 & 0.27 & 0.86 & 0.71 \\ \hline \end{tabular}}
\caption{ $E_g\equiv\epsilon_{LU}-\epsilon_{HO}$ is the HO-LU gap, $E^{BSE}_g$ is the minimal exciton energy from BSE for the singlets ($s=0$) and triplets ($s=1$).
}
\label{Pik-kresults}
\end{table}}
\begin{figure}[!t]
\center
\begin{tabular}{cc}
\vspace{-5.25ex}
\raisebox{8.175\totalheight}{\hspace{-4.25ex} (a)} \raisebox{0.185\totalheight}{\includegraphics*[width=0.465\textwidth] {DOS_exc_CNT62_para3_3cellEg.eps}} & \raisebox{8.175\totalheight}{(b)}\raisebox{0.17\totalheight}{\includegraphics*[width=0.485\textwidth] {DOS_texc_CNT62_para3_3cellEg.eps}}\\
\vspace{-2.25ex}
\raisebox{9.175\totalheight}{\hspace{-3.25ex} (c)} \raisebox{0.195\totalheight}{\includegraphics*[width=0.465\textwidth] {DOS_exc_2texc_CNT62_Cl2para3_3cell_eV_v1.eps}}& \raisebox{9.175\totalheight}{\hspace{-0.25ex} (d)} \raisebox{0.2095\totalheight}{\includegraphics*[width=0.485\textwidth] {R12_R21_vs_hweV_t_s_CNT62_Cl2para3_3cell_g025_v1.eps}}\\
\end{tabular}
\caption{Exciton DOS and MEG rates for the pristine and doped (6,2) SWCNT. Shown in (a) are the singlet exciton DOSs for the pristine and $Cl$ doped (6,2) SWCNT; in (b) -- the triplet exciton DOSs for the pristine and doped (6,2) SWCNT. Shown in (c) are the singlet exciton and triplet biexciton DOSs for the doped (6,2) SWCNT. In (d) are the MEG rates for the $Cl$ doped (6,2) SWCNT: the dashed (red) line depicts the all-singlet exciton-to-biexciton rate $R_{1\to 2}$, the solid (blue) line -- the SF exciton-to-biexciton rate. The (green) dotted line corresponds to the biexciton-to-exciton rate $R_{2\to1}$ of the $Cl$ doped (6,2) SWCNT. {\bf $R_{2\to1}$ has been multiplied by 10 for better presentation.} This recombination rate is the greatest of all the cases considered here. (Color on-line only.) }
\vspace{-0.25ex}
\label{fig:R_62Cl}
\end{figure}
Our calculations predict that efficient MEG, both in the SF and all-singlet channels, is present in chiral SWCNTs within the solar spectrum range, but its strength varies strongly with the excitation energy. This is clearly due to the highly non-uniform low-energy electronic spectrum of SWCNTs (see Fig. \ref{fig:excDOS_R12_62_105_65}, (a), (c), (e)). The $R_{1\to 2}$ MEG rates reach $10^{14}-10^{15}~1/s$ (see Fig. \ref{fig:excDOS_R12_62_105_65}, (b), (d), (f)). The recombination rates $R_{2\to1}$ are suppressed at all energies, with $R_{2\to 1}/R_{1\to 2} \leq 10^{-2}$ \cite{doi:10.1063/1.4963735}; they are not shown. In (6,2) the all-singlet MEG starts at the energy threshold $2 \times E_g=1.95~eV,$ and the SF at $2.3 \times E^t_g=1.7~eV,$ where $E^t_g$ is the minimal triplet exciton energy; in (10,5) the all-singlet MEG becomes appreciable at about $2.4 \times E_g=2.0~eV,$ while the threshold for SF is $2.75 \times E^{t}_g=1.95~eV.$ In (6,5) the all-singlet MEG starts at $2.1 \times E_g=2.25~eV,$ and SF at $2.2 \times E^t_g=1.9~eV.$ Shown in Fig. \ref{fig:R_62Cl} are the results for the (6,2) SWCNT with chlorine atoms attached to the surface, as described in Section \ref{sec:compdetail}. A complete discussion of the influence of this surface defect on the system's optoelectronic properties will be presented elsewhere. As far as the MEG-related properties are concerned, we predict that doping significantly red-shifts the exciton energy spectra, both singlet (Fig. \ref{fig:R_62Cl}, (a)) and triplet (Fig. \ref{fig:R_62Cl}, (b)). The DOSs for the initial and final MEG states are shown in Fig. \ref{fig:R_62Cl}, (c). In this case, SF MEG is energetically allowed even for the lowest singlet exciton. Shown in Fig. \ref{fig:R_62Cl}, (d) are the MEG rates for the $Cl$-decorated (6,2) SWCNT. The all-singlet MEG threshold is at about $2 E_g=1.5~eV;$ the threshold for SF is $0.75~eV,$ which is the lowest singlet exciton energy.
Importantly, both the all-singlet and SF MEG rates $R_{1 \to 2}$ are much less oscillatory as a function of the exciton energy than the pristine-case rates ({\it cf.} Fig. \ref{fig:excDOS_R12_62_105_65}, (b) and Fig. \ref{fig:R_62Cl}, (d)). The recombination rate $R_{2\to 1}$ -- which is the greatest of all the cases considered -- is shown in Fig. \ref{fig:R_62Cl}, (d). Note that it has been multiplied by 10 for better presentation. In all cases we find that the SF rates are greater in magnitude than the all-singlet rates. This is likely due to the aforementioned overall red-shift of the triplet biexciton spectrum compared to the singlet exciton energies. While the Coulomb interaction matrix elements between the electron/hole and trion states are similar in magnitude in both cases, at the same energy there are simply more available bi-exciton final states for the SF than for the all-singlet channel.

\section{Conclusions and Outlook}
\label{sec:conclusions}
Working to the second order in the screened Coulomb interaction and including electron-hole bound state effects, we have developed a DFT-based MBPT technique for SF which allows one to compute the exciton-to-bi-exciton and the inverse bi-exciton-to-exciton rates when the initial state is a high-energy singlet, while the final state is a pair of non-interacting triplet excitons in a spin-correlated state with total spin zero. This method was then used to calculate MEG in chiral SWCNTs, using (6,2), (6,5) and (10,5) as examples. Also, we have simulated the (6,2) SWCNT with chlorine atoms adsorbed to the surface. Our calculations suggest that chiral SWCNTs have efficient MEG within the solar spectrum range, both for the all-singlet channel and SF, with $R_{1\to 2} \sim 10^{14}-10^{15}~s^{-1}$ and with the recombination rates suppressed as $R_{2\to 1}/R_{1\to 2} \sim 10^{-2}.$ In the pristine SWCNTs the MEG rates vary strongly with the excitation energy. In contrast, our results for the $Cl$-decorated (6,2) SWCNT suggest that surface functionalization significantly alters the low-energy spectrum of a SWCNT. As is typical for doping, the defect creates additional shallow electronic states, which improves MEG efficiency. In the doped case, $R_{1\to 2}$ is not only greater in magnitude, but is also a much smoother function of the excitation energy. An alternative way to increase the efficiency of carrier multiplication is to use SWCNT mixtures of different chiralities. As noted above, an investigation of MEG efficiency in a nanosystem should be comprehensive, {\it i.e.}, carrier multiplication and biexciton recombination should be allowed to ``compete'' with other processes, such as phonon-mediated carrier relaxation, energy and charge transfer, {\it etc.} \cite{doi:10.1021/jz4004334}. The Kadanoff-Baym-Keldysh, or NEGF, technique is a suitable formalism to achieve this goal \cite{Landau10,PhysRevB.83.165306,PhysRevLett.112.257402}. Bi-exciton creation and recombination, both in the all-singlet and SF channels, phonon emission, recombination, energy and charge transfer, and other effects are to be included in the transport equation describing the time evolution of a weakly non-equilibrium photoexcited state. As described above (see Section II), our calculations had to utilize several simplifying approximations. However, we have verified that our results for the absorption spectra are in reasonable agreement with experimental data, with an error of less than 13\% for the E$_{11}$ and E$_{22}$ excitonic bands of the (6,2), (6,5) and (10,5) nanotubes.
This suggests the overall applicability of our technique to these systems, at least at the semi-quantitative level. The accuracy of our methods can be further improved in several ways. One natural improvement is to calculate $GW$ single-particle energy corrections, which can then be easily incorporated in the rate expressions. This is likely to blue-shift the rate curves by a fraction of an eV without significant changes to their shape. Another step is to use the full RPA interaction ${\rm W}(0,{\bf k},{\bf p})$ rather than ${\rm W}(0,-{\bf k},{\bf k}).$ Also, in the impact ionization process the typical energy exchange exceeds the gap and, so, the role of dynamical screening needs to be investigated. Going beyond second order in the screened Coulomb interaction would require keeping the wave function renormalization factor (see, {\it e.g.}, \cite{FW}) in the exciton decay rate expressions in Eqs. (\ref{R1to2}), (\ref{R1to2triplet}). However, none of these corrections is likely to change the main results of this work, while drastically increasing the computational cost.

\section{Acknowledgments}
The authors acknowledge financial support from the NSF grant CHE-1413614. The authors also acknowledge the use of computational resources at the Center for Computationally Assisted Science and Technology (CCAST) at North Dakota State University and at the National Energy Research Scientific Computing Center (NERSC), allocation award 86678, supported by the Office of Science of the DOE under contract No. DE-AC02-05CH11231.
\section{Introduction}
The Bernoulli polynomials $B_{n}\left( x\right) $ are defined by the exponential generating function
\begin{equation}
\sum_{n\geq 0}B_{n}\left( x\right) \frac{t^{n}}{n!}=\frac{te^{xt}}{e^{t}-1},\text{ }\left\vert t\right\vert <2\pi . \label{13}
\end{equation}
In particular, the rational numbers $B_{n}=B_{n}\left( 0\right) $ are called Bernoulli numbers and have the explicit formula \cite{Graham}
\[
B_{n}=\sum_{k=0}^{n}\QATOPD\{ \} {n}{k}\frac{\left( -1\right) ^{k}k!}{k+1},
\]
where $\QATOPD\{ \} {n}{k}$ is the Stirling number of the second kind \cite{Graham}. As is well known, the Bernoulli numbers are of considerable importance in different areas of mathematics, such as number theory, combinatorics and special functions. Moreover, many generalizations and extensions of these numbers appear in the literature. One of the generalizations of the Bernoulli numbers is the $p$-Bernoulli numbers, defined by the three-term recurrence relation \cite{Rahmani}
\begin{equation}
B_{n+1,p}=pB_{n,p}-\frac{\left( p+1\right) ^{2}}{p+2}B_{n,p+1}, \label{16}
\end{equation}
with the initial condition $B_{0,p}=1$. These numbers also satisfy the explicit formula
\[
B_{n,p}=\frac{p+1}{p!}\sum_{k=0}^{n}\QATOPD\{ \} {n+p}{k+p}_{p}\frac{\left( -1\right) ^{k}\left( k+p\right) !}{k+p+1},
\]
where $\QATOPD\{ \} {n+p}{k+p}_{p}$ is the $p$-Stirling number of the second kind \cite{Broder}. As a special case, setting $p=0$ in the above equation gives $B_{n,0}=B_{n}$. The $p$-Bernoulli polynomials, which are the polynomial extension of $B_{n,p}$, are defined by the following convolution:
\begin{equation}
B_{n,p}\left( x\right) =\sum_{k=0}^{n}\binom{n}{k}x^{n-k}B_{k,p}. \label{18}
\end{equation}
The first few $p$-Bernoulli polynomials are
\begin{eqnarray*}
B_{0,p}\left( x\right) &=&1, \\
B_{1,p}\left( x\right) &=&x-\frac{1}{p+2}, \\
B_{2,p}\left( x\right) &=&x^{2}-\frac{2x}{p+2}-\frac{p-1}{\left( p+2\right) \left( p+3\right) }.
\end{eqnarray*}
Moreover, these polynomials have the integral representations
\begin{eqnarray}
\dint\limits_{b}^{a}B_{n,p}\left( t\right) dt &=&\frac{B_{n+1,p}\left( a\right) -B_{n+1,p}\left( b\right) }{n+1}, \label{19} \\
\dint\limits_{0}^{1}B_{n,p}\left( t\right) dt &=&\frac{1}{n+1}\sum_{k=0}^{n}\binom{n+1}{k}B_{k,p}, \label{20}
\end{eqnarray}
a recurrence relation
\begin{equation}
B_{n,p}\left( x+1\right) -B_{n,p}\left( x\right) =\sum_{k=0}^{n-1}\binom{n}{k}B_{k,p}\left( x\right) , \label{22}
\end{equation}
and a three-term recurrence relation
\begin{equation}
B_{n+1,p}\left( x\right) =\left( x+p\right) B_{n,p}\left( x\right) -\frac{\left( p+1\right) ^{2}}{p+2}B_{n,p+1}\left( x\right) . \label{21}
\end{equation}
In the special case of (\ref{18}) when $x=0$, we obtain $B_{n,p}\left( 0\right) =B_{n,p}$. Some other properties and applications of $p$-Bernoulli polynomials and numbers can be found in \cite{Rahmani}. The main formula of this paper is \cite[p. 361]{Rahmani}
\[
\frac{1}{p+1}\sum_{n\geq 0}B_{n,p}\frac{t^{n}}{n!}=\dint\limits_{-1}^{0}\frac{\left( 1+y\right) ^{p}}{1-y\left( e^{t}-1\right) }dy,\text{ for }p\geq 0.
\]
Using the generating function of the geometric polynomials $w_{n}\left( y\right) $ (see Section 2 for details on $w_{n}\left( y\right) $), the above equation can be written as
\begin{equation}
\frac{1}{p+1}B_{n,p}=\dint\limits_{-1}^{0}\left( 1+y\right) ^{p}w_{n}\left( y\right) dy, \label{17}
\end{equation}
which is the generalization of Keller's identity \cite{KELLER}
\[
\dint\limits_{-1}^{0}w_{n}\left( y\right) dy=B_{n}.
\]
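The representation (\ref{17}) is easy to test symbolically. The following Python/SymPy sketch is an illustration only: it assumes the standard expansion $w_{n}\left( y\right) =\sum_{k=0}^{n}\QATOPD\{ \} {n}{k}k!\,y^{k}$ of the geometric polynomials and generates $B_{n,p}$ from the recurrence (\ref{16}); the case $p=0$ recovers Keller's identity.
\begin{verbatim}
from sympy import symbols, integrate, factorial, Rational
from sympy.functions.combinatorial.numbers import stirling

y = symbols('y')

def w(n):  # geometric polynomial w_n(y) = sum_k S(n,k) k! y^k
    return sum(stirling(n, k) * factorial(k) * y**k for k in range(n + 1))

def B(n, p):  # p-Bernoulli numbers from the recurrence (16), B_{0,p} = 1
    if n == 0:
        return Rational(1)
    return p * B(n - 1, p) - Rational((p + 1)**2, p + 2) * B(n - 1, p + 1)

for n in range(5):
    for p in range(4):
        assert integrate((1 + y)**p * w(n), (y, -1, 0)) == B(n, p) / Rational(p + 1)
print("(17) checked for n < 5, p < 4")
\end{verbatim}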
Thus, using this integral representation and the properties of geometric polynomials, we generalize a recurrence relation of the Bernoulli numbers to the $p$-Bernoulli numbers and obtain an explicit formula for the $p$-Bernoulli numbers. Moreover, extending the representation (\ref{17}) to the $p$-Bernoulli polynomials, we give the generalization of the telescopic formula and of Raabe's formula of the Bernoulli polynomials to the $p$-Bernoulli polynomials. Thus, as special cases of these results, we get an explicit formula, a finite summation and a convolution identity for the Bernoulli polynomials and numbers. Besides, we evaluate a Faulhaber-type summation in terms of $p$-Bernoulli polynomials.

First, we extend the well-known recurrence relation of the Bernoulli numbers
\[
\sum_{k=0}^{n}\binom{n+1}{k}B_{k}=0\text{ \ for }n\geq 1,
\]
to the $p$-Bernoulli numbers in the following theorem.

\begin{theorem}
\label{teo3}For $n\geq 1$ and $p\geq 0$,
\begin{equation}
\sum_{k=0}^{n}\binom{n+1}{k}B_{k,p}=-pB_{n,p}. \label{6}
\end{equation}
\end{theorem}

We note that using (\ref{20}) and (\ref{22}) in the above theorem gives us the following conclusions:
\[
B_{n,p}\left( 1\right) =B_{n,p}-pB_{n-1,p},
\]
and
\begin{equation}
\dint\limits_{0}^{1}B_{n,p}\left( t\right) dt=\frac{-pB_{n,p}}{n+1}, \label{7a}
\end{equation}
respectively. Also, these results are the generalization of the following well-known properties of $B_{n}$:
\[
B_{n}\left( 1\right) =B_{n}\text{ and }\dint\limits_{0}^{1}B_{n}\left( t\right) dt=0,\text{ \ for }n\geq 1.
\]

The Bernoulli numbers are connected with some well-known special numbers \cite{Can1, Cenkci2, Merca2, Merca1, Mezo2, Mihioubi2}. Rahmani \cite{Rahmani} also gave explicit formulas involving different kinds of special numbers. Now, we obtain a new explicit formula for $B_{n,p}$, and hence for $B_{n}$, in the following theorem.

\begin{theorem}
\label{teo4}For $n\geq 1$ and $p\geq 0$,
\begin{equation}
B_{n,p}=\left( p+1\right) \sum_{k=1}^{n}\QATOPD\{ \} {n}{k}\frac{\left( -1\right) ^{k+n}k!}{\left( k+p\right) \left( k+p+1\right) }. \label{8}
\end{equation}
When $p=0$ this becomes
\begin{equation}
B_{n}=\sum_{k=1}^{n}\QATOPD\{ \} {n}{k}\frac{\left( -1\right) ^{k+n}\left( k-1\right) !}{k+1}. \label{26}
\end{equation}
\end{theorem}

In order to deal with some properties of the $p$-Bernoulli polynomials, we need to extend the integral representation (\ref{17}) to $B_{n,p}\left( x\right) $.

\begin{proposition}
\label{pro2}Let $n$ and $p$ be non-negative integers. Then we have
\begin{equation}
\frac{1}{p+1}B_{n,p}\left( x\right) =\dint\limits_{-1}^{0}\left( 1+y\right) ^{p}w_{n}\left( x;y\right) dy, \label{2}
\end{equation}
where $w_{n}\left( x;y\right) $ (see Section 2) are the two variable geometric polynomials.
\end{proposition}

One of the important properties of $B_{n}\left( x\right) $ is the telescopic formula
\[
B_{n}\left( x+1\right) -B_{n}\left( x\right) =nx^{n-1}.
\]
From this formula, Bernoulli gave a closed formula for Faulhaber's summation in terms of Bernoulli polynomials and numbers:
\begin{equation}
\sum_{k=0}^{m}k^{n}=\frac{B_{n+1}\left( m+1\right) -B_{n+1}}{n+1}. \label{3}
\end{equation}
Now, we want to give an extension of the telescopic formula for the $p$-Bernoulli polynomials.

\begin{proposition}
\label{pro1}For any non-negative integers $n$ and $p$,
\begin{equation}
B_{n,p+1}\left( x+1\right) -B_{n,p+1}\left( x\right) =\frac{p+2}{p+1}\left( B_{n,p}\left( x+1\right) -x^{n}\right) . \label{28}
\end{equation}
\end{proposition}
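Theorem \ref{teo3} and Proposition \ref{pro1} can likewise be checked for small indices; the following is a self-contained sketch using the recurrence (\ref{16}) and the convolution (\ref{18}).
\begin{verbatim}
from sympy import symbols, binomial, expand, Rational

x = symbols('x')

def B(n, p):  # p-Bernoulli numbers via the recurrence (16)
    if n == 0:
        return Rational(1)
    return p * B(n - 1, p) - Rational((p + 1)**2, p + 2) * B(n - 1, p + 1)

def Bpoly(m, q, t):  # p-Bernoulli polynomials B_{m,q}(t) via (18)
    return sum(binomial(m, k) * t**(m - k) * B(k, q) for k in range(m + 1))

for n in range(1, 6):
    for p in range(4):
        # Theorem teo3, Eq. (6)
        assert sum(binomial(n + 1, k) * B(k, p) for k in range(n + 1)) == -p * B(n, p)
        # Proposition pro1, Eq. (28)
        lhs = Bpoly(n, p + 1, x + 1) - Bpoly(n, p + 1, x)
        rhs = Rational(p + 2, p + 1) * (Bpoly(n, p, x + 1) - x**n)
        assert expand(lhs - rhs) == 0
print("Eqs. (6) and (28) verified for n < 6, p < 4")
\end{verbatim}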
This telescopic formula for the $p$-Bernoulli polynomials gives us the evaluation of a finite summation of $p$-Bernoulli polynomials. In the particular case $p=0$, we arrive at a new finite summation involving Bernoulli polynomials.

\begin{theorem}
\label{teo5}For any non-negative integers $n,m$ and $p$,
\begin{equation}
\sum_{k=0}^{m}B_{n,p}\left( k+1\right) =\frac{B_{n+1}\left( m+1\right) -B_{n+1}}{n+1}+\frac{p+1}{p+2}\left( B_{n,p+1}\left( m+1\right) -B_{n,p+1}\right) . \label{4}
\end{equation}
When $p=0$ this becomes
\begin{equation}
\sum_{k=0}^{m}\left[ B_{n}\left( k+1\right) +nk^{n}\right] =\left( m+1\right) B_{n}\left( m+1\right) . \label{27}
\end{equation}
\end{theorem}

Another important identity for the Bernoulli polynomials is Raabe's formula
\[
m^{n-1}\sum_{k=0}^{m-1}B_{n}\left( x+\frac{k}{m}\right) =B_{n}\left( mx\right) .
\]
Now, we want to extend Raabe's formula to the $p$-Bernoulli polynomials.

\begin{theorem}
\label{teo1}For $m\geq 1$ and $n,p\geq 0$,
\begin{equation}
m^{n-1}\sum_{k=0}^{m-1}B_{n,p}\left( x+\frac{k}{m}\right) =\left( p+1\right) B_{n}\left( mx\right) -p\sum_{k=0}^{n}\binom{n}{k}\frac{m^{k}B_{n-k}\left( mx\right) B_{k,p}}{k+1}. \label{31}
\end{equation}
\end{theorem}

Using the generating function technique, Chu and Zhou \cite{Chu} gave several convolution identities for Bernoulli numbers. Two of them are the following:
\begin{eqnarray*}
\sum_{k=0}^{n}\binom{n}{k}\frac{B_{k+1}B_{n-k}}{k+1} &=&-B_{n}-B_{n+1}, \\
\sum_{k=0}^{n}\binom{n}{k}\frac{2^{k}B_{k+1}B_{n-k}}{k+1} &=&\frac{-B_{n+1}-\left( 2^{n-1}+1\right) B_{n}}{2}.
\end{eqnarray*}
If we set $p=1$ and $x=0$ in (\ref{31}) and use (\ref{16}) and (\ref{17}), we have a closed formula for a generalization of the above equations in the following corollary.

\begin{corollary}
\label{cor2}For $m\geq 1$ and $n\geq 0$,
\[
\sum_{k=0}^{n}\binom{n}{k}\frac{m^{k}B_{k+1}B_{n-k}}{k+1}=\frac{-mB_{n}-B_{n+1}}{m}+m^{n-1}\sum_{k=0}^{m-1}\frac{k}{m}B_{n}\left( \frac{k}{m}\right) .
\]
\end{corollary}

Finally, we evaluate a Faulhaber-type summation in terms of $p$-Bernoulli polynomials and numbers, which generalizes the following finite summation \cite[p. 18, Eq. 1]{Gould}:
\[
\sum_{k=0}^{n}\frac{\left( -1\right) ^{k}}{\binom{n}{k}}=\frac{n+1}{n+2}\left( \left( -1\right) ^{n}+1\right) .
\]

\begin{theorem}
\label{teo2}For $n\geq 1$ and $p\geq 0$, we have
\[
\sum_{k=0}^{n}\frac{k^{p}\left( -1\right) ^{k}}{\binom{n}{k}}=\frac{n+1}{n+2}\left[ \left( -1\right) ^{n+p}B_{p,n+1}\left( -n\right) +B_{p,n+1}\right] .
\]
\end{theorem}

The summary by sections is as follows: Section 2 is a preliminary section, where we give the definitions and known results needed. In Section 3, we derive a recurrence relation for the $p$-Bernoulli polynomials and a Raabe-type relation for geometric polynomials, which we need in the proofs of Theorem \ref{teo1} and Theorem \ref{teo2}. In Section 4, we give the proofs of the results mentioned above.

\section{Preliminaries}

Geometric polynomials are defined by the exponential generating function \cite{T}
\begin{equation}
\frac{1}{1-y\left( e^{t}-1\right) }=\sum_{n=0}^{\infty }w_{n}\left( y\right) \frac{t^{n}}{n!}. \label{14}
\end{equation}
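As a quick consistency check of (\ref{14}), one can expand its left hand side in powers of $t$ and compare with the explicit expansion of $w_{n}\left( y\right) $ used in the sketches above:
\begin{verbatim}
from sympy import symbols, exp, series, factorial, expand
from sympy.functions.combinatorial.numbers import stirling

t, y = symbols('t y')
N = 6
gen = series(1 / (1 - y * (exp(t) - 1)), t, 0, N).removeO()
for n in range(N):
    wn = sum(stirling(n, k) * factorial(k) * y**k for k in range(n + 1))
    assert expand(gen.coeff(t, n) * factorial(n) - wn) == 0
print("generating function (14) matches w_n(y) for n < 6")
\end{verbatim}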
These polynomials have the explicit formula
\begin{equation}
w_{n}\left( y\right) =y\sum_{k=1}^{n}\QATOPD\{ \} {n}{k}\left( -1\right) ^{n+k}k!\left( y+1\right) ^{k-1},\text{ \ }n>0, \label{23}
\end{equation}
and satisfy the reflection formula
\begin{equation}
w_{n}\left( y\right) =\left( -1\right) ^{n}\frac{y}{y+1}w_{n}\left( -y-1\right) ,\text{ for }n>0. \label{24}
\end{equation}
Moreover, these polynomials are related to the $p$-Bernoulli numbers through the integral representation
\begin{equation}
{\int\limits_{-1}^{0}}y^{p}w_{n}\left( y\right) dy=\left( -1\right) ^{n+p+1}\frac{p+1}{p+2}B_{n-1,p+1},\text{ for }n>1,\text{ }p\geq 0. \label{25}
\end{equation}
See \cite{B, B2, B3, B4, BoyadzhievandDil, Dil1, Kargin1} for other properties and applications of geometric polynomials.

Two variable geometric polynomials are defined by means of the following generating function \cite{Kargin1}:
\begin{equation}
\sum_{n=0}^{\infty }w_{n}\left( x;y\right) \frac{t^{n}}{n!}=\frac{e^{xt}}{1-y\left( e^{t}-1\right) }. \label{9}
\end{equation}
Moreover, they are related to $w_{k}\left( y\right) $ by the convolution
\begin{equation}
w_{n}\left( x;y\right) =\sum_{k=0}^{n}\binom{n}{k}w_{k}\left( y\right) x^{n-k}, \label{10}
\end{equation}
with the special case
\begin{equation}
w_{n}\left( 0;y\right) =w_{n}\left( y\right) . \label{29}
\end{equation}

\section{Some other basic properties}

In this section, in order to use them in the proofs of Theorem \ref{teo1} and Theorem \ref{teo2}, we give a recurrence relation for the $p$-Bernoulli polynomials and a Raabe-type formula for two variable geometric polynomials. For the proof of Theorem \ref{teo2}, we first need the following proposition.

\begin{proposition}
For $n\geq 1$ and $p\geq 1$, we have
\begin{equation}
p^{2}\sum_{k=1}^{n}\binom{n+1}{k+1}y^{n-k}B_{k,p}=\left( p+1\right) y^{n+1}+p\left( n+1\right) y^{n}-\left( p+1\right) B_{n+1,p-1}\left( 1+y\right) . \label{36}
\end{equation}
\end{proposition}

\begin{proof}
From (\ref{18}), we have
\begin{equation}
\sum_{k=1}^{n}\binom{n}{k}y^{n-k}B_{k,p}\left( x\right) =B_{n,p}\left( x+y\right) -y^{n}. \label{11}
\end{equation}
Let us integrate both sides of the above equation with respect to $x$ from $0$ to $1$. Then, using (\ref{7a}), the left hand side of (\ref{11}) becomes
\begin{eqnarray}
\sum_{k=1}^{n}\binom{n}{k}y^{n-k}{\int\limits_{0}^{1}}B_{k,p}\left( x\right) dx &=&-p\sum_{k=1}^{n}\binom{n}{k}\frac{y^{n-k}B_{k,p}}{k+1} \nonumber \\
&=&\frac{-p}{n+1}\sum_{k=1}^{n}\binom{n+1}{k+1}y^{n-k}B_{k,p}. \label{12}
\end{eqnarray}
On the other hand, using (\ref{19}) and Proposition \ref{pro1} in the right hand side of (\ref{11}), we have
\begin{eqnarray*}
{\int\limits_{0}^{1}}\left[ B_{n,p}\left( x+y\right) -y^{n}\right] dx &=&{\int\limits_{y}^{y+1}}B_{n,p}\left( t\right) dt-y^{n}{\int\limits_{0}^{1}dx} \\
&=&\frac{B_{n+1,p}\left( y+1\right) -B_{n+1,p}\left( y\right) }{n+1}-y^{n} \\
&=&\frac{p+1}{p\left( n+1\right) }\left[ B_{n+1,p-1}\left( y+1\right) -y^{n+1}\right] -y^{n}.
\end{eqnarray*}
Combining the above equation with (\ref{12}) gives the desired equation.
\end{proof}

Now, we give the Raabe-type formula for two variable geometric polynomials in the following proposition. Later, we use it in the proof of Theorem \ref{teo1}.

\begin{proposition}
For $m\geq 1$ and $n\geq 1$,
\begin{equation}
m^{n-1}\sum_{k=0}^{m-1}w_{n-1}\left( x+\frac{k}{m};y\right) =\frac{1}{ny}\sum_{k=1}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) w_{k}\left( y\right) . \label{15}
\end{equation}
\end{proposition}
\begin{proof}
Using (\ref{9}) and the identity
\[
\sum_{k=0}^{m-1}x^{k}=\frac{x^{m}-1}{x-1},
\]
we have
\begin{eqnarray*}
\sum_{n=0}^{\infty }\frac{t^{n}}{n!}\sum_{k=0}^{m-1}w_{n}\left( x+\frac{k}{m};y\right) &=&\sum_{k=0}^{m-1}\sum_{n=0}^{\infty }w_{n}\left( x+\frac{k}{m};y\right) \frac{t^{n}}{n!} \\
&=&\frac{1}{1-y\left( e^{t}-1\right) }\sum_{k=0}^{m-1}e^{\left( x+\frac{k}{m}\right) t} \\
&=&\frac{e^{xt}}{1-y\left( e^{t}-1\right) }\frac{e^{t}-1}{e^{t/m}-1} \\
&=&\frac{e^{xt}}{e^{t/m}-1}\frac{1}{1-y\left( e^{t}-1\right) }\frac{\left( 1-\left( 1-y\left( e^{t}-1\right) \right) \right) }{y} \\
&=&\frac{1}{y\left( t/m\right) }\left( \frac{\left( t/m\right) e^{xt}}{e^{t/m}-1}\frac{1}{1-y\left( e^{t}-1\right) }-\frac{\left( t/m\right) e^{xt}}{e^{t/m}-1}\right) .
\end{eqnarray*}
From (\ref{13}) and (\ref{14}), the above equation can be written as
\[
\frac{y}{m}\sum_{n=1}^{\infty }\frac{nt^{n}}{n!}\sum_{k=0}^{m-1}w_{n-1}\left( x+\frac{k}{m};y\right) =\sum_{n=1}^{\infty }\frac{t^{n}}{n!}\left[ \sum_{k=1}^{n}\binom{n}{k}\frac{B_{n-k}\left( mx\right) w_{k}\left( y\right) }{m^{n-k}}-\frac{B_{n}\left( mx\right) }{m^{n}}\right] .
\]
Finally, comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we get (\ref{15}).
\end{proof}

\section{Proofs}

In this section, we give the proofs of all the results mentioned in Section 1.

\begin{proof}[Proof of Theorem \protect\ref{teo3}]
Using (\ref{24}) in the following equation \cite[Proposition 15]{BoyadzhievandDil}, we have
\begin{eqnarray*}
\sum_{k=0}^{n}\binom{n}{k}w_{k}\left( y\right) &=&\frac{1+y}{y}w_{n}\left( y\right) \\
&=&\left( -1\right) ^{n}w_{n}\left( -y-1\right) .
\end{eqnarray*}
Multiplying both sides of the above equation by $\left( 1+y\right) ^{p}$, integrating with respect to $y$ from $-1$ to $0$ and using (\ref{17}) and (\ref{25}), we achieve
\begin{eqnarray*}
\frac{1}{p+1}\sum_{k=0}^{n}\binom{n}{k}B_{k,p} &=&\left( -1\right) ^{n}{\int\limits_{-1}^{0}}\left( {1+}y\right) ^{p}w_{n}\left( -y-1\right) dy \\
&=&\left( -1\right) ^{n+p}{\int\limits_{-1}^{0}}u^{p}w_{n}\left( u\right) du \\
&=&-\frac{p+1}{p+2}B_{n-1,p+1}.
\end{eqnarray*}
Finally, using (\ref{16}) gives the desired equation.
\end{proof}

\begin{proof}[Proof of Theorem \protect\ref{teo4}]
Multiplying both sides of (\ref{23}) by $\left( 1+y\right) ^{p}$, integrating with respect to $y$ from $-1$ to $0$ and using (\ref{17}), we have
\begin{eqnarray*}
\frac{1}{p+1}B_{n,p} &=&\sum_{k=1}^{n}\QATOPD\{ \} {n}{k}\left( -1\right) ^{n+k}k!\int\limits_{-1}^{0}y\left( y+1\right) ^{p+k-1}dy \\
&=&\sum_{k=1}^{n}\QATOPD\{ \} {n}{k}\left( -1\right) ^{n+k+1}k!\int\limits_{0}^{1}\left( 1-x\right) x^{p+k-1}dx.
\end{eqnarray*}
Finally, using the well-known relation for the Beta function,
\begin{equation}
B\left( x,y\right) =\int\limits_{0}^{1}\left( 1-t\right) ^{x-1}t^{y-1}dt=\frac{\left( x-1\right) !\left( y-1\right) !}{\left( x+y-1\right) !}, \label{34}
\end{equation}
where $x,y=1,2,3,\cdots $, we obtain (\ref{8}).
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{pro2}]
Using the equations (\ref{17}) and (\ref{18}) in (\ref{10}), we have
\begin{eqnarray*}
{\int\limits_{-1}^{0}}\left( {1+}y\right) ^{p}w_{n}\left( x;y\right) dy &=&\sum_{k=0}^{n}\binom{n}{k}x^{n-k}{\int\limits_{-1}^{0}}\left( {1+}y\right) ^{p}w_{k}\left( y\right) dy \\
&=&\sum_{k=0}^{n}\binom{n}{k}x^{n-k}\frac{B_{k,p}}{p+1} \\
&=&\frac{1}{p+1}B_{n,p}\left( x\right) .
\end{eqnarray*}
\end{proof}

\begin{proof}[Proof of Proposition \protect\ref{pro1}]
The two variable geometric polynomials satisfy \cite[Eq.
14]{Kargin1}
\[
w_{n}\left( x+1;y\right) -w_{n}\left( x;y\right) =\frac{1}{1+y}\left( w_{n}\left( x+1;y\right) -x^{n}\right) .
\]
Multiplying both sides of the above equation by $\left( 1+y\right) ^{p+1}$, integrating with respect to $y$ from $-1$ to $0$ and using (\ref{2}) yields (\ref{28}).
\end{proof}

\begin{proof}[Proof of Theorem \protect\ref{teo5}]
Replacing $x$ with $k$ in (\ref{28}) and summing over $k$ from $0$ to $m$, we obtain
\begin{eqnarray}
\frac{p+2}{p+1}\sum_{k=0}^{m}\left( B_{n,p}\left( k+1\right) -k^{n}\right) &=&\sum_{k=0}^{m}\left( B_{n,p+1}\left( k+1\right) -B_{n,p+1}\left( k\right) \right) \nonumber \\
&=&B_{n,p+1}\left( m+1\right) -B_{n,p+1}. \label{30}
\end{eqnarray}
If we use Bernoulli's well-known identity for the Faulhaber summation (\ref{3}),
\[
\sum_{k=0}^{m}k^{n}=\frac{B_{n+1}\left( m+1\right) -B_{n+1}}{n+1},
\]
in the left hand side of (\ref{30}), we arrive at the first part of the theorem.

For the second part of the theorem, if we use (\ref{16}) and (\ref{21}) for $p=0$, (\ref{4}) becomes
\begin{eqnarray*}
\sum_{k=0}^{m}B_{n}\left( k+1\right) &=&\left( m+1\right) B_{n}\left( m+1\right) -n\frac{B_{n+1}\left( m+1\right) -B_{n+1}}{n+1} \\
&=&\left( m+1\right) B_{n}\left( m+1\right) -n\sum_{k=0}^{m}k^{n}.
\end{eqnarray*}
Then, we have (\ref{27}).
\end{proof}

\begin{proof}[Proof of Theorem \protect\ref{teo1}]
Multiplying both sides of (\ref{15}) by $\left( 1+y\right) ^{p+1}$ and using (\ref{24}), we have
\begin{eqnarray*}
&&m^{n-1}\sum_{k=0}^{m-1}\left( 1+y\right) ^{p+1}w_{n-1}\left( x+\frac{k}{m};y\right) \\
&&\qquad \quad =\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) \frac{\left( 1+y\right) ^{p+1}}{y}w_{k}\left( y\right) \\
&&\qquad \quad =\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) \left( 1+y\right) ^{p}\left( -1\right) ^{k}w_{k}\left( -y-1\right) \\
&&\qquad \quad =mB_{n-1}\left( mx\right) \left( 1+y\right) ^{p+1}+\frac{1}{n}\sum_{k=2}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) \left( -1\right) ^{k}\left( 1+y\right) ^{p}w_{k}\left( -y-1\right) .
\end{eqnarray*}
Integrating the above equation with respect to $y$ from $-1$ to $0$ and using (\ref{2}) and (\ref{25}), we obtain
\begin{eqnarray*}
&&\frac{m^{n-1}}{p+2}\sum_{k=0}^{m-1}B_{n-1,p+1}\left( x+\frac{k}{m}\right) \\
&&\qquad \qquad =\frac{mB_{n-1}\left( mx\right) }{p+2}+\frac{1}{n}\sum_{k=2}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) \left( -1\right) ^{k}\int\limits_{-1}^{0}\left( 1+y\right) ^{p}w_{k}\left( -y-1\right) dy \\
&&\qquad \qquad =\frac{mB_{n-1}\left( mx\right) }{p+2}+\frac{1}{n}\sum_{k=2}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) \left( -1\right) ^{k+p}\int\limits_{-1}^{0}u^{p}w_{k}\left( u\right) du \\
&&\qquad \qquad =\frac{mB_{n-1}\left( mx\right) }{p+2}-\frac{p+1}{n\left( p+2\right) }\sum_{k=2}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) B_{k-1,p+1} \\
&&\qquad \qquad =mB_{n-1}\left( mx\right) -\frac{p+1}{n\left( p+2\right) }\sum_{k=1}^{n}\binom{n}{k}m^{k}B_{n-k}\left( mx\right) B_{k-1,p+1}.
\end{eqnarray*}
Replacing $p+1$ with $p$ and $n$ with $n+1$, the above equation can be rewritten as
\begin{eqnarray*}
m^{n-1}\sum_{k=0}^{m-1}B_{n,p}\left( x+\frac{k}{m}\right) &=&\left( p+1\right) B_{n}\left( mx\right) -\frac{p}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}m^{k-1}B_{n+1-k}\left( mx\right) B_{k-1,p} \\
&=&\left( p+1\right) B_{n}\left( mx\right) -\frac{p}{n+1}\sum_{k=0}^{n}\binom{n+1}{k+1}m^{k}B_{n-k}\left( mx\right) B_{k,p} \\
&=&\left( p+1\right) B_{n}\left( mx\right) -p\sum_{k=0}^{n}\binom{n}{k}\frac{m^{k}B_{n-k}\left( mx\right) B_{k,p}}{k+1}.
\end{eqnarray*}
\end{proof}

\begin{proof}[Proof of Theorem \protect\ref{teo2}]
We have the arithmetic-geometric progression \cite{DeBruyn, WangandHsu}
\begin{equation}
\sum_{k=0}^{n}k^{p}y^{k}=\frac{A_{p}\left( y\right) }{\left( 1-y\right) ^{p+1}}-y^{n+1}\sum_{k=0}^{p}\binom{p}{k}\left( n+1\right) ^{p-k}\frac{A_{k}\left( y\right) }{\left( 1-y\right) ^{k+1}}, \label{32}
\end{equation}
where $A_{n}\left( y\right) $ is the Eulerian polynomial of degree $n$ \cite{COMTET}. These polynomials are closely related to the geometric polynomials through the relation \cite[Eq. 3.18]{B}
\[
A_{n}\left( y\right) =\left( 1-y\right) ^{n}w_{n}\left( \frac{y}{1-y}\right) .
\]
Substituting $y\rightarrow y/\left( 1+y\right) $ in (\ref{32}), using this relation and multiplying both sides by $\left( 1+y\right) ^{n}$, (\ref{32}) can be rewritten as
\[
\sum_{k=0}^{n}k^{p}y^{k}\left( 1+y\right) ^{n-k}=\left( 1+y\right) ^{n+1}w_{p}\left( y\right) -y^{n+1}\sum_{k=0}^{p}\binom{p}{k}\left( n+1\right) ^{p-k}w_{k}\left( y\right) .
\]
Integrating both sides of the above equation with respect to $y$ from $-1$ to $0$, we have
\begin{eqnarray}
&&\sum_{k=0}^{n}k^{p}\int\limits_{-1}^{0}y^{k}\left( 1+y\right) ^{n-k}dy \nonumber \\
&&\quad =\int\limits_{-1}^{0}\left( 1+y\right) ^{n+1}w_{p}\left( y\right) dy-\left( n+1\right) ^{p}\int\limits_{-1}^{0}y^{n+1}dy-p\left( n+1\right) ^{p-1}\int\limits_{-1}^{0}y^{n+2}dy \nonumber \\
&&\qquad -\sum_{k=2}^{p}\binom{p}{k}\left( n+1\right) ^{p-k}\int\limits_{-1}^{0}y^{n+1}w_{k}\left( y\right) dy. \label{33}
\end{eqnarray}
Using (\ref{34}) on the left hand side of (\ref{33}) yields
\[
\sum_{k=0}^{n}k^{p}\int\limits_{-1}^{0}y^{k}\left( 1+y\right) ^{n-k}dy=\frac{1}{n+1}\sum_{k=0}^{n}\frac{\left( -1\right) ^{k}k^{p}}{\binom{n}{k}}.
\]
From (\ref{17}), the first integral on the right hand side of (\ref{33}) becomes
\[
\int\limits_{-1}^{0}\left( 1+y\right) ^{n+1}w_{p}\left( y\right) dy=\frac{1}{n+2}B_{p,n+1}.
\]
The second and third integrals on the right hand side can be evaluated easily. For the last integral on the right hand side of (\ref{33}), if we use (\ref{36}) and (\ref{25}), we have
\begin{eqnarray*}
&&\sum_{k=2}^{p}\binom{p}{k}\left( n+1\right) ^{p-k}\int\limits_{-1}^{0}y^{n+1}w_{k}\left( y\right) dy \\
&&\qquad \qquad =\frac{\left( -1\right) ^{n+p}\left( n+1\right) ^{p}}{n+2}+\frac{\left( -1\right) ^{n+1}p\left( n+1\right) ^{p-1}}{n+3}-\frac{\left( -1\right) ^{n+p}B_{p,n+1}\left( -n\right) }{n+2}.
\end{eqnarray*}
Finally, combining all these evaluated integrals gives the desired equation.
\end{proof}
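Since several steps of the last proof are sign-sensitive, it is reassuring to verify Theorem \ref{teo2} directly for small cases; the following short sketch is based on the recurrence (\ref{16}) and the convolution (\ref{18}).
\begin{verbatim}
from sympy import binomial, Rational

def B(n, p):  # p-Bernoulli numbers via (16)
    if n == 0:
        return Rational(1)
    return p * B(n - 1, p) - Rational((p + 1)**2, p + 2) * B(n - 1, p + 1)

def Bpoly(m, q, t):  # p-Bernoulli polynomials B_{m,q}(t) via (18)
    return sum(binomial(m, k) * Rational(t)**(m - k) * B(k, q) for k in range(m + 1))

for n in range(1, 7):
    for p in range(5):
        lhs = sum(Rational((-1)**k * k**p, binomial(n, k)) for k in range(n + 1))
        rhs = Rational(n + 1, n + 2) * ((-1)**(n + p) * Bpoly(p, n + 1, -n) + B(p, n + 1))
        assert lhs == rhs
print("Theorem teo2 verified for n < 7, p < 5")
\end{verbatim}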
\section{Introduction}

Since the establishment of the special theory of relativity as a true description of nature, Lorentz symmetry has been taken as a key ingredient of theoretical physics. A motivation for studies involving the violation of Lorentz symmetry is the demonstration that string theories may support spontaneous violation of this symmetry \cite{Samuel}, with important and interesting connections with physics at the Planck energy scale. The Standard Model Extension (SME) \cite{Colladay} has arisen as a theoretical framework for addressing Lorentz violation (LV) in a broader context than the usual Standard Model, in an attempt to scrutinize remnant effects of this violation in several low-energy systems. In this way, the SME incorporates Lorentz-violating coefficients into all sectors of the standard model and into general relativity, representing a suitable tool for investigating Lorentz violation in several distinct respects.

The violation of Lorentz symmetry in the gauge sector of the SME is governed by a CPT-odd and a CPT-even tensor, yielding some unconventional phenomena such as vacuum birefringence and Cherenkov radiation. The LV coefficients are usually classified in accordance with parity and birefringence. The CPT-odd term is represented by the Carroll-Field-Jackiw (CFJ) background \cite{Jackiw}, which is also parity-odd and birefringent. This electrodynamics has been much investigated, encompassing aspects as diverse as: consistency aspects and modifications induced in QED \cite{Adam,Soldati,Higgs}, supersymmetry \cite{Susy}, vacuum Cherenkov radiation emission \cite{Cerenkov1}, finite-temperature contributions and the Planck distribution \cite{winder1, Tfinite}, electromagnetic propagation in waveguides \cite{winder2}, and the Casimir effect \cite{Casimir}. The CPT-even gauge sector of the SME is represented by the CPT-even tensor $\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}},$ composed of 19 independent coefficients, with nine nonbirefringent and ten birefringent ones. This sector has been studied since 2002 \cite{KM1,KM2}, \cite{KM3}, \cite{Kostelec}, being represented by the following Lagrangian:
\begin{equation}
\mathit{{\mathcal{L}}}_{(1+3)}=-\frac{1}{4}F_{\hat{\mu}\hat{\nu}}F^{\hat{\mu}\hat{\nu}}-\frac{1}{4}\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}F^{\hat{\mu}\hat{\nu}}F^{\hat{\lambda}\hat{\kappa}}-J_{\hat{\mu}}A^{\hat{\mu}},\label{L1}
\end{equation}
where the indices with hat, $\hat{\mu},\hat{\nu},$ run from 0 to 3, $A^{\hat{\mu}}$ is the four-potential and $F_{\hat{\mu}\hat{\nu}}$ is the usual electromagnetic field tensor.
The tensor $\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}$ stands for the Lorentz-violating coupling and possesses the symmetries of the Riemann tensor,
\[
\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}=-\left( k_{F}\right) _{\hat{\nu}\hat{\mu}\hat{\lambda}\hat{\kappa}},\text{ }\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}=-\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\kappa}\hat{\lambda}},\text{ }\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}=\left( k_{F}\right) _{\hat{\lambda}\hat{\kappa}\hat{\mu}\hat{\nu}},\text{ }\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}+\left( k_{F}\right) _{\hat{\mu}\hat{\lambda}\hat{\kappa}\hat{\nu}}+\left( k_{F}\right) _{\hat{\mu}\hat{\kappa}\hat{\nu}\hat{\lambda}}=0,
\]
and a double null trace, $\left( k_{F}\right) ^{\hat{\mu}\hat{\nu}}{}_{\hat{\mu}\hat{\nu}}=0.$ A very useful parametrization for addressing this electrodynamics is the one presented in Refs. \cite{KM1,KM2}, in which the nineteen LV components are enclosed in four $3\times3$ matrices, defined as
\begin{align}
\left( \kappa_{DE}\right) ^{j\kappa} & =-2\left( k_{F}\right) ^{0j0\kappa},\text{ }\left( \kappa_{HB}\right) ^{j\kappa}\;=\;\frac{1}{2}\epsilon^{jpq}\epsilon^{\kappa lm}\left( k_{F}\right) ^{pqlm},\label{P1}\\[0.3cm]
\left( \kappa_{DB}\right) ^{j\kappa} & =-\left( \kappa_{HE}\right) ^{\kappa j}\;=\;\epsilon^{\kappa pq}\left( k_{F}\right) ^{0jpq}.\label{P2}
\end{align}
The matrices $\kappa_{DE},\kappa_{HB}$ contain together 11 independent components, while $\kappa_{DB},\kappa_{HE}$ possess together 8 components, which sums up to the 19 independent elements of the tensor $\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}$. The ten birefringent components are severely constrained by astrophysical tests involving high-quality cosmological spectropolarimetry data, which have yielded stringent upper bounds at the level of 1 part in $10^{32}$ \cite{KM1,KM2} and 1 part in $10^{37}$ \cite{KM3}. The nonbirefringent components are embraced by the matrices $\widetilde{\kappa}_{e-}$ (six elements) and $\widetilde{\kappa}_{o+}$ (three elements), and can be constrained by means of laboratory tests \cite{Cherenkov2} and by the absence of emission of Cherenkov radiation by UHECRs (ultrahigh energy cosmic rays) \cite{Klink2,Klink3}. These coefficients also undergo restrictions at the order of 1 part in $10^{17}$ considering their sub-leading birefringent role \cite{Exirifard}. This CPT-even sector has also been recently investigated in connection with consistency aspects in Refs. \cite{Klink4,Fred}.

Planar theories have been investigated since the beginning of the 80's \cite{Deser}, and have gained much attention due to their connection with Chern-Simons theories \cite{Dunne}, planar superconductivity, anyons and the quantum Hall effect \cite{Ezawa}, and planar vortex configurations \cite{Vortex}. The great importance of these topics has led to a great development of planar theories \cite{Avinash}. A CPT-even field theory in (1+2) dimensions with Lorentz violation was recently attained by means of the dimensional reduction of the CPT-even gauge sector of the Standard Model Extension \cite{DreducCPT}.
The resulting planar electrodynamics is composed of gauge and scalar sectors, both endowed with Lorentz violation, whose planar Lagrangian is
\begin{equation}
\mathit{{\mathcal{L}}}_{(1+2)}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}Z_{\mu\nu\lambda\kappa}F^{\mu\nu}F^{\lambda\kappa}+\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-C_{\mu\lambda}{\partial^{\mu}\phi\partial^{\lambda}\phi}+T_{\mu\lambda\kappa}{\partial^{\mu}\phi}F^{\lambda\kappa},\label{LP1}
\end{equation}
where $Z_{\mu\nu\lambda\kappa},C_{\mu\lambda},T_{\mu\lambda\kappa}$ are LIV tensors which together have 19 components and present the following symmetries:
\begin{equation}
Z_{\mu\nu\lambda\kappa}=\;Z_{\lambda\kappa\mu\nu},\text{ }Z_{\mu\nu\lambda\kappa}=-Z_{\nu\mu\lambda\kappa},\text{ \ }Z_{\mu\nu\lambda\kappa}=-Z_{\mu\nu\kappa\lambda},
\end{equation}
\begin{align}
Z_{\mu\nu\lambda\kappa}+Z_{\mu\lambda\kappa\nu}+Z_{\mu\kappa\nu\lambda} & =0,\label{Perm}\\[0.3cm]
T_{\mu\lambda\kappa}+T_{\lambda\kappa\mu}+T_{\kappa\mu\lambda} & =0,
\end{align}
\begin{equation}
C_{\mu\lambda}=C_{\lambda\mu},\text{ }T_{\mu\lambda\kappa}=-T_{\mu\kappa\lambda}.\label{sym1}
\end{equation}
Some aspects of this model, involving wave equations and dispersion relations, were addressed in Ref. \cite{DreducCPT}, where it was shown that the pure abelian gauge (electromagnetic) sector presents nonbirefringence at any order. The birefringence in that model is associated with the elements of the coupling tensor $T_{\mu\lambda\kappa}.$

In the present work, we accomplish the dimensional reduction of the nonbirefringent gauge sector of the SME, represented by 9 components which can be incorporated in a symmetric and traceless tensor $\kappa_{\hat{\nu}\hat{\rho}},$ defined as the contraction \cite{Altschul}: $\kappa_{\hat{\nu}\hat{\rho}}=\left( k_{F}\right) ^{\hat{\alpha}}{}_{\hat{\nu}\hat{\alpha}\hat{\rho}}.$ The nonbirefringent components of the tensor $\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\kappa}}$ are parametrized as
\begin{equation}
\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\rho}}=\frac{1}{2}\left( g_{\hat{\mu}\hat{\lambda}}\kappa_{\hat{\nu}\hat{\rho}}-g_{\hat{\mu}\hat{\rho}}\kappa_{\hat{\nu}\hat{\lambda}}-g_{\hat{\nu}\hat{\lambda}}\kappa_{\hat{\mu}\hat{\rho}}+g_{\hat{\nu}\hat{\rho}}\kappa_{\hat{\mu}\hat{\lambda}}\right) ,
\end{equation}
which implies
\begin{equation}
\left( k_{F}\right) _{\hat{\mu}\hat{\nu}\hat{\lambda}\hat{\rho}}F^{\hat{\mu}\hat{\nu}}F^{\hat{\lambda}\hat{\rho}}=2\kappa_{\hat{\nu}\hat{\rho}}F_{\hat{\lambda}}{}^{\hat{\nu}}F^{\hat{\lambda}\hat{\rho}},
\end{equation}
so that the Lagrangian (\ref{L1}) takes on the form
\begin{equation}
\mathit{{\mathcal{L}}}_{(1+3)}=-\frac{1}{4}F_{\hat{\mu}\hat{\nu}}F^{\hat{\mu}\hat{\nu}}-\frac{1}{2}\kappa_{\hat{\nu}\hat{\rho}}F_{\hat{\lambda}}{}^{\hat{\nu}}F^{\hat{\lambda}\hat{\rho}}-J_{\hat{\mu}}A^{\hat{\mu}}.\label{L2}
\end{equation}
Some properties of this nonbirefringent electrodynamics were investigated in Ref. \cite{Fred}, in which the corresponding Feynman gauge propagator was evaluated and some of its consistency properties (causality and unitarity) were analyzed. In the present work, we perform the dimensional reduction of Lagrangian (\ref{L2}), which produces a nonbirefringent planar theory composed of 9 LIV parameters instead of the 19 ones attained in Ref. \cite{DreducCPT}.
In this simpler framework, Lorentz violation is controlled only by a rank-2 tensor, which modifies the kinetic part of the scalar and gauge sectors, and by a rank-1 tensor, which couples both sectors. The density of energy is evaluated, revealing that the model presents a positive-definite energy for small values of the Lorentz-violating parameters. We work out the complete dispersion relations of this planar model from the vacuum-vacuum amplitude, showing that the whole theory is nonbirefringent. Such a planar model provides a more direct way to analyze consistency aspects associated with the Feynman propagator and the effects of the LIV parameters on some planar systems of interest.

This work is organized as follows. In Sec. II, we accomplish the dimensional reduction of Lagrangian (\ref{L2}), obtaining a planar scalar electrodynamics in which the Lorentz violation is controlled by the symmetric tensor $\kappa_{\mu\rho},$ the (1+2)-dimensional counterpart of the original tensor $\kappa_{\hat{\mu}\hat{\rho}},$ and by a three-vector denoted as $S_{\nu}$. The energy-momentum tensor is computed and the density of energy is analyzed. Sec. III is devoted to the analysis of the dispersion relations in two situations: considering the complete model, and regarding the gauge and scalar sectors as decoupled. In Sec. IV, we write the corresponding equations of motion and wave equations for the model. The wave equations for the gauge and scalar sectors are solved in the stationary regime at first order in the LIV parameters. In Sec. V, we present our Conclusions.

\section{The dimensional reduction procedure}

In this section, we perform the dimensional reduction of the model represented by Lagrangian (\ref{L2}). There are some distinct procedures for accomplishing the dimensional reduction of a theory. In the present case, we adopt the one that freezes the third spatial component of the position four-vector and of any other four-vector. This is done by requiring that the physical fields $\left\{ \chi\right\} $ no longer depend on this coordinate, that is, $\partial_{3}\chi=0$. Besides this, we split out the fourth component of the four-vectors. This procedure is employed in Ref. \cite{DreducCPT}. The electromagnetic four-potential is written as
\begin{equation}
A^{\hat{\nu}}\longrightarrow(A^{\nu};\phi),
\end{equation}
where $A^{\left( 3\right) }=\phi$ is now a scalar field and the Greek indices (without hat) run from $0$ to $2$, $\mu=0,1,2$. Carrying out this prescription for the terms of Lagrangian (\ref{L2}), one then obtains:
\begin{align}
F_{\hat{\mu}\hat{\nu}}F^{\hat{\mu}\hat{\nu}} & =F_{\mu\nu}F^{\mu\nu}-2\partial_{\mu}\phi\partial^{\mu}\phi,\label{F1}\\[0.3cm]
\kappa_{\hat{\nu}\hat{\rho}}F_{\hat{\lambda}}{}^{\hat{\nu}}F^{\hat{\lambda}\hat{\rho}} & =\kappa_{\nu\rho}F_{\lambda}{}^{\nu}F^{\lambda\rho}-2S_{\nu}F^{\nu\lambda}\partial_{\lambda}\phi+\eta\partial_{\lambda}\phi\partial^{\lambda}\phi-\kappa_{\nu\rho}\partial^{\nu}\phi\partial^{\rho}\phi,
\end{align}
where we have defined $F^{\mu3}=\partial^{\mu}\phi,$ $F_{\mu3}=-\partial_{\mu}\phi$. Also, we have renamed the set of LIV parameters; they are now represented by a second-rank tensor $\kappa_{\nu\rho},$ which is the (1+2)-dimensional counterpart of the tensor $\kappa_{\hat{\nu}\hat{\rho}}$, a vector $S_{\nu}$ and a scalar $\eta,$ defined as
\begin{equation}
S_{\nu}=\kappa_{\nu3},\text{ }\eta=\kappa_{33},
\end{equation}
respectively.
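This bookkeeping is straightforward to check symbolically. The following Python/SymPy sketch (an illustration, not part of the original derivation; the symbol names are arbitrary) verifies that the nonbirefringent ansatz for $\left( k_{F}\right) $ introduced above possesses the Riemann-like symmetries and the double null trace, contracts back to $\kappa_{\hat{\nu}\hat{\rho}}$, and realizes the split into $\kappa_{\nu\rho}$, $S_{\nu}$ and $\eta$ (anticipating the trace constraint derived below).
\begin{verbatim}
import sympy as sp
from itertools import product

g = sp.diag(1, -1, -1, -1)        # flat metric; its inverse equals g here
# generic symmetric kappa_{mu nu}; tracelessness fixes kappa_{33}
k = sp.Matrix(4, 4, lambda i, j: sp.Symbol('k%d%d' % (min(i, j), max(i, j))))
k[3, 3] = k[0, 0] - k[1, 1] - k[2, 2]

def kF(m, n, l, r):               # nonbirefringent parametrization of (k_F)
    return sp.Rational(1, 2) * (g[m, l] * k[n, r] - g[m, r] * k[n, l]
                                - g[n, l] * k[m, r] + g[n, r] * k[m, l])

idx = range(4)
for m, n, l, r in product(idx, repeat=4):
    assert sp.expand(kF(m, n, l, r) + kF(n, m, l, r)) == 0      # antisymmetry
    assert sp.expand(kF(m, n, l, r) - kF(l, r, m, n)) == 0      # pair symmetry
    assert sp.expand(kF(m, n, l, r) + kF(m, l, r, n) + kF(m, r, n, l)) == 0
for n, r in product(idx, repeat=2):  # contraction returns kappa_{nu rho}
    assert sp.expand(sum(g[a, a] * kF(a, n, a, r) for a in idx) - k[n, r]) == 0
assert sp.expand(sum(g[a, a] * g[b, b] * kF(a, b, a, b)
                     for a in idx for b in idx)) == 0           # double trace
# planar split: kappa_{mu rho} (6 components), S_nu = kappa_{nu 3} (3), eta
S = [k[nu, 3] for nu in range(3)]
eta = k[3, 3]
assert sp.expand(eta - (k[0, 0] - k[1, 1] - k[2, 2])) == 0
\end{verbatim}
Counting the six components of $\kappa_{\nu\rho}$, the three of $S_{\nu}$ and $\eta$, minus the trace constraint, reproduces the nine independent parameters quoted above.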
Thus, after the dimensional reduction procedure, we attain the following Lagrangian density:
\begin{equation}
\mathcal{L}_{1+2}=\underset{\mathcal{L}_{EM}}{\underbrace{-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\kappa_{\nu\rho}F_{\lambda}{}^{\nu}F^{\lambda\rho}}}+\underset{\mathcal{L}_{scalar}}{\underbrace{\frac{1}{2}[1-\eta]\partial_{\mu}\phi\partial^{\mu}\phi+\frac{1}{2}\kappa_{\nu\rho}\partial^{\nu}\phi\partial^{\rho}\phi}}+\underset{\mathcal{L}_{coupling}}{\underbrace{S_{\nu}F^{\nu\lambda}\partial_{\lambda}\phi}}-A_{\mu}J^{\mu}-J\phi,\label{LR}
\end{equation}
which is composed of a gauge sector $\left( \mathcal{L}_{EM}\right) ,$ a scalar sector $\left( \mathcal{L}_{scalar}\right) ,$ and a coupling sector $\left( \mathcal{L}_{coupling}\right) $ ruled by the Lorentz-violating vector $S_{\nu},$ which contains three LIV parameters. The Lorentz-violating symmetric tensor $\kappa_{\nu\rho}$ presents six independent coefficients, which modify both the electromagnetic and scalar sectors, altering the dynamics of the Maxwell field and yielding a noncanonical kinetic term for the scalar field. LIV noncanonical kinetic terms of the type present in the scalar sector have recently been investigated in scenarios involving topological defects in (1+1) dimensions \cite{Defects} and acoustic black holes with Lorentz violation \cite{Brito} in (1+2) dimensions. A similar term is also found in the Lagrangian of Ref. \cite{DreducCPT}. The present work provides a possible origin for this kind of term.

Our planar model (\ref{LR}) has ten dimensionless Lorentz-violating parameters contained in the tensors $\kappa_{\nu\rho}$, $S_{\nu}$ and in the scalar $\eta$. The traceless condition on the original tensor, $\kappa^{\hat{\rho}}{}_{\hat{\rho}}=0$, gives one constraint between the $\kappa_{\nu\rho}$-components,
\begin{equation}
\kappa_{00}-\kappa_{ii}=\eta,\label{trace}
\end{equation}
so the model possesses nine independent Lorentz-violating parameters, the same number as the original four-dimensional theory. This demonstrates the consistency of the dimensional reduction procedure.

Defining the components of the electric field as $E^{i}=F_{0i}$ and the magnetic field as $B=-\frac{1}{2}\epsilon_{ij}F_{ij}$, with $\epsilon_{012}=\epsilon_{12}=1$, the Lagrangian (\ref{LR}) can be written in terms of the electric and magnetic fields in the form
\begin{equation}
\mathcal{L}_{1+2}=\mathcal{L}_{EM}+\mathcal{L}_{scalar}+\mathcal{L}_{coupling},
\end{equation}
where
\begin{align}
\mathcal{L}_{EM} & =\frac{1}{2}(1+\kappa_{00})\mathbf{E}^{2}-\frac{1}{2}(1-\kappa_{ii})B^{2}-\frac{1}{2}\kappa_{ij}E^{i}E^{j}+\kappa_{0i}\epsilon_{ij}E^{j}B,\\[0.3cm]
\mathcal{L}_{scalar} & =\frac{1}{2}(1-\eta)[\left( \partial_{t}\phi\right) ^{2}-\left( \partial_{i}\phi\right) ^{2}]+\frac{1}{2}\kappa_{00}\left( \partial_{t}\phi\right) ^{2}-\kappa_{0i}\partial_{t}\phi\partial_{i}\phi+\frac{1}{2}\kappa_{ij}\partial_{i}\phi\partial_{j}\phi,\\[0.3cm]
\mathcal{L}_{coupling} & =-S^{0}E^{j}\partial_{j}\phi-S^{i}E^{i}\partial_{t}\phi+\epsilon_{ij}S^{i}\partial_{j}\phi B.\label{cccg}
\end{align}
The above decomposition allows one to determine the parity properties of the LIV coefficients. In (1+2) dimensions, the parity operator acts as $\mathbf{r}\rightarrow(-x,y)$; it changes the fields as $A_{0}\rightarrow A_{0},$ $\mathbf{A}\rightarrow(-A_{x},A_{y})$, $\mathbf{E}\rightarrow(-E_{x},E_{y}),$ $B\rightarrow-B$. For more details, see Ref. \cite{Deser}.
Here, we consider that the field $\phi$ behaves as a scalar, $\phi\rightarrow\phi$. Since the Lagrangian density is parity-even, we can conclude that the planar model possesses nine independent coefficients, of which six are parity-even, $\left( \kappa_{00},\kappa_{02},\kappa_{11},\kappa_{22},S_{0},S_{2}\right) $, and three are parity-odd, $\left( \kappa_{01},\kappa_{12},S_{1}\right) $. The fact that the components of the vector $S^{\mu}$ transform distinctly is a consequence of the way the fields $\mathbf{E},$ $B$ and $\phi$ behave under parity.

An issue that deserves some attention is energy stability, since it is known that Lorentz violation yields energy instability in some models, as for example the Carroll-Field-Jackiw electrodynamics \cite{Jackiw}. A preliminary analysis concerning this point can be performed by means of the energy-momentum tensor of the full planar theory,
\begin{equation}
\Theta^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial\left( \partial_{\mu}A_{\rho}\right) }\partial^{\nu}A_{\rho}+\frac{\partial\mathcal{L}}{\partial\left( \partial_{\mu}\phi\right) }\partial^{\nu}\phi-g^{\mu\nu}\mathcal{L},
\end{equation}
which is carried out as
\begin{align}
\Theta^{\mu\nu} & =-F^{\mu\rho}F^{\nu}{}_{\rho}-\kappa^{\rho\beta}F^{\mu}{}_{\beta}F^{\nu}{}_{\rho}+\kappa^{\mu\beta}F^{\rho}{}_{\beta}F^{\nu}{}_{\rho}+S^{\mu}F^{\nu\rho}{}\partial_{\rho}\phi\nonumber\\[-0.3cm]
& \\
& +S_{\rho}F^{\rho\nu}{}\partial^{\mu}\phi+S_{\beta}F^{\beta\mu}\partial^{\nu}\phi+\left( 1-\eta\right) \partial^{\mu}\phi\partial^{\nu}\phi+\kappa^{\mu\beta}\partial_{\beta}\phi\partial^{\nu}\phi-g^{\mu\nu}\mathcal{L}.\nonumber
\end{align}
We now specialize our evaluation to the density of energy,
\begin{equation}
\Theta^{00}=\frac{1}{2}M_{jk}E_{j}E_{k}+\frac{1}{2}\left( 1-\kappa_{jj}\right) B^{2}+B\epsilon_{jk}S_{j}\partial_{k}\phi-S_{j}E_{j}\partial_{0}\phi+\frac{1}{2}\left( 1+\kappa_{jj}\right) \left( \partial_{0}\phi\right) ^{2}+\frac{1}{2}N_{jk}\partial_{j}\phi\partial_{k}\phi,\label{T00}
\end{equation}
where we have defined the symmetric matrices
\begin{equation}
M_{jk}=\left( 1+\kappa_{00}\right) \delta_{jk}-\kappa_{jk},~\ \ N_{jk}=\left( 1-\kappa_{00}+\kappa_{ii}\right) \delta_{jk}-\kappa_{jk},
\end{equation}
and used $\eta=\kappa_{00}-\kappa_{jj}$. We see that the energy densities for the electromagnetic and scalar fields, when regarded as isolated, are
\begin{align}
\Theta_{EM}^{00} & =\frac{1}{2}M_{jk}E_{j}E_{k}+\frac{1}{2}\left( 1-\kappa_{jj}\right) B^{2},~\label{T00G}\\[0.3cm]
\Theta_{scalar}^{00} & =\frac{1}{2}\left( 1+\kappa_{jj}\right) \left( \partial_{0}\phi\right) ^{2}+\frac{1}{2}N_{jk}\partial_{j}\phi\partial_{k}\phi.\label{T00S}
\end{align}
Both the gauge and scalar energy densities will be positive-definite if $\left\vert \kappa_{jj}\right\vert <1$ and the matrices $M_{jk}$ and $N_{jk}$ are positive-definite. As the LV parameters are usually much smaller than unity, we conclude that the scalar and gauge sectors, regarded separately, are stable. However, the energy positivity of the full model seems to be spoiled by the mixing terms, $S_{j}E_{j}\partial_{0}\phi$ and $B\epsilon_{jk}S_{j}\partial_{k}\phi$.
In order to achieve more clarity, we write Eq. (\ref{T00}) in the following way:
\begin{align}
\Theta^{00} & =\frac{1}{2}\left[E_{j}-\left(M^{-1}\right)_{ja}S_{a}\partial_{0}\phi\right]M_{jk}\left[E_{k}-\left(M^{-1}\right)_{ka}S_{a}\partial_{0}\phi\right]+\frac{1}{2}\left(1-\kappa_{ii}\right)\left[B+\frac{\epsilon_{jk}S_{j}\partial_{k}\phi}{1-\kappa_{ii}}\right]^{2}\nonumber\\
& +\frac{1}{2}\left[1+\kappa_{jj}-\left(M^{-1}\right)_{ij}S_{i}S_{j}\right]\left(\partial_{0}\phi\right)^{2}+\frac{1}{2}\left[N_{jk}-\frac{\left(S_{a}\right)^{2}\delta_{jk}-S_{j}S_{k}}{1-\kappa_{ii}}\right]\partial_{j}\phi\partial_{k}\phi.
\end{align}
It shows that the energy density is positive-definite whenever the LV parameters are sufficiently small.

\section{Dispersion relations}

In this section, we compute the dispersion relations of the model described by the Lagrangian density (\ref{LR}). Our approach follows an alternative route, evaluating the vacuum-vacuum amplitude (VVA) of the model. After the Hamiltonian analysis, the well-defined vacuum-vacuum amplitude for the model, in the generalized Lorentz gauge, can be written as
\begin{equation}
Z=\det\left(\xi^{-1/2}\square\right)\int\mathcal{D}A_{\mu}\mathcal{D}\phi\exp\left\{i\int dx\left[\frac{1}{2}A_{\mu}D^{\mu\nu}A_{\nu}-\frac{1}{2}\phi\boxdot\phi+\phi\,\mathbb{J}^{\mu}A_{\mu}\right]\right\},\label{zz-1}
\end{equation}
where $\xi$ is the gauge-fixing parameter and we have defined the following operators:
\begin{equation}
D^{\mu\nu}=\left(\square+\kappa^{\rho\sigma}\partial_{\rho}\partial_{\sigma}\right)g^{\mu\nu}+\left(\xi^{-1}-1\right)\partial^{\mu}\partial^{\nu}+\kappa^{\mu\nu}\square-\kappa^{\mu\rho}\partial_{\rho}\partial^{\nu}-\kappa^{\nu\rho}\partial_{\rho}\partial^{\mu},\label{DDg}
\end{equation}
\begin{equation}
\boxdot=\left(1-\eta\right)\square+\kappa^{\mu\nu}\partial_{\mu}\partial_{\nu}~,~\ \mathbb{J}^{\mu}=S^{\mu}\square-S^{\nu}\partial_{\nu}\partial^{\mu}.\label{FFs}
\end{equation}
With the purpose of understanding the dispersion relations of the full model, we first analyze the dispersion relations of the gauge and scalar sectors when considered uncoupled.

\subsection{Uncoupled dispersion relations}

For $S^{\mu}=0$ the vacuum-vacuum amplitude (\ref{zz-1}) factorizes as $Z=Z_{A_{\mu}}Z_{\phi}$, where $Z_{A_{\mu}}$ and $Z_{\phi}$ are the vacuum-vacuum amplitudes for the pure gauge and pure scalar fields, respectively.

\subsubsection{Dispersion relation for the pure gauge field}

The vacuum-vacuum amplitude for the pure gauge field is
\begin{equation}
Z_{A_{\mu}}=\det\left(\xi^{-1/2}\square\right)\int\mathcal{D}A_{\mu}\exp\left\{i\int dx~\frac{1}{2}A_{\mu}D^{\mu\nu}A_{\nu}\right\}=\det\left(\xi^{-1/2}\square\right)\left(\det D^{\mu\nu}\right)^{-1/2},
\end{equation}
with the operator $D^{\mu\nu}$ defined by (\ref{DDg}).
By computing the functional determinant,
\begin{equation}
\det D^{\mu\nu}=\det\left(\xi^{-1}\square^{2}\right)\det\left(\boxminus\right),
\end{equation}
the VVA for the pure gauge field becomes
\begin{equation}
Z_{A_{\mu}}=\det\left(\boxminus\right)^{-1/2},
\end{equation}
where the operator $\boxminus$ in momentum space reads
\begin{equation}
\tilde{\boxminus}=\alpha p_{0}^{2}+\beta p_{0}+\gamma,
\end{equation}
with the coefficients defined as
\begin{align}
\alpha & =(1+\kappa_{00})(1+\kappa_{00}-\text{tr}\,\mathbb{K})+\det\mathbb{K},~\ \ \mathbb{K}=\left[\kappa_{ij}\right],\\[0.3cm]
\beta & =-2\kappa_{0i}Q_{ij}p_{j},\ \ Q_{ij}=(1+\kappa_{00})\delta_{ij}-\kappa_{ij},\\[0.3cm]
\gamma & =(1-\text{tr}\,\mathbb{K})[\kappa_{ij}p_{i}p_{j}-(1+\kappa_{00})\mathbf{p}^{2}]-(\epsilon_{ij}p_{i}\kappa_{0j})^{2}.
\end{align}
The dispersion relations for the pure gauge field are obtained from the condition $\tilde{\boxminus}=0$, which yields
\begin{equation}
p_{0}=\frac{\kappa_{0i}Q_{ij}p_{j}}{\alpha}\pm\frac{\sqrt{(\kappa_{0i}Q_{ij}p_{j})^{2}-\alpha(1-\text{tr}\,\mathbb{K})[\kappa_{ij}p_{i}p_{j}-(1+\kappa_{00})\mathbf{p}^{2}]+\alpha(\epsilon_{ij}p_{i}\kappa_{0j})^{2}}}{\alpha}.\label{DR2}
\end{equation}
It is easy to show that this relation implies nonbirefringence at any order in the LIV parameters, since it yields the same phase velocity for the left and right modes traveling in the same direction. For similar situations, see Ref. \cite{DreducCPT}. At first order, it is given by
\begin{equation}
p_{0}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\left(1-\frac{1}{2}\kappa_{00}-\frac{\kappa_{ij}p_{i}p_{j}}{2\mathbf{p}^{2}}\right).\label{FO1}
\end{equation}
The gauge dispersion relation (\ref{DR2}) can be specialized to some particular cases. For $\kappa_{ij}=0$, $\kappa_{0j}=0$, the Lorentz-violating coefficients are represented by the parity-even element $\kappa_{00}$, and Eq. (\ref{DR2}) yields the relation
\begin{equation}
p_{0}=\pm\frac{\left\vert\mathbf{p}\right\vert}{(1+\kappa_{00})^{1/2}},\label{DR2A}
\end{equation}
which is the isotropic parity-even dispersion relation. Adopting $\kappa_{00}=0$, $\kappa_{0j}=0$, we achieve the anisotropic dispersion relation
\begin{equation}
p_{0}=\pm N_{0}\left\vert\mathbf{p}\right\vert\sqrt{1-\kappa_{ij}p_{i}p_{j}/\mathbf{p}^{2}},\label{DR2B}
\end{equation}
where $N_{0}=\sqrt{(1-\text{tr}\,\mathbb{K})/(1-\text{tr}\,\mathbb{K}+\det\mathbb{K})}$. This relation involves parity-even and parity-odd coefficients. For $\kappa_{ij}=0$, $\kappa_{00}=0$, we attain another anisotropic dispersion relation,
\begin{equation}
p_{0}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\sqrt{1+\left(\kappa_{0i}\right)^{2}}.\label{DR2C}
\end{equation}
The energy-momentum tensor of the pure gauge field shows that the electromagnetic sector represents a stable theory. The relations (\ref{DR2A})--(\ref{DR2C}), however, could anticipate a noncausal electrodynamics for some values of the LIV coefficients. The spoiling of causality may be inferred from the evaluation of the group velocity $\left(u_{g}=dp_{0}/d\left\vert\mathbf{p}\right\vert\right)$ associated with each dispersion relation.
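As a quick numerical illustration of this causality check (a sketch of our own, with arbitrary sample values for the LIV coefficients; it is not part of the derivation), the group velocities of the special cases (\ref{DR2A}) and (\ref{DR2C}) can be evaluated directly:
\begin{verbatim}
# Group velocity u_g = dp0/d|p| for the isotropic relation and for the
# parity-odd case, evaluated for sample LIV values (sketch only).
import numpy as np

def ug_isotropic(k00):
    # p0 = |p|/sqrt(1+k00): u_g is independent of |p|
    return 1.0 / np.sqrt(1.0 + k00)

def ug_parity_odd(k0, phat):
    # p0 = k0.p + |p|*sqrt(1 + k0.k0), along the fixed direction phat
    return np.dot(k0, phat) + np.sqrt(1.0 + np.dot(k0, k0))

for k00 in (0.01, -0.01):
    print("kappa_00 = %+.2f -> u_g = %.5f" % (k00, ug_isotropic(k00)))
print("kappa_0i case -> u_g = %.5f"
      % ug_parity_odd(np.array([0.01, 0.0]), np.array([1.0, 0.0])))
# u_g > 1 occurs for kappa_00 < 0 and for the kappa_0i case along
# phat, signaling the possible loss of causality mentioned above.
\end{verbatim}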
\subsubsection{Dispersion relation of the pure scalar sector}

In the same way, the vacuum-vacuum amplitude for the uncoupled scalar field is
\begin{equation}
Z_{\phi}=\int\mathcal{D}\phi\exp\left\{-\frac{i}{2}\int dx\,\phi\boxdot\phi\right\}=\left(\det\boxdot\right)^{-1/2},
\end{equation}
with the operator $\boxdot$ defined in Eq. (\ref{FFs}). In momentum space it reads
\begin{equation}
\tilde{\boxdot}=\left(1-\eta\right)p^{2}+\kappa^{\rho\sigma}p_{\rho}p_{\sigma}.
\end{equation}
The dispersion relations of the scalar field are computed from the condition $\tilde{\boxdot}=0$, taking into account the relation (\ref{trace}), which provides the following equation for $p_{0}$:
\begin{equation}
\left(1+\text{tr}\,\mathbb{K}\right)p_{0}^{2}-2\left(\kappa_{0i}p_{i}\right)p_{0}-\left(1-\kappa_{00}+\text{tr}\,\mathbb{K}\right)\mathbf{p}^{2}+\kappa_{ij}p_{i}p_{j}=0,
\end{equation}
whose roots are
\begin{equation}
p_{0}^{\left(\pm\right)}=\lambda\left[\kappa_{0i}p_{i}\pm\sqrt{\left(\kappa_{0i}p_{i}\right)^{2}+\left(1+\text{tr}\,\mathbb{K}\right)\left[\left(1-\kappa_{00}+\text{tr}\,\mathbb{K}\right)\mathbf{p}^{2}-\kappa_{ij}p_{i}p_{j}\right]}\right],\label{DRs}
\end{equation}
where $\lambda=\left[1+\text{tr}\,\mathbb{K}\right]^{-1}$. This is a nonbirefringent relation at any order in the LIV parameters. At first order it is given by
\begin{equation}
p_{0}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\left(1-\frac{1}{2}\kappa_{00}-\frac{\kappa_{ij}p_{i}p_{j}}{2\mathbf{p}^{2}}\right),\label{FO2}
\end{equation}
which is exactly the first-order gauge dispersion relation given in Eq. (\ref{FO1}). Although the exact dispersion relations of the scalar and gauge sectors, (\ref{DR2}) and (\ref{DRs}), are clearly different, at first order in the LIV parameters both sectors are governed by the same dispersion relation. A direct analysis of the relation (\ref{DRs}) indicates that the scalar sector can support noncausal modes, similarly to what occurs in the gauge sector.

\subsection{Full dispersion relations}

In order to examine the complete dispersion relations, we evaluate the vacuum-vacuum amplitude (\ref{zz-1}) in the presence of the coupling vector $S^{\mu}$. We first integrate over the $\phi$-field, obtaining
\begin{equation}
Z=\det\left(\xi^{-1/2}\square\right)\det\left(\boxdot\right)^{-1/2}\int\mathcal{D}A_{\mu}\exp\left\{i\int dx~\frac{1}{2}A_{\mu}\mathbb{D}^{\mu\nu}A_{\nu}\right\},
\end{equation}
where the operator $\mathbb{D}^{\mu\nu}$ is defined as
\begin{equation}
\mathbb{D}^{\mu\nu}=D^{\mu\nu}+\frac{\mathbb{J}^{\mu}\mathbb{J}^{\nu}}{\boxdot}.
\end{equation}
By integrating over the gauge field, we achieve
\begin{equation}
Z=\det\left(\xi^{-1/2}\square\right)\det\left(\boxdot\right)^{-1/2}\det\left(\mathbb{D}^{\mu\nu}\right)^{-1/2},
\end{equation}
which can be rewritten as
\begin{equation}
Z=\det\left(\xi^{-1/2}\square\right)\det\left(\boxdot\right)\det\left(\boxdot D^{\mu\nu}+\mathbb{J}^{\mu}\mathbb{J}^{\nu}\right)^{-1/2}.\label{ssd}
\end{equation}
We now compute the functional determinant of the term $\left(\boxdot D^{\mu\nu}+\mathbb{J}^{\mu}\mathbb{J}^{\nu}\right)$,
\begin{equation}
\det\left(\boxdot D^{\mu\nu}+\mathbb{J}^{\mu}\mathbb{J}^{\nu}\right)=\det\left(\xi^{-1/2}\square\right)^{2}\det\left(\boxdot\right)^{2}\det\left(\otimes\right),
\end{equation}
which, replaced in Eq.
(\ref{ssd}), leads to the simpler result
\begin{equation}
Z=\det\left(\otimes\right)^{-1/2}.\label{zzx}
\end{equation}
In momentum space the operator $\otimes$ is represented by $\tilde{\otimes}\left(p\right)$, and the dispersion relations for the full model are obtained from the equation $\tilde{\otimes}\left(p\right)=0$. In our case, we have the exact equation for the dispersion relations,
\begin{equation}
\tilde{\otimes}\left(p\right)=a_{4}\left(p_{0}\right)^{4}+a_{3}\left(p_{0}\right)^{3}+a_{2}\left(p_{0}\right)^{2}+a_{1}p_{0}+a_{0}=0,\label{fulldp-1}
\end{equation}
with $a_{k}$ $\left(k=0,1,2,3,4\right)$ being functions of the LIV parameters with the following structure:
\begin{align}
a_{4} & =1+a_{4}^{\left(1\right)}+a_{4}^{\left(2\right)}+a_{4}^{\left(3\right)},\\[0.2cm]
a_{3} & =a_{3}^{\left(1\right)}+a_{3}^{\left(2\right)}+a_{3}^{\left(3\right)},\\[0.2cm]
a_{2} & =-2\mathbf{p}^{2}+a_{2}^{\left(1\right)}+a_{2}^{\left(2\right)}+a_{2}^{\left(3\right)},\\[0.2cm]
a_{1} & =a_{1}^{\left(1\right)}+a_{1}^{\left(2\right)}+a_{1}^{\left(3\right)},\\[0.2cm]
a_{0} & =\mathbf{p}^{4}+a_{0}^{\left(1\right)}+a_{0}^{\left(2\right)}+a_{0}^{\left(3\right)},
\end{align}
where $a_{k}^{\left(n\right)}$ $\left(n=1,2,3\right)$ represents the contribution of $n$th order in the LIV parameters to the coefficient $a_{k}$; the explicit expressions are given in Appendix A. Below we present some configurations of the LIV parameters which allow one to factorize and solve exactly the full dispersion relation equation given in Eq. (\ref{fulldp-1}). We first analyze the pure contribution of the coupling vector $S^{\mu}$ to the dispersion relations of the scalar and gauge fields. For this purpose, we set $\kappa^{\mu\nu}=0$ in the full vacuum-vacuum amplitude (\ref{zzx}), obtaining
\begin{equation}
Z=\det\left(\square\right)^{-1/2}\det\left[\left(1+S^{2}\right)\square-\left(S\cdot\partial\right)^{2}\right]^{-1/2}.
\end{equation}
It describes two bosonic degrees of freedom: the first one is a gauge field governed by the usual dispersion relation,
\begin{equation}
p_{0}=\pm\left\vert\mathbf{p}\right\vert,
\end{equation}
while the second one describes a massless scalar field,
\begin{equation}
\left(p_{0}\right)_{\pm}=-\frac{S_{0}\left(\mathbf{S}\cdot\mathbf{p}\right)}{1-\mathbf{S}^{2}}\pm\frac{\sqrt{\mathbf{p}^{2}\left(1+S^{2}\right)\left(1-\mathbf{S}^{2}\right)+\left(\mathbf{S}\cdot\mathbf{p}\right)^{2}\left(1-\mathbf{S}^{2}\right)+\left(S_{0}\right)^{2}\left(\mathbf{S}\cdot\mathbf{p}\right)^{2}}}{1-\mathbf{S}^{2}},
\end{equation}
which is also compatible with the absence of birefringence. At leading order the above dispersion relation reads
\begin{equation}
\left(p_{0}\right)_{\pm}=-S_{0}\left(\mathbf{S}\cdot\mathbf{p}\right)\pm\left\vert\mathbf{p}\right\vert\left(1+\frac{1}{2}\left(S_{0}\right)^{2}+\frac{1}{2}\frac{\left(\mathbf{S}\cdot\mathbf{p}\right)^{2}}{\mathbf{p}^{2}}\right),
\end{equation}
showing that the contributions of the vector $S_{\mu}$ to the dispersion relations only begin at second order. The second case corresponds to the general isotropic dispersion relation, obtained by fixing $\kappa_{ij}=0$, $\kappa_{0i}=0$ and $S_{i}=0$.
The partition function (\ref{zzx}) then factorizes as
\begin{equation}
Z=\det\left[\left(1+\kappa_{00}\right)\square+\kappa_{00}\nabla^{2}\right]^{-1/2}\det\left[\left(1+\kappa_{00}\right)\square-\left\{\left(S_{0}\right)^{2}-\left(\kappa_{00}\right)^{2}-\kappa_{00}\right\}\nabla^{2}\right]^{-1/2},
\end{equation}
describing two bosonic degrees of freedom supporting the following dispersion relations:
\begin{equation}
p_{0}=\pm\frac{\left\vert\mathbf{p}\right\vert}{\sqrt{1+\kappa_{00}}},\label{G11}
\end{equation}
\begin{equation}
p_{0}=\pm\left\vert\mathbf{p}\right\vert\sqrt{\frac{1-\left(\kappa_{00}\right)^{2}+\left(S_{0}\right)^{2}}{1+\kappa_{00}}}.\label{S11}
\end{equation}
The relation (\ref{G11}) describes the gauge field, while the relation (\ref{S11}) is associated with the massless scalar field. This association follows from Eqs. (\ref{DR2}) and (\ref{DRs}), when properly written for the pure isotropic coefficient $\kappa_{00}$. A third case is obtained by considering $\kappa_{0i}$ and $S_{0}$ as non-null, which provides the following vacuum-vacuum amplitude:
\begin{equation}
Z=\det\left[\square-2\kappa_{0i}\partial_{i}\partial_{0}\right]^{-1/2}\det\left[\square-2\kappa_{0i}\partial_{i}\partial_{0}-\left\{\left(S_{0}\right)^{2}+\left(\kappa_{0i}\right)^{2}\right\}\nabla^{2}\right]^{-1/2}.
\end{equation}
The first operator, $\square-2\kappa_{0i}\partial_{i}\partial_{0}$, describes the dispersion relation of a massless scalar degree of freedom,
\begin{equation}
p_{0}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\sqrt{1+\frac{\left(\kappa_{0i}p_{i}\right)^{2}}{\mathbf{p}^{2}}},
\end{equation}
while the operator $\square-2\kappa_{0i}\partial_{i}\partial_{0}-\left[\left(S_{0}\right)^{2}+\left(\kappa_{0i}\right)^{2}\right]\nabla^{2}$ gives the dispersion relation of the gauge field,
\begin{equation}
p_{0}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\sqrt{1+\left(\kappa_{0i}\right)^{2}+\left(S_{0}\right)^{2}}.
\end{equation}
The specialization of the exact relations (\ref{DR2}) and (\ref{DRs}) to the coefficients $\kappa_{0i}$ is what allows one to identify which dispersion relation belongs to the scalar field and which to the gauge field. A more complicated case which also provides exact dispersion relations is obtained by taking $\kappa_{00}$ and $S_{i}$ as non-null, yielding
\begin{equation}
\tilde{\otimes}\left(p\right)=a_{4}\left(p_{0}\right)^{4}-a_{2}\left(p_{0}\right)^{2}+a_{0}=0,
\end{equation}
with
\begin{align}
a_{4} & =\left(1+\kappa_{00}\right)\left(1+\kappa_{00}-\mathbf{S}^{2}\right),\\
a_{2} & =\mathbf{p}^{2}\left(1+\kappa_{00}\right)\left[2-\left(\kappa_{00}\right)^{2}-2\mathbf{S}^{2}\right]+\left(1+2\kappa_{00}\right)\left(\mathbf{S}\cdot\mathbf{p}\right)^{2},\\
a_{0} & =\mathbf{p}^{4}\left(1+\kappa_{00}\right)\left(1-\kappa_{00}-\mathbf{S}^{2}\right)+\mathbf{p}^{2}\left(1+\kappa_{00}\right)\left(\mathbf{S}\cdot\mathbf{p}\right)^{2}.
\end{align}
It gives the dispersion relation for the gauge field,
\begin{equation}
p_{0}^{\left(1\right)}=\pm\sqrt{\frac{a_{2}+\sqrt{\left(a_{2}\right)^{2}-4a_{4}a_{0}}}{2a_{4}}},
\end{equation}
and the following one for the massless scalar,
\begin{equation}
p_{0}^{\left(2\right)}=\pm\sqrt{\frac{a_{2}-\sqrt{\left(a_{2}\right)^{2}-4a_{4}a_{0}}}{2a_{4}}}.
\end{equation}
Both dispersion relations can be expressed at second order in the LIV coefficients, yielding
\begin{align}
p_{0} & =\pm\left\vert\mathbf{p}\right\vert\left(1-\frac{1}{2}\kappa_{00}+\frac{A^{\left(2\right)}-2\sqrt{B^{\left(4\right)}}}{8\mathbf{p}^{2}}\right),\label{dpx1}\\
p_{0} & =\pm\left\vert\mathbf{p}\right\vert\left(1-\frac{1}{2}\kappa_{00}+\frac{A^{\left(2\right)}+2\sqrt{B^{\left(4\right)}}}{8\mathbf{p}^{2}}\right),\label{dpx2}
\end{align}
where
\begin{align}
A^{\left(2\right)} & =\mathbf{p}^{2}\left(\kappa_{00}\right)^{2}+2\left(\mathbf{S}\cdot\mathbf{p}\right)^{2},\\[0.2cm]
B^{\left(4\right)} & =\mathbf{p}^{4}\left(\kappa_{00}\right)^{4}+4\mathbf{p}^{4}\left(\kappa_{00}\right)^{2}\mathbf{S}^{2}-6\mathbf{p}^{2}\left(\kappa_{00}\right)^{2}\left(\mathbf{S}\cdot\mathbf{p}\right)^{2}+\left(\mathbf{S}\cdot\mathbf{p}\right)^{4}.
\end{align}
Here, it is important to highlight that at first order in the LIV backgrounds the dispersion relations (\ref{dpx1}) and (\ref{dpx2}) are the same, confirming the results of the previous subsections: at first order the scalar and gauge sectors are governed by the same dispersion relation. For arbitrary configurations of the LIV backgrounds, it is convenient to compute the roots of the dispersion relation (\ref{fulldp-1}) in a perturbative way. At first order in the LIV parameters, we obtain
\begin{equation}
p_{0}^{\left(g,s\right)}=\kappa_{0i}p_{i}\pm\left\vert\mathbf{p}\right\vert\left(1-\frac{1}{2}\kappa_{00}-\frac{1}{2}\frac{\kappa_{ij}p_{i}p_{j}}{\mathbf{p}^{2}}\right),\label{ggff}
\end{equation}
for the dispersion relations of the gauge and massless scalar fields. This is the same expression as in Eqs. (\ref{FO1}) and (\ref{FO2}), confirming our previous computations. We thus verify that all the dispersion relations of this planar model are free from the influence of the vector $S^{\mu}$ at first order in the LIV parameters.
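This second-order behavior can also be checked numerically. The sketch below (our own illustration; the parameter values are arbitrary samples) evaluates the exact roots of the quartic of the previous case for decreasing LIV strength and compares them with the common first-order expression:
\begin{verbatim}
# Sketch: the two exact roots of a4*p0^4 - a2*p0^2 + a0 = 0 (kappa_00
# and S_i nonzero) approach the common first-order value |p|(1-k00/2);
# sample parameters, with eps kept large enough for stable arithmetic.
import numpy as np

def exact_roots(k00, S, p):
    p2, Sp, S2 = np.dot(p, p), np.dot(S, p), np.dot(S, S)
    a4 = (1 + k00) * (1 + k00 - S2)
    a2 = p2 * (1 + k00) * (2 - k00**2 - 2 * S2) + (1 + 2 * k00) * Sp**2
    a0 = p2**2 * (1 + k00) * (1 - k00 - S2) + p2 * (1 + k00) * Sp**2
    disc = np.sqrt(a2**2 - 4 * a4 * a0)
    return (np.sqrt((a2 + disc) / (2 * a4)),
            np.sqrt((a2 - disc) / (2 * a4)))

p = np.array([1.0, 0.5])
for eps in (1e-1, 1e-2, 1e-3):
    r1, r2 = exact_roots(eps, np.array([eps, 0.0]), p)
    fo = np.linalg.norm(p) * (1 - eps / 2)  # common first-order value
    print("eps=%.0e  |r1-fo|=%.2e  |r2-fo|=%.2e"
          % (eps, abs(r1 - fo), abs(r2 - fo)))
# Both deviations shrink roughly as eps^2: the splitting between the
# gauge and scalar modes only appears at second order.
\end{verbatim}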
\section{Equations of motion and stationary solutions}

The classical behavior of this theory is governed by the equations of motion stemming from the Euler-Lagrange equations, that is,
\begin{align}
\partial_{\alpha}F^{\alpha\beta}+\kappa^{\beta\rho}\partial_{\alpha}F^{\alpha}{}_{\rho}-\kappa^{\alpha\rho}\partial_{\alpha}F^{\beta}{}_{\rho}+S^{\beta}\square\phi-S^{\alpha}\partial_{\alpha}\partial^{\beta}\phi & =J^{\beta},\label{G1}\\[0.3cm]
[1-\eta]\square\phi+\kappa^{\alpha\rho}\partial_{\alpha}\partial_{\rho}\phi+S_{\nu}\partial_{\alpha}F^{\nu\alpha} & =-J.\label{Phi2}
\end{align}
In terms of the gauge potential, and using the Lorentz gauge $\partial\cdot A=0$, these equations are written as
\begin{align}
[\square g^{\beta\rho}+\square\kappa^{\beta\rho}+g^{\beta\rho}\kappa^{\alpha\sigma}\partial_{\alpha}\partial_{\sigma}-\kappa^{\rho\alpha}\partial_{\alpha}\partial^{\beta}]A_{\rho}+\left[S^{\beta}\square-S^{\alpha}\partial_{\alpha}\partial^{\beta}\right]\phi & =J^{\beta},\label{G1B}\\[0.3cm]
[(1-\eta)\square+\kappa^{\alpha\rho}\partial_{\alpha}\partial_{\rho}]\phi+S_{\nu}\square A^{\nu} & =-J.\label{Phi1B}
\end{align}
The modified Maxwell equations stemming from Eq. (\ref{G1}) lead to altered forms of the Gauss and Amp\`{e}re laws,
\begin{equation}
\left(1+\kappa_{00}\right)\partial_{i}E^{i}+\epsilon^{ji}\kappa_{0j}\partial_{i}B-\kappa_{ij}\partial_{i}E^{j}-S_{0}\nabla^{2}\phi-S^{i}\partial_{i}\partial_{t}\phi=\rho,\label{E1}
\end{equation}
\begin{align}
\left(\epsilon^{ij}-\kappa_{il}\epsilon^{lj}-\kappa_{jl}\epsilon^{il}\right)\partial_{j}B+\kappa_{0l}\epsilon^{il}\partial_{0}B-\partial_{0}E^{i}+\kappa_{il}\partial_{0}E^{l}-\kappa_{i0}\partial_{j}E^{j} & \nonumber\\
+\kappa_{j0}\partial_{j}E^{i}-S^{i}\nabla^{2}\phi+S^{i}\partial_{0}^{2}\phi-S^{j}\partial_{j}\partial^{i}\phi-S^{0}\partial_{0}\partial^{i}\phi & =J^{i},\label{B1}
\end{align}
while the scalar sector evolves in accordance with
\begin{equation}
[1-\eta+\kappa_{00}]\partial_{t}^{2}\phi-[1-\eta]\nabla^{2}\phi+\kappa^{ij}\partial_{i}\partial_{j}\phi+2\kappa^{0j}\partial_{0}\partial_{j}\phi-S_{0}\partial_{i}E^{i}+S_{i}\partial_{0}E^{i}-\epsilon_{ij}S_{i}\partial_{j}B=-J.\label{Scalar1}
\end{equation}
In order to solve this electrodynamics, Eqs. (\ref{Phi2}), (\ref{E1}) and (\ref{B1}) should be considered jointly with Faraday's law,
\begin{equation}
\partial_{t}B+\nabla\times\mathbf{E}=0,\label{Bi}
\end{equation}
which comes from the tensor form of the Bianchi identity, $\partial_{\mu}F^{\mu\ast}=0$. Here, $F^{\mu\ast}=\frac{1}{2}\epsilon^{\mu\nu\alpha}F_{\nu\alpha}$ is the dual of the electromagnetic field tensor in (1+2) dimensions.
At first order in the LIV parameters, the solutions of the equations of motion (\ref{G1B}) and (\ref{Phi1B}) are
\begin{align}
A_{\mu} & =\frac{1}{\square}\left(g_{\mu\rho}-\kappa_{\mu\rho}-g_{\mu\rho}\kappa^{\alpha\beta}\frac{\partial_{\alpha}\partial_{\beta}}{\square}+\kappa_{\rho\alpha}\frac{\partial^{\alpha}\partial_{\mu}}{\square}\right)J^{\rho}+\frac{1}{\square}\left(S_{\mu}-S^{\sigma}\frac{\partial_{\sigma}\partial_{\mu}}{\square}\right)J,\label{EQGG}\\[0.3cm]
\phi & =-\frac{1}{\square}\left[1+\eta-\kappa^{\alpha\beta}\frac{\partial_{\alpha}\partial_{\beta}}{\square}\right]J+\frac{1}{\square}S_{\rho}J^{\rho}.\label{EQSC}
\end{align}
The pure Green's functions for the gauge and scalar fields read
\begin{align}
G_{\mu\rho}\left(x-x^{\prime}\right) & =\frac{1}{\square}\left[g_{\mu\rho}-\kappa_{\mu\rho}-g_{\mu\rho}\kappa^{\alpha\beta}\frac{\partial_{\alpha}\partial_{\beta}}{\square}+\kappa_{\rho\alpha}\frac{\partial^{\alpha}\partial_{\mu}}{\square}\right]\delta\left(x-x^{\prime}\right),\label{ggf}\\[0.2cm]
G_{\mu}\left(x-x^{\prime}\right) & =\frac{1}{\square}\left(S_{\mu}-S^{\sigma}\frac{\partial_{\sigma}\partial_{\mu}}{\square}\right)\delta\left(x-x^{\prime}\right),\\[0.2cm]
G\left(x-x^{\prime}\right) & =-\frac{1}{\square}\left[1+\eta-\kappa^{\mu\beta}\frac{\partial_{\mu}\partial_{\beta}}{\square}\right]\delta\left(x-x^{\prime}\right),\label{gsf}
\end{align}
respectively, where $x=(x_{0},\mathbf{r})$. The above equations show that both sources, $J^{\mu}$ and $J$, can generate electromagnetic phenomena.

\subsection{Static solutions for the pure gauge field}

The stationary solution for the gauge field in (\ref{EQGG}) can be expressed as
\begin{equation}
A_{\mu}\left(\mathbf{r}\right)=\int d\mathbf{r}^{\prime}G_{\mu\rho}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)J^{\rho}\left(\mathbf{r}^{\prime}\right)+\int d\mathbf{r}^{\prime}G_{\mu}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)J\left(\mathbf{r}^{\prime}\right),\label{A5}
\end{equation}
where $G_{\mu\rho}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)$ is the stationary Green's function whose components, obtained from (\ref{ggf}), are
\begin{align}
G_{00}\left(\mathbf{R}\right) & =-\frac{1}{2\pi}\left(1-\kappa_{00}+\frac{1}{2}\kappa_{aa}\right)\ln R-\frac{1}{4\pi}\kappa^{ab}\frac{R_{a}R_{b}}{R^{2}},\nonumber\\[0.2cm]
G_{0i}\left(\mathbf{R}\right) & =\frac{1}{2\pi}\kappa_{0i}\ln R\;,\;\;G_{i0}\left(\mathbf{R}\right)=\frac{1}{4\pi}\kappa_{0i}\ln R-\frac{1}{4\pi}\kappa_{0a}\frac{R_{a}R_{i}}{R^{2}},\\[0.2cm]
G_{ij}\left(\mathbf{R}\right) & =\frac{1}{2\pi}\left[\delta_{ij}\left(1+\frac{1}{2}\kappa_{aa}\right)+\frac{1}{2}\kappa_{ij}\right]\ln R+\frac{1}{4\pi}\delta_{ij}\kappa_{ab}\frac{R_{a}R_{b}}{R^{2}}-\frac{1}{4\pi}\kappa_{ja}\frac{R_{a}R_{i}}{R^{2}},\nonumber
\end{align}
and $G_{\mu}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)$ is the Green's function describing the contribution of the scalar source $J$ to the electromagnetic field, given by
\begin{equation}
G_{0}\left(\mathbf{R}\right)=-\frac{1}{2\pi}S_{0}\ln R~,~\ \ G_{i}\left(\mathbf{R}\right)=-\frac{1}{4\pi}S_{i}\ln R+\frac{1}{4\pi}S_{a}\frac{R_{a}R_{i}}{R^{2}},
\end{equation}
where we have denoted $\mathbf{R}=\mathbf{r}-\mathbf{r}^{\prime}$. The non-diagonal Green's function components reveal that charges yield both electric and magnetic fields, as currents do. We now compute the electric and magnetic fields for some special configurations of charge and current densities. In accordance with Eq.
(\ref{A5}), the scalar and vector potentials are
\begin{align}
A_{0}\left(\mathbf{r}\right) & =-\frac{1}{2\pi}\left(1-\kappa_{00}+\frac{1}{2}\kappa_{aa}\right)\int d\mathbf{r}^{\prime}~\rho\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert-\frac{1}{4\pi}\kappa_{ab}\int d\mathbf{r}^{\prime}\frac{\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{a}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{b}}{\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert^{2}}\rho\left(\mathbf{r}^{\prime}\right)\nonumber\\
& \label{AI}\\
& +\frac{1}{2\pi}\kappa_{0a}\int d\mathbf{r}^{\prime}~J^{a}\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert-\frac{1}{2\pi}S_{0}\int d\mathbf{r}^{\prime}J\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert\nonumber
\end{align}
and
\begin{align}
A_{j} & =\frac{1}{4\pi}\kappa_{0j}\int d\mathbf{r}^{\prime}~\rho\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert-\frac{1}{4\pi}\kappa_{0a}\int d\mathbf{r}^{\prime}\frac{\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{a}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{j}}{\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert^{2}}\rho\left(\mathbf{r}^{\prime}\right)\nonumber\\[0.3cm]
& +\frac{1}{2\pi}\left[\delta_{jb}\left(1+\frac{1}{2}\kappa_{aa}\right)+\frac{1}{2}\kappa_{jb}\right]\int d\mathbf{r}^{\prime}~J^{b}\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert\label{AII}\\[0.3cm]
& +\frac{1}{4\pi}\delta_{jc}\kappa_{ab}\int d\mathbf{r}^{\prime}\frac{\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{a}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{b}}{\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert^{2}}J^{c}\left(\mathbf{r}^{\prime}\right)-\frac{1}{4\pi}\kappa_{ab}\int d\mathbf{r}^{\prime}\frac{\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{a}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{j}}{\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert^{2}}J^{b}\left(\mathbf{r}^{\prime}\right)\nonumber\\[0.3cm]
& -\frac{1}{4\pi}S_{j}\int d\mathbf{r}^{\prime}J\left(\mathbf{r}^{\prime}\right)\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert+\frac{1}{4\pi}S_{a}\int d\mathbf{r}^{\prime}\frac{\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{a}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)_{j}}{\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert^{2}}J\left(\mathbf{r}^{\prime}\right),\nonumber
\end{align}
respectively. For a pointlike static charge distribution, $\rho(\mathbf{r}^{\prime})=q\delta(\mathbf{r}^{\prime})$ $\left[J_{i}\left(\mathbf{r}^{\prime}\right)=0=J\left(\mathbf{r}^{\prime}\right)\right]$, the scalar and vector potentials are
\begin{align}
A_{0}\left(\mathbf{r}\right) & =-\frac{q}{2\pi}\left[\left(1-\kappa_{00}+\frac{1}{2}\kappa_{aa}\right)\ln r+\frac{1}{2}\kappa_{ab}\frac{r_{a}r_{b}}{r^{2}}\right],\label{A01}\\[0.3cm]
A_{j}\left(\mathbf{r}\right) & =\frac{q}{4\pi}\left(\kappa_{0j}\ln r-\kappa_{0a}\frac{r_{a}r_{j}}{r^{2}}\right),\label{A02}
\end{align}
respectively. The solution (\ref{A01}) differs from the usual scalar potential generated by a pointlike charge in (1+2) dimensions mainly by the term $\kappa^{ab}r_{a}r_{b}/r^{2}$, which yields an anisotropic behavior.
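This anisotropy can be visualized with a short numerical sketch (the values of $q$, $\kappa_{00}$ and $\kappa_{ab}$ below are arbitrary sample choices of our own), evaluating (\ref{A01}) on a circle of fixed radius:
\begin{verbatim}
# Sketch: angular profile of the potential A0 of a pointlike charge on
# a circle of fixed radius r (sample values for q, kappa_00, kappa_ab).
import numpy as np

q, k00 = 1.0, 1e-2
kappa = np.array([[1e-2, 5e-3],
                  [5e-3, 2e-2]])              # sample symmetric kappa_ab
kaa = np.trace(kappa)

def A0(r, phi):
    n = np.array([np.cos(phi), np.sin(phi)])  # unit vector r_a / r
    return -(q / (2 * np.pi)) * ((1 - k00 + 0.5 * kaa) * np.log(r)
                                 + 0.5 * n @ kappa @ n)

for phi in np.linspace(0.0, np.pi, 5):
    print("phi = %.2f rad : A0 = %.8f" % (phi, A0(10.0, phi)))
# A0 varies with the polar angle at fixed r (it would not for kappa=0),
# while the leading ln(r) radial behavior is unchanged.
\end{verbatim}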
The electric field produced by the pointlike charge is
\begin{equation}
E_{i}\left(\mathbf{r}\right)=-\frac{q}{2\pi}\left[\left(1-\kappa_{00}+\frac{1}{2}\kappa_{aa}\right)\frac{r_{i}}{r^{2}}+\kappa_{ib}\frac{r_{b}}{r^{2}}-\kappa_{ab}\frac{r_{a}r_{b}}{r^{4}}r_{i}\right],\label{E4}
\end{equation}
which, in addition to its radial behavior $r^{-1}$, presents anisotropies due to the two last terms, $\kappa_{ib}r_{b}/r^{2}$ and $\kappa_{ab}r_{a}r_{b}r_{i}/r^{4}$, produced by the LIV backgrounds. These Lorentz-violating corrections, however, do not modify the global asymptotic behavior of the electric field in (1+2) dimensions: it keeps decaying as $1/r$. From the vector potential (\ref{A02}) we compute the associated magnetic field produced by a pointlike charge,
\begin{equation}
B\left(\mathbf{r}\right)=\frac{q}{2\pi}\epsilon_{ij}\frac{\kappa_{0i}r_{j}}{r^{2}}.
\end{equation}
Here, we observe that the LIV parameter $\kappa_{0i}$ engenders an anisotropic magnetic field whose asymptotic behavior goes as $r^{-1}$. This could be used to impose an upper bound on the $\kappa_{0i}$ coefficients from experimental data on two-dimensional systems. For a pointlike charge with velocity $\mathbf{u}$, $J^{i}(\mathbf{r}^{\prime})=q\delta(\mathbf{r}^{\prime})u^{i}$ $\left[\rho\left(\mathbf{r}^{\prime}\right)=0=J\left(\mathbf{r}^{\prime}\right)\right]$, the scalar potential is
\begin{equation}
A_{0}\left(\mathbf{r}\right)=-\frac{q}{2\pi}\kappa_{0a}u_{a}\ln r~,
\end{equation}
while the vector potential is
\begin{equation}
A_{j}\left(\mathbf{r}\right)=-\frac{q}{2\pi}\left[\left(1+\frac{1}{2}\kappa_{aa}\right)u_{j}+\frac{1}{2}\kappa_{ja}u_{a}\right]\ln r-\frac{q}{4\pi}\kappa_{ab}u_{j}\frac{r_{a}r_{b}}{r^{2}}+\frac{q}{4\pi}\kappa_{ab}u_{b}\frac{r_{a}r_{j}}{r^{2}}.
\end{equation}
The respective electric and magnetic fields are
\begin{equation}
E_{i}\left(\mathbf{r}\right)=-\frac{q}{2\pi}\kappa_{0a}u_{a}\frac{r_{i}}{r^{2}},
\end{equation}
\begin{equation}
B\left(\mathbf{r}\right)=\frac{q}{2\pi}\left[\left(1+\frac{1}{2}\kappa_{aa}\right)\epsilon_{ij}\frac{r_{i}u_{j}}{r^{2}}-\epsilon_{ij}\kappa_{ab}\frac{r_{a}r_{b}r_{i}u_{j}}{r^{4}}+\epsilon_{ij}\kappa_{ja}\frac{3r_{i}u_{a}-r_{a}u_{i}}{r^{2}}\right].
\end{equation}
In this model a pointlike scalar source, $J(\mathbf{r}^{\prime})=q_{s}\delta(\mathbf{r}^{\prime})$ $\left[\rho\left(\mathbf{r}^{\prime}\right)=0=J_{i}\left(\mathbf{r}^{\prime}\right)\right]$, also generates electromagnetic phenomena, whose scalar and vector potentials are given by
\begin{equation}
A_{0}\left(\mathbf{r}\right)=-\frac{q_{s}}{2\pi}S_{0}\ln r,\text{ \ \ }A_{j}\left(\mathbf{r}\right)=-\frac{q_{s}}{4\pi}S_{j}\ln r+\frac{q_{s}}{4\pi}S_{a}\frac{r_{a}r_{j}}{r^{2}},
\end{equation}
leading to the following electric and magnetic field solutions:
\begin{equation}
E_{i}\left(\mathbf{r}\right)=-\frac{q_{s}}{2\pi}S_{0}\frac{r_{i}}{r^{2}}~,\text{ }B\left(\mathbf{r}\right)=\frac{q_{s}}{2\pi}\epsilon_{ij}\frac{S_{j}r_{i}}{r^{2}}.
\end{equation}

\subsection{Static solutions for the pure scalar field}

From (\ref{EQSC}), the stationary solution for the scalar field can be expressed as
\begin{equation}
\phi\left(\mathbf{r}\right)=\int d\mathbf{r}^{\prime}G\left(\mathbf{r}-\mathbf{r}^{\prime}\right)J\left(\mathbf{r}^{\prime}\right)-\frac{1}{2\pi}S_{\mu}\int d\mathbf{r}^{\prime}~J^{\mu}(\mathbf{r}^{\prime})\ln\left\vert\mathbf{r}-\mathbf{r}^{\prime}\right\vert,\label{A5x}
\end{equation}
where $G\left(\mathbf{r}-\mathbf{r}^{\prime}\right)$ is the stationary scalar Green's function obtained from Eq. (\ref{gsf}),
\begin{equation}
G(\mathbf{R})=\frac{1}{2\pi}\left(1+\eta+\frac{1}{2}\kappa_{aa}\right)\ln R+\frac{1}{4\pi}\kappa_{ab}\frac{R_{a}R_{b}}{R^{2}}.
\end{equation}
The scalar field generated by a pointlike scalar source, $J(\mathbf{r}^{\prime})=q_{s}\delta(\mathbf{r}^{\prime})$, is
\begin{equation}
\phi\left(\mathbf{r}\right)=\frac{q_{s}}{2\pi}\left[\left(1+\eta+\frac{1}{2}\kappa_{aa}\right)\ln r+\frac{1}{2}\kappa_{ij}\frac{r_{i}r_{j}}{r^{2}}\right].
\end{equation}
We thus confirm that the scalar field presents a behavior very similar to that of the scalar potential given by Eq. (\ref{A01}). Similarly, the scalar fields produced by a pointlike charge, $\rho(\mathbf{r}^{\prime})=q\delta(\mathbf{r}^{\prime})$, and by a pointlike charge with constant velocity $\mathbf{u}$, $J^{i}(\mathbf{r}^{\prime})=q\delta(\mathbf{r}^{\prime})u^{i}$, are
\begin{equation}
\phi\left(\mathbf{r}\right)=-\frac{q}{2\pi}S_{0}\ln r~,~\ \phi\left(\mathbf{r}\right)=\frac{q}{2\pi}S_{i}u_{i}\ln r,
\end{equation}
respectively, showing a similar radial behavior.

\section{Conclusions}

In this work, we have performed the dimensional reduction of the nonbirefringent CPT-even electrodynamics of the standard model extension. This procedure generates a planar Lorentz-violating electrodynamics composed of a gauge field and a scalar field linearly coupled by a LIV 3-vector $S^{\mu}$. Both fields have kinetic terms modified by the Lorentz-violating symmetric tensor $\kappa^{\nu\rho}$. This planar model possesses nine independent LV components, six parity-even and three parity-odd, being simpler than the one of Ref. \cite{DreducCPT}, in which the Lorentz violation is governed by 19 parameters (see Lagrangian (\ref{L1})). The evaluation of the energy-momentum tensor has shown that the energy density of the full theory can be positive definite whenever the LV parameters are sufficiently small. This indicates that the full theory is endowed with energy stability. The same conclusion is valid for both the pure gauge and the pure scalar sectors. A complete study of the dispersion relations was performed. Initially, we evaluated the dispersion relations of the gauge and scalar sectors (regarded as uncoupled) from the vacuum-vacuum amplitude, revealing that, at first order, these two fields are described by the same dispersion relations. Afterwards, we carried out the full dispersion relations, which were computed exactly for some special combinations of the LIV parameters. The coupling vector $S^{\mu}$ contributes to the dispersion relations only at second order. All the expressions confirm that the planar model is nonbirefringent at any order, whereas the original (1+3)-dimensional model is nonbirefringent only at leading order. From these relations we also conclude that the gauge and scalar sectors are stable, but may suffer causality violation for some parameter configurations.
A more careful analysis of the physical consistency of this model (stability, causality, unitarity) is under progress. We have established the wave equations for the gauge and scalar fields and obtained their stationary solutions, via the Green's function technique, at first order in the LIV coefficients. The Lorentz-violating terms induce an anisotropic character in these stationary solutions, which now exhibit an explicit angular dependence. However, the LIV coefficients do not modify the long-distance profile of the solutions, keeping the $r^{-1}$ asymptotic behavior of the pure Maxwell planar electrodynamics (a fact compatible with the dimensionless nature of the LIV coefficients). The scalar and vector potentials generated by a pointlike scalar charge were computed as well, showing that it generates electromagnetic fields. An analogous calculation was accomplished for the scalar sector, demonstrating that it obeys stationary solutions similar to those of the scalar potential $A_{0}$. This kind of theoretical framework can find applications in usual planar systems, such as vortex and Hall systems. At the moment, we are particularly interested in analyzing the effects of LIV coefficients on stable vortex configurations, having already verified that the gauge sector represented by the Lagrangian $\mathcal{L}_{EM}$, when properly coupled to a Higgs sector endowed with a fourth-order self-interacting potential, supports BPS (Bogomol'nyi, Prasad, Sommerfield) solutions. Advances will be reported elsewhere. \begin{acknowledgments} The authors are grateful to FAPEMA, CAPES and CNPq (Brazilian research agencies) for invaluable financial support. The authors also acknowledge the IFT (Instituto de F\'{\i}sica Te\'{o}rica) staff for the kind hospitality during the realization of this work. \end{acknowledgments}
\subsection{\label{ObstacleForce}Force calculation} In summary, we have studied the deflection and oscillations of an elastic fiber under the quasistatic flow of a 2D foam. We have independently measured the different contributions to the force distribution acting along the fiber, and shown that the pressure force dominates in our configuration. We have also studied the statistics of the fiber deflections and of the energy released in cascades of plastic events in the surrounding foam. Finally, the measurement of the maximal fiber deflection allows us to estimate the elastic modulus and the yield stress of the foam. We hope that this study will open the way to the unexplored field of the interplay between elasto-plastic flows and deformable objects. Among possible future extensions of this work, we can cite the investigation of the large-deflection regime, the deflection of a fiber aligned with the flow direction, and the interaction of an assembly of evenly spaced fibers, which represent common situations, in particular in biological systems. On the theoretical side, a continuous description of these interactions would allow one to use the fiber deflection to probe the elastoplastic parameters of the foam. \bibliographystyle{apsrev-no_url.bst}
\section{Introduction} \label{intro} The angle across an area of a power system is a weighted combination of synchrophasor measurements of voltage phasor angles around the border of the area \cite{DobsonvoltPS12,DobsonIREP10}. The weights are calculated from a DC load flow model of the area in such a way that the area angle satisfies circuit laws. Area angles were first developed for the special case of areas called cutset areas that extend all the way across the power system \cite{DobsonHICSS10,DobsonPESGM10,LopezPESGM12}. We previously showed how area angle responded to single line outages inside the area in some Japanese test cases \cite{DarvishiNAPS13}. The increase in area angle largely reflected outage severity, and ways to choose the area were discussed. \looseness=-1 Area angles are easy to calculate from synchrophasor measurements, and their general value is in giving a fast and meaningful bulk measure related to stress in a specific area of the power system. Area angle monitoring would complement slower monitoring via state estimation. The approximate relation of changes in area angle to outage severity suggests that it could be easier to set alarm thresholds using area angles. Another measure of stress, the voltage angle between two synchrophasor locations, responds to events throughout the power system, and is not easy to relate to a particular area. This paper seeks to quantify outage severity with bulk area monitoring; to identify the line outage in the area see \cite{SehwailNAPS12,SehwailPS13,TatePS08}. The area angle is measured across an area from one ``side'' of the area such as the north to the other side of the area such as the south. The area susceptance across the area can also be defined, and, according to Ohm's law in a DC power flow context, the equivalent power flow through the area is the product of the area susceptance and the area angle. The power flow through the area is often approximately constant, so it is intuitively plausible that when a line outages, the area susceptance decreases and the area angle increases. In this paper, we explain and examine this approximate relationship between area angle and area susceptance in detail, including testing on two areas of the WECC. We choose these areas of the transmission system between major generation and major load to try to describe with the area angle the stress resulting from the transfer of power through the area from generation to load. There is some art in choosing a good area that is meaningful with respect to the power flow direction. The testing on the WECC areas also shows how changes in area angle can usually distinguish the single line outage severity. This paper is limited to single line outages that, for simplicity, do not island the system.\footnote{Islanding line outages require assumptions about generator redispatch.} \section{Area angle and area susceptance formulas and relations} \subsection{Formulas for voltage angle and power through the area} \label{formulas} We summarize from \cite{DobsonvoltPS12} formulas related to the area angle and power entering the area. We consider a connected area R of the power system with border buses $M$ and interior buses $N$. The susceptance matrix from the base case DC power flow is written as $B$, with subscripts indicating submatrices or elements of $B$.
The following notation is used for column vectors of voltage angles and powers: \vspace{5pt} \noindent \begin{tabular}{ @{\hspace{1.5cm}}ll @{}} $\theta_n$&voltage angles at interior buses $N$\\ $P_n$&power injected at interior buses $N$\\ $\theta_m$&voltage angles at border buses $M$\\ $P_m$&power injected at border buses $M$\\ $P_{m}^{\rm into}$&power entering R at border buses $M$\\&\qquad along tie lines\\ \end{tabular} \vspace{5pt} The vector of powers $P_m^{\rm R}$ entering the border buses of R is the sum of the power $P_m$ injected directly at the border buses and the power $P_m^{\rm into}$ flowing into the area along the tie lines: \begin{align} P_m^{\rm R}=P_m+P_m^{\rm into}. \label{ImR} \end{align} The susceptance matrix of the area $R$, considered as an isolated area without its tie lines, is $B_{mm}^{\rm R}$. Retaining the border buses $M$ and applying to $R$ a standard Ward or Kron reduction to eliminate the interior buses $N$, we get \begin{align} P_m^ {\rm Rred}&=P^{\rm R}_m- B_{{m}n}B_{nn}^{-1}P_{n}, \label{pmred}\\ B_{mm}^ {\rm Rred}&= B_{mm}^{\rm R}-B_{mn}B_{nn}^{-1} B_{nm}. \label{bmmred} \end{align} We indicate the partition of the border buses into two sets $M_a$ and $M_b$ by specifying the row vector $\sigma_a$, whose $i$th component is one if bus $i$ is in $M_a$, and is zero otherwise. Now we can define our main quantities. An equivalent power \cite{DobsonvoltPS12} that flows from $M_a$ to $M_b$ through R is \begin{align} P_{\rm area}=\sigma_a P_m^ {\rm Rred}. \label{parea} \end{align} The susceptance of the area $b_{\rm area}$ is \begin{align} b_{\rm area}=\sigma_a B_{mm}^ {\rm Rred} \sigma_a ^T. \label{bab} \end{align} The area angle $\theta_{\rm area}$ is the scalar quantity \begin{align} \theta_{\rm area}&=\frac{\sigma_a B_{mm}^ {\rm Rred} \theta_m}{b_{\rm area}}\notag\\ &=w \theta_m =w[1] \theta_{m}[1]+w[2] \theta_{m}[2]+...+w[k] \theta_{m}[k] \label{thetanew} \end{align} where $w$ is a row vector of weights $w=(w[1],w[2],...,w[k])$ that depend only on the area topology and the susceptances of lines in the area. $k$ is the number of border buses. To monitor the area angle with (\ref{thetanew}), we use the synchrophasor measurements of $\theta_m$ at the border buses and recent base case susceptances and topology of a DC load flow\footnote{Such DC load flows are generally available \cite{TatePS08}. } of the area R to calculate the weights $w$. If an outage of line $i$ occurs, then the synchrophasor measurements at the border buses change to $\theta_m^{(i)}$ but we continue to use the weights computed {\sl before} the outage to compute the area angle as \begin{align} \theta_{\rm area}^{(i)}=\frac{\sigma_a B_{mm}^ {\rm Rred} \theta_m^{(i)}}{b_{\rm area}}=w\theta_m^{(i)}. \label{pmuareaanglei} \end{align} \subsection{Approximate inverse relation between area angle and area susceptance} We informally explain why the monitored area angle $\theta_{\rm area}^{(i)}$ varies approximately inversely to the area susceptance $b_{\rm area}^{(i)}$.
It turns out\footnote{This approximation will be established with more rigor in a future paper.} that the monitored area angle $\theta_{\rm area}^{(i)}$ of (\ref{pmuareaanglei}) is close to the following area angle (note the square brackets in the superscript $[i]$): \begin{align} \theta_{\rm area}^{[i]}=\frac{\sigma_a B_{mm}^ {{\rm Rred}(i)} \theta_m^{(i)}}{b_{\rm area}^{(i)}}=\frac{\sigma_a B_{mm}^ {{\rm Rred}(i)}\theta_m^{(i)}}{\sigma_a B_{mm}^ {{\rm Rred}(i)}\sigma_a^T } ,\label{pmuareaangleiii} \end{align} which is the area angle that would be computed after the outage of line $i$ if the outage of line $i$ were accounted for in the weights. (The difference between (\ref{pmuareaangleiii}) and (\ref{pmuareaanglei}) is that the susceptance matrix $B_{mm}^ {{\rm Rred}(i)}$ that accounts for the outage of line $i$ replaces $B_{mm}^ {\rm Rred}$ in both the numerator and denominator of (\ref{pmuareaangleiii}).) The results in section \ref{results} show numerical evidence that $\theta_{\rm area}^{(i)}$ and $\theta_{\rm area}^{[i]}$ are close; that is, \begin{align} \theta_{\rm area}^{(i)}\approx \theta_{\rm area}^{[i]}. \end{align} It is the case \cite{DobsonvoltPS12} that Ohm's law applies to area angles so that \begin{align} P_{\rm area}=b_{\rm area} \theta_{\rm area}. \label{ohm} \end{align} In particular, when line $i$ outages, we have \begin{align} P^{(i)}_{\rm area}=b^{(i)}_{\rm area} \theta^{[i]}_{\rm area}, \label{ohmi} \end{align} and, from (\ref{ImR}), (\ref{pmred}), and (\ref{parea}), we have \begin{align} P^{(i)}_{\rm area}=\sigma_a (P_m+P_m^{{\rm into}(i)}- B_{mn}^{(i)}{(B^{(i)}_{nn}})^{-1}P_{n}). \label{pareaiex} \end{align} Since line $i$ is assumed to be a non-islanding outage, and there are assumed to be no losses in the DC load flow approximation, there is no redispatch or load shedding and $P_m$ and $P_n$ do not change when the line outages. The term $B_{{m}n}^{(i)}{(B^{(i)}_{nn}})^{-1}P_{n}$ describes how the injected powers $P_n$ redistribute to equivalent injections at the border buses after line $i$ outages, and is usually close to the equivalent injections $B_{mn}B_{nn}^{-1}P_{n}$ before the outage.$^{3}$ Now we consider the effect of the line outage on the power $P_m^{\rm into}$ entering the area R along the tie lines. There are two cases. In the first case, there is no alternative path for power to flow around the area (that is, the area is a cutset area \cite{DobsonHICSS10,DobsonPESGM10} in that removing the area disconnects the network), and the power entering the area along the tie lines does not change so that $P_m^{{\rm into}(i)}=P_m^{\rm into}$. In the second case, there is an alternative path for the power to flow around the area, and $P_m^{{\rm into}(i)}$ will be different than $P_m^{\rm into}$. However, in the practical cases considered in this paper, the alternative paths have fairly high impedance so that the difference between $P_m^{{\rm into}(i)}$ and $P_m^{\rm into}$ is small. The conclusion is that in this paper, $P^{(i)}_{\rm area}\approx P_{\rm area}$. Gathering these relationships and approximations together we obtain \begin{align} \theta^{(i)}_{\rm area}\approx \theta^{[i]}_{\rm area} =\frac{P^{(i)}_{\rm area}}{b^{(i)}_{\rm area}}\approx \frac{P_{\rm area}}{b^{(i)}_{\rm area}} \label{pareaiex2} \end{align} Also a numerical example of approximation (\ref{pareaiex2}) is given at the end of Section~IV. Thus $\theta^{(i)}_{\rm area}$ and $b^{(i)}_{\rm area}$ are approximately inversely related. 
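For concreteness, the area computations of this section can be condensed into a short numerical sketch (our own illustration; the function and variable names are not from the references):
\begin{verbatim}
# Minimal sketch of the area computations: Kron-reduce the interior
# buses of the area and form the area susceptance b_area and the
# weight vector w of the area angle. B is the DC susceptance matrix of
# the isolated area, ordered with the border buses M first.
import numpy as np

def area_quantities(B, n_border, sigma_a):
    Bmm = B[:n_border, :n_border]
    Bmn = B[:n_border, n_border:]
    Bnn = B[n_border:, n_border:]
    Bred = Bmm - Bmn @ np.linalg.solve(Bnn, Bmn.T)  # Ward/Kron reduction
    b_area = sigma_a @ Bred @ sigma_a               # area susceptance
    w = (sigma_a @ Bred) / b_area                   # area angle weights
    return Bred, b_area, w
\end{verbatim}
The monitored area angle is then simply the product of \texttt{w} with the measured border angles; this helper is reused in the outage-scan sketch that follows the five bus example in the next section.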
\section{Simple examples} To better understand the relationship between the susceptance of the area and the area voltage angle, we first consider a very simple case of 3 parallel lines connecting bus $a$ to bus $b$ with respective susceptances $b_1$, $b_2$, and $b_3$. Power $P_a$ is generated at bus $a$ and consumed at bus $b$. In this simple case, the area susceptance $b_{\rm area}=b_1+b_2+b_3$ is the sum of the line susceptances and the area angle $\theta_{\rm area}=\theta_a-\theta_b$ is the angle difference between the voltages at bus $a$ and $b$, and the equivalent power through the area $P_{\rm area}=P_a$. In the base case, \begin{align} \theta_{\rm area} =\frac{P_{\rm area}}{b_{\rm area}}=\frac{P_a}{b_1+b_2+b_3} \end{align} If line 1 outages, the power flowing through the area $P_{\rm area}=P_a$ remains constant, the area susceptance decreases to $b_{\rm area}^{(1)}=b_2+b_3$, and the area angle increases to \begin{align} \theta_{\rm area}^{(1)} = \theta_{\rm area}^{[1]} =\theta_a^{(1)}-\theta_b^{(1)} =\frac{P_{\rm area}^{(1)}}{b_{\rm area}^{(1)}} =\frac{P_a}{b_{\rm area}^{(1)}} =\frac{P_a}{b_2+b_3} \end{align} The voltage angle increase reflects the decreased susceptance in the network and the increased area stress. We also have $\theta_{\rm area}^{(2)}= P_a/b_{\rm area}^{(2)}$ and $\theta_{\rm area}^{(3)}= P_a/b_{\rm area}^{(3)}$, and it can be seen that outaging the line with the largest susceptance gives the largest increase in area angle. To observe the same effects in an example in which multiple voltage angles are combined to form the area angle, consider the simple symmetric network shown in Figure \ref{pic1Change-simpleEx}. Buses 1 and 2 are north border buses and buses 4 and 5 are south border buses. The susceptance of each of the four lines connected to the north border is 30 pu, the susceptance of each of the two lines between bus 3 and bus 4 is 10 pu, and the susceptance of each of the two lines between bus 3 and bus 5 is 40 pu. The power generation at the north border and the loads at the south border are shown in per unit in Figure \ref{pic1Change-simpleEx}. The larger susceptance lines 7 and 8 have a larger power flow of 40 pu. We are interested in the voltage angle across the area from the north border to the south border, which is the following weighted combination of the border voltage angles: \begin{align} \theta_{\rm area5bus} =0.5 \, \theta_1+0.5 \, \theta_2-0.33\, \theta_4-0.67 \,\theta_5 \label{theta1-simpleEx} \end{align} \begin{figure}[h] \begin{center} \includegraphics[width=2.8in]{pic1Change-simpleEx} \caption{5 bus example network with north border buses 1 and 2 in red and south border buses 4 and 5 in blue.} \label{pic1Change-simpleEx} \end{center} \end{figure} We take out each line in the system in turn and calculate the area susceptance $b_{\rm area5bus}^{(i)}$ and the monitored area angle $\theta_{\rm area5bus}^{(i)}$ in each case. The results in Figure \ref{pl1Change-simpleEx} show that the area voltage angle responds to and changes inversely with the area susceptance. Moreover, the changes are largest for most severe line outages. For example, lines 7 and 8 have the largest susceptances and power flows, and when either line 7 or line 8 outages, the area angle increases the most and the susceptance decreases the most. Lines 5 and 6 have the smallest susceptances and power flows, and when either line 5 or line 6 outages, the area angle increases the least and the susceptance decreases the least. 
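The line outage scan just described can be mimicked in a few lines, reusing \texttt{area\_quantities} from the sketch in the previous section (that block must be run first). The star topology below is patterned on Figure \ref{pic1Change-simpleEx}, but the injections and exact line placement appear only in the figure, so the numbers here are sample values of our own and the computed weights need not match (\ref{theta1-simpleEx}):
\begin{verbatim}
# Sketch: scan of the monitored area angle and the area susceptance
# for each single line outage of a 5-bus star network (toy values;
# the injections are assumed, not taken from Figure 1).
import numpy as np

lines = [(0, 2, 30.0), (0, 2, 30.0), (1, 2, 30.0), (1, 2, 30.0),
         (2, 3, 10.0), (2, 3, 10.0), (2, 4, 40.0), (2, 4, 40.0)]
P = np.array([50.0, 50.0, 0.0, -20.0, -80.0])  # assumed injections
border = [0, 1, 3, 4]                          # buses 1,2 north; 4,5 south
sigma_a = np.array([1.0, 1.0, 0.0, 0.0])       # selects the north border
order = border + [2]                           # border first, interior last

def bus_B(ls, n=5):
    B = np.zeros((n, n))
    for i, j, b in ls:
        B[i, i] += b; B[j, j] += b; B[i, j] -= b; B[j, i] -= b
    return B

def dc_angles(ls, P, slack=4):
    keep = [k for k in range(len(P)) if k != slack]
    th = np.zeros(len(P))
    th[keep] = np.linalg.solve(bus_B(ls)[np.ix_(keep, keep)], P[keep])
    return th

# base-case weights w, kept fixed after each outage as in the text
_, b_area, w = area_quantities(bus_B(lines)[np.ix_(order, order)],
                               4, sigma_a)
print("base: theta=%.3f b=%.2f" % (w @ dc_angles(lines, P)[border], b_area))
for out in range(len(lines)):
    ls = [l for k, l in enumerate(lines) if k != out]
    _, b_i, _ = area_quantities(bus_B(ls)[np.ix_(order, order)], 4, sigma_a)
    print("outage %d: theta=%.3f b=%.2f"
          % (out + 1, w @ dc_angles(ls, P)[border], b_i))
\end{verbatim}
With these sample values, outaging one of the large-susceptance lines 7 or 8 produces the largest increase in the monitored angle and the largest drop in area susceptance, reproducing the qualitative pattern of Figure \ref{pl1Change-simpleEx}.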
\begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{pl1Change-simpleEx} \caption{Area angle $\theta_{\rm area5bus}^{(i)}$ in degrees and area susceptance $b_{\rm area5bus}^{(i)}$ in pu for each line outage of 5 bus system. Base case is indicated as line 0.} \label{pl1Change-simpleEx} \end{center} \end{figure} A 9 bus example of an asymmetric network with lines of equal susceptance is shown in Figure \ref{pic2Change-simpleEx.jpg}. Buses 1 and 2 are north border buses and bus 3 is the south border bus. Buses 8 and 9 have generators each providing 8 pu and bus 10 has load of 16 pu, so the total power into the area at the north border is 16 pu. The north to south area angle is \begin{align} \theta_{\rm area9bus} =0.44\, \theta_1+0.56\, \theta_2- \theta_3 \label{theta2-simpleEx} \end{align} \begin{figure}[h] \begin{center} \includegraphics[width=2in]{pic2Change-simpleEx} \caption{9 bus example network with north border buses 1 and 2 in red and the south border bus 3 in blue. The buses inside the area are black.} \label{pic2Change-simpleEx.jpg} \end{center} \end{figure} The results in Figure \ref{pl2Change-simpleEx} show that the area angle $\theta_{\rm area9bus}^{(i)}$ responds to and changes inversely with the area susceptance $b_{\rm area9bus}^{(i)}$. In this example, although all the lines have the same susceptance, they participate differently in transferring power north to south through the area. Therefore their outages have different severities and different impacts on the area susceptance and area angle. For example, after the line outages 3, 4, 7, 8, which have the largest power flow since they are in the main path of transferring power from north to south, we get the largest decrease in the susceptance and also the largest increase in the area angle, which correctly indicates that these are severe outages. In contrast, after the line outages 5 and 9, which have the smallest power flow since they are not in the main path of transferring power from north to south but instead run from east to west, we get the smallest change in area susceptance and area angle, which correctly indicates that these are less severe outages. \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{pl2Change-simpleEx} \caption{Area angle $\theta_{\rm area9bus}^{(i)}$ in degrees and area susceptance $b_{\rm area9bus}^{(i)}$ in pu for each line outage for 9 bus system. Base case is indicated as outage 0.} \label{pl2Change-simpleEx} \end{center} \end{figure} \section{Results for angles across areas of WECC} \label{results} We illustrate the use of area angles to monitor single, non-islanding line outages inside two areas of the WECC system. The first area, for which the network, border buses, and weights are shown in Figure~\ref{pic1Change-WeccIDGeneral}, covers roughly Washington, Oregon, Idaho, Montana, and Wyoming and contains over 700 lines. The north border is near the Canadian border and the south border is near the Oregon-California border and its extension eastwards. The area angle is the following weighted combination of the border bus angles: \begin{align} \theta_{\rm area1} =\, & 0.79\, \theta_1 + 0.21\, \theta_2 - 0.42 \,\theta_3 - 0.46 \,\theta_4\notag\\& - 0.02 \,\theta_5 - 0.05 \,\theta_6- 0.04 \,\theta_7 - 0.01 \,\theta_8 \notag \end{align} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{pic1Change-WeccIDGeneral} \caption{Area 1 of WECC system with area lines in black, north border buses in red, and south border buses in blue. 
Layout detail is not geographic.} \label{pic1Change-WeccIDGeneral} \end{center} \end{figure} The second and smaller area shown in Figure~\ref{pic4Change-WeccIDGeneral} covers roughly Washington and Oregon. The northern (and western) border is near the borders of Canada-Washington, Washington-Montana and Oregon-Idaho, and the south border is near the Oregon-California border. The area angle is \begin{align} \theta_{\rm area2} &=\, 0.223\, \theta_1 + 0.006\, \theta_2\notag\\& + 0.008 \,\theta_3 + 0.01 \,\theta_4 + 0.02 \,\theta_5 + 0.18 \,\theta_6+ 0.59 \,\theta_7\notag\\& - 0.39 \,\theta_8 - 0.41 \,\theta_9 - 0.004 \,\theta_{10}- 0.03 \,\theta_{11} - 0.18 \,\theta_{12}\notag \end{align} In practice the measurements with very small weights could be omitted. \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{pic4Change-WeccIDGeneral} \caption{Area 2 of WECC system with area lines in black, north border buses in red and south border buses in blue. Layout detail is not geographic.} \label{pic4Change-WeccIDGeneral} \end{center} \end{figure} For both areas, we are interested in monitoring the north-south area stress with the area angle when there are single non-islanding line outages, and relating changes in the area angle to the area susceptance and the outage severity. We take out each line in the system in turn and calculate the monitored area angle $\theta_{\rm area}^{(i)}$ and the area susceptance $b_{\rm area}^{(i)}$ in each case. \looseness=-1 To quantify the severity of each outage, we compute the maximum power that can enter the area after the outage of each line; for more detail see \cite{DarvishiNAPS13}. The real power through the area is increased by increasing the power entering at each border bus proportionally. (Generally power enters the area at the northern border buses and leaves the area from the south border buses.) The maximum power entering the area through the north border occurs when the first line limit inside the area is encountered. The idea is that the more severe line outages will more strictly limit the maximum power that can be transferred north to south through the area. This definition of outage severity can be related to the economic effect of limiting the north-south transfer. The area angle and the area susceptance for each line outage are shown in Figure \ref{pl1Change-WeccIDGeneral} for area 1 and in Figure \ref{pl4Change-WeccIDGeneral} for area 2. The similar patterns of changes in the area angles and area susceptances confirm that the inverse relationship between area angle and area susceptance usually applies. \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{pl1Change-WeccIDGeneral} \vspace{-25pt} \caption{Area angle $\theta_{\rm area1}^{(i)}$ in degrees, area susceptance $b_{\rm area1}^{(i)}$ , and maximum power into the area in pu for each line outage in WECC area 1. Base case (the point at extreme right) is $\theta_{\rm area1}=66.5^{\rm o}$, $b_{\rm area1}=39.0\,$pu, max power = 46.9. For clarity, graph shows $b_{\rm area}$ multiplied by 2, and max power multiplied by 1.5.} \label{pl1Change-WeccIDGeneral} \end{center} \begin{center} \includegraphics[width=\columnwidth]{pl4Change-WeccIDGeneral} \vspace{-25pt} \caption{Area angle $\theta_{\rm area2}^{(i)}$ in degrees, area susceptance $b_{\rm area2}^{(i)}$, and max power into the area in pu for each line outage in WECC area 2. 
Base case (the point at extreme right) is $\theta_{\rm area2}=52.9^{\rm o}$, $b_{\rm area2}=66.7\,$pu, max power = 66.0\,pu.} \label{pl4Change-WeccIDGeneral} \end{center} \end{figure} \looseness =-1 Figures~\ref{pl1Change-WeccIDGeneral} and \ref{pl4Change-WeccIDGeneral} also show the outage severity computed as the maximum power into the area. Note that the line outages are sorted according to increasing maximum power into the area (decreasing severity). The most severe line outages are on the left hand sides of Figures~\ref{pl1Change-WeccIDGeneral} and \ref{pl4Change-WeccIDGeneral}, and it can be seen that the area angle increases substantially for most of the severe line outages. Moreover, in the middle portion of the figures with small changes in severity from the base case (the flat portion of the maximum power into the area), the change in area angle from the base case is usually also small. This suggests, for our chosen quantification of outage severity, that large increases in area angle usually indicate the severe line outages. In our experience, this good result relies on our use of realistic line limits. This tracking of the severity of the outages with the area angle is imperfect, but this is to be expected when trying to monitor over 700 lines in WECC area 1 and 500 lines in WECC area 2 with one scalar area angle as a single bulk area index. (Also note that we are only using a dozen or fewer synchrophasor measurements to compute the area angle.) There are several reasons for the exceptional line outages in which the changes in area angle do not track the outage severity. Large generation or load inside the area can influence the maximum power entering the area under single line outage conditions, so the discrepancy can arise from the maximum power entering the area being an inaccurate assessment of the outage severity. Also, the line limits that determine the maximum power entering the area, and hence the outage severity, may not follow the susceptance of the lines; in these cases the area susceptance, and hence the area angle, cannot track the outage severity. These effects are also the likely cause of the outages at the right of Figure \ref{pl4Change-WeccIDGeneral} having a maximum power into the area larger than the base case. \looseness=-1 To numerically check the assertion that $\theta_{\rm area}^{(i)}$ and $\theta_{\rm area}^{[i]}$ are close, we compute the ratio $\theta_{\rm area}^{(i)}/\theta_{\rm area}^{[i]}$ for each line outage. For WECC area 1, $\theta_{\rm area1}^{(i)}/\theta_{\rm area1}^{[i]}$ has mean 0.9999, standard deviation 0.002501, and it ranges from 0.9846 to 1.014. For WECC area 2, $\theta_{\rm area2}^{(i)}/\theta_{\rm area2}^{[i]}$ has mean 0.9993, standard deviation 0.006082, and it ranges from 0.9236 to 1.056. \section{Conclusion} \label{conclusion} It is useful to monitor the area angle by combining synchrophasor measurements at the borders of a suitably chosen area. The area angle and the area susceptance change when single, non-islanding line outages occur, and we show, using both simple examples and two areas with hundreds of lines in a real power system, that the area angle and susceptance tend to change inversely. This approximate relation between area angle and area susceptance gives intuition about how the area angle works to detect line outages in the area.
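This intuition is straightforward to reproduce numerically. The following self-contained Python sketch recomputes a toy version of the outage experiment on an invented 4 bus diamond network; the line data, the border weights, and the DC approximation $b_{\rm area}=P_{\rm in}/\theta_{\rm area}$ are our illustrative assumptions, not the models or data used in this study.
\begin{verbatim}
import numpy as np

# Toy DC power flow on an invented 4-bus diamond network:
# bus 0 is the north border, bus 3 (slack) is the south border,
# and two parallel paths 0-1-3 and 0-2-3 carry the flow.
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 3, 10.0), (2, 3, 10.0)]
P_in = 8.0                        # pu entering at the north border
P = np.array([P_in, 0.0, 0.0])    # injections at buses 0, 1, 2

def solve_angles(outage=None):
    """DC power flow angles with one optional line outage."""
    B = np.zeros((4, 4))
    for k, (i, j, b) in enumerate(lines):
        if k == outage:
            continue
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(4)           # slack angle theta_3 = 0
    theta[:3] = np.linalg.solve(B[:3, :3], P)
    return theta

for out in [None, 0, 1, 2, 3]:
    th = solve_angles(out)
    theta_area = th[0] - th[3]    # border weights +1 (north), -1 (south)
    b_area = P_in / theta_area    # effective north-south susceptance
    print(out, round(theta_area, 3), round(b_area, 3))
\end{verbatim}
In this toy network, outaging any one of the four lines halves the effective north-south susceptance from 10 pu to 5 pu and doubles the area angle from 0.8 to 1.6, which is the inverse relationship observed above.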
The area angle results in a real power system also show that the amount of change in the area angle usually indicates the severity of the line outage (the exceptions generally relate to outages of lines that are connected to generation or load inside the area). This suggests that a threshold on changes in the area angle could be set to distinguish the severe single line outages. \newpage \section*{Acknowledgments} \label{ack} We gratefully acknowledge support in part from DOE project ``The Future Grid to Enable Sustainable Energy Systems,'' an initiative of PSERC, NSF grant CPS-1135825, and the Electric Power Research Center at Iowa State University. We gratefully acknowledge access to the WECC data that enabled this research. The analysis and conclusions are strictly those of the authors and not of WECC. \newpage
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width = 1.0\linewidth]{figures/Cover.png} \caption{ Our method extracts trajectories and computes pedestrian movement features at interactive rates. We use the learned behavior and movement features to detect anomalies in the pedestrian trajectories. The lines indicate behavior features (explained in detail in Section 3.5). The yellow lines indicate anomalies detected by our approach.} \label{fig:cover} \end{figure} There has been a growing interest in developing computational methodologies for simulating and analyzing the movements and behaviors of crowds in real-world videos. This includes the simulation of large crowds composed of a large number of pedestrians or agents, moving in a shared space, and interacting with each other. Some of the driving applications include surveillance, training systems, robotics, navigation, computer games, and urban planning. In this paper, we deal with the problem of interactive anomaly detection in crowd videos and develop approaches that perform no precomputation or offline learning. Our research is motivated by the widespread use of commodity cameras, which are increasingly deployed for surveillance and monitoring at sporting events, public places, religious and political gatherings, etc. One of the key challenges is to devise methods that can automatically analyze the behavior and movement patterns in crowd videos to detect anomalous or atypical behaviors~\cite{LiCrowdedSceneAnalysis2015}. Furthermore, many of these applications desire interactive or realtime performance, and do not rely on a priori learning or labeling. Many algorithms have been designed to track individual agents and/or to recognize their behaviors and movements and detect abnormal behaviors~\cite{Junior2010}. However, current methods are typically limited to sparse crowds or are designed for offline or non-realtime applications. We present an algorithm for realtime anomaly detection in low to medium density crowd videos. Our approach uses online methods to track each pedestrian and learn the trajectory-level behaviors for each agent by combining non-linear motion models and Bayesian learning. Given a video stream, we extract the trajectory of each agent using a realtime multi-person tracking algorithm that can model different interactions between the pedestrians and the obstacles. Next, we use a Bayesian inference technique to compute the trajectory behavior feature for each agent. These trajectory behavior features are used for anomaly detection in terms of pedestrian movement or behaviors. Our approach involves no offline learning and can be used for interactive surveillance on any crowd video. We refer the reader to~\cite{Bera_2016_CVPR_Workshops} for more technical details and analysis. \section{Related Work} There is extensive research in computer vision and multimedia on analyzing crowd behaviors and movements from videos~\cite{LiCrowdedSceneAnalysis2015}. Most of this work has focused on extracting useful information, including behavior patterns and situations, for surveillance analysis through activity recognition and abnormal behavior detection. Certain methods focus on classifying the most common, simple behavior patterns (linear, radial, etc.) in a given scene.
However, most of these methods are designed for offline applications and tend to use a large number of training videos for offline learning of common crowd behavior patterns~\cite{Shah_crowdPatterns}, normal and abnormal interactions~\cite{Mahadevan.anomaly.2010}, and human group activities~\cite{Ni2009}. Other methods are designed for crowd analysis using a large number of web videos~\cite{mikel_dataDriven}. However, these techniques employ either manual selection or offline learning for behavior analysis and therefore cannot be used for interactive applications. All of these methods perform offline computations, and it is not clear whether they can be directly adapted for interactive applications. \section{Public Policy Issues} As the legal, social and technological issues surrounding video surveillance are complex, this section provides a brief background on privacy issues, especially in the context of our work. The ability of new camera and network technologies to identify, track, and investigate the activities of formerly anonymous individuals fundamentally changes the nature of video surveillance. While the various technological developments overlap, for these guidelines we conceive of four distinct types of surveillance technologies, each of which calls for differing rules and restrictions: (a) observation technologies; (b) recording technologies; (c) tracking technologies; and (d) identification technologies. Even though our approach uses many of these technologies simultaneously, it may also be employed to mitigate the impact of surveillance on constitutional rights and values. Much of the debate arises because most surveillance methods record facial features and details that can be tied to one's personal identity, whereas our research only looks at and learns from trajectories. No personal information is captured. In fact, the learned trajectories are also not stored in any database if the pedestrian is deemed ``harmless''. We understand that public video surveillance systems have the potential to be used in ways that infringe on privacy and anonymity rights. Commentators often assume that there is ``no reasonable expectation of privacy'' in streets or parks or other areas open to view. As mentioned earlier, our method takes a very safe and cautious approach to protecting personal information. We do not look at any visual cues relating to the person being tracked. Only pedestrian trajectory-level information is processed. It is often said that the risk of harm to constitutional rights and values posed by a public video surveillance system increases with its duration. The longer a system operates, the more activities and information it captures, permitting more and greater violations of privacy and anonymity, with a correspondingly higher probability of public outcry and legal liability. Our approach is risk-free in this respect: even though we use past trajectories to learn pedestrian behavior, the history we learn covers only a few seconds, and once the model is trained, that history is no longer used and is deleted. As mentioned earlier, since we only use minimal information about the pedestrian (the trajectories), there is no threat to privacy and anonymity. There has been extensive research which models human gaits as potential identifiers of a person~\cite{chen2014average}, but so far no research has been able to establish a personal identification metric from trajectories alone.
\section{Trajectory Behavior Learning} In this section, we present our interactive trajectory-level behavior computation algorithm. \subsection{Terminology and Notation} We first introduce the notation used in the remainder of the paper. \textbf{Pedestrians:} We use the term {\em pedestrian} to refer to independent individuals or human-like agents in the crowd. Their trajectories and movements are extracted by our algorithm using a realtime multi-person tracker. \textbf{State representation:} A key aspect of our approach is to compute the state of each pedestrian and each pedestrian cluster in the crowd video. Intuitively, the state corresponds to the low-level motion features that are used to compute the trajectory-level behavior features. In the remainder of the paper, we assume that all the agents are moving on a 2D plane. Realtime tracking of pedestrians is performed in the 2D image space and provides an approximate position, i.e. $(x,y)$ coordinates, of each pedestrian for each frame of the video. In addition, we infer the velocities and intermediate goal positions of each pedestrian from the sequence of its prior trajectory locations. We encode this information regarding a pedestrian's movement at a time instance using a state vector. In particular, we use the vector $\mathbf{x}=[\mathbf{p} \; \mathbf{v} \; \mathbf{g}]^\mathbf{T}$, $\mathbf{x}\in\mathbb{R}^6$ to refer to a pedestrian's state. The state vector consists of three 2-dimensional vectors: $\mathbf{p}$ is the pedestrian's position, $\mathbf{v}$ is its current velocity, and $\mathbf{g}$ is the intermediate goal position. The intermediate goal position is used to compute the optimal velocity that the pedestrian would have taken had there been no other pedestrians or obstacles in the scene. As a result, the goal position provides information about the pedestrian's immediate intent. In practice, this locally optimal velocity tends to be different from $\mathbf{v}$ for a given pedestrian. The state of the entire crowd, which consists of individual pedestrians, is the union of each pedestrian's state, $\mathbf{X}=\bigcup _i\mathbf{x_i}$. \textbf{Pedestrian behavior feature:} The pedestrians in a crowd are typically in motion, and their individual trajectories change as a function of time. The behavior of the crowd can be defined using macroscopic or global flow features, or based on the gestures and actions of different pedestrians in the crowd. In this paper, we restrict ourselves to trajectory-level behaviors or movement features per agent and per cluster, including current position, average velocity (including speed and direction), cluster flow, and the intermediate goal position. These features change dynamically. Our goal is to interactively compute these features from tracked trajectories, and then use them for behavior analysis. \subsection{Overview} Our overall approach consists of multiple components: a real-time multi-person tracker, state estimation, and behavior feature learning. One of our approach's benefits, and its difference from prior approaches, is that it does not require offline training using a large number of training examples. As a result, it can be directly applied to any new or distinct crowd video. We extend our behavior learning and pedestrian tracking pipeline from~\cite{bera2014,kim2016interactive}. Fig.~\ref{fig:overview} highlights these components.
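As a concrete illustration, the per-pedestrian state $\mathbf{x}=[\mathbf{p} \; \mathbf{v} \; \mathbf{g}]^\mathbf{T}$ defined above can be sketched in a few lines of Python. This is a minimal sketch under our own simplifying assumptions: a finite-difference velocity estimate and a constant-velocity goal extrapolation stand in for the Bayesian state estimation described below.
\begin{verbatim}
import numpy as np

# Minimal sketch of the pedestrian state x = [p v g]^T in R^6.
# The finite-difference velocity and the constant-velocity goal
# extrapolation are illustrative assumptions, not the Bayesian
# estimator used in the actual pipeline.
def estimate_state(track, dt=1.0, horizon=20):
    """track: (T, 2) array of tracked positions z_0 ... z_t."""
    track = np.asarray(track, dtype=float)
    p = track[-1]                         # current position
    v = (track[-1] - track[-2]) / dt      # velocity estimate
    g = p + horizon * dt * v              # intermediate goal guess
    return np.concatenate([p, v, g])      # state vector in R^6

# Example: three tracked positions of one pedestrian.
x = estimate_state([[0.0, 0.0], [0.4, 0.1], [0.8, 0.2]])
\end{verbatim}
In the actual pipeline, $\mathbf{v}$ and $\mathbf{g}$ are instead estimated with the Bayesian inference step described next, which also compensates for tracking noise.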
The input to our algorithm is one frame of real-world crowd video at a time, and our goal is to compute these behavior features for each agent from these frames. An adaptive multi-person or pedestrian tracker is used to compute the observed position of each pedestrian on a 2D plane, denoted as ($\mathbf{z}_0 \cdots \mathbf{z}_t$). Furthermore, we use new state estimation and behavior-learning algorithms that can also compensate for tracking noise and perform robust behavior analysis. We do not make any assumptions about the dynamics or the actual velocity of each agent in the crowd. Since we do not know the dynamics or true state of each agent, we estimate its state $\mathbf{x}$ from the recent observations for each pedestrian. We use a Bayesian inference technique to estimate the most likely state of each pedestrian in an online manner and thereby compute the state of the overall crowd, $\mathbf X$. Based on the estimated crowd states, we compute the trajectory behavior feature of each agent. These features are grouped together to analyze the behavior or movement patterns, and are also used for various training and surveillance applications. \noindent {\bf Interactive State Computation:} We use an online approach that is based on the current and recent states of each pedestrian. In other words, it does not require future knowledge or future state information for any agent. Because we estimate the state during each frame, our formulation can capture the local and global behavior or the intent of each agent. \begin{figure}[ht] \centering \includegraphics[width = 1.0\linewidth]{figures/Anamoly.jpg} \caption{\textbf{Anomaly Detection.} In this example, we see that one pedestrian (marked in green) suddenly makes a U-turn (local feature) in a crowd where everyone is walking in a specific direction/field (global feature). Our system detects this as an anomaly. } \label{fig:AnomalyExample} \end{figure} Our approach uses a realtime multi-person tracking algorithm to extract the pedestrian trajectories from the video. There is considerable research in the computer vision literature on online or realtime tracking. To reliably estimate the motion trajectory in a dense crowd setting, we use RVO (reciprocal velocity obstacle)~\cite{van2011reciprocal} -- a local collision-avoidance and navigation algorithm -- as the non-linear motion model. For more details, we direct the reader to~\cite{bera2014,bera2015efficient,bera2015reach}. \begin{figure*}[t] \centering \includegraphics[width = 1.0\linewidth]{figures/overview.png} \caption{\textbf{Overview of our approach.} We highlight the different stages of our interactive algorithm: tracking, pedestrian state estimation, and behavior learning. The local and global features refer to individual vs. overall crowd motion features. These computations are performed at realtime rates for each input frame.
} \label{fig:overview} \end{figure*} \begin{table}[h] \centering \resizebox{0.9\linewidth}{!}{% \begin{tabular}{|l|l|l|l|} \hline \textbf{Video} & \textbf{Density} & \textbf{Total Frames} & \textbf{BLT} \\ \hline ARENA (01\_01) & Low & 1060 & 0.002 \\ \hline ARENA (01\_02) & Low & 890 & 0.004 \\ \hline ARENA (03\_05) & Low & 1440 & 0.002 \\ \hline ARENA (03\_06) & Low & 1174 & 0.002 \\ \hline ARENA (06\_01) & Low & 2941 & 0.001 \\ \hline ARENA (06\_04) & Low & 1582 & 0.002 \\ \hline ARENA (08\_02) & Low & 792 & 0.002 \\ \hline ARENA (08\_03) & Low & 746 & 0.006 \\ \hline ARENA (10\_03) & Low & 1173 & 0.002 \\ \hline ARENA (10\_04) & Low & 1188 & 0.005 \\ \hline ARENA (10\_05) & Low & 894 & 0.004 \\ \hline ARENA (11\_03) & Low & 329 & 0.001 \\ \hline ARENA (11\_04) & Low & 729 & 0.002 \\ \hline ARENA (11\_05) & Low & 666 & 0.002 \\ \hline ARENA (14\_01) & Low & 1081 & 0.001 \\ \hline ARENA (14\_03) & Low & 1242 & 0.004 \\ \hline ARENA (14\_05) & Low & 1509 & 0.001 \\ \hline ARENA (14\_06) & Low & 857 & 0.002 \\ \hline ARENA (14\_07) & Low & 1312 & 0.004 \\ \hline ARENA (15\_02) & Low & 917 & 0.004 \end{tabular}} \caption{Performance of trajectory level behavior learning on a single core for different benchmarks: We highlight the number of frames of extracted trajectories and the time spent in learning pedestrian behaviors (BLT: behavior learning time, in seconds). Our learning and trajectory computation algorithms demonstrate interactive performance on these complex crowd scene analysis scenarios.} \label{tab:dataset} \end{table} \subsection{Anomaly detection} \label{sec:Anomaly} Anomaly detection is an important problem that has been the focus of work in diverse research areas and applications. It corresponds to the identification of pedestrians, events, or observations that do not conform to an expected pattern or to other pedestrians in a crowd dataset. Typically, the detection of anomalous items or agents can lead to improved automatic surveillance. Anomaly detection can be categorized into two classes based on the scale of the behavior that is being extracted~\cite{kratz2009anomaly}: global anomaly detection and local anomaly detection. A global anomaly typically affects a large portion of, if not the entire, crowd, while a local anomaly is limited to an individual scale (for example, individuals moving against the crowd flow). We primarily use our trajectory-based behavior characteristics for local anomaly detection. In other words, we detect a few behaviors that are rare and are only observed in the video during certain periods. These periods can be as long as the length of the video or as short as a few hundred frames. That is, we classify an anomaly as temporally uncommon behavior. For example, a person's behavior going against the flow of crowds may be detected as an anomaly at one point, but the same motion may not be detected as an anomaly later in the video if many other pedestrians are moving in the same direction. For anomaly detection we compare the distance between the local and global pedestrian features of every pedestrian (we refer the reader to~\cite{Bera_2016_CVPR_Workshops} for more technical details and analysis). When an anomaly appears in a scene, the anomaly's features tend to be isolated in the cluster of which it is a part. In other words, the pedestrian's motion will be different from that of the surrounding crowd.
If the Euclidean distance between the \textit{global} and \textit{local} feature is more than a threshold value, we classify it as an anomaly. \begin{eqnarray} \mathrm{dist}(\mathbf{b}^l, \mathbf{b}^g) > \mathrm{Threshold} \end{eqnarray} This threshold is a user-tunable parameter. If this threshold is set low, the sensitivity of the anomaly detection will increase, and vice versa. \section{Quantitative Results} We compare the accuracy of our motion segmentation and anomaly detection methods using the quantitative metrics presented in Table 1 and Table V, as described in Li et al.~\cite{LiCrowdedSceneAnalysis2015}. Table 1 in~\cite{LiCrowdedSceneAnalysis2015} provides a true detection rate for motion pattern segmentation. It is based on the criterion that the approach successfully detected the regions containing the moving pedestrians. Although we cannot directly compare the numbers with pixel-based performance measures, MOTP values (Table 1) can be an indirect measure of the true detection rate of motion segmentation. Compared to the value range of \textbf{0.4-1.0} in~\cite{LiCrowdedSceneAnalysis2015}, the corresponding values computed by our approach are in the range of \textbf{0.7-0.8} in terms of detecting moving pedestrians, even for unstructured videos. These numbers indicate that the performance of our method is comparable to the state of the art. \begin{figure}[htb] \centering \subfloat[]{\includegraphics[width=0.43\linewidth]{./figures/UCSD-1.png}} \subfloat[]{\includegraphics[width=0.48\linewidth]{./figures/UCSD-1_result1.png}} \caption{\textbf{Anomaly Detection}. We also evaluate other datasets like UCSD. Trajectories of 63 real pedestrians are extracted from a video. One person in the middle walks against the flow of the crowd. Our method can capture the anomaly of this pedestrian's behavior or movement by comparing the behavior features with those of other pedestrians. } \label{fig:Anomaly} \end{figure} Fig.~\ref{fig:Anomaly} shows the results of anomaly detection in different crowd videos. \textbf{879-38 video dataset~\cite{mikel_dataDriven}}: The trajectories of $63$ pedestrians are extracted from the video. One person in the middle is walking against the flow of pedestrians through a dense crowd. Our method can distinguish the unique behavior of this pedestrian by comparing its behavior features with those of the other pedestrians. In the \textbf{UCSD-Peds1-Biker} and \textbf{UCSD-Peds1-Cart} benchmarks, our method is able to distinguish parts of the trajectories of the biker and the cart because their speeds were noticeably different from those of other pedestrians. Apart from ARENA, we evaluated the accuracy of the anomaly detection algorithm on the UCSD PEDS1 dataset~\cite{Mahadevan.anomaly.2010} and compared it with Table V in Li et al.~\cite{LiCrowdedSceneAnalysis2015} in Table 2. Our method successfully detected the following anomalies in the ARENA dataset: \textit{person checking vehicle}, \textit{different motion pattern}, \textit{person on a bike}, \textit{push and run}, \textit{abnormal motion near vehicle}, \textit{man touching vehicle}, \textit{hit and run}, \textit{people suddenly running}, and \textit{possible mugging}.
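For reference, the thresholded anomaly rule of Section~\ref{sec:Anomaly} reduces to a few lines of code. The sketch below is a minimal illustration; taking the crowd mean as the global feature and the two-dimensional feature layout are our assumptions, not the exact features of our pipeline.
\begin{verbatim}
import numpy as np

# Minimal sketch of the rule dist(b_l, b_g) > Threshold.
def detect_anomalies(features, threshold):
    """features: (N, F) array, one behavior feature per pedestrian."""
    b_global = features.mean(axis=0)   # crowd-level (global) feature
    dists = np.linalg.norm(features - b_global, axis=1)
    return dists > threshold           # boolean anomaly mask

# Four pedestrians; the last one moves against the common direction.
feats = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [-1.0, 0.0]])
print(detect_anomalies(feats, threshold=1.0))  # [False False False True]
\end{verbatim}
As discussed above, lowering the threshold increases the sensitivity of the detector.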
\begin{table} \centering \resizebox{\linewidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{{\bf Reference}} & \multirow{2}{*}{{\bf Dataset}} & \multicolumn{5}{c|}{{\bf Performance}} \\ \cline{3-7} & & {\bf \it Area under ROC Curve} & {\bf \it Accuracy} & {\bf \it DR} & {\bf \it Equal Error Rate} & {\bf \it Online/Offline} \\ \hline {\bf Our Method} & \multirow{5}{*}{UCSD} & {\bf 0.873} & {\bf 85\%} & {\bf -} & {\bf 20\%} & {\bf Online} \\ \cline{1-1} \cline{3-7} Wang 2012 & & 0.9 & - & 85\% & - & Offline \\ \cline{1-1} \cline{3-7} Cong 2013 & & 0.86 & - & - & 23.9 & Offline \\ \cline{1-1} \cline{3-7} Cong 2012 & & 0.98-0.47 & 46\% & 46\% & 20\% & Offline \\ \cline{1-1} \cline{3-7} Thida 2013 & & 0.977 & - & & 17.8\% & Offline \\ \hline Our Method & 879-44 & {\bf 0.97} & {\bf 80\%} & {\bf -} & {\bf 13\%} & {\bf Online} \\ \hline Our Method & ARENA & {\bf 0.91} & {\bf 76\%} & {\bf -} & - & {\bf Online} \\ \hline \end{tabular}} \caption{Comparison of anomaly detection techniques. Our method has results comparable to state-of-the-art offline methods for anomaly detection.} \end{table} \begin{table}[] \centering \begin{tabular}{|l|l|l|} \hline \textbf{Video Name} & \textbf{Camera ID} & \textbf{Threat Level} \\ \hline 11\_03 & TRK\_RGB\_1 & High \\ \hline 15\_02 & TRK\_RGB\_1 & High \\ \hline 22\_02 & ENV\_RGB\_3 & High \\ \hline 14\_06 & TRK\_RGB\_1 & Medium \\ \hline 15\_06 & TRK\_RGB\_1 & Medium \\ \hline 14\_07 & TRK\_RGB\_1 & Low \\ \hline 10\_04 & TRK\_RGB\_1 & Low \\ \hline 06\_01 & TRK\_RGB\_1 & Low \\ \hline 10\_05 & TRK\_RGB\_1 & Low \\ \hline \end{tabular} \caption{Details of the anomalies detected in the ARENA Dataset.} \end{table} {\small \bibliographystyle{aaai}
\section{Introduction}\label{sec:Intro} Let $A$ be an excellent Henselian discrete valuation ring with perfect residue field $k$ of exponential characteristic $p \ge 1$. Let $\sX$ be a regular scheme which is projective and flat over $A$. Let $X \subset \sX$ be the reduced special fiber. If the map $\sX \to \Spec(A)$ is an isomorphism, then Gabber's generalization of Suslin's rigidity theorem \cite{Gabber} says that the algebraic $K$-theories of $\sX$ and $X$ are isomorphic with coefficients prime to $p$. However, this rigidity theorem does not hold when the relative dimension of $\sX$ over $A$ is positive. One can then ask if it is possible to prove such an isomorphism for the higher Chow groups (which are the building blocks of $K$-theory in view of \cite{FS}) in certain bi-degrees. This is the context of the present work. Let $\CH_1(\sX)$ denote the classical Chow group \cite{Fulton} of 1-dimensional cycles on $\sX$. If $\sX$ is smooth over $A$ and $k$ is finite or algebraically closed, Saito and Sato \cite[Corollary~0.10]{SS} showed that there is a restriction map $\rho: {\CH_1(\sX)} \otimes_{\Z} {\Z}/{m \Z} \to {\CH_0(X)}\otimes_{\Z} {\Z}/{m \Z}$ which is an isomorphism, whenever $m$ is prime to the exponential characteristic of $k$. As part of their proof of the above restriction isomorphism, Saito and Sato showed that the {\'e}tale cycle class map for ${\CH_1(\sX)}\otimes_{\Z} {\Z}/{m \Z}$ is an isomorphism more generally for every model $\sX\to \Spec(A)$ with semi-stable reduction, i.e., such that the reduced special fiber $X$ has simple normal crossings (again, under the assumption that the residue field $k$ is finite or algebraically closed). As an application of this, they proved that if $K$ is a local field with finite residue field and $Y$ is smooth and projective over $K$, then $\CH_0(Y)\otimes_{\Z} {\Z}/{m \Z}$ is finite, which was originally a conjecture of Colliot-Thélène \cite{CT95}. \begin{comment} No assumption on the model $\sX$ of $Y$ is necessary for the finiteness result of Saito and Sato: in fact, in view of the étale realization isomorphism of \cite[Theorem 1.16]{SS}, the finiteness of the group ${\CH_1(\sX)}\otimes_{\Z} {\Z}/{m \Z}$, which implies the finiteness of $\CH_0(Y)\otimes_{\Z} {\Z}/{m \Z}$, holds more generally for every family $\sX\to \Spec(A)$ with semi-stable reduction, i.e.~such that the reduced special fiber $X$ has simple normal crossing (again, under the assumption that the residue field $k$ is finite or algebraically closed). However, in this greater generality the comparison between ${\CH_1(\sX)} \otimes_{\Z} {\Z}/{m \Z}$ and ${\CH_0(X)} \otimes_{\Z} {\Z}/{m \Z}$ no longer holds. \end{comment} Inspired by an argument originally due to Bloch and discussed in \cite[Appendix A]{EW}, the result of Saito and Sato was revisited and generalized by Esnault, Kerz and Wittenberg in \cite{EKW}. Under the assumption that the reduced special fiber $X$ is a simple normal crossing divisor in $\sX$, it was observed in \cite{EKW} that it is possible to replace the classical Chow group (see \cite{Fulton}) $\CH_0(X)$ of the special fiber $X$ with the Friedlander-Voevodsky \cite{FV} motivic cohomology $H^{2d}(X, \Z(d))$, where $d = \dim_k(X)$, and still prove the existence of an isomorphism \begin{equation}\label{eq:restri-EKW-intro} \rho: {\CH_1(\sX)} \otimes_{\Z} {\Z}/{m \Z} \to H^{2d}(X, {\Z}/{m \Z} (d)), \end{equation} provided that some extra assumptions on $m$ or on the residue field are satisfied.
This approach allowed Esnault, Kerz and Wittenberg to generalize the restriction isomorphism of Saito and Sato by allowing the field $k$ to belong to a bigger class than just finite or algebraically closed fields, and the reduced special fiber to be a simple normal crossing divisor rather than just smooth. In fact, in the case of good reduction, they showed that $\rho$ is an isomorphism for any perfect residue field $k$. Note that there is always a surjective map $H^{2d}(X, \Z(d)) \surj \CH_0(X)$ for a simple normal crossings divisor $X \subset \sX$. But this is not in general an isomorphism, even with finite coefficients. In this paper, we show that if we further replace the $(2d,d)$ motivic cohomology group of the reduced special fiber $X$ by its Levine-Weibel Chow group of 0-cycles \cite{LW}, then the restriction isomorphism of Saito and Sato holds without any condition on $X$ whenever the residue field is algebraically closed. More generally, we prove the following generalization of \cite{EKW} for arbitrary perfect residue fields. We let $\sZ_1(\sX)$ denote the free abelian group on the set of integral 1-dimensional closed subschemes of $\sX$ and let $\sZ^g_1(\sX)$ denote the subgroup of $\sZ_1(\sX)$ generated by integral cycles which are flat over $A$ and do not meet the singular locus of $X$. It follows from the moving lemma of Gabber, Liu and Lorenzini \cite{GLL} that the composite map \[\sZ^g_1(\sX) \inj \sZ_1(\sX) \surj \CH_1(\sX)\] is surjective. For any reduced quasi-projective scheme $Y$ over a field, let $\CH^{LW}_0(Y)$ denote the Levine-Weibel Chow group of 0-cycles on $Y$, first introduced in \cite{LW}. It is a quotient of the free abelian group $\sZ_0(Y\setminus Y_{\rm sing})$ of $0$-cycles supported in the regular locus of $Y$ (see \S~\ref{sec:Chow-grp} for a reminder of its definition). Let $m$ be an integer prime to the exponential characteristic of $k$ and let $\Lambda = {\Z}/{m\Z}$. For an abelian group $M$, write $M_{\Lambda} = M \otimes_{\Z} \Lambda$. Then the following holds. \begin{thm}\label{thm:Main-1} Let $\sX$ be a regular scheme which is projective and flat over an excellent Henselian discrete valuation ring with perfect residue field. Let $X$ denote the reduced special fiber of $\sX$. Then there exists a commutative diagram \begin{equation}\label{eqn:Main-1-0} \begin{tikzcd} \sZ^g_1(\sX)_{\Lambda} \arrow[r, "\wt{\rho}"] \arrow[d, twoheadrightarrow] & \sZ_0(X \setminus X_{\sing})_{\Lambda} \arrow[d, twoheadrightarrow] \\ \CH_1(\sX)_\Lambda & \CH^{LW}_0(X)_{\Lambda} \arrow[l, "\gamma"] \end{tikzcd} \end{equation} such that $\gamma$ is surjective. \end{thm} Here, the vertical maps are the canonical projections, and $\wt{\rho}$ is the group homomorphism given by taking a $1$-cycle in good position and intersecting it with the reduced special fiber $X$. Let us explain how this theorem relates to the construction of \cite{EKW}. Suppose that $\sX$ has semi-stable reduction. Then by \cite[Theorem 5.1]{EKW} there exists a unique surjective homomorphism $\gamma_{EKW}$ making the diagram \[\begin{tikzcd} \sZ^g_1(\sX)_{\Lambda} \arrow[r, "\wt{\rho}"] \arrow[d, twoheadrightarrow] & \sZ_0(X \setminus X_{\sing})_{\Lambda} \arrow[d, twoheadrightarrow] \\ \CH_1(\sX)_\Lambda & H^{2d}(X, \Lambda(d)) \arrow[l, "\gamma_{EKW}"] \end{tikzcd} \] commutative. The group at the bottom right corner is the motivic cohomology group with $\Lambda$ coefficients (as in \eqref{eq:restri-EKW-intro}).
Combining this with the cycle class map constructed in \cite{BK-1}, we then obtain a commutative diagram of surjections \[ \begin{tikzcd} \CH_1(\sX)_\Lambda & \CH^{LW}_0(X)_{\Lambda} \arrow[l, twoheadrightarrow, "\gamma"] \arrow[d, twoheadrightarrow, "\cyc_X^{\mathcal{M}}"] \\ & H^{2d}(X, \Lambda(d)) \arrow[lu, twoheadrightarrow, "\gamma_{EKW}"] \end{tikzcd} \] so that we can interpret Theorem \ref{thm:Main-1} as a lift to the Levine-Weibel Chow group of the inverse restriction map considered in \cite{EKW}. Note that $\gamma_{EKW}$ exists only in the semi-stable case, while the diagram \eqref{eqn:Main-1-0} with $\gamma$ exists without any assumption on the special fiber. One consequence of \thmref{thm:Main-1} is the following. \begin{cor}\label{cor:sum**} In the notations of \thmref{thm:Main-1}, suppose that the map $\wt{\rho} \colon \sZ_1^g(\sX)_{\Lambda} \to \sZ_0(X\setminus X_{\rm sing})_{\Lambda}$ descends to a morphism between the Chow groups \begin{equation}\label{eq:rho**} \rho \colon \CH_1(\sX)_\Lambda \to \CH^{LW}_0(X)_{\Lambda}. \end{equation} Then $\rho$ is an isomorphism. If moreover $\sX$ has semi-stable reduction, then there is a commutative diagram of isomorphisms \[ \begin{tikzcd} \CH_1(\sX)_\Lambda \arrow[r, "\rho"] \arrow[rd] & \CH^{LW}_0(X)_{\Lambda} \arrow[d, twoheadrightarrow, "\cyc_X^{\mathcal{M}}"]\\ & H^{2d}(X, \Lambda(d)) \end{tikzcd} \] \end{cor} The diagonal arrow in the semi-stable case agrees with the map $\rho$ of \cite[Theorem 1.1]{EKW}. We expect that the homomorphism $\rho$ in \eqref{eq:rho**} always exists, and the reason for this expectation is twofold. On one side, the Levine-Weibel Chow group is expected to be part of a satisfactory theory of cycles on singular varieties, closer to the $K$-theory of vector bundles than to the cdh-motivic cohomology. The restriction homomorphism $\rho$ should then be seen as a cycle-theoretic incarnation (in certain bi-degrees) of the restriction map on $K$-groups with $\Lambda$-coefficients \[\iota^*\colon K_0(\sX; \Lambda) \to K_0(X; \Lambda) \] induced by the inclusion $\iota\colon X\hookrightarrow\sX$. The relationship between the Levine-Weibel Chow group and the $K_0$ group has been the object of investigation by many authors (we recall here \cite{LW}, \cite{Levine-2}, \cite{LevineBloch}, \cite{PW}, \cite{PW-div}, \cite{KrishnaSrinivas}, \cite{Krishna3Fold} to name a few). It is known that the group $\CH_0^{LW}(X)$ can be used to detect invariants of ``additive'' type. For example, if $X$ is an arbitrary reduced curve over a field $k$, we have \[ \CH_0^{LW}(X)\xrightarrow{\cong} \Pic(X) \cong H^1(X, \mathbb{G}_m)\] generalizing the classical relationship between line bundles and Weil divisors, while \[ H^2(X, \Z(1)) \cong H^2(X_{\rm sn}, \Z(1)) \cong \Pic(X_{\rm sn})\] where $X_{\rm sn}$ denotes the semi-normalization of $X$. This reflects the fact that the functor $X\mapsto \Pic(X)$ considered on $\mathbf{Sch}(k)$ rather than on $\mathbf{Sm}(k)$ is not $\mathbb{A}^1$-invariant, and thus cannot be captured by an $\mathbb{A}^1$-invariant theory like Voevodsky's motivic cohomology. On the other side, however, with torsion coefficients prime to the exponential characteristic of $k$, there are no additive invariants to detect, and the non-$\mathbb{A}^1$-invariant theory ``collapses'' to the classical one. This statement can be made precise in the context of the theory of motives with modulus, as recently developed by Kahn, Saito and Yamazaki; see \cite[Corollary 4.2.6 and Remark 4.2.7 b)]{KSY-RecandMotives}, which uses some results from \cite{BJAlg}.
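A minimal worked instance of this collapse (a standard example, recalled here only for illustration): let $X \subset \P^2_k$ be the cuspidal cubic. The normalization $\P^1_k \to X$ is a universal homeomorphism, so $X_{\rm sn} \cong \P^1_k$, and a standard computation gives
\[ \CH_0^{LW}(X) \cong \Pic(X) \cong \Z \oplus \mathbb{G}_a(k), \qquad H^2(X, \Z(1)) \cong \Pic(X_{\rm sn}) \cong \Z. \]
The additive part $\mathbb{G}_a(k) = k$ is uniquely $m$-divisible since $m$ is invertible in $k$, so after applying $-\otimes_{\Z}\Lambda$ both groups become $\Lambda$ and the difference between the two theories disappears.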
We therefore conjecture that the cycle class map \begin{equation}\label{eq:cycMot} \cyc_X^{\mathcal{M}}\colon \CH_0^{LW}(X)_\Lambda \to H^{2d}(X, \Lambda(d)) \end{equation} is always an isomorphism with $\Lambda=\Z/m\Z$-coefficients. In a similar spirit, we expect that $\cyc_X^{\mathcal{M}}$ is an isomorphism with integral coefficients (if $k$ admits resolution of singularities, or with $\Z[1/p]$-coefficients otherwise) if the singularities of $X$ are sufficiently mild, intuitively where additive phenomena do not occur. This is supported by the following result: if the residue field $k$ is algebraically closed and $X \subset \sX$ is a simple normal crossing divisor, it is shown in \cite{BK-1} that there is a canonical isomorphism \begin{equation}\label{eqn:Main-1-1} \cyc_X^\mathcal{M} \colon \CH^{LW}_0(X)\otimes_{\Z}\Z[1/p] \xrightarrow{\cong} H^{2d}(X, \Z[1/p](d)), \end{equation} which holds integrally if $k$ admits resolution of singularities. In view of the above discussion, the existence of the map $\rho$ in \eqref{eq:rho**} is therefore coherent with the expectation of \cite{EKW} in the semi-stable reduction case, as explained in \cite[1]{EKW}. We are in the situation of the corollary if we impose some extra assumptions. \begin{thm}\label{thm:Main-2}Let $\sX$ be as in Theorem \ref{thm:Main-1}, and assume moreover that $A$ has equal characteristic. Then the map $\wt{\rho}$ in~\eqref{eqn:Main-1-0} descends to a morphism between the Chow groups in the following cases. \begin{enumerate} \item If $X$ has only isolated singularities and $k$ is finite. \item If $\dim(X)=2$ (with no further assumptions on the singularities of $X$). \end{enumerate} In both cases, the map $\tilde{\rho}$ induces an isomorphism \[\rho\colon \CH_1(\sX)_{\Lambda} \xrightarrow{\cong} \CH^{LW}_0(X)_{\Lambda},\] and both groups are finite if $k$ is finite. \end{thm} If $A$ has equal characteristic, then the Gersten conjecture for Milnor $K$-theory holds, thanks to \cite{Kerz09}, and the existence of the map $\rho$ can be deduced from the validity of the Bloch-Quillen formula for singular varieties. See Section 5.3 for details (and for a comment about the assumption on the singularities of $X$ in the case of relative dimension 2). \medskip \begin{remk}\label{rem:algclosed} If the residue field $k$ is algebraically closed, the cycle class map to \'etale cohomology $\cyc_X^{\et}\colon \CH_0^{LW}(X)_\Lambda\to H^{2d}_{\et}(X, \Lambda(d))$ is an isomorphism (see \lemref{lem:et-iso}). This gives in particular that the map $\tilde{\rho}$ of \eqref{eqn:Main-1-0} descends to a morphism between the Chow groups $\rho\colon \CH_1(\mathcal{X})_{\Lambda}\xrightarrow{\simeq} \CH_0^{LW}(X)_{\Lambda}$, and so $\CH_1(\mathcal{X})_\Lambda\xrightarrow{\simeq} H^{2d}_{\et}(\mathcal{X}, \Lambda(d))$ by proper base change. Note however that this isomorphism between $\CH_1(\mathcal{X})_\Lambda$ and the \'etale cohomology group was already obtained by Bloch \cite[Theorem~A.1]{EW}, and we do not get more information in the algebraically closed field case.
\end{remk} \begin{comment} If the residue field $k$ is algebraically closed and $X \subset \sX$ is a simple normal crossing divisor, it is shown in \cite{BK-1} that there is a canonical isomorphism \begin{equation}\label{eqn:Main-1-1} \CH^{LW}_0(X)_{\Lambda} \xrightarrow{\cong} H^{2d}(X, \Lambda(d)), \end{equation} where $\Lambda = \Z$ if $k$ admits resolution of singularities and $\Z[\tfrac{1}{p}]$ otherwise. More generally, it is shown in \cite{BK-1} that the isomorphism \eqref{eqn:Main-1-1} holds without any condition on $X$ if we take $\Lambda = {\Z}/m$ with $m$ prime to the residue characteristic. \thmref{thm:Main-2} therefore shows that the restriction isomorphism, as formulated by Esnault, Kerz and Wittenberg \cite{EKW} for a semi-stable reduction, actually holds for arbitrary reduction (when $k$ is algebraically closed). We expect part (1) of \thmref{thm:Main-2} to hold when $k$ is finite, but we do not have sufficient information about the Levine-Weibel Chow group at present to handle this case. \end{comment} We end the introduction with a brief outline of this text. The proofs of our main theorems are inspired by the ideas of Esnault, Kerz and Wittenberg \cite{EKW}. The new insight is to introduce the Levine-Weibel Chow group and its modified version from \cite{BK} into the picture and to show how this leads to the above generalizations, using the moving lemmas of Gabber, Liu and Lorenzini \cite{GLL}, some ideas from the Bertini theorems of Jannsen and Saito \cite{SS}, and a construction of cycle class maps to {\'e}tale cohomology and to the Nisnevich cohomology of Milnor $K$-sheaves. These cycle class maps play an important role in the calculation of $\CH^{LW}_0(X)$ with torsion coefficients. In \S~\ref{sec:Bertini}, we discuss some forms of Bertini theorems over a base and in \S~\ref{sec:Lifting}, we prove our result for relative curves. We finish the proof of \thmref{thm:Main-1} in \S~\ref{sec:Prf-1}. In \S~\ref{sec:Res-iso}, we construct the cycle class maps for the Levine-Weibel Chow group and prove \thmref{thm:Main-2}. \section{Bertini type theorems over a base}\label{sec:Bertini} In this section, we discuss some of the technical lemmas which we need in order to prove \thmref{thm:Main-1} when $\dim(\sX)$ is at least two. As some of these results are of independent interest and also used elsewhere, we state them separately. We fix the following general framework. \subsection{Setting}\label{sec:general-setting} Let $S$ be the spectrum of a discrete valuation ring $A$ with field of fractions $K$. Let $\eta$ be the generic point of $S$ and $s$ its closed point. Write $k$ for the residue field of $A$, which is assumed to be perfect. We let $\fM = (\pi)$ denote the maximal ideal of $A$. Throughout this text, we fix a regular scheme $\sX$ which is flat and projective over $S$. We let $\phi:\sX \to S$ be the structure morphism and let $d \ge 0$ denote the relative dimension of $\sX$ over $S$. Write $\sX_s = \sX \times_A k:= \sX \times_S \Spec(k)$ for the special fiber of $\phi$ and $X = (\sX_s)_{\red}\hookrightarrow \sX_s$ for the reduced special fiber. Given any scheme $Y$, we write $Y_{\rm sing}\subsetneq Y$ for the singular locus of $Y_{\rm red}$. In this section, we shall assume $k$ to be infinite. \begin{defn}\label{defn:hplane} A hyperplane $H\subset \P^N_S$ of the projective space $\P^N_S$ over $S$ is a closed subscheme of $\P^N_S$ corresponding to an $S$-rational point of the dual $(\P^N_S)^{\vee} := {\rm Gr}_S(N-1,N)$.
\end{defn} By definition, an $S$-point of ${\rm Gr}_S(N-1,N)$ corresponds to (an isomorphism class of) a surjection $q\colon \sO_S^{\oplus N+1}\to \sQ$, where $\sQ$ is locally free (hence free, since $S$ is the spectrum of a DVR) of rank $N$. Fixing a basis $\{e_0,\ldots, e_N\}$ of $\sO_S^{\oplus N+1}$, we can write the kernel of $q$ as $\sum_{i=0}^N \langle a_i\rangle e_i \subset \sO_S^{N+1}$ for elements $a_i\in A$, not all in $\fM$. Here, $\langle a \rangle$ is the submodule of $\sO_S$ generated by $a\in A$. If $X_0,\ldots, X_N$ are the homogeneous coordinate functions on $\P^N_S$, then the hyperplane $H$ corresponding to $q$ is the zero locus of the linear polynomial $q(x) = \sum_{i=0}^{N}a_i X_i$. The same equation defines the hyperplane $H_\eta\subset \P^N_K$, the generic fiber of $H$. We denote by $H_s$ the hyperplane in $\P^N_k$ defined by the reduction of $q(x)\mod \pi$. In order to show the existence of good hyperplanes of $\P^N_S$, we will frequently use the following simple but crucial remark, due to Jannsen and Saito. \begin{lem}$($\cite[Theorem~0.1]{JannsenSaito}$)$\label{lem:Jannsen-Saito} Let $P$ be a projective $S$-scheme and let ${\rm sp}\colon P(K)\to P(k)$ be the specialization map, given by $x \mapsto \ov{\{x\}} \cap P_s$. Let $V_1\subset P_\eta$ and $V_2\subset P_s$ be two open dense subsets of $P_\eta$ and $P_s$, respectively. Assume that ${\rm sp}$ is surjective, $P$ has irreducible fibers and $P_s$ is a rational variety over $k$. Then the set \[ U := V_1(K) \cap {\rm sp}^{-1}(V_2(k)) \] is non-empty. \end{lem} \begin{proof} This is extracted from the middle of the proof of \cite[Theorem 0.1]{JannsenSaito}. Before we give the proof, we note that if $x \in P(K)$, then the map $\ov{\{x\}} \to S$ must be an isomorphism and hence $\ov{\{x\}} \cap P_s$ consists of a unique closed point. In particular, the map ${\rm sp}\colon P(K)\to P(k)$ is well-defined. Let $Z_1 = P_\eta\setminus V_1$ and $Z_2 = P_s\setminus V_2$ be the (reduced) closed complements of $V_1$ and $V_2$, respectively. Write $\ol{Z_1}$ for the closure of $Z_1$ in $P$. One clearly has that $Z_1(K) \subset {\rm sp}^{-1}((\ol{Z_1}\cap P_s)(k))$, so that the interesting set $U$ contains ${\rm sp}^{-1}((V_2 \setminus (\ol{Z_1}\cap P_s) )(k))$. Since ${\rm sp}$ is surjective by assumption, it is enough to observe that $(V_2 \setminus (\ol{Z_1}\cap P_s) )(k)$ is non-empty. Now, we are given that $V_2$ is a dense open subset and $(\ol{Z_1}\cap P_s)$ is a proper closed subset of the irreducible scheme $P_s$. It follows that $V_2 \setminus (\ol{Z_1}\cap P_s)$ is open and dense in $P_s$. Since $k$ is infinite and $P_s$ is rational over $k$, one knows that $(V_2 \setminus (\ol{Z_1}\cap P_s))(k)$ is dense in $P_s$. This finishes the proof. \end{proof} If we take $P = (\P^N_S)^{\vee}$, the three conditions of the Lemma are satisfied. Since any hyperplane $H\subset \P^N_S$ is completely determined by its generic fiber $H_\eta$ (as $(\P^N_S)^{\vee}(S) = (\P^N_K)^{\vee}(K)$), we see that the `good' hyperplanes over $S$ are parameterized by subsets of the form $V(K)\cap {\rm sp}^{-1}(U(k))$, for good open subsets $V$ of $(\P^N_K)^{\vee}$ and $U$ of $(\P^N_k)^{\vee}$, representing the prescribed behavior of the generic fiber and of the special fiber of $H$. We call a hyperplane $H$ corresponding to a $K$-rational point of a set of the form $V(K)\cap {\rm sp}^{-1}(U(k))$ \textit{general}. Our first application is the following proposition.
\begin{prop}\label{prop:Bertini-regularity} Let $\sX\subset \P^N_S$ be as in \ref{sec:general-setting} such that $d \ge 2$. Then a general hyperplane $H\subset \P^N_S$ intersects $\sX$ transversely, i.e., the fiber product $\sX\times_{\P^N_S} H$ is a regular and flat $S$-scheme. If the generic fiber $\sX_\eta$ of $\sX$ is smooth over $K$, then $\sX_\eta\cdot H_\eta$ is smooth as well. \end{prop} \begin{proof} We first note that since $\sX\to S$ is flat and both $\sX$ and $S$ are regular, it follows that $X = (\sX_s)_{\rm red}$ is equi-dimensional of dimension $d$. We begin by claiming that there exists an open subset $U$ of $(\P^N_k)^\vee$ with the dense subset $U(k)$ of $k$-rational points such that the following hold. Let $H$ be a hyperplane of $\P^N_k$ corresponding to a point of $U(k)$. Then $H$ does not contain any component of $X$, and if $h$ denotes the image in $\sO_{X,x}$ of a local equation for $H$ at a closed point $x\in X$, either $h$ is a unit or $h \in \mathfrak{m}_{X,x}\setminus \mathfrak{m}_{X, x}^2$. It is clear that there exists a dense open subset $U'$ of $(\P^N_k)^\vee$ such that no hyperplane corresponding to a $k$-rational point of $U'$ contains any irreducible component of $X$. So we only need to find an open subset $U$ of $(\P^N_k)^\vee$ with the dense subset $U(k)$ such that if $H$ is the hyperplane of $\P^N_k$ corresponding to a point of $U(k)$ and if $h$ denotes the image in $\sO_{X,x}$ of a local equation for $H$ at a closed point $x\in X$, either $h$ is a unit or $h \in \mathfrak{m}_{X,x}\setminus \mathfrak{m}_{X, x}^2$. To prove this latter claim, we first assume that $k={\overline{k}}$ is separably (hence algebraically, since $k$ is perfect) closed. Let $W$ be the incidence variety $W\subset X \times (\P^N_k)^{\vee}$ consisting of points $(x, H)$ such that either $H$ contains a component of $X$ or $H$ does not contain any component of $X$ but for any local equation $h$ of $H$ at $x$, one has $h\in \mathfrak{m}_{X,x}^2 \subset \sO_{X, x}$. We need to estimate the dimension of $W$. Let $V=H^0(\P^N_k, \cO_{\P^N_k}(1))$ be the $(N+1)$-dimensional $k$-vector space of linear forms, with basis $\{X_0, X_1, \ldots, X_N\}$. Let $x\in X$ be a closed point. Up to a change of coordinates, we can assume that the hyperplane cut out by $X_0$ does not pass through $x$. We then get an isomorphism $V\xrightarrow{\simeq} \sO_{\P^N_k, x}/ \mathfrak{m}_{\P^N_k,x}^2$, sending $X_0$ to $1$. By composition, we have a surjection \[ \phi_x\colon V \surj \sO_{X,x}/{\mathfrak{m}_{X,x}^2} \] and the kernel of $\phi_x$ is the $k$-vector space $V_x = \{H\in (\P^N_k)^\vee(k)\,|\, x\in H \ \mbox{and} \ h\in \mathfrak{m}_{X,x}^2\}$. Moreover, $V_x$ consists precisely of the hyperplanes which are bad at $x$. Notice now that we have an exact sequence of $k$-vector spaces \[ 0\to \mathfrak{m}_{X,x}/\mathfrak{m}_{X,x}^2\to \sO_{X,x}/\mathfrak{m}_{X,x}^2 \to \sO_{X,x}/\mathfrak{m}_{X,x} = k \to 0. \] In particular, we get $\dim_k (\sO_{X,x}/\mathfrak{m}_{X,x}^2) \geq 1+\dim(\sO_{X,x}) = 1+d$. Thus $\dim_k(V_x)\leq (N+1) - (d+1) = N-d$. If $W_x$ denotes the fiber at $x$ of $W$ along the first projection $p_1\colon W\to X\times (\P^N_k)^\vee\to X$, then we have $W_x = \mathbb{P}(V_x)$ and this implies from the previous estimate that $\dim_k(W_x) \le N-d-1$. Since the projection $p_1$ is surjective, $X$ is equi-dimensional of dimension $d$, and for each $x\in X$, the fiber $W_x$ is a projective space of dimension at most $N-d-1$, we deduce that $W$ has dimension at most $(N-d-1) +d = N-1$.
Since $X$ is proper over $k$, the second projection map $p_2\colon W\to (\P^N_k)^\vee$ is closed, hence the image is a proper closed subset of dimension at most $N-1$. We conclude that $U :=(\P^N_k)^\vee \setminus p_2(W)$ is open and dense in $(\P^N_k)^\vee$. Suppose now that $k$ is an arbitrary infinite perfect field and let $\ov{k}$ be an algebraic closure of $k$. Let $X_{\ov{k}}$ denote the base change of $X$ to $\ov{k}$ and let $U \subset (\P^N_{\ov{k}})^\vee$ be the dense open subset of good hyperplanes over $\ov{k}$ obtained as above. Since $k$ is infinite and $(\P^N_{k})^\vee$ is rational, we know that the set of closed points in $(\P^N_{\ov{k}})^\vee$ which are defined over $k$ is dense in $U$. Let $H \in U(k)$ be any such point. Let $x \in X$ be any closed point and let $h$ denote the local equation of $H$ in $\sO_{X,x}$. Suppose that $h$ is not a unit in $\sO_{X,x}$ so that $h \in \fm_{X,x}$. We know that $\pi^{-1}(x)$ is a finite set of closed points $\{x_1, \ldots , x_r\}$, where $\pi: X_{\ov{k}} \to X$ is the projection. Moreover, $H_{\ov{k}}$ has the property that its local equation $h$ lies in ${\fm_{X_{\ov{k}},x_i}} \setminus {\fm^2_{X_{\ov{k}},x_i}}$ for each $i$. It follows that $h$ must lie in $\fm_{X,x} \setminus \fm_{X,x}^2$. In other words, there is an open subset $U \subset (\P^N_{k})^\vee$ with the dense subset $U(k)$ such that every member of $U(k)$ satisfies the desired property. This proves the claim. We now let ${\rm sp}\colon (\P^N_K)^{\vee}(K)\to (\P^N_k)^\vee(k)$ be the specialization map, and let $H$ be any hyperplane corresponding to a $K$-rational point of ${\rm sp}^{-1}(U(k))$ (note that this set is non-empty). Since this is a point in a projective space, say of coordinates $(a_0:a_1:\ldots: a_N)$, we can assume that not all the $a_i$'s are divisible by $\pi$. In particular, $H$ is not vertical, i.e., it is not contained in the special fiber $\P^N_k$. Hence it is automatically flat over $S$. Let $x\in X$ be a closed point and let $h$ be the image in $\sO_{\sX,x}$ of a local equation defining $\sX\cdot H = \sX\times_{\P^N_S} H$ in a neighborhood of $x$. If $h$ is a unit in $\sO_{\sX,x}$, then $x\notin \sX\cdot H$ and there is nothing to say. Assume then that $h\in \mathfrak{m}_{\sX,x}\subset \sO_{\sX,x}$ and write $\ov{h}$ for the image of $h$ in $\sO_{X,x}$. By construction, $\ov{h}$ is a local equation for $X\cdot H_s$ and hence $\ov{h}\in \mathfrak{m}_{X,x}\setminus \mathfrak{m}^2_{X,x}$ by our choice of $U$. But this forces $h \in \mathfrak{m}_{\sX,x}\setminus \mathfrak{m}^2_{\sX,x}$ as well. Since $\sO_{\sX, x}$ is regular by assumption, this implies that $\sO_{\sX, x}/(h) = \sO_{\sX\cdot H, x}$ is a regular local ring. We have thus shown that every closed point $x\in (\sX\cdot H)_s$ has an open neighborhood in $\sX\cdot H$ where $\sX\cdot H$ is regular. Since $\sX$ is proper over $S$, these neighborhoods form a cover of $\sX\cdot H$, proving that $\sX\cdot H$ is regular, as required. For the last assertion, suppose that $\sX_\eta$ is smooth over $K$. In this case, the classical theorem of Bertini (see, for example, \cite[6.11]{Jou}) asserts that there exists a dense Zariski open set $V\subset (\P^N_K)^\vee$ parametrizing hyperplanes $H_\eta$ of $\P^N_K$ such that the intersection $\sX_\eta\cdot H_\eta$ is smooth. It is then enough to take $H\in V(K)\cap {\rm sp}^{-1}(U(k))$, which is non-empty by \lemref{lem:Jannsen-Saito}, to get a general hyperplane of $\P^N_S$ which satisfies all the required conditions.
\end{proof} \begin{remk}\label{remk:Bertini-rem} The proof of Proposition~\ref{prop:Bertini-regularity} gives in fact a bit more. In the setting of this proposition, we can consider the following situation. Let $(P)_s$ be any property which is generically satisfied by a hyperplane section of $X$ in $\P^N_k$. An example of such a property could be `being Cohen-Macaulay' if $X$ is Cohen-Macaulay, or `being irreducible' if $X$ is irreducible (see \cite{Jou}). Here, `generically' means that the property is satisfied by each hyperplane in an open dense subset $V_P$ of $(\P^N_k)^\vee$. The set $U\cap V_P$ for the open set $U$ constructed above is then open and dense in $(\P^N_k)^\vee$. Thus, any hyperplane $H$ of $\P^N_S$ which corresponds to a $K$-rational point of ${\rm sp}^{-1}((U\cap V_P)(k))$ will intersect $\sX$ transversely, and its special fiber will moreover satisfy the property $(P)$. \end{remk} We will now show that, under some extra conditions, there is a weak version of the Theorem of Altman and Kleiman \cite{AK} on hypersurface sections containing a subscheme. The proof of this fact uses a combination of ideas from Bloch's appendix to \cite{EW} and from \cite[Theorem~4.2]{SS}. \begin{prop}\label{prop:Bertini-AK} Let $\sX\subset \P^N_S$ be as in \ref{sec:general-setting} such that $d \ge 2$. Let $Z\subset \sX$ be a regular, integral, flat relative $0$-cycle over $S$. Let $\sO_\sX(1)$ be the restriction of the line bundle $\sO_{\P^N_S}(1)$ to $\sX$, and let $\sI\subset \sO_{\P^N_S}$ be the ideal sheaf of $Z$ in $\P^N_S$. Assume that $Z\cap X$ is supported on one closed point $x \in X$. Then, for all integers $n \gg 0$ and a general section $\sigma \in H^0(\P^N_S, \sI(n))$, the hypersurface $H = (\sigma)$ defined by $\sigma$ has the following properties. \begin{enumerate} \item $\sX\cdot H$ is regular, flat and projective over $S$. \item $H\supset Z$. \end{enumerate} \end{prop} \begin{proof} Let $W = Z\times_S X$ be the scheme-theoretic intersection of $Z$ with the reduced special fiber. We start by noting that the embedding dimension $e_x(W) := \dim_{k(x)} \mathfrak{m}_{W,x}/ \mathfrak{m}_{W,x}^2$ is at most $1$. Indeed, $W\subset Z$ and $Z$ is regular, finite and flat over $S$ by assumption. Hence $e_x(W) \leq e_x(Z)= \dim (Z) = 1$. As a consequence, if we let $I_{W,x}$ denote the ideal of $W$ in $\sO_{X,x}$, we see that $I_{W,x}/(I_{W,x}\cap \mathfrak{m}_{X,x}^2) \neq 0$. In fact, suppose that $I_{W,x} \subset \mathfrak{m}_{X,x}^2$. Then $\mathfrak{m}_{X,x}/( \mathfrak{m}_{X,x}^2+I_{W,x}) = \mathfrak{m}_{X,x}/ \mathfrak{m}_{X,x}^2$ has dimension $d \ge 2$. But $\mathfrak{m}_{X,x}/( \mathfrak{m}_{X,x}^2+I_{W,x}) = \mathfrak{m}_{W,x}/ \mathfrak{m}_{W,x}^2$ has dimension at most one as shown above. This leads to a contradiction. Let $\ov{\sI}$ be the ideal sheaf of $W$ in $\P^N_k$ and let $n \gg 0$ be any integer such that $\ov{\sI}(n)$ is generated by the global sections $V=H^0(\P^N_k, \ov{\sI}(n))\subset H^0(\P^N_k, \sO(n))$. We now claim that there exists a non-empty open subset $U$ in the space $\mathbb{P}(V)$ such that for any $\sigma \in U(k)$, the image $\sigma_x$ of $\sigma$ in $\sO_{X,x}$ (for a closed point $x\in X$) is either a unit or an element of $\mathfrak{m}_{X,x}\setminus \mathfrak{m}_{X,x}^2$. Since the linear system associated to a basis of $V$ has base locus $W$, this condition is satisfied at each closed point $y \neq x$ for $\sigma$ in $U'\subset V$, thanks to the proof of Proposition~\ref{prop:Bertini-regularity}, with $U'$ open and non-empty.
For $n \gg 0$, there is clearly another non-empty open $U''\subset V$ such that for $\sigma \in U''$, the restriction of $\sigma$ has non-zero image in $I_{W,x}/(I_{W,x}\cap \mathfrak{m}_{X,x}^2)$ (which is itself non-zero by the argument above). Let $U = U'\cap U''$. If $n \gg 0$, the map $a\colon H^0(\P^N_S, {\sI}(n)) \to H^0(\P^N_k, \ov{\sI}(n))$ is surjective. Then any $\sigma \in H^0(\P^N_S, {\sI}(n))$ such that $a(\sigma) \in U$ will satisfy the conditions of the proposition. Indeed, it is clear by our choice that $H=(\sigma)$ contains $Z$, while the regularity of $\sX\cdot H$ is proved exactly as in \propref{prop:Bertini-regularity}. \end{proof} \begin{remk}\label{remk:Hens-*} The reader can easily see that when $A$ is Henselian (which is the case for the rest of this text), the assumption in \propref{prop:Bertini-AK} that $Z\cap X$ be supported on one closed point $x \in X$ is redundant. \end{remk} \section{Lifting of zero-cycles}\label{sec:Lifting} In this section, we shall recall the definitions of the Chow groups which are used in the statements of the main results. We shall then show how the 0-cycles on the special fiber can be lifted to good 1-cycles on $\sX$. Using this lifting, we shall give a proof of the base case of \thmref{thm:Main-1}, namely, the case of relative curves. This case will be used in the next section to prove the general case of \thmref{thm:Main-1}. We keep the notations of \S~\ref{sec:general-setting}. Throughout this section, we shall assume that the base ring $A$ is excellent and Henselian, with perfect residue field $k$ which is not necessarily infinite. \subsection{The Chow groups of the model and the special fiber}\label{sec:Chow-grp} Let $\sZ_1(\sX)$ be the free abelian group on the set of integral 1-dimensional cycles in $\sX$. Let $\sR_1(\sX)$ be the subgroup of $\sZ_1(\sX)$ generated by the cycles which are rationally equivalent to zero (see, for example, \cite[\S~1]{GLL} or \cite[Chapter~20]{Fulton}). Let $\CH_1(\sX)=\sZ_1(\sX)/\sR_1(\sX)$ be the Chow group of 1-cycles on $\sX$ modulo rational equivalence. We call an integral cycle $Z\in \sZ_1(\sX)$ \textit{good} if it is flat over $S$ and $Z\cap X_{\rm sing} = \emptyset$. We let $\sZ_1^g(\sX) \subset \sZ_1(\sX)$ be the free abelian group on the set of good cycles. In a similar spirit, we write $\sZ_1^{vg}(\sX)\subset \sZ_1^g(\sX)$ for the free abelian group on the set of integral flat $1$-cycles $Z$ which are good in the above sense and are regular as schemes. We call these cycles \textit{very good} on $\sX$. As $\sX$ is projective over $S$, it is an $FA$-scheme in the sense of \cite[2.2(1)]{GLL}. Therefore, the moving lemma of Gabber, Liu and Lorenzini \cite[Theorem 2.3]{GLL} tells us that the canonical map \begin{equation}\label{eqn:GLL*} \frac{\sZ_1^g(\sX)}{\sZ_1^g(\sX)\cap \sR_1(\sX)} \to \CH_1(\sX) \end{equation} is an isomorphism. In other words, every cycle $\alpha \in \CH_1(\sX)$ has a representative $\alpha = \sum_{i=1}^n n_i [Z_i]$ with each $Z_i$ a good integral cycle. This will play a crucial role in the proofs of our main results. We now recall the definition of Levine-Weibel Chow group of 0-cycles on $X$ from \cite{LW} and its modified version from \cite{BK}. Let $X_{\rm reg}$ denote the disjoint union of the smooth loci of the $d$-dimensional irreducible components of $X$. A regular (or smooth) closed point of $X$ will mean a closed point lying in $X_{\rm reg}$.
Let $Y \subsetneq X$ be a closed subset not containing any $d$-dimensional component of $X$ such that $X_{\rm sing} \subseteq Y$. Let $\sZ_0(X,Y)$ be the free abelian group on closed points of $X \setminus Y$. We shall often write $\sZ_0(X,X_{\rm sing})$ as $\sZ_0(X)$. \begin{defn}\label{defn:0-cycle-S-1} Let $C$ be a reduced scheme which is of pure dimension one over $k$. We shall say that a pair $(C, Z)$ is \emph{a good curve relative to $X$} if there exists a finite morphism $\nu\colon C \to X$ and a closed proper subscheme $Z \subsetneq C$ such that the following hold. \begin{enumerate} \item No component of $C$ is contained in $Z$. \item $\nu^{-1}(X_{\rm sing}) \cup C_{\rm sing}\subseteq Z$. \item $\nu$ is local complete intersection at every point $x \in C$ such that $\nu(x) \in X_{\rm sing}$. \end{enumerate} \end{defn} Let $(C, Z)$ be a good curve relative to $X$ and let $\{\eta_1, \cdots , \eta_r\}$ be the set of generic points of $C$. Let $\sO_{C,Z}$ denote the semilocal ring of $C$ at the finite set $Z \cup \{\eta_1, \cdots , \eta_r\}$. Let $k(C)$ denote the ring of total quotients of $C$ and write $\sO_{C,Z}^\times$ for the group of units in $\sO_{C,Z}$. Notice that $\sO_{C,Z}$ coincides with $k(C)$ if $Z = \emptyset$. As $C$ is Cohen-Macaulay, $\sO_{C,Z}^\times$ is the subgroup of the group of units $k(C)^\times$ of the ring of total quotients consisting of those $f$ which are regular and invertible in $\sO_{C,x}$ for every $x\in Z$ (see \cite{EKW}, Section 1 for further details). Given any $f \in \sO^{\times}_{C, Z} \inj k(C)^{\times}$, we denote by ${\rm div}_C(f)$ (or ${\rm div}(f)$ in short) the divisor of zeros and poles of $f$ on $C$, which is defined as follows. If $C_1,\ldots, C_r$ are the irreducible components of $C$, and $f_i$ is the image of $f$ in $k(C_i)$, we set ${\rm div}(f)$ to be the $0$-cycle $\sum_{i=1}^r {\rm div}(f_i)$, where ${\rm div}(f_i)$ is the usual divisor of a rational function on an integral curve in the sense of \cite{Fulton}. As $f$ is an invertible regular function on $C$ along $Z$, ${\rm div}(f)\in \sZ_0(C,Z)$. By definition, given any good curve $(C,Z)$ relative to $X$, we have a push-forward map $\sZ_0(C,Z)\xrightarrow{\nu_{*}} \sZ_0(X)$. We shall write $\sR_0(C, Z, X)$ for the subgroup of $\sZ_0(X)$ generated by the set $\{\nu_*({\rm div}(f)) \mid f \in \sO^{\times}_{C, Z}\}$. Let $\sR_0(X)$ denote the subgroup of $\sZ_0(X)$ generated by the subgroups $\sR_0(C, Z, X)$, where $(C, Z)$ runs through all good curves relative to $X$. We let $\CH^{BK}_0(X) = \frac{\sZ_0(X)}{\sR_0(X)}$. If we let $\sR^{LW}_0(X)$ denote the subgroup of $\sZ_0(X)$ generated by the divisors of rational functions on good curves as above, where we further assume that the map $\nu: C \to X$ is a closed immersion, then the resulting quotient group ${\sZ_0(X)}/{\sR^{LW}_0(X)}$ is denoted by $\CH^{LW}_0(X)$. Such curves on $X$ are called the {\sl Cartier curves}. There is a canonical surjection $\CH^{LW}_0(X) \surj \CH^{BK}_0(X)$. The Chow group $\CH^{LW}_0(X)$ was discovered by Levine and Weibel \cite{LW} in an attempt to describe the Grothendieck group of a singular scheme in terms of algebraic cycles. The modified version $\CH^{BK}_0(X)$ was introduced in \cite{BK}. \begin{comment} A \emph{Cartier curve on $X$ relative to $Y$} is a purely $1$-dimensional closed subscheme $C\hookrightarrow X$ that is reduced, has no component contained in $Y$ and is defined by a regular sequence in $X$ at each point of $C\cap Y$.
Let $C$ be a Cartier curve in $X$ relative to $Y$ and let $\{\eta_1, \ldots , \eta_r\}$ denote the set of its generic points. Let $\sO_{C, C \cap Y}$ denote the semilocal ring of $C$ at $(C \cap Y) \cup \{\eta_1, \ldots , \eta_r\}$. Let $k(C)$ denote the ring of total quotients of $C$. Notice that $\sO_{C, C \cap Y}$ and $k(C)$ coincide if $C \cap Y = \emptyset$. Since $C$ is a reduced curve, it is Cohen-Macaulay and hence the canonical map $k(C) \to \stackrel{r}{\underset{i = 1}\prod} \sO_{C, \eta_i}$ is an isomorphism. In particular, the map $\theta_C:\sO^{\times}_{C, C \cap Y} \to \stackrel{r}{\underset{i = 1}\prod} \sO^{\times}_{C, \eta_i}$ is injective. Given $f \in \sO^{\times}_{C, C \cap Y}$, let $\{f_i\} = \theta_C(f)$ and let $(f_i)_{\eta_i}:= {\rm div}(f_i)$ denote the divisor of zeros and poles of $f_i$ on $\ov{\{\eta_i\}}$ in the sense of \cite{Fulton}. We let $(f)_C = \divf(f) := \stackrel{r}{\underset{i =1}\sum} (f_i)_{\eta_i}$. As $f$ is an invertible regular function on $C$ in a neighborhood of $C \cap Y$, we see that $(f)_C \in \sZ_0(X,Y)$. Let $\sR_0(X,Y)$ denote the subgroup of $\sZ_0(X,Y)$ generated by $(f)_C$, where $C$ is a Cartier curve on $X$ relative to $Y$ and $f \in \sO^{\times}_{C, C \cap Y}$. The Chow group of 0-cycles on $X$ relative to $Y$ is the quotient \begin{equation}\label{eqn:0-cyc-1} \CH^{LW}_0(X,Y) = \frac{\sZ_0(X,Y)}{\sR_0(X,Y)}. \end{equation} We shall often write $\sR_0(X,X_{\rm sing})$ as $\sR_0(X)$ and $\CH^{LW}_0(X, X_{\rm sing})$ as $\CH^{LW}_0(X)$. \end{comment} We remark here that the definition of $\CH^{LW}_0(X)$ given above is mildly different from the one given in \cite{LW} because we do not allow non-reduced Cartier curves. However, it does agree with the definition of \cite{LW} if $k$ is infinite by \cite[Lemmas~1.3, 1.4]{Levine-2}. Note that over finite fields the situation is unclear (but see \cite{BKS} for the case of surfaces), since the standard norm trick to reduce to the case of infinite fields for comparison does not work for the Levine-Weibel Chow group. The situation is substantially better if one uses its variant \cite{BK} instead. \subsection{Lifting 0-cycles on the special fiber to 1-cycles on $\sX$} \label{sec:Lifting-prf} From the above definitions of $\CH_1(\sX)$ and $\CH^{LW}_0(X)$, it is not clear whether the 1-cycles on $\sX$ always restrict to admissible 0-cycles on $X$, nor whether the restriction (whenever defined) preserves rational equivalence. This question will be addressed in the next section. Here, we solve the reverse problem, namely, we show that the Levine-Weibel 0-cycles on $X$ can be lifted to good 1-cycles on $\sX$, following the idea of \cite{EKW}. Using this lifting, we shall prove \thmref{thm:Main-1}. We fix an integer $m$ prime to the exponential characteristic of $k$ and let $\Lambda = {\Z}/{m\Z}$. For an abelian group $M$, we let $M_{\Lambda} = M \otimes_{\Z} \Lambda$. Let $[Z]\in \sZ_1^g(\sX)$ be an integral good 1-cycle. Intersecting $[Z]$ with the reduced special fiber $X$ gives rise to a 0-cycle $[Z\cap X]$, which is supported in the regular locus of $X$. Here, $[Z\cap X]$ is the 0-cycle in $\sZ_0(X)$ associated to the (possibly non-reduced) 0-dimensional scheme-theoretic intersection $Z \cap X$. This gives rise to the {\sl restriction} homomorphism on the cycle group \begin{equation}\label{eq:restriction-map-generators} \wt{\rho}: \sZ_1^g(\sX)\to \sZ_0(X, X_{\rm sing}), \quad [Z]\mapsto [Z\cap X]. \end{equation} To prove \thmref{thm:Main-1}, we begin by recalling the following result.
The proof is classical, and in this form is essentially taken from \cite{EKW}. We review the proof in order to fix our notation. \begin{prop}$($\cite[\S~4]{EKW}$)$\label{prop:rho-is-onto} Given a regular closed point $x \in X$, there exists an integral $1$-cycle $Z_x\subset \sX$ which is regular, finite and flat over $S$ such that $Z_x\times_S X = \{x\}$ scheme-theoretically. In particular, the restriction map $\wt{\rho}$ of \eqref{eq:restriction-map-generators} is surjective. \end{prop} \begin{proof} Let $x\in X_{\rm reg}$ be a closed point and let $\sO_{\sX, x}$ be the local ring of $\sX$ at $x$. Since $\sX$ is regular, $\cO_{\sX, x}$ is a regular local ring. In particular, it is a unique factorization domain. There is then a prime element $\sigma\in \mathfrak{m}_{\sX, x} \setminus \mathfrak{m}_{\sX, x}^2$ and an integer $n>0$ such that $\sigma^n = \pi c$, where $\pi\in \sO_{\sX,x}$ is a uniformizer of $A$ and $c$ is a unit. Indeed, $\pi$ cannot be a product of distinct prime elements in $\sO_{\sX,x}$, since $\sO_{\sX, x}\otimes_A (A/(\pi)) = \cO_{\sX_s, x}$ has a unique minimal prime (its reduction $\sO_{X, x}$ being a regular local ring, hence a domain). We can now complete $\sigma$ to a regular sequence $(\sigma, a_1,\ldots, a_d)$ generating the maximal ideal $\mathfrak{m}_{\sX, x}$ such that the images $(\ov{a}_1, \ldots, \ov{a}_d)$ in $\sO_{X, x} = \sO_{\sX,x}/(\sigma)$ form a regular sequence, generating the maximal ideal $\mathfrak{m}_{X, x}$. Let $\Spec({\sO_{\sX,x}/(a_1,\ldots, a_d)})$ be the closed subscheme of $\Spec ({\sO_{\sX,x}})$ associated to the ideal $(a_1,\ldots, a_d)$. It is clearly integral, regular, local, $1$-dimensional and flat over $S$. If we let $\wt{Z}_x$ denote its closure in $\sX$, then $\wt{Z}_x$ is projective and dominant of relative dimension zero over $A$. In particular, it is finite and flat over $S$. We can therefore write $\wt{Z}_x = \Spec(B)$. Since $A$ is Henselian, the finite $A$-algebra $B$ is a product of local rings. Hence, there is a unique irreducible component $Z_x$ of $\wt{Z}_x$ such that $x\in Z_x$. The scheme $Z_x$ is then regular because its local ring at the unique closed point $x$ agrees with ${\sO_{\sX,x}}/(a_1,\ldots, a_d)$. Furthermore, $Z_x$ is integral, finite and flat over $S$ with $Z_x\times_S X = \{x\}$. \end{proof} Note that thanks to the proposition above, we have in fact shown that the composite map \[ \sZ_1^{vg}(\sX) \inj \sZ_1^{g}(\sX) \xrightarrow{\wt{\rho}} \sZ_0(X, X_{\rm sing}) \] is surjective. \subsection{The case of relative dimension one}\label{sec:dim-1} We continue with the assumption that $A$ is Henselian and $k$ is perfect (but not necessarily infinite). Suppose that $\dim_S(\sX)=1$ so that $\sX$ is a family of projective \textit{curves} over $S$. We shall now give the proof of \thmref{thm:Main-1} in this case. Since $X$ is reduced by construction, we have, by \cite[Lemma~3.12]{BK}, the canonical isomorphisms $\CH_0^{LW}(X) \xrightarrow{\simeq} \CH^{BK}_0(X) \xrightarrow{\simeq} \Pic(X) \cong H^1_{\et}(X, \mathbb{G}_m)$. As a scheme, $\sX$ is integral and purely two-dimensional so that we can identify $\CH_1(\sX)$ with $\CH^1(\sX)$. Since $\sX$ is moreover separated, regular (hence locally factorial) and Noetherian, there are classical isomorphisms $\CH^1(\sX) \xrightarrow{\simeq} \Pic(\sX) \cong H^1_{\et}(\sX, \mathbb{G}_m)$.
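Recall for the reader's convenience that, for any scheme $Y$ on which $m$ is invertible, there is the Kummer short exact sequence of {\'e}tale sheaves \[ 1 \to \mu_m \to \mathbb{G}_m \xrightarrow{(-)^m} \mathbb{G}_m \to 1. \] Its long exact cohomology sequence produces a boundary map $\Pic(Y) = H^1_{\et}(Y, \mathbb{G}_m) \to H^2_{\et}(Y, \mu_m)$ with kernel $m\Pic(Y)$, and hence an injection $\Pic(Y)_{\Lambda} \hookrightarrow H^2_{\et}(Y, \Lambda(1))$.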
Tensoring these groups with $\Lambda = \Z/m$, the Kummer sequence gives us injections \[ \CH^1(\sX)_\Lambda \xrightarrow{\cong} H^1_{\et}(\sX, \mathbb{G}_m)_{\Lambda} \hookrightarrow H^2_{\et}(\sX, \Lambda(1)) \] \[ \CH^{LW}_0(X)_\Lambda \xrightarrow{\cong} H^1_{\et}(X, \mathbb{G}_m)_{\Lambda} \hookrightarrow H^2_{\et}(X, \Lambda(1)). \] Using these injections, we get a diagram of solid arrows \begin{equation}\label{eqn:case-of-curves} \xymatrix{ & \sZ_1^g(\sX)_{\Lambda} \ar@{->>}[r]^-{\wt{\rho}} \ar@{->>}[d] \ar@{->>}[dl]_-{\alpha_{\sX}} & \sZ_0(X, X_{\rm sing})_{\Lambda} \ar@{->>}[d] \ar@{-->}[dl] \ar@{->>}[dr]^-{\alpha_X} & \\ \CH_1(\sX)_{\Lambda} \ar[r]^-{\cong} \ar[dr]_{cyc^{\et}_{\sX}} & \ \Pic(\sX)_{\Lambda} \ar@{^{(}->}[d] \ar[r]^-{\rho} & \Pic(X)_{\Lambda} \ar@{^{(}->}[d] & \CH^{LW}_0(X)_{\Lambda} \ar[dl]^-{cyc^{\et}_X} \ar[l]_-{\cong} \\ & H^2_{\et}(\sX, \Lambda(1)) \ar[r]^-{\cong} & H^2_{\et}(X, \Lambda(1)). &} \end{equation} All horizontal arrows in the middle are induced by the restriction to the reduced special fiber. In particular, the two squares in the middle are commutative. The two triangles on the top left and top right can be easily seen to be commutative by recalling the construction of the isomorphism between the Picard group and the Chow group of codimension one cycles. The two triangles on the bottom left and bottom right commute by the definition of the cycle class maps to {\'e}tale cohomology. The bottom horizontal arrow in~\eqref{eqn:case-of-curves} is an isomorphism by the rigidity theorem for {\'e}tale cohomology (a consequence of the proper base change theorem, see \cite[Chapter~VI, Corollary~2.7]{Milne}). The top horizontal arrow is surjective by Proposition~\ref{prop:rho-is-onto}. From the commutativity of \eqref{eqn:case-of-curves}, we immediately see that the canonical map $\alpha_{\sX}\colon \sZ_1^g(\sX)_{\Lambda} \surj \CH_1(\sX)_{\Lambda}$ factors via $\wt{\rho}$. Equivalently, we have ${\rm Ker}(\wt{\rho}) \subseteq {\rm Ker} (\alpha_{\sX})$. This gives the dashed arrow $\wt{\gamma}\colon \sZ_0(X, X_{\rm sing})_\Lambda \to \CH_1(\sX)_\Lambda$, which is automatically surjective. A second inspection of \eqref{eqn:case-of-curves}, using this time the fact that $\CH_1(\sX)_\Lambda \to H^2_{\et}(\sX, \Lambda(1))$ is injective, shows similarly that ${\rm Ker} (\alpha_X) \subseteq {\rm Ker} (\wt{\gamma})$. Combining all this, we finally get a surjective group homomorphism $\gamma$ fitting in the commutative diagram \begin{equation}\label{eqn:case-of-curves-0} \xymatrix{ \CH_1(\sX)_{\Lambda} \ar@{^{(}->}[d] & \CH^{LW}_0(X)_{\Lambda} \ar[l]_-{\gamma} \ar@{^{(}->}[d] \\ H^2_{\et}(\sX, \Lambda(1)) \ar[r]^{\cong} & H^2_{\et}(X, \Lambda(1)).} \end{equation} We also deduce from~\eqref{eqn:case-of-curves-0} that $\gamma$ has to be injective as well. Since $\gamma$ is clearly an inverse of the map $\wt{\rho}$ on the generators, we have then shown the following result which proves \thmref{thm:Main-1} and a general form of part (1) of \thmref{thm:Main-2} for curves. \begin{prop}\label{prop:lifting-for-curves} Let $A$ be an excellent Henselian discrete valuation ring with perfect residue field. Let $\sX$ be a regular scheme, flat and projective over $S$ of relative dimension one. Then the restriction homomorphism $\wt{\rho}$ of \eqref{eq:restriction-map-generators} induces an isomorphism \[ \rho \colon \CH_1(\sX)_{\Lambda} \xrightarrow{\cong} \CH^{LW}_0(X)_\Lambda.
\] \end{prop} \section{Proof of \thmref{thm:Main-1}}\label{sec:Prf-1} We shall now prove \thmref{thm:Main-1} using the Bertini theorems of \S~\ref{sec:Bertini} and the lifting proposition of \S~\ref{sec:Lifting}. We assume $A$ to be an excellent Henselian discrete valuation ring with perfect residue field $k$. The rest of the assumptions and notations are the same as in \S~\ref{sec:general-setting}. \subsection{Factorization of $\alpha_{\sX}$ via $\wt{\rho}$} \label{sec:Factor-rho} We begin by showing the first part of Theorem~\ref{thm:Main-1}, i.e., we show that the canonical surjection $\alpha_{\sX}: \sZ^g_1(\sX)_{\Lambda} \surj \CH_1(\sX)_{\Lambda}$ factors through $\wt{\rho}$. This is a consequence of the following result, whose proof goes through the steps of \cite[Proposition~4.1]{EKW}, using Proposition~\ref{prop:Bertini-AK} instead of the Bertini theorem of Jannsen-Saito proved in \cite{SS}. \begin{prop}$($\cite[Proposition~4.1]{EKW}$)$\label{prop:can-lifting} Let $Z \in \sZ_1^g(\sX)$ be a good, integral 1-cycle and let $n[x] = [Z \cap X]$ for some $x \in X_{\rm reg}$ and $n>0$. Then $\alpha_{\sX}( Z - n Z_x) = 0$ in $\CH_1(\sX)_{\Lambda}$, where $Z_x$ is as in Proposition~\ref{prop:rho-is-onto}. \end{prop} \begin{proof} By the standard pro-$\ell$-extension argument, we can assume that the residue field of $A$ is infinite. The proof is now by induction on the relative dimension of $\sX$ over $S$. The case $d = 0$ is trivial and the case $d=1$ is provided by Proposition~\ref{prop:lifting-for-curves}. We now assume that $d \ge 2$. Assume first that $Z$ is regular as well. The general case will be treated later, using a trick due to Bloch \cite[Appendix~A]{EW}. By an iterated application of Proposition~\ref{prop:Bertini-AK}, we can find \begin{enumerate} \item a hypersurface section $H$ of $\sX$ which is regular, flat and projective over $S$ such that $Z \subset H$, and \item a relative curve $H'$ over $S$ (i.e., $\dim_S H'=1$) which is regular, flat and projective over $S$ and contains $Z_x$. \end{enumerate} We can also assume that $Z'':=H'\cap H$ is regular as well, and that $H'\cap H \cap X$ consists only of the reduced point $x$. Note that we can do this since $x \in X$ is in the regular locus of $X$, so that we can choose $H'$ and $H$ which meet transversely there. By our induction hypothesis, we have that $\alpha_H(Z-nZ'') = 0$ in $\CH_1(H)_{\Lambda}$. Moreover, it follows from \propref{prop:lifting-for-curves} that $\alpha_{H'}(Z''-Z_x) =0$ in $\CH_1(H')_{\Lambda} =\Pic(H')_{\Lambda}$. In particular, we get $n \alpha_{H'}(Z''-Z_x) =0$. But then, we get \[ \alpha_{\sX} ( Z-nZ_x) = (\iota_H)_* (\alpha_H( Z-nZ'')) + (\iota_{H'})_* (\alpha_{H'}(nZ''-nZ_x)) =0 \] in $\CH_1(\sX)_{\Lambda}$, as required. Here, $\iota_H$ (resp. $\iota_{H'}$) is the inclusion $H \inj \sX$ (resp. $H' \inj \sX$). Suppose now that $Z$ is not necessarily regular. Following an idea of Bloch, we let $Z^N$ be the normalization of $Z$. Since $A$ is excellent and $Z$ is finite over $A$ (as it is a good 1-cycle), the map $Z^N\to Z$ is finite. In particular, there is a factorization \[ \xymatrix{ Z^N \ar@{^{(}->}[r] \ar[d] & \P^M_{\sX} \ar[d]^q \\ Z \ar@{^{(}->}[r] & \sX,} \] where $q$ is the canonical projection. We are then reduced to proving the statement in $\P^M_{\sX}$ for $Z^N$ and any regular lift of $Z_x$ to $\P^M_{\sX}$, chosen so that it contains $Z^N \cap \P^M_X$. Since $Z^N$ is now regular, the claim follows from the previous case.
\end{proof} An immediate consequence of \propref{prop:can-lifting} is the following. \begin{cor}\label{cor:Factor} The lifting of 0-cycles of Proposition~\ref{prop:rho-is-onto} gives rise to a well-defined group homomorphism $\wt{\gamma}: \sZ_0(X, X_{\rm sing})_{\Lambda} \to \CH_1(\sX)_{\Lambda}$ such that the diagram \begin{equation}\label{eqn:Factor-0} \xymatrix{ \sZ_1^g(\sX)_{\Lambda} \ar[r]^-{\wt{\rho}} \ar[dr]_-{\alpha_{\sX}} & \sZ_0(X, X_{\rm sing})_{\Lambda} \ar[d]^-{\wt{\gamma}} \\ & \CH_1(\sX)_{\Lambda}} \end{equation} commutes. \end{cor} \begin{proof} Let $Z \in \sZ_1^g(\sX)$ be a good, integral 1-cycle. Since $A$ is Henselian and $Z$ is finite over $A$, the intersection $Z \cap X$ must be supported on a (regular) closed point, say, $x \in X$. In particular, we must have $[Z \cap X] = n[x]$ for some integer $n > 0$. Now, it follows from \propref{prop:can-lifting} that \[ \alpha_{\sX}([Z]) - \wt{\gamma} \circ \wt{\rho}([Z]) = \alpha_{\sX}([Z] - n[Z_x]) = 0 \] and this proves the corollary. \end{proof} \subsection{Factorization of $\wt{\gamma}$ through rational equivalence}\label{sec:Factor-gamma} Now that we have constructed the map $\wt{\gamma}$ at the level of the cycle groups, our next goal is to show that it factors through the cohomological Chow group $\CH^{LW}_0(X)_{\Lambda}$ of the reduced special fiber $X$. In fact, we shall show more, in the sense that $\wt{\gamma}$ actually has a factorization \begin{equation}\label{eqn:gamma-BK} \wt{\gamma}: \sZ_0(X, X_{\rm sing})_{\Lambda} \surj \CH^{LW}_0(X)_{\Lambda} \surj \CH^{BK}_0(X)_{\Lambda} \to \CH_1(\sX)_{\Lambda}. \end{equation} As we will see below, apart from giving us a stronger statement, the approach of working with $\CH^{BK}_0(X)$ also allows us to simplify the Cartier curves that give relations in $\sR^{LW}_0(X)$ which we want to kill in $\CH_1(\sX)_{\Lambda}$. It allows us to assume that the Cartier curves are regularly embedded in $X$. This is an essential requirement in our proof. It is not known if the canonical map $\CH^{LW}_0(X) \surj \CH^{BK}_0(X)$ is an isomorphism in general. We refer to \cite[Theorem~3.17]{BK} for some positive results. We shall closely follow the proof of \cite[Theorem~5.1]{EKW} (and we keep similar notations for the reader's convenience), with one simplification and one complication. The simplification is that using the Levine-Weibel Chow group (or, rather, its variant introduced in \cite{BK}), we do not have to deal with the ``type-$1$'' relations (see \cite[\S~2.2]{EKW}), arising from the relations in the Suslin homology group $H_0^{S}(X_{\rm reg})$. On the other hand, the complication is that without any assumption on the geometry of $X$, we have to consider arbitrary l.c.i. curves $C$ (and not simply SNC subcurves in $X$ as in \textit{loc.\ cit.}). Note that these l.c.i. curves may not even be embedded inside $X$. In order to lift our complicated relations in $X$ to the model $\sX$, we shall use the argument of \cite[Lemma 2.5]{GLL}. We will need the following commutative algebra lemma whose proof can be obtained from \cite[Theorem~16.3]{Matsumura}. \begin{lem}\label{lem:regularsequences} Let $R$ be a Noetherian local ring and let $I\subset R$ be an ideal generated by a regular sequence $a_1,\ldots, a_n$. Let $b_1,\ldots, b_n\in I$ be elements such that the image of $\{b_1,\ldots, b_n\}$ in $I/I^2$ is a basis over $R/I$. Then $b_1,\ldots, b_n$ is a regular sequence in $R$.
\end{lem} \begin{prop}\label{prop:Factor-gamma-*} The lifting map $\wt{\gamma}\colon \sZ_0(X, X_{\rm sing})_\Lambda \to \CH_1(\sX)_\Lambda$ of \corref{cor:Factor} factors through $\CH^{BK}_0(X)_\Lambda$. \end{prop} \begin{proof} Since the case of relative dimension one was already handled in \S~\ref{sec:dim-1}, we shall assume that $d = \dim_S(\sX) \ge 2$. We need to show that for any good curve $\nu\colon C\to X$ in the sense of Definition~\ref{defn:0-cycle-S-1} and any rational function $f$ on $C$ which is regular along $\nu^{-1}(X_{\rm sing})$, we have $\wt{\gamma}(\nu_*({\rm div}(f)))=0$ in $\CH_1(\sX)_\Lambda$. We will first show that this relation holds when the curve $C$ is regularly embedded inside $X$ (i.e., when the morphism $\nu$ is a regular closed embedding). The general case will be handled by factoring $\nu$ as a regular closed embedding $C\hookrightarrow \P^N_{X}$ followed by the projection $\P^N_X\to X$, and using the fact that the Chow groups $\CH_1(\sX)$ and $\CH^{BK}_0(X)$ admit push-forward along smooth projective morphisms. So, let $C\hookrightarrow X$ be such an embedded l.c.i. curve. Write $C_\infty$ for the finite set of points $(C\cap X_{\rm sing}) \cup \{\eta_1,\ldots, \eta_r \}$, where each $\eta_i$ is a generic point of $C$ and $C\cap X_{\rm sing}$ denotes the set of closed points of the intersection of $C$ with $X_{\rm sing}$. Let $\sO_{X, C_\infty}$ be the semi-local ring of $X$ at $C_{\infty}$ and let $I_{C, C_{\infty}}$ be the ideal of $C$ in $\sO_{X, C_{\infty}}$ so that $\sO_{C, C_\infty} = {\sO_{X, C_\infty}}/{I_{C, C_{\infty}}}$. By definition, $C$ is regularly embedded at each point $x \in C\cap X_{\rm sing}$, and it is regularly embedded at the generic points. Hence, as a module over $\sO_{C, C_\infty}$, the conormal module $I_{C,C_\infty}/ I_{C,C_\infty}^2$ is free, with a basis given by the image in $I_{C, C_\infty}/I_{C, C_\infty}^2$ of a regular sequence $a_1, \ldots, a_{d-1}$ in $\cO_{X, C_\infty}$. We shall inductively modify the sequence $a_1, \ldots, a_{d-1}$ (without changing the induced basis of $I_{C,C_\infty}/ I_{C,C_\infty}^2$) in order to construct a good lifting of $C$ to the model $\sX$, following the recipe of \cite[Lemma 2.5]{GLL}. First, we note that according to Definition~\ref{defn:0-cycle-S-1}, the curve $C$ is not contained in $X_{\rm sing}$. By a moving argument, we can also assume that $C$ does not contain any component of $X_{\rm sing}$. Indeed, the Cartier condition of $C$ implies that it will contain a component of $X_{\rm sing}$ only if $\dim (X_{\rm sing}) =0$. On the other hand, in this latter case, we can use a moving argument to ensure that $C$ does not meet $X_{\rm sing}$ (see \cite[Lemma~1.3]{ESV}). Thus, the ideal $I_{C, C_\infty}$ of $\sO_{X, C_{\infty}}$ neither contains nor is contained in the localization of any minimal prime $\mathfrak{p}$ of $X_{\rm sing}$ in $\sO_{X, C_\infty}$. Up to possibly adding an element of $I_{C, C_\infty}^2$ to $a_1 \in I_{C, C_\infty} \subset \sO_{X, C_\infty}$, we can now choose $\hat{a}_1 \in \sO_{\sX, C_\infty}$, lifting $a_1$, with the property that $\hat{a}_1$ does not belong to any minimal prime of $X_{\rm sing}$ in $\sO_{\sX, C_\infty}$. In other words, $V(\hat{a}_1)$ in $\Spec(\sO_{\sX, C_\infty})$ does not contain any irreducible component of $X_{\rm sing}$. Moreover, each irreducible component of $V(\hat{a}_1)\cap X_{\rm sing}$ has codimension exactly one in $X_{\rm sing} \times_{\sX} \Spec(\sO_{\sX, C_\infty})$, the latter taken with the reduced induced closed subscheme structure.
Note that thanks to Lemma~\ref{lem:regularsequences}, the modification by adding elements of $I_{C, C_\infty}^2$ gives another regular sequence defining $I_{C, C_\infty}$. We now fix $\hat{a}_1$ and $a_1$ chosen above, and proceed. Since locally $V(\hat{a}_1)\cap C = C$ in $\Spec(\sO_{X, C_\infty})$, the ideal $I_{C, C_\infty}$ is not contained in any minimal prime of $V(\hat{a}_1)\cap X_{\rm sing}$. Thus, we can alter $a_2$ by an element of $I_{C, C_\infty}^2$ so that $a_2$ is not in any minimal prime of $V(\hat{a}_1)\cap X_{\rm sing}$. We now lift $a_2$ to $\hat{a}_2\in \sO_{\sX, C_\infty}$ and look at $V(\hat{a}_1, \hat{a}_2)$ in $\Spec(\sO_{\sX, C_\infty})$. As before, it follows by our construction that each irreducible component of $V(\hat{a}_1, \hat{a}_2)\cap X_{\rm sing}$ has codimension exactly one in $X_{\rm sing}\cap V(\hat{a}_1)$. We fix this $\hat{a}_2$ and the corresponding $a_2$. Again, $a_1,a_2, \ldots, a_{d-1}$ (with $a_2$ accordingly modified) form a regular sequence generating $I_{C, C_\infty}$, thanks to Lemma~\ref{lem:regularsequences}. In general, the choice of $\hat{a}_i$ depends on the previously chosen $\hat{a}_1,\ldots, \hat{a}_{i-1}$. It is chosen with the property that $\hat{a}_i$ is a unit at each generic point of $X_{\rm sing}\cap V(\hat{a}_1, \ldots, \hat{a}_{i-1})$, and that $\hat{a}_i$ lifts $a_i\in I_{C, C_\infty}$. This can be achieved, up to elements of $I_{C, C_\infty}^2$, since locally $V(\hat{a}_1, \ldots, \hat{a}_{i-1}) \cap C = C \not \supset V(\hat{a}_1, \ldots, \hat{a}_{i-1}) \cap X_{\rm sing}$. At the end of the process, we get $\hat{a}_1,\ldots, \hat{a}_{d-1} \in \sO_{\sX, C_\infty}$ with the following properties. \begin{enumerate} \item The sequence $\{\hat{a}_1,\ldots, \hat{a}_{d-1}\}$ restricts to a regular sequence $\{{a}_1,\ldots, {a}_{d-1}\}$ generating the ideal $I_{C, C_\infty}$ in $\sO_{X, C_\infty}$. The images of $\{{a}_1,\ldots, {a}_{d-1}\}$ in $I_{C, C_\infty}/ I_{C, C_\infty}^2$ are the basis we started from. \item Let $V(\hat{a}_1,\ldots, \hat{a}_{d-1}) \subset \Spec(\sO_{\sX, C_\infty})$ be the closed subscheme of $\Spec(\sO_{\sX, C_\infty})$ defined by the ideal $(\hat{a}_1,\ldots, \hat{a}_{d-1})$. Then $V(\hat{a}_1,\ldots, \hat{a}_{d-1})$ intersects $X_{\rm sing}$ in at most finitely many points (the intersection could be empty). \item Let $\hat{C}$ be the closure of $V(\hat{a}_1,\ldots, \hat{a}_{d-1}) $ in $\sX$. Then $\hat{C}$ is flat over $S$ and there exists an open neighborhood $U$ of $C_\infty$ in $X$ such that $(\hat{C}\cap X)\cap U$ and $C\cap U$ coincide scheme-theoretically. In particular, if $T$ denotes the (finite) set of closed points of $\hat{C}\cap X_{\rm sing}$ together with the generic points of $\hat{C}\cap X$, then we have an isomorphism $\sO_{\hat{C}\cap X, T}\cong \cO_{C, C_\infty}\times R$, with $R$ a $1$-dimensional semi-local ring. \end{enumerate} Property (2) follows from the fact that, at each step, $\hat{a}_i$ is a unit at each generic point of $X_{\rm sing}\cap V(\hat{a}_1, \ldots, \hat{a}_{i-1})$, so that the dimension of $X_{\rm sing}\cap V(\hat{a}_1,\ldots, \hat{a}_{i})$ drops by one at every stage. Property (3) is clearly a consequence of (1) and of the construction. It tells us in particular that we can harmlessly throw away any component of $\hat{C}$ which happens to be completely vertical (i.e., the structure map to $S$ factors through the closed point). This is because such a component has to be disjoint from $C$ in a neighborhood of $C_\infty$.
Note that $\hat{C}$ can be taken with the reduced scheme structure, but it may not be integral even if $C$ is. It follows from (3) that the map on units $\cO_{\hat{C}, T}^\times \to \cO_{\hat{C}\cap X, T}^\times \cong \cO_{C, C_\infty}^\times \times R^\times$ is surjective. We can therefore find an element $\hat{f}$ in the ring of total quotients of $\hat{C}$ (which is by (3) a product of fields) which is a regular and invertible function in a neighborhood of $T$ and which restricts to $(f,1)$ (where $f$ was the given function on $C$). In particular, this implies that $\wt{\rho} ({\rm div}_{\hat{C}}(\hat{f}) )= {\rm div}_{C}(f)$. By ${\rm div}_{\hat{C}}(\hat{f})$, here we mean the sum of the divisors on the irreducible components of $\hat{C}$ if $\hat{C}$ is not integral. Note that ${\rm div}_{\hat{C}}(\hat{f})$ is an element of $\sZ_1^g(\sX)$ and that its class in $\CH_1(\sX)_\Lambda$ is $\wt{\gamma}({\rm div}_C(f))$ by construction. Since we clearly have ${\rm div}_{\hat{C}}(\hat{f})=0$ in $\CH_1(\sX)_\Lambda$, this completes the proof of the proposition when $\nu: C \inj X$ is a regular closed immersion. We now prove the general case. So suppose we are given a good curve $\nu: C \to X$ and a rational function $f$ on $C$ as in the beginning of the proof of the proposition. By \cite[Lemma~3.5]{BK}, we can assume that the map $\nu: C \to X$ is a complete intersection morphism. Now, we can find a commutative diagram \begin{equation}\label{eqn:Factor-gamma-*-0} \xymatrix{ C \ar@{^{(}->}[r]^-{\nu'} \ar[dr]_-{\nu} & \P^M_X \ar[d]_-{q} \ar[r] & \P^M_{\sX} \ar[d]^-{q} \ar[dr] \\ & X \ar[r] & \sX \ar[r] & S} \end{equation} for some $M \gg 0$ such that $\nu'$ is a regular closed embedding. Letting $\sY = \P^M_{\sX}$ and $Y = \P^M_X$, this gives a diagram \begin{equation}\label{eqn:Factor-gamma-*-1} \xymatrix{ \sZ_0(Y, Y_{\rm sing})_{\Lambda} \ar[r]^-{{\wt{\gamma}}_Y} \ar[d]_-{q_*} & \CH_1(\sY)_{\Lambda} \ar[d]^-{q_*} \\ \sZ_0(X, X_{\rm sing})_{\Lambda} \ar[r]^-{{\wt{\gamma}}_X} & \CH_1(\sX)_{\Lambda}.} \end{equation} Note that the push-forward map $q_*$ on the left is defined since $q$ is smooth (see \cite[Proposition~3.18]{BK}). It is easily seen from the construction of the cycle $Z_x$ associated to a regular closed point of $X$ in \propref{prop:rho-is-onto} that~\eqref{eqn:Factor-gamma-*-1} commutes. We thus get \[ \wt{\gamma}_X \circ \nu_*({\rm div}(f)) = \wt{\gamma}_X \circ q_*(\nu'_*({\rm div}_C(f))) = q_* \circ \wt{\gamma}_Y(\nu'_*({\rm div}_C(f))) = 0, \] where the last equality follows from the case of a regular closed immersion treated above. This finishes the proof of the proposition. \end{proof} {\bf Proof of \thmref{thm:Main-1}:} The construction of $\wt{\rho}$ is given in~\eqref{eq:restriction-map-generators}. The existence of the map $\gamma$ such that~\eqref{eqn:Main-1-0} commutes follows directly from \corref{cor:Factor} and \propref{prop:Factor-gamma-*}, using the fact that the surjection $\sZ_0(X, X_{\rm sing}) \surj \CH^{BK}_0(X)$ factors as $\sZ_0(X, X_{\rm sing}) \surj \CH^{LW}_0(X) \surj \CH^{BK}_0(X)$. The surjectivity of $\gamma$ follows from \propref{prop:rho-is-onto}. $\hfill\square$ \subsection{} In the above notations, we have constructed a surjective group homomorphism \[ \gamma \colon \CH^{LW}_0(X)_{\Lambda} \surj \CH_1(\sX)_\Lambda, \] which is (by construction) an inverse on the level of generators of the naive restriction map \[ \wt{\rho} \colon \sZ_1^g(\sX)\to \sZ_0(X, X_{\rm sing}) \] for any regular projective and flat scheme $\sX$ over $S$ without any assumption on the residue field (apart from it being perfect). This also does not depend on the geometry of the reduced special fiber $X$.
In particular, we can summarize what we have shown as follows. \begin{cor}\label{cor:sum} Let $A$ be a Henselian discrete valuation ring with perfect residue field. Let $\sX$ be a regular scheme which is projective and flat over $A$ with reduced special fiber $X$. Suppose that the map $\wt{\rho} \colon \sZ_1^g(\sX)\to \sZ_0(X, X_{\rm sing})$ descends to a morphism between the Chow groups \begin{equation}\label{eq:rho} \rho \colon \CH_1(\sX)_\Lambda \to \CH^{LW}_0(X)_{\Lambda}. \end{equation} Then $\rho$ is an isomorphism. \end{cor} \section{The restriction isomorphism}\label{sec:Res-iso} We shall prove \thmref{thm:Main-2} in this section. In other words, we shall show that the restriction homomorphism $\rho$ of \eqref{eq:rho} does exist if additional assumptions on the field $k$ or on the DVR $A$ hold. \subsection{The \'etale cycle class map}\label{ssec:et-cycle-class} The cycle class map from the Chow groups to the {\'e}tale cohomology is well known for smooth schemes. More generally, the {\'e}tale realization of Voevodsky's motives tells us that there are such maps from the Friedlander-Voevodsky motivic cohomology of singular schemes to their {\'e}tale cohomology. But this is not good enough for us since we do not work with motivic cohomology. In this section, we give a construction of the cycle class map from the Levine-Weibel Chow group of a singular scheme to its {\'e}tale cohomology using Gabber's Gysin maps \cite{Fujiwara}. We let $k$ be a perfect field and let $X$ be an equi-dimensional quasi-projective scheme of dimension $d$ over $k$. Let $m$ be an integer prime to the exponential characteristic of $k$ and let $\Lambda = {\Z}/{m\Z}$. Let $x\in X_{\rm reg}$ be a regular closed point of $X$. We have the sequence of maps \begin{equation}\label{eq:def-et-cycle-class} \Z \surj \Lambda \xrightarrow{\cong} H^0_{\et}(k(x), \Lambda) \stackrel{(1)}{\underset{\cong}\to} H^{2d}_{\{x\}, \et}(X, \Lambda(d)) \xrightarrow{(2)} H^{2d}_{\et}(X, \Lambda(d)). \end{equation} The arrow labeled $(1)$ is the Gysin map \cite{Gabber}, using the fact that $x$ is a regular closed point of $X$. This is an isomorphism by the purity and excision theorems in {\'e}tale cohomology. The arrow labeled $(2)$ is the natural `forget support' map. Let $\delta_x$ denote the composite of all maps in~\eqref{eq:def-et-cycle-class}. We let $cyc^{\et}_X(x) = \delta_x(1)$ and extend it linearly to define a group homomorphism \begin{equation}\label{eq:def-et-cycle-class-0} cyc^{\et}_X\colon \sZ_0(X, X_{\rm sing}) \to H^{2d}_{\et}(X, \Lambda(d)). \end{equation} We shall now show that this map factors through the modified Chow group $\CH^{BK}_0(X)$. It will then follow that it factors through $\CH^{LW}_0(X)$ as well. So let $\nu \colon (C,Z)\to X$ be a good curve as in Definition~\ref{defn:0-cycle-S-1}. As in the proof of \propref{prop:Factor-gamma-*}, we can assume that $\nu$ is a local complete intersection morphism.
In this case, Gabber's construction of the push-forward map in {\'e}tale cohomology \cite{Fujiwara} gives us a push-forward map \[ \nu_*\colon H^2_{\et}(C, \Lambda(1))\to H^{2d}_{\et}(X, \Lambda(d)) \] and a diagram \begin{equation}\label{eq:commm-diag-push-etale} \xymatrix{ \sZ_0(C,Z) \ar[r]^-{\nu_*} \ar[d]_-{cyc^{\et}_C} & \sZ_0(X, X_{\rm sing}) \ar[d]^-{cyc^{\et}_X} \\ H^2_{\et}(C, \Lambda(1)) \ar[r]^-{\nu_*} & H^{2d}_{\et}(X, \Lambda(d)).} \end{equation} If $x \in C \setminus Z$ is a closed point so that $\nu(x) \in X_{\rm reg}$, the functoriality of the Gysin maps implies that the composite $H^0_{\et}(k(x), \Lambda(0)) \to H^2_{\et}(C, \Lambda(1)) \xrightarrow{\nu_*} H^{2d}_{\et}(X, \Lambda(d))$ is the push-forward map associated to the finite complete intersection map $\Spec(k(x)) \to X$. Using this fact and the description \eqref{eq:def-et-cycle-class} of the cycle class map on generators, it follows that~\eqref{eq:commm-diag-push-etale} is commutative. We can identify $\CH_0(C,Z)$ with $\Pic(C)$ according to \cite[Lemma 4.12]{BK}. The Kummer sequence then shows that there is a commutative diagram \[ \xymatrix{ \sZ_0(C,Z) \ar[rr]\ar[rd]_-{cyc^{\et}_C} && \Pic(C) \ar[ld]\\ & H^2_{\et}(C, \Lambda(1)).& } \] This immediately shows that for any rational function $f$ in $k(C)$ such that ${\rm div}_C(f)\in \sR_0(C,Z)$, we have $cyc^{\et}_C({\rm div}_C(f))=0$ in $H^2_{\et}(C, \Lambda(1))$. But then, the commutativity of \eqref{eq:commm-diag-push-etale} proves that $\nu_*( {\rm div}_C(f))$ goes to zero in $H^{2d}_{\et}(X, \Lambda(d))$. We have therefore shown that the map $cyc^{\et}_X$ in~\eqref{eq:def-et-cycle-class-0} descends to a cycle class map on the Chow group: \begin{equation}\label{eq:def-et-cycle-class-1} cyc^{\et}_X: \CH^{BK}_0(X) \to H^{2d}_{\et}(X, \Lambda(d)). \end{equation} We shall denote its composite with the canonical surjection $\CH^{LW}_0(X) \surj \CH^{BK}_0(X)$ also by $cyc^{\et}_X$. \subsection{The case of algebraically closed fields}\label{sec:Alg-closed} In the notations of \S~\ref{ssec:et-cycle-class}, suppose moreover that $k$ is separably (hence algebraically) closed and $X$ is projective over $k$. Write $X = \cup_{i=1}^n X_i$, where the $X_i$'s are the irreducible components of $X$. In this case, we have a natural `trace' map \begin{equation}\label{eqn:trace} \tau_X \colon H^{2d}_{\et}(X, \Lambda(d)) \xrightarrow{\cong} \oplus_{i=1}^n H^{2d}_{\et}(X_i, \Lambda(d)) \xrightarrow{\cong} \oplus_{i=1}^n \Lambda. \end{equation} It follows by combining the exact sequence \[ H^{2d-1}_{\et}(X_{\rm sing}, \Lambda(d)) \to H^{2d}_{c, \et}(X_{\rm reg}, \Lambda(d)) \to H^{2d}_{\et}(X, \Lambda(d)) \to H^{2d}_{\et}(X_{\rm sing}, \Lambda(d)) \] of \cite[Chapter~VI, Lemma~11.3]{Milne} and the cohomological dimension bound $cd_{\Lambda}(X_{\rm sing}) \le 2d-2$ (as $k$ is separably closed) that the map $\tau_X$ in~\eqref{eqn:trace} is an isomorphism. Note further that for any regular closed point $x\in X_{\rm reg}$, the composition \[ \Lambda \xrightarrow{\cong} H^0(k(x), \Lambda) \to H^{2d}_{\et}(X, \Lambda(d)) \xrightarrow{\cong} \oplus_{i=1}^n H^{2d}_{\et}(X_i, \Lambda(d)) \xrightarrow{\tau} \oplus_{i=1}^n \Lambda \] sends $1\in \Lambda$ to the element $1$ in the direct summand of $\oplus_{i=1}^n \Lambda$ associated to the unique component of $X$ containing $x$ and to zero in all other summands. Recall now from \cite[\S~1]{ESV} that there is a degree map ${\rm deg}\colon \CH^{BK}_0(X)_{\Lambda} \to \bigoplus_{i=1}^n \Lambda$. This is considered in \textit{loc.
cit.} for the Levine-Weibel Chow group, but the discussion there easily shows that it actually factors through the quotient $\CH^{BK}_0(X)$. This map is given by the sum of the degree maps for 0-cycles on the irreducible components of $X$. In particular, for any regular closed point $x \in X$, the degree of $x$ is the element $1$ in the direct summand of $\oplus_{i=1}^n \Lambda$ associated to the unique component of $X$ containing $x$ and is zero in all other summands. Combining these two facts, we have a commutative diagram \begin{equation}\label{eq:diag-et-degree} \xymatrix{ \CH^{BK}_0(X)_{\Lambda} \ar[rr]^-{\deg} \ar[rd]_-{cyc_{X}^{\et}} & & \oplus_{i=1}^n \Lambda \\ & H^{2d}_{\et}(X, \Lambda(d)). \ar[ru]_-{\tau_X}^-{\cong} & } \end{equation} In this setting, we have \begin{lem}\label{lem:et-iso} The degree map induces an isomorphism $\CH^{LW}_0(X)_\Lambda \xrightarrow{\cong} \oplus_{i=1}^n \Lambda$. In particular, the {\'e}tale cycle class map $cyc_{X}^\et: \CH^{LW}_0(X)_\Lambda \to H^{2d}_{\et}(X, \Lambda(d))$ is an isomorphism. \end{lem} \begin{proof} The second statement is a consequence of the first by \eqref{eq:diag-et-degree}. Since the degree map is clearly surjective (as $k$ is algebraically closed), it is enough to prove its injectivity. Since we are working with $\Z/m$-coefficients with $m\in k^\times$, it is in fact enough to prove that the subgroup $\CH_0(X)_{\rm deg =0}$ of 0-cycles of degree zero is $m$-divisible. But this is well known, as $k$ is algebraically closed. Indeed, given any 0-cycle $\alpha \in \sZ_0(X, X_{\rm sing})$ of degree zero, we can find a reduced Cartier curve $C \subset X$ which is regular along the support of $\alpha$. This implies that $\alpha$ lies in the image of the push-forward map $\Pic^0(C) \to \CH^{LW}_0(X)$. It is therefore enough to know that $\Pic^0(C)$ is $m$-divisible. But this is elementary: $\Pic^0(C)$ is the group of $k$-points of a connected commutative algebraic group, on which multiplication by $m$ is surjective since $k$ is algebraically closed and $m\in k^\times$. \end{proof} It is a straightforward exercise to deduce from the previous lemma the isomorphism discussed in Remark \ref{rem:algclosed}. \begin{comment} We can now prove \thmref{thm:Main-2} as follows. For the reader's convenience, we spell out all the assumptions and the statement of our result again. \begin{thm}\label{thm:Main-2-*} Let $A$ be a Henselian discrete valuation ring with algebraically closed residue field. Let $\sX$ be a regular scheme which is projective and flat over $A$ of relative dimension $d \ge 0$. Let $X$ denote the reduced special fiber of $\sX$ over $A$. Then the restriction homomorphism $\wt{\rho}: \sZ^g_1(\sX) \to \sZ_0(X, X_{\rm sing})$ of ~\eqref{eq:restriction-map-generators} factors through the rational equivalence classes. Moreover, it induces the isomorphisms \begin{enumerate} \item $\rho: \CH_1(\sX)_{\Lambda} \xrightarrow{\cong} \CH^{LW}_0(X)_{\Lambda}$, and \item $cyc^{\et}_{\sX}: \CH_1(\sX)_{\Lambda} \xrightarrow{\cong} H^{2d}_{\et}(\sX, \Lambda(d))$. \end{enumerate} \end{thm} \begin{proof} We consider the diagram \begin{equation}\label{eqn:Main-2*-0} \xymatrix{ \sZ^g_1(\sX)_{\Lambda} \ar[r] \ar[d]_-{\wt{\rho}} & \CH_1(\sX)_{\Lambda} \ar[r]^-{cyc^{\et}_{\sX}} \ar@{-->}[d] & H^{2d}_{\et}(\sX, \Lambda(d)) \ar[d]^-{\cong} \\ \sZ_0(X, X_{\rm sing})_{\Lambda} \ar[r] & \CH^{LW}_0(X)_{\Lambda} \ar[r]^-{cyc^{\et}_{X}}_-{\cong} & H^{2d}_{\et}(X, \Lambda(d)).} \end{equation} The vertical restriction map on the right is an isomorphism by the rigidity theorem for {\'e}tale cohomology \cite[Chapter~VI, Corollary~2.7]{Milne}.
Since $\wt{\rho}$ and the vertical arrow on the right are just the restriction maps on cycles and {\'e}tale cohomology, the big outer rectangle is commutative by the construction of the cycle class maps for the regular scheme $\sX$ (see \cite[\S~8.1]{EKW}) and the reduced special fiber $X$. The map $cyc^{\et}_{X}$ is an isomorphism by \lemref{lem:et-iso}. It follows easily that $\wt{\rho}$ preserves the subgroups of cycles which are rationally equivalent to zero. In other words, there exists $\rho:\CH_1(\sX)_{\Lambda} \to \CH^{LW}_0(X)_{\Lambda}$ which makes the left and the right squares in ~\eqref{eqn:Main-2*-0} commute. We now use \corref{cor:sum} to conclude that $\rho$ must be an isomorphism. This in turn implies that $cyc^{\et}_{\sX}$ must be an isomorphism too. \end{proof} As we mentioned in \S~\ref{sec:Intro}, part (1) of \thmref{thm:Main-2-*} was proven by Esnault, Kerz and Wittenberg \cite{EKW}, and part (2) was proven by Saito and Sato \cite{SS} under the assumption that $X$ is a strict normal crossings divisor (in particular, all its components are regular) in $\sX$. \end{comment} \subsection{Results over non-algebraically closed fields} In this final section, we suppose that the Gersten conjecture for Milnor $K$-theory holds for schemes over $A$. Thanks to \cite{Kerz09}, this is the case if $k\subset A$, i.e.,~if $A$ is an equicharacteristic DVR. Let $\Lambda = \Z/m\Z$, with $m$ prime to $p$, and let $n\geq 0$ be a non-negative integer. Recall (see, e.g., \cite[8.2]{EKW}) that the $n$-th Milnor $K$-theory sheaf $\sK_{n, \Lambda}^M$ with $\Lambda$-coefficients is defined as the (Zariski or Nisnevich) sheafification of the presheaf on affine schemes sending an $A$-algebra $R$ to the quotient of $\Lambda\otimes_\Z T_n(R)$ by the two-sided ideal generated by elements of the form $a\otimes (1-a)$ with $a, 1-a\in R^\times$. Here $T_n(R)$ is the $n$-th tensor algebra of $R^\times$ (over $\Z$). Since in what follows we will only consider $\Lambda$-coefficients (unless explicitly mentioned), we drop $\Lambda$ from the notation. Write $\sK_{n, Y}^M$ for the restriction of $\sK_{n, \Lambda}^M$ to the small (Zariski or Nisnevich) site of $Y$ for any $A$-scheme $Y$. If the residue field of $A$ is finite, we denote by the same symbol the sheaf of improved Milnor $K$-theory, with $\Lambda$-coefficients, in the sense of Kerz \cite{KerzImprovedMilnor}. Let $\sX$ be again a regular scheme which is projective and flat over $A$, of relative dimension $d\geq 0$. One of the consequences of the Gersten conjecture is the so-called Bloch formula, relating Milnor $K$-theory to the Chow groups. In particular, there is a canonical isomorphism \[ cyc^{M}_{\sX}\colon \CH_1(\sX)_\Lambda \xrightarrow{\simeq} H^d(\sX_{\rm Nis}, \sK^M_{d, \sX})\] which is induced by the tautological ``cycle class map'' \[cyc_{\sX}^M \colon \sZ_1(\sX) = \bigoplus_{x\in \sX_{(1)}} \Z \cong \bigoplus_{x\in \sX_{(1)}} K_0^M(k(x)), \] where the right-hand side appears as the last term of the Gersten resolution for $\sK_{d, \sX}^M$. Let now $X$ denote as before the reduced special fiber of $\sX$, and let $x\in X_{\rm reg}$ be a regular closed point of $X$.
We have a sequence of maps \[ \Lambda \cong K_0^M(k(x))\otimes_{\Z }\Lambda \stackrel{(1)}{\underset{\cong}\to} H^d_{ \{x\}}(X_{\rm Zar}, \sK^M_{d, X}) \xrightarrow{(2)} H^d(X_{\rm Zar}, \sK^M_{d, X}) \xrightarrow{(3)} H^d(X_{\rm Nis}, \sK^M_{d, X}) \] where the isomorphism (1) follows from Kato's computation \cite[Theorem 2]{Kato} (using again the regularity of the point $x$), the map (2) is the canonical forget-support map, and the map (3) is the change of topology from the Zariski to the Nisnevich site. Extending this map linearly, we get a cycle class map \[cyc^M_{X}\colon \sZ_0(X, X_{\rm sing})\to H^d(X_{\rm Nis}, \sK^M_{d, X}).\] Recall now the following result from \cite{KG}. \begin{thm}[Theorem 4.1, \cite{KG}] The cycle class map $cyc^M_{X}$ induces a surjective homomorphism \[ cyc_{X}^M\colon \CH_0^{LW}(X) \twoheadrightarrow H^d(X_{\rm Nis}, \sK^M_{d, X}).\] \end{thm} With this result at our disposal, we can consider the following diagram. \begin{equation}\label{eq:restrictionMilnor} \begin{tikzcd} \sZ_1^g(\sX)_{\Lambda} \arrow[r] \arrow[d, "\tilde{\rho}"] & \CH_1(\sX)_\Lambda \arrow[r, "cyc^M_{\sX}", "\cong"'] \arrow[d, dashrightarrow, "\rho"]& H^d(\sX_{\rm Nis}, \sK^M_{d, \sX}) \arrow[d, "\rho^M"]\\ \sZ_0(X, X_{\rm sing})_{\Lambda} \arrow[r] & \CH_0^{LW}(X)_{\Lambda} \arrow[r, "cyc^M_X"] & H^d(X_{\rm Nis}, \sK^M_{d, X}) \end{tikzcd} \end{equation} Note that the outer rectangle in \eqref{eq:restrictionMilnor} commutes, since the left vertical map is the restriction on cycles \eqref{eq:restriction-map-generators}, the right vertical map is the restriction homomorphism on Milnor $K$-theory and the composite horizontal maps are by definition the cycle class maps. We can therefore ask whether there exists a homomorphism making the left square commutative as well, i.e., whether the map $\tilde{\rho}$ descends to a morphism between the Chow groups. This is clearly implied by the following conjecture. \begin{conj}[Bloch-Quillen formula] The cycle class map $cyc^M_X$ is an isomorphism. \label{conj:Bloch-Quillen} \end{conj} If $X$ is regular, this is a well-known fact. It was originally proved by Bloch in \cite{BlochK2} for surfaces, and generalized by Kato \cite{Kato} in higher dimension. If $X$ is of dimension $1$, it can be interpreted as the chain of isomorphisms (with integral coefficients) \[ \CH_0^{LW}(X)\xrightarrow{\cong} \Pic(X) \cong H^1(X, \cO_X^\times) \] where the cohomology is taken with respect to the Zariski or the Nisnevich topology. For singular varieties of dimension $\geq 1$, the status of this conjecture is summarized below. \begin{thm}\label{thm:BQ-true}Conjecture \ref{conj:Bloch-Quillen} is true in the following cases, with integral coefficients. \begin{romanlist} \item $X$ is a quasi-projective surface with isolated singularities, over any field $k$. \item $X$ is a quasi-projective surface with arbitrary singularities. \item $X$ is an affine surface over any perfect field. \item $X$ is projective and regular in codimension $1$, over an algebraically closed field. \item $X$ is quasi-projective with isolated singularities over a finite field. \end{romanlist} \end{thm} Item i) was first verified by Pedrini and Weibel in \cite{PW}, and in the affine case by Levine and Weibel \cite{LW}. The case ii) is due to Levine \cite{LevineBloch} in the case of algebraically closed fields.
A modification of Levine's argument can be used to extend the result to the case of an arbitrary (perfect) ground field, provided that one replaces the Levine-Weibel Chow group with its modified version introduced in \cite{BK}. This is done in \cite{BKS}. The affine case iii) and the case iv) of singularities in codimension at least $2$ are shown in \cite[Theorems~1.1 and 1.2]{KG} (the arguments are independent of the arguments used in \cite{BKS}), while case v) is \cite[Theorem 1.6]{KrishnaCFT}. Older results in the affine case were obtained by Barbieri-Viale in \cite{BV-Bloch}. We can now give another application of Theorem \ref{thm:Main-1}. \begin{cor}\label{cor:iso} Let $\sX$ and $A$ be as above. Then the restriction homomorphism $\tilde{\rho}$ of \eqref{eq:restriction-map-generators} factors through the rational equivalence classes if $k$ is finite and if $X$ has only isolated singularities, or if $\dim(X)=2$ (without restrictions on the type of singularities). In these cases, it induces an isomorphism \[\rho\colon \CH_1(\sX)_{\Lambda} \xrightarrow{\simeq} \CH_0^{LW}(X)_{\Lambda}.\] If $k$ is finite, both groups are finite. \begin{proof}It is an immediate consequence of the commutative diagram \eqref{eq:restrictionMilnor}, given Theorem \ref{thm:BQ-true}. By Corollary \ref{cor:sum}, the induced map $\rho$ is automatically an isomorphism. Finally, by \cite[Theorem 1.2]{KrishnaCFT}, the group $ \CH_0^{LW}(X)_{\Lambda}$ is finite if the residue field $k$ is finite. \end{proof} \end{cor} \vskip .3cm \noindent\emph{Acknowledgments.} This project started while the first-named author was visiting the Tata Institute of Fundamental Research in November 2016, and the final part of this project was completed during the extended stay of the authors at the Hausdorff Research Institute for Mathematics (HIM), Bonn in 2017. The authors would like to thank both institutions for their invitation and support. The authors would also like to thank H{\'e}l{\`e}ne Esnault, Moritz Kerz and Olivier Wittenberg for sending several valuable comments and suggestions on an earlier draft of this work, as well as the anonymous referee for their help in improving the exposition of the paper.
\section{Introduction} \label{intro} In this paper, the symmetric Obrechkoff methods for solving a special class of initial value problems associated with second order ordinary differential equations of the type \begin{equation} \label{dif} y''=f(x,y),\quad y(x_0)=y_0,\quad y'(x_0)=y'_0, \end{equation} in which the first order derivatives do not occur explicitly, are discussed. The numerical integration methods for (\ref{dif}) can be divided into two distinct classes: \begin{enumerate} \item Problems for which the solution period is known (even approximately) in advance. \item Problems for which the period is not known. \end{enumerate} For several decades, there has been strong interest in searching for better numerical methods to integrate first-order and second-order initial value problems, because these problems are usually encountered in celestial mechanics, quantum mechanical scattering theory, theoretical physics and chemistry, and electronics. Generally, the solution of (\ref{dif}) is periodic, so it is expected that the result produced by some numerical methods preserves the analogous periodicity of the analytic solution [9-22]. Computational methods involving a parameter proposed by Gautschi \cite{Ga}, Jain et al. \cite{Ja}, Sommeijer et al. \cite{So} and Stiefel and Bettis \cite{St} yield numerical solutions of problems of class (1). Chawla et al. \cite{cha,ch,chaw}, Ananthakrishnaiah \cite{Ana}, Shokri et al. \cite{Sho1,Sh,Sho2,Sho}, Dahlquist \cite{Dal}, Franco \cite{Fra}, Lambert and Watson [9], Tsitouras and Simos \cite{Tsi}, Simos et al. \cite{Si,Sim,S}, Hairer \cite{Har}, Wang et al. \cite{W,Wa,Wan}, Saldanha and Achar \cite{Sa}, and Daele and Vanden Berghe \cite{Da} have developed methods to solve problems of class (2). Consider the Obrechkoff method of the form \begin{equation} \label{ob} \sum_{j=0}^{k}\alpha_j y_{n-j+1}=\sum_{i=1}^l h^{2i}\sum_{j=0}^k \beta_{ij} y_{n-j+1}^{(2i)}, \end{equation} for the numerical integration of the problem (\ref{dif}). The method (\ref{ob}) is symmetric when $\alpha_j = \alpha_{k-j}$, $\beta_{ij} = \beta_{i,k-j}$, $j=0,1,2,\cdots, k$, and it is of order $q$ if the truncation error associated with the linear difference operator is given as \[TE=C_{q+2}h^{q+2}y^{(q+2)}(\eta),\qquad x_{n-k+1}<\eta< x_{n+1},\] where $C_{q+2}$ is the error constant. When the method (\ref{ob}) is applied to the test equation $y''=-\lambda^2 y$, we get the characteristic equation \begin{equation}\label{ro} \rho(\xi)-\sum_{i=1}^l(-1)^iv^{2i}\sigma_i(\xi)=0, \end{equation} where $v=\lambda h$ and \begin{equation} \label{6} \rho(\xi)=\sum_{j=0}^k\alpha_j\xi^{k-j},\quad\sigma_i(\xi)= \sum_{j=0}^k\beta_{ij}\xi^{k-j},\quad i=1,2,\cdots,l. \end{equation} \begin{definition} The method (\ref{ob}) is said to have interval of periodicity $(0,v_0^2)$ if for all $v^2\in(0,v_0^2)$ the roots of Eq. (\ref{ro}) are complex and at least two of them lie on the unit circle and the others lie inside the unit circle. \end{definition} \begin{definition} The method (\ref{ob}) is said to be P-stable if its interval of periodicity is $(0,\infty)$. \end{definition} \begin{definition} For any symmetric multistep method, the phase-lag (frequency distortion) of order $q$ is given by \begin{equation} \label{7} t(v)=v-\theta(v)=Cv^{q+1}+O(v^{q+2}), \end{equation} where $C$ is the phase-lag constant and $q$ is the phase-lag order.
\end{definition} For a symmetric two-step method ($k=2$ in (\ref{ob})), the characteristic equation (\ref{ro}) takes the form \begin{equation}\label{char} \Omega(s:v^2)=A(v)s^2-2B(v)s+A(v)=0, \end{equation} where \begin{equation} \label{14} A(v)=1-\sum_{i=1}^m(-1)^i\beta_{i0}v^{2i},\quad B(v)=1+\frac{1}{2}\sum_{i=1}^m(-1)^i\beta_{i1}v^{2i}. \end{equation} The coefficients of the new methods are determined by requiring the exact integration of a reference set $\Psi$, which contains polynomial functions together with trigonometric polynomials \begin{equation} \Psi_{trig}=\left\{1,t,\cdots,t^K,\cos(r\omega t),\sin(r\omega t),\quad\quad r=1,2,\cdots,P\right\}. \end{equation} The resulting methods are then based on a hybrid set of polynomials and trigonometric functions. If $P$ takes its maximal value $P=\frac{M-2}{2}$ (so that $K=-1$), we call the result the method with zero phase-lag. \begin{remark} We present here the trigonometric versions of the set. In case $\omega$ is purely imaginary one obtains the hyperbolic description of this set. This set is characterized by two integer parameters $K$ and $P$. The set in which there is no polynomial part is identified by $K=-1$, while the set in which there is no trigonometric polynomial component is identified by $P=-1$. For each problem one has $K+2P=M-3$, where $M-1$ is the maximum exponent present in the full polynomial basis for the same problem. \end{remark} \section{Construction of the new method} \label{sec:1} We now specialize \eqref{ob} to the two-step case $k=2$ with $l=m$ derivative terms; without loss of generality we assume the symmetry $\alpha_j=\alpha_{2-j}$, $\beta_{i,j}=\beta_{i,2-j}$, $j=0,1$, and we can write \begin{equation} \label{ob2} y_{n+1}-2y_{n}+ y_{n-1}=\sum_{i=1}^mh^{2i}\left[\beta_{i0} y_{n+1}^{(2i)}+\beta_{i1}y_{n}^{(2i)}+\beta_{i0} y_{n-1}^{(2i)}\right]. \end{equation} When $m=3$ we get \begin{eqnarray}\label{ob3} y_{n+1}-2y_{n}+y_{n-1}&=&h^{2}\left[\beta _{10} (y_{n+1}^{(2)}+y_{n-1}^{(2)})+\beta _{11} y_{n}^{(2)}\right]\nonumber\\ &+&h^4\left[\beta_{20}(y_{n+1}^{(4)}+y_{n-1}^{(4)})+\beta _{21}y_n^{(4)}\right]\nonumber\\ &+&h^6\left[\beta_{30}(y_{n+1}^{(6)}+y_{n-1}^{(6)})+\beta _{31}y_n^{(6)}\right]. \end{eqnarray} For method (\ref{ob3}) we have $M-3=11$, so that for $P=-1$ and $K=13$ we obtain the classical (purely polynomial) method, whose coefficients are \begin{eqnarray}\label{cla} \beta_{1,0}&=&\frac{229}{7788},\quad\beta_{1,1}=\frac{3665}{3894},\quad\beta_{2,0}=-\frac{1}{2360},\nonumber\\ \beta_{2,1}&=&\frac{711}{12980},\quad\beta_{3,0}=\frac{127}{39251520},\quad\beta_{3,1}=\frac{2923}{3925152}. \end{eqnarray} Its phase-lag is given by \[pl_{clas}:=-\frac{45469}{3394722659328000}v^{12}+O\left(v^{14}\right),\] and its local truncation error is given by \[LTE_{clas}=-\frac{45469}{1697361329664000}h^{14}y^{(14)}+O\left(h^{16}\right).\] If $P=6$ and $K=-1$, we obtain the method with zero phase-lag (PL), and the coefficients of this case are given in \cite{Sa}.
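The order of the classical method and the error constant quoted above can be checked by direct Taylor expansion of (\ref{ob3}). The short Python script below is our own illustration (it is not part of the original derivation); it uses only exact rational arithmetic from the standard library and verifies that the coefficients (\ref{cla}) annihilate every error term up to $h^{12}$ and reproduce the $h^{14}$ constant $-\frac{45469}{1697361329664000}$.
\begin{verbatim}
from fractions import Fraction as F
from math import factorial

# Classical (P = -1, K = 13) coefficients of the two-step method above.
beta0 = {1: F(229, 7788), 2: F(-1, 2360), 3: F(127, 39251520)}     # beta_{i,0}
beta1 = {1: F(3665, 3894), 2: F(711, 12980), 3: F(2923, 3925152)}  # beta_{i,1}

def lte_coeff(j):
    """Coefficient of h^j y^(j) (j even) in the local truncation error:
    Taylor expansion of y_{n+1} - 2 y_n + y_{n-1} minus the right-hand side."""
    c = F(2, factorial(j))
    for i in (1, 2, 3):
        if 2 * i == j:
            c -= beta1[i] + 2 * beta0[i]
        elif 2 * i < j:
            c -= 2 * beta0[i] / factorial(j - 2 * i)
    return c

for j in range(2, 14, 2):
    assert lte_coeff(j) == 0, j   # order conditions hold up to h^12
print(lte_coeff(14))              # -45469/1697361329664000
\end{verbatim}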
\subsection{The first formula} If $P=0$ and $K=11$, we obtain a method which we call PL$'$; its coefficients are \[\beta_{1,0}=\frac{1}{6v^2}\frac{\beta_{1,0num}}{A},\quad \beta_{1,1}=\frac{1}{3v^2}\frac{\beta_{1,1num}}{A},\quad \beta_{2,0}=\frac{-1}{5040v^2}\frac{\beta_{2,0num}}{A},\] \begin{equation}\label{co3} \beta_{2,1}=\frac{1}{2520v^2}\frac{\beta_{2,1num}}{A},\quad \beta_{3,0}=\frac{-1}{10080v^2}\frac{\beta_{3,0num}}{A},\quad \beta_{3,1}=\frac{1}{5040v^2}\frac{\beta_{3,1num}}{A}, \end{equation} where \[A=15120\cos v-15120+6900v^2-313v^4+660v^2\cos v+13v^4\cos v,\] and \begin{eqnarray} \beta_{1,0num}&=&-45360v^2+3702v^4-89v^6+78v^4\cos v+2v^6\cos v+90720-90720\cos v,\nonumber\\ \beta_{1,1num}&=&45360v^2\cos v+16998v^4-850v^6+37v^6\cos v-90720+90720\cos v\nonumber\\ &+&1902v^4\cos v,\nonumber\\ \beta_{2,0num}&=&-65520v^2\cos v-1597680v^2+105840v^4-1907v^6+17v^6\cos v+3326400\nonumber\\ &-&3326400\cos v,\nonumber\\ \beta_{2,1num}&=&3109680v^2\cos v+14278320v^2-30257v^6+1907v^6\cos v\nonumber\\ &-&34776000+34776000\cos v+105840v^4\cos v,\nonumber\\ \beta_{3,0num}&=&3360v^2\cos v+62160v^2-3814v^4+59v^6+34v^4\cos v-131040+131040\cos v,\nonumber\\ \beta_{3,1num}&=&149520v^2\cos v+1428000v^2-60514v^4+59v^6\cos v-3155040\nonumber\\ &+&3155040\cos v+3814v^4\cos v.\nonumber \end{eqnarray} For small values of $v$ the above formulae are subject to heavy cancellations. In this case the following Taylor series expansions must be used: \begin{eqnarray} \beta_{1,0}&=&\frac{229}{7788}+\frac{45469}{1314147120}v^2+\frac{85771}{341152592352}v^4+ \frac{42739761203}{29358705101073004800}v^6\nonumber\\ &+&\frac{3801508031029}{608197283570236453277184}v^8+\frac{168279971604233}{13575027728788584540475136000}v^{10}\nonumber\\ &-&\frac{266348222900207221}{2703381808485285252094734713548800}v^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{1,1}&=&\frac{3665}{3894}-\frac{45469}{657073560}v^2-\frac{85771}{170576296176}v^4- \frac{42739761203}{14679352550536502400}v^6\nonumber\\ &-&\frac{3801508031029}{304098641785118226638592}v^8 -\frac{168279971604233}{6787513864394292270237568000}v^{10}\nonumber\\ &+&\frac{266348222900207221}{1351690904242642626047367356774400}v^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{2,0}&=&-\frac{1}{2360}-\frac{45469}{30105915840}v^2-\frac{12253}{1116499393152}v^4- \frac{42739761203}{672581244133672473600}v^6\nonumber\\ &-&\frac{3801508031029}{13933246859972689656895488}v^8 -\frac{168279971604233}{310991544332247573109066752000}v^{10}\nonumber\\ &+&\frac{266348222900207221}{61932019612571989411624831619481600}v^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{2,1}&=&\frac{711}{12980}-\frac{1045787}{33116507424}v^2-\frac{1409095}{6140746662336}v^4 -\frac{983014507669}{739839368547039720960}v^6\nonumber\\ &-&\frac{437173423568335}{76632857729849793112925184}v^8 -\frac{3870439346897359}{342090698765472330419973427200}v^{10}\nonumber\\ &+&\frac{266348222900207221}{2961966155383877754469013686149120}v^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{3,0}&=&\frac{127}{39251520}+\frac{45469}{1528454188800}v^2+\frac{12253}{56683815344640}v^4 +\frac{42739761203}{34146432394478756352000}v^6\nonumber\\ &+&\frac{3801508031029}{707380225198613474888540160}v^8+ \frac{168279971604233}{15788801481483338327075696640000}v^{10}\nonumber\\ &-&\frac{266348222900207221}{3144240995715193308590183759142912000}v^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{3,1}&=&\frac{2923}{3925152}-\frac{14231797}{9934952227200}v^2
-\frac{3835189}{368444799740160}v^4-\frac{13377545256539}{221951810564111916288000}v^6\nonumber\\ &-&\frac{1189872013712077}{4597971463790987586775511040}v^8- \frac{52671631112124929}{102627209629641699125992028160000}v^{10}\nonumber\\ &+&\frac{83366993767764860173}{20437566472148756505836194434428928000}v^{12}+\cdots.\nonumber \end{eqnarray} The local truncation error of the PL$'$ method is given by \begin{eqnarray} LTE_{PL'}&=&(1-\beta_{1,1}-2\beta_{1,0})h^2y_n^{(2)}+ \left(\frac{1}{12}-\beta_{1,0}-2\beta_{2,0}-\beta_{2,1}\right)h^4y_n^{(4)}\nonumber\\ &+&\left(\frac{1}{360}-\frac{\beta_{1,0}}{12}-\beta_{2,0}-2\beta_{3,0}-\beta_{3,1}\right)h^6y_n^{(6)} +\left(\frac{2}{8!}-\frac{2\beta_{1,0}}{6!}-\frac{2\beta_{2,0}}{4!}-\frac{2\beta_{3,0}}{2!}\right)h^8y_n^{(8)}\nonumber\\ &+&\left(\frac{2}{10!}-\frac{2\beta_{1,0}}{8!}-\frac{2\beta_{2,0}}{6!}-\frac{2\beta_{3,0}}{4!}\right)h^{10}y_n^{(10)} +\left(\frac{2}{12!}-\frac{2\beta_{1,0}}{10!}-\frac{2\beta_{2,0}}{8!}-\frac{2\beta_{3,0}}{6!}\right)h^{12}y_n^{(12)}\nonumber\\ &+&\left(\frac{2}{14!}-\frac{2\beta_{1,0}}{12!}-\frac{2\beta_{2,0}}{10!}-\frac{2\beta_{3,0}}{8!}\right)h^{14}y_n^{(14)} +O\left(h^{16}\right).\nonumber \end{eqnarray} Hence the phase-lag is \[pl_{PL'}=\frac{731602960042513638469539403}{1287287007659726361217210431335975522416459776000000}v^{24},\] and the leading term of the local truncation error becomes \[LTE_{PL'}=-\frac{45469}{1697361329664000}\left(y^{(14)}+\omega^2y^{(12)}\right)h^{14},\] where $v=\omega h$, $\omega$ is the frequency and $h$ is the step length. As $v\rightarrow0$, the LTE of the method \eqref{ob3} with the coefficients \eqref{co3} tends to $-\frac{45469}{1697361329664000}h^{14}y^{(14)}+O\left(h^{16}\right)$, which agrees with the LTE of the classical method above and with those of the methods of Wang \cite{Wan}, Simos \cite{Si}, Van Daele \cite{Da} and Achar \cite{Achar} in the same limit. The behavior of the coefficients of the PL$'$ method is shown in Figures 2.1 to 2.6.
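The severity of this cancellation is easy to observe numerically. The following short sketch (again our own illustration, not code from the original work) evaluates the closed-form expression for $\beta_{1,0}$ against its truncated Taylor expansion; in double precision the closed form degrades rapidly as $v$ decreases, because both $\beta_{1,0num}$ and $A$ are tiny differences of terms of order $10^{4}$:
\begin{verbatim}
# Closed form versus Taylor expansion for the PL' coefficient beta_{1,0}:
# for small v both A and beta_{1,0num} suffer catastrophic cancellation.
import math

def beta10_closed(v):
    c = math.cos(v)
    A = 15120*c - 15120 + 6900*v**2 - 313*v**4 + 660*v**2*c + 13*v**4*c
    num = (-45360*v**2 + 3702*v**4 - 89*v**6 + 78*v**4*c + 2*v**6*c
           + 90720 - 90720*c)
    return num/(6*v**2*A)

def beta10_series(v):   # first three terms of the expansion above
    return 229/7788 + 45469/1314147120*v**2 + 85771/341152592352*v**4

for v in (1.0, 1e-1, 1e-2, 1e-3):
    print(f"v={v:7.0e}  closed={beta10_closed(v):+.12f}"
          f"  series={beta10_series(v):+.12f}")
\end{verbatim}
The two values agree to many digits at $v=1$ but drift apart grossly for $v\lesssim 10^{-2}$, which is exactly why the series form is used for small $v$.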
\begin{figure} \epsfxsize=7cm\epsfysize=7cm \begin{center}\epsfbox{101.eps} \end{center} \caption{Behavior of the coefficient $\beta_{1,0}$ in the PL$'$ method.} \end{figure} \begin{figure} \begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{111.eps}\end{center} \caption{Behavior of the coefficient $\beta_{1,1}$ in the PL$'$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{201.eps} \end{center} \caption{Behavior of the coefficient $\beta_{2,0}$ in the PL$'$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{211.eps}\end{center} \caption{Behavior of the coefficient $\beta_{2,1}$ in the PL$'$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{301.eps} \end{center} \caption{Behavior of the coefficient $\beta_{3,0}$ in the PL$'$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{311.eps}\end{center} \caption{Behavior of the coefficient $\beta_{3,1}$ in the PL$'$ method.} \end{figure} \subsection{The second formula} For $P=2$, $K=7$, we obtain a method which we call PL$''$; its coefficients are \[\beta_{1,0}=\frac{89}{1878}-\frac{7560}{313}\beta_{3,1},\quad \beta_{1,1}=\frac{850}{939}+ \frac{15120}{313}\beta_{3,1},\quad\beta_{2,0}=-\frac{1907}{1577520}+\frac{330}{313}\beta_{3,1},\] \[\beta_{2,1}=\frac{30257}{788760}+\frac{6900}{313}\beta_{3,1},\quad \beta_{3,0}=\frac{59}{3155040}-\frac{13}{626}\beta_{3,1},\quad\beta_{3,1}=\frac{1}{1080}\frac{A}{B},\] where \begin{eqnarray} A&=&-14400+213800\,\cos \left( 3\,v \right) {v}^{4}\cos \left( v \right) -36000\,\cos \left( 2\,v \right) \cos \left( v \right) {v}^{2}+14400\,\cos \left( 3\,v \right) \cos \left( v \right) \cos \left( 2\,v \right)\nonumber\\ &-&72000\,\cos \left( 3\,v \right) \cos \left( v \right) {v}^{2}+20275\,\cos \left( 3\,v \right) {v}^{4} \cos \left( 2\,v \right) -93600\,\cos \left( 3\,v \right) {v}^{2}\cos \left( 2\,v \right)\nonumber\\ &+&9660\,\cos \left( 3\,v \right) {v}^{6}\cos \left( 2\,v \right) +20832\,\cos \left( 3\,v \right) {v}^{6}\cos \left( v \right) -10332\,\cos \left( v \right) {v}^{6}\cos \left( 2\,v \right) +14400\,\cos \left( v \right)\nonumber\\ &-&14400\,\cos \left( 3\,v \right) \cos \left( 2\,v \right) -14400\,\cos \left( 2\,v \right) \cos \left( v \right) +14400\,\cos \left( 2\,v \right) +14400\,\cos \left( 3\,v \right)\nonumber\\ &-&116475\,\cos \left( 2\,v \right) {v}^{4}\cos \left( v \right) +100800\,\cos \left( 3\,v \right) \cos \left( v \right) \cos \left( 2\,v \right) {v}^{2}\nonumber\\ &+&29400\,\cos \left( 3\,v \right) \cos \left( v \right) \cos \left( 2\,v \right) {v}^{4}+720\, \cos \left( 3\,v \right) \cos \left( v \right) \cos \left( 2\,v \right) {v}^{6}+7200\,\cos \left( v \right) {v}^{2}\nonumber\\ &+&2875\,\cos \left( v \right) {v}^{4}+1830\,\cos \left( v \right) {v}^{6}+28800\, \cos \left( 2\,v \right) {v}^{2}+99200\,\cos \left( 2\,v \right) {v}^{4}-46848\,\cos \left( 2\,v \right) {v}^{6}\nonumber\\ &+&64800\,\cos \left( 3\,v \right) {v}^{2}-249075\,\cos \left( 3\,v \right) {v}^{4}+88938\,\cos \left( 3\,v \right) {v}^{6}-14400\,\cos \left( 3\,v \right) \cos \left( v \right)\nonumber\\ &-&810\,\cos \left( 3\,v \right) {v}^{8}\cos \left( 2\,v \right) +1296\,\cos \left( 3\,v \right) {v}^{8}\cos \left( v \right) -486\,\cos \left( v \right) {v}^{8}\cos \left( 2\,v \right),\nonumber \end{eqnarray} and \begin{eqnarray} B&=&240\,\cos \left( v \right) -81\,\cos \left( 2\,v \right) {v}^{4}\cos \left( v \right) -240\,\cos \left( 3\,v \right) \cos \left( v \right)
-240\,\cos \left( 2\,v \right) \cos \left( v \right)\nonumber\\ &+&96\, \cos \left( 3\,v \right) {v}^{4}\cos \left( v \right) +75\,\cos \left( v \right) {v}^{4}-1107\,\cos \left( 2\,v \right) \cos \left( v \right) {v}^{2}+240\,\cos \left( 3\,v \right) \cos \left( v \right) \cos \left( 2\,v \right)\nonumber\\ &+&115\,\cos \left( v \right) {v}^{2}+992\,\cos \left( 3\,v \right) \cos \left( v \right) {v}^{2}-240-15\,\cos \left( 3\,v \right) {v}^{4}\cos \left( 2\,v \right) +115\,\cos \left( 3\,v \right) {v}^{2}\cos \left( 2\,v \right)\nonumber\\ &-&240\,\cos \left( 3\,v \right) \cos \left( 2\,v \right) -480\,\cos \left( 2\,v \right) {v}^{4}+992\,\cos \left( 2\,v \right) {v}^{2}+405\,\cos \left( 3\,v \right) {v}^{4}-1107\,\cos \left( 3\,v \right) {v}^{2}\nonumber\\ &+&240\,\cos \left( 3\,v \right) +240\,\cos \left( 2\,v \right){v}^{6}.\nonumber \end{eqnarray} For small values of $v$ the above formulae are subject to heavy cancellations; in this case the following Taylor series expansions must be used:\\ \begin{eqnarray} \beta_{1,0}&=&{\frac {229}{7788}}+{\frac {318283}{657073560}}\,{v}^{2}+{\frac {1512119}{118091281968}}\,{v}^{4}+{\frac {22946405723893}{44038057651609507200}}\,{v}^{6}\nonumber\\ &+&{\frac {18296930817563773}{651639946682396199939840}}\,{v}^{8}+{\frac {2913158423117216376847}{1649365869047813021667729024000}}\,{v}^{10}\nonumber\\ &+&{\frac{8050460719799780764991137}{68936236116374773928415735195494400}}\,{v}^{12} +\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{1,1}&=&{\frac {3665}{3894}}-{\frac {318283}{328536780}}\,{v}^{2}-{\frac {1512119}{59045640984}}\,{v}^{4}-{\frac {22946405723893}{22019028825804753600}}\,{v}^{6}\nonumber\\ &-&{\frac {18296930817563773}{325819973341198099969920}}\,{v}^{8}-{\frac {2913158423117216376847}{824682934523906510833864512000}}\,{v}^{10}\nonumber\\ &-&{\frac{8050460719799780764991137}{34468118058187386964207867597747200}}\,{v}^{12} +\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{2,0}&=&-{\frac {1}{2360}}-{\frac {45469}{6021183168}}\,{v}^{2}-{\frac {99714443}{586162181404800}}\,{v}^{4}-{\frac {4808531881}{1130388645602810880}}\,{v}^{6}\nonumber\\ &-&{\frac {176305401655838711}{1741655857496586207111936000}}\,{v}^{8}-{\frac {76862259930526632407}{35266441127276874790568169676800}}\,{v}^{10}\nonumber\\ &-&{\frac{1089463503416967799081153}{26321108335343095499940553438279680000}}\,{v}^{12}+\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{2,1}&=&{\frac {711}{12980}}-{\frac {1045787}{2365464816}}\,{v}^{2}-{\frac {10184496007}{921111999350400}}\,{v}^{4}-{\frac {162691107254479}{369919684273519860480}}\,{v}^{6}\nonumber\\ &-&{\frac {4589005587219802631}{195491984004718859981952000}}\,{v}^{8}-{\frac {582588392135442371849}{395847808571475125200254965760}}\,{v}^{10}\nonumber\\ &-&{\frac{2446080156637919477841851381}{25176712320762960912986616332267520000}} \,{v}^{12} +\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{3,0}&=&{\frac {127}{39251520}}+{\frac {45469}{109175299200}}\,{v}^{2}+{\frac {274576771}{7368895994803200}}\,{v}^{4}+{\frac {115636672827803}{39837504460225215744000}}\,{v}^{6}\nonumber\\ &+&{\frac {76494288958873853}{360908278162557895351296000}}\,{v}^{8}+{\frac {455635060442806091167}{30449831428575009630788843520000}}\,{v}^{10}\nonumber\\ &+&{\frac{136101812396019182508073199}{131285852101792282007800655206318080000}} \,{v}^{12} +\cdots,\nonumber \end{eqnarray} \begin{eqnarray} \beta_{3,1}&=&{\frac {2923}{3925152}}+\cdots,\nonumber \end{eqnarray} where the expansion of $\beta_{3,1}$ follows from the expansions above through the relations given at the beginning of this subsection.
The phase-lag and the local truncation error for the PL$''$ method are given by \[pl_{PL''}=-\frac{141797497314423651101}{7514399077966985427530263756800000}v^{20},\] and \[LTE_{PL''}=-\frac {45469h^{14}}{1697361329664000}\left(y^{(14)}+14\omega^2y^{(12)}+49\omega^4y^{(10)}+36\omega^6y^{(8)}\right),\] where $v=\omega h$, $\omega$ is the frequency and $h$ is the step length. The behavior of the coefficients of the PL$''$ method is shown in Figures 2.7 to 2.12. \begin{figure} \epsfxsize=8cm\epsfysize=8cm \begin{center}\epsfbox{102.eps} \end{center} \caption{Behavior of the coefficient $\beta_{1,0}$ in the PL$''$ method.} \end{figure} \begin{figure} \begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{112.eps}\end{center} \caption{Behavior of the coefficient $\beta_{1,1}$ in the PL$''$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{202.eps} \end{center} \caption{Behavior of the coefficient $\beta_{2,0}$ in the PL$''$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{212.eps}\end{center} \caption{Behavior of the coefficient $\beta_{2,1}$ in the PL$''$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{302.eps} \end{center} \caption{Behavior of the coefficient $\beta_{3,0}$ in the PL$''$ method.} \end{figure} \begin{figure}\begin{center} \epsfxsize=8cm\epsfysize=8cm \epsfbox{312.eps}\end{center} \caption{Behavior of the coefficient $\beta_{3,1}$ in the PL$''$ method.} \end{figure} The characteristic equation $\Omega(s:v^2)=A(v)s^2-2B(v)s+A(v)=0$ has complex roots of unit magnitude when $\left|\cos(\theta(v))\right|=\left|\frac{B(v)}{A(v)}\right|<1$, i.e. when $A(v)^2-B(v)^2>0$. Substituting the expressions for $A(v)$ and $B(v)$ of these two-step methods, the intervals of periodicity of the classical Obrechkoff, PL$'$ and PL$''$ methods are found to be $[0, 25.2004]$, $[0,408.04]$ and $[0,1428.84]$, respectively. \section{Numerical examples} In this section, we present some numerical results obtained by our new two-step trigonometrically-fitted Obrechkoff methods and compare them with those from the following multistep methods:\\ Achar: The 12th order Obrechkoff method of Achar \cite{Achar}.\\ Daele: The 12th order Obrechkoff method of Van Daele \cite{Da}.\\ Neta: The P-stable 8th-order super-implicit method of Neta \cite{Net}.\\ Simos: The 12th order Obrechkoff method of Simos \cite{Si}.\\ Wang: The 12th order Obrechkoff method of Wang \cite{Wan}.\\ \begin{example} We consider the nonlinear \emph{undamped Duffing equation} \begin{equation}\label{duf} y''=-y-y^3+B\cos(\omega x),\quad y(0)=0.200426728067,\quad y'(0)=0, \end{equation} \end{example} where $B=0.002$, $\omega=1.01$ and $x\in\left[0,\frac{40.5\pi}{1.01}\right]$.
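In order to apply \eqref{ob3} to \eqref{duf}, its derivatives up to sixth order are required. A minimal symbolic sketch of how they can be generated by repeated differentiation (using SymPy, which is an assumption of ours --- the paper does not specify any software) is the following:
\begin{verbatim}
# Generate y''', ..., y^(6) for the Duffing equation by repeated
# differentiation, eliminating y'' through (duf) at every stage, so
# that each derivative is expressed in terms of y and y' only.
import sympy as sp

x, B, w = sp.symbols('x B omega')
y = sp.Function('y')
ypp = -y(x) - y(x)**3 + B*sp.cos(w*x)           # y'' from (duf)

derivs = {2: ypp}
for k in range(3, 7):
    d = sp.diff(derivs[k-1], x)
    d = d.subs(sp.Derivative(y(x), x, 2), ypp)  # substitute (duf) again
    derivs[k] = sp.expand(d)

# equivalent to -(1 + 3*y**2)*y' - B*omega*sin(omega*x), quoted below
print(derivs[3])
\end{verbatim}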
We use the following exact solution for \eqref{duf} from \cite{Ne}, \[g(x)=\sum_{i=0}^3K_{2i+1}\cos((2i+1)\omega x),\] where \begin{eqnarray} \{K_1,K_3,K_5,K_7\}&=&\{0.200179477536,0.246946143\times10^{-3},\nonumber\\ &&0.304016\times10^{-6},0.374\times10^{-9}\}.\nonumber \end{eqnarray} In order to integrate this equation by an Obrechkoff method, one needs the values of $y'$, which occur in calculating $y^{(4)}$. These higher order derivatives can all be expressed in terms of $y(x)$ and $y'(x)$ through \eqref{duf}, i.e. \begin{eqnarray} y^{(3)}(x)&=&-(1+3y^2(x))y'(x)-B\omega\sin(\omega x),\nonumber\\ y^{(4)}(x)&=&-(1+3y^2(x))y''(x)-6y(x)y'(x)^2-B\omega^2\cos(\omega x).\nonumber \end{eqnarray} The absolute errors at $x=\frac{40.5\pi}{1.01}$ for the new method PL$''$, in comparison with the methods of Simos, Daele, Achar and Wang, are given in Table 3.1, and the CPU times are listed in Table 3.2. The absolute errors at $x=2\pi(2\pi)10\pi$, with $h=\frac{\pi}{12}$, for PL$''$ in comparison with the method of Neta are given in Table 3.3. \begin{table} \[ \begin{tabular}{cccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar & Wang \\\hline $\frac{M}{500}$ & 6.08953e-12 & 3.1486e-4 & 4.0560e-5 & 4.0919e-5 & 4.0831e-5\\ $\frac{M}{1000}$ & 7.98859e-12 & 1.8069e-5 & 1.8733e-6 & 1.2708e-6 & 1.2678e-6\\ $\frac{M}{2000}$ & 5.52149e-12 & 1.0752e-6 & 3.8355e-8 & 3.9420e-8 & 3.9327e-8\\ $\frac{M}{3000}$ & 7.27826e-12 & 2.0873e-7 & 5.1344e-9 & 5.1801e-9 & 5.1678e-9\\ $\frac{M}{4000}$ & 6.99211e-12 & 6.5463e-8 & 3.1876e-9 & 1.2324e-9 & 1.2308e-9\\ $\frac{M}{5000}$ & 6.64542e-12 & 2.6673e-8 & 9.8900e-10 & 4.0911e-10 & 4.0741e-10\\ \hline \end{tabular}\] \caption{Comparison of the end-point absolute errors for Example 3.1 obtained by the methods of Simos, Daele, Achar, Wang and the new method PL$''$.} \end{table} \begin{table} \[ \begin{tabular}{cccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar & Wang \\\hline $\frac{M}{500}$ & 1.453 & 1.437 & 1.484 & 1.188 & 1.406 \\ $\frac{M}{1000}$ & 2.874 & 2.892 & 2.938 & 2.312 & 2.891 \\ $\frac{M}{2000}$ & 6.267 & 6.233 & 6.36 & 4.812 & 6.236 \\ $\frac{M}{3000}$ & 9.859 & 9.859 & 9.719 & 7.548 & 9.546 \\ $\frac{M}{4000}$ & 13.424 & 13.548 & 13.39 & 9.986 & 13.063 \\ $\frac{M}{5000}$ & 16.857 & 16.922 & 16.969 & 12.86 & 16.499 \\ \hline \end{tabular}\] \caption{CPU times (in seconds) for Example 3.1, compared among the methods of Simos, Daele, Achar, Wang and the new method PL$''$.} \end{table} \begin{table} \[ \begin{tabular}{cccc} && \\ \hline $x$ & CPU Time for PL$''$ & PL$''$ & Neta \\\hline $2\pi$ & 0.03120020 & 6.06453e-14 & 2.53e-7 \\ $4\pi$ & 0.07800050 & 1.81249e-13 & 1.01e-6 \\ $6\pi$ & 0.09360060 & 3.45171e-13 & 2.25e-6 \\ $8\pi$ & 0.23400150 & 5.09481e-13 & 3.95e-6 \\ $10\pi$ & 0.28080180 & 6.24098e-13 & 6.05e-6 \\ \hline \end{tabular}\] \caption{Comparison of the absolute errors for Example 3.1 obtained by the method of Neta and the new method PL$''$, together with the CPU times for PL$''$.} \end{table} \begin{example} Consider the initial value problem \[y''=-100y+99\sin(x),\quad y(0)=1,\quad y^{\prime}(0)=11,\] \end{example} with the exact solution $y(x)=\sin(x)+\sin(10x)+\cos(10x)$. This equation has been solved numerically for $0\leq x\leq10\pi$ using exact starting values. In the numerical experiment, we take the step lengths $h=\pi/50$, $\pi/100$, $\pi/200$, $\pi/300$, $\pi/400$ and $\pi/500$.
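For this linear equation the derivatives needed in \eqref{ob3} follow directly from the differential equation itself: differentiating it twice repeatedly gives \[y^{(4)}=-100\,y''-99\sin(x),\qquad y^{(6)}=-100\,y^{(4)}+99\sin(x),\] so every quantity required by the Obrechkoff formula is available in closed form at each step.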
In Table 3.4 we present the absolute errors at the end-point, and the CPU times are listed in Table 3.5. \begin{table} \[ \begin{tabular}{ccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar \\\hline $\frac{\pi}{50}$ & 1.76536e-26 & 3.0541e-11 & 1.2018e-11 & 5.7910e-13 \\ $\frac{\pi}{100}$ & 4.50405e-30 & 2.2800e-13 & 7.3450e-13 & 5.7910e-13 \\ $\frac{\pi}{200}$ & 1.90628e-34 & 4.3960e-13 & 8.6240e-13 & 1.3172e-12 \\ $\frac{\pi}{300}$ & 4.60850e-37 & 2.1074e-12 & 2.6342e-12 & 1.9640e-12 \\ $\frac{\pi}{400}$ & 6.28113e-39 & 1.3768e-12 & 2.9310e-12 & 4.7813e-12 \\ $\frac{\pi}{500}$ & 2.23002e-40 & 6.4658e-12 & 2.8868e-12 & 7.5018e-12 \\ \hline \end{tabular}\] \caption{Comparison of the end-point absolute errors for Example 3.2 obtained by the methods of Simos, Daele, Achar and the new method PL$''$.} \end{table} \begin{table} \[ \begin{tabular}{ccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar \\\hline $\frac{\pi}{50}$ & 0.2652017 & 0.1716011 & 0.2496016 & 0.187201 \\ $\frac{\pi}{100}$ & 0.5772037 & 0.5148033 & 0.5304034 & 0.452403 \\ $\frac{\pi}{200}$ & 1.1388073 & 0.8580055 & 0.8268053 & 0.748805 \\ $\frac{\pi}{300}$ & 1.8096116 & 1.1388073 & 1.1544074 & 0.951606 \\ $\frac{\pi}{400}$ & 2.496016 & 1.3884089 & 1.4040091 & 1.23241 \\ $\frac{\pi}{500}$ & 2.9484189 & 1.7004109 & 1.7784114 & 1.46641 \\ \hline \end{tabular}\] \caption{CPU times (in seconds) for Example 3.2, compared among the methods of Simos, Daele, Achar and the new method PL$''$.} \end{table} \begin{example} Consider the initial value problem \[y''=\frac{8y^2}{1+2x},\quad y(0)=1,\quad y'(0)=-2,\quad x\in[0,4.5],\] \end{example} with the exact solution \[y(x)=\frac{1}{1+2x}.\] The absolute errors at $x=4.5$ for the new method, in comparison with the methods of Wang, Simos, Daele and Achar, are given in Table 3.6. The CPU times of the new method in comparison with the other four methods are given in Table 3.7.
\begin{table} \[ \begin{tabular}{cccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar & Wang \\\hline $\frac{4.5}{500}$ & 2.74277e-21 & 1.2411e-7 & 1.2578e-7 & 1.2633e-7 & 1.2411e-7 \\ $\frac{4.5}{1000}$ & 1.54818e-24 & 3.8166e-9 & 3.9035e-9 & 3.8481e-9 & 3.8166e-9 \\ $\frac{4.5}{2000}$ & 5.84727e-28 & 1.1931e-10 & 1.2288e-10 & 1.2002e-10 & 1.1931e-10\\ $\frac{4.5}{3000}$ & 5.22638e-30 & 1.9194e-11 & 2.0168e-11 & 1.4047e-11 & 1.9194e-11\\ $\frac{4.5}{4000}$ & 1.78375e-31 & 7.8511e-12 & 7.8511e-12 & 2.6818e-12 & 7.8511e-12\\ $\frac{4.5}{5000}$ & 1.28211e-32 & 1.6285e-12 & 1.6285e-12 & 7.4700e-14 & 1.6285e-12\\ \hline \end{tabular}\] \caption{Comparison of the end-point absolute errors for Example 3.3 obtained by the methods of Simos, Daele, Achar, Wang and the new method PL$''$.} \end{table} \begin{table} \[ \begin{tabular}{cccccc} && \\ \hline $h$ & PL$''$ & Simos & Daele & Achar & Wang \\\hline $\frac{4.5}{500}$ & 0.3588023 & 0.359 & 0.343 & 0.187 & 0.312 \\ $\frac{4.5}{1000}$ & 0.6084039 & 0.624 & 0.608 & 0.764 & 1.232\\ $\frac{4.5}{3000}$ & 1.2792082 & 1.232 & 1.919 & 1.201 & 1.872\\ $\frac{4.5}{4000}$ & 1.9344124 & 1.888 & 2.590 & 1.622 & 2.558\\ $\frac{4.5}{5000}$ & 2.5584164 & 2.590 & 3.292 & 2.059 & 3.245\\ \hline \end{tabular}\] \caption{CPU times (in seconds) for Example 3.3, compared among the methods of Simos, Daele, Achar, Wang and the new method PL$''$.} \end{table} \section*{Conclusions} In this paper, we have presented new trigonometrically-fitted two-step symmetric Obrechkoff methods of order 12. The details of the construction have been given in Section 2. With trigonometric fitting, we have improved the local truncation error, the phase-lag, the interval of periodicity and the CPU time for this class of two-step Obrechkoff methods. The numerical results obtained by the new method for several test problems show its superiority in efficiency, accuracy and stability. \section*{Acknowledgements} The authors wish to thank Professor Theodore E. Simos and the anonymous referees for their careful reading of the manuscript and their fruitful comments and suggestions.
\section{Introduction} All graphs in this paper are finite and simple. We use $[k]$ to denote the set $\sset{1, \dots, k}$. Let $G$ be a graph. A \emph{$k$-coloring} of $G$ is a function $f:V(G) \rightarrow [k]$ such that for every edge $uv \in E(G)$, $f(u) \neq f(v)$, and $G$ is \emph{$k$-colorable} if $G$ has a $k$-coloring. The \textsc{$k$-coloring problem} is the problem of deciding, given a graph $G$, if $G$ is $k$-colorable. This problem is well-known to be $NP$-hard for all $k \geq 3$. A function $L: V(G) \rightarrow 2^{[k]}$ that assigns a subset of $[k]$ to each vertex of a graph $G$ is a \emph{$k$-list assignment} for $G$. For a $k$-list assignment $L$, a function $f: V(G) \rightarrow [k]$ is a \emph{coloring of $(G,L)$} if $f$ is a $k$-coloring of $G$ and $f(v) \in L(v)$ for all $v \in V(G)$. We say that a graph $G$ is \emph{$L$-colorable}, and that the pair $(G,L)$ is {\em colorable}, if $(G,L)$ has a coloring. The \textsc{list-$k$ coloring problem} is the problem of deciding, given a graph $G$ and a $k$-list assignment $L$, if $(G,L)$ is colorable. Since this generalizes the $k$-coloring problem, it is also $NP$-hard for all $k \geq 3$. We denote by $P_t$ the path with $t$ vertices, and we use $K_{1,s}$ to denote the complete bipartite graph with parts of size $1$ and $s$, respectively. The one-subdivision of $K_{1,s}$ is obtained by replacing every edge $\{u,v\}$ of $K_{1,s}$ by two edges $\{u,w\}$ and $\{v,w\}$ with a new vertex $w$. For a set $\mathcal{H}$ of graphs, a graph $G$ is $\mathcal{H}$-free if no element of $ \mathcal{H} $ is an induced subgraph of $G$. If $\mathcal{H}=\{H\}$, we say that $G$ is $H$-free. In this paper, we use the terms ``polynomial time'' and ``polynomial size'' to mean ``polynomial in $|V(G)|$'', where $G$ is the input graph. Since the \textsc{$k$-coloring problem} and the \textsc{list-$k$ coloring problem} are $NP$-hard for $k \geq 3$, their restrictions to $H$-free graphs, for various $H$, have been extensively studied. In particular, the following is known: \begin{theorem}[\cite{gps}] Let $H$ be a (fixed) graph, and let $k>2$. If the \textsc{$k$-coloring problem} can be solved in polynomial time when restricted to the class of $H$-free graphs, then every connected component of $H$ is a path. \end{theorem} Thus if we assume that $H$ is connected, then the question of determining the complexity of $k$-coloring $H$-free graphs is reduced to studying the complexity of coloring graphs with certain induced paths excluded, and a significant body of work has been produced on this topic. Below we list a few such results. \begin{theorem}[\cite{c1}] \label{3colP7} The \textsc{3-coloring problem} can be solved in polynomial time for the class of $P_7$-free graphs. \end{theorem} \begin{theorem} [\cite{4p6}] The \textsc{4-coloring problem} can be solved in polynomial time for the class of $P_6$-free graphs. \end{theorem} \begin{theorem}[\cite{hoang}] The \textsc{$k$-coloring problem} can be solved in polynomial time for the class of $P_5$-free graphs. \end{theorem} \begin{theorem}[\cite{huang}] The \textsc{4-coloring problem} is $NP$-complete for the class of $P_7$-free graphs. \end{theorem} \begin{theorem}[\cite{huang}] For all $k \geq 5$, the \textsc{$k$-coloring problem} is $NP$-complete for the class of $P_6$-free graphs. \end{theorem} The only cases for which the complexity of $k$-coloring $P_t$-free graphs is not known are $k=3$, $t \geq 8$.
In this paper, we consider the \textsc{list-$3$ coloring problem} for $P_t$-free graphs with no induced 1-subdivision of $ K_{1,s}$. We use $ SDK_s $ to denote the one-subdivision of $K_{1,s}$. The main result is the following: \begin{theorem} For all positive integers $s$ and $t$, the \textsc{list-$3$ coloring problem} can be solved in polynomial time for the class of $(P_t,SDK_s)$-free graphs. \end{theorem} \section{Preliminaries} We need two theorems: the first one is the famous Ramsey Theorem \cite{Ramsey}, and the second is a result of Edwards \cite{edwards}: \begin{theorem}[\cite{Ramsey}] \label{Ramsey} For each pair of positive integers $k$ and $l$, there exists an integer $R(k,l)$ such that every graph with at least $R(k,l)$ vertices contains a clique with at least $k$ vertices or an independent set with at least $l$ vertices. \end{theorem} \begin{theorem}[\cite{edwards}] \label{Edwards} Let $G$ be a graph, and let $L$ be a list assignment for $G$ such that $|L(v)|\leq 2$ for all $v\in V(G)$. Then a coloring of $(G,L)$, or a determination that none exists, can be obtained in time $O(|V(G)|+|E(G)|)$. \end{theorem} Let $G$ be a graph with list assignment $L$. For $X \subseteq V(G)$ we denote by $G|X$ the subgraph induced by $G$ on $X$, by $G \setminus X$ the graph $G|(V(G) \setminus X)$ and by $(G|X,L)$ the list coloring problem where we restrict the domain of the list assignment $L$ to $X$. For $v \in V(G)$ we write $N_G(v)$ (or $N(v)$ when there is no danger of confusion) to mean the set of vertices of $G$ that are adjacent to $v$. For $X\subseteq V(G)$ we write $N_G(X)$ (or $N(X)$ when there is no danger of confusion) to mean $\bigcup_{v\in X}N(v)$. We say that $D\subseteq V(G)$ is a \emph{dominating set} of $G$ if for every vertex $v\in V(G)\setminus D$, $N(v)\cap D\neq \emptyset$. The following corollary follows immediately from Theorem~\ref{Edwards}. \begin{Corollary} \label{Edwards2} Let $G$ be a graph, $L$ be a $3$-list assignment for $G$ and let $D$ be a dominating set of $G$. Then a coloring of $(G,L)$, or a determination that $(G,L)$ is not colorable, can be obtained in time $O(3^{|D|}(|V(G)|+|E(G)|))$. \end{Corollary} \begin{proof} For every coloring $c$ of $(G|D,L)$, in time $O(|E(G)|)$ we can define a list assignment $L_c$ of $G$ as follows: if $ v \in D $ we set $ L_c(v)=\{c(v)\} $, and if $ v \notin D $ we can pick $u\in N(v)\cap D$ by the definition of a dominating set and set $ L_c(v)=L(v)\setminus \{c(u)\}$. Let $\mathcal{L}=\{L_c: c$ is a coloring of $(G|D,L) \}$; then clearly $|\mathcal{L}|\leq 3^{|D|}$, and $(G,L)$ is colorable if and only if there exists an $L_c\in \mathcal{L}$ such that $(G,L_c)$ is colorable. For every $L_c\in \mathcal{L}$, by construction $ |L_c(v)|\leq 2 $ for every $v\in V(G)$, and hence by Theorem~\ref{Edwards}, a coloring of $(G,L_c)$, or a determination that none exists, can be obtained in time $O(|V(G)|+|E(G)|)$. Therefore a coloring of $(G,L)$, or a determination that $(G,L)$ is not colorable, can be obtained in time $O(3^{|D|}(|V(G)|+|E(G)|))$. \end{proof} \section{The Algorithm} Let $s$ and $t$ be positive integers, and let $G=(V,E)$ be a connected $ (P_t,SDK_s,K_4) $-free graph. Pick an arbitrary vertex $a\in V$ and let $S_1=\{a\}$. For $v\in V$, let $d(v)$ be the distance from $v$ to $a$. For $i=1,2,\dots, t-2$, we define the set $S_{i+1}$ as follows: \begin{itemize} \item Let $B_i=N(S_i), W_i=V\setminus(B_i\cup S_i)$.
\item Write $S_i=\{v_1,v_2,\dots,v_{|S_i|}\}$ and define $$B_i^j=\left\{v\in \left(B_i\setminus \bigcup_{k=1}^{j-1}B_i^k\right) : v \textnormal{ is adjacent to } v_j \right\}$$ for $j=1,2,\dots, |S_i|.$ Then $B_i=\bigcup_{j=1}^{|S_i|}B_i^j$. \item For $j=1,2,\dots, |S_i|$, let $X^j_i\subseteq B_i^j$ be a minimal vertex set such that for every $w\in W_i$, if $N(w)\cap B_i^j\neq \emptyset$, then $N(w)\cap X_i^j\neq \emptyset$. Let $X_i=\bigcup_{j=1}^{|S_i|}X_i^j$. \item Let $S_{i+1}=S_i\cup X_i$. \end{itemize} It is clear that we can compute $S_{t-1}$ in $O(t|V|^2)$ time. Next, we prove some properties of this construction. \begin{lemma}\label{size} For $i=1,2,\dots, t-2$, $|S_{i+1}|\leq |S_i| (1+R(4,R(4,s)))$. \end{lemma} \begin{proof} It is sufficient to show that for each $j=1,2,\dots, |S_i|$, $ |X^j_i| \leq R(4,R(4,s))$. Suppose not; then $ |X^\ell_i |=K>R(4,R(4,s))$ for some $\ell\in \{1,2,\dots, |S_i|\}$. Let $ X^\ell_i =\{x_1,x_2,\dots,x_K\}$. By the minimality of $X^\ell_i $, for $j=1,2,\dots, K$, there exists $y_j\in W_i$ such that $N(y_j)\cap X^\ell_i=\{x_j\}$. Since $G$ is $K_4$-free, by Theorem \ref{Ramsey}, there exists a stable set $X'\subseteq X^\ell_i$ of size $R(4,s)$. We may assume $X'=\{x_1,x_2,\dots,x_{R(4,s)}\}$. Let $Y'=\{y_1,y_2,\dots,y_{R(4,s)}\}$. Again by Theorem \ref{Ramsey}, there exists a stable set $Y''\subseteq Y'$ of size $s$. We may assume $Y''=\{y_1,y_2,\dots,y_s\}$ and let $X''=\{x_1,x_2,\dots,x_s\}$. Then $G|(\{v_\ell\}\cup X''\cup Y'')$ is isomorphic to $ SDK_s $, a contradiction. \end{proof} \begin{lemma}\label{length} For $i=0,1,2,\dots, t-2$, $B_{i+1} \setminus (B_i \cup S_i) = \{v : d(v) = i+1\}$ (where $S_0 = \emptyset$, $B_0 = \{a\}$ and $B_{t-1} = N(S_{t-1})$). \end{lemma} \begin{proof} We prove the lemma by induction on $i$. It is clear that for $i=0$, $B_1 = N(a) = \{v : d(v) = 1\}$. Now suppose the lemma holds for $i < k$, where $k\in \{1,2,\dots, t-2\}$. First we show that for every $v\in B_{k+1}\setminus (B_k \cup S_k)$, $d(v) = k+1$. By construction $v\in W_k$, hence $d(v) > k$ by induction. Since $v \in B_{k+1} \setminus B_k$, $v$ has a neighbor $w$ in $S_{k+1} \setminus S_k \subseteq B_k$; and thus $d(v) \leq d(w) + 1 \leq k+1$. Now let $v \in V$ with $d(v) = k+1$. It follows that $v \not\in (B_k \cup S_k)$, and $v \in B_{k+1} \cup W_{k+1}$, and $v$ has a neighbor $w \in V$ with $d(w) = k$. By induction, it follows that $v \in W_k$ and $w \in B_k$. Let $j$ be such that $w \in B_k^j$. Since $v \in W_k$ and $N(v) \cap B_{k}^j \neq \emptyset$, it follows that $v$ has a neighbor in $X_k^j \subseteq X_k \subseteq S_{k+1}$, and therefore $v \in B_{k+1}$, as required. This finishes the proof of Lemma~\ref{length}. \end{proof} By applying Lemma~\ref{size} and Lemma~\ref{length}, we deduce several properties of $S_{t-1}$. \begin{lemma}\label{S} \begin{enumerate} \item There exists a constant $M_{s,t}$ which only depends on $s$ and $t$ such that $|S_{t-1}|\leq M_{s,t}$. \item $W_{t-1}=V\setminus (S_{t-1}\cup N(S_{t-1}))=\emptyset$. \end{enumerate} \end{lemma} \begin{proof} Since we start with $|S_1|=1$, by applying Lemma~\ref{size} $ t-2 $ times, it follows that $|S_{t-1}|\leq (1+R(4,R(4,s)))^{t-2}$. Let $M_{s,t}=(1+R(4,R(4,s)))^{t-2}$; then the first claim holds. Suppose the second claim does not hold. From Lemma \ref{length}, it follows that $\{v : d(v) \leq t-1\} \subseteq S_{t-1} \cup N(S_{t-1})$. But if $w \in V$ satisfies $d(w) \geq t-1$, then a shortest $w$--$a$ path is an induced path on at least $t$ vertices, contradicting the fact that $G$ is $P_t$-free.
Thus the second claim holds. \end{proof} We are now ready to prove our main result, which we rephrase here: \begin{theorem} Let $ M_{s,t}=(1+R(4,R(4,s)))^{t-2} $. There exists an algorithm with running time $O(|V(G)|^4+t|V(G)|^2+3^{M_{s,t}}(|V(G)|+|E(G)|))$ with the following specification. \\ \\ {\bf Input:} A $(P_t,SDK_s)$-free graph $G$ and a $3$-list assignment $L$ for $G$. \\ \\ {\bf Output:} A coloring of $(G,L)$, or a determination that $(G,L)$ is not colorable. \end{theorem} \begin{proof} We may assume that $G$ is connected, since otherwise we can run the algorithm for each component of $G$ independently. In time $O(|V(G)|^4)$ we can check whether $G$ contains a $K_4$; if it does, then $G$ is not $3$-colorable, and hence $(G,L)$ is not colorable. If $G$ is $K_4$-free, we can construct $S_{t-1}$ in $O(t|V(G)|^2)$ time as stated above. Then by Lemma~\ref{S}, $S_{t-1}$ is a dominating set of $G$ and $|S_{t-1}|\leq M_{s,t}$. Now the theorem follows from Corollary~\ref{Edwards2}. \end{proof}
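To make the procedure concrete, the following Python sketch (our own illustration; the adjacency-set representation, the identifiers and the greedy-plus-pruning construction of the minimal sets $X_i^j$ are ours) computes $S_{t-1}$; combined with Corollary~\ref{Edwards2}, trying all $3^{|S_{t-1}|}$ colorings of this set then decides $(G,L)$:
\begin{verbatim}
# Build S_{t-1} as in Section 3.  G: dict mapping each vertex to the
# set of its neighbours; a: an arbitrary start vertex; t: as in P_t.
def dominating_set(G, a, t):
    S = {a}
    for _ in range(t - 2):
        B = set().union(*(G[v] for v in S))    # B_i = N(S_i)
        W = set(G) - B - S                     # W_i
        X, assigned = set(), set()
        for v in S:
            Bj = (G[v] & B) - assigned         # the block B_i^j
            assigned |= Bj
            Xj = set()
            for u in W:                        # hit every W-neighbourhood
                if G[u] & Bj and not (G[u] & Xj):
                    Xj.add(next(iter(G[u] & Bj)))
            for z in list(Xj):                 # prune to a minimal X_i^j
                if all(G[u] & (Xj - {z})
                       for u in W if G[u] & Bj):
                    Xj.discard(z)
            X |= Xj
        S |= X
    return S
\end{verbatim}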
\section{\large{\bf Introduction}}% Spontaneous symmetry breaking in grand unified theories (GUTs) can produce a variety of topological or non-topological defects \cite{{Vilenkin:1984ib,Vilenkin:2000jqa,Bhattacharjee:1991zm,Hill:1982iq,Kibble:1976sj}}. These defects generically arise from the breaking of a group, $G$, to its subgroup, $H$, such that a manifold of equivalent vacua, $\mathcal{M}=G/H$, exists. Monopoles form when the manifold $\mathcal{M}$ contains non-contractible two-dimensional spheres \cite{tHooft:1974kcl}, cosmic strings when it contains non-contractible loops, and domain walls when $\mathcal{M}$ is disconnected \cite{Hindmarsh:1994re}. The monopoles can be avoided by inflation, which naturally incorporates the GUT scale in supersymmetric hybrid inflation \cite{Dvali:1994ms}. It has been shown for a large class of GUT models that in spontaneous symmetry breaking schemes curing the monopole problem, the formation of cosmic strings cannot be avoided \cite{Jeannerot:2003qv}. Cosmic strings are interesting messengers from the early universe due to their characteristic signatures in the stochastic gravitational wave background (SGWB). The evidence for a stochastic process at nanohertz frequencies reported in the recent NANOGrav 12.5-year data has been interpreted as an SGWB in a large number of recent papers \cite{King:2020hyd,JohnEllis,Buchmuller:2020lbh,King:2021gmj,Ahmed:2021ucx,Vagnozzi:2020gtf,Benetti:2021uea,Lazarides:2021uxv,Samanta:2020cdk,Blasi:2020mfx,Ashoorioon:2022raz}. Relic gravitational waves (GWs) provide a fascinating window to explore very early universe cosmology \cite{Ahriche:2018rao}. Cosmic strings produce powerful bursts of gravitational radiation that could be detected by interferometric gravitational wave detectors such as LIGO, Virgo, and LISA \cite{LIGOScientific:2019vic, LISA:2017pwj}. In addition, the SGWB can be detected or constrained by various observations including Big Bang Nucleosynthesis (BBN), pulsar timing experiments and interferometric gravitational wave detectors \cite{Goncharov:2021oub}. Among the various proposed extensions of the minimal supersymmetric standard model (MSSM), the $U(1)_{B-L}$ extension is the simplest \cite{Ahmed:2021dvo,Buchmuller:2012wn,Ahmed:2020lua}. Here $B$ and $L$ denote the baryon and lepton numbers, respectively. As a local symmetry, the $B-L$ group resides in the grand unified (GUT) gauge group $SO(10)$. The spontaneous breaking of $U(1)_{B-L}$ at the end of inflation requires an extended scalar sector, which automatically yields hybrid inflation explaining the inhomogeneities of the CMB. In the $B-L$ breaking phase transition, most of the vacuum energy density is rapidly transferred to non-relativistic $B-L$ Higgs bosons, and a sizable fraction also goes into cosmic strings. The heavy neutrinos lead to an elegant explanation of the small neutrino masses via the seesaw mechanism, and the decays of the heavy Higgs bosons and heavy neutrinos can explain the baryon asymmetry via thermal and non-thermal leptogenesis \cite{Fukugita:1986hr,Flanz:1994yx,Vagnozzi:2017ovm,Vagnozzi:2018jhn}. The temperature evolution during reheating is controlled by the interplay between the $B-L$ Higgs and the neutrino sector, while the dark matter originates from thermally produced gravitinos.
The embedding of $U(1)_{B-L}$ into a simply-connected group such as $SO(10)$ or the Pati-Salam symmetry ($SU(4)_C \times SU(2)_L \times SU(2)_R$) produces metastable cosmic strings due to the spontaneous pair creation of a monopole and an anti-monopole. Once the string is cut, the monopoles at the two ends are quickly pulled together due to the string tension, forcing them to annihilate. If the string network is sufficiently long-lived, it can generate a stochastic gravitational wave background (SGWB) in the range of ongoing and future gravitational wave (GW) experiments \cite{Buchmuller:2019gfy,Masoud:2021prr}. Hybrid inflation, in particular, is one of the most promising models of inflation, and can be naturally realized within the context of supergravity theories. This scenario is based on the inclusion of two scalar fields \cite{Linde:1993cn}, with the first one realizing the slow-roll inflation and the second one, dubbed the ``waterfall'' field, triggering the end of the inflationary epoch. While in the standard hybrid inflationary scenario \cite{Dvali:1994ms,Copeland:1994vg,Linde:1997sj} the GUT gauge symmetry is broken at the end of inflation, in the shifted \cite{Jeannerot:2000sv} and smooth variants \cite{Lazarides:1995vr,Ahmed:2022vlc} the gauge symmetry breaking occurs during inflation and thus the disastrous magnetic monopoles and other dangerous topological defects are inflated away. In this paper we study standard hybrid inflation in the context of supergravity, where a no-scale K\"ahler potential is assumed. We consider the framework of the MSSM gauge symmetry augmented by a $U(1)_{B-L}$ factor and investigate the implementation of hybrid inflation and its interplay with the issues of non-thermal leptogenesis, gravitino dark matter and the stochastic gravitational wave background (SGWB) generated by a metastable cosmic string network. For $\mu$-hybrid inflation, see Ref.~\cite{Afzal:2022vjx}. We consider values of the monopole-to-string-tension ratio from $\sqrt{\lambda} \simeq 7.4$ (metastable cosmic strings) to $\sqrt{\lambda} \simeq 9.0$ (quasi-stable cosmic strings). The parametric space consistent with successful reheating with non-thermal leptogenesis and gravitino dark matter restricts the allowed values of the string tension to the range $10^{-9} \lesssim G\mu_{CS} \lesssim 8 \times 10^{-6}$ and predicts a stochastic gravitational wave background (SGWB) that lies within the 1- and 2-$\sigma$ bounds of the recent NANOGrav 12.5-year data, as well as the sensitivity bounds of future gravitational wave (GW) experiments. The layout of the paper is as follows. In Sec.~2 we describe the basic features of the model including the superfields, their charge assignments, and the superpotential constrained by a $U(1)_R$ symmetry. The inflationary setup is described in Sec.~3. The numerical analysis is presented in Sec.~4, including the prospects of observing primordial gravity waves, non-thermal leptogenesis, gravitino cosmology and the stochastic gravitational wave background (SGWB) generated by the metastable cosmic string network. Our conclusions are summarized in Sec.~5. \section{\large{\bf Model Description}}% In this section, we present basic features regarding the gauge symmetry and the spectrum of the effective model in which the inflationary scenario will be implemented.
The gauge group $U(1)_{B-L}$ is embedded in a grand unified (GUT) gauge group $SO(10)$ and is based on the gauge symmetry \begin{equation}\label{eq:GBL} G_{B-L}=SU(3)_{C}\times{SU(2)_L}\times U(1)_{Y}\times{U(1)_{B-L}}~\cdot \end{equation} \noindent In addition to the MSSM matter and Higgs superfields, the model includes six additional superfields, namely: a gauge singlet $S$ whose scalar component acts as the inflaton, three right-handed neutrinos $N_{i}^{c}$, and a pair of Higgs singlets $H$ and $\ov{H}$, which are responsible for breaking the gauge group $U(1)_{B-L}$. The charge assignment of these superfields under the gauge symmetry $SU(3)_{C}\times{SU(2)_L}\times U(1)_{Y}\times{U(1)_{B-L}}$ as well as the global symmetries $U(1)_{R}$, $U(1)_B$ and $U(1)_L$ is listed in Table \ref{tab:themodel}. The $U(1)_{B-L}$ symmetry is spontaneously broken when the $H$, $\ov{H}$ singlet Higgs superfields acquire vacuum expectation values (VEVs), providing Majorana masses to the right-handed neutrinos. The superpotential of the model, invariant under the symmetries listed in Table \ref{tab:themodel}, is given as \begin{equation}\label{wscalar1} \begin{split} W & = \mu H_{u}H_{d}+ y_u {H_{u}}{Q}{u}^c + y_d {H_{d}} {Q}{d}^c +y_{e}{H_{d}}{L}{e}^c + y_{\nu}{H_{u}}{L}{N}^c\\ &+ \kappa S \left(\ov{H}H-M^{2}\right)+ \beta_{ij}^{\prime}\frac{ H H N^{c}N^{c}}{\Lambda}.\; \end{split} \end{equation} \begin{table}[!t] \begin{center} \begin{tabular}{c|c|c|c|c}\hline\hline { \sc Superfields}&{\sc Representations}&\multicolumn{3}{c}{\sc Global Symmetries}\\\cline{3-5} % &{ \sc under $G_{B-L}$ } & {\hspace*{0.3cm} $U(1)_R$ \hspace*{0.3cm} } & {\hspace*{0.3cm}$B$\hspace*{0.3cm}} &{$ L$} \\\hline \multicolumn{5}{c}{\sc Matter Fields}\\\hline % {$e^c_i$} &{$({\bf 1, 1}, 1, 1)$}& $1$&$0$ & $-1$ \\ % {$N^c_i$} &{$({\bf 1, 1}, 0, 1)$}& $1$ &$0$ & $-1$ \\ {$L_i$} & {$({\bf 1, 2}, -1/2, -1)$} &$0$&{$0$}&{$1$} \\ {$u^c_i$} &{$({\bf 3, 1}, -2/3, -1/3)$}& $1/2$ &$-1/3$& $0$ \\ {$d^c_i$} &{$({\bf 3, 1}, 1/3, -1/3)$}& $1/2$ &$-1/3$& $0$ \\ {$Q_i$} & {$({\bf \bar 3, 2}, 1/6 ,1/3)$} &$1/2$ &$1/3$&{$0$} \\ \hline \multicolumn{5}{c}{\sc Higgs Fields}\\\hline {$ H_{d} $}&$({\bf 1, 2}, -1/2, 0)$& {$1$}&{$0$}&{$0$}\\ {$ H_{u} $} &{$({\bf 1, 2}, 1/2, 0)$}& {$1$} & {$0$}&{$0$}\\ \hline % {$S$} & {$({\bf 1, 1}, 0, 0)$}&$2$ &$0$&$0$ \\ % {$\ov{H}$} &{$({\bf 1, 1}, 0, 1)$}&{$0$} & {$0$}&{$-1$}\\ % {$H$}&$({\bf 1, 1}, 0,-1)$&{$0$}&{$0$}&{$1$}\\ \hline\hline \end{tabular} \end{center} \caption[]{Superfield content of the model, the corresponding representations under the local gauge symmetry $G_{B-L}$ and the properties with respect to the extra global symmetries $U(1)_R$, $U(1)_B$ and $U(1)_L$.}\label{tab:themodel} \end{table} \noindent The first line in the above superpotential contains the usual MSSM $\mu$-term and Yukawa couplings, supplemented by an additional Yukawa coupling among $L_i$ and $N_i^c$. These Yukawa couplings generate Dirac masses for the up and down quarks, the charged leptons and the neutrinos. The family indices of the Yukawa couplings are generally suppressed for simplicity. The first term in the second line is relevant for standard supersymmetric hybrid inflation, with $M$ being a GUT scale mass parameter and $\kappa$ a dimensionless coupling constant. The non-renormalizable term in the second line generates Majorana masses for the right-handed neutrinos $N_i^c$ and induces the decay of the inflaton to $N_i^c$. By virtue of the extra global symmetries, the model is protected from dangerous proton decay operators and $R$-parity violating terms.
\section{Inflation Potential} We now compute the effective scalar potential, including the contributions from the $F$- and $D$-sectors, the radiative corrections, and the soft supersymmetry breaking terms. The superpotential terms relevant for inflation are \begin{equation}\label{wscalar} W\supset \kappa S \left(\ov{H}H-M^{2}\right) . \end{equation} \noindent We consider a no-scale structure K\"ahler potential which, after including contributions from the relevant fields in the model, takes the following form \begin{equation}\label{kahler1} \begin{aligned} K =-3 m^{2}_{P} \log \Bigg[T + T^{\ast}- \frac{1}{3 m^{2}_{P}}\left(H H^{\ast} + \bar{H} \bar{H}^{\ast} +S^{\dagger}S\right) &+ \frac{\xi}{3 m^{2}_{P}}\left(H \bar{H} + H^{\ast} \bar{H}^{\ast}\right) \\ &+ \frac{\gamma}{3m^{4}_{P}}\left(S^{\dagger}S\right)^2+....\Bigg], \end{aligned} \end{equation} where $T$ is a complex K\"ahler modulus field, $T=u+i\, v$, hence $T+ T^{\ast}=2u$, and $\xi$, $\gamma$ are dimensionless parameters. Here we choose $u=1/2$. For later convenience, we define \begin{equation} \begin{aligned} \Delta = \Bigg[T + T^{\ast}- \frac{1}{3 m^{2}_{P}}\left(H H^{\ast} + \bar{H} \bar{H}^{\ast} +S^{\dagger}S\right) &+ \frac{\xi}{3 m^{2}_{P}}\left(H \bar{H} + H^{\ast} \bar{H}^{\ast}\right) \\ &+ \frac{\gamma}{3m^{4}_{P}}\left(S^{\dagger}S\right)^2+....\Bigg]~, \end{aligned} \end{equation} so that Eq.~\eqref{kahler1} can be written as \begin{equation} K =-3 m^{2}_{P}\log \Delta. \end{equation} The fields carrying $SU(3)_{C}\times{SU(2)_L}\times U(1)_{Y}\times{U(1)_{B-L}}$ quantum numbers are given in Table \ref{tab:themodel} and denoted collectively here by $\phi_i$. The $D$-term potential is given as \begin{eqnarray} V_{D}=\frac{1}{2} D^p_{a} D^p_{a}~, \end{eqnarray} where $D_{a}^p$ is defined for $SU(N)$ groups as \begin{equation*} D_{a}^p=-g_{a}K_{,\phi_{i}}\left[t_{a}^p\right]_{i}^{j}\phi_{j} \end{equation*} and for $U(1)$ groups as \begin{equation*} D_{a}^p=-g_{a}K_{,\phi_{i}}\left[t_{a}^p\right]_{i}^{j}\phi_{j}-g_{a}q_{i} \varsigma~. \end{equation*} Here $K_{,\phi_{i}}\equiv dK/d\phi_{i}$, $\varsigma$ is the Fayet-Iliopoulos coupling constant and $q_i$ are the charges under the $U(1)$ group. The $t_{a}^{p}$ are the generators of the corresponding group $G$ and $p = 1, \dots, {\rm dim}(G) $. The $D$-term potential can be written as \begin{eqnarray} V_{D} = \frac{g_{B-L}^{2}}{2\Delta^2}\left[2\mid \bar{H}\mid^{2}-2\mid H\mid^{2}-\xi\left(2H\bar{H}-2\bar{H}H\right)+\left(q_{H}+q_{\bar{H}}\right)\varsigma\right]^2. \end{eqnarray} A $D$-flat direction can be obtained by a suitable parametrization of the fields ${H}$ and $\ov{{H}}$. We can rewrite the complex fields in terms of real scalar fields as \begin{equation} \label{d-flat} \begin{split} {H} & = \frac{Y}{\sqrt{2}}e^{\iota\theta}\cos\vartheta, \quad \ov{{H}}=\frac{Y}{\sqrt{2}}e^{\iota\ov{\theta}}\sin\vartheta, \end{split} \end{equation} where the phases $\theta$, $\ov{\theta}$ and $\vartheta$ can be stabilized at \begin{eqnarray} \vartheta=\frac{\pi}{4}\quad \text{and} \quad \theta=\ov{\theta}=0, \end{eqnarray} along the $D$-flat direction ($\lvert H \rvert = \lvert \ov{{H}} \rvert =\frac{Y}{2}$).
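Indeed, with the charge assignments of Table~\ref{tab:themodel} one has $q_{H}+q_{\bar{H}}=0$, so the Fayet-Iliopoulos piece drops out of $V_D$, and inserting the parametrization \eqref{d-flat} gives \[2\mid \bar{H}\mid^{2}-2\mid H\mid^{2}=Y^{2}\left(\sin^{2}\vartheta-\cos^{2}\vartheta\right),\] which vanishes at $\vartheta=\pi/4$; the $D$-term potential is therefore identically zero along this direction.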
The $F$-term SUGRA scalar potential is given by \begin{equation} V_{F}=e^{K/m_{P}^2} \left[\left(K_{i\bar{j}} \right)^{-1}\left(D_{z_{i}}W\right)\left(D_{z_{j}}W\right)^{*}-\frac{3\arrowvert W\arrowvert^{2}}{m_{P}^{2}}\right], \label{Einstein frame SUGRA potential} \end{equation} with $z_{i}$ being the bosonic components of the superfields $z% _{i}\in \{S, H, \bar{H},\cdots\}$, and we have defined \begin{equation} D_{z_{i}}W\equiv \frac{\partial W}{\partial z_{i}}+\frac{\partial K}{\partial z_{i}}\frac{W}{m_{p}^{2}}, \ \ \ \ K_{i\bar{j}}\equiv \frac{\partial^{2}K}{\partial z_{i}\partial z_{j}^{*}}, \end{equation} and $D_{z_{i}^{*}}W^{*}=\left( D_{z_{i}}W\right)^{*}.$ The $F$-term scalar potential during inflation becomes \begin{eqnarray} V_{F}(Y,\,|S|) &=& \frac{\kappa^2}{16} \left( Y^2 - 4 M^2 \right)^2 + \kappa^2 Y^2 |S|^2 - \kappa^2 M^4 \left(\frac{2}{3} - 4 \gamma \right) \left(\frac{|S|}{m_{p}}\right)^2 \notag \\ && + \kappa^2 M^4 \left( -\frac{5}{9} + \frac{14 \gamma }{3} + 16 \gamma ^2\right) \left(\frac{|S|}{m_{p}}\right)^4 \cdots. \end{eqnarray} Using the $F$-flatness condition, $D_{z_{i}}W = 0$, the minimum of the potential lies at $Y=2M$ and $S=0$. Along the inflationary trajectory, $Y=0$, the gauge group $U(1)_{B-L}$ is unbroken. After the end of inflation, the spontaneous breaking of the gauge group $U(1)_{B-L}$ yields cosmic strings. Defining the dimensionless variable $x\equiv |S|/M$, we obtain the following form of the potential along the inflationary trajectory: \begin{eqnarray} V_{F}(x) &\simeq& \kappa^2 M^4 \left( 1 -\left(\frac{2}{3} - 4 \gamma \right)\left(\frac{Mx}{m_{p}}\right)^2 + \left( -\frac{5}{9} + \frac{14 \gamma }{3} + 16 \gamma ^2\right)\left(\frac{M x }{m_{p}}\right)^4 + \cdots\right). \label{SHIpotx} \end{eqnarray} The action of our model for the non-canonically normalized field $x$ is given by \begin{equation} \begin{split} \mathcal{A}= \int d^{4}x\sqrt{-g}\left[\frac{m^2_{p}}{2}\mathcal{R}- K^{i}_{j}\partial_{\mu} x^{i}\partial^{\mu} x^{j} -V({x})\right]. \end{split} \end{equation} Introducing a canonically normalized field $\chi$ that satisfies \begin{equation}\label{jphi} \begin{split} \left(\frac{d\chi}{dx}\right)^{2} =\frac{3 \frac{\gamma M^2}{m_{p}^2} x^2 \left(\frac{M^2}{m_{p}^2} x^2-12\right)+9}{\left(\frac{M^2}{m_{p}^2} x^2 \left( \frac{\gamma M^2}{m_{p}^2} x^2-1\right)+3\right)^2}\sim1. \end{split} \end{equation} Since $\gamma M^2\ll m^2_{p}$, integrating Eq. \eqref{jphi} in this limit, we obtain the canonically normalized field $\chi$ as a function of $x$. The canonically normalized potential as a function of $\chi$ can be written as \begin{eqnarray} V_F (\chi) &\simeq& \kappa ^{2}M^{4}\left( 1 -\left(\frac{2}{3} - 4 \gamma \right) \left(\frac{M}{m_{p}}\right)^2 \chi^{2}+ \left( -\frac{5}{9} + \frac{14 \gamma }{3} + 16 \gamma ^2\right) \left(\frac{M}{m_{p}}\right)^4 \chi^4+\cdots\right). \label{scalarpot} \end{eqnarray} The effective scalar potential, including the well-known radiative corrections and soft SUSY breaking terms, can be expressed as \begin{eqnarray} V(\chi) &\simeq&V_{F}+V_{D}+V_{CW}+V_{Soft} \\ &\simeq&\kappa^2 M^4 \left( 1 -\left(\frac{2}{3} - 4 \gamma \right) \left(\frac{M}{m_{p}}\right)^2 \chi^{2}+ \left( -\frac{5}{9} + \frac{14 \gamma }{3} + 16 \gamma ^2\right) \left(\frac{M}{m_{p}}\right)^4\chi^{4}\right. \notag \\ && + \left.\frac{\kappa ^2}{8\pi ^2}F(\chi) + a \left(\frac{m_{3/2} \chi }{\kappa M}\right)+\left(\frac{M_{S}\chi}{\kappa M}\right)^2 \right), \label{SHIpot} \end{eqnarray} with \begin{equation} a = 2|A-2| \cos \left[\arg S + \arg (1-A)\right], \end{equation} and \begin{equation} F(\chi)=\frac{1}{4}\left(\left(\chi^4+1\right)\log\left(\frac{\chi^4-1}{\chi^4}\right) + 2 \chi^2\log\left(\frac{\chi^2+1}{\chi^2 - 1}\right) + 2 \log\left(\frac{\kappa^2 M^2 \chi^2}{Q^2}\right) - 3 \right). \end{equation} Here, $Q$ is the renormalization scale, $a$ and $M_S$ are the coefficients of the soft SUSY breaking linear and mass terms for $S$, respectively, and $m_{3/2}$ is the gravitino mass. For simplicity, we set $M_{S}=m_{3/2}$, assume a suitable initial condition for $\arg S$ to be stabilized at zero, and take $a$ to be constant during inflation (for details see Ref.~\cite{Buchmuller:2014epa}). \section{Analysis} In this section, we analyze the implications of the model and discuss its predictions regarding the various cosmological observables. We pay particular attention to the inflationary predictions and the stochastic gravitational wave (GW) spectrum, consistent with leptogenesis and gravitino cosmology. \subsection{Inflationary Predictions} The inflationary slow-roll parameters can be expressed in terms of $\chi$ as \begin{equation} \epsilon=\dfrac{1}{4}\left(\frac{m_{p}}{M}\right)^2\left(\frac{V^{\prime}\left(\chi\right)}{V(\chi)}\right)^{2}, \quad\quad \eta=\dfrac{1}{2}\left(\frac{m_{p}}{M}\right)^2\left(\frac{V^{\prime\prime}\left(\chi\right)}{V(\chi)}\right), \end{equation} \begin{equation} s^{2}=\dfrac{1}{4}\left(\frac{m_{p}}{M}\right)^4\left(\frac{V^{\prime}\left(\chi\right)V^{\prime\prime\prime}\left(\chi\right)}{V(\chi)^{2}}\right), \end{equation} \noindent where a prime denotes a derivative with respect to $\chi$. The tensor-to-scalar ratio $r$, the scalar spectral index $n_{s}$, and the running of the spectral index $\frac{dn_{s}}{d\ln k}$ are given by \begin{equation} r\simeq16 \epsilon \quad{,}\quad n_{s}\simeq 1+2\eta-6\epsilon \quad{,}\quad \frac{dn_{s}}{d\ln k}\simeq 16\epsilon\eta-24\epsilon^{2}-2s^{2}. \end{equation} The number of e-folds is given by \cite{Garcia-Bellido:1996egv}, \begin{equation} N_{l}=2\left(\frac{M}{m_{p}}\right)^2\int_{\chi_{e}}^{\chi_{l}}\left(\frac{V\left(\chi\right)}{V^{\prime}(\chi)}\right) d\chi= 54+\frac{1}{3}\ln\left[\frac{T_{r}}{10^{9} {\rm GeV}}\right]+\frac{1}{3}\ln\left[\frac{V(\chi_{l})^{1/4}}{10^{16} {\rm GeV}}\right]~, \end{equation} where $l$ denotes the comoving scale after crossing the horizon, $\chi_{l}$ is the field value at $l$, $\chi_e$ is the field value at the end of inflation (i.e., when $\epsilon=1$), and $T_{r}$ is the reheating temperature, which will be discussed in the following section. The amplitude of the curvature perturbation $\Delta_{R}$ is given by \begin{equation} \Delta_{R}^{2}=\frac{V\left(\chi\right)}{24 \pi^{2} \epsilon\left(\chi\right)}. \end{equation} \begin{figure}[t] \centering \includegraphics[width=7.93cm]{plots/M_k_r.pdf} \centering \includegraphics[width=7.93cm]{plots/M_k_s0.pdf} \caption{Contours of the tensor-to-scalar ratio $r$ (left panel) and of the field value $S_0$ (right panel) in the $\kappa$--$M$ plane, where $M$ is the $B-L$ gauge symmetry breaking scale. The boundary curves correspond to the constraints shown. The color bar on the right displays the range of the string tension parameter $G \mu_{CS}$.
The shaded region represents the parametric space that is consistent with gravitino dark matter.} \label{fig1} \end{figure} The results of our numerical calculations are presented in Fig. \ref{fig1}, where the variation of the parameters is shown in the $\kappa$--$M$ plane. In our analysis, the scalar spectral index is fixed at the central value of Planck's bounds, $n_{s}=0.9655$. To keep the SUGRA expansion, parameterized by $\gamma$, under control we impose $S_{0}\leq m_{p}$. We restrict $M\leq 2\times10^{16}$ GeV and $T_{r}\leq 10^{10}$ GeV to avoid the gravitino problem. We further restrict our numerical results by imposing the following conditions \begin{equation} m_{\text{inf}}=2M_{N}, \qquad M_{N}=10 T_{r}, \end{equation} which ensure successful reheating with non-thermal leptogenesis. The boundary curves in Fig. \ref{fig1} represent the $M = 2 \times 10^{16}$ GeV, $T_r = 10^{10}$ GeV, $m_{\text{inf}} = 2 M_N$, $M_N = 10 T_r$ and $G \mu_{CS} = 10^{-12}$ constraints. The left panel in Fig. \ref{fig1} shows the variation of the tensor-to-scalar ratio $r$, whereas the right panel shows the variation of the field value $S_{0}$. The color bar depicts the range of the string tension $G\mu_{CS}$ obtained in our model. It should be noted that the parameter $\gamma$, which controls the SUGRA corrections, makes this model more predictive than the standard hybrid model of inflation. Using the leading-order slow-roll approximation, we obtain the following analytical expression for $n_{s}$ in the small-$\kappa$ limit: \begin{equation} n_s = 1 - 2 \left( \frac{2}{3} - 4 \gamma \right). \end{equation} It can readily be checked that for $\gamma = 0.162292$ we obtain $n_{s} \sim 0.9655$, which is in excellent agreement with the numerical results displayed in Fig. \ref{fig1}. The above equation therefore gives a valid approximation of our numerical results. For the scalar spectral index $n_s$ fixed at Planck's central value ($0.9655$), we obtain the following ranges of parameters \begin{gather} \nonumber 4.2 \times 10^{-6} \lesssim \kappa \lesssim 6.2 \times 10^{-2}, \\ \nonumber (1.3 \times 10^{13} \lesssim M \lesssim 2.0 \times 10^{16}) ~ \text{GeV}, \\ \nonumber (2 \times 10^{16} \lesssim S_0 \lesssim 2 \times 10^{17}) ~ \text{GeV}, \\ \nonumber 7.1 \times 10^{-23} \lesssim r \lesssim 10^{-4}, \\ 10^{-12} \lesssim G \mu_{CS} \lesssim 8 \times 10^{-6}. \end{gather} Using Planck's normalization constraint on $\Delta_{R}$, we obtain the following explicit dependence of $r$ on $\kappa$ and $M$ \begin{equation} r \simeq \frac{2 \kappa^2}{3 \, \pi^2 \Delta_{R}^2} \left(\frac{M}{m_P}\right)^4, \end{equation} which explains the behavior of the tensor-to-scalar ratio $r$ in the $\kappa$--$M$ plane. It can readily be checked that for $\kappa \simeq 4.65 \times 10^{-5}$ and $M \simeq 1.42 \times 10^{13}$ GeV, the above equation gives $r \simeq 7.9 \times 10^{-23}$. On the other hand, $\kappa \simeq 2.7 \times 10^{-2}$ and $M \simeq 2.0 \times 10^{16}$ GeV gives $r \simeq 10^{-4}$. These approximate values are very close to the actual values obtained in the numerical calculations. \subsection{\large{\bf Reheating with non-thermal leptogenesis }}\label{sec4} At the end of the inflationary epoch, the vacuum energy is transferred to the energies of coherent oscillations of the inflaton $S$ and the scalar field $\theta=(\delta H+\delta\bar{H})/\sqrt{2}$, whose decays give rise to the radiation in the universe.
The inflaton decay into right-handed neutrinos is induced by the superpotential term \begin{equation} W \supset \beta_{ij}^{\prime}\frac{ H H N^{c}N^{c}}{\Lambda} ,\label{Infnu1} \end{equation} where $\beta_{ij}^{\prime}$ is a coupling constant and $\Lambda$ represents a high cut-off scale (in a string model this could be identified with the compactification scale). Heavy Majorana masses for the right-handed neutrinos are provided by the following term \begin{equation} M_{\nu^c_{ij}}=\beta_{ij}^{\prime}\frac{\langle H \rangle \langle H \rangle }{\Lambda}~\cdot \end{equation} Also, Dirac neutrino masses ${m_{\nu_D}}_{ij}$ of the order of the electroweak scale are obtained from the tree-level superpotential term ${y_{\nu}}_{ij}\,N_{i}^c\,L_{j}\,H_{u}$ given in~(\ref{wscalar1}). Thus, the neutrino sector is \begin{equation}\label{Infnu2} W\supset {m_{\nu_D}}_{ij}N_iN_j^c+ M_{\nu^c_{ij}}N_i^cN_j^c. \end{equation} The small neutrino masses indicated by neutrino oscillation experiments are obtained by integrating out the heavy right-handed neutrinos and read \begin{equation} {m_{\nu}}_{\alpha\beta}=-\sum_{i}{y_{\nu}}_{i\alpha}{y_{\nu}}_{i\beta}\frac{v_{u}^2}{M_i}~\cdot \label{mneu1} \end{equation} The light neutrino mass matrix ${m_{\nu}}_{\alpha\beta}$ can be diagonalized by a unitary matrix $U_{\alpha i}$ as ${m_{\nu}}_{\alpha\beta} = U_{\alpha i} U_{\beta i} m_{\nu_i}$, where $m_{\nu} = {\rm diag}(m_{\nu_{1}}, m_{\nu_{2}}, m_{\nu_{3}})$ is the diagonal mass matrix and the $M_{i}$ are the eigenvalues of the mass matrix $M_{\nu^c_{ij}}$. The lepton asymmetry is generated (inducing also the baryon asymmetry~\cite{Fukugita:1986hr,Flanz:1994yx}) through right-handed neutrino decays. The ratio of the lepton number density to the entropy density in the limit $T_r < M_{1}\equiv M_{N}\leq m_{\text{inf}} /2 \leq M_{2,3}$ is given by \begin{equation} \frac{n_{L}}{s}\sim \frac{3}{2}\frac{T_{r}}{m_{\text{inf}}}\epsilon_{cp}~, \end{equation} where $\epsilon_{cp}$ is the CP asymmetry factor generated in the out-of-equilibrium decay of the lightest right-handed neutrino, given by \cite{Hamaguchi:2002vc} \begin{equation} \epsilon_{cp}=-\frac{3}{8\pi}\frac{1}{\left({y_{\nu}}{y_{\nu}}^{\dagger}\right)_{11}}\sum_{i=2,3}\operatorname{Im} \left[\left({y_{\nu}}{y_{\nu}}^\dagger\right)_{1i}\right]^2\frac{M_{N}}{M_i}, \end{equation} and $T_{r}$ is the reheating temperature, which can be estimated as \begin{eqnarray} T_r \simeq \sqrt[4]{\frac{90}{\pi^2 g_{\star}}} \sqrt{\Gamma \, m_P}~, \label{reheat} \end{eqnarray} where $g_{\star}=228.75$ for the MSSM, and $\Gamma$ is the decay width of the inflaton into right-handed neutrinos, given by \cite{Hamaguchi:2002vc} \begin{equation} \Gamma \left({\rm inf} \rightarrow N_{i}^c N_{j}^c \right) = \frac{1}{8 \pi}\left(\frac{M_{N}}{M}\right)^2 \, m_{\text{inf}} \left( 1 - \frac{4 M_{N}^2}{m_{\text{inf}}^2} \right)^{1/2}, \end{equation} with the inflaton mass given by \begin{equation} m_{\text{inf}} = \sqrt{2\kappa^2M^2+M_{S}^2}. \end{equation} Assuming a normal hierarchical pattern of light neutrino masses, the CP asymmetry factor, $\epsilon_{cp}$, becomes \begin{equation} \epsilon_{cp} = \frac{3}{8\pi}\frac{M_{N} m_{\nu_{3}}}{v_{u}^2}\delta_{\rm eff}, \end{equation} where $m_{\nu_3}$ is the mass of the heaviest light neutrino, $v_{u}=\langle H_u \rangle $ is the VEV of the up-type electroweak Higgs and $\delta_{\rm eff}$ is the CP-violating phase.
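For orientation, the chain $m_{\text{inf}}\to\Gamma\to T_r$ defined by the last three equations is easy to evaluate numerically; the following sketch (a rough illustration of ours, with an arbitrarily chosen sample point rather than a point from our numerical scan) shows the typical scales involved:
\begin{verbatim}
# Evaluate m_inf, the inflaton decay width and the reheat temperature
# for one sample parameter point (all masses in GeV).
import math

mP, gstar = 2.435e18, 228.75                 # reduced Planck mass, MSSM g_*
kappa, M, MN, MS = 1.0e-2, 1.0e16, 1.0e11, 1.0e4   # sample point

m_inf = math.sqrt(2*kappa**2*M**2 + MS**2)
Gamma = (MN/M)**2*m_inf*math.sqrt(1 - 4*MN**2/m_inf**2)/(8*math.pi)
Tr = (90/(math.pi**2*gstar))**0.25*math.sqrt(Gamma*mP)

print(f"m_inf = {m_inf:.2e}, Gamma = {Gamma:.2e}, T_r = {Tr:.2e}")
# gives m_inf ~ 1.4e14, Gamma ~ 5.6e2 and T_r ~ 1.7e10, so that the
# kinematic condition m_inf >= 2 M_N holds and M_N is O(10) T_r.
\end{verbatim}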
The observed value of the lepton asymmetry is estimated as \cite{Planck:2018vyg}, \begin{eqnarray} \left| n_L/s\right|\approx\left(2.67-3.02\right)\times 10^{-10}. \end{eqnarray} \begin{figure}[!htb] \centering \includegraphics[width=7.93cm]{plots/M_k_tr.pdf} \\ \vspace*{10pt} \centering \includegraphics[width=7.93cm]{plots/M_k_minf.pdf} \centering \includegraphics[width=7.93cm]{plots/M_k_MN.pdf} \caption{Contours of the reheat temperature $T_r$ (top), the inflaton mass $m_{\text{inf}}$ (bottom left) and the right-handed neutrino mass $M_N$ (bottom right) in the $\kappa$--$M$ plane, where $M$ is the $B-L$ gauge symmetry breaking scale. The boundary is drawn for the different constraints shown. The color bar on the right displays the range of the string tension parameter $G \mu_{CS}$. The shaded region represents the parametric space that is consistent with gravitino dark matter.} \label{fig2} \end{figure} In the numerical estimates discussed below we take $m_{\nu_3} = 0.05$ eV, $|\delta_{\rm eff}|=1$ and $v_u = 174$ GeV, while assuming large $\tan \beta $. The non-thermal production of lepton asymmetry, $n_{L}/s$, is given by the following expression \begin{equation} \frac{n_L}{s} \lesssim 3 \times 10^{-10} \frac{T_r}{m_{\text{inf}}}\left(\frac{M_{N}}{10^6 \text{ GeV}}\right)\left(\frac{m_{\nu_3}}{0.05 \text{ eV}}\right) \label{nls}, \end{equation} with $M_{1} \gg T_r $. Using the experimental value $n_L/s\approx 2.5\times 10^{-10}$ with Eqs. \eqref{reheat} and \eqref{nls}, we obtain the following lower bound on $T_r$, \begin{equation} T_r \gtrsim 1.9 \times 10^7 \text{ GeV} \left(\frac{m_{\text{inf}}}{10^{11}\text{ GeV}}\right)^{3/4} \left(\frac{10^{16} \, \text{GeV}}{M_{N}}\right)^{1/2}\left(\frac{m_{\nu_3}}{0.05 \text{ eV}}\right)^{1/2} \label{lepto}. \end{equation} A successful baryogenesis is usually realized through sphaleron processes, whereby an initial lepton asymmetry $n_L/s$ is partially converted into a baryon asymmetry \cite{Khlebnikov:1988sr,Harvey:1990qw}. However, the right-handed neutrinos produced in inflaton decays are highly boosted, which affects the estimate of the final baryon asymmetry, as discussed in \cite{Buchmuller:2019gfy}. Eq. \eqref{lepto} is used in our numerical analysis to calculate inflationary predictions which are consistent with leptogenesis and baryogenesis. Fig. \ref{fig2} shows the contours of the reheating temperature $T_r$ (top), the inflaton mass $m_{\text{inf}}$ (bottom left) and the right-handed neutrino mass $M_N$ (bottom right) in the $\kappa$--$M$ plane. We obtain these parameters in the following ranges \begin{gather} \nonumber (10^{7} \lesssim T_r \lesssim 10^{10}) ~ \text{GeV}, \\ \nonumber (4.7 \times 10^{8} \lesssim M_N \lesssim 5.3 \times 10^{11}) ~ \text{GeV}, \\ (9.4 \times 10^{8} \lesssim m_{\text{inf}} \lesssim 7.6 \times 10^{14}) ~ \text{GeV}. \end{gather} The color bar on the right displays the range of the string tension parameter $G\mu_{CS}$, while the shaded region corresponds to a stable gravitino, as discussed in the following subsection. \subsection{Gravitino Dark Matter} An important constraint on the reheat temperature $T_{r}$ arises when gravitino cosmology is taken into account; it depends on the SUSY breaking mechanism and the gravitino mass $m_{3/2}$. As noted in \cite{Ahmed:2021dvo,Lazarides:2020zof}, one may consider the cases of \\ $\alpha)$ a stable LSP gravitino; \\ $\beta)$ an unstable long-lived gravitino with mass $m_{3/2} < 25$ TeV; \\ $\gamma)$ an unstable short-lived gravitino with mass $m_{3/2} > 25$ TeV.
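Before examining these cases, we note that the leptogenesis bound \eqref{lepto} above is straightforward to evaluate numerically; a minimal sketch (inputs illustrative):
\begin{verbatim}
# Lower bound on T_r from Eq. (lepto); inputs are illustrative assumptions.
def T_r_min(m_inf, M_N, m_nu3=0.05):
    """Minimal reheat temperature in GeV (m_inf, M_N in GeV, m_nu3 in eV)."""
    return (1.9e7 * (m_inf / 1e11)**0.75
            * (1e16 / M_N)**0.5 * (m_nu3 / 0.05)**0.5)

print(T_r_min(m_inf=1e11, M_N=1e16))   # 1.9e7 GeV at the reference point
\end{verbatim}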
We first consider the case of a stable gravitino, in which case it is the lightest SUSY particle (LSP); assuming it is thermally produced, its relic density is estimated to be \cite{Bolz:2000fu} \begin{equation}\label{omega} \Omega_{3/2} h^{2}=0.08\left(\frac{T_{r}}{10^{10} \; \text{GeV} }\right)\left(\frac{m_{3/2}}{1 \; \text{TeV}}\right)\left(1+\frac{m_{\tilde{g}}^{2}}{3m_{3/2}^{2}}\right)~, \end{equation} where $m_{\tilde{g}}$ is the gluino mass parameter, $h$ is the present Hubble parameter in units of $100$ km $\text{sec}^{-1} \text{Mpc}^{-1}$, and $\Omega_{3/2}= \rho_{3/2}/\rho_c $.\footnote{$\rho_{3/2}$ and $\rho_c$ are the present energy density of the gravitino and the critical energy density of the present universe, respectively.}\footnote{Eq. (\ref{omega}) contains only the dominant QCD contributions to the gravitino production rate. In principle there are extra contributions descending from the electroweak sector, as mentioned in \cite{Pradler:2006qh}, \cite{Rychkov:2007uq} and recently revised in \cite{Eberl:2020fml}. If we considered these types of contributions in our analysis, we estimate that (depending on the gaugino universality condition) our results would deviate by $\sim{(10-15)\%}$.} A stable LSP gravitino requires $m_{\tilde{g}}>m_{3/2}$, while current LHC bounds on the gluino mass are around $2.2$ TeV \cite{Vami:2019slp}. It is found from Eq. \eqref{omega} that the overclosure limit $\Omega_{3/2}<1$ puts a severe upper bound on the reheating temperature $T_{r}$, depending on the gravitino mass $m_{3/2}$. Here, we have omitted the contribution from the decays of squarks and sleptons into gravitinos. \begin{figure}[!ht] \centering \includegraphics[width=12cm]{plots/M_k_m32a.pdf} \caption{Contours of the gravitino mass $m_{3/2}$ in the $\kappa$--$M$ plane, where $M$ is the $B-L$ gauge symmetry breaking scale. The boundary is drawn for the different constraints shown. The color bar on the right displays the range of the string tension parameter $G \mu_{CS}$. The gray shaded region corresponds to the parametric space where the gravitino is the LSP $(m_{\tilde{g}}>m_{3/2})$.} \label{grav} \end{figure} Using the lower bound on the relic abundance, $\Omega h^{2}=0.144$ \cite{Akrami:2018odb}, in the above equation, we display a gray shaded region in Fig. \ref{grav} that satisfies the condition $m_{\tilde{g}}>m_{3/2}$; hence, in this region, the gravitino is the lightest supersymmetric particle (LSP) and acts as a viable dark matter candidate. For gravity mediated SUSY breaking, the constraints on the gravitino mass and reheat temperature from BBN are \cite{Kawasaki:2004qu} \begin{equation} \label{eqlong} \begin{split} T_{r}& \lesssim 10^{7} {\rm GeV \quad for }\quad m_{3/2}= (100-5000)\; {\rm GeV }\; ,\\ T_{r}& \sim (10^7 - 2.5\times10^{9}) \;{\rm GeV } \quad {\rm for} \quad m_{3/2}\geq 5000\; {\rm GeV. } \end{split} \end{equation} The shaded region in Fig. \ref{grav} describes gravitino dark matter: the gravitino mass varies in the range $1.5 \; \text{GeV} \lesssim m_{3/2} \lesssim 4.2 \times 10^5 \; \text{GeV}$ with reheat temperature $T_r \gtrsim 10^7 \; \text{GeV}$. However, a gravitino mass in the range $100 \; \text{GeV} \lesssim m_{3/2} \lesssim 5000 \; \text{GeV}$ requires a reheat temperature $T_r < 10^7 \; \text{GeV}$, which is not attained in our model; therefore, this small range of $m_{3/2}$ is ruled out by BBN.
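The relic density formula \eqref{omega} is equally easy to explore numerically; a minimal sketch (the benchmark inputs are assumptions):
\begin{verbatim}
# Thermal gravitino relic density, Eq. (omega); benchmark inputs assumed.
def omega_32_h2(T_r, m32, m_gluino):
    """Omega_{3/2} h^2 with T_r, m32 and m_gluino in GeV."""
    return 0.08 * (T_r / 1e10) * (m32 / 1e3) * (1.0 + m_gluino**2 / (3.0 * m32**2))

# A 2.2 TeV gluino (current LHC bound) with a 1 TeV gravitino and T_r = 10^8 GeV:
print(omega_32_h2(T_r=1e8, m32=1e3, m_gluino=2.2e3))   # ~2e-3, far from overclosure
\end{verbatim}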
An unstable gravitino could be either long-lived or short-lived. The lifetime of a long-lived gravitino with mass $m_{3/2}<25$ TeV is $\tilde{\tau}\gtrsim 1 \; \text{sec}$. A long-lived gravitino leads to the cosmological gravitino problem \cite{Khlopov:1984pf}, which originates from the late decay of the gravitino: the decay products may affect the light-nuclei abundances and thereby ruin the success of BBN. To avoid this problem, one has to take into account the BBN bounds (Eq. \eqref{eqlong}) on the reheating temperature. Nevertheless, for the whole range $5000\leq m_{3/2}\leq 25000$ GeV, the long-lived gravitino scenario is viable and consistent with the BBN bounds \eqref{eqlong}. For a short-lived gravitino, the BBN bounds on the reheating temperature are not effective, and the gravitino decays into the LSP neutralino $\tilde{\chi}_{1}^{0}$, whose abundance is given by \begin{equation}\label{eqa} \Omega_{\tilde{\chi}_{1}^{0}}h^{2}\simeq 2.8\times10^{11}\times Y_{3/2}\left(\frac{m_{\tilde{\chi}_{1}^{0}}}{1 \; \text{TeV}}\right), \end{equation} where $Y_{3/2}$ is the gravitino yield, defined as \begin{equation}\label{eqb} Y_{3/2}\simeq2.3\times10^{-12}\left(\frac{T_{r}}{10^{10} \; \text{GeV}}\right). \end{equation} Since the LSP neutralino density produced by gravitino decay should not exceed the observed DM relic density, choosing the upper bound of the relic abundance $\Omega_{\tilde{\chi}_{1}^{0}}h^{2} = 0.126$ and using Eqs. (\ref{eqb}) and (\ref{eqa}), we find a relation between the reheating temperature $T_{r}$ and $m_{\tilde{\chi}_{1}^{0}}$, given by \begin{eqnarray}\label{eqc} m_{\tilde{\chi}_{1}^{0}}\simeq19.6\left(\frac{10^{11} \; \text{GeV}}{T_{r}}\right) \text{GeV}~. \end{eqnarray} For the gravity mediation scenario $ m_{\tilde{\chi}_{1}^{0}}\geq18$ GeV \cite{Hooper:2002nq}, which is easily satisfied in the current model. Therefore, the short-lived gravitino scenario is also a viable possibility in this model. The region above the shaded area in Fig. \ref{grav} corresponds to a short-lived gravitino. Finally, we obtain the following ranges of the string tension $G\mu_{CS}$ and of the gravitino mass for stable and unstable gravitinos consistent with the BBN bounds, \begin{gather} \nonumber 10^{-9} \lesssim G\mu_{CS} \lesssim 8 \times 10^{-6}, \\ (3.2 \lesssim m_{3/2} \lesssim 5 \times 10^{8}) ~ \text{GeV}. \end{gather} In the next subsection, we analyze the stochastic gravitational wave (GW) spectrum, consistent with leptogenesis and gravitino cosmology. \subsection{Gravitational Waves From Cosmic Strings} The superposition of GW sources, such as inflation, cosmic strings and phase transitions, would generate a stochastic GW background (SGWB). The tensor perturbations upon horizon re-entry give rise to the inflationary SGWB \cite{ Vagnozzi:2020gtf, Benetti:2021uea,Caprini:2015tfa,Kuroyanagi:2020sfw}, which imprints a distinctive signature in the CMB $B$-mode polarization. The amplitude and scale dependence of the inflationary SGWB is parameterized via the tensor-to-scalar ratio $r$ and the tensor spectral index $n_T$, which satisfy the inflationary consistency relation $r = -8 n_T$ \cite{Liddle:1993fq} within single-field and hybrid slow-roll models. Since $r \geq 0$, this requires $n_T \leq 0$ (red spectrum) \cite{BICEP2:2018kqh}. With current constraints on the tensor-to-scalar ratio $r$, the amplitude of the inflationary SGWB on PTA and interferometer scales is far too small to be detectable by these probes; detection there would instead require a strongly blue-tilted ($n_T > 0$) primordial tensor power spectrum \cite{ Vagnozzi:2020gtf}.
For a detailed study of SGWBs from a first-order phase transition associated with the spontaneous $U(1)_{B-L}$ gauge symmetry breaking, see Refs.~\cite{Jinno:2016knw,Hasegawa:2019amx,Haba:2019qol,Dong:2021cxn}. In this section, we study the SGWB spectrum produced by the decay of a cosmic string network \cite{King:2020hyd,JohnEllis,Buchmuller:2020lbh,King:2021gmj}. The breaking of the $U(1)_{B-L}$ gauge symmetry generates a stable cosmic string network that can put severe bounds on the model parameters. These bounds can be relaxed if the cosmic strings are metastable. The embedding of the $U(1)_{B-L}$ group in the $SO(10)$ GUT gauge group leads to the production of a metastable cosmic string network, which can decay via the Schwinger production of monopole-antimonopole pairs, generating a stochastic gravitational wave background (SGWB) in the range of ongoing and future gravitational wave (GW) experiments. The MSSM matter superfields reside in the $\bm{16}$ (spinorial) representation, whereas the MSSM Higgs doublets reside in the $\bm{10}$ representation of $SO(10)$. The $SO(10)$ symmetry breaking to the MSSM gauge group is achieved by a non-zero VEV of the $\bm{45}$ multiplet, \begin{equation} SO(10) \xrightarrow{\langle \bm{45} \rangle} G_{\text{MSSM}} \times U(1)_{\chi} \xrightarrow{\langle \bm{16} \rangle, \langle \bm{\bar{16}} \rangle} G_{\text{MSSM}}, \end{equation} where $G_{\text{MSSM}} \equiv SU(3)_C \times SU(2)_L \times U(1)_Y$ is the MSSM gauge group. The $U(1)_{\chi}$ charge is defined as a linear combination of the hypercharge $Y$ and the $B-L$ charge, \begin{equation} Q_{\chi} = Y x + Q_{B-L}, \end{equation} with $x$ a real constant. In the special case $x = 0$, the model, after the spontaneous breaking of $SO(10)$, is effectively realized as the $B-L$ extended MSSM, $U(1)_{\chi=B-L}$. The Higgs superfield pair ($H, \bar{H}$) belongs to the $(\bm{16} + \bm{\bar{16}})$ representation of $SO(10)$ and is responsible for breaking $G_{B-L}$ to the MSSM. The first breaking step produces magnetic monopoles, which are inflated away during inflation, whereas the second step produces the metastable cosmic string network. If cosmic strings form after inflation, they exhibit a scaling behavior where the stochastic GW spectrum is relatively flat as a function of the frequency and the amplitude is proportional to the string tension $\mu_{CS}$. For our case, $\mu_{CS}$ can be written in terms of $M$ as \cite{Hill:1987ye} \begin{equation}\label{cosmicmu} \mu_{CS}= 2\pi M^2 y(\Upsilon), \quad y(\Upsilon) \approx \left\{ \begin{array}{ll} 1.04 \,\Upsilon^{0.195}, & \mbox{ $ \Upsilon > 10^{-2},$} \\ \frac{2.4}{\log[2/\Upsilon]}, & \mbox{ $ \Upsilon < 10^{-2}$},\end{array} \right. \end{equation} where $\Upsilon = \frac{\kappa^2}{2g^2}$, with $g=0.7$ for the MSSM. The CMB bound on the cosmic string tension reported by Planck \cite{Ade:2013xla,Ade:2015xua} is \begin{eqnarray} G \mu_{CS} \lesssim 2.4 \times 10^{-7}, \end{eqnarray} where $G \mu_{CS}$ denotes the dimensionless string tension, with the gravitational constant $G= 6.7 \times 10^{-39}~\text{GeV}^{-2}$. The observation of GWs from cosmic strings crucially depends on two scales: the energy scale of inflation $\Lambda_{\text{inf}}$, and the scale at which cosmic strings generate the GW spectrum, $\Lambda_{CS}\equiv \sqrt{\mu_{CS}}$. The amplitude of the tensor mode cosmic microwave background (CMB) anisotropy fixes the energy scale of inflation as $\Lambda_{\text{inf}}\sim V^{1/4}\sim 3.3\times 10^{16} \, r^{1/4}~\text{GeV}$ \cite{Easther:2006qu}.
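For orientation, Eq. \eqref{cosmicmu} can be transcribed directly into code; the sample point below is an assumption chosen for illustration:
\begin{verbatim}
# Dimensionless string tension G*mu_CS from Eq. (cosmicmu), with g = 0.7.
import math

def G_mu_CS(kappa, M, g=0.7, G=6.7e-39):
    """G*mu_CS for the B-L breaking scale M given in GeV."""
    y = kappa**2 / (2.0 * g**2)
    factor = 1.04 * y**0.195 if y > 1e-2 else 2.4 / math.log(2.0 / y)
    return G * 2.0 * math.pi * M**2 * factor

print(G_mu_CS(kappa=1e-2, M=2e16))   # ~4e-6, inside the range quoted above
\end{verbatim}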
Using the Planck 2-$\sigma$ bound on the tensor-to-scalar ratio $r$, we obtain the upper limit on the scale of inflation, $\Lambda_{\text{inf}} < 1.6 \times 10^{16} $ GeV \cite{Planck:2018jri}. In our model, strings form after inflation, namely $\Lambda_{\text{inf}}>\Lambda_{CS}$, so that a stochastic gravitational wave background (SGWB) is generated from undiluted strings. The SGWB arising from a metastable cosmic string network is expressed relative to the critical density as \cite{Blanco-Pillado:2017oxo} \begin{align} \Omega_\text{GW}(f) = \frac{\partial \rho_\text{gw}(f)}{\rho_c \partial \ln f}= \frac{8 \pi f (G \mu_{CS})^2}{3 H_0^2} \sum_{n = 1}^\infty C_n(f) \, P_n \,, \label{eq:Omega} \end{align} where $\rho_\text{gw}$ denotes the GW energy density, $\rho_c$ is the critical energy density of the universe, and $H_0 = 100 \,h\,\textrm{km}\,\textrm{s}^{-1}\, \textrm{Mpc}^{-1}$ is the Hubble parameter. The parameter $P_n \simeq\frac{50}{\zeta(4/3)n^{4/3}}$ is the power spectrum of GWs emitted by the $n^{\rm th}$ harmonic of a cosmic string loop, and $C_n(f)$ indicates the number of loops emitting GWs that are observed at a given frequency $f$, \begin{figure}[tp] \centering \includegraphics[width=7.90cm]{plots/OmegaGW_a.pdf} \centering \includegraphics[width=7.90cm]{plots/OmegaGW_b.pdf} \caption{Gravitational wave spectra from metastable cosmic strings explaining the NANOGrav excess at the 2-$\sigma$ confidence level. The curves in the left panel are drawn for the full range of string tension obtained in the model, with $\sqrt{\lambda}=8$. The curves in the right panel are drawn for the range of string tension consistent with non-thermal leptogenesis and gravitino dark matter, with $\sqrt{\lambda}=7,8,9$. The shaded areas in the background indicate the sensitivities of current and future experiments.} \label{omegagw} \end{figure} \begin{align} \label{eq:Cn} C_n(f) = \frac{2 n}{f^2} \int_{z_\text{min}}^{z_\text{max}}dz\:\frac{\mathcal{N}\left(\ell\left(z\right),\,t\left(z\right)\right)}{H\left(z\right)(1 + z)^6} \,, \end{align} which is a function of the number density of cosmic string loops $\mathcal{N}(\ell,t)$, with $\ell = 2n/((1 + z) f)$. For the number density of cosmic string loops, $\mathcal{N}(\ell,t)$, we use the approximate expressions of the Blanco-Pillado--Olum--Shlaer (BOS) model given in \cite{Blanco-Pillado:2017oxo,Auclair:2019wcv} \begin{align} \mathcal{N}_r(\ell,t) &= \frac{0.18}{t^{3/2}(\ell+\Gamma G\mu_{CS} t)^{5/2}},\label{eq:nr}\\ \mathcal{N}_{m,r}(\ell,t) &= \frac{0.18\sqrt{t_{eq}}}{t^2(\ell+\Gamma G\mu_{CS} t)^{5/2}} =\frac{0.18(2H_0\sqrt{\Omega_r})^{3/2}}{(\ell+\Gamma G\mu_{CS} t)^{5/2}}(1+z)^3~. \end{align} For our region of interest, the dominant contribution comes from the loops generated during the radiation-dominated era. For $t(z)$ and $H(z)$, we use the expressions for $\Lambda$CDM cosmology, assuming a standard thermal history of the universe and ignoring the changes in the number of effective degrees of freedom with $z$, \begin{eqnarray}\label{OmegaGW} H(z)&=&H_0\sqrt{\Omega_\Lambda + \Omega_m(1+z)^3+\Omega_r(1+z)^4},\\ t(z) &=& \int_{z}^{\infty} \frac{dz' }{H(z')(1+z')},\quad \ell(z)=\frac{2n}{(1+z)f}.
\end{eqnarray} The integration range in Eq. \eqref{eq:Cn} corresponds to the lifetime of the cosmic string network, from its formation at $z_\text{max} \simeq \frac{T_r}{2.7\,\text{K}}$ until its decay at $z_\text{min}$, given by \cite{Leblond:2009fq,Monin:2008mp,KaiSchmitz} \begin{equation} z_\text{min} = \left( \frac{70}{H_0}\right)^{1/2} \left( \Gamma \; \Gamma_d \; G \mu_{CS} \right)^{1/4},\quad \Gamma_d =\frac{\mu}{2\pi}e^{-\pi\lambda}, \quad \lambda = \frac{m_M^2}{\mu} \label{zmin} \end{equation} where $\Gamma \simeq 50$, $m_M$ is the monopole mass, $\mu$ is the string tension, and we fix the reheat temperature at $T_r=10^8$~GeV. The dimensionless parameter $\lambda$ parameterizes the hierarchy between the GUT and $U(1)_{B-L}$ breaking scales. Fig. \ref{omegagw} shows gravitational wave spectra from metastable cosmic strings for the predicted range of the cosmic string tension, $10^{-12} \lesssim G\mu_{CS} \lesssim 10^{-6}$. The curves in the left panel are drawn for the ratio of the GUT and $B-L$ breaking scales $\sqrt{\lambda}=8$. The parametric space consistent with successful reheating with non-thermal leptogenesis and gravitino dark matter restricts $G\mu_{CS}$ to the range $10^{-9} \lesssim G\mu_{CS} \lesssim 10^{-6}$. This is shown in the right panel of Fig. \ref{omegagw}, where the curves are drawn for $\sqrt{\lambda}=7,8,9$. It can be seen that the GW spectrum for the entire range of $G\mu_{CS}$ passes through the sensitivity bands of most GW detectors. LIGO O1 \cite{LIGOScientific:2019vic} has excluded cosmic string tensions $G\mu_{CS} \gtrsim 10^{-6}$ in the high-frequency regime $10$--$100$ Hz. The low-frequency band, $1$--$10$~nHz, can be probed by NANOGrav \cite{Arzoumanian:2020vkk}, EPTA \cite{Ferdman:2010xq} and other GW experiments at nanohertz frequencies. The planned pulsar timing array SKA \cite{Smits:2008cf}, the space-based laser interferometers LISA \cite{LISA:2017pwj}, Taiji \cite{Hu:2017mde}, TianQin \cite{TianQin:2015yph}, BBO \cite{Corbin:2005ny} and DECIGO \cite{Seto:2001qf}, the ground-based interferometers Einstein Telescope \cite{Punturo:2010zz} (ET) and Cosmic Explorer \cite{LIGOScientific:2016wof} (CE), and the atomic interferometer AEDGE \cite{AEDGE:2019nxb} will probe GWs generated by metastable cosmic strings over a wide range of frequencies. \subsection{Explaining the NANOGrav results} We now discuss the SGWB signal predicted by metastable cosmic strings in light of the recent NANOGrav 12.5-yr results \cite{Arzoumanian:2020vkk}, which constrain the amplitude and the slope of a stochastic process. The amplitude of the SGWB is obtained in terms of the dimensionless characteristic strain $h_c = A (f/f_\text{yr})^\alpha$ at the reference frequency $f_\text{yr}=32$~nHz as \cite{Buchmuller:2020lbh} \begin{align} \Omega_\text{GW}(f) = \frac{2 \pi^2 f_\text{yr}^2 A^2 }{3 H_0^2} \left( \frac{f}{f_\text{yr}} \right)^{2 \alpha + 2} \equiv \Omega_\text{gw}^\text{yr}\left( \frac{f}{f_{yr}} \right)^{n_{gw}}~\label{yrgw}, \end{align} where $A$ is the strain amplitude. At low GW frequencies, $\Omega_\text{GW}$ behaves as $\sim f^{3/2}$, whereas at high GW frequencies it approaches a flat plateau. NANOGrav uses a power-law fit with $n_{gw}=2+2\alpha=5-\gamma$ and constrains the parameters $A$ and $\gamma$. This allows us to directly translate the 1- and 2-$\sigma$ NANOGrav bounds given in \cite{Arzoumanian:2020vkk} into the $\Omega_\text{gw}^\text{yr}$--$n_\text{gw}$ plane, as displayed by the yellow shaded regions in Fig. \ref{nano}.
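Before extracting the NANOGrav observables, it may help to see the pipeline of Eqs. \eqref{eq:Omega}--\eqref{zmin} and \eqref{yrgw} in code. The following is a deliberately simplified sketch: it keeps only radiation-era loops, a handful of harmonics and assumed $\Lambda$CDM parameters and unit conversions, so it is an order-of-magnitude illustration rather than the computation behind Figs. \ref{omegagw} and \ref{nano}:
\begin{verbatim}
# Simplified SGWB from metastable strings: radiation-era loops (Eq. (eq:nr)),
# first harmonics of Eq. (eq:Omega), decay redshift from Eq. (zmin), and the
# (Omega_gw^yr, n_gw) extraction of Eq. (yrgw). All inputs are assumptions.
import numpy as np

H0 = 2.2e-18                  # Hubble rate in s^-1 (h ~ 0.68, assumed)
Om_L, Om_m, Om_r = 0.69, 0.31, 9.1e-5
Gmu, Gamma = 1e-7, 50.0       # string tension and loop emission constant
GeV_in_inv_s = 1.52e24        # 1 GeV in s^-1 (hbar = 1, assumed conversion)
G_N = 6.7e-39                 # gravitational constant in GeV^-2

def H(z):
    return H0 * np.sqrt(Om_L + Om_m*(1+z)**3 + Om_r*(1+z)**4)

def t_of_z(z):                # radiation-era approximation of cosmic time
    return 1.0 / (2.0 * H(z))

def z_min(sqrt_lam):          # Eq. (zmin), with Gamma_d converted to s^-2
    mu = Gmu / G_N                                       # GeV^2
    Gd = mu/(2*np.pi) * np.exp(-np.pi*sqrt_lam**2)       # GeV^2
    return np.sqrt(70.0/H0) * (Gamma * Gd * GeV_in_inv_s**2 * Gmu)**0.25

def Omega_gw(f, sqrt_lam=8.0, nmax=20, zmax=1e12):
    zeta43 = 3.6009            # zeta(4/3)
    z = np.logspace(np.log10(max(z_min(sqrt_lam), 1e-3)), np.log10(zmax), 3000)
    total = 0.0
    for n in range(1, nmax + 1):
        ell = 2*n / ((1+z)*f)
        N_r = 0.18 / (t_of_z(z)**1.5 * (ell + Gamma*Gmu*t_of_z(z))**2.5)
        Cn = 2*n/f**2 * np.trapz(N_r / (H(z)*(1+z)**6), z)
        total += Cn * 50.0 / (zeta43 * n**(4.0/3.0))
    return 8*np.pi*f*Gmu**2/(3*H0**2) * total

f_star, f_yr = 1e-8, 1.0/3.15e7     # pivot frequency and 1/yr, in Hz
ngw = (np.log(Omega_gw(2*f_star)) - np.log(Omega_gw(f_star))) / np.log(2.0)
omega_yr = Omega_gw(f_star) * (f_yr/f_star)**ngw
print(ngw, omega_yr)          # slope and amplitude at f_yr, as in Eq. (yrgw)
\end{verbatim}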
Following \cite{JohnEllis}, we extract the amplitude $\Omega_\text{gw}^\text{yr}$ and the slope $n_\text{gw}$ using Eq. \eqref{yrgw}, by comparing the amplitude at the pivot scale $f_*$ and taking the logarithmic derivative of $\Omega_\text{gw}(f)$ at the desired frequency scale $f_*$\footnote{Here we have employed a numerical differentiation method. For the least-squares power-law fit method, see \cite{Buchmuller:2020lbh}.}, \begin{eqnarray} n_\text{gw}&=& \left.\frac{d\log{\Omega_\text{GW}(f)}}{d\log{f}}\right|_{f=f_*},\\ \Omega_\text{gw}^\text{yr}&=&\Omega_\text{GW}(f_*)\left( \frac{f_\text{yr}}{f_*}\right)^{n_\text{gw}}. \end{eqnarray} \begin{figure}[tp] \centering \includegraphics[width=12.0cm]{plots/gmu_ngw.pdf} \caption{Gravitational wave signals from metastable cosmic strings compared to the NANOGrav observations, for different values of the string tension $G \mu_{CS}$ and of the hierarchy between the GUT and $U(1)$ breaking scales, $\lambda$. The solid colored lines represent fixed values of $G \mu_{CS}$, whereas the dotted lines represent contours of $\sqrt{\lambda}$. The dark (light) yellow region represents the 1-$\sigma$ (2-$\sigma$) bounds reported by the NANOGrav 12.5-yr data \cite{Arzoumanian:2020vkk}.} \label{nano} \end{figure} Fig. \ref{nano} shows the comparison of the predictions from metastable cosmic strings (mesh of solid and dotted curves) with the constraints on the amplitude and tilt from \cite{Arzoumanian:2020vkk} (yellow shaded regions). We vary $G \mu_{CS}$ from $10^{-11}$ to $10^{-6}$; the CMB constraint $G\mu_{CS}\leq 1.3\times 10^{-7}$, however, only applies to cosmic strings with a lifetime exceeding CMB decoupling, corresponding to $\sqrt{\lambda} \gtrsim 8.6$. For each value of $G\mu_{CS}$, we consider the ratio of the GUT and $B-L$ breaking scales in the range $\sqrt{\lambda} = 7.4$--$9.0$, where smaller values lead to a smaller spectrum at nHz frequencies that can be probed by future experiments, while values $\sqrt{\lambda} \gtrsim 8.8$ quickly converge towards the result for stable cosmic strings and can be observed by the NANOGrav and PPTA experiments \cite{Arzoumanian:2020vkk}. The parametric space of the above model consistent with successful reheating, non-thermal leptogenesis and gravitino dark matter restricts the allowed values of the string tension to the range $10^{-9} \lesssim G\mu_{CS} \lesssim 8 \times 10^{-6}$, which lies within the 1- and 2-$\sigma$ bounds of NANOGrav, as well as within the sensitivity bounds of future gravitational wave (GW) experiments. \section{Summary} To summarize, we have investigated various cosmological implications of a generic model based on the $U(1)_{B-L}$ extension of the MSSM gauge symmetry in a no-scale K\"ahler potential setup, highlighting inflation, leptogenesis and baryogenesis, gravitino cosmology, as well as the stochastic gravitational wave background (SGWB) from a metastable cosmic string network. The embedding of $U(1)_{B-L}$ into the simply-connected group $SO(10)$ produces metastable cosmic strings, which decay through the spontaneous pair creation of a monopole and an antimonopole and can generate an SGWB in the range of ongoing and future gravitational wave (GW) experiments. The interaction between the $U(1)_{B-L}$ Higgs and the neutrino superfields generates heavy Majorana masses for the right-handed neutrinos. These heavy Majorana masses explain the tiny neutrino masses via the seesaw mechanism and provide a realistic scenario for reheating and non-thermal leptogenesis.
A wide range of reheat temperatures, $(10^{7} \lesssim T_r \lesssim 10^{10}) ~ \text{GeV}$, and $U(1)_{B-L}$ symmetry breaking scales, $(1.3 \times 10^{13} \lesssim M \lesssim 2.0 \times 10^{16}) ~ \text{GeV}$, is achieved here with successful non-thermal leptogenesis and a stable gravitino as a possible dark matter candidate. The metastable cosmic string network admits string tension values in the range $10^{-12} \lesssim G\mu_{CS}\lesssim 8 \times 10^{-6}$. Successful reheating with non-thermal leptogenesis and gravitino dark matter restricts the allowed values of the string tension to the range $10^{-9} \lesssim G\mu_{CS} \lesssim 8 \times 10^{-6}$, predicting a stochastic gravitational-wave background that lies within the 1- and 2-$\sigma$ bounds of the recent NANOGrav 12.5-yr data, as well as within the sensitivity bounds of future GW experiments. \section*{Acknowledgments} We thank Valerie Domcke, George K Lenotaris and Kazunori Kohri for valuable discussions. The work of S.N. is supported by the United Arab Emirates University under UPAR Grant No. 12S004.
\section{From regions to orders} \label{sec:regions2orders} In this section we show that each region $R$ of the type $C$ Catalan arrangement corresponds bijectively to a specific order between the variables $x_i$ and $1+x_i$ for any $i$ in $\llbracket -n,n\rrbracket \setminus \left\{0 \right\}$, where $(x_1,\ldots,x_n)$ denotes the coordinates of any point of $R$ and $x_{-i} = -x_i$ for all $i$ in $\llbracket 1,n\rrbracket$. In the sequel, for any $i$ in $\llbracket -n,n\rrbracket \setminus \left\{0 \right\}$, we denote by:\\ $\bullet\ \alpha_i^{(0)}$ the variable $x_i$, \qquad $\bullet\ \alpha_i^{(1)}$ the variable $1+x_i$.\\ These notations are derived from the paper of O. Bernardi \cite{B18}. We also denote by ${\mathcal A}_{2n}$ the alphabet $\{\alpha_i^{(0)}, \alpha_i^{(1)}, \forall i\in \llbracket -n,n\rrbracket \setminus \left\{0 \right\} \}$. We first define a symmetric annotated 1-sketch and explain its symmetries. We then show that the regions of the type $C$ Catalan arrangement are in one-to-one correspondence with symmetric annotated 1-sketches. \subsection{Symmetric annotated 1-sketch} \begin{definition} \label{def:symAnnotSk} A {\it{symmetric annotated $1$-sketch of size $2n$}} is a word $\omega = w_1 ...w_{4n}$ that satisfies, for all $ i,j \in \llbracket -n,n\rrbracket \setminus \left\{0 \right\}$: \begin{itemize} \item[(i)] $ \left\{w_1,...,w_{2n},...,w_{4n} \right\} = {\mathcal A}_{2n},$ \item[(ii)] $ \alpha_{i}^{(0)}$ appears before $\alpha_{i}^{(1)}$, \item[(iii)] if $ \alpha_{i}^{(0)}$ appears before $\alpha_{j}^{(0)}$, then $ \alpha_{i}^{(1)}$ appears before $\alpha_{j}^{(1)}$, \item[(iv)] if $ \alpha_{i}^{(0)}$ appears before $\alpha_{j}^{(s)}$, then $ \alpha_{-j}^{(0)}$ appears before $\alpha_{-i}^{(s)}$, $\forall s \in \{0,1\}$. \end{itemize} Let $D^{(1)} (2n)$ be the set of {\it{symmetric annotated $1$-sketches of size $2n$}}. \end{definition} \begin{example} \label{omega} $\omega = \alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{-3}^{(0)} \alpha_{1}^{(1)} \alpha_{-1}^{(0)} \alpha_{3}^{(1)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)} \in D^{(1)} (6)$. \end{example} \begin{remark} \label{rk:symAnnotSk} \begin{enumerate} \item Condition $(ii)$ of Definition \ref{def:symAnnotSk} implies that a symmetric annotated 1-sketch starts with a sequence of $\alpha_i^{(0)}$ letters and ends with a sequence of $\alpha_i^{(1)}$ letters. \item \label{rk:symAnnotSk2} Condition $(iv)$ of Definition \ref{def:symAnnotSk} implies that the subword of $\omega$ composed of the $\alpha_.^{(0)}$ letters has the form $\alpha_{i_1}^{(0)}\ldots \alpha_{i_n}^{(0)} \alpha_{-i_n}^{(0)} \ldots \alpha_{-i_1}^{(0)}$ with $\{|i_1|,\ldots, |i_n|\}=\llbracket 1,n \rrbracket$. Moreover, the subword of $\omega$ composed of the $\alpha_.^{(1)}$ letters is exactly $\alpha_{i_1}^{(1)}\ldots \alpha_{i_n}^{(1)} \alpha_{-i_n}^{(1)} \ldots \alpha_{-i_1}^{(1)}$. \end{enumerate} \end{remark}
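The conditions of Definition \ref{def:symAnnotSk} are easy to check mechanically. The following Python sketch (a letter $\alpha_i^{(s)}$ is encoded as the pair $(i,s)$; this encoding is introduced here only for illustration) validates the word of Example \ref{omega}:
\begin{verbatim}
# Executable rephrasing of Definition (def:symAnnotSk).
def is_symmetric_annotated_sketch(w, n):
    alphabet = {(i, s) for i in range(-n, n + 1) if i != 0 for s in (0, 1)}
    if len(w) != 4 * n or set(w) != alphabet:          # condition (i)
        return False
    pos = {letter: p for p, letter in enumerate(w)}
    indices = [i for i in range(-n, n + 1) if i != 0]
    for i in indices:
        if pos[(i, 0)] > pos[(i, 1)]:                  # condition (ii)
            return False
        for j in indices:
            if pos[(i, 0)] < pos[(j, 0)] and pos[(i, 1)] > pos[(j, 1)]:
                return False                           # condition (iii)
            for s in (0, 1):                           # condition (iv)
                if pos[(i, 0)] < pos[(j, s)] and pos[(-j, 0)] > pos[(-i, s)]:
                    return False
    return True

# The word of Example (omega):
omega = [(-2, 0), (1, 0), (-2, 1), (3, 0), (-3, 0), (1, 1),
         (-1, 0), (3, 1), (-3, 1), (2, 0), (-1, 1), (2, 1)]
print(is_symmetric_annotated_sketch(omega, 3))   # True
\end{verbatim}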
Furthermore, a symmetric annotated 1-sketch is the result of a specific shuffle between two words on the alphabet ${\mathcal A}_{2n}$, where one is the symmetric of the other in the following sense: \begin{definition} Let $\omega_1$ be a word on ${\mathcal A}_{2n}$ that ends with the letter $u$, i.e., $\omega_1 = \omega_0 u$. We define the symmetric of $\omega_1$ as the word $\overline{\omega}_1 = \overline{u} \ \overline{\omega}_0$, where $\overline{u} = \alpha_{-k}^{(1-s)}$ if $u =\alpha_{k}^{(s)}$, $s\in \{0,1\}$, and $\overline{\omega}_0$ is recursively defined in the same way. \end{definition} \begin{example} \label{ex:symsk} The symmetric of $\omega_1 =\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{1}^{(1)} \alpha_{3}^{(1)} $ is $\overline{\omega}_1 = \alpha_{-3}^{(0)} \alpha_{-1}^{(0)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}$. \end{example} Now, a symmetric annotated 1-sketch $\omega$ is the combination of two symmetric words $\omega_1$ and $\omega_2 = \overline{\omega}_1$. We will now explain how to obtain $\omega_1$ and $\omega_2$ from $\omega$. We call words of the form $\omega_1$ annotated 1-sketches, whose formal definition is the following. \begin{definition} \label{definition: annosketch} An annotated 1-sketch of size $n$ is defined by $2n$ letters $\alpha_{j_k}^{(0)}$ and $\alpha_{j_k}^{(1)}$, $k$ in $\llbracket 1,n\rrbracket$, such that $\{| j_1|,\ldots ,| j_n|\}= \llbracket 1,n\rrbracket$, and which satisfies conditions $(ii)$ and $(iii)$ of Definition \ref{def:symAnnotSk}. We denote by $A_{n,s}$, $n \leq s \leq 2n-1$, the set of annotated 1-sketches whose rightmost letter $\alpha_{.}^{(0)}$ is at position $s$. \end{definition} Thus we get: \begin{proposition} \label{prop:sketchdecomp} Any symmetric annotated 1-sketch $\omega$ is the composition of an annotated 1-sketch $\omega_1$ and its symmetric $\overline{\omega}_1$. \end{proposition} \begin{proof} We define $\omega_1$ as the subword of $\omega$ composed of the $n$ leftmost $ \alpha_{.}^{(0)}$ letters and the corresponding $\alpha_{.}^{(1)}$ letters (if $ \alpha_{i}^{(0)}$ appears in $\omega_1$ then $ \alpha_{i}^{(1)}$ appears in $\omega_1$). Remark \ref{rk:symAnnotSk}(\ref{rk:symAnnotSk2}) implies that $\alpha_{i}^{(0)}$ and $\alpha_{-i}^{(0)}$ cannot both belong to the set of the $n$ leftmost $ \alpha_{.}^{(0)}$ letters of $\omega$. Thus, it is easy to see that $\omega_1$ is an annotated 1-sketch. This remark and condition $(iv)$ of Definition \ref{def:symAnnotSk} also imply that $\omega_2$, the subword of $\omega$ composed of the letters not in $\omega_1$, is the symmetric of $\omega_1$. \end{proof} \begin{example} The word $\omega$ of Example \ref{omega} is composed of $\omega_1 =\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{1}^{(1)} \alpha_{3}^{(1)}\in A_{3,4}$ and $\overline{\omega}_1$. \end{example} Conversely, for any annotated 1-sketch $\omega_1$, we can construct a set of symmetric annotated 1-sketches, the results of shuffles between $\omega_1$ and $\overline{\omega}_1$. We first give the definition of these shuffles and then prove the assertion. \begin{definition} \label{def:shuffles} Let $\psi = \alpha_{j_1}^{(1)}\ldots \alpha_{j_k}^{(1)}$.
We define the set of shuffles $\psi\bowtie \overline{\psi}$ recursively, with $\psi\bowtie \overline{\psi} = \{\epsilon\}$ if $\psi$ is the empty word $\epsilon$, as the set of the following words: \begin{itemize} \item $\alpha_{-j_k}^{(0)} \psi'\bowtie \overline{\psi'} \alpha_{j_k}^{(1)}$ with $\psi' = \alpha_{j_1}^{(1)}\ldots \alpha_{j_{k-1}}^{(1)}$ ($\psi'=\epsilon$ if $k=1$), \item $\alpha_{j_1}^{(1)}\ldots \alpha_{j_i}^{(1)}\alpha_{-j_k}^{(0)} \psi'\bowtie \overline{\psi'} \alpha_{j_k}^{(1)}\alpha_{-j_i}^{(0)}\ldots \alpha_{-j_1}^{(0)}$ with $\psi' = \alpha_{j_{i+1}}^{(1)}\ldots \alpha_{j_{k-1}}^{(1)}$ ($\psi'=\epsilon$ if $i=k-1$), $\forall 1\leq i \leq k-1$, \item $\alpha_{j_1}^{(1)}\ldots \alpha_{j_k}^{(1)}\alpha_{-j_k}^{(0)} \ldots \alpha_{-j_1}^{(0)}$. \end{itemize} \end{definition} \begin{example} The set of shuffles $\psi\bowtie \overline{\psi}$ with $\psi = \alpha_{j_1}^{(1)} \alpha_{j_2}^{(1)}$ is composed of the four words $\alpha_{-j_2}^{(0)} \alpha_{-j_1}^{(0)}\alpha_{j_1}^{(1)} \alpha_{j_2}^{(1)}$, $\alpha_{-j_2}^{(0)} \alpha_{j_1}^{(1)}\alpha_{-j_1}^{(0)} \alpha_{j_2}^{(1)}$, $\alpha_{j_1}^{(1)} \alpha_{-j_2}^{(0)}\alpha_{j_2}^{(1)} \alpha_{-j_1}^{(0)}$ and $\alpha_{j_1}^{(1)} \alpha_{j_2}^{(1)}\alpha_{-j_2}^{(0)} \alpha_{-j_1}^{(0)}$. \end{example} \begin{definition} \label{def:sketchesshuffle} Let $\omega_1= \omega_0 \alpha_{j_n}^{(0)} \psi$, with $\psi=\alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)} \alpha_{j_n}^{(1)}$, be an annotated 1-sketch. Then $\omega_1\bowtie \overline{\omega}_1 = \omega_0 \alpha_{j_n}^{(0)} \left[\psi \bowtie \overline{\psi} \right ] \alpha_{-j_n}^{(1)} \overline{\omega}_0 = \{\omega_0 \alpha_{j_n}^{(0)} u \alpha_{-j_n}^{(1)} \overline{\omega}_0, u \in \psi \bowtie \overline{\psi}\}$. \end{definition} \begin{proposition} \label{proposition: shufflesym} For any annotated 1-sketch $\omega_1$ of size $n$, $\omega_1\bowtie \overline{\omega}_1 \subset D^{(1)}(2n)$. \end{proposition} \begin{example} Let $\omega_1 =\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{1}^{(1)} \alpha_{3}^{(1)} \in A_{3,4}$. Then $\omega_1\bowtie \overline{\omega}_1$ is the set of the $4$ elements:\\ $\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{-3}^{(0)} \alpha_{1}^{(1)} \alpha_{-1}^{(0)} \alpha_{3}^{(1)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}$, $\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{-3}^{(0)} \alpha_{-1}^{(0)} \alpha_{1}^{(1)} \alpha_{3}^{(1)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}$, $\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{1}^{(1)} \alpha_{-3}^{(0)} \alpha_{3}^{(1)} \alpha_{-1}^{(0)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}$, $\alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{1}^{(1)} \alpha_{3}^{(1)} \alpha_{-3}^{(0)} \alpha_{-1}^{(0)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}.$ \end{example}
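These shuffles can be generated mechanically. The following sketch (same pair encoding as above; $\psi$ is given by the list of indices of its $(1)$-letters) reproduces the four words of the example and the count $2^k$ for $k$ letters:
\begin{verbatim}
# Generating psi |><| psi-bar of Definition (def:shuffles).
def symmetric(word):
    # the bar operation: reverse and map alpha_k^{(s)} to alpha_{-k}^{(1-s)}
    return [(-k, 1 - s) for (k, s) in reversed(word)]

def shuffles(js):
    if not js:
        return [[]]
    k, out = len(js), []
    # first form: alpha_{-jk}^{(0)} [psi' |><| psi'-bar] alpha_{jk}^{(1)}
    for u in shuffles(js[:-1]):
        out.append([(-js[-1], 0)] + u + [(js[-1], 1)])
    # middle forms, for 1 <= i <= k-1
    for i in range(1, k):
        head = [(j, 1) for j in js[:i]]
        tail = [(-j, 0) for j in reversed(js[:i])]
        for u in shuffles(js[i:-1]):
            out.append(head + [(-js[-1], 0)] + u + [(js[-1], 1)] + tail)
    # last form: psi followed by its mirrored (0)-letters
    out.append([(j, 1) for j in js] + [(-j, 0) for j in reversed(js)])
    return out

print(len(shuffles([1, 2])))      # 4, the four words of the first example
# The set omega_1 |><| bar(omega_1) of the example just above:
w0, jn, psi = [(-2, 0), (1, 0), (-2, 1)], 3, [1, 3]
full = [w0 + [(jn, 0)] + u + [(-jn, 1)] + symmetric(w0) for u in shuffles(psi)]
print(len(full))                  # 4 elements, as listed
\end{verbatim}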
\begin{proof} We must prove that any word of $\omega_1\bowtie \overline{\omega}_1$ is a symmetric annotated 1-sketch, that is, that it verifies conditions $(i)$ to $(iv)$ of Definition \ref{def:symAnnotSk}. Conditions $(i)$, $(ii)$ and $(iii)$ are straightforward, since $\omega_1$ and $\overline{\omega}_1$ are annotated 1-sketches, each one the symmetric of the other, and their letters are not permuted. Let $\omega_1= \omega_0 \alpha_{j_n}^{(0)} \psi$ with $\psi=\alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)}\alpha_{j_n}^{(1)}$. A word of $\omega_1 \bowtie \overline{\omega}_1$ is either $\omega_1\overline{\omega}_1$, which obviously verifies condition $(iv)$, or has one of the following forms, for which we can check recursively that condition $(iv)$ holds: \begin{itemize} \item $\omega_0 \alpha_{j_n}^{(0)} \alpha_{-j_n}^{(0)} \left [\psi' \bowtie \overline{\psi'}\right ] \alpha_{j_n}^{(1)} \alpha_{-j_n}^{(1)} \overline{\omega}_0$, $\psi'=\alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)}$, and thus $\alpha_{-j_n}^{(0)}$ appears before $\alpha_{j_t}^{(1)}$ and $\alpha_{-j_t}^{(0)}$ appears before $\alpha_{j_n}^{(1)}$, for any $t$ in $\llbracket s-n+1, n-1\rrbracket$, \item $\omega_0 \alpha_{j_n}^{(0)}\alpha_{j_{s-n+1}}^{(1)}... \alpha_{j_{k}}^{(1)} \alpha_{-j_n}^{(0)} [ \psi' \bowtie \overline{\psi'} ] \alpha_{j_{n}}^{(1)} \alpha_{-j_{k}}^{(0)}...\alpha_{-j_{s-n+1}}^{(0)} \alpha_{-j_{n}}^{(1)} \overline{\omega}_0$, $\psi'=\alpha_{j_{k+1}}^{(1)}... \alpha_{j_{n-1}}^{(1)}$, and thus $\alpha_{-j_n}^{(0)}$ appears before $\alpha_{j_t}^{(1)}$ and $\alpha_{-j_t}^{(0)}$ appears before $\alpha_{j_n}^{(1)}$, for any $t$ in $\llbracket k+1, n-1\rrbracket$. \end{itemize} \end{proof} \subsection{Bijection between regions and symmetric annotated 1-sketches} A symmetric annotated 1-sketch corresponds to a specific order between the variables $x_i$ and $1+x_i$ for any $i$ in $\llbracket -n,n\rrbracket \setminus \left\{0 \right\}$. We show here that these orders are bijectively related to the coordinates of the points of the regions of the type $C$ Catalan arrangement. \begin{proposition} \label{prop:bijArr2sk} There is a one-to-one correspondence between the regions of the type $C$ Catalan arrangement in $\mathbb{R}\xspace^n$ and the symmetric annotated 1-sketches of size $2n$. \end{proposition} \begin{proof} Observe that for all $x \in {\mathbb{R}}^n$, if there exist $i,j \in \llbracket -n,n\rrbracket \setminus \left\{0 \right\}$ and $s,t \in \left\{0,1 \right\}$ such that $x_i +s =x_j +t$, then $x \in {\cup}_{ H \in \mathcal{C}_{\left\{-1,0,1 \right\}} (n)} H$. Therefore, for any $x = ( x_1,..., x_n )$ that belongs to $ {\mathbb{R}}^n \setminus {\cup}_{ H \in \mathcal{C}_{\left\{-1,0,1 \right\}} (n)} H$, the elements of $ \left\{ x_i +s : i \in \llbracket -n,n\rrbracket \setminus \left\{0 \right\}, s \in \{0,1\} \right\}$ are all distinct, with $x_{-i}= -x_i$ for all $i$. We define $ {\sigma} (x) =w_1 w_2...w_{4n}$, where $w_p = \alpha_{i}^{(s)}$ if $z_p = x_i +s$, with $\left\{z_1 < z_2 <...< z_{4n} \right\} = \left\{ x_i +s : i \in \llbracket -n,n\rrbracket \setminus \left\{0 \right\}, s \in \{0,1\} \right\}$. ${\sigma} (x)$ obviously satisfies conditions $(i)-(iii)$ of Definition \ref{def:symAnnotSk}. We now prove that ${\sigma} (x)$ satisfies condition $(iv)$ of Definition \ref{def:symAnnotSk}. Indeed, if $\alpha_{i}^{(0)}$ appears before $\alpha_{j}^{(s)}$ with $s \in \left\{0,1 \right\}$, then $x_i < x_j +s$, hence $x_{-j} < x_{-i} +s$. This implies that $\alpha_{-j}^{(0)}$ appears before $\alpha_{-i}^{(s)}$. Therefore ${\sigma} (x)$ is a symmetric annotated 1-sketch of size $2n$. The mapping ${\sigma}$ is constant over each region of $\mathcal{C}_{\left\{-1,0,1 \right\}} (n)$. Thus, ${\sigma} $ is a mapping from the regions of $\mathcal{C}_{\left\{-1, 0, 1 \right\}}(n)$ to ${D}^{(1)} (2n)$. The mapping $\sigma$ satisfies $ x_i - x_j < s$ if $\alpha_{i}^{(0)}$ appears before $\alpha_{j}^{(s)}$, and $x_i - x_j > s$ otherwise, for all $i, j \in \llbracket -n,n\rrbracket \setminus \left\{0 \right\}$ and all $s \in \left\{0,1 \right\}$. Thus, $\sigma$ is injective. Finally, for any symmetric annotated 1-sketch $ \omega =w_1 w_2...w_{4n}$, there exists $ x \in {\mathbb{R}}^n \setminus {\cup}_{ H \in \mathcal{C}_{\left\{-1,0,1 \right\}} (n)} H$ such that $ \sigma(x) = \omega$. Indeed, we define $x \in {\sigma}^{-1} (\omega)$ and $z_1, ..., z_{4n}$ by applying the following rule for $p = 1, 2,..., 4n$ (with $z_0=0$): if $w_ p =\alpha_{i}^{(0)}$, then $z_p = z_{p-1} + 1/(2n+1)$ and $x_i = z_p$, while if $ w_ p =\alpha_{i}^{(1)}$, then $z_p = x_i +1$. Therefore ${\sigma} $ is a bijection. \end{proof}
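The map $\sigma$ is easy to implement; the following sketch (with the pair encoding used above, and a sample point chosen by us) recovers the word of Example \ref{omega}:
\begin{verbatim}
# The map sigma: sort the 4n values x_i + s and list the letters (i, s).
def sigma(x):
    n = len(x)
    vals = []
    for i in range(1, n + 1):
        for idx, xi in ((i, x[i - 1]), (-i, -x[i - 1])):
            for s in (0, 1):
                vals.append((xi + s, (idx, s)))
    assert len({v for v, _ in vals}) == 4 * n, "x lies on a hyperplane"
    return [letter for _, letter in sorted(vals)]

# A point of the region corresponding to the word of Example (omega):
print(sigma([-0.55, 1.5, -0.4]))
# [(-2,0),(1,0),(-2,1),(3,0),(-3,0),(1,1),(-1,0),(3,1),(-3,1),(2,0),(-1,1),(2,1)]
\end{verbatim}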
\section{From orders to forests} \label{sec:orders2forests} In this section, we present a bijection between the symmetric annotated 1-sketches and some rooted labeled ordered forests that we call symmetric forests. We first define these forests and then describe the bijection. \subsection{Symmetric forests} In order to define a symmetric forest, we need to introduce the notion of sub-descendant in a forest. For any rooted labeled ordered forest $F$, we say that we read the nodes of $F$ in BFS order if we list the labels of the nodes of $F$ in a breadth-first search starting from the roots. \begin{definition} Let $i$ and $j$ be two nodes in a rooted ordered forest. We say that $i$ is a sub-descendant of $j$ if $i$ appears after $j$ and strictly before any child of $j$ in the BFS order. We also say that $i$ and $j$ satisfy the sub-descendant property (SDP) if the fact that $i$ is a sub-descendant of $j$ implies that $-j$ is a sub-descendant of $-i$. \end{definition} \begin{definition} \label{definition:symrootedforest} A symmetric forest with $2n$ nodes is a rooted labeled ordered forest that satisfies: \begin{enumerate}[label=(\roman*)] \item the first $n$ nodes read in BFS order are labeled $e_1, ..., e_n$, with $\{| e_1|,\ldots,|e_n|\} =\llbracket 1,n\rrbracket$, \item the last $n$ nodes read in BFS order are labeled $e_{n+1}, ..., e_{2n}$, with $e_{n+j} = {-e}_{n-j+1}$ for $j \in \llbracket 1,n\rrbracket$, \item every two nodes $i,j$ satisfy the sub-descendant property. \end{enumerate} We denote by $F_{S}(2n)$ the set of symmetric forests with $2n$ nodes. \end{definition} \begin{example} For the symmetric forest $G$ of Figure \ref{fig:symForest}, $1$ is a sub-descendant of $-2$ and $2$ is a sub-descendant of $-1$; hence the pair $\left\{1,-2 \right\}$ satisfies the sub-descendant property. Moreover, $G \in F_{S}(6)$. \end{example}
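In code, with a forest stored as its BFS list of (label, parent position) pairs (an encoding of ours), the sub-descendant relation reads as follows; the forest $G$ below is the one of Figure \ref{fig:symForest}, reconstructed from the word of Example \ref{omega} (see the algorithm in the next subsection):
\begin{verbatim}
# Sub-descendant test on a rooted ordered forest given in BFS order as a
# list of (label, parent_position) pairs (parent_position is None for roots).
def is_sub_descendant(forest, i, j):
    pos = {lab: p for p, (lab, _) in enumerate(forest)}
    children = [p for p, (_, par) in enumerate(forest) if par == pos[j]]
    first_child = min(children) if children else len(forest)
    return pos[j] < pos[i] < first_child

# The symmetric forest G of Figure (fig:symForest): roots -2 and 1; the
# children of -2 are 3 and -3; -1 is the child of 1; 2 is the child of -3.
G = [(-2, None), (1, None), (3, 0), (-3, 0), (-1, 1), (2, 3)]
print(is_sub_descendant(G, 1, -2))   # True: 1 is a sub-descendant of -2
print(is_sub_descendant(G, 2, -1))   # True: 2 is a sub-descendant of -1
\end{verbatim}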
In fact, a symmetric forest is composed of two sub-forests, where one is the symmetric of the other in the following sense: \begin{definition} \label{definition:symofrootedforest} Let $F$ be a rooted ordered forest defined on $n$ labeled nodes $e_1,.., e_n$. We define the symmetric of $F$ as the rooted ordered forest $\overline{F}$ with $n$ labeled nodes $-e_n,..., -e_1$ such that, for all $i \neq j \in \llbracket 1,n\rrbracket$, $-e_i$ is a sub-descendant of $-e_j$ in $\overline{F}$ if and only if $e_j$ is a sub-descendant of $e_i$ in $F$. \end{definition} We now explain how to decompose a symmetric forest $G$ into a forest $F$ and its symmetric: $F$ is the sub-forest of $G$ defined on the first $n$ nodes read in BFS order. \begin{figure}[h] \centering \includegraphics[width=300.600px] {logo13.pdf} \caption{A symmetric forest $G \in F_{S}(6)$, result of a shuffle between $F$ and $\overline{F}$.} \label{fig:symForest} \end{figure} Thus we have: \begin{proposition} \label{prop:forestdecomp} A symmetric forest with $2n$ nodes is the composition of a rooted labeled ordered forest with $n$ nodes and its symmetric. \end{proposition} \begin{proof} $F$ is the sub-forest of $G$ defined by the first $n$ nodes read in BFS order. Then the sub-forest of $G$ corresponding to the $n$ last nodes read in BFS order is the symmetric of $F$, by Definitions \ref{definition:symrootedforest} and \ref{definition:symofrootedforest}. \end{proof} Conversely, any shuffle between a rooted labeled ordered forest $F$ and its symmetric corresponds to a symmetric forest. We first give the definition of a special leaf, then the definition of the shuffles between a forest and its symmetric (see Figure \ref{fig:symOrderedForest}), and finally we prove the assertion. \begin{definition} In a rooted ordered forest with $n$ labeled nodes $e_1, ..., e_n$ such that $\{| e_1|,\ldots,|e_n|\}=\llbracket 1,n\rrbracket$, the special leaves are the leaves which appear after the last internal node in the BFS order. If a forest $F$ has only leaves, we consider that its last internal node is a fictitious node, parent of all the leaves of $F$. Let us call $F_{n,s}$, $1 \leq s \leq n$, the set of rooted labeled ordered forests of size $n$ with $s$ special leaves. \end{definition} \begin{example} The rooted labeled ordered forest $F$ of Figure \ref{fig:symForest} has two special leaves, $1$ and $3$. \end{example} \begin{definition} \label{definition:shufflesrootedforest} Let $F$ be a rooted ordered forest defined on $n$ labeled nodes $e_1,.., e_n$, ordered in BFS order and such that $\{|e_1|,\ldots,|e_n|\}=\llbracket 1,n\rrbracket$, with $s$ special leaves. The set of shuffles between $F$ and its symmetric $\overline{F}$, denoted $F \bowtie \overline{F}$, is the set of forests obtained when we connect $s$ edges from $\left\{-e_n, -e_{n-1},...,-e_{n-s+ 1} \right\}$ to $\left\{e_{n-s}, e_{n-s+1},...,e_{n-1}, e_n \right\}$ in such a way that any pair $(u,v)$, with $u$ in $\left\{-e_n, -e_{n-1},...,-e_{n-s+ 1} \right\}$ and $v$ in $\left\{e_{n-s},...,e_{n-1}, e_n \right\}$, satisfies the sub-descendant property and the sequence of the nodes read in BFS order is $e_1,\ldots, e_n, -e_n, -e_{n-1},...,-e_1$. We then say that $\left\{-e_n, -e_{n-1},...,-e_{n-s+ 1} \right\}$ and $\left\{e_{n-s},...,e_{n-1}, e_n \right\}$ satisfy the sub-descendant property. \end{definition} \begin{proposition} \label{proposition: shuffleforests} For any rooted ordered forest $F$ with $n$ nodes labeled with $e_1,\ldots, e_n$ such that $\{| e_1|,\ldots,|e_n|\}=\llbracket 1,n\rrbracket$, the set $F \bowtie \overline{F}$ is a set of symmetric forests with $2n$ nodes. \end{proposition} \begin{proof} Conditions $(i)$ and $(ii)$ of Definition \ref{definition:symrootedforest} are verified by definition of the shuffle. Notice that for any connection of $s$ edges from $\left\{-e_n, -e_{n-1},...,-e_{n-s+ 1} \right\}$ to $\left\{e_{n-s}, e_{n-s+1},\right.$ $\left. ...,e_{n-1}, e_n \right\}$, $\left\{-e_{n-s}, -e_{n-s-1},...,-e_1 \right\}$ always satisfies the sub-descendant property with $\left\{e_1, e_2,...,e_n \right\}$, and $\left\{-e_{n}, -e_{n-1},...,-e_{n-s+1} \right\}$ always satisfies the sub-descendant property with $\left\{e_1, e_2,...,e_{n-s-1} \right\}$.
By Definition \ref{definition:shufflesrootedforest}, $\left\{-e_n, -e_{n-1},...,-e_{n-s+ 1} \right\}$ and $\left\{e_{n-s},...,e_{n-1}, e_n \right\}$ satisfy the sub-descendant property. Therefore, for any forest $G$ in $F \bowtie \overline{F}$, every two nodes $e_i$ and $e_j$ satisfy the sub-descendant property. Thus, $G$ is a symmetric forest. \end{proof} \begin{figure}[h] \centering \includegraphics[width=450.810px] {logo14_bis.pdf} \caption{The set of shuffles between the forests $F$ and $\overline{F}$ of Figure \ref{fig:symForest}.} \label{fig:symOrderedForest} \end{figure} \vspace{0.5cm} \subsection{Bijection between symmetric annotated 1-sketches and symmetric forests} We show here that a symmetric annotated 1-sketch corresponds bijectively to a symmetric forest. Moreover, the decomposition of a symmetric annotated 1-sketch (see Proposition \ref{prop:sketchdecomp}) corresponds to the decomposition of a symmetric forest (see Proposition \ref{prop:forestdecomp}). \begin{proposition} \label{proposition:bijsym2n} There is a one-to-one correspondence between symmetric annotated 1-sketches of size $2n$ and symmetric forests of size $2n$. \end{proposition} \begin{proof} We prove the proposition in three steps. \paragraph{Step 1:} We first present an algorithm to get the symmetric forest from a symmetric annotated 1-sketch $\omega$ of $D^{(1)}(2n)$. We define the map $\phi$ between $D^{(1)}(2n)$ and $F_{S}{(2n)}$ by the following algorithm (see Figure \ref{fig:algorsymOrderedForest}): \begin{itemize} \item[(i)] Read $\omega$ from left to right. \item[(ii)] When $\alpha_{i}^{(0)}$ is read, create a node $i$ as follows: if $\alpha_{i}^{(0)}$ is the first letter, then $i$ is a root; if the preceding letter is $\alpha_{j}^{(0)}$, then $i$ becomes the next right sibling of $j$; and if the preceding letter is $\alpha_{j}^{(1)}$, then $i$ becomes the leftmost child of $j$. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=400.650px] {logo15.pdf} \caption{Algorithm to get the symmetric forest $\phi\left(\scriptstyle \alpha_{-2}^{(0)} \alpha_{1}^{(0)} \alpha_{-2}^{(1)} \alpha_{3}^{(0)} \alpha_{-3}^{(0)} \alpha_{1}^{(1)} \alpha_{-1}^{(0)} \alpha_{3}^{(1)} \alpha_{-3}^{(1)} \alpha_{2}^{(0)} \alpha_{-1}^{(1)} \alpha_{2}^{(1)}\normalsize\right)$} \label{fig:algorsymOrderedForest} \end{figure} First note that $\alpha_{i}^{(0)}$ and $\alpha_{-i}^{(0)}$ cannot both be among the first $n$ $ \alpha_{.}^{(0)}$ letters. By definition, the first $n$ nodes of the forest $\phi(\omega)$ are labeled by the first $n$ $ \alpha_{.}^{(0)}$-letters, and the last $n$ nodes are defined symmetrically, as in the symmetric annotated 1-sketches of $D^{(1)}(2n)$.
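The following sketch implements both directions of the correspondence (the inverse reading $\psi$ is made precise in Step 2 below); forests are stored as BFS lists of (label, parent position) pairs, as before, and the round trip is checked on the running example:
\begin{verbatim}
# phi: build the forest from a symmetric annotated 1-sketch (Step 1), and
# psi_map: read the word back from the forest (Step 2 below). This is an
# illustration with our own encoding, not the paper's code.
def phi(word):
    forest = []                      # (label, parent_position) in BFS order
    for p, (i, s) in enumerate(word):
        if s != 0:
            continue                 # only (0)-letters create nodes
        if p == 0:
            forest.append((i, None))        # first letter: a root
        else:
            j, t = word[p - 1]
            pos = {lab: q for q, (lab, _) in enumerate(forest)}
            if t == 0:               # next right sibling of j
                forest.append((i, forest[pos[j]][1]))
            else:                    # leftmost child of j
                forest.append((i, pos[j]))
    return forest

def psi_map(forest):
    word = [(forest[0][0], 0)]
    done = 0                         # (1)-letters emitted for positions < done
    for q in range(1, len(forest)):
        lab, par = forest[q]
        if par == forest[q - 1][1]:  # next right sibling
            word.append((lab, 0))
        else:                        # leftmost child: flush (1)-letters first
            word.extend((forest[p][0], 1) for p in range(done, par + 1))
            done = par + 1
            word.append((lab, 0))
    word.extend((forest[p][0], 1) for p in range(done, len(forest)))
    return word

omega = [(-2, 0), (1, 0), (-2, 1), (3, 0), (-3, 0), (1, 1),
         (-1, 0), (3, 1), (-3, 1), (2, 0), (-1, 1), (2, 1)]
print(phi(omega))                    # the forest G of Figure (fig:symForest)
print(psi_map(phi(omega)) == omega)  # True: the two maps are inverse
\end{verbatim}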
Second, remark that: \begin{remark} \label{rem:bijsk2forests} \begin{enumerate} \item If $ \alpha_{i}^{(1)}$ is not followed by an $ \alpha_{.}^{(0)}$-letter, then node $i$ is a leaf. \item If $\alpha_{i}^{(0)}$ appears before $ \alpha_{j}^{(0)}$ in $\omega$, then node $i$ appears before node $j$ in the BFS order of the nodes of the obtained forest. \item The property ``$ \alpha_{i}^{(0)}$ appears before $ \alpha_{j}^{(0)}$ then $ \alpha_{-j}^{(0)}$ appears before $ \alpha_{-i}^{(0)}$'' implies that ``$ i$ appears before $j$ then $-j$ appears before $-i$ in the BFS order of the nodes of the obtained forest''. \item \label{rem:SDP} The property ``$\alpha_{i}^{(0)}$ appears before $\alpha_{j}^{(1)}$ then $\alpha_{-j}^{(0)}$ appears before $\alpha_{-i}^{(1)}$'' is equivalent to ``$\alpha_{j}^{(0)}...\alpha_{i}^{(0)}...\alpha_{j}^{(1)}$ then $\alpha_{-i}^{(0)}...\alpha_{-j}^{(0)}...\alpha_{-i}^{(1)}$''. So, if $i$ is a sub-descendant of $j$, then $-j$ is a sub-descendant of $-i$. \end{enumerate} \end{remark} From these last two remarks, it is clear that $\phi(\omega)$ is a symmetric forest. \paragraph{Step 2:} Before showing that $\phi$ is a bijection, we describe the inverse mapping $\psi$. Let $G \in F_{S}(2n)$ and let $e_1, e_2, ..., e_{2n}$ be the $2n$ nodes of $G$ read in BFS order. Let $\psi(G)$ be the word $\omega$ defined inductively as follows: \begin{itemize} \item Read the vertices in BFS order. $\omega_1 = \alpha_{e_1}^{(0)}$. \item For any $2 \leq j \leq 2n$, if $e_j$ is the next right sibling of $e_{j-1}$, then $\omega_j = \omega_{j-1} \alpha_{e_j} ^{(0)}$; if $e_j$ is the leftmost child of $e_{i}$, then $\omega_j = \omega_{j-1}\alpha_{e_{i'+1}}^{(1)}\cdots\alpha_{e_{i}}^{(1)}\alpha_{e_j} ^{(0)}$, where $e_{i'}$ is the last node whose $\alpha_{.}^{(1)}$-letter has already been appended (with $i'=0$ initially); this inserts at this point the $\alpha_{.}^{(1)}$-letters of the childless nodes lying between $e_{i'}$ and $e_i$ in the BFS order. \item $\omega = \omega_{2n}\alpha_{e_{2n-s+1}} ^{(1)}\alpha_{e_{2n-s+2}}^{(1) }... \alpha_{e_{2n}}^{(1)} = \omega_{2n}\alpha_{-e_s} ^{(1)}\alpha_{-e_{s-1}} ^{(1) }... \alpha_{-e_1}^{(1)}$ if $G$ has $s$ special leaves. \end{itemize} Note that for all $G \in F_{S}(2n)$, the word $\psi(G)$ satisfies the properties $(i)$--$(iv)$ of symmetric annotated 1-sketches. Hence $\psi$ is a mapping from $F_{S}(2n)$ to $D^{(1)}(2n)$. \paragraph{Step 3:} It is easy to prove that $\psi(\phi(D^{(1)}(2n)))= D^{(1)}(2n)$ and $\phi(\psi(F_{S}(2n)))= F_{S}(2n)$. \end{proof} From Propositions \ref{prop:bijArr2sk} and \ref{proposition:bijsym2n}, we get: \begin{corollary} \label{cor:regions2symforests} $\Phi = \phi \circ\sigma$ is a bijection from the regions of the Catalan arrangement $\mathcal{C}_{\left\{-1,0,1 \right\}}(n)$ to the symmetric forests $F_S (2n)$. \end{corollary} \vspace{0.3cm} The bijection $\phi$ induces a bijection between the annotated 1-sketches of size $n$ with rightmost $\alpha_.^{(0)}$-letter at position $s$ and the rooted labeled ordered forests with $2n-s$ special leaves. \begin{proposition} \label{prop:symbij} The mapping $\phi$ induces a bijection between $A_{n,s}$ and $F_{n,2n-s}$, $n \leq s \leq 2n-1$. \end{proposition} \begin{proof} Let $\omega_1 \in A_{n,s}$. This means that $\omega_1= \omega_0 \alpha_{j_n}^{(0)} \alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)} \alpha_{j_n}^{(1)}$. From the first step of the proof of Proposition \ref{proposition:bijsym2n}, we have that: \begin{itemize} \item $\alpha_{j_1}^{(0)}\ldots \alpha_{j_n}^{(0)}$ represent the nodes $j_1,\ldots , j_n$ read in BFS order in $\phi(\omega_1)$, \item if $ \alpha_{i}^{(1)}$ is not followed by an $ \alpha_{.}^{(0)}$-letter, then node $i$ is a leaf, \item the last $ \alpha_{.}^{(1)}$-letter followed by an $\alpha_{.}^{(0)}$-letter is $\alpha_{j_{s-n}}^{(1)}$. This implies that the last internal node in the BFS order of the nodes of $\phi(\omega_1)$ is $j_{s-n}$. \end{itemize} Thus, $\phi(\omega_1)$ is a rooted ordered forest with $n$ labeled nodes $j_1, ..., j_n$ where the nodes $j_{s-n+1}, j_{s-n+2}, ..., j_{n} $ are $2n-s$ special leaves, and $\phi(\omega_1) \in F_{n,2n-s}$. Conversely, let $ F \in F_{n,2n-s}$ with $2n-s$ special leaves $j_{s-n+1}, j_{s-n+2}, ..., j_{n} $. Then $\alpha_ {j_{n}} ^{(0)}$ is at the $s^{th}$ position in $\psi(F)$, hence $\psi(F)$ belongs to $A_{n,s}$. It is easy to prove that $\psi(\phi(A_{n,s}))= A_{n,s}$.
Similarly, $\phi(\psi(F_{n,2n-s}))= F_{n,2n-s}$. \end{proof} We now show that the different possible shuffles between an annotated 1-sketch and its symmetric correspond, by $\phi$, to the different possible shuffles between a rooted labeled ordered forest and its symmetric. \begin{proposition} \label{prop:compatible} The bijections $\phi$ and $\psi$ are compatible with shuffles and symmetrics. Indeed, let $\omega_1$ be an annotated 1-sketch; then $\phi( \overline{\omega}_1)= \overline{\phi(\omega_1)}$ and $\phi( \omega_1\bowtie\overline{\omega}_1)=\phi(\omega_1) \bowtie \overline{\phi(\omega_1)}$. \end{proposition} \begin{proof} Let $\omega_1= \omega_0 \alpha_{j_n}^{(0)} \alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)} \ldots \alpha_{j_{n-1}}^{(1)} \alpha_{j_n}^{(1)} \in A_{n,s}$. From Proposition \ref{prop:symbij}, we get $\phi( \omega_1) \in F_{n,2n-s}$ and $\phi( \overline{\omega}_1) \in F_{n,2n-s}$. Remark that if $\omega_1$ is of the form $\ldots \alpha_i^{(0)}\ldots \alpha_j^{(0)} \ldots \alpha_i^{(1)} \ldots \alpha_j^{(1)} \ldots$, then $\overline{\omega}_1$ is of the form $\ldots \alpha_{-j}^{(0)}\ldots \alpha_{-i}^{(0)} \ldots \alpha_{-j}^{(1)} \ldots \alpha_{-i}^{(1)} \ldots$. This means that in $\phi(\omega_1)$, $j$ is a sub-descendant of $i$, and in $\phi(\overline{\omega}_1)$, $-i$ is a sub-descendant of $-j$. Thus $\phi( \overline{\omega}_1)= \overline{\phi(\omega_1)}$. Moreover, a shuffle between $\alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)} \ldots\alpha_{j_n}^{(1)}$ and $\alpha_{-j_n}^{(0)} \ldots \alpha_{-j_{s-n+2}}^{(0)}\alpha_{-j_{s-n+1}}^{(0)}$ corresponds by $\phi$ to a shuffle between the special leaves of $\phi(\omega_1)$, $j_{s-n+1}, j_{s-n+2} \ldots, j_{n}$, and the nodes of $\phi(\overline{\omega}_1)$, $-j_{n}, \ldots, -j_{s-n+2} , -j_{s-n+1}$. Since $\omega_1\bowtie \overline{\omega}_1 = \omega_0 \alpha_{j_n}^{(0)} \left[\alpha_{j_{s-n+1}}^{(1)} \alpha_{j_{s-n+2}}^{(1)} \ldots\alpha_{j_n}^{(1)} \bowtie \alpha_{-j_n}^{(0)} \ldots \alpha_{-j_{s-n+2}}^{(0)}\alpha_{-j_{s-n+1}}^{(0)} \right ] \alpha_{-j_n}^{(1)} \overline{\omega}_0 $, it is straightforward to conclude that $\phi( \omega_1\bowtie\overline{\omega}_1)=\phi(\omega_1) \bowtie \overline{\phi(\omega_1)}$. \end{proof} \section{The number of regions of the type $C$ Catalan arrangement} \label{sec:enumeration} We are now able to compute the number of regions of the type $C$ Catalan arrangement. We first compute the number of rooted ordered forests of size $n$ with $s$ special leaves. \begin{proposition} \label{prop:numla} The number $C_{n,s}$ of rooted ordered forests of size $n$ with $s$ special leaves satisfies the following formula: $ C_{n,s} = \frac{s \binom{2n-s}{n}}{2n-s}$ for $ 1 \leq s \leq n.$ \end{proposition} \begin{proof} Every rooted ordered forest $F$ can be identified with an annotated 1-sketch $\psi(F)$ by Proposition \ref{prop:symbij}. Let $F$ be a rooted ordered forest of size $n$ with $s$ special leaves $e_{n-s+1},..., e_{n-1}, e_n$; then $\psi(F) =\omega_0 \alpha_{e_n}^{(0)}\alpha_{e_{n-s+1}}^{(1)} \alpha_{e_{n-s+2}}^{(1)}... \alpha_{e_{n-1}}^{(1)} \alpha_{e_n}^{(1)}$. We associate an up step $U$ to each $\alpha_{.}^{(0)}$-letter and a down step $D$ to each $\alpha_{.}^{(1)}$-letter, and thus obtain a Dyck path of size $n$. This implies that $C_{n,s}$ is equal to the number of Dyck paths of size $n$ of the form $P = P_0 UDD...D$, where $P_0U$ is a lattice path with $n$ up steps and $n-s$ down steps.
Consider the family $L$ of all lattice paths from $(0,0)$ to $(2n-s,s)$ consisting of $n$ $U$ steps and $n-s$ $D$ steps; $L$ is enumerated by $\binom{2n-s}{n}$. Now consider the action of the cyclic group $\mathbb{Z}\xspace_{2n-s}$ on $L$ by cyclic rotation. Pick a lattice path in $L$; by the cycle lemma, there exist exactly $s = n-(n-s)$ cyclic rotations of it that are $1$-dominating (every prefix has strictly more $U$ than $D$). Therefore, each orbit contains exactly $s$ lattice paths such that every prefix has strictly more $U$ than $D$. These $s$ lattice paths are of the form $UUv$, with $n$ $U$ steps and $n-s$ $D$ steps, and they end at height $s$. Let $L_0$ be the set of all lattice paths in $L$ such that every prefix has strictly more $U$ than $D$, so that the number of elements of $L_0$ is $\frac{s \binom{2n-s}{n}}{2n-s}$. Applying to any lattice path $P_1$ in $L_0$ the bijective transformation that changes $P_1 = UUv$ into $P'_1= UvU = P_0 U$, we get that $P_0U$ is a lattice path with $n$ up steps and $n-s$ down steps that stays above the $x$-axis. Adding $s$ down steps at the end of $P_0U$, we get a Dyck path of the form $P = P_0 UDD...D$. This proves the formula for $C_{n,s}$ above. \end{proof} Now we are able to enumerate the regions of the type $C$ Catalan arrangement. \begin{proposition} The number of regions of the type $C$ Catalan arrangement is $$ r(C_{\left\{-1,0,1\right\}}(n)) = {2^n}n! \binom{2n}{n}. $$ \end{proposition} \begin{proof} The number of labelings of a rooted ordered forest of size $n$ with labels $e_1, \ldots, e_n$ such that $\{|e_1|, \ldots, |e_n|\}=\llbracket 1,n\rrbracket$ is $2^n n!$. Thus, from Corollary \ref{cor:regions2symforests}, we can compute the number of regions of the type $C$ Catalan arrangement, $$ r(C_{\left\{-1,0,1\right\}}(n)) = {2^n}n! \sum\limits_{s=1}^{n} C_{n,s} D_{n,s}, $$ where $C_{n,s}$ is given by Proposition \ref{prop:numla} and $D_{n,s}$ is the number of shuffles between any rooted labeled ordered forest of size $n$ with $s$ special leaves and its symmetric. Now we compute $D_{n,s}$. By Propositions \ref{prop:symbij} and \ref{prop:compatible}, this is equal to the number of elements of the set $\omega_1\bowtie \overline{\omega}_1 = \omega_0 \alpha_{j_n}^{(0)} \left[\psi \bowtie \overline{\psi} \right ] \alpha_{-j_n}^{(1)} \overline{\omega}_0 $, with $\omega_1 \in A_{n,2n-s}$, $\omega_1= \omega_0 \alpha_{j_n}^{(0)} \alpha_{j_{n-s+1}}^{(1)} \alpha_{j_{n-s+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)} \alpha_{j_n}^{(1)}$ and $\psi= \alpha_{j_{n-s+1}}^{(1)} \alpha_{j_{n-s+2}}^{(1)}... \alpha_{j_{n-1}}^{(1)} \alpha_{j_n}^{(1)}$. On the other hand, every annotated 1-sketch of size $n$ can be represented by a Dyck path of the same size, so $D_{n,s}$ is the number of shuffles between $s$ down steps and $s$ up steps, where each element of the shuffle is of the form $a_1 a_2...a_s b_s...b_2b_1$ with $(a_i, b_i) \in \left\{ (U,D), (D,U) \right\}$ for all $1 \leq i \leq s$. Therefore, $D_{n,s} = 2^{s}$. Note that we have the recurrence formula $C_{n,s} = C_{n-1,s-1} + C_{n,s+1} $ for all $ 1 \leq s \leq n-1$. Then we get $$ \sum\limits_{s=1}^{n} C_{n,s} D_{n,s}= \sum\limits_{s=1}^{n} \frac{s2^s \binom{2n-s}{n}}{2n-s} = \binom{2n}{n}$$ by induction. \end{proof} It would now be interesting to see if the bijection between our forests and the regions of the Catalan arrangement of type $C$ can be refined to the regions of the Linial arrangement of type $C$, thus giving a bijective interpretation of the enumeration exhibited by C.A. Athanasiadis in \cite{A96}.
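As a brute-force sanity check of Proposition \ref{prop:numla} and of the final identity, the following sketch enumerates Dyck paths directly (for small $n$ only):
\begin{verbatim}
# Verification of C_{n,s} = s/(2n-s) * binom(2n-s, n) and of
# sum_s C_{n,s} 2^s = binom(2n, n), via Dyck paths ending in U D^s.
from math import comb
from itertools import product

def C(n, s):
    return s * comb(2 * n - s, n) // (2 * n - s)

def count_dyck_with_s_final_downs(n, s):
    total = 0
    for steps in product((1, -1), repeat=2 * n):
        heights, h = [], 0
        for st in steps:
            h += st
            heights.append(h)
        if min(heights) < 0 or h != 0:
            continue                      # not a Dyck path
        if steps[-s:] == (-1,) * s and steps[-s - 1] == 1:
            total += 1                    # exactly s trailing down steps
    return total

n = 5
print(all(C(n, s) == count_dyck_with_s_final_downs(n, s)
          for s in range(1, n + 1)))                               # True
print(sum(C(n, s) * 2 ** s for s in range(1, n + 1)) == comb(2 * n, n))  # True
\end{verbatim}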
\bibliographystyle{plain}
\section{Introduction} For $s>1/2$, we consider the following nonlocal Neumann problem \begin{equation}\label{P} \begin{cases} (-\Delta)^s u+u=f(u)\quad&\mbox{in }\Omega,\\ u\ge 0\quad&\mbox{in }\Omega,\\ \mathcal N_s u=0\quad&\mbox{in }\mathbb R^n\setminus \overline \Omega. \end{cases} \end{equation} Here $\Omega$ is a radial domain of $\mathbb R^n$: either a ball \begin{equation}\label{palla} \Omega=B_R:=\{x\in\mathbb R^n\,:\, |x|<R\}, \quad R>0, \end{equation} or an annulus \begin{equation}\label{anello} \Omega=A_{R_0,R}:=\{x\in\mathbb R^n\,:\, R_0<|x|<R\}, \quad 0<R_0<R. \end{equation} Furthermore, $n\ge1$, $(-\Delta)^s$ denotes the fractional Laplacian \begin{equation}\label{FL} (-\Delta)^su(x):=c_{n,s}\,\mathrm{PV}\int_{\mathbb R^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}dy, \end{equation} and $\mathcal N_s$ is the following nonlocal normal derivative \begin{equation}\label{Neu} \mathcal N_s u(x):=c_{n,s}\int_\Omega\frac{u(x)-u(y)}{|x-y|^{n+2s}}dy\quad\mbox{for all }x\in\mathbb R^n\setminus\overline \Omega, \end{equation} first introduced in \cite{DROV}, where $c_{n,s}$ is a normalization constant. It is a well-known fact that the fractional Laplacian $(-\Delta)^s$ is the infinitesimal generator of a L\'evy process. The notion of nonlocal normal derivative $\mathcal N_s$ also has a particular probabilistic interpretation; we will comment on it in Section 2. We stress here that, with this definition of nonlocal Neumann boundary conditions, problem \eqref{P} has a variational structure. In this paper, we study the existence of non-constant solutions of \eqref{P} for a superlinear nonlinearity $f$, which can possibly be supercritical in the sense of Sobolev embeddings. In order to state our main result, we introduce the hypotheses on $f$. We assume that $f\in C^{1,\gamma}([0,\infty))$, for some $\gamma>0$, satisfies the following conditions: \begin{itemize} \item[$(f_1)$] $f'(0)=\lim_{t\to 0^+}\frac{f(t)}{t}\in (-\infty,1)$; \item[$(f_2)$] $\liminf_{t\to\infty}\frac{f(t)}{t}>1$; \item[$(f_3)$] there exists a constant $u_0>0$ such that $f(u_0)=u_0$ and $f'(u_0)>\lambda_2^{+,\mathrm{r}}+1$, \end{itemize} where $\lambda_2^{+,\mathrm{r}}>0$ is the second radial increasing eigenvalue of the fractional Laplacian with (nonlocal) Neumann boundary conditions. Clearly, as a consequence of $(f_1)$, we know that $f(0)=0$ and that $f$ lies below the identity line in a right neighborhood of $0$. The results of the paper continue to hold if we weaken $(f_1)$ as follows: \begin{itemize} \item[$(f'_1)$] $f(0)=0$, $f'(0)\in(-\infty,1]$ and $f(t)<t$ in $(0,\bar t)$ for some $\bar t>0$. \end{itemize} A prototype nonlinearity satisfying $(f_1)$ and $(f_2)$ is given by $$ f(t):=t^{q-1}-t^{r-1},\mbox{ with }2\le r<q. $$ Indeed, in this case $f'(0)=-1$ if $r=2$ and $f'(0)=0$ if $r>2$, while $f(t)/t=t^{q-2}-t^{r-2}\to+\infty$ as $t\to\infty$. For $q$ large enough, the above function satisfies condition $(f_3)$ as well. We observe that $(f_1)$ and $(f_2)$ are enough to prove the existence of a mountain pass-type solution. The additional hypothesis $(f_3)$ is needed to prove that such a solution is non-constant. In particular, the existence of a fixed point $u_0$ of $f$ is a consequence of $(f_1)$, $(f_2)$, and the regularity of $f$; moreover, in view of $\int_\Omega (-\Delta)^s u\, dx=0$ (cf. \eqref{int-Deltas} below), the fact that $f(t)-t$ must change sign at least once is a natural compatibility condition for the existence of solutions. \smallskip Our main result can be stated as follows. \begin{theorem}\label{thm:main} Let $s>1/2$ and $f\in C^{1,\gamma}([0,\infty))$, for some $\gamma>0$, satisfy assumptions $(f_1)$--$(f_3)$. 
Then there exists a non-constant, radial, radially non-decreasing solution of \eqref{P} which is of class $C^2$ and positive almost everywhere in $\Omega$. In addition, if $u_{0,1},\ldots,u_{0,N}$ are $N$ different positive constants satisfying $(f_3)$, then \eqref{P} admits $N$ different non-constant, radial, radially non-decreasing, a.e. positive solutions. \smallskip If $\Omega=A_{R_0,R}$, the same existence and multiplicity result holds also for non-constant, radial, radially non-increasing, a.e. positive $C^2$ solutions of \eqref{P}. \end{theorem} We stress here that the situation with Neumann boundary conditions is completely different from the case with Dirichlet boundary conditions. Indeed, as in the local case $s=1$, a Poho\v{z}aev-type identity implies nonexistence of solutions under Dirichlet boundary conditions for critical or supercritical nonlinearities, cf. \cite[Corollary 1.3]{R-OS}, while here, under Neumann boundary conditions, we can find solutions even in the supercritical regime. Moreover, the supercritical nature of the problem prevents {\it a priori} the use of variational methods to attack the problem. Indeed, the energy functional associated to \eqref{P} is not even well-defined in the natural space where we look for solutions, i.e., $H^s_{\Omega,0}$ (cf. Section \ref{sec2}). To overcome this issue, we essentially follow the strategy used in \cite{BNW,CN}. Our starting point is to work in the cone of non-negative, radial, non-decreasing functions \begin{equation}\label{cone} \mathcal C_+(\Omega):=\left\{u\in H^s_{\Omega,0}\,:\, \begin{aligned}& u\mbox{ is radial and }u\ge0\mbox{ in }\mathbb R^n,\\\,&u(r)\le u(\rho) \mbox{ for all }R_0\le r\le \rho\le R\end{aligned}\right\}, \end{equation} where with a slight abuse of notation we write $u(|x|):=u(x)$, and, in order to treat simultaneously the two cases $\Omega=B_R$ and $\Omega=A_{R_0,R}$, we regard $B_R$ as the limit case $A_{0,R}$. This cone was introduced for the local case ($s=1$) by Serra and Tilli in \cite{SerraTilli2011}; it is convex and closed in the $H^s$-topology. The idea of working with radial functions, suggested by the symmetry of the problem, is dictated by the necessity of gaining compactness. Indeed, restricting the problem to the space of radial $H^s$ functions ($H^s_\mathrm{rad}$) essentially allows one to work in a one-dimensional domain, where we have better embeddings than in higher dimension. Nevertheless, in the case of the ball, the energy functional is not well defined even in $H^s_\mathrm{rad}$, since radial symmetry alone is not enough to prevent the existence of sequences of solutions exploding at the origin. This is the reason for the increasing monotonicity requirement in the cone $\mathcal C_+$, cf. \cite{CM} for similar arguments in more general domains. Indeed, we can prove that all solutions of \eqref{P} belonging to $\mathcal C_+$ are a priori bounded in $H^s_{\Omega,0}$ and in $L^\infty(\Omega)$. When the domain does not contain the origin, i.e. in the case of the annulus $R_0>0$, the monotonicity requirement can be avoided and it is possible to work directly in the space $H^s_\mathrm{rad}$. Nonetheless, working in $H^s_{\mathrm{rad}}$ would only allow us to prove the existence of a single radial weak solution of the equation in \eqref{P} under Neumann boundary conditions, whose sign and monotonicity would be unknown. 
Therefore, also in the case of the annulus, even if we do not need to gain compactness, we will work in $\mathcal C_+(A_{R_0,R})$ to find a non-decreasing solution, and in \begin{equation}\label{cone-} \mathcal C_-(A_{R_0,R}):=\left\{u\in H^s_{A_{R_0,R},0}\,:\, \begin{aligned}&u\mbox{ is radial and }u\ge0\mbox{ in }\mathbb R^n,\\&u(r)\ge u(\rho) \mbox{ for all }R_0\le r\le \rho\le R\end{aligned}\right\}, \end{equation} to find a non-increasing solution. For simplicity of notation, in the rest of the paper we will simply denote by $\mathcal C$ both $\mathcal C_+(\Omega)$ and $\mathcal C_-(A_{R_0,R})$ whenever the reasoning is independent of the particular cone. In both cases, thanks to the a priori estimates, we can modify $f$ at infinity in such a way as to obtain a subcritical nonlinearity $\tilde f$. This leads us to study a new {\it subcritical} problem, with the property that all solutions of the new problem belonging to $\mathcal C$ also solve the original problem \eqref{P}. The energy functional associated to the new problem is clearly well-defined on the whole $H^s_{\Omega,0}$. To get a solution of the new problem belonging to $\mathcal C$, we prove that a mountain pass-type theorem holds inside the cone $\mathcal C$. The main difficulty here is that we need to find a critical point of the energy belonging to a set ($\mathcal C$) which is strictly smaller than the domain ($H^s_{\Omega,0}$) of the energy functional itself. To overcome this difficulty we build a deformation $\eta$ for the Deformation Lemma \ref{deformation} which preserves the cone, cf. also Lemma~\ref{cononelcono}. Once the minimax solution is found, we need to prove that it is non-constant. We further restrict our cone, working in a subset of $\mathcal C$ in which the only constant solution of \eqref{P} is the constant $u_0$ defined in $(f_3)$. In this set, we are able to distinguish the mountain pass solution from the constant one using an energy estimate. The multiplicity part of Theorem \ref{thm:main} can be easily obtained by repeating the same arguments around each constant solution: in case we have more than one constant satisfying $(f_3)$, for each $u_{0,i}$ we work in a subset of $\mathcal C$ made of functions $u$ whose image is contained in a neighborhood of $u_{0,i}$. This allows us to localize each mountain pass solution and to prove that to each $u_{0,i}$ there corresponds a different solution of the problem. The paper is organized as follows: \begin{itemize} \item In Section 2, we recall some basic properties of our nonlocal Neumann problem. In particular, we describe its variational structure and we establish a strong maximum principle; \item In Section 3, we prove the a priori bounds, both in $L^\infty$ and in the natural energy space, which will be crucial for our existence result; \item Section 4 contains the Mountain Pass-type Theorem (Theorem \ref{mountainpass}), which establishes the existence of a radial, non-negative, non-decreasing solution and whose main ingredient is a Deformation Lemma inside the cone $\mathcal C$ (see Lemma \ref{deformation}); \item Finally, in Section 5, we prove that the solution found via the Mountain Pass argument is non-constant. 
\end{itemize} \section{The notion of nonlocal normal derivative and the variational structure of the problem}\label{sec2.0} In this section, we comment on the notion of nonlocal normal derivative $\mathcal N_s$ and we describe some structural properties of the nonlocal Neumann problem under consideration, with particular emphasis on its variational structure. As mentioned in the Introduction, we use the following notion of nonlocal normal derivative: \begin{equation}\label{N} \mathcal N_su(x):=c_{n,s}\int_\Omega\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,\quad x\in \mathbb{R}^n\setminus \overline \Omega. \end{equation} As well explained in \cite{DROV}, with this notion of normal derivative, problem \eqref{P} has a variational structure. We emphasize that the operator $(-\Delta)^s$ that we consider is the standard fractional Laplacian on $\mathbb{R}^n$ (notice that the integration in \eqref{FL} is taken over the whole $\mathbb{R}^n$) and not the regional one (where the integration is done only over $\Omega$). This choice will be reflected in the associated energy functional (see e.g. \eqref{energyh}). We note in passing that, in \cite{A}, it is shown that the fractional Laplacian $(-\Delta)^su$ under homogeneous nonlocal Neumann boundary conditions ($\mathcal N_s u=0$ in $\mathbb R^n\setminus\overline{\Omega}$) can be expressed as a regional operator with a kernel having logarithmic behavior at the boundary. There are other possible notions of ``Neumann conditions'' for problems involving fractional powers of the Laplacian (depending on which type of operator one considers), which all recover the classical Neumann condition in the limit case $s\uparrow 1$. See Section 7 in \cite{DROV} and the references therein for a more precise discussion of the possible different definitions. The choice of the standard fractional Laplacian $(-\Delta)^s$ and of the corresponding normal derivative $\mathcal N_s$ also has a specific probabilistic interpretation, which is well described in Section 2 of \cite{DROV}. The idea is the following. Let us consider a particle that moves randomly in $\mathbb{R}^n$ according to the following law: if the particle is located at a point $x\in \mathbb{R}^n$, it can jump to any other point $y \in \mathbb{R}^n$ with a probability density that is proportional to $|x-y|^{-n-2s}$. It is well known that the probability density $u(x,t)$ that the particle is situated at the point $x$ at time $t$ solves the fractional heat equation $u_t + (-\Delta)^s u =0$. If we now replace the whole space $\mathbb{R}^n$ with a bounded domain $\Omega$, we need to specify the ``boundary conditions'', that is, what happens when the particle exits $\Omega$. The choice of the Neumann condition $\mathcal N_s u=0$ corresponds to the following situation: when the particle reaches a point $x \in \mathbb{R}^n \setminus \overline \Omega$, it may jump back to any point $y \in \Omega$ with a probability density that, again, is proportional to $|x-y|^{-n-2s}$. Just as a comparison, if in place of Neumann boundary conditions one considers the more standard Dirichlet boundary conditions (which in this nonlocal setting read $u\equiv 0$ in $\mathbb{R}^n \setminus \overline\Omega$), this would correspond to killing the particle when it exits $\Omega$. We now describe some variational properties of our nonlocal Neumann problem. Let us start with an integration by parts formula that justifies the choice of $\mathcal N_s u$. In what follows $\Omega^c$ will denote the complement of $\Omega$ in $\mathbb{R}^n$. 
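Before turning to the variational theory, and purely as an illustration of the probabilistic interpretation recalled above, we include a toy one-dimensional Monte Carlo sketch (our own; the parameter choices are for illustration only, and this construction is not taken from \cite{DROV}) of the Neumann-type jump process on $\Omega=(0,1)$:
\begin{verbatim}
import random

# Toy 1D simulation of the Neumann jump process on Omega = (0,1), s = 0.75.
# Inside Omega the particle makes symmetric power-law jumps with density
# ~ |z|^(-1-2s); when it lands outside Omega it jumps back to y in Omega
# drawn with density proportional to |x - y|^(-1-2s).
S = 0.75
EPS = 1e-3  # small-jump cutoff: the Levy measure is not integrable at 0

def sample_jump():
    # Inverse-CDF sampling of |Z| with density ~ z^(-1-2S) on [EPS, infinity).
    r = EPS * (1.0 - random.random()) ** (-1.0 / (2.0 * S))
    return r if random.random() < 0.5 else -r

def jump_back(x):
    # Rejection sampling of y in (0,1) with density ~ |x - y|^(-1-2S);
    # the density peaks at the endpoint of Omega closest to x.
    d = max(min(abs(x), abs(x - 1.0)), 1e-12)  # guard: boundary hits have prob. 0
    m = d ** (-1.0 - 2.0 * S)
    while True:
        y = random.random()
        if random.random() * m <= abs(x - y) ** (-1.0 - 2.0 * S):
            return y

x, n_exits = 0.5, 0
for _ in range(10_000):
    x += sample_jump()
    if not 0.0 < x < 1.0:
        n_exits += 1
        x = jump_back(x)
print(f"re-entries: {n_exits}, final position: {x:.3f}")
\end{verbatim}
The small-jump cutoff \texttt{EPS} is an artifact of the simulation (the L\'evy measure is not integrable at the origin) and plays no role in the analytical theory.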
\begin{lemma}[Lemma 3.3 in \cite{DROV}]\label{by-parts} Let $u$ and $v$ be bounded $C^2$ functions defined on $\mathbb{R}^n$. Then, the following formula holds: \begin{equation}\label{int-by-parts} \begin{aligned} \frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}&\,dx\,dy\\ &\hspace{-2cm}=\int_\Omega v\,(-\Delta)^su\,dx + \int_{\Omega^c}v\,\mathcal N_s u\,dx. \end{aligned} \end{equation} \end{lemma} \begin{remark} As a consequence of Lemma \ref{by-parts}, if $u\in C^2(\mathbb R^n)$ solves \eqref{P}, taking $v\equiv 1$ in \eqref{int-by-parts} we get \begin{equation}\label{int-Deltas} \int_\Omega (-\Delta)^su\,dx=0. \end{equation} \end{remark} We now introduce the functional space where the problem is set. Given measurable functions $u,\,v:\mathbb R^n\to \mathbb R$, we set \begin{equation}\label{seminorm} [u]_{H^s_{\Omega,0}}:= \left(\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}dx\,dy\right)^{1/2} \end{equation} and we define the space $$ H^s_{\Omega,0}:=\{u:\mathbb R^n\to \mathbb R,\,u \in L^2(\Omega)\, :\, [u]_{H^s_{\Omega,0}}<+\infty \} $$ equipped with the scalar product $$ (u,v)_{H^s_{\Omega,0}}:=\int_\Omega uv dx+\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}dx\,dy, $$ and with the norm \begin{equation}\label{norm} \|u\|_{H^s_{\Omega,0}}:=\|u\|_{L^2(\Omega)}+[u]_{H^s_{\Omega,0}}, \end{equation} which is equivalent to the one induced by the scalar product. By \cite[Proposition 3.1]{DROV}, we know that $(H^s_{\Omega,0},(\cdot,\cdot)_{H^s_{\Omega,0}})$ is a Hilbert space. In this paper we will mainly work with the notion of weak solution for problem \eqref{P}; weak solutions naturally belong to the energy space $H^s_{\Omega,0}$, but at some point (more precisely, when we apply a strong maximum principle -- see Theorem \ref{MaxPrinc}) we will also need to consider classical solutions. For this reason, let us recall under which conditions the fractional Laplacian given by the expression \eqref{FL} is well defined. Let $\mathcal L_s$ denote the following set of functions: \begin{equation}\label{L_s} \mathcal L_s:=\left\{u:\mathbb{R}^n\to \mathbb{R}\,:\,\int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+2s}}\,dx <\infty\right\}. \end{equation} Let $\Omega$ be a bounded set in $\mathbb{R}^n$, $s>1/2$, and let $u\in \mathcal L_s$ be a $C^{1,2s+\varepsilon -1}$ function in $\Omega$ for some $\varepsilon>0$. Then $(-\Delta)^s u$ is continuous on $\Omega$ and its value is given by the integral in \eqref{FL} (see Proposition 2.4 in \cite{Silvestre}). In particular, the condition $u \in \mathcal L_s$ ensures integrability at infinity for the integral in \eqref{FL}. Moreover, if $u$ belongs to the energy space $H^s_{\Omega,0}$, then it automatically belongs to $\mathcal L_s$, according to the following result. \begin{lemma} Let $\Omega$ be a bounded set in $\mathbb{R}^n$. Then $$ H^s_{\Omega,0} \subset \mathcal L_s.$$ \end{lemma} \begin{proof} We prove that if $u \in H^s_{\Omega,0}$, then it satisfies the integrability condition \begin{equation}\label{L_2} \int_{\mathbb{R}^n}\frac{|u(x)|^2}{1+|x|^{n+2s}}\,dx<\infty, \end{equation} which, by the H\"older inequality and the fact that $(1+|x|^{n+2s})^{-1}\in L^1(\mathbb{R}^n)$, implies that $u \in \mathcal L_s$. Throughout this proof we denote by $C$ various positive constants whose precise value is unimportant and may change from line to line. Let $\Omega'$ be a compact set contained in $\Omega$. 
Since $u\in H^s_{\Omega,0}$, we have \begin{equation}\label{chain1} \begin{split} \infty &>\int_\Omega \int_{\mathbb{R}^n }\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy \\ & \ge\int_\Omega \int_{\Omega}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy + \int_{\Omega'} \int_{\mathbb{R}^n \setminus \Omega}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy \\ & \ge \int_\Omega \int_{\Omega}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy \\ &\hspace{2em}+ \frac{1}{2}\int_{\Omega'} \int_{\mathbb{R}^n \setminus\Omega}\frac{|u(x)|^2}{|x-y|^{n+2s}}\,dx\,dy -\int_{\Omega'} \int_{\mathbb{R}^n \setminus\Omega}\frac{|u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy, \end{split} \end{equation} where, in the last estimate, we have used that $|a-b|^2 \ge \frac{1}{2} a^2-b^2$ (with $a=u(x)$ and $b=u(y)$), a consequence of Young's inequality. Since $u\in H^s_{\Omega, 0}$, the first term on the right-hand side is clearly finite. Moreover, using that $|x-y|\ge \omega$ for some $\omega>0$ whenever $x\in \mathbb{R}^n \setminus\Omega$ and $y \in \Omega'$, together with the integrability of the kernel at infinity, we have for every $y\in\Omega'$ $$ \int_{\mathbb{R}^n \setminus\Omega}\frac{1}{|x-y|^{n+2s}}\,dx\le C\int_{\omega}^\infty \tau^{n-1-(n+2s)} d\tau=\frac{C}{\omega^{2s}}, $$ where $C$ is independent of $y\in\Omega'$. Hence, \begin{equation}\label{chain1ba} \begin{split} \int_{\Omega'} \int_{\mathbb{R}^n \setminus\Omega}\frac{|u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy&\le \int_{\Omega'} |u(y)|^2\left(\int_{\mathbb{R}^n \setminus\Omega}\frac{1}{|x-y|^{n+2s}}\,dx\right)\,dy\\ &\le \frac{C}{\omega^{2s}}\int_{\Omega'}|u(y)|^2 dy < \infty. \end{split} \end{equation} Therefore, combining \eqref{chain1} with \eqref{chain1ba}, we deduce that $$ \int_{\Omega'} \int_{\mathbb{R}^n \setminus\Omega}\frac{ |u(x)|^2}{|x-y|^{n+2s}}\,dx\,dy < \infty. $$ Finally, since $\Omega$ (and thus $\Omega'$) is bounded, there exists a number $d$ depending only on $\Omega$ such that $|x-y|\le d+ |x|$ for every $x \in \mathbb{R}^n \setminus\Omega$ and $y \in \Omega'$, which implies that $$ \int_{\Omega'} \int_{\mathbb{R}^n \setminus\Omega}\frac{ |u(x)|^2}{|x-y|^{n+2s}}\,dx\,dy\ge |\Omega'| \int_{\mathbb{R}^n \setminus\Omega}\frac{ |u(x)|^2}{(d+|x|)^{n+2s}}\,dx.$$ This last inequality, together with the fact that $$\int_{\Omega}\frac{ |u(x)|^2}{(d+|x|)^{n+2s}}\,dx <\infty$$ (since $u \in L^2(\Omega)$), concludes the proof. \end{proof} Since it will be useful later on, we also introduce some standard notation for fractional Sobolev spaces. We set \begin{equation}\label{H_s}[u]_{H^s(\Omega)}:=\left(\frac{c_{n,s}}{2}\iint_{\Omega^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy \right)^{\frac{1}{2}}\end{equation} and we denote by $H^s(\Omega)$ the space $$H^s(\Omega):=\left\{u\in L^2(\Omega)\,:\, [u]_{H^s(\Omega)} <\infty\right\},$$ equipped with the norm $$\|u\|_{H^s(\Omega)}=\|u\|_{L^2(\Omega)}+[u]_{H^s(\Omega)}.$$ Notice that in the definition of $[u]_{H^s(\Omega)}$ the double integral is taken over $\Omega\times \Omega$, which differs from the seminorm defined in \eqref{seminorm} related to the energy functional of our problem. 
Since $\Omega\times\Omega\subset\mathbb R^{2n}\setminus(\Omega^c)^2$ and the integrand is non-negative, the usual $H^s$-seminorm and the seminorm $[\,\cdot\,]_{H^s_{\Omega,0}}$ defined in \eqref{seminorm} satisfy the obvious inequality $$ [u]_{H^s_{\Omega,0}}\ge [u]_{H^{s}(\Omega)}. $$ Hence, as an easy consequence of the fractional compact embedding $H^s(\Omega)\hookrightarrow\hookrightarrow L^q(\Omega)$ (see for example Section 7 in \cite{Adams}, recalling that $H^s(\Omega)=W^{s,2}(\Omega)$), we have the following.\smallskip \begin{proposition}\label{embedding} The space $H^s_{\Omega,0}$ is compactly embedded in $L^q(\Omega)$ for every $q\in[1,2_s^*)$, where $$2_s^*:=\begin{cases}\frac{2n}{n-2s}\quad&\mbox{if }2s<n,\\ +\infty&\mbox{otherwise}\end{cases}$$ is the fractional Sobolev critical exponent. \end{proposition} Given $h\in L^2(\Omega)$, we now consider the following linear problem \begin{equation}\label{Plin} \begin{cases} (-\Delta)^s u+u=h\quad&\mbox{in }\Omega,\\ \mathcal N_s u=0&\mbox{in }\mathbb{R}^n\setminus \overline{\Omega}. \end{cases} \end{equation} \begin{definition} We say that a function $u\in H^s_{\Omega,0}$ is a weak solution of problem \eqref{Plin} if $$\frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy + \int_\Omega u\, v\,dx=\int_\Omega h\,v\,dx\quad\mbox{for every }v\in H^s_{\Omega,0}.$$ \end{definition} With this definition, one can easily see that weak solutions of problem \eqref{Plin} can be found as critical points of the following energy functional defined on the space $H^s_{\Omega,0}$, cf. \cite[Proposition 3.7]{DROV}: \begin{equation}\label{energyh} \mathcal E(u):=\frac{c_{n,s}}{4}\iint_{\mathbb{R}^{2n}\setminus (\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy+\frac{1}{2}\int_\Omega u^2\,dx -\int_\Omega hu\,dx. \end{equation} We now state a strong maximum principle for the fractional Laplacian with nonlocal Neumann conditions. \begin{theorem}\label{MaxPrinc} Let $u\in C^{1,2s+\varepsilon-1}(\Omega)\cap \mathcal L_s$ (for some $\varepsilon>0$) satisfy $$ \begin{cases} (-\Delta)^s u\ge 0 & \mbox{in } \Omega,\\ u \ge 0 & \mbox{in } \Omega, \\ \mathcal N_s u \ge 0 &\mbox{in } \mathbb{R}^n\setminus \overline\Omega. \end{cases} $$ Then, either $u>0$ or $u \equiv 0$ a.e. in $\Omega$. \end{theorem} \begin{proof} Assume that $u$ is not a.e. identically zero and let us show that $u>0$ a.e. in $\Omega$. We argue by contradiction: suppose that the set in $\Omega$ on which $u$ vanishes has positive Lebesgue measure, and call it $Z$, i.e. $$Z:=\{x\in \Omega\,|\,u(x)=0\},\quad \mbox{and}\quad |Z|>0.$$ Let now $\bar x \in Z$. Since $u$ satisfies $(-\Delta)^s u\ge 0$ in $\Omega$, using the definition of the fractional Laplacian together with $u(\bar x)=0$, we have that \[\begin{split} \int_{\mathbb{R}^n \setminus \Omega}\frac{u(\bar x)-u(y)}{|\bar x-y|^{n+2s}}\,dy&\ge -\int_{\Omega}\frac{u(\bar x)-u(y)}{|\bar x-y|^{n+2s}}\,dy\\ &=\int_{\Omega}\frac{u(y)}{|\bar x-y|^{n+2s}}\,dy >0,\end{split}\] where the last strict inequality comes from the fact that we are assuming that $u$ is strictly positive on a subset of $\Omega$ of positive Lebesgue measure (otherwise we would have $u\equiv 0$ a.e. in $\Omega$). 
Integrating the above inequality over the set $Z$ and using that $|Z|>0$, we deduce that \begin{equation}\label{PM1} \int_Z\int_{\mathbb{R}^n \setminus \Omega}\frac{u(\bar x)-u(y)}{|\bar x-y|^{n+2s}}\,dy\,d\bar x >0.\end{equation} On the other hand, using that $u\ge 0$ in $\Omega$, we have \begin{equation}\label{PM2} \begin{split} c_{n,s}\int_Z\int_{\mathbb{R}^n \setminus \Omega}\frac{u(\bar x)-u(y)}{|\bar x-y|^{n+2s}}\,dy\,d\bar x &\le c_{n,s}\int_\Omega\int_{\mathbb{R}^n \setminus \Omega}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy\,dx \\ &=-c_{n,s}\int_{\mathbb{R}^n \setminus \Omega} \int_\Omega \frac{u(y)-u(x)}{|x-y|^{n+2s}}\,dx\,dy\\ &=-\int_{\mathbb{R}^n \setminus \Omega}\mathcal N_su(y)\,dy \leq 0. \end{split} \end{equation} This contradicts \eqref{PM1} and concludes the proof. \end{proof} \begin{remark} Arguing in the same way, it is easy to see that the above strong maximum principle holds true when a zero-order term is added to the equation satisfied in $\Omega$ (that is, considering solutions of $(-\Delta)^s u(x) + c(x) u(x) \ge 0$ in $\Omega$). \end{remark} We conclude this section with two results from \cite{DROV}. The first one gives a further justification for calling $\mathcal N_s$ a ``nonlocal normal derivative''. \begin{proposition}[Proposition 5.1 of \cite{DROV}] Let $\Omega$ be any bounded Lipschitz domain of $\mathbb{R}^n$ and let $u$ and $v$ be $C^2$ functions with compact support in $\mathbb{R}^n$. Then, $$\lim_{s\rightarrow 1} \int_{\mathbb{R}^n\setminus \Omega} \mathcal N_su\, v\,dx=\int_{\partial \Omega}\partial_\nu u \, v\,d\mathcal H^{n-1},$$ where $\partial_\nu$ denotes the external normal derivative to $\partial \Omega$. \end{proposition} The last result that we recall from \cite{DROV} describes the spectrum of the fractional Laplacian with zero Neumann boundary conditions. \begin{theorem}[Theorem 3.11 in \cite{DROV}] There exists a diverging sequence of non-negative values $$0=\lambda_1<\lambda_2\le \lambda_3\le \dots,$$ and a sequence of functions $u_i:\mathbb{R}^n\rightarrow \mathbb{R}$ such that \[\begin{cases} (-\Delta)^su_i(x)=\lambda_i u_i(x) & \mbox{for any } x\in \Omega,\\ \mathcal N_s u_i(x)=0 & \mbox{for any } x\in \mathbb{R}^n\setminus \overline \Omega. \end{cases}\] Moreover, the functions $u_i$ (restricted to $\Omega$) provide a complete orthogonal system in $L^2(\Omega)$. \end{theorem} \section{A priori bounds for monotone radial solutions}\label{sec2} Without loss of generality, from now on we suppose that $f$ satisfies the further assumption \begin{itemize} \item[$(f_0)$] $f(t)\ge 0$ and $f'(t)\ge 0$ for every $t\in[0,\infty)$. \end{itemize} If this is not the case, it is always possible to reduce problem \eqref{P} to an equivalent one having a non-negative and non-decreasing nonlinearity, cf. \cite[Lemma~2.1]{CN}. \smallskip We look for solutions to \eqref{P} in the cones $\mathcal C$ defined in \eqref{cone} and \eqref{cone-}. It is easy to prove that $\mathcal C$ is a closed convex cone in $H^s_{\Omega,0}$, i.e., the following properties hold for all $u,\,v\in\mathcal C$ and $\lambda\ge0$: \begin{itemize} \item[(i)] $\lambda u\in \mathcal C$; \item[(ii)] $u+v\in \mathcal C$; \item[(iii)] if also $-u\in\mathcal C$, then $u\equiv0$; \item[(iv)] $\mathcal C$ is closed in the $H^s$-topology. \end{itemize} We will use the above properties of $\mathcal C$ in Lemma \ref{deformation}. \medskip We now state an embedding result for radial functions belonging to fractional Sobolev spaces, which can be found in \cite{SSV} (see also \cite{BSY}). 
\begin{lemma} If $s>1/2$ and $0<\bar R<R$, there exists a positive constant $C_{\bar R}=C_{\bar R}(\bar R,n,s)$ such that \begin{equation}\label{embLinf} \left\|u\right\|_{L^\infty(B_R\setminus B_{\bar R})}\le C_{\bar R}\|u\|_{H^s_{B_R\setminus B_{\bar R},0}} \end{equation} for all radial $u\in H^s_{B_R\setminus B_{\bar R},0}$. \end{lemma} \begin{proof} The proof is the same as in \cite[Lemma 4.3]{BSY}; we report it here for the sake of completeness. Let $\bar R<\rho< R$. Using that $u$ is radial, that $s>1/2$, and the trace inequality for $H^s(B_\rho\setminus B_{\bar R})$ (see e.g. \cite[Section 3.3.3]{Triebel}), we have for every $x\in \partial B_\rho$ $$ \begin{aligned} |u(x)|^2&=\frac{\rho^{1-n}}{n\omega_n}\int_{\partial B_\rho}u^2 d\mathcal H^{n-1}\\ &\le C\frac{\rho^{1-n}}{n\omega_n} \rho^{2s-1}\left\{[u]_{H^s(B_\rho\setminus B_{\bar R})}^2+\frac{1}{\rho^{2s}}\|u\|_{L^2(B_\rho\setminus B_{\bar R})}^2\right\}, \end{aligned} $$ where $\omega_n$ is the volume of the unit ball in $\mathbb R^n$ (so that $|\partial B_\rho|=n\omega_n\rho^{n-1}$) and $d\mathcal H^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. We immediately deduce that for every $x\in \partial B_\rho$ \begin{equation* \begin{aligned} |u(x)| &\le \begin{cases} C|x|^{-\frac{n-2s}{2}}\|u\|_{H^s(B_\rho\setminus B_{\bar R})}\;&\mbox{ if } \rho=|x| \ge 1,\medskip \\ C\frac{|x|^{-\frac{n-2s}{2}}}{\rho^s}\|u\|_{H^s(B_\rho\setminus B_{\bar R})}&\mbox{ if } \rho=|x| < 1 \end{cases}\\ &\le C|x|^{-\frac{n-2s}{2}}\left(1+\frac{1}{\rho^s}\right)\|u\|_{H^s(B_\rho\setminus B_{\bar R})}\\ &\le C\bar R^{-\frac{n-2s}{2}}(1+\bar R^{-s})\|u\|_{H^s(B_R\setminus B_{\bar R})}. \end{aligned} \end{equation*} Hence, we conclude that $$ \begin{aligned} \left\|u\right\|_{L^\infty(B_R\setminus B_{\bar R})}&\le C\bar R^{-\frac{n-2s}{2}}(1+\bar R^{-s})\|u\|_{H^s(B_R\setminus B_{\bar R})}\\ &\le C\bar R^{-\frac{n-2s}{2}}(1+\bar R^{-s})\|u\|_{H^s_{B_R\setminus B_{\bar R},0}}, \end{aligned} $$ which proves the statement, with $C_{\bar R}:=C\bar R^{-\frac{n-2s}{2}}(1+\bar R^{-s})$. \end{proof} As mentioned above, working in the cones $\mathcal C$ of non-negative, radial, monotone functions has the advantage of providing an a priori $L^\infty$ bound, according to the following lemma. In particular, the role of the {\it non-decreasing} monotonicity in the case of the ball will be clear from the proof. \begin{lemma}\label{L-infty} Let $s>1/2$ and let $\Omega$ be the ball $B_R$ or the annulus $A_{R_0,R}$ as in \eqref{palla}, \eqref{anello}. There exists a constant $C=C(R,R_0,n,s)>0$ such that \[ \|u\|_{L^{\infty}(\Omega)}\le C\|u\|_{H^s_{\Omega,0}} \quad \mbox{for all } u\in \mathcal C.\] \end{lemma} \begin{proof} {\it Case $\Omega=B_R$.} In this case, $\mathcal C=\mathcal C_+(B_R)$. Since $u$ is radial and non-decreasing, we have that $\|u\|_{L^\infty(\Omega)}=\|u\|_{L^\infty(B_R\setminus B_{R/2})}$. Hence, the conclusion follows from \eqref{embLinf}, observing that here $\bar R=R/2>0$. {\it Case $\Omega=A_{R_0,R}$.} In the annulus, the same proof as before works both for $u\in\mathcal C_+$ and for $u\in\mathcal C_-$. We observe that in this case the constant $C$ depends on $R_0$ (and not on $R$). \end{proof} Thanks to the previous lemma, it would be enough to restrict the energy functional to $\mathcal C$ to obtain $\mathcal C$-constrained critical points; this is the approach in \cite{SerraTilli2011}. 
Nonetheless, as well explained in \cite{SerraTilli2011}, the cone $\mathcal C$ has empty interior in the $H^s$-topology; as a consequence, it does not contain enough test functions to guarantee that constrained critical points are indeed free critical points. In \cite{SerraTilli2011}, the authors prove {\it a posteriori} that the constrained critical point that they find is a weak solution of the problem. In the present paper, we follow a different strategy, proposed in \cite{BNW}, which moreover allows us to cover a wider class of nonlinearities. The technique relies on the truncation method and, to apply it, we need to prove a priori estimates for the solutions of \eqref{P} belonging to $\mathcal C$. We start by introducing some more notation. \smallskip Fix $\delta,\, M>0$ such that \begin{equation}\label{Mdelta} f(t)\ge(1+\delta)t\quad\mbox{for all }t\ge M. \end{equation} The existence of such $\delta,\, M>0$ follows from $(f_2)$. We introduce the following set of functions \begin{equation}\label{FmdeltaM} \mathfrak{F}_{M,\delta}:=\left\{g\in C([0,\infty))\,:\,g\ge 0,\quad g(t)\ge(1+\delta)t\mbox{ for all }t\ge M\right\}. \end{equation} We remark that $\mathfrak F_{M,\delta}$ depends on $f$ only through $\delta$ and $M$. In the remaining part of this section, we shall derive some a priori estimates which are uniform in $\mathfrak F_{M,\delta}$ and hence depend only on $M$ and $\delta$, and not on the specific function $g$ belonging to $\mathfrak F_{M,\delta}$. This will be useful in the rest of the paper, since we will deal with a truncated nonlinearity. We now give the definition of weak solution for a general nonlinear Neumann problem of the form \begin{equation}\label{eq}\begin{cases} (-\Delta)^su +u=g(u) & \mbox{in } \Omega,\\ u \ge 0 & \mbox{in } \Omega,\\ \mathcal N_s u=0 & \mbox{in } \mathbb{R}^n\setminus \overline \Omega. \end{cases} \end{equation} \begin{definition}\label{def-weak} We say that a non-negative function $u\in H^s_{\Omega,0}$ is a weak solution of problem \eqref{eq} if for every $v\in H^s_{\Omega,0}$ $$\frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy + \int_\Omega u\, v\,dx=\int_\Omega g(u)\,v\,dx.$$ \end{definition} The following lemma gives an $L^1$ bound for solutions to \eqref{eq} with $g$ belonging to the class $\mathfrak F_{M,\delta}$. \begin{lemma}\label{L-1} Let $g$ be any function in $\mathfrak F_{M,\delta}$. Then, there exists a constant $K_1=K_1(R,n,M,\delta)>0$ such that any weak solution $u\in \mathcal C$ of \eqref{eq} satisfies $$\|u\|_{L^1(\Omega)}\le K_1.$$ \end{lemma} \begin{proof} Testing the weak formulation with $v\equiv 1$ and using that $g\in\mathfrak F_{M,\delta}$ (in particular, $g\ge0$ and $g(t)\ge(1+\delta)t$ for $t\ge M$), we get $$\int_\Omega u \, dx=\int_{\{u< M\}}g(u)\,dx + \int_{\{u\ge M\}}g(u)\,dx \ge (1+\delta) \int_{\{u\ge M\}}u\, dx.$$ Hence, $$M|\Omega| \ge \int_{\{u< M\}}u\, dx \ge \delta \int_{\{u\ge M\}}u\, dx$$ and so $$\int_\Omega u\,dx=\int_{\{u< M\}}u\,dx+\int_{\{u\ge M\}}u\,dx\le M|\Omega|\left(1+\frac1\delta\right)=:K_1.$$ \end{proof} The following lemma gives a uniform a priori bound in $L^\infty$ for solutions belonging to the cone $\mathcal C$ of problems \eqref{eq}, with $g\in\mathfrak F_{M,\delta}$. 
\begin{lemma}\label{L-infty-unif} There exist two positive constants $K_\infty=K_\infty(R_0,R,n,s,M,\delta)$ and $K_2=K_2(R_0,R,n,s,M,\delta)$, such that for any weak solution $u\in \mathcal C$ of problem \eqref{eq}, the following estimates hold: \[\|u\|_{L^\infty(\Omega)}\leq K_\infty\quad \mbox{and}\quad \|u\|_{H^s_{\Omega,0}}\leq K_2.\] \end{lemma} \begin{proof} Choosing again $v \equiv 1$ in the definition of weak solution, we have \begin{equation}\label{identity} \int_\Omega u\,dx=\int_\Omega g(u)\,dx. \end{equation} On the other hand, testing the equation with $u$ itself and using Lemma \ref{L-infty} together with the elementary inequality $(a+b)^2\le2(a^2+b^2)$, we deduce \begin{equation}\label{chain} \begin{split} \|u\|^2_{L^\infty(\Omega)}&\le 2C^2 \left(\frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy +\int_\Omega u^2\,dx\right) \\ &=2C^2\int_\Omega g(u)\,u\,dx\leq 2C^2\|u\|_{L^\infty(\Omega)}\int_\Omega g(u)\,dx. \end{split}\end{equation} Combining \eqref{identity} with the previous estimate, we conclude that $$\|u\|_{L^\infty(\Omega)}\leq 2C^2 \|u\|_{L^1(\Omega)}\leq 2C^2K_1=:K_\infty,$$ where the last estimate comes from Lemma \ref{L-1}. Finally, this bound on $\|u\|_{L^\infty(\Omega)}$, combined with inequality \eqref{chain} above, gives the following uniform bound on $\|u\|_{H^s_{\Omega,0}}$: $$\|u\|_{H^s_{\Omega,0}}^2\le 2\|u\|_{L^\infty(\Omega)}\int_\Omega g(u)\,dx=2\|u\|_{L^\infty(\Omega)}\|u\|_{L^1(\Omega)}\le 2K_\infty K_1=:K_2^2.$$ \end{proof} We now prove a regularity result for weak solutions of \eqref{P} belonging to the cone $\mathcal C$. \begin{lemma}\label{regularity} Let $u\in \mathcal C$ be a weak solution of \eqref{P}. Then $u\in C^{2}(\mathbb R^n)$. \end{lemma} \begin{proof} By Lemma \ref{L-infty-unif} we know that $u\in L^\infty(\Omega)$. Furthermore, by the nonlocal Neumann boundary conditions, we have that $$ u(x)=\frac{\displaystyle{\int_\Omega}\frac{u(y)}{|x-y|^{n+2s}}dy}{\displaystyle{\int_\Omega}\frac{1}{|x-y|^{n+2s}}dy}\quad\mbox{for every }x\in \mathbb R^n\setminus\overline\Omega. $$ Thus, for every $\varepsilon>0$, setting $\Omega_\varepsilon:=\{x\in\mathbb R^n\,:\, \mathrm{dist}(x,\Omega)<\varepsilon\}$, we get for every $x\in \mathbb R^n\setminus\Omega_\varepsilon$ $$ u(x)=\frac{\displaystyle{\int_\Omega}\frac{u(y)}{|x-y|^{n+2s}}dy}{\displaystyle{\int_\Omega}\frac{1}{|x-y|^{n+2s}}dy}\le \|u\|_{L^\infty(\Omega)}\frac{\displaystyle{\int_\Omega}\frac{1}{|x-y|^{n+2s}}dy}{\displaystyle{\int_\Omega}\frac{1}{|x-y|^{n+2s}}dy}=\|u\|_{L^\infty(\Omega)}. $$ Since this estimate is uniform in $\varepsilon$, we get $|u(x)|\le \|u\|_{L^\infty(\Omega)}$ for every $x\in \mathbb R^n\setminus\overline\Omega$. Hence, $u\in L^\infty(\mathbb R^n)$ and so, using \cite[Proposition 2.9]{Silvestre} with $w=f(u)-u\in L^\infty(\mathbb R^n)$, we obtain $u\in C^{1,\alpha}(\mathbb R^n)$ for every $\alpha\in (0,2s-1)$. Then, recalling that $f\in C^{1,\gamma}$, we can use a bootstrap argument and apply \cite[Proposition 2.8]{Silvestre} to conclude the proof. \end{proof} \section{Existence of a mountain pass radial solution}\label{sec:mountain_pass_existece} In this section we prove the existence of a radial solution of \eqref{P} via a Mountain Pass-type Theorem. 
We are now ready to start the truncation method described in the Introduction: we will modify $f$ in $(K_\infty,+\infty)$, where $K_\infty$ is the $L^\infty$ bound given in Lemma~\ref{L-infty-unif}, in such a way as to obtain a subcritical nonlinearity $\tilde f$.\smallskip \begin{lemma}\label{truncated} For every $\ell\in(2,2_s^*)$, there exists $\tilde{f}\in \mathfrak F_{M,\delta}\cap C^1([0,\infty))$, satisfying $(f_0)$--$(f_3)$, \begin{equation}\label{subcritical} \lim_{t\to\infty}\frac{\tilde{f}(t)}{t^{\ell-1}}=1, \end{equation} and with the property that if $u\in\mathcal C$ solves \begin{equation}\label{tildeP}\begin{cases}(-\Delta)^s u+u=\tilde{f}(u)\quad&\mbox{in }\Omega,\\ u>0&\mbox{in }\Omega,\\ \mathcal N_s u=0&\mbox{in }\mathbb{R}^n\setminus \overline \Omega, \end{cases} \end{equation} then $u$ solves \eqref{P}. \end{lemma} For the proof of the above lemma, we refer the reader to \cite[Lemma 4.3]{BNW}. As a consequence of the previous lemma, of condition $(f_1)$, and of the regularity of $f$, there exists $C>0$ for which \begin{equation}\label{crescitasottocritica} \tilde f(t)\le C (1+t^{\ell-1})\quad\mbox{for all }t\ge0, \end{equation} where $\ell\in(2,2_s^*)$. From now on, we consider the trivial extension of $\tilde f$, still denoted by the same symbol, $$\tilde f= \begin{cases} \tilde f\quad&\mbox{in }[0,+\infty),\\ 0&\mbox{in }(-\infty, 0). \end{cases}$$ Recalling Definition \ref{def-weak} of weak solution (applied here with $g=\tilde f$), one can easily see that weak solutions of problem \eqref{tildeP} can be found as critical points of the following energy functional defined on the space $H^s_{\Omega,0}$: \begin{equation}\label{energy} \mathcal E(u):=\frac{c_{n,s}}{4}\iint_{\mathbb{R}^{2n}\setminus (\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy +\frac{1}{2}\int_\Omega u^2\,dx -\int_\Omega \tilde F(u)\,dx, \end{equation} where $\tilde F(t):=\int_0^t \tilde f(\tau)d\tau$. The proof of this fact follows from the argument in the proof of Proposition 3.7 in \cite{DROV}, with the obvious modifications due to the presence of the nonlinearity $\tilde f$. Because of \eqref{subcritical} and the Sobolev embedding, and since $s>1/2$, the functional $\mathcal E$ is well defined and of class $C^2$. \begin{lemma}[\textbf{Palais-Smale condition}]\label{PalaisSmale} The functional $\mathcal E$ satisfies the Palais-Smale condition, i.e. every {\sl (PS)}-sequence $(u_k)\subset H^s_{\Omega,0}$, namely a sequence satisfying $$(\mathcal E(u_k)) \mbox{ is bounded}\quad\mbox{ and }\quad \mathcal E'(u_k)\to 0\mbox{ in }(H^s_{\Omega,0})^*,$$ admits a convergent subsequence. \end{lemma} \begin{proof} Reasoning as in \cite[Lemma 3.3]{CN}, as a consequence of \eqref{subcritical}, there exist $\mu\in(2,\ell]$ and $T_0>0$ such that \begin{equation}\label{AmbRa} \tilde{f}(t)t\ge\mu\tilde{F}(t)\quad\mbox{ for all }t\ge T_0. \end{equation} Now, let $(u_k)\subset H^s_{\Omega,0}$ be a (PS)-sequence for $\mathcal E$ as in the statement. Using \eqref{AmbRa} on the set $\{u_k>T_0\}$ and the elementary inequality $a^2+b^2\ge\frac12(a+b)^2$, we estimate $$ \begin{aligned} \mathcal E(u_k)-\frac1\mu \mathcal E'(u_k)[u_k]\ge &\frac{1}{2} \left(\frac12-\frac1\mu\right)\|u_k\|_{H^s_{\Omega,0}}^2\\ &+\int_{\{u_k\le T_0\}}\left(\frac1\mu\tilde{f}(u_k)u_k-\tilde F(u_k)\right)dx \end{aligned} $$ and, being $(u_k)$ a (PS)-sequence, $$ \mathcal E(u_k)-\frac1\mu \mathcal E'(u_k)[u_k]\le |\mathcal E(u_k)|+\frac1\mu \|\mathcal E'(u_k)\|_*\|u_k\|_{H^s_{\Omega,0}}\le C(1+\|u_k\|_{H^s_{\Omega,0}}) $$ for some $C>0$, where we have denoted by $\|\cdot\|_*$ the norm of the dual space of $H^s_{\Omega,0}$. 
Since $\int_{\{u_k\le T_0\}}\left(\frac1\mu\tilde{f}(u_k)u_k-\tilde F(u_k)\right)dx$ is uniformly bounded in $k$ (because $\tilde f$ and $\tilde F$ are bounded on $[0,T_0]$ and $|\Omega|<\infty$), we get $$\frac{1}{2}\left(\frac12-\frac1\mu\right)\|u_k\|_{H^s_{\Omega,0}}^2\le C(1+\|u_k\|_{H^s_{\Omega,0}}).$$ Therefore, $(u_k)$ is bounded in $H^s_{\Omega,0}$ and so there exists $u\in H^s_{\Omega,0}$ such that $u_k\rightharpoonup u$ in $H^s_{\Omega,0}$, up to a subsequence. By compact embedding (Proposition~\ref{embedding}), $u_k\to u$ in $L^\ell(\Omega)$ and, up to a subsequence, $u_k\to u$ a.e. in $\Omega$. Again, since $(u_k)$ is a (PS)-sequence, \begin{equation}\label{E'to0} |\mathcal E'(u_k)[u_k-u]|\le \|\mathcal E'(u_k)\|_{*}\|u_k-u\|_{H^s_{\Omega,0}}\to 0\quad \mbox{as }k\to\infty. \end{equation} On the other hand, by H\"older's inequality and \eqref{crescitasottocritica}, \begin{equation}\label{int-ftilde} \begin{aligned} \int_\Omega \tilde f(u_k)(u_k-u)\,dx &\le C\int_\Omega(1+u_k^{\ell-1})(u_k-u)dx\\ &\le C\|1+u_k\|^{\ell-1}_{L^\ell(\Omega)}\|u_k-u\|_{L^\ell(\Omega)}\to 0\quad\mbox{as }k\to\infty \end{aligned} \end{equation} and \begin{equation}\label{int-uk-u} \int_\Omega u_k(u_k-u)\,dx=\int_\Omega (u_k-u)^2\,dx+\int_\Omega u(u_k-u)\,dx\to0 \quad\mbox{as }k\to\infty. \end{equation} Recalling that $$ \begin{aligned} \mathcal E'(u_k)&[u_k-u]\\ &=\frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u_k(x)-u_k(y))[(u_k-u)(x)-(u_k-u)(y)]}{|x-y|^{n+2s}}\,dx\,dy\\ &\phantom{=}+\int_\Omega u_k(u_k-u)\,dx -\int_\Omega \tilde f(u_k)(u_k-u)\,dx, \end{aligned} $$ by \eqref{E'to0}, we have in view of \eqref{int-ftilde} and \eqref{int-uk-u} \begin{equation}\label{convergence0} \lim_{k\to\infty}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u_k(x)-u_k(y))[(u_k-u)(x)-(u_k-u)(y)]}{|x-y|^{n+2s}}\,dx\,dy=0. \end{equation} We claim that \eqref{convergence0} implies the following: \begin{equation}\label{conv-norms} \lim_{k\to\infty}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,dx\,dy= \iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy. \end{equation} Indeed, by weak lower semicontinuity, \begin{equation}\label{wlsc} \iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\le \liminf_{k\to\infty}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,dx\,dy. \end{equation} Moreover, setting $$a:=u(x)-u(y)\quad\mbox{and}\quad b:= u_k(x)-u_k(y),$$ and using the easy inequality $a^2+2b(b-a)\ge b^2$ (equivalent to $(a-b)^2\ge0$), we deduce $$ \begin{aligned} &\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\\ &\hspace{1cm}+2\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{(u_k(x)-u_k(y))(u_k(x)-u_k(y)-u(x)+u(y))}{|x-y|^{n+2s}}\,dx\,dy\\ &\hspace{1cm}\ge \iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,dx\,dy. \end{aligned} $$ Thus, by \eqref{convergence0}, we obtain $$ \iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\ge \limsup_{k\to\infty}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,dx\,dy, $$ which, together with \eqref{wlsc}, proves the claim. Combining \eqref{conv-norms} with the convergence of the $L^2$ norms, $\|u_k\|_{L^2(\Omega)}^2\to\|u\|_{L^2(\Omega)}^2$, we get $$ \|u_k\|_{H^s_{\Omega,0}}\to\|u\|_{H^s_{\Omega,0}}. $$ Finally, since we also have the weak convergence $u_k\rightharpoonup u$ in $H^s_{\Omega,0}$, we conclude that $u_k\to u$ in $H^s_{\Omega,0}$. 
\end{proof} \begin{remark} We observe that, as already noticed in \cite[Remark 4.13]{BNW}, the truncation method (cf. Lemma \ref{truncated}) and the preliminary a priori estimates (cf. Lemma \ref{L-infty-unif}) are needed to get the subcritical growth \eqref{crescitasottocritica} of the nonlinearity and the Ambrosetti-Rabinowitz condition \eqref{AmbRa}. If the original nonlinearity $f$ of problem \eqref{P} already satisfies these further assumptions, it is possible to skip the first part concerning a priori estimates and truncation, and to prove directly the existence of both a non-decreasing and a non-increasing solution (the latter also in the case of the ball), starting directly from Lemma \ref{PalaisSmale} with $\tilde f=f$. \end{remark} We define \begin{equation}\label{u-u+} \begin{aligned} u_-&:= \sup \{t \in [0,u_0)\,:\, \tilde f(t)=t\},\\ u_+&:= \inf \{t \in (u_0,+\infty) \,:\, \tilde f(t)=t\}. \end{aligned} \end{equation} Since $\tilde f$ is a truncation of $f$, using Lemma \ref{truncated} and the properties satisfied by $f$, we have that $\tilde f(u_0)=u_0$ and $\tilde f'(u_0)>1$, so that $u_0$ is an isolated zero of the function $\tilde f(t)-t$. Hence, \begin{equation}\label{u-+} u_-\neq u_0\quad\mbox{ and }\quad u_+\neq u_0. \end{equation} We point out that the case $u_+= +\infty$ is possible. Next, in order to localize the solutions, as already explained in the Introduction, we define the restricted cones \begin{equation*} \begin{aligned} \mathcal C_{+,*}&:= \{u \in \mathcal C_+\::\: \text{$u_- \le u \le u_+$ in $\Omega$}\},\\ \mathcal C_{-,*}&:= \{u \in \mathcal C_-\::\: \text{$u_- \le u \le u_+$ in $A_{R_0,R}$}\}. \end{aligned} \end{equation*} As for $\mathcal C$, when it is not relevant to distinguish between the two cones $\mathcal C_{+,*}$ and $\mathcal C_{-,*}$, we will simply denote either of them by $\mathcal C_*$: \begin{equation}\label{Cstar} \mathcal C_*:= \{u \in \mathcal C\::\: \text{$u_- \le u \le u_+$ in $\Omega$}\}. \end{equation} Clearly, $\mathcal C_*$ is closed and convex. \begin{corollary}\label{conseqPS} Let $c\in\mathbb R$ be such that $\mathcal E'(u)\neq 0$ for all $u\in \mathcal C_*$ with $\mathcal E(u)=c$. Then, there exist two positive constants $\bar\varepsilon$ and $\bar\delta$ such that the following inequality holds: $$ \|\mathcal E'(u)\|_*\ge\bar\delta\quad \mbox{for all }u\in \mathcal C_*\mbox{ with }|\mathcal E(u)-c|\le 2\bar\varepsilon. $$ \end{corollary} \begin{proof} The proof follows from Lemma \ref{PalaisSmale}. Indeed, suppose by contradiction that the thesis does not hold. Then we can find a sequence $(u_k)\subset \mathcal C_*$ such that $\|\mathcal E'(u_k)\|_*<\frac1k$ and $c-\frac1k \le \mathcal E(u_k)\le c+\frac1k$ for all $k$. Hence, $(u_k)$ is a Palais-Smale sequence and, since $\mathcal E$ satisfies the Palais-Smale condition, up to a subsequence $u_k\to u$ in $H^s_{\Omega,0}$. Since $(u_k)\subset \mathcal C_*$ and $\mathcal C_*$ is closed, $u\in \mathcal C_*$. The fact that $\mathcal E$ is of class $C^1$ then gives $\mathcal E(u_k)\to c=\mathcal E(u)$ and $\mathcal E'(u_k)\to 0=\mathcal E'(u)$, which contradicts the hypothesis. \end{proof} We define the operator $T:(H^s_{\Omega,0})^*\to H^s_{\Omega,0}$ as \begin{equation}\label{T}T(h)=v, \quad\mbox{where $v$ solves } \quad(P_h)\;\begin{cases}(-\Delta)^s v+v=h\quad&\mbox{in }\Omega,\\ \mathcal N_s v=0&\mbox{in }\mathbb{R}^n\setminus\overline \Omega. 
\end{cases} \end{equation} The energy associated to $(P_h)$, given by \eqref{energyh}, is strictly convex, coercive, and weakly lower semicontinuous; hence problem $(P_h)$ admits a unique weak solution $v\in H^s_{\Omega,0}$, which is the minimizer of the energy. Hence, the definition of $T$ is well posed and \begin{equation}\label{eq:T_continuous} T\in C((H^s_{\Omega,0})^*;H^s_{\Omega,0}) \end{equation} (see for instance the proof of Theorem 3.9 in \cite{DROV}). We also introduce the operator \begin{equation}\label{eq:tildeT_def} \tilde T:H^s_{\Omega,0}\to H^s_{\Omega,0} \quad\textrm{defined by}\quad \tilde T(u)=T(\tilde f(u)), \end{equation} with $T$ given in \eqref{T}. Since $\ell<2_s^*$, $u\in H^s_{\Omega,0}$ implies $u\in L^\ell(\Omega)$. Hence, by \eqref{crescitasottocritica}, $\tilde f(u)\in L^{\ell'}(\Omega)\subset (H^s_{\Omega,0})^*$ and $\tilde T$ is well defined. \begin{proposition}\label{Ttildecompact} The operator $\tilde T$ is compact, i.e. it maps bounded subsets of $H^s_{\Omega,0}$ into precompact subsets of $H^s_{\Omega,0}$. \end{proposition} \noindent The proof of the previous proposition is the same as that of \cite[Proposition 3.2]{CN}, with the obvious changes due to the different space we are working in, so we omit it. \smallskip In the following lemma we prove that the operator $\tilde T$ preserves the cone $\mathcal C_*$; this will be used in Lemma \ref{deformation} to build a deformation that preserves the cone. As mentioned in the Introduction, this is crucial to guarantee the existence of a minimax solution in $\mathcal C_*$. \begin{lemma}\label{cononelcono} The operator $\tilde T$ defined in \eqref{eq:tildeT_def} satisfies $\tilde T(\mathcal C_*)\subseteq \mathcal C_*$. \end{lemma} \begin{proof} We first note that $u\in\mathcal C_*$ implies $\tilde f(u)\in\mathcal C$, by the properties of $\tilde f$ (non-negativity and monotonicity). Now, let $u\in\mathcal C_*$ and $v:=\tilde T(u)$. We first see that $v\ge0$ in $\Omega$. Indeed, denoting by $v^+$ the positive part of $v$, since $|v^+(x)-v^+(y)|\le |v(x)-v(y)|$ and $\tilde f(u)\ge0$, the energy \eqref{energyh} (with $h=\tilde f(u)$) satisfies $\mathcal E(v^+)\le \mathcal E(v)$; hence, by uniqueness of the minimizer, $v=v^+\ge0$. Furthermore, again by uniqueness, $v$ is radial. For the monotonicity, we distinguish the two cases. {\it Case $u\in \mathcal C_{+,*}$.} In this case, we have to prove that $v$ is non-decreasing. It is enough to show that for every $r\in(R_0,R)$ one of the following occurs: \begin{itemize} \item[$(a)$] $v(t)\le v(r)$ for all $t\in(R_0,r)$, \item[$(b)$] $v(t)\ge v(r)$ for all $t\in(r,R)$. \end{itemize} Indeed, if $v(\bar t)>v(r)$ for some $R_0<\bar t<r$, then by the continuity of $v$ there exists $t\in(\bar t,r)$ for which $v(\bar t)>v(t)> v(r)$, and the radius $t$ violates both $(a)$ and $(b)$. Now, we fix $r\in(R_0,R)$. If $\tilde f(u(r))\le v(r)$, we consider the test function $$ \varphi_+(x):=\begin{cases}(v(|x|)-v(r))^+\quad&\mbox{if }R_0<|x|\le r,\\ 0&\mbox{otherwise}. \end{cases} $$ We have \begin{equation}\label{4.16} \begin{split} &\iint_{\mathbb{R}^{2n}\setminus \left((B_r\setminus B_{R_0})^c\right)^2} \frac{(v(x)-v(y))(\varphi_+(x)-\varphi_+(y))}{|x-y|^{n+2s}}dx\,dy \\ &\hspace{5.6cm} +\int_{B_r\setminus B_{R_0}}v(x)\varphi_+(x)dx \\ &\hspace{1em} =\int_{B_r\setminus B_{R_0}}\tilde f(u(x))\varphi_+(x)dx \le \tilde f(u(r))\int_{B_r\setminus B_{R_0}}\varphi_+(x)dx\\ &\hspace{1em} \le v(r)\int_{B_r\setminus B_{R_0}}\varphi_+(x)dx. 
\end{split} \end{equation} Using again the definition of $\varphi_+$, we obtain \begin{equation}\label{4.17} \begin{aligned} \iint_{\mathbb{R}^{2n}\setminus \left((B_r\setminus B_{R_0})^c\right)^2} &\frac{(v(x)-v(y))(\varphi_+(x)-\varphi_+(y))}{|x-y|^{n+2s}}dx\,dy \\ &\ge \iint_{\mathbb{R}^{2n}\setminus \left((B_r\setminus B_{R_0})^c\right)^2} \frac{|\varphi_+(x)-\varphi_+(y)|^2}{|x-y|^{n+2s}}dx\,dy. \end{aligned} \end{equation} Hence, by \eqref{4.16} and \eqref{4.17}, \begin{equation*} \begin{split} 0&\ge\\ &\iint_{\mathbb{R}^{2n}\setminus \left((B_r\setminus B_{R_0})^c\right)^2} \frac{|\varphi_+(x)-\varphi_+(y)|^2}{|x-y|^{n+2s}}dx\,dy + \int_{B_r\setminus B_{R_0}}(v(x)-v(r))\varphi_+(x)dx\\ &=\iint_{\mathbb{R}^{2n}\setminus \left((B_r\setminus B_{R_0})^c\right)^2} \frac{|\varphi_+(x)-\varphi_+(y)|^2}{|x-y|^{n+2s}}dx\,dy + \int_{B_r\setminus B_{R_0}}|\varphi_+(x)|^2dx, \end{split} \end{equation*} which gives $\varphi_+ \equiv 0$, i.e. $(a)$ holds. Analogously, if $\tilde f(u(r)) > v(r)$, we consider the test function $$ \varphi_-(x):=\begin{cases}0\quad&\mbox{if }R_0<|x|\le r,\\ (v(|x|)-v(r))^-&\mbox{otherwise} \end{cases}$$ and we prove that $(b)$ holds. Therefore, we have proved that $v$ is non-decreasing. {\it Case $u\in \mathcal C_{-,*}$.} In this case, we have to prove that $v$ is non-increasing; the proof is the same as in the previous case, interchanging the roles of $\varphi_+$ and $\varphi_-$. \smallskip It remains to show that $u_-\le v\le u_+$. Since $\tilde{f}(u_-)= u_-$ and $\tilde f$ is non-decreasing, we get $$(-\Delta)^s (v-u_-)+(v-u_-)=\tilde f(u)-\tilde f(u_-)\ge0.$$ Multiplying the above equation by $(v-u_-)^-$, integrating over $\Omega$, and using that for any function $g$ one has $-|g^-(x)-g^-(y)|^2\ge (g(x)-g(y))(g^-(x)-g^-(y))$, we get $$ \iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|(v-u_-)^-(x)-(v-u_-)^-(y)|^2}{|x-y|^{n+2s}}dx\,dy + \int_\Omega (v-u_-)(v-u_-)^-dx \le 0,$$ that is, $(v-u_-)^-\equiv 0$ in $\Omega$, i.e. $v\ge u_-$. In a similar way one proves that $v\le u_+$ in $\Omega$ (when $u_+<+\infty$). \end{proof} \begin{remark} In what follows, we will use interchangeably the quantities $\mathcal E'(u)$, $\nabla \mathcal E(u)$ and $u-\tilde{T}(u)$. Below, we write explicitly the relations among these three objects. Given $\mathcal E': H^s_{\Omega,0}\to (H^s_{\Omega,0})^*$, the differential of $\mathcal E$, for every $u\in H^s_{\Omega,0}$ we denote by $\nabla \mathcal E(u)$ the unique element of $H^s_{\Omega,0}$ (whose existence is guaranteed by the Riesz Representation Theorem) such that $$ (\nabla \mathcal E(u),v)_{H^s_{\Omega,0}} = \mathcal E'(u)[v]\quad\mbox{for all }v\in H^s_{\Omega,0}, $$ where $(\cdot,\cdot)_{H^s_{\Omega,0}}$ is the scalar product defined in Section \ref{sec2.0}. In particular, $\|\nabla \mathcal E(u)\|_{H^s_{\Omega,0}}=\|\mathcal E'(u)\|_{*}$, $\|\cdot\|_*$ being the norm in the dual space $(H^s_{\Omega,0})^*$. Now, by the definition \eqref{eq:tildeT_def} of the operator $\tilde T$, we know that, for every $u\in H^s_{\Omega,0}$, $\tilde T(u)=v$, where $v\in H^s_{\Omega,0}$ is the unique solution of $(-\Delta)^sv+v=\tilde f(u)$ in $\Omega$, under nonlocal Neumann boundary conditions. 
Therefore, for every $u,\,v\in H^s_{\Omega,0}$, it results $$ \begin{aligned} \big(u&-\tilde T(u),v\big)_{H^s_{\Omega,0}}=\\ &=\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}dx\,dy+\int_{\Omega}uv\,dx\\ &\phantom{=}-\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus(\Omega^c)^2}\frac{\big(\tilde T(u)(x)-\tilde T(u)(y)\big)(v(x)-v(y))}{|x-y|^{n+2s}}dx\,dy-\int_{\Omega}\tilde T(u)v\,dx\\ &=\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus(\Omega^c)^2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}dx\,dy+\int_{\Omega}(u-\tilde f(u))v\,dx\\ &=\mathcal E'(u)[v], \end{aligned} $$ where, in the second equality, we used the weak formulation of the problem solved by $\tilde T(u)$. In conclusion, $u-\tilde T(u)=\nabla \mathcal E(u)$ for every $u\in H^s_{\Omega,0}$. \end{remark} \begin{lemma}[\textbf{Deformation Lemma in $\mathcal C_*$}]\label{deformation} Let $c\in\mathbb R$ be such that $\mathcal E'(u)\neq 0$ for all $u\in \mathcal C_*$ with $\mathcal E(u)=c$. Then, there exists a function $\eta:\mathcal C_*\to\mathcal C_*$ satisfying the following properties: \begin{itemize} \item[(i)] $\eta$ is continuous with respect to the topology of $H^s_{\Omega,0}$; \item[(ii)] $\mathcal E(\eta(u))\le \mathcal E(u)$ for all $u\in\mathcal C_*$; \item[(iii)] $\mathcal E(\eta(u))\le c-\bar\varepsilon$ for all $u\in\mathcal C_*$ such that $|\mathcal E(u)-c|<\bar\varepsilon$; \item[(iv)] $\eta(u)=u$ for all $u\in\mathcal C_*$ such that $|\mathcal E(u)-c|>2\bar\varepsilon$, \end{itemize} where $\bar\varepsilon$ is the positive constant corresponding to $c$ given in Corollary \ref{conseqPS}. \end{lemma} \begin{proof} The ideas of this proof are borrowed from \cite[Lemma 4.5]{BNW}, cf. also \cite[Lemma 3.8]{CN}. Let $\chi_1:\mathbb R\to [0,1]$ be a smooth cut-off function such that $$\chi_1(t)= \begin{cases}1\quad&\mbox{if }|t-c|<\bar\varepsilon,\\ 0&\mbox{if }|t-c|>2\bar\varepsilon, \end{cases} $$ where $\bar\delta$ and $\bar\varepsilon$ are the constants given in Corollary \ref{conseqPS}. Let $\Phi: H^s_{\Omega,0}\to H^s_{\Omega,0}$ be the map defined by $$\Phi(u):=\begin{cases}\chi_1(\mathcal E(u))\frac{\nabla \mathcal E(u)}{\|\nabla \mathcal E(u)\|_{H^s_{\Omega,0}}}\quad&\mbox{if }|\mathcal E(u)-c|\le 2\bar\varepsilon,\\ 0&\mbox{otherwise.}\end{cases}$$ Note that the definition of $\Phi$ is well posed by Corollary \ref{conseqPS}. For all $u\in\mathcal C_*$, we consider the Cauchy problem \begin{equation}\label{CauchyProblem} \begin{cases}\frac{d}{dt}\eta(t,u)=-\Phi(\eta(t,u))\quad t\in(0,\infty),\\ \eta(0,u)=u. \end{cases} \end{equation} Since $\mathcal E$ is of class $C^2$, there exists a unique solution $\eta(\cdot, u)\in C^1([0,\infty);H^s_{\Omega,0})$, cf. \cite[\S 1]{D}. We shall prove that $\eta(t,\mathcal C_*)\subset \mathcal C_*$ for all $t>0$. Fix $\bar t>0$. For every $u\in\mathcal C_*$ and $k\in\mathbb{N}$ with $k\ge \bar t/\bar\delta$, let $$\begin{cases}\bar\eta_k(0,u):=u,\\ \bar\eta_k\left(t_{i+1},u\right):=\bar\eta_k\left(t_i,u\right)-\frac{\bar t}k\Phi\left(\bar\eta_k\left(t_i,u\right)\right)\quad\mbox{for all }i=0,\dots,k-1, \end{cases} $$ with $$t_i:=i\cdot\frac{\bar t}k\quad\mbox{for all }i=0,\dots,k.$$ Let us prove that $\bar\eta_k\left(t_{i+1},u\right)\in\mathcal C_*$ for all $i=0,\dots,k-1$. If $|\mathcal E(u)-c|>2\bar\varepsilon$, then $\bar\eta_k\left(t_{i+1},u\right)=u\in\mathcal C_*$ for every $i=0,\dots,k-1$. 
Otherwise, let $$\lambda:=\frac{\bar t}k\cdot\frac{\chi_1\left(\mathcal E\left(\bar\eta_k\left(t_i,u\right)\right)\right)}{\|\bar\eta_k\left(t_i,u\right)-\tilde T\left(\bar\eta_k\left(t_i,u\right)\right)\|_{H^s_{\Omega,0}}}.$$ Clearly, $\lambda\le1$ by Corollary \ref{conseqPS}, since $k\ge \bar t/\bar\delta$ and $\|\bar\eta_k(t_i,u)-\tilde T(\bar\eta_k(t_i,u))\|_{H^s_{\Omega,0}}=\|\nabla \mathcal E(\bar\eta_k(t_i,u))\|_{H^s_{\Omega,0}}\ge\bar\delta$. Therefore, we have for every $i=0,\dots,k-1$ $$\bar\eta_k\left(t_{i+1},u\right)=(1-\lambda)\bar\eta_k\left(t_i,u\right)+\lambda \tilde T\left(\bar\eta_k\left(t_i,u\right)\right)\in \mathcal C_*$$ by induction on $i$, using Lemma \ref{cononelcono} and the convexity of $\mathcal C_*$. For every $i=0,\dots,k-1$, we can now define the line segment $$ \eta_k^{(i)}(t,u):= \left(1-\frac{t}{\bar t} k+i\right)\bar\eta_k\left(t_i,u\right)+\left(\frac{t}{\bar t} k-i\right)\bar\eta_k\left(t_{i+1},u\right) $$ for all $t\in\left[t_i,t_{i+1}\right]$. We denote by $\eta_k:=\bigcup_{i=0}^{k-1}\eta^{(i)}_k$ the whole Euler polygonal defined on $[0,\bar t ]$. Since $\mathcal C_*$ is convex, we get immediately that $\eta_k(t,u)\in\mathcal C_*$ for all $t\in[0,\bar t ]$. We claim that $\eta_k(\cdot,u)$ converges to the solution $\eta(\cdot,u)$ of the Cauchy problem \eqref{CauchyProblem} in $H^s_{\Omega,0}$. Indeed, for all $i=0,\dots,k-1$, we integrate by parts the equation of \eqref{CauchyProblem} in the interval $[t_i,t_{i+1}]$ and obtain $$\eta(t_{i+1},u)=\eta(t_i,u)-\frac{\bar t}k\Phi(\eta(t_i,u))+\int_{t_i}^{t_{i+1}}(\tau-t_{i+1})\frac{d}{d\tau}\Phi(\eta(\tau,u))d\tau.$$ On the other hand, we define the error $$\varepsilon_i:=\|\eta(t_i,u)-\eta_k(t_{i},u)\|_{H^s_{\Omega,0}}\quad\mbox{for every }i=0,\dots,k.$$ Hence, for every $i=0,\dots,k-1$, we get \begin{equation}\label{error-estimate} \begin{aligned} \varepsilon_{i+1}\le\varepsilon_i+&\frac{\bar t}k\|\Phi(\eta(t_i,u))-\Phi(\eta_k(t_i,u))\|_{H^s_{\Omega,0}}\\ &+\left\|\int_{t_i}^{t_{i+1}}(t_{i+1}-\tau)\frac{d}{d\tau}\Phi(\eta(\tau,u))d\tau\right\|_{H^s_{\Omega,0}}. \end{aligned} \end{equation} Now, since $\Phi$ is locally Lipschitz and $\eta([0,\bar t])\subset H^s_{\Omega,0}$ is compact, \begin{equation}\label{ineqonPhi} \|\Phi(\eta(t_i,u))-\Phi(\eta_k(t_i,u))\|_{H^s_{\Omega,0}}\le\varepsilon_i L_\Phi \end{equation} for some $L_\Phi=L_\Phi(\eta([0,\bar t]))>0$; enlarging $L_\Phi$ if necessary, we may also assume $\sup_{\tau\in[0,\bar t]}\|\Phi'(\eta(\tau,u))\|_*\le L_\Phi$. Furthermore, since $\|\Phi\|_{H^s_{\Omega,0}}\le 1$ by construction, $$\begin{aligned}\left\|\int_{t_i}^{t_{i+1}}(t_{i+1}-\tau)\frac{d}{d\tau}\Phi(\eta(\tau,u))d\tau\right\|_{H^s_{\Omega,0}}&\le\int_{t_i}^{t_{i+1}}(t_{i+1}-\tau)\left\|\frac{d}{d\tau}\Phi(\eta(\tau,u))\right\|_{H^s_{\Omega,0}}d\tau\\ &\le\frac{\bar t}{k}\int_{t_i}^{t_{i+1}}\|\Phi'(\eta(\tau,u))\|_*\,\|\Phi(\eta(\tau,u))\|_{H^s_{\Omega,0}}\,d\tau\\ &\le \frac{\bar t^2}{k^2}L_\Phi. \end{aligned}$$ Thus, combining the last inequality with \eqref{ineqonPhi} and \eqref{error-estimate}, we have $$\varepsilon_{i+1}\le\varepsilon_i+\frac{\bar t}k\varepsilon_i L_\Phi+\frac{\bar t^2}{k^2}L_\Phi\quad\mbox{for all }i=0,\dots,k-1.$$ Since $\varepsilon_0=0$, this implies that $$\varepsilon_{i+1}\le \frac{\bar t^2}{k^2} L_\Phi\sum_{j=0}^i\left(1+\frac{\bar t}k L_\Phi\right)^j=\frac{\bar t}{k}\left[\left(1+\frac{\bar t}k L_\Phi\right)^{i+1}-1\right]\le \frac{\bar t}{k}\left(e^{\bar t L_\Phi}-1\right)\to 0\quad\mbox{as }k\to\infty.$$ By the triangle inequality and the continuity of $\eta(\cdot,u)$ and $\eta_k(\cdot,u)$, this yields the claim. Hence, for all $t\in[0,\bar t]$, $\eta(t,u)\in\mathcal C_*$ by the closedness of $\mathcal C_*$. 
For all $u\in\mathcal C_*$ and $t>0$ we can write
\begin{equation}\label{eq:flusso_decrescente}
\begin{aligned}\mathcal E(\eta(t,u))-\mathcal E(u)&=\int_0^t\frac{d}{d\tau}\mathcal E(\eta(\tau,u))d\tau\\
&\hspace{-2.5cm}=-\int_0^t\frac{\chi_1(\mathcal E(\eta(\tau,u)))}{\|\eta(\tau,u)-\tilde T(\eta(\tau,u))\|_{H^s_{\Omega,0}}}\mathcal E'(\eta(\tau,u))[\eta(\tau,u)-\tilde T(\eta(\tau,u))]d\tau\\
&\hspace{-2.5cm} =-\displaystyle{\int_0^t \|\eta(\tau,u)-\tilde T(\eta(\tau,u))\|_{H^s_{\Omega,0}}\chi_1(\mathcal E(\eta(\tau,u)))d\tau}\le0.
\end{aligned}
\end{equation}
Now, let $u\in\mathcal C_*$ be such that $|\mathcal E(u)-c|<\bar\varepsilon$ and let $t\ge 2\bar\varepsilon/\bar\delta$. Then, two cases arise: either there exists $\tau\in[0,t]$ for which $\mathcal E(\eta(\tau,u))\le c-\bar\varepsilon$, and then by the monotonicity \eqref{eq:flusso_decrescente} we immediately get $\mathcal E(\eta(t,u))\le c-\bar\varepsilon$; or for all $\tau\in[0,t]$, $\mathcal E(\eta(\tau,u))> c-\bar\varepsilon$. In this second case,
$$c-\bar\varepsilon< \mathcal E(\eta(\tau,u))\le \mathcal E(u)< c+\bar\varepsilon.$$
In particular, by the definition of $\chi_1$ and by Corollary~\ref{conseqPS}, we have that for all $\tau\in[0,t]$
$$\chi_1(\mathcal E(\eta(\tau,u)))=1,\qquad\|\eta(\tau,u)-\tilde T(\eta(\tau,u))\|_{H^s_{\Omega,0}}\ge\bar\delta.$$
Hence, by \eqref{eq:flusso_decrescente}, we obtain
$$\mathcal E(\eta(t,u))\le \mathcal E(u)-\displaystyle\int_0^t\bar\delta d\tau \le c+\bar\varepsilon-\bar\delta t\le c-\bar\varepsilon.
$$
Finally, if, with a slight abuse of notation, we define
$$\eta(u):=\eta\left(\frac{2\bar\varepsilon}{\bar\delta},u\right),$$
it is immediate to verify that $\eta$ satisfies (i)--(iv).
\end{proof}

\begin{lemma}[\textbf{Mountain pass geometry}] \label{geometry}
Let $\tau>0$ be such that $\tau <\min \{u_0-u_-,u_+-u_0\}$. Then there exists $\alpha>0$ such that
\begin{itemize}
\item[(i)] $\mathcal E(u)\ge \mathcal E(u_-)+\alpha$ for every $u \in \mathcal C_*$ with $\|u-u_-\|_{L^\infty(\Omega)}=\tau$;
\item[(ii)] if $u_+< \infty$, then $\mathcal E(u)\ge \mathcal E(u_+)+\alpha$ for every $u \in \mathcal C_*$ with $\|u-u_+\|_{L^\infty(\Omega)}= \tau$;
\item[(iii)] if $u_+=+\infty$, then there exists $\bar u\in\mathcal C_*$ with $\|\bar u-u_-\|_{L^\infty(\Omega)}>\tau$ such that $\mathcal E(\bar u)< \mathcal E(u_-)$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is analogous to the one of \cite[Lemma 4.6]{BNW}; we report it here for the sake of completeness. Suppose by contradiction that there exists a sequence $(w_k)$, with $u_-+w_k \in \mathcal C_*$, such that
\begin{equation}\label{wnbounded}
\|w_k\|_{L^\infty(\Omega)}=w_k(R)=\tau>0\quad\mbox{for all }k
\end{equation}
and $\limsup \limits_{k \to \infty} \bigl[\mathcal E(u_-+w_k)-\mathcal E(u_-)\bigr] \le 0$. Since
$$
\begin{aligned}
&\frac12\int_\Omega((u_-+w_k)^2-u_-^2)dx=\int_\Omega\int_0^1(u_-+t w_k)w_k\,dtdx,\\
&\tilde F(u_-+w_k)-\tilde F(u_-)=\int_0^1\tilde f(u_-+tw_k)w_k dt,
\end{aligned}
$$
we get
\begin{align*}
\mathcal E&(u_-+w_k)-\mathcal E(u_-)\\
&= \frac{1}2\left([w_k]_{H^s_{\Omega,0}}^2+ \int_\Omega[(u_-+w_k)^2 -u_-^2]dx \right) - \int_\Omega \bigl(\tilde F(u_- + w_k)-\tilde F(u_-)\bigr)\,dx\\
&= \frac{1}2 [ w_k ]_{H^s_{\Omega,0}}^2 + \int_\Omega \int_0^1 \Bigl(u_-+t w_k - \tilde f(u_-+t w_k)\Bigr)w_k\,dt dx.
\end{align*}
Therefore, since by $(f_3)$ and the definition of $u_-$
\begin{equation}\label{fmp}
t-\tilde f(t)>0 \qquad \text{for $t \in (u_-,u_0)$,}
\end{equation}
we conclude that $[w_k]_{H^s_{\Omega,0}} \to 0$.
We claim that $(w_k)$ converges to the constant function $w\equiv \tau$ in the $H^s_{\Omega,0}$ norm. Indeed, using $[w_k]_{H^s_{\Omega,0}} \to 0$ and \eqref{wnbounded}, we have that $(w_k)$ is bounded in $H^s_{\Omega,0}$ and so, up to a subsequence, it weakly converges to some $w\in H^s_{\Omega,0}$. Hence,
\begin{equation}\label{fromwc}
\begin{aligned}
0&=\lim_{k\to\infty}\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus (\Omega^c)^2} \frac{[(w_k-w)(x)-(w_k-w)(y)](w(x)-w(y))}{|x-y|^{n+2s}}dx dy\\
&=\lim_{k\to\infty}\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus (\Omega^c)^2} \frac{(w_k(x)-w_k(y))(w(x)-w(y))}{|x-y|^{n+2s}}dx dy -[w]^2_{H^s_{\Omega,0}}.
\end{aligned}
\end{equation}
Moreover,
\begin{equation}\label{CauchySchwartz}
\frac{c_{n,s}}{2}\iint_{\mathbb R^{2n}\setminus (\Omega^c)^2} \frac{(w_k(x)-w_k(y))(w(x)-w(y))}{|x-y|^{n+2s}}dx dy\le C [w_k]_{H^s_{\Omega,0}}[w]_{H^s_{\Omega,0}}.
\end{equation}
Combining \eqref{fromwc} and \eqref{CauchySchwartz}, we get $[w]_{H^s_{\Omega,0}}=0$, which implies that $w\equiv \tau$. Thus, $(w_k)$ converges to the constant $\tau$ in $H^s_{\Omega,0}$. By the Dominated Convergence Theorem we can conclude that
\begin{align*}
0 &\ge \lim_{k \to \infty} \int_\Omega \int_0^1 \Bigl(u_-+t w_k - \tilde f(u_-+t w_k)\Bigr)w_k\,dtdx\\
&= \int_\Omega \int_0^1 \Bigl(u_-+t \tau - \tilde f(u_-+t\tau)\Bigr)\tau\,dt dx,
\end{align*}
which contradicts \eqref{fmp}. Hence there exists $\alpha_1>0$ such that (i) holds. In a similar way, now using the fact that $t-\tilde f(t)<0$ for $t \in (u_0,u_+)$, we find $\alpha_{2}>0$ such that (ii) holds if $u_+< \infty$. The claim then follows with $\alpha:= \min \{\alpha_1,\alpha_2\}$.

Finally, if $u_+=+\infty$, the existence of a point $\bar u\in \mathcal C_*$ outside the crest centered at $u_-$ is guaranteed by the following estimate (cf. also \cite[Remarks, p.~118]{C-Bruto}):
\begin{equation}\label{I-infty}\begin{aligned}\mathcal E(t\cdot 1)&=|\Omega|\left(\frac{t^2}2-\int_0^t\tilde f(s)ds\right)\\
&\le |\Omega|\left(\frac{t^2}2-\int_0^M\tilde f(s)ds-(1+\delta)\int_M^ts ds\right)\\
&\le\frac{|\Omega|}2\left(t^2-2M\min_{s\in[0,M]}\tilde f(s)-(1+\delta)(t^2-M^2)\right)\\
&=C-\frac{|\Omega|\delta}2t^2\to-\infty\quad\mbox{as }t\to\infty,
\end{aligned}
\end{equation}
where we have used the fact that $\tilde f\in\mathfrak F_{M,\delta}$. This shows (iii) and concludes the proof.
\end{proof}
\begin{remark}
We observe that, comparing (i) and (ii) in Lemma \ref{geometry}, it is apparent that, whenever $u_+<+\infty$, if $\mathcal E(u_-)<\mathcal E(u_+)$, then $u_+$ plays the role of the center inside the crest of the mountain pass and $u_-$ plays the role of the point outside the crest with lower energy; otherwise, the roles of $u_-$ and $u_+$ have to be interchanged.
\end{remark}
Now, let
\begin{equation}\label{eq:2}
\begin{aligned}
U_- &:= \left\{u \in \mathcal C_* \::\: \mathcal E(u)<\mathcal E(u_-)+\frac{\alpha}{2},\: \|u-u_- \|_{L^\infty(\Omega)} < \tau\right\},\\
\\
U_+&:=\begin{cases}
\displaystyle{\left\{u \in \mathcal C_* \::\: \mathcal E(u)<\mathcal E(u_+)+\frac{\alpha}{2},\: \|u-u_+ \|_{L^\infty(\Omega)} < \tau\right\}},&\mbox{ if }u_+<\infty,\\
&\\
\left\{u \in \mathcal C_* \, :\, \mathcal E(u)< \mathcal E(u_-),\, \|u-u_-\|_{L^\infty(\Omega)}>\tau\right\},&\mbox{ if }u_+=\infty
\end{cases}
\end{aligned}
\end{equation}
where $\tau$ and $\alpha$ are given by Lemma \ref{geometry},
$$
\Gamma:=\left\{ \gamma\in C([0,1];\mathcal C_*)\ :\ \gamma(0) \in U_-,\: \gamma(1) \in U_+\right\},
$$
and
\begin{equation}\label{minmax}
c:=\inf_{\gamma\in\Gamma}\max_{t\in[0,1]} \mathcal E(\gamma(t)).
\end{equation}
\begin{remark}
The reason for considering the two sets $U_+$ and $U_-$, instead of just two points, as the starting and ending points of the admissible curves will be clear in Lemma \ref{lemma:nonconstant_p>2}. Indeed, this choice makes it easier to exhibit an admissible curve along which the energy stays below the energy of the constant $u_0$.
\end{remark}
\begin{proposition}[\textbf{Mountain Pass Theorem}]\label{mountainpass}
The value $c$ defined in \eqref{minmax} is finite and there exists a critical point $u\in\mathcal C_*\setminus\{u_-,u_+\}$ of $\mathcal E$ with $\mathcal E(u)=c$. In particular, $u$ is a weak solution of \eqref{P}.
\end{proposition}
\noindent The proof of the above proposition is standard, once one has the mountain pass geometry (Lemma \ref{geometry}) and the Deformation Lemma (Lemma \ref{deformation}). We refer e.g. to \cite[Proposition 3.10]{CN} for a proof given in a very similar situation.

\section{Non-constancy of the minimax solution}\label{sec4}
In this section we prove that the solution $u\in\mathcal C_*$, whose existence has been established in the previous section, is non-constant. Since we work in the restricted cone $\mathcal C_*$, where the only constant solutions are $u_-$, $u_+$, and $u_0$, and since the mountain pass geometry guarantees that $u\not\equiv u_-$ and $u\not\equiv u_+$ (cf. Proposition \ref{mountainpass}), it is enough to prove that $u\not\equiv u_0$. To this aim, following the idea in \cite[Section 4]{BNW}, we first prove that on the Nehari-type set
$$
N_*:=\{u\in\mathcal C_*\setminus\{0\}\,:\, \mathcal E'(u)[u]=0\},
$$
i.e., roughly speaking, on the crest of the mountain pass, the infimum of the energy is strictly less than $\mathcal E(u_0)$, cf. also \cite[Remark 2]{CN}. Then, we explicitly build an admissible curve $\bar \gamma\in \Gamma$ along which the energy is less than $\mathcal E(u_0)$. By \eqref{minmax}, this ensures that the mountain pass level is less than $\mathcal E(u_0)$ and so $u\not\equiv u_0$.\smallskip

We start by introducing some useful notation. We set
$$
H^s_\mathrm{rad}:=\{u\in H^s_{\Omega,0}\,:\, u\mbox{ radial}\},
$$
and we also introduce the space of radial, radially non-decreasing functions
$$
H^s_\mathrm{+,r}:=\{u\in H^s_{\Omega,0}\,:\, u\mbox{ radial and radially non-decreasing}\}.
$$
We define the second radial eigenvalue $\lambda_2^\mathrm{rad}$ and the second radial increasing eigenvalue $\lambda_2^{+,\mathrm{r}}$ of the fractional Neumann Laplacian in $\Omega$ as follows:
\begin{equation}\label{lambda2rad+}
\lambda_2^\mathrm{rad}:=\inf_{v\in H^s_\mathrm{rad},\,\int v=0}\frac{[v]^2_{H^s_{\Omega,0}}}{\int_{\Omega}v^2},\qquad\quad
\lambda_2^\mathrm{+,r}:=\inf_{v\in H^s_\mathrm{+,r},\, \int v=0}\frac{[v]^2_{H^s_{\Omega,0}}}{\int_{\Omega}v^2}.
\end{equation}
Clearly, by the inclusions $H^s_{+,\mathrm{r}}\subset H^s_\mathrm{rad}\subset H^s_{\Omega,0}$, the following chain of inequalities holds:
$$
0<\lambda_2\le \lambda_2^\mathrm{rad}\le \lambda_2^\mathrm{+,r},
$$
and, by the direct method of the Calculus of Variations, all these infima are achieved.
\begin{remark}
We observe that in the local case, i.e., for the Neumann Laplacian, it is known that the second radial eigenfunction is increasing, so that the second radial eigenvalue and the second radial {\it increasing} eigenvalue coincide. In this nonlocal setting we do not know whether the same equality holds true. In \cite{BNW}, for the local case, the condition required on $f'(u_0)$ involves the second radial eigenvalue, and the proof of the non-constancy of the solution uses the monotonicity of the associated eigenfunction. In this paper, we need to require an assumption involving $\lambda_2^{+,\mathrm{r}}$, which, as explained above, might be more restrictive. On the other hand, as will be clear in Proposition \ref{constant-sol}, some condition on the derivative of $f$ is needed in order to guarantee the existence of non-constant solutions.
\end{remark}
\begin{lemma}\label{4.9}
Let $v_2\in H^s_{+,\mathrm{r}}$ be the second radial increasing eigenfunction, namely the function that realizes $\lambda_2^{+,\mathrm{r}}$. Let
$$
\psi: \mathbb{R}^2 \to \mathbb{R},\qquad \psi(s,t):=\mathcal E'(t(u_0+sv_2))[u_0+sv_2].
$$
Then there exist $\varepsilon_1,\varepsilon_2>0$ and a $C^1$ function $h:(-\varepsilon_1,\varepsilon_1) \to (1-\varepsilon_2,1+\varepsilon_2)$ such that for $(s,t) \in V:= (-\varepsilon_1,\varepsilon_1) \times (1-\varepsilon_2,1+\varepsilon_2)$ we have
\begin{equation}\label{psi=0}
\psi(s,t)=0\quad \mbox{if and only if}\quad t=h(s).
\end{equation}
Moreover,
\begin{itemize}
\item[(i)] $h(0)=1$, $h'(0)=0$;\smallskip
\item[(ii)] $\frac{\partial}{\partial t}\psi(s,t)<0$ for $(s,t)\in V$;\smallskip
\item[(iii)] $\mathcal E(h(s)(u_0+sv_2))<\mathcal E(u_0)$ for $s \in (-\varepsilon_1,\varepsilon_1)$, $s\neq 0$.
\end{itemize}
The same result holds true replacing $v_2$ with the second radial decreasing eigenfunction $-v_2$ (which clearly corresponds to the same eigenvalue $\lambda_2^{+,\mathrm{r}}$).
\end{lemma}
\begin{proof}
The proof is similar to the one of \cite[Lemma 4.9]{BNW}; we report it here because it highlights the importance of assumption $(f_3)$. Part (i) follows by the Implicit Function Theorem applied to $\psi$. Indeed, since $\mathcal E$ is a $C^2$ functional and $\psi$ is of class $C^1$ with $\psi(0,1)=0$, by $(f_3)$ we get
\begin{equation}\label{psi'<0}
\frac{\partial}{\partial t}\Big|_{(0,1)}\psi(s,t) = \mathcal E''(u_0)[u_0,u_0]= [1-\tilde f'(u_0)]\int_{\Omega} u_0^2 \,dx <0,
\end{equation}
where we have used only that $\tilde{f}'(u_0)=f'(u_0)>1$. Furthermore, since $\int_\Omega v_2=0$,
\begin{equation}\label{psi'=0}\begin{aligned}
\frac{\partial}{\partial s}\Big|_{(0,1)}\psi(s,t) &=\mathcal E'(u_0)[v_2]+ \mathcal E''(u_0)[u_0,v_2]\\
&= [1-\tilde f'(u_0)]u_0 \int_\Omega v_2\,dx =0.
\end{aligned}
\end{equation}
Thus, the Implicit Function Theorem guarantees the existence of $\varepsilon_1,\varepsilon_2$ and $h$, as well as property (i). Part (ii) then follows from \eqref{psi'<0} and the continuity of $\frac{\partial\psi}{\partial t}$, up to shrinking $V$ if necessary. We now prove (iii); this is where $(f_3)$ plays a crucial role. By (i), we can write $h(s)= 1+o(s)$, for $s\in(-\varepsilon_1,\varepsilon_1)$, $s\neq 0$, so that
$$
h(s)(u_0+sv_2)-u_0= sv_2 + o(s)
$$
and therefore, by Taylor expansion (the first-order term vanishes, $u_0$ being a critical point of $\mathcal E$) and by $(f_3)$,
\begin{align*}
\mathcal E(h(s)(u_0+sv_2)) -\mathcal E(u_0) &= \frac{1}{2} \mathcal E''(u_0)[sv_2 + o(s),sv_2+o(s)]+o(s^2)\\&=\frac{s^2}{2} \mathcal E''(u_0)[v_2,v_2]+o(s^2)\\
&=\frac{s^2}{2}\left([v_2]_{H^s_{\Omega,0}}^2+\int_\Omega[1-\tilde f'(u_0)]v_2^2\,dx\right)+o(s^2)\\
&<\frac{s^2}{2}\left([v_2]_{H^s_{\Omega,0}}^2-\lambda_2^{+,\mathrm{r}}\int_\Omega v_2^2\,dx\right)+o(s^2).
\end{align*}
Then, since
$$
[v_2]_{H^s_{\Omega,0}}^2-\lambda_2^{+,\mathrm{r}}\int_\Omega v_2^2dx=0,
$$
property (iii) holds, taking $\varepsilon_1$, $\varepsilon_2$ smaller if necessary.
\end{proof}
In the following lemma, we build a curve $\gamma_{\bar \tau}$ along which the energy is always less than $\mathcal E(u_0)$. The admissible curve $\bar \gamma\in \Gamma$ with the same property will be a simple reparametrization of $\gamma_{\bar \tau}$.
\begin{lemma}\label{lemma:nonconstant_p>2}
Fix $0<t_-<1<t_+$ such that
\begin{equation} \label{eq:3}
t_- u_0 \in U_-,\quad t_+ u_0 \in U_+ \quad \text{and}\quad u_- < t_- u_0 < u_0 < t_+ u_0 < u_+,
\end{equation}
where $U_\pm$ are defined in $(\ref{eq:2})$. Let $v_2$ be the second radial increasing eigenfunction as in Lemma \ref{4.9}. For $\tau \ge 0$ define
\begin{equation} \label{eq:4}
\begin{aligned}
\gamma_\tau: [t_-,t_+]\to H^s_{\Omega,0} \qquad &\gamma_\tau(t):= t(u_0+\tau v_2)\\
&(\mbox{resp. }\gamma_\tau(t):= t(u_0-\tau v_2)).
\end{aligned}
\end{equation}
Then there exists $\bar \tau>0$ such that $\gamma_{\bar \tau}(t_\pm ) \in U_\pm$, $\gamma_{\bar \tau}(t) \in \mathcal C_{+,*}$ (resp. $\mathcal C_{-,*}$) for $t_- \le t \le t_+$ and
\begin{equation} \label{eq:1}
\max_{t_- \le t \le t_+} \mathcal E(\gamma_{\bar \tau}(t))<\mathcal E(u_0).
\end{equation}
As a consequence, there exists an admissible curve $\bar\gamma\in\Gamma$ along which the energy is always lower than $\mathcal E(u_0)$.
\end{lemma}
\noindent For the proof of the previous lemma, we refer to \cite[Lemma 4.10]{BNW}, see also \cite[Lemma~4.2]{CN}. Here the monotonicity of $v_2$ (resp. of $-v_2$) is essential to guarantee that $\gamma_{\bar\tau}([t_-,t_+])\subset\mathcal C_{+,*}$ (resp. $\mathcal C_{-,*}$). Finally, the admissible curve $\bar\gamma\in\Gamma$ is given in terms of $\gamma_{\bar \tau}$ as follows:
$$\bar\gamma(t):=\gamma_{\bar \tau}(t(t_+-t_-)+t_-)\quad\mbox{ for all }t\in[0,1].$$
\begin{proof}[$\bullet$ Proof of Theorem \ref{thm:main}]
By Proposition \ref{mountainpass}, there exists a mountain pass type solution $u\in\mathcal C_*\setminus\{u_-,\,u_+\}$ of \eqref{P} such that $\mathcal E(u)=c$. Moreover, by Lemma \ref{lemma:nonconstant_p>2} and the definition of the minimax level $c$ given in \eqref{minmax}, we have that
$$
c\le\max_{t\in [0,1]}\mathcal E(\bar\gamma(t))<\mathcal E(u_0),
$$
that is $u\not\equiv u_0$, and so $u$ is non-constant. Furthermore, $u>0$ a.e. in $\Omega$ by the maximum principle stated in Theorem \ref{MaxPrinc} combined with the regularity of $u$ given in Lemma \ref{regularity}. Actually, since $u$ is smooth and non-decreasing, $u>0$ in $\Omega\setminus\{0\}$.
The multiplicity part of the statement is proved by reasoning in the same way for each $u_{0,i}$, with $i=1,\dots,N$. Indeed, assume without loss of generality that $u_{0,1}<u_{0,2}<\dots<u_{0,N}$. For every $i$, we define $u_{\pm,i}$ and the cone of non-negative, radial, non-decreasing (or non-increasing) functions $\mathcal C_{*,i}$, corresponding to $u_{0,i}$. Then
\begin{equation}\label{order}
u_{-,1}<u_{+,1}\le u_{-,2}<\dots\le u_{+,N}.
\end{equation}
Proceeding as in this and the previous sections, for every $i$, we get a non-constant positive solution $u_i\in\mathcal C_{*,i}$. Hence, by \eqref{order},
$$
u_{-,1}\underset{\not\equiv}{\le} u_1\underset{\not\equiv}{\le} u_{+,1}\underset{\not\equiv}{\le} u_{-,2}\underset{\not\equiv}{\le} u_2\underset{\not\equiv}{\le} u_{+,2}\underset{\not\equiv}{\le}\dots\underset{\not\equiv}{\le} u_N\underset{\not\equiv}{\le} u_{+,N},
$$
which proves in particular that the $N$ solutions are distinct.
\end{proof}
The following proposition gives a sufficient condition on $f$ under which problem \eqref{P} admits only constant solutions. We recall that $K_\infty$ denotes the uniform bound on the $L^\infty$ norm of $u$ given in Lemma \ref{L-infty-unif}.
\begin{proposition}\label{constant-sol}
Let $\delta \in (0, \lambda_2^{\mathrm{rad}})$ and $M>0$. Suppose that $f\in\mathfrak F_{M,\delta}$ satisfies $(f_1)$ and $(f_2)$. If $f'(t)<\lambda_2^{\mathrm{rad}}+1$ for every $t\in [0,K_\infty]$, then problem \eqref{P} admits only constant solutions in $H^s_{\mathrm{rad}}$.
\end{proposition}
\begin{proof}
We first observe that, if $M<K_\infty$, the condition $f'<\lambda_2^{\mathrm{rad}}+1$ in $[0,K_\infty]$ is compatible with the consequence \eqref{Mdelta} of $(f_2)$, when $\delta<\lambda_2^{\mathrm{rad}}$. Let $u\in H^s_{\mathrm{rad}}$ be a weak solution of \eqref{P}. We can write $u=v+\mu$ for some $\mu \in \mathbb{R}$ and $v\in H^s_{\mathrm{rad}}$ with
$$\begin{aligned}&\int_\Omega v \, dx=0\quad \mbox{and}\\
&\lambda_2^{\mathrm{rad}}\int_\Omega v^2\,dx \leq \frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus(\Omega^c)^2}\frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}}\,dx\,dy.
\end{aligned}
$$
Using the definition of weak solution for $u=v+\mu$ and testing with $v$, we get
\[\begin{split}
& (\lambda_2^{\mathrm{rad}}+1) \int_\Omega v^2\,dx \leq \frac{c_{n,s}}{2}\iint_{\mathbb{R}^{2n}\setminus (\Omega^c)^2}\frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}}\,dx\,dy + \int_\Omega v^2\,dx\\
&\hspace{1em} =\int_\Omega f(v+\mu)v\,dx=\int_\Omega [f(v+\mu)-f(\mu)]v\,dx=\int_\Omega f'(\mu + \omega v) v^2\,dx,
\end{split}
\]
where $\omega=\omega(x)$ satisfies $0\le \omega\le 1$ in $\Omega$. Using that $\|u\|_{L^\infty(\Omega)}\le K_\infty$, we deduce that $\|\mu + \omega v\|_{L^\infty(\Omega)}\le K_\infty$. Therefore, since by assumption $f'(\mu + \omega v)< \lambda_2^{\mathrm{rad}}+1$, we conclude that necessarily $v=0$, and thus $u$ is identically constant.
\end{proof}
\begin{remark}
Some further comments on the condition $(f_3)$ and its variants are now in order. In the local setting, it was first conjectured in \cite{BNW} and then proved in \cite{BGT,MCL,BCN} that if $f'(u_0)$ satisfies
\begin{equation}\label{f'u0-k}
f'(u_0)>1+\lambda_{k+1}^\mathrm{rad}(R)\quad\mbox{for some }k\ge1,
\end{equation}
where $\lambda_{k+1}^\mathrm{rad}(R)$ is the $(k+1)$-st radial eigenvalue of the Neumann Laplacian in $B_R$, then the Neumann problem $-\Delta u+u=f(u)$ in $B_R$ admits a radial positive solution having exactly $k$ intersections with the constant $u_0$.
It would be interesting to prove a similar result in this fractional setting as well. It is worth stressing that the solution $u$ that we find in the present paper is, morally, the one having exactly one intersection with $u_0$. This is due to the monotonicity of $u\in \mathcal C_*$, to the identity
\begin{equation}\label{u=fu}
\int_{\Omega}u\,dx=\int_\Omega f(u)\,dx,
\end{equation}
which holds for solutions of \eqref{P}, and to the fact that $f(t)<t$ for $t\in(u_-,u_0)$ and $f(t)>t$ in $(u_0,u_+)$, cf. \eqref{u-+} and \eqref{Cstar}. We conclude this remark by observing that, since $\lambda_{k}^\mathrm{rad}(R)\to0$ as $R\to\infty$, condition \eqref{f'u0-k} can also be read as a condition on the size of the domain $B_R$.
\end{remark}

\section*{Acknowledgments}
This work was partially supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the I\-stituto Nazionale di Alta Matematica (INdAM). The authors are grateful to Professors Lorenzo Brasco and Benedetta Noris for useful discussions on the subject. The authors acknowledge the support of the Departments of Mathematics of the Universities of Bologna and Turin during their visits to Bologna and Turin, in which parts of this work were carried out. E. C. is supported by MINECO grants MTM2014-52402-C3-1-P and MTM2017-84214-C2-1-P, and is part of the Catalan research group 2014 SGR 1083. F. C. is partially supported by the INdAM-GNAMPA Project 2019 ``Il modello di Born-Infeld per l'elettromagnetismo nonlineare: esistenza, regolarit\`a e molteplicit\`a di soluzioni'' and by the project of the University of Turin Ricerca Locale 2018 Linea B (CDD 09/07/2018) ``Problemi non lineari'' COLF\_RILO\_18\_01.

\noindent
\bibliographystyle{abbrv}
\section{Introduction}
\label{S:Intro}

Evolution of the QED coupling $\alpha(\mu)$ is determined by the renormalization group equation
\begin{equation}
\mu \frac{d\alpha(\mu)}{d\mu} = - 2 \beta(\alpha(\mu))\,\alpha(\mu)\,,\qquad
\beta(\alpha) = \beta_0 \frac{\alpha}{4\pi} + \beta_1 \left(\frac{\alpha}{4\pi}\right)^2 + \cdots
\label{Intro:RG}
\end{equation}
where the 1-loop $\beta$-function coefficient is
\begin{equation}
\beta_0 = - \frac{4}{3}
\label{Intro:beta0}
\end{equation}
(we use the $\overline{\text{MS}}$ renormalization scheme). The sign $\beta_0<0$ corresponds to the natural picture of charge screening --- when the distance grows ($\mu$ decreases), $\alpha(\mu)$ becomes smaller. Indeed, solving~(\ref{Intro:RG}) at one loop gives
\begin{equation*}
\frac{1}{\alpha(\mu)} = \frac{1}{\alpha(\mu_0)} + \frac{\beta_0}{2\pi} \log\frac{\mu}{\mu_0}\,,
\end{equation*}
so for $\beta_0<0$ the coupling grows at small distances and decreases at large ones. Non-abelian gauge theories have $\beta_0>0$. This corresponds to the opposite behaviour called \emph{asymptotic freedom} --- the coupling becomes small at large $\mu$ (small distances). The discovery of this fact came as a great surprise, and completely changed ideas about the applicability of quantum field theory to describing nature. In fact, it was made several times, by several researchers, independently.

The first paper in which $\beta_0>0$ was obtained is by Vanyashin and Terentev~\cite{VT:65}. They considered a non-abelian gauge theory with the gauge boson mass term introduced by hand. Of course, this theory is non-renormalizable. It contains longitudinal gauge bosons whose interaction grows with energy. They obtained
\begin{equation}
\beta_0 = \left(\frac{11}{3} - \displaystyle\frac{1}{6} \right) C_A\,.
\label{Intro:VT}
\end{equation}
The contribution $-1/6$ comes from longitudinal gauge bosons. In the massless gauge theory it is canceled by the ghost loop (which was not known in 1965). Now we know that a renormalizable massive gauge theory can be constructed using the Higgs mechanism. In this theory, the result~(\ref{Intro:VT}) is correct; the contribution $-1/6$ comes from the Higgs loop.

The first completely correct derivation of $\beta_0$ in non-abelian gauge theory was published by Khriplovich~\cite{Kh:69}. He used the Coulomb gauge, which is ghost-free. The Ward identities in this gauge are simple (as in QED), and it is sufficient to renormalize the gluon propagator in order to renormalize the charge. This derivation clearly shows why screening is the natural behaviour, and how non-abelian gauge theories manage to violate these general arguments. It is also very simple. We shall discuss this derivation of $\beta_0$ in Sect.~\ref{S:Khr}, following~\cite{Kh:69}, but using modernized language.

't~Hooft discovered asymptotic freedom in 1971--72 while studying renormalization of various field theories in dimensional regularization~\cite{tH:72}. He reported his result during a question session after Symanzik's talk at a small conference in Marseilles in June 1972. And finally, Gross, Wilczek~\cite{GW:73} and Politzer~\cite{P:73} discovered it again in 1973. They were the first to suggest applying asymptotically free gauge theory (QCD) to strong interactions, in particular, to deep inelastic scattering. The early history of asymptotic freedom is discussed in several papers, e.g., \cite{Sh:01,tH:01,W:96}, and in the Nobel lectures~\cite{Nobel}.

Now $\beta_0$ is derived in every quantum field theory textbook. In the standard approach, the covariant gauge is used (with Faddeev--Popov ghosts, of course). The coupling constant renormalization can be obtained from renormalizing any vertex in the theory together with all propagators attached to this vertex.
Usually, the light-quark -- gluon vertex or the ghost -- gluon vertex is used (calculations are slightly shorter in the latter case). The 3-gluon vertex or even the 4-gluon one~\cite{W2} can also be used. All these standard derivations can be found, e.g., in~\cite{G:07}. The infinitely-heavy-quark -- gluon vertex can also be used~\cite{G:04}; this derivation is, perhaps, as easy as the ghost -- gluon one.

If the Ward identities are QED-like, it is sufficient to renormalize the gluon propagator; no vertex calculations are required. One such case is the Coulomb gauge~\cite{Kh:69}; it has the slight disadvantage of not being Lorentz invariant (this makes gluon loop calculations more difficult). Another example is the background field formalism~\cite{A:81}. It is Lorentz invariant, and provides, probably, the shortest derivation of $\beta_0$ in QCD.

However, all these derivations don't explain asymptotic freedom (antiscreening) in simple physical terms. Probably, the simplest explanation of this kind is presented in~\cite{N:81} (it is also discussed in~\cite{W:96}). We shall discuss it in Sect.~\ref{S:Niels}.

\section{Coulomb gauge (Khriplovich 1969)}
\label{S:Khr}

\subsection{Feynman rules}
\label{S:Feyn}

This gauge has several advantages. It is ghost-free, and Ward identities in it are simple. It also has one significant disadvantage: it is not Lorentz-invariant. Currently, it is widely used in non-relativistic QCD~\cite{NRQCD}. In this effective theory, there is no Lorentz invariance from the very beginning, and the disadvantage does not matter. On the other hand, separating Coulomb and transverse (chromomagnetic) gluons helps to estimate various contributions when considering non-relativistic bound states.

The gluon propagators in this gauge are
\begin{equation}
\raisebox{-0.2mm}{\includegraphics{gluc.eps}} = - i \delta^{ab} D_0(q)\,,\qquad
\raisebox{-0.2mm}{\includegraphics{glut.eps}} = - i \delta^{ab} D_0^{ij}(q)\,.
\end{equation}
The Coulomb gluon propagator
\begin{equation}
D_0(q) = - \frac{1}{\vec{q}\,^2}
\label{GlueProp:Coulomb}
\end{equation}
does not depend on $q_0$. This means that the Coulomb gluon propagates instantaneously. The transverse gluon propagator
\begin{equation}
D_0^{ij}(q) = \frac{1}{q^2+i0} \left(\delta^{ij}-\frac{q^i q^j}{\vec{q}\,^2}\right)
\label{GlueProp:Transverse}
\end{equation}
describes a normal propagating massless particle (the usual denominator $q^2+i0$). In accordance with the gauge-fixing condition $\vec{\nabla}\cdot\vec{A}^a=0$, this 3-dimensional tensor is transverse to $\vec{q}$. The three-gluon vertices look as usual. There is no vertex with three Coulomb legs (if we contract the usual three-gluon vertex with $v^\mu=(1,\vec{0})$ in all three indices, it vanishes).

We are going to calculate the potential between an infinitely heavy quark and an infinitely heavy antiquark in the colour-singlet state. The infinitely-heavy quark propagator
\begin{equation}
\raisebox{-10.2mm}{\includegraphics{qua.eps}} = \frac{i}{p_0+i0}
\label{QuarkProp}
\end{equation}
does not depend on $\vec{p}$ (if the mass $M$ were large but finite, it would be
\begin{equation*}
\frac{i}{p_0 - \frac{\vec{p}\,^2}{2M} + i0}\,;
\end{equation*}
the kinetic energy disappears at $M\to\infty$). This means that the coordinate-space propagator is $\sim\delta(\vec{r}\,)\theta(t)$: the infinitely heavy quark does not move in space, and propagates forward in time.
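Explicitly, Fourier transforming~(\ref{QuarkProp}),
\begin{equation*}
\int \frac{dp_0}{2\pi}\, \frac{i\,e^{-i p_0 t}}{p_0+i0} = \theta(t)\,,\qquad
\int \frac{d^{d-1}\vec{p}}{(2\pi)^{d-1}}\, e^{i\,\vec{p}\cdot\vec{r}} = \delta(\vec{r}\,)\,:
\end{equation*}
for $t>0$ the $p_0$ contour is closed in the lower half-plane around the pole at $p_0=-i0$, and for $t<0$ in the upper half-plane, where there is no pole.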
The energy of an on-shell infinitely heavy quark is 0, and does not depend on $\vec{p}$ (it would be $\vec{p}\,^2/(2M)$ if $M$ were large but finite). An infinitely heavy quark interacts with Coulomb gluons but not with transverse ones:
\begin{equation}
\raisebox{-10.2mm}{\includegraphics{quac.eps}} =i g_0 t^a\,,\qquad
\raisebox{-10.2mm}{\includegraphics{quat.eps}} = 0\,.
\label{QuarkVert}
\end{equation}
The infinitely-heavy antiquark -- Coulomb gluon vertex differs from~(\ref{QuarkVert}) by a minus sign.

\subsection{Quark--antiquark potential}
\label{S:Pot}

We calculate the scattering amplitude of an on-shell infinitely heavy quark and an on-shell infinitely heavy antiquark in the colour-singlet state with the momentum transfer $\vec{q}$ in quantum field theory (Fig.~\ref{F:QQbarScat}), and equate it to the same amplitude in quantum mechanics. In the Born approximation, it is
\begin{equation}
- i U_{\vec{q}}\,.
\label{SQM}
\end{equation}
Two-particle-reducible diagrams in quantum field theory (which can be cut into two disconnected parts by cutting a quark line and an antiquark one) correspond to higher Born approximations in quantum mechanics, and we don't have to consider them.
\begin{figure}[ht]
\begin{center}
\begin{picture}(20,36)
\put(11,18){\makebox(0,0){\includegraphics{sca.eps}}}
\put(5,3){\makebox(0,0)[t]{$(0,\vec{0})$}}
\put(17,3){\makebox(0,0)[t]{$(0,\vec{0})$}}
\put(4,33){\makebox(0,0)[b]{$(0,\vec{q})$}}
\put(18,33){\makebox(0,0)[b]{$(0,-\vec{q})$}}
\end{picture}
\end{center}
\caption{On-shell scattering amplitude}
\label{F:QQbarScat}
\end{figure}
To the lowest order in $g_0^2$, the scattering amplitude is
\begin{equation}
\raisebox{-10.2mm}%
{\begin{picture}(19,22)
\put(9.5,11){\makebox(0,0){\includegraphics{sca1.eps}}}
\put(9.5,5){\makebox(0,0){$(0,\vec{q})$}}
\end{picture}} = - i C_F g_0^2 D(0,\vec{q}) = i C_F \frac{g_0^2}{\vec{q}\,^2}\,.
\label{SQFT}
\end{equation}
Therefore,
\begin{equation}
U_{\vec{q}} = C_F g_0^2 D(0,\vec{q}) = - C_F \frac{g_0^2}{\vec{q}\,^2}\,,
\label{SQFT2}
\end{equation}
and we obtain the Coulomb attraction potential
\begin{equation}
U(r) = - C_F \frac{\alpha_s}{r}\,.
\label{Ur}
\end{equation}
Some people prefer to discuss the quark--antiquark potential in terms of the vacuum average of a Wilson loop (Fig.~\ref{F:Wilson}) with $T\gg r$. Of course, this is exactly the same thing, because the infinitely-heavy quark propagator \emph{is} a straight Wilson line along the 4-velocity $v$.
\begin{figure}[ht]
\begin{center}
\begin{picture}(20,33)
\put(10,16.5){\makebox(0,0){\includegraphics{wil.eps}}}
\put(3,2){\makebox(0,0){$0$}}
\put(17,2){\makebox(0,0){$\vec{r}$}}
\put(3,31){\makebox(0,0){$T$}}
\end{picture}
\end{center}
\caption{Wilson loop}
\label{F:Wilson}
\end{figure}
The energy of a colour-singlet quark--antiquark pair separated by $\vec{r}$ is $U(\vec{r}\,)$. Therefore, neglecting boundary effects near times $0$ and $T$, the Wilson loop is
\begin{equation}
e^{-i\,U(\vec{r}\,)\,T}\,,
\label{Wilson}
\end{equation}
or, to the first order in $\alpha_s$,
\begin{equation*}
1 - i\,U(\vec{r}\,)\,T\,.
\end{equation*}
This order-$\alpha_s$ term is
\begin{equation}
\raisebox{-15.7mm}{\begin{picture}(24,33)
\put(10,16.5){\makebox(0,0){\includegraphics{wil1.eps}}}
\put(3,2){\makebox(0,0){$0$}}
\put(17,2){\makebox(0,0){$\vec{r}$}}
\put(3,31){\makebox(0,0){$T$}}
\put(2.5,13.5){\makebox(0,0){$\tau$}}
\put(21.5,19.5){\makebox(0,0){$\tau+t$}}
\end{picture}} = - i\,C_F\,g_0^2\,T\,\int D(t,\vec{r})\,dt
= - i\,C_F\,g_0^2\,T\,\int \frac{d^{d-1}\vec{q}}{(2\pi)^{d-1}}\, D(0,\vec{q})\,e^{i\,\vec{q}\cdot\vec{r}}
\label{Wilson1}
\end{equation}
(integration in $\tau$ gives $T$), and hence we obtain~(\ref{Ur}).

\subsection{Corrections to the scattering amplitude}
\label{S:Corr}

Now we want to calculate the first correction to the scattering amplitude in Fig.~\ref{F:QQbarScat}. First of all, there are external-leg renormalization factors. They are given by the derivative of the infinitely-heavy quark self-energy at its mass shell. The infinitely heavy quark only interacts with Coulomb gluons, and its self-energy is
\begin{equation}
\raisebox{-15.2mm}{\includegraphics{sig.eps}} \sim \int \frac{d^{d-1}\vec{k}}{\vec{k}\,^2} = 0
\label{Sigma}
\end{equation}
(because the infinitely-heavy quark propagator does not depend on $\vec{k}$). In coordinate space, the infinitely heavy quark propagates along time, and the Coulomb gluon --- along space. Therefore, the two vertices are at the same space--time point, and the self-energy is
\begin{equation}
\sim D(t=0,\vec{r}=0) \sim \int \left.\frac{d^{d-1}\vec{k}}{\vec{k}\,^2} e^{i\vec{k}\cdot\vec{r}} \right|_{\vec{r}=0} \sim U(\vec{r}=0) \Rightarrow 0\,.
\label{SigmaC}
\end{equation}
This is the classical self-energy of a point charge, and it is linearly divergent. In dimensional regularization, it is $0$%
\footnote{This linear ultraviolet divergence leads to an ultraviolet renormalon singularity at Borel parameter $u=1/2$, and hence to an ambiguity of the on-shell heavy-quark mass proportional to $\Lambda_{\text{QCD}}$~\cite{BB:94} (see also~\cite{G:04}, Chapter~8).}.
Transverse gluons don't interact with the infinitely heavy quark, and hence there is only one vertex correction. It vanishes for the same reason:
\begin{equation}
\raisebox{-15.2mm}{\includegraphics{gam.eps}} = 0\,.
\label{Gamma}
\end{equation}
Therefore, we only need to consider vacuum polarization corrections (Fig.~\ref{F:VacPol}):
\begin{equation}
U_{\vec{q}} = C_F g_0^2 D(0,\vec{q})\,,
\label{Uq}
\end{equation}
where the Coulomb-gluon self-energy is
\begin{equation}
\raisebox{-3.2mm}{\includegraphics{pi.eps}} = - i \vec{q}\,^2 \Pi(q)\,,
\label{Pi}
\end{equation}
and its propagator is
\begin{equation}
D(q) = - \frac{1}{\vec{q}\,^2} \frac{1}{1-\Pi(q)} = - \frac{1}{\vec{q}\,^2} \left(1 + \Pi(q)\right)
\label{Dq}
\end{equation}
(up to the first correction).
\begin{figure}[ht]
\begin{center}
\begin{picture}(24,22)
\put(12,11){\makebox(0,0){\includegraphics{vac.eps}}}
\put(12,5){\makebox(0,0){$(0,\vec{q})$}}
\end{picture}
\end{center}
\caption{Vacuum polarization corrections}
\label{F:VacPol}
\end{figure}

\subsection{Quark and transverse-gluon loops}
\label{S:Quark}

There are several contributions to the Coulomb-gluon self-energy at one loop. We begin with the quark-loop contribution $\Pi_q$. It is Lorentz-invariant, and is given by the spectral representation
\begin{equation}
\Pi_q(q^2) = \int\limits_0^\infty \frac{\rho_q(s)\,ds}{q^2-s+i0}
\label{Spectral}
\end{equation}
with a positive spectral density
\begin{equation}
\rho_q(s) \geqslant 0\,.
\label{Positive}
\end{equation}
The propagator (and $U_{\vec q}$) is a superposition of the massless propagator and massive ones having various masses, with positive weights:
\begin{equation}
\begin{split}
U_{\vec{q}} &= - C_F \frac{g_0^2}{\vec{q}\,^2} \left[ 1 - \int \frac{\rho_q(s)\,ds}{\vec{q}\,^2+s} + \cdots \right]\\
&= - C_F g_0^2 \left[ \left( 1 - \int \frac{\rho_q(s)\,ds}{s} \right) \frac{1}{\vec{q}\,^2} + \int \frac{\rho_q(s)\,ds}{s\,(\vec{q}\,^2+s)} + \cdots \right]\,.
\end{split}
\label{Supq}
\end{equation}
Therefore, the potential $U(r)$ is a superposition of the Coulomb potential and Yukawa ones having various radii, with positive weights:
\begin{equation}
U(r) = - C_F \frac{g_0^2}{4\pi r} \left[ 1 - \int \frac{\rho_q(s)\,ds}{s} + \int \frac{\rho_q(s)}{s}\,e^{-\sqrt{s}\,r}\,ds + \cdots \right]\,.
\label{Supr}
\end{equation}
The farther we are from the source, the more Yukawa potentials switch off, and the weaker is the interaction. We have screening. In QED, this is the only effect.

It seems that screening follows from very general principles: causality (which allows one to express $\Pi(q^2)$ via its imaginary part $\rho(s)$, as in~(\ref{Spectral})) and unitarity (which says that this imaginary part is a sum of squared moduli of transition amplitudes to intermediate states, whence~(\ref{Positive})). The only chance \emph{not} to get screening is to find some contribution which is not given by the spectral representation.
\begin{figure}[ht]
\begin{center}
\begin{picture}(22,20)
\put(11,10){\makebox(0,0){\includegraphics{piq.eps}}}
\end{picture}
\end{center}
\caption{Quark loop discontinuity at the cut}
\label{F:qloop}
\end{figure}

Let's complete our calculation. Using the Cutkosky rule (Fig.~\ref{F:qloop}) in the $q$ rest frame, we see that two integrations are eliminated by two $\delta$-functions, and
\begin{equation}
\rho_q(s) = T_F n_f \frac{g_0^2 s^{-\varepsilon}}{(4\pi)^{d/2}} \left( \frac{4}{3} + \mathcal{O}(\varepsilon) \right)\,.
\label{rhoq}
\end{equation}
The ultraviolet divergence of $\Pi_q$ is
\begin{equation*}
\left. \int \frac{\rho_q(s)\,ds}{s+\vec{q}\,^2} \right|_{\text{UV}} = \frac{4}{3} T_F n_f \frac{g_0^2}{(4\pi)^{d/2}} \int\limits_{\sim\vec{q}\,^2}^\infty s^{-1-\varepsilon} ds = \frac{4}{3} T_F n_f \frac{\alpha_s}{4\pi\varepsilon}\,.
\end{equation*}
Keeping only this divergent part, we obtain
\begin{equation}
U_{\vec{q}} = - C_F \frac{g_0^2}{\vec{q}\,^2} \left[ 1 - \frac{4}{3} T_F n_f \frac{\alpha_s}{4\pi\varepsilon} + \cdots \right]\,,
\label{Udq}
\end{equation}
where dots are contributions of other diagrams.

Qualitatively, the transverse-gluon loop is just like the quark one. Its contribution in the Coulomb gauge $\Pi_t(q)$ is not Lorentz-invariant, and the spectral representation is a little more complicated:
\begin{equation}
\Pi_t(q_0^2,\vec{q}\,^2) = \int \frac{\rho_t(s,\vec{q}\,^2)\,ds}{q^2-s+i0}\,.
\label{Spectral2}
\end{equation}
When calculating the discontinuity by the Cutkosky rule (Fig.~\ref{F:gt}), we cannot use the $q$ rest frame, and there is one extra integration not eliminated by $\delta$-functions. The general expression for $\rho_t(s,\vec{q}\,^2)$ is rather complicated (this is the only point in which the Coulomb-gauge derivation is more complicated than the usual covariant one).
\begin{figure}[ht]
\begin{center}
\begin{picture}(32,20)
\put(16,10){\makebox(0,0){\includegraphics{pit.eps}}}
\end{picture}
\end{center}
\caption{Transverse-gluon loop discontinuity at the cut}
\label{F:gt}
\end{figure}
Fortunately, we don't need this general expression.
In order to obtain the ultraviolet divergence of $\Pi_t(q)$, its limiting form at $s\gg\vec{q}\,^2$ is sufficient. We can simplify the integrand of our single integral, and obtain
\begin{equation}
\rho_t(s,\vec{q}\,^2) = C_A \frac{g_0^2 s^{-\varepsilon}}{(4\pi)^{d/2}} \left( \frac{1}{3} + \mathcal{O}(\vec{q}\,^2/s,\varepsilon) \right)\,.
\end{equation}
The ultraviolet-divergent contribution of this diagram to $U_{\vec{q}}$ is
\begin{equation}
U_{\vec{q}} = - C_F \frac{g_0^2}{\vec{q}\,^2} \left[ 1 - \frac{1}{3} C_A \frac{\alpha_s}{4\pi\varepsilon} + \cdots \right]\,.
\label{Udt}
\end{equation}

\subsection{Coulomb gluon}
\label{S:Coulomb}

There is one more contribution: the loop with a Coulomb gluon and a transverse one. We can understand its sign qualitatively. Let's consider an infinitely heavy quark and antiquark at a distance $r$ (in the colour-singlet state) and the transverse-gluon field (Fig.~\ref{F:QQbarVac}). If we neglect the interaction, the ground-state energy is just
\begin{equation}
E_0 = U(r)\,,
\label{E0}
\end{equation}
because the vacuum energy of the transverse-gluon field is $0$.
\begin{figure}[ht]
\begin{center}
\begin{picture}(40,22)
\put(12,11){\makebox(0,0){\includegraphics{sca0.eps}}}
\put(26,11){\makebox(0,0){$+$}}
\put(28,13){\makebox(0,0)[l]{transverse-gluon}}
\put(28,9){\makebox(0,0)[l]{vacuum}}
\end{picture}
\end{center}
\caption{$Q$, $\bar{Q}$, and the transverse-gluon field}
\label{F:QQbarVac}
\end{figure}
Now let's take into account their interaction in the second order of perturbation theory. Transverse gluons don't interact with infinitely heavy quarks; they can only couple to Coulomb gluons exchanged between the quark and the antiquark (Fig.~\ref{F:Pert2}). The ground-state energy decreases in the second order of perturbation theory (the second-order correction to the ground-state energy is always negative). This means that the Coulomb attraction becomes stronger --- antiscreening~\cite{G:77}!
\begin{figure}[ht]
\begin{center}
\begin{picture}(24,22)
\put(12,11){\makebox(0,0){\includegraphics{scac.eps}}}
\end{picture}
\end{center}
\caption{Interaction in the second order of perturbation theory}
\label{F:Pert2}
\end{figure}

This loop (Fig.~\ref{F:gc}) depends on $\vec{q}$ but not on $q_0$, because we can always route the external energy $q_0$ via the Coulomb propagator, and it does not depend on energy. Therefore, this loop has no cut in the $q_0$ complex plane, and is not given by the spectral representation. Speaking more formally, we can say that the spectral density is $0$, and the whole result is given by the subtraction term, which does not depend on $q_0$ (but depends on $\vec{q}$).
\begin{figure}[ht]
\begin{center}
\begin{picture}(32,12.5)
\put(16,6.25){\makebox(0,0){\includegraphics{pic.eps}}}
\end{picture}
\end{center}
\caption{Loop with Coulomb and transverse gluons}
\label{F:gc}
\end{figure}
This diagram can be easily calculated. Only the transverse-gluon propagator depends on $k_0$:
\begin{equation*}
\Pi_c(\vec{q}\,^2) = \int \frac{d^d k}{(2\pi)^d} \frac{f(\vec{k},\vec{q})}{k^2+i0}\,.
\end{equation*}
The integral in $k_0$ can be taken first:
\begin{equation*}
\int \frac{dk_0}{2\pi} \frac{1}{k_0^2-\vec{k}\,^2+i0} = - \frac{i}{2 \left(\vec{k}\,^2\right)^{1/2}}\,.
\end{equation*}
We are left with a $(d-1)$-dimensional integral similar to the usual $d$-dimensional massless loop. It can be reduced to $\Gamma$-functions via Feynman parametrization. The result is
\begin{equation}
\Pi_c(\vec{q}\,^2) = C_A \frac{g_0^2 \left(\vec{q}\,^2\right)^{-\varepsilon}}{(4\pi)^{d/2}} \left( \frac{4}{\varepsilon} + \mathcal{O}(1) \right)\,.
\label{Pic}
\end{equation}
Finally, let's assemble all our findings. Keeping only ultraviolet-divergent terms in the corrections, we have the momentum-space potential
\begin{equation}
U_{\vec{q}} = - C_F \frac{g_0^2}{\vec{q}\,^2} \Biggl\{1 + \frac{g_0^2 (\vec{q}\,^2)^{-\varepsilon}}{(4\pi)^{d/2}} \biggl[ \left( \left(4 - \frac{1}{3}\right) C_A - \frac{4}{3} T_F n_f \right) \frac{1}{\varepsilon} + \cdots \biggr] \Biggr\}\,.
\label{Utot}
\end{equation}
When expressed via the renormalized $\alpha_s(\mu)$:
\begin{equation}
\frac{g_0^2}{(4\pi)^{d/2}} = \mu^{2\varepsilon} \frac{\alpha_s(\mu)}{4\pi} Z_\alpha e^{\gamma\varepsilon}\,,\qquad
Z_\alpha = 1 - \beta_0 \frac{\alpha_s}{4\pi\varepsilon}\,,
\label{Renorm}
\end{equation}
this potential must be finite. Therefore,
\begin{equation}
\beta_0 = \left(4 - \frac{1}{3}\right) C_A - \frac{4}{3} T_F n_f\,.
\label{beta0}
\end{equation}
For QCD ($C_A=3$, $T_F=1/2$) this gives $\beta_0 = 11 - \frac{2}{3} n_f$, which is positive for $n_f\le16$ flavours. The antiscreening term $4$ comes from the Coulomb-gluon loop; it outweighs the screening term $-1/3$ from the transverse-gluon loop.

\subsection{Ward identity}
\label{S;Ward}

Until now, we used the potential between an infinitely heavy quark and an infinitely heavy antiquark. Or, if we cut our picture in two halves, the infinitely-heavy quark -- Coulomb gluon vertex. This vertex is convenient, because both the external-leg renormalization and the vertex corrections vanish (Sect.~\ref{S:Corr}). However, we can use some other vertex, for example, the finite-mass quark -- Coulomb gluon vertex, equally easily. The tool needed to this end is the Ward identity.

The Coulomb gauge is ghost-free. Therefore, it is natural to expect that the Ward identities in this gauge are simple, like in QED, in contrast to Ward--Slavnov--Taylor identities in covariant gauges, which are complicated by extra ghost terms.

We shall proceed exactly as in QED. There, an external photon leg insertion with its polarization 4-vector parallel to its 4-momentum gives a difference of two propagators; most terms cancel pairwise. Now we have a Coulomb gluon, which is polarized along time. Therefore, let's set its incoming momentum to $q=\omega v$, where $v=(1,\vec{0})$ is the 4-velocity of the reference frame. We denote such an external Coulomb gluon by a leg with a black triangle. A dot near a propagator means that its momentum is shifted by $q$. It is easy to check the identities
\begin{align}
&\omega \raisebox{-0.4mm}{\includegraphics{wq1.eps}} = g_0 \raisebox{-0.4mm}{\includegraphics{wqc.eps}} \otimes \left[ \raisebox{-0.4mm}{\includegraphics{wq2.eps}} - \raisebox{-0.4mm}{\includegraphics{wq3.eps}} \right]\,,
\label{Wardq}\\
&\omega \raisebox{-0.4mm}{\includegraphics{wg1.eps}} = g_0 \raisebox{-3.4mm}{\includegraphics{wgc.eps}} \otimes \left[ \raisebox{-0.4mm}{\includegraphics{wg2.eps}} - \raisebox{-0.4mm}{\includegraphics{wg3.eps}} \right]\,,
\label{Wardg}\\
&\raisebox{-0.4mm}{\includegraphics{wc1.eps}} = \raisebox{-0.4mm}{\includegraphics{wc2.eps}} = 0\,,
\label{Ward0}
\end{align}
where the right-hand sides are written as $\text{(colour structure)} \otimes \text{(Lorentz structure)}$, and the curved arrow in~(\ref{Wardg}) shows the order of indices in the colour structure $i f^{abc}$. The last equality~(\ref{Ward0}) is obvious: let's consider one of these diagrams with the transverse-gluon line removed; this object is a vector (has one index), and depends on two vectors, $v$ and the transverse-gluon momentum $p$; and the transverse-gluon propagator is transverse to both $v$ and $p$.
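For instance, when the two legs adjacent to the inserted Coulomb gluon are static-quark propagators~(\ref{QuarkProp}), the identity~(\ref{Wardq}) is (dropping the colour factor and the $i0$ prescriptions) just the elementary partial fraction
\begin{equation*}
\omega\,\frac{1}{(p_0+\omega)\,p_0} = \frac{1}{p_0} - \frac{1}{p_0+\omega}\,,
\end{equation*}
i.e.\ the difference of the shifted and unshifted propagators; it is this difference structure that produces the pairwise cancellations used below.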
In covariant gauges, the right-hand side of~(\ref{Wardg}) contains extra terms, where one of the gluon lines becomes longitudinally polarized. These terms produce ghost propagators, thus transforming simple Ward identities to more complicated Ward--Slavnov--Taylor ones. Now we are ready to apply these identities to the one-loop vertex. Let's consider the QED-like diagram with a transverse gluon: \begin{equation} \omega \raisebox{-6.2mm}{\includegraphics{qt1.eps}} = g_0 \raisebox{-4.7mm}{\includegraphics{qtc.eps}} \otimes \left[ \raisebox{-2.2mm}{\includegraphics{qt2.eps}} - \raisebox{-0.2mm}{\includegraphics{qt3.eps}} \right]\,. \label{Ward1} \end{equation} With a Coulomb gluon, \begin{equation} \begin{split} \omega \raisebox{-6.2mm}{\includegraphics{qc1.eps}} &= g_0 \raisebox{-4.7mm}{\includegraphics{qtc.eps}} \otimes \left[ \raisebox{-2.2mm}{\includegraphics{qc2.eps}} - \raisebox{-0.2mm}{\includegraphics{qc3.eps}} \right]\\ &= g_0 \raisebox{-4.7mm}{\includegraphics{qtc.eps}} \otimes \left[ \raisebox{-0.2mm}{\includegraphics{qc4.eps}} - \raisebox{-0.2mm}{\includegraphics{qc3.eps}} \right] = 0\,, \end{split} \label{Ward2} \end{equation} because we can route the extra momentum $q$ along the Coulomb-gluon line instead of the quark one, and the Coulomb propagator does not depend on an additional momentum $q=\omega v$. In QED, the only contribution is~(\ref{Ward1}) (and there is no colour factor in its right-hand side): the difference of the fermion self-energies with the momenta $p+q$ and $p$. In QCD, we also have \begin{equation} \omega \raisebox{-0.2mm}{\includegraphics{gt.eps}} = g_0 \left[ \raisebox{-4.7mm}{\includegraphics{gtc.eps}} - \raisebox{-4.7mm}{\includegraphics{qtc.eps}} \right] \otimes \left[ \raisebox{-0.2mm}{\includegraphics{qt4.eps}} - \raisebox{-0.2mm}{\includegraphics{qt3.eps}} \right]\,. \label{Ward3} \end{equation} Here we have used the definition of $i f^{abc}$ via the commutator in order to re-write the colour structure as a difference: one term is the same as in~(\ref{Ward1}) (and they cancel); the second one is the colour structure $C_F$ of the quark self-energy times that of the elementary vertex $i t^a$. Two remaining contributions vanish, due to~(\ref{Ward0}): \begin{equation} \raisebox{-0.2mm}{\includegraphics{gc1.eps}} = \raisebox{-0.2mm}{\includegraphics{gc2.eps}} = 0\,. \label{Ward4} \end{equation} Qualitatively, we can say that the non-abelian charge flows along both lines in the quark self-energy diagram; the longitudinal gluon insertion into one of these lines ``measures'' the charge flowing along this line; in total, the whole quark charge flows, thus giving the full colour structure of the self-energy $C_F$. The Ward identity provides a relation between the quark -- Coulomb gluon vertex \begin{equation} \raisebox{-11.7mm}{\begin{picture}(22,25) \put(11,12.5){\makebox(0,0){\includegraphics{ga.eps}}} \put(4.5,7){\makebox(0,0)[t]{$p$}} \put(15.5,15.5){\makebox(0,0)[l]{$q$}} \end{picture}} = i g_0 t^a \Gamma(p,q)\,,\qquad \Gamma(p,q) = \gamma_0 + \Lambda(p,q) \label{VertDef} \end{equation} at $q=\omega v$ and the quark self-energy \begin{equation} \raisebox{-4.2mm}{\begin{picture}(22,9) \put(11,5){\makebox(0,0){\includegraphics{si.eps}}} \put(4.5,3){\makebox(0,0)[t]{$p$}} \end{picture}} = - i \Sigma(p) \label{SigmaDef} \end{equation} which is related to the propagator: \begin{equation*} S(p) = \frac{1}{\rlap/p - m_0 - \Sigma(p)}\,. 
\end{equation*}
The Ward identity can be written as
\begin{equation}
\omega \Lambda(p,\omega v) = \Sigma(p) - \Sigma(p+\omega v)
\label{WardLam}
\end{equation}
or
\begin{equation}
\omega \Gamma(p,\omega v) = S^{-1}(p+\omega v) - S^{-1}(p)\,.
\label{WardGam}
\end{equation}
We have proved it at one loop, but it remains correct at higher loops, too.

Now let's recall how the coupling constant renormalization
\begin{equation*}
g_0 = Z_\alpha^{1/2} g
\end{equation*}
is derived. When expressed via the renormalized coupling $g$, the vertex and the propagator should be
\begin{equation*}
\Gamma = Z_\Gamma \Gamma_r\,,\qquad
S = Z_\psi S_r\,,
\end{equation*}
where the renormalized vertex and propagator are finite at $\varepsilon\to0$. The scattering amplitude is obtained by multiplying the proper vertex by the external-leg renormalization factors:
\begin{equation*}
g_0 \Gamma Z_\psi Z_A^{1/2} = g \Gamma_r Z_\alpha^{1/2} Z_\Gamma Z_\psi Z_A^{1/2}\,,
\end{equation*}
and it must be finite. Therefore, $Z_\alpha^{1/2} Z_\Gamma Z_\psi Z_A^{1/2}$ must be finite at $\varepsilon\to0$. But the only minimal renormalization constant finite at $\varepsilon\to0$ is 1:
\begin{equation}
Z_\alpha = \left( Z_\Gamma Z_\psi \right)^{-2} Z_A^{-1}\,.
\label{Zalpha}
\end{equation}
I.~e., in order to find the coupling-constant renormalization, one needs the vertex renormalization factor and all external-leg renormalization factors for this vertex.

The Ward identity makes things simpler. From~(\ref{WardGam}), $Z_\Gamma Z_\psi$ must be finite at $\varepsilon\to0$ (substituting $\Gamma=Z_\Gamma\Gamma_r$ and $S^{-1}=Z_\psi^{-1}S_r^{-1}$ shows that $Z_\Gamma Z_\psi\,\omega\Gamma_r$ equals a finite quantity), and hence
\begin{equation}
Z_\Gamma Z_\psi = 1\,.
\label{WardZ}
\end{equation}
Therefore,
\begin{equation}
Z_\alpha = Z_A^{-1}\,.
\label{ZalphaWard}
\end{equation}
The coupling constant renormalization is determined by the Coulomb-gluon propagator renormalization only. And this is exactly what we studied in Sects.~\ref{S:Quark}--\ref{S:Coulomb}.

\section{Chromomagnetic properties of the vacuum\\ (Nielsen 1981)}
\label{S:Niels}

\subsection{Dielectric and magnetic properties of the vacuum}
\label{S:Dielmag}

We shall start from QED, and then consider the non-abelian theory. We shall use a popular approach to renormalization due to Wilson. Suppose we have the path integral in momentum space. We integrate out fields with momenta $p>\Lambda$:
\begin{equation}
\int \prod_p d\phi_p e^{i S} = \int \prod_{p<\Lambda} d\phi_p e^{i S_\Lambda}\,,
\label{Wilson:path}
\end{equation}
where the effective action is defined by
\begin{equation}
e^{i S_\Lambda} = \int \prod_{p>\Lambda} d\phi_p e^{i S}\,.
\label{Wilson:Seff}
\end{equation}
When we are interested in processes with small momenta $p_i\ll\Lambda$, $S_\Lambda$ can be expressed via a local Lagrangian. At the leading order in $1/\Lambda$, it is the standard dimension-4 Lagrangian; it contains renormalized fields (at the scale $\Lambda$) and the coupling $g(\Lambda)$.

Now we also want to integrate out fields with momenta between $\Lambda'$ and $\Lambda$ (where $\Lambda'\ll\Lambda$):
\begin{equation}
e^{i S_{\Lambda'}} = \int \prod_{\Lambda'<p<\Lambda} d\phi_p e^{i S_\Lambda}\,,
\label{Wilson:Renorm}
\end{equation}
and to obtain $g(\Lambda')$. If we are interested in processes with characteristic momentum $p$, we should use $\Lambda'$ not too far from $p$, in order to avoid large logarithms. For example, if we consider the interaction between a quark and an antiquark at a distance $r$, we should use $\Lambda'\sim1/r$. Then the Coulomb potential will be simply $-e^2(\Lambda')/r$.
But if we start from the theory with a high cut-off $\Lambda$, then vacuum modes with momenta between $\Lambda'$ and $\Lambda$ will act as a dielectric medium, and the potential will be $-e^2(\Lambda)/(\varepsilon r)$:
\begin{equation}
e^2(\Lambda') = \frac{e^2(\Lambda)}{\varepsilon}\,,
\label{Dielmag:Renorm}
\end{equation}
where only modes with momenta between $\Lambda'$ and $\Lambda$ contribute to $\varepsilon$. If $\varepsilon>1$, we have screening; if $\varepsilon<1$ --- antiscreening (asymptotic freedom).

In the case of ordinary matter, its dielectric and magnetic properties are independent. But the vacuum should be Lorentz-invariant: signals should propagate with velocity 1,
\begin{equation}
\varepsilon \mu = 1\,.
\label{Dielmag:Lorentz}
\end{equation}
Therefore, a diamagnetic vacuum ($\mu<1$) means screening, and a paramagnetic one ($\mu>1$) --- asymptotic freedom.

When we switch a magnetic field $B$ on, the vacuum energy changes by
\begin{equation}
\Delta E_{\text{vac}} = \left(\mu^{-1} - 1\right) \frac{B^2}{2} V\,.
\end{equation}
We shall show that
\begin{equation}
\Delta E_{\text{vac}} = - \beta_0 \frac{g^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}} \cdot \frac{B^2}{2} V\,.
\label{Dielmag:DEvac}
\end{equation}
This means
\begin{equation}
\mu = \varepsilon^{-1} = 1 + \beta_0 \frac{g^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}}\,,
\label{Dielmag:mu}
\end{equation}
and therefore
\begin{equation}
e^2(\Lambda') = \left[1 + \beta_0 \frac{e^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}} \right]\,e^2(\Lambda)\,,
\end{equation}
i.e., $\beta_0$ is indeed the 1-loop $\beta$-function coefficient.

The vacuum energy of a charged scalar field (describing particles and antiparticles) is
\begin{equation}
E_{\text{vac}} = 2 \sum_i \frac{\omega_i}{2} = \sum_i \omega_i\,.
\label{Dielmag:Escal}
\end{equation}
For a charged fermion field, it is the energy of the Dirac sea
\begin{equation}
E_{\text{vac}} = - \sum_i \omega_i\,.
\label{Dielmag:Efermi}
\end{equation}
Therefore, in general we can use~(\ref{Dielmag:Escal}) multiplied by $(-1)^{2s}$.

\subsection{Pauli paramagnetism}
\label{S:Pauli}

How does the vacuum energy change when we switch the magnetic field $B$ on? There are two effects, spin and orbital, and they can be considered separately. First we discuss the interaction of the spin magnetic moment with the magnetic field. Without the field, a massless particle has the energy $\omega=k$. When the magnetic field is switched on, the energy becomes
\begin{equation}
\omega = \sqrt{k^2 - g_s s_z e B}\,,
\label{Pauli:omega}
\end{equation}
where $g_s$ is the gyromagnetic ratio for our spin-$s$ particle (we shall discuss this in Sect.~\ref{S:QED} in more detail). Suppose the magnetic field is along the $z$ axis. Massless particles only have 2 spin projections $s_z = \pm s$. The vacuum energy change at the order $B^2$ is
\begin{equation}
\begin{split}
\Delta E_{\text{Pauli}} &{}= (-1)^{2s} \int \frac{V\,d^3 k}{(2\pi)^3} \left[\sqrt{k^2 + g_s s e B} + \sqrt{k^2 - g_s s e B} - 2 k\right]\\
&{} = - (-1)^{2s} V \frac{(g_s s e B)^2}{4} \int \frac{d^3 k}{(2\pi)^3} \frac{1}{k^3} = - (-1)^{2s} V \frac{(g_s s e B)^2}{8 \pi^2} \int \frac{dk}{k}\\
&{} = - 2 (-1)^{2s} (g_s s)^2 \frac{e^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}} \cdot \frac{B^2}{2} V\,,
\end{split}
\label{Pauli:DE}
\end{equation}
where only modes with momenta between $\Lambda'$ and $\Lambda$ are included. Let's stress once more that what we are calculating is the \emph{vacuum} energy: there are no particles, only empty modes.
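To spell out the expansion used between the first and second lines of~(\ref{Pauli:DE}): with $a = g_s s\, e B \ll k^2$,
\begin{equation*}
\sqrt{k^2+a} + \sqrt{k^2-a} - 2k = -\frac{a^2}{4k^3} + \mathcal{O}\!\left(\frac{a^4}{k^7}\right)\,,\qquad
\int \frac{d^3 k}{(2\pi)^3}\,\frac{1}{k^3} = \frac{1}{2\pi^2}\int\frac{dk}{k} = \frac{1}{4\pi^2}\log\frac{\Lambda^2}{\Lambda^{\prime2}}\,.
\end{equation*}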
\subsection{Landau levels}
\label{S:LL}

Now let's discuss the effect of a magnetic field $B$ on the orbital motion. In order not to have complications related to spin, we consider a massless charged scalar field $\varphi$. Its Lagrangian is
\begin{equation}
L=(D_\mu\varphi)^+ D^\mu\varphi\,.
\label{LL:L}
\end{equation}
For a magnetic field $B$ along the $z$ axis, we can choose the vector potential as $A_y = B x$, $A_x=A_z=0$. Then the equation of motion is
\begin{equation}
\left[ \nabla^2 - e^2 B^2 x^2 - 2 i e B x \frac{\partial}{\partial y} + E^2 \right] \varphi = 0\,.
\label{LL:EOM}
\end{equation}
Its solutions have the form
\begin{equation}
\varphi = e^{i (k_y y + k_z z)} \varphi(x)\,.
\label{LL:sol}
\end{equation}
The equation for $\varphi(x)$ has the same form as the Schr\"odinger equation for a harmonic oscillator:
\begin{equation}
\left[ - \frac{1}{2} \frac{\partial^2}{\partial x^2} + \frac{\omega^2}{2} x^2 - E_n \right] \psi_n = 0\,.
\label{LL:osc}
\end{equation}
The oscillator energies are
\begin{equation}
E_n = \omega \left(n + \tfrac{1}{2}\right)\,.
\label{LL:En}
\end{equation}
Comparing~(\ref{LL:EOM}) with~(\ref{LL:osc}) (with the identifications $\omega = eB$ and $E_n = (E^2-k_z^2)/2$), we see that the energies of our massless particle in the magnetic field $B$ are
\begin{equation}
E^2 = k_z^2 + 2 e B \left(n + \tfrac{1}{2}\right)\,,
\label{LL:LL}
\end{equation}
and the corresponding wave functions are
\begin{equation}
\varphi = e^{i(k_y y + k_z z)} \psi_n\left(x - \frac{k_y}{eB}\right)\,.
\label{LL:phi}
\end{equation}
In other words, $E^2$ consists of discrete Landau levels of transverse motion plus free motion along the magnetic field.

Each Landau level has a high degree of degeneracy. In order to find it, let's put our particle into a large box $V=L_x\times L_y\times L_z$. Then the allowed longitudinal momenta are
\begin{equation*}
k_z = \frac{2\pi}{L_z} n_z\,;
\end{equation*}
therefore, the number of allowed modes in the interval $d k_z$ is
\begin{equation*}
d n_z = \frac{L_z\,d k_z}{2\pi}\,.
\end{equation*}
Similarly, the allowed values of $k_y$ are
\begin{equation*}
k_y = \frac{2\pi}{L_y} n_y\,.
\end{equation*}
As we see from~(\ref{LL:phi}), $k_y$ is related to the $x$ coordinate of the center of the Larmor orbit. It must be inside our box:
\begin{equation*}
\frac{k_y}{eB} \in [0, L_x]\,.
\end{equation*}
Therefore,
\begin{equation*}
n_y \in \left[0, \frac{eB\,L_x L_y}{2\pi}\right]\,.
\end{equation*}
Energy does not depend on $k_y$. Hence the degeneracy of each Landau level is
\begin{equation}
\frac{eB\,L_x L_y}{2\pi}\,.
\label{LL:degen}
\end{equation}
It is equal to the magnetic flux through our box ($B\,L_x L_y$) measured in flux quanta $2\pi/e$.

The spectrum of $E_\bot^2=E^2-k_z^2$ at $B=0$ is continuous. The number of states in the interval $d E_\bot^2$ is
\begin{equation*}
\frac{L_x L_y}{4\pi} d E_\bot^2\,.
\end{equation*}
When the magnetic field $B$ is switched on, each interval $\Delta E_\bot^2 = 2eB$ is contracted into a single Landau level (Fig.~\ref{F:Landau}). The number of states in each interval is the Landau level degeneracy~(\ref{LL:degen}).
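As a numerical sanity check of this counting (a purely illustrative sketch with arbitrary sample values), one can compare the number of transverse states below some $E_\bot^2$ in the two spectra of Fig.~\ref{F:Landau}:
\begin{verbatim}
from math import pi

eB, Lx, Ly = 1.0, 30.0, 30.0   # illustrative values (natural units)
Emax2 = 50.0                   # count states with E_perp^2 < Emax2

# B = 0: continuous spectrum, Lx*Ly/(4*pi) states per unit E_perp^2
n_free = Lx * Ly / (4 * pi) * Emax2

# B > 0: Landau levels E_perp^2 = 2*eB*(n + 1/2), each with
# degeneracy eB*Lx*Ly/(2*pi)
n_levels = sum(1 for n in range(1000) if 2 * eB * (n + 0.5) < Emax2)
n_landau = n_levels * eB * Lx * Ly / (2 * pi)

print(n_free, n_landau)  # the two counts agree: states are only
                         # redistributed, not created or destroyed
\end{verbatim}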
\begin{figure}[ht] \begin{center} \begin{picture}(52,52) \put(26,28.5){\makebox(0,0){\includegraphics{ll.eps}}} \put(6,2){\makebox(0,0){$B=0$}} \put(36,2){\makebox(0,0){$B$}} \put(8,50){\makebox(0,0)[l]{$E_\bot^2/(2eB)$}} \put(5,6){\makebox(0,0)[r]{0}} \put(5,16){\makebox(0,0)[r]{1}} \put(5,26){\makebox(0,0)[r]{2}} \put(5,36){\makebox(0,0)[r]{3}} \put(5,46){\makebox(0,0)[r]{4}} \put(47,11){\makebox(0,0)[l]{$1/2$}} \put(47,21){\makebox(0,0)[l]{$3/2$}} \put(47,31){\makebox(0,0)[l]{$5/2$}} \put(47,41){\makebox(0,0)[l]{$7/2$}} \end{picture} \end{center} \caption{Continuous spectrum at $B=0$ and Landau levels} \label{F:Landau} \end{figure} The vacuum energy of our massless charged scalar field in the magnetic field $B$ is \begin{equation} E_{\text{vac}} = \sum_{n=0}^\infty f\left(n + \tfrac{1}{2}\right)\,, \label{LL:Evac} \end{equation} where \begin{equation} f(x) = \frac{eBV}{(2\pi)^2} \int\limits_{-\infty}^{+\infty} \sqrt{k_z^2 + 2 e B x}\,d k_z\,. \end{equation} \subsection{Euler summation formula} \label{S:Euler} Integrals are more convenient than sums. If we have a smooth function $f(x)$ (i.e., its characteristic length is $L\gg1$), then, obviously, \begin{equation} \sum_{n=0}^N f\left(n + \tfrac{1}{2}\right) \approx \int\limits_0^{N+1} f(x)\,dx\,. \label{Euler:approx} \end{equation} But how to find a correction to this formula? Let's rewrite this integral as a sum of integrals over unit intervals: \begin{equation*} I = \int\limits_0^{N+1} f(x)\,dx = \sum_{n=0}^N \int\limits_{-1/2}^{1/2} f\left(n + \tfrac{1}{2} + x\right)\,dx\,. \end{equation*} The smooth function $f(x)$ can be expanded in a Taylor series in each interval: \begin{equation*} I = \sum_{n=0}^N \int\limits_{-1/2}^{1/2} \left[f\left(n + \tfrac{1}{2}\right) + \frac{1}{2} f''\left(n + \tfrac{1}{2}\right) x^2 + \cdots \right]\,dx \end{equation*} (terms with odd powers of $x$ don't contribute). Calculating the integrals, we get \begin{equation*} I = \sum_{n=0}^N f\left(n + \tfrac{1}{2}\right) + \frac{1}{24} \sum_{n=0}^N f''\left(n + \tfrac{1}{2}\right) + \cdots \end{equation*} The second sum on the right-hand side is a small correction (because $f''\sim f/L^2$); therefore, we can replace it by an integral (see~(\ref{Euler:approx})): \begin{equation*} I = \sum_{n=0}^N f\left(n + \tfrac{1}{2}\right) + \frac{1}{24} \int\limits_0^{N+1} f''(x)\,dx + \cdots = \sum_{n=0}^N f\left(n + \tfrac{1}{2}\right) + \left. \frac{1}{24} f'(x)\right|_0^{N+1} + \cdots \end{equation*} Finally, we arrive at the Euler summation formula \begin{equation} \sum_{n=0}^N f\left(n + \tfrac{1}{2}\right) = \int\limits_0^{N+1} f(x)\,dx - \left. \frac{1}{24} f'(x)\right|_0^{N+1} + \cdots \label{Euler:Sum} \end{equation} The correction here is of order $1/L^2$. It is easy to find a few more corrections, if desired. \subsection{Landau diamagnetism} \label{S:Landau} The vacuum energy in the magnetic field $B$~(\ref{LL:Evac}) can be rewritten using the Euler formula~(\ref{Euler:Sum}) as \begin{equation} E_{\text{vac}} = \int\limits_0^\infty f(x)\,dx - \left. \frac{1}{24} f'(x) \right|_0^\infty\,. \label{Landau:Evac} \end{equation} The integral here is the vacuum energy at $B=0$ (when the spectrum of $E_\bot^2$ is continuous, Fig.~\ref{F:Landau}).
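Before extracting the shift, the accuracy of~(\ref{Euler:Sum}) is easy to confirm on a smooth test function; in the following Python sketch the test function and the summation range are arbitrary choices.
\begin{verbatim}
# Check of the midpoint Euler summation formula (Euler:Sum) on a smooth
# test function with characteristic length L = 7; f and N are arbitrary.
import numpy as np
from scipy.integrate import quad

f  = lambda t: np.exp(-t / 7.0)
fp = lambda t: -np.exp(-t / 7.0) / 7.0     # f'
N = 50

lhs = sum(f(n + 0.5) for n in range(N + 1))
integral, _ = quad(f, 0, N + 1)
rhs = integral - (fp(N + 1) - fp(0)) / 24.0
print(lhs, rhs)    # agree up to the next correction, of order 1/L^4
\end{verbatim}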
The shift of the vacuum energy due to the magnetic field is \begin{equation} \Delta E_{\text{Landau}} = \frac{1}{24} f'(0) = \frac{e^2 B^2 V}{48 \pi^2} \int \frac{dk}{k} = \frac{1}{3} \frac{e^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}} \cdot \frac{B^2}{2} V\,, \label{Landau:DE} \end{equation} where only modes with momenta between $\Lambda'$ and $\Lambda$ are included% \footnote{Many subtleties have been swept under the carpet in this derivation sketch. A somewhat more accurate derivation~\cite{N:81} yields the same result.}. \subsection{The QED result} \label{S:QED} The full $\Delta E_{\text{vac}}$ in QED is the sum of the spin contribution~(\ref{Pauli:DE}) and the orbital one~(\ref{Landau:DE}): \begin{equation} \Delta E_{\text{vac}} = - \beta_0 \frac{e^2}{(4\pi)^2} \log\frac{\Lambda^2}{\Lambda^{\prime2}} \cdot \frac{B^2}{2} V\,, \label{QED:DE} \end{equation} where \begin{equation} \beta_0 = 2 \sum_s (-1)^{2s} \left[ (g_s s)^2 - \frac{n_s}{6} \right] \label{QED:beta0} \end{equation} is the sum over all charged fields, and $n_s$ is the number of polarization states: $n_0=1$, $n_{s\neq0}=2$. Let's demonstrate that a Dirac particle (e.g., the electron) has gyromagnetic ratio $g_{1/2}=2$. For a massless particle, the Dirac equation is \begin{equation} \rlap{\hspace{0.2em}/}D\psi = 0\,,\qquad D_\mu = \partial_\mu - i e A_\mu\,. \label{QED:Dirac} \end{equation} Let's multiply it by $\rlap{\hspace{0.2em}/}D$: \begin{equation*} \rlap{\hspace{0.2em}/}D^2 \psi = 0\,,\qquad \rlap{\hspace{0.2em}/}D^2 = \partial^2 - e^2 A^2 - 2 i e A^\mu \partial_\mu - i e \gamma^\mu \gamma^\nu \partial_\mu A_\nu \end{equation*} (in the last term, $\partial_\mu$ acts only on $A_\nu$). If we suppose that $\partial\cdot A=0$, \begin{equation*} \rlap{\hspace{0.2em}/}D^2 = D^2 - \frac{ie}{4} F_{\mu\nu} [\gamma^\mu,\gamma^\nu]\,. \end{equation*} We choose $A^\mu=(0,0,B x^1,0)$, then $F_{12}=-F_{21}=-B$ and \begin{equation*} \rlap{\hspace{0.2em}/}D^2 = D^2 + i e B \gamma^1 \gamma^2 = D^2 + 2 e B s_z\,. \end{equation*} The equation of motion in the magnetic field $B$ (directed along $z$) becomes \begin{equation} \left[ \nabla^2 - e^2 B^2 x^2 - 2 i e B x \frac{\partial}{\partial y} + 2 e B s_z + E^2 \right] \psi = 0\,. \label{QED:EOM} \end{equation} Its energy spectrum is \begin{equation} E^2 = k_z^2 + 2 e B \left(n + \tfrac{1}{2}\right) - 2 e B s_z \label{QED:E} \end{equation} (see Sect.~\ref{S:LL})% \footnote{Strictly speaking, we should consider spin and orbital effects together, using~(\ref{QED:E}). It is easy to see that at the order $B^2$ they can be treated separately.}. Comparing this with~(\ref{Pauli:omega}), we see that \begin{equation} g_{1/2} = 2\,. \label{QED:g} \end{equation} So, the electron contribution to $\beta_0$~(\ref{QED:beta0}) is $-4/3$; if a charged scalar particle exists, it contributes $-1/3$. Note that for $s=1/2$ the spin effect outweighs the orbital one. However, due to the factor $(-1)^{2s}$, the spin effect leads to diamagnetism, and the orbital one to paramagnetism. This is because here we are interested in the Dirac sea. In the physics of metals, we are interested in positive-energy electrons (below the Fermi surface), and the spin effect gives Pauli paramagnetism, while the orbital one --- Landau diamagnetism. \subsection{The QCD result} \label{S:QCD} Now it is easy to obtain $\beta_0$ in QCD. We have a chromomagnetic field instead of a magnetic one.
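Before working out the colour factors, it may be useful to tabulate the contributions to~(\ref{QED:beta0}) numerically; a small Python sketch in exact rational arithmetic, covering just the two cases discussed above:
\begin{verbatim}
# Contributions to (QED:beta0), 2 (-1)^{2s} [ (g_s s)^2 - n_s/6 ],
# in exact rational arithmetic; only the two cases above are listed.
from fractions import Fraction

def beta0_term(two_s, gs_times_s, n_s):
    # two_s = 2s; gs_times_s = g_s * s; n_s = number of polarizations
    return 2 * Fraction(-1)**two_s * (Fraction(gs_times_s)**2
                                      - Fraction(n_s, 6))

print(beta0_term(1, 1, 2))    # Dirac fermion, g = 2, s = 1/2: -4/3
print(beta0_term(0, 0, 1))    # charged scalar: -1/3
\end{verbatim}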
Let's choose its colour orientation along an axis $a_0$ such that $t^{a_0}$ is diagonal (for the $SU(3)$ colour group with the standard choice of the generators $t^a$, $t^8$ is diagonal, and we choose $a_0=8$). The quark contribution follows from the electron one in QED. The contribution to $\beta_0$ is proportional to the charge squared. The sum of squares of colour ``charges'' of a quark is $\Tr t^{a_0} t^{a_0}$ (no summation). Recalling \begin{equation*} \Tr t^a t^b = T_F \delta^{ab}\,, \end{equation*} we arrive at the contribution \begin{equation} - \left(1 - \tfrac{1}{3}\right) 2 T_F n_f \label{QCD:quark} \end{equation} of $n_f$ quark flavours. If there were scalar quarks, each flavour would contribute \begin{equation*} - \frac{1}{3} 2 T_F\,. \end{equation*} What about gluons? First of all, we have to find their gyromagnetic ratio $g_1$. We shall do this for the $SU(2)$ colour group, because calculations are simpler in this case; the result will be valid for other colour groups, too. Let's consider the $SU(2)$ Yang--Mills equation \begin{equation} D^\nu G^a_{\mu\nu} = \left( \partial^\nu \delta^{ab} + g \varepsilon^{acb} A^{c\nu} \right) G^b_{\mu\nu} = 0\,. \label{QCD:YM} \end{equation} The external field is $A^3_\mu$. We linearize in the small components $A^{1,2}_\mu$. For $A^-_\mu=A^1_\mu-i A^2_\mu$ we get \begin{equation*} D^\nu G^-_{\mu\nu} + i g G^3_{\mu\nu} A^{-\nu} = 0\,, \end{equation*} where \begin{equation*} G^-_{\mu\nu} = D_\mu A^-_\nu - D_\nu A^-_\mu\,,\qquad D_\mu = \partial_\mu - i g A^3_\mu\,. \end{equation*} In the $D^\mu A^-_\mu=0$ gauge, the equation of motion becomes \begin{equation*} D^2 A^-_\mu - 2 i g G^3_{\mu\nu} A^{-\nu} = 0 \end{equation*} (we have used $[D_\mu,D_\nu]=-i g G^3_{\mu\nu}$). Our external field is oriented along $z$ in space and along 3 in colour: $G^3_{12} = -G^3_{21} = -B$. Using $A^{-2} = i s_z A^{-1}$ ($s_z=\pm1$), we finally obtain \begin{equation} \left[ D^2 + 2 g B s_z \right] A^{-1} = 0\,. \label{QCD:EOM} \end{equation} This equation looks exactly the same as~(\ref{QED:EOM}), and hence $g_1=2$. Gluons with colour $a_1$ such that $t^{a_1}$ is diagonal don't interact with our chromomagnetic field (for the standard $SU(3)$ generators, $a_1=3$). All other gluons can be arranged into pairs with positive and negative ``colour charges'' (particles and antiparticles). The sum of their squares (both signs!) is $C_A$: in the adjoint representation \begin{equation*} \Tr t^a t^b = C_A \delta^{ab}\,, \end{equation*} i.e.\ we have to replace $2 e^2 \to C_A g^2$ in the QED results. Finally, we arrive at \begin{equation} \beta_0 = \left(4 - \tfrac{1}{3}\right) C_A - \left(1 - \tfrac{1}{3}\right) 2 T_F n_f\,. \label{QCD:beta0} \end{equation} The Pauli paramagnetism of the gluon vacuum, $(g_1\cdot1)^2=4$, is stronger than its Landau diamagnetism, $-1/3$. This leads to antiscreening (asymptotic freedom). \section*{Acknowledgments} I am grateful to I.B.~Khriplovich for discussions of~\cite{Kh:69}.
\section*{Introduction} On-chip solid-state refrigeration has long been sought for various applications in the sub-kelvin temperature regime, such as cooling astronomical detectors \cite{Moseley,Richards,MillerAPL08}. In a Normal metal - Insulator - Superconductor (NIS) junction \cite{NahumAPL,MuhonenRPP,GiazottoRPM06}, the gap in the superconductor density of states ensures that only high-energy electrons tunnel out of the normal metal or, depending on the bias, only low-energy ones tunnel in, so that the electronic bath as a whole is cooled. In SINIS devices based on aluminum, the electronic temperature can drop from 300 mK down to below 100 mK at the optimum bias point. While this level of performance has been demonstrated in micron-scale devices \cite{PekolaPRL04,RajauriaPRL07} with a cooling power in the picowatt range, a difficulty arises in devices with the large-area junctions needed for a sizable cooling power approaching the nanowatt range. For instance, a high-power refrigerator has been shown to cool an external object from 290 mK down to about 250 mK \cite{UllomAPL13}. One of the main limitations to NIS coolers' full performance is the presence in the superconducting leads of non-equilibrium quasi-particles arising from the high current running through the device. The low quasi-particle relaxation rate and thermal conductivity in a superconductor confine these hot particles to the vicinity of the junction and lead to severe overheating in the superconducting electrodes. There are several methods for reducing the accumulation of quasi-particles in a superconductor. For example, a small magnetic field \cite{PeltonenPRB10} can be used to introduce vortices that trap quasi-particles. This approach is however not applicable to electronic coolers with large-area junctions, since a vortex also reduces the cooling performance if it resides within a junction. The most common method is to use a normal metal coupled to the superconductor as a quasi-particle trap: quasi-particles migrate to the normal metal and relax their energy there through electron-electron and electron-phonon interaction. In the typical case of a fabrication process based on angle evaporation, quasi-particle traps are formed by the structures mirroring each superconducting electrode, sitting on a side of the cooling junction and featuring the same oxide barrier layer. The trapping efficiency is usually moderate, but can be improved in two ways: the normal metal can be put in direct contact with the superconductor, so that out-of-equilibrium quasi-particles diffuse more efficiently to the trap \cite{Agulo04}, or the trap can be brought closer to the junction \cite{ONeilPRB12,Luukanen}. In both cases, it is important to prevent inverse proximity effect in the superconductor, which locally smears the superconductor density of states and degrades the cooling efficiency. The existence of an optimum transparency for the interface between the trap and the superconducting lead is therefore expected, but remains to be investigated \cite{Kauppila}. In this paper, we present an effective method to evacuate quasi-particles in a SINIS cooler, based on what we call a quasi-particle drain. It is a kind of quasi-particle trap made of a layer of normal metal located at a fraction of the superconducting coherence length away from the junction and separated from it by a thin insulating layer that stops the inverse proximity effect. We compare the cooling performance when varying the quasi-particle drain barrier transparency over a wide range.
The efficiency of the quasi-particle drain is demonstrated through electronic cooling from 300 to 130 mK at a 400 pW cooling power. A simple thermal model captures the effect of the quasi-particle drain reasonably well. \section*{Fabrication and measurement methods} \begin{figure}[tb] \begin{center} \includegraphics[width=0.8\columnwidth,keepaspectratio]{Fig1z.pdf} \caption{(a) False-colored scanning electron side micrograph of a SINIS cooler showing from top to bottom: 100 nm of Cu (cooled normal metal N), 200 nm of Al (superconductor S), and 200 nm of AlMn (quasi-particle drain D) with AlOx insulating layers in-between (not visible). The Cu layer is suspended on top of the Al/AlMn layers. (b) Top view of the cooler with the measurement setup. The array of holes connects the two NIS junctions, whose size is 70$\times$4 $\mu$m$^2$. Cu appears as orange and the Al/AlMn bilayer as yellow. The cooler junctions are highlighted by blue dashed lines, and the thermometer junctions are highlighted by red dashed lines. The circuit connected to the top electrodes probes the electron temperature of the normal island cooled by the two current-biased large junctions. (c) Current-voltage characteristics at a temperature of 50 mK of samples A, B, D, E and F with different tunnel barrier thicknesses between the quasi-particle drain and the superconducting leads, see Table 1.} \label{fig1} \end{center} \end{figure} We use the fabrication process described in \mbox{Ref.} \cite{NguyenAPL}, in which a SINIS cooler is obtained by photo-lithography and chemical etching of a NIS multilayer. Here, we add a normal metal layer at the bottom, which is used as a quasi-particle drain. The false-colored scanning electron micrograph in Fig. 1a shows a side view of a typical SINIS cooler, obtained by cutting the device with a focused ion beam. From top to bottom, the 100 nm-thick Cu normal metal layer to be cooled is suspended between two 200 nm-thick Al superconducting electrodes. The latter rest on two separate quasi-particle drains made of a 200 nm layer of AlMn deposited on a Si wafer. We choose AlMn \cite{AlMn,ClarkAPL} as the quasi-particle drain normal material because it behaves chemically like Al with respect to oxidation and etching. The layers are separated by two aluminum oxide barriers, which we name the drain barrier between the AlMn and Al layers and the cooler barrier between the Al and Cu layers. Sample parameters are given in Table 1. \small \begin{table} \caption{\label{table}SINIS cooler parameters. All coolers are made of layers of 100 nm of Cu, 200 nm of Al, and 200 nm of AlMn and have a junction size of 70$\times$4 $\mu$m$^2$. As an exception, sample F has no AlMn layer as a quasi-particle drain and the thickness of Al is 400 nm. We indicate the oxygen pressure and oxidation time used for the preparation of the AlMn/Al drain barrier and of the Al/Cu cooler barrier. $2R_N$ and $2\Delta$ are the normal state resistance and twice the superconducting energy gap, respectively, obtained by fitting the IV characteristics to \mbox{Eq. 1} for SINIS structures.
"Color" refers to the figures throughout the paper.} \begin{indented} \item[]\begin{tabular}{@{}lllllll} \br Sample & Drain barrier & Cooler barrier & $2R_N$ & $2\Delta$ &Color\\ & (mbar, second) & (mbar, second) & $(\Omega)$ & $(\mu eV)$ & & \\ \mr A & 1.3, 10 & 1.3, 300 & 0.71 & 398 & green \\%w45 B & 0.26, 10 & 1.3, 300 & 1.56 & 382 & blue \\%, w49 C & 0.18, 1 & 0.8, 180 & 0.55 & 370 & purple\\%, w51 D & 5$\times$10$^{-4}$, 10 & 1.3, 300 & 1.01 & 228 & gray \\%, w52 E & 0 & 1.3, 180 & 1.31 & 180 & gray \\%, w46 F & N/A & 1, 300 & 0.83 & 390 & red\\%, w23 \br \end{tabular} \end{indented} \end{table} \normalsize Figure 1b is an optical micrograph showing a top view of the cooler. The two NIS junctions area are outlined by dashed blue lines. They are separated by a trench in the Al and AlMn layers, created by chemical over-etch, underneath the array of holes in the suspended Cu layer. Each junction has an area of 70 $\times$ 4 $\mu$m$^2$ and is surrounded by two quasi-particle traps: a side trap made of Cu next to it, and a quasi-particle drain made of AlMn. Two additional small NIS junctions connected to the normal metal are used as a SINIS thermometer. Electron temperature is accessed by comparing the measured voltage $V_{probe}$ drop under a small bias current (typically 10 nA) to a calibration against the cryostat temperature. \section*{Experimental results and discussion} The current flowing through a NIS junction with a voltage V writes \begin{equation} I_{NIS}=\frac{1}{eR_N}\int{dE n_S(E)[f_N(E-eV)-f_S(E)]}, \end{equation} where $n_S=Re[E/\sqrt{E^2-\Delta^2}]$ is the normalized superconductor density of states, $\Delta$ is the superconducting gap, $R_N$ is the normal state junction resistance, and $f_{S,N}$ are the Fermi-Dirac energy distributions of electrons in S and N, respectively. If leakage is negligible, a low sub-gap current then means a low electronic temperature. Figure 1c shows the current voltage characteristics (IV curves) of different samples measured with a standard current-biased 4-probe technique in a dilution cryostat at 50 mK, with a focus on the low-bias regime. The two innermost curves stand for samples E and D, which have no drain barrier and a very thin drain barrier, respectively. Superconductivity in the Al layers is then affected by a strong inverse proximity effect, which results in a depressed critical temperature and a low superconducting gap $\Delta$ so that $2\Delta$ = 180 $\mu$eV and 228 $\mu$eV respectively. As samples A, B, and C are fabricated using a higher oxidation pressure for the drain barrier, they have a typical value for $2\Delta$ of about 350 $\mu$eV and a ratio of minimum conductance to normal-state conductance of about $10^{-4}$, indicating that inverse proximity effect is weak. These samples show a sharp IV characteristic, with sample C (not shown) behaving almost identically to sample B. The sole difference between samples A and B is the drain barrier. Having a thinner barrier, sample B exhibits less current at a given sub-gap bias, \mbox{i.e.} less overheating from quasi-particles in the superconductor. The drain barrier lets quasi-particles get efficiently trapped in the drain, while it stops the inverse proximity effect. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth,keepaspectratio]{cooling10.pdf} \caption{(a) Cooling power (solid lines) and input power (dashed lines) of cooler B (blue) and F (red) as a function of the bath temperature. 
(b) Calculated electron temperature (dashed lines) of the superconductor and measured normal metal temperature (dots, connected by solid lines as a guide to the eye) at the optimum point for samples A, B, C and F. The dashed gray line is a one-to-one line, indicating the bath temperature. The arrow marks the bath temperature of 250 mK, where sample C cools down to below 100 mK.} \label{cooling} \end{center} \end{figure} In a normal metal, the main heat flux from the electron system to the environment is through the coupling with the phonon system. We estimate the cooling power from the electron-phonon coupling power using \begin{equation} \dot{Q}_{ep}=\Sigma^N \mathcal V(T_N^5-T_{ph}^5), \end{equation} where $\Sigma^N=2\times 10^9$ WK$^{-5}$m$^{-3}$ is the electron-phonon coupling constant in Cu \cite{WellstoodPRB,MeschkeJLTP04}, $\mathcal V=83$ $\mu$m$^3$ is the island volume for all samples, $T_N$ is the measured electron temperature and $T_{ph}$ is the phonon temperature, both in the normal island. We assume here that $T_{ph}=T_{bath}$, which leads to an upper limit on the estimate of the cooling power. Figure 2a displays the calculated electron-phonon coupling power $\dot{Q}_{ep}$ (solid lines) and the total Joule power (dashed lines) $P_{IV}=I_{opt} V_{opt}$ measured at the optimum bias, \mbox{i.e.} the bias at which the electronic cooling is maximum. The typical power scale for all our present coolers is in the nanowatt range. Figure 2b presents the most important result of our work, namely the behavior of the normal island electronic temperature at optimum bias as a function of the bath temperature. All coolers have the largest temperature drop at a bath temperature around 300 mK. At lower temperatures, heating above the bath temperature is observed, which arises from the sub-gap current. Having only the side trap and no quasi-particle drain, sample F shows poor cooling, from 300 mK down to only 230 mK. Carrying a quasi-particle drain, sample A cools to 200 mK. With a reduced drain barrier, sample B cools to 160 mK. In sample C, we obtain the best cooling by further reducing the drain and the cooler barriers: from 300 mK, sample C cools down to 132 mK with a 400 pW cooling power. Most importantly, from a bath temperature of 250 mK, sample C cools down to below 100 mK, see the arrow in \mbox{Fig. 2b}. The electronic temperature of the superconducting electrodes can be accessed by balancing the normal metal electron-phonon coupling power (Eq. 2) with the NIS junction cooling power at a voltage bias $V$: \begin{equation} \dot{Q}_{NIS}=\frac{1}{e^2R_N}\int{dE(E-eV)n_S(E)[f_N(E-eV)-f_S(E)]}. \end{equation} The upper part of Fig. 2b (dashed lines) displays the electronic temperature of the superconductor $T_S$ derived when assuming again $T_{ph}=T_{bath}$ in the normal metal. Although the latter assumption calls for further discussion, the trend remains that the superconductor gets significantly overheated. \section*{Thermal model} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\columnwidth,keepaspectratio]{comsolx5.pdf} \caption{Finite element study of heat transport in coolers A, B and F. (a) Geometry of the model: N1 is the suspended part of the normal island, N2 stands for the junction areas, S for the superconducting electrodes (S1 under the junctions and S2 under the side traps), ST for the side traps, and D for the quasi-particle drains. The layers are separated by tunnel barriers.
(b) Electronic temperature drop as a function of bias at a 300 mK bath temperature; solid lines are measured data, while dashed lines are calculated from the model. (c) Temperature of the superconductor S (solid lines) and of the quasi-particle drain D (dashed lines) as a function of the distance from the junction. The black bar indicates the junction location.} \label{comsol} \end{center} \end{figure} In order to describe further the thermal transport in our devices, we consider a one-dimensional multilayered thermal model. It is a set of coupled heat equations for the different subsystems. The model geometry shown in figure 3a includes a normal metal island (N), superconducting leads (S), side traps (ST), and quasi-particle drains (D), similar to the geometry of the studied coolers (figure 1a). Although a non-equilibrium description of a biased NIS junction is possible \cite{VoutilainenPRB,SukumarPRB09}, we assume for simplicity that every part of the device can be described by a local temperature. We also neglect the inverse proximity effect in the superconductor density of electronic states, as can be justified by the sharpness of the measured IVs. In the following equations, we will make use of the quantities $\mathcal I_{NIS}$, $\mathcal P_{NIS}$ and $\mathcal P^i_{ep}$ that are the local current $I_{NIS}$, cooling power $\dot{Q}_{NIS}$ and electron-phonon coupling power $\dot{Q}_{ep}$ per unit area in the $xy$ plane, see Fig. 3a. In addition, $T_i$ is the temperature, $d_i$ is the thickness and $\kappa_i$ is the thermal conductivity in the element $i$. Here, temperature gradients are in the $x$-direction, charge and heat currents through the junctions ($\mathcal I_{NIS}$ and $\mathcal P_{NIS}$) are in the $z$-direction, and the electron-phonon couplings $\mathcal P_{ep}$ act in the bulk. In the normal metal, the temperature gradient is determined by the electronic thermal conductivity, the local electron-phonon coupling and, in the region N2 in contact with the junction, the local cooling power: \begin{eqnarray} N1: \kappa_N d_N \nabla^2T_N&=&-\mathcal P_{ep}^N(T_N),\nonumber\\ N2: \kappa_N d_N \nabla^2T_N&=&\mathcal P_{NIS}(V,T_S,T_N,R_N)-\mathcal P_{ep}^N(T_N). \end{eqnarray} In the superconductor, the cooling junction injects a heat current equal to the Joule power plus the cooling power. The effect of the drain in region S1 or of the side trap in region S2 is described using the expression for the heat flow through a NIS junction at zero bias, $\mathcal P_{NIS}(V=0)$. The weak electron-phonon coupling in S1 and S2 is neglected here, and \begin{eqnarray} S1: d_S \nabla.(\kappa_S\nabla T_S)&=&\mathcal I_{NIS}(V,T_S,T_N,R_N)V-\mathcal P_{NIS}(V,T_S,T_N,R_N)\nonumber\\ &&-\mathcal P_{NIS}(0,T_S,T_{D},R_{D}),\nonumber\\ S2: d_S \nabla.(\kappa_S\nabla T_S)&=&-\mathcal P_{NIS}(0,T_S,T_{D},R_{D})-\mathcal P_{NIS}(0,T_S,T_{ST},R_N). \end{eqnarray} The drain receives the heat coming from the superconductor through the drain barrier. As it is made of a normal metal, electron-phonon coupling is taken into account: \begin{eqnarray} \kappa_D d_D \nabla^2T_{D}&=&\mathcal P_{NIS}(0,T_S,T_{D},R_{D})-\mathcal P_{ep}^D(T_D), \end{eqnarray} and similarly for the side trap \begin{eqnarray} \kappa_N d_N \nabla^2T_{ST}&=&\mathcal P_{NIS}(0,T_S,T_{ST},R_N)-\mathcal P_{ep}^{ST}(T_{ST}). \end{eqnarray} We solved these coupled differential equations numerically \cite{comsol} using measured parameters from samples A, B, and F at 300 mK.
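For orientation, the local ingredients of the model, Eqs. (1)--(3), can be evaluated directly; the following Python sketch uses parameter values representative of sample C (illustrative assumptions, not the finite-element computation itself).
\begin{verbatim}
# Minimal numeric sketch of Eqs. (1)-(3) for a single junction; parameter
# values are illustrative (roughly sample C), not the finite-element model.
import numpy as np
from scipy.integrate import quad

kB = 8.617e-5                  # V/K (energies expressed in volts)
Delta = 185e-6                 # V, i.e. 2*Delta ~ 370 ueV
RN = 0.275                     # Ohm per junction (2R_N = 0.55 Ohm)
Sigma, Vol = 2e9, 83e-18       # W K^-5 m^-3 and m^3, as quoted above

nS = lambda E: abs((E / np.sqrt(E**2 - Delta**2 + 0j)).real)
f = lambda E, T: 1.0 / (1.0 + np.exp(np.clip(E / (kB * T), -600, 600)))

def Q_NIS(V, TS, TN):          # Eq. (3): heat extracted from N, in watts
    g = lambda E: (E - V) * nS(E) * (f(E - V, TN) - f(E, TS))
    return quad(g, -0.03, 0.03, points=[-Delta, Delta], limit=400)[0] / RN

T = 0.300                      # all electrons at the 300 mK bath here
V_opt = max(np.linspace(0.5, 1.1, 61) * Delta,
            key=lambda V: Q_NIS(V, T, T))
print(2 * Q_NIS(V_opt, T, T))                # two junctions: nanowatt scale
print(Sigma * Vol * (0.300**5 - 0.132**5))   # Eq. (2): ~4e-10 W = 400 pW
\end{verbatim}
The second printed number reproduces the $\sim$400 pW electron-phonon power quoted above for sample C cooling from 300 mK to 132 mK.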
From the measured electrical conductivity, we find that our sputtered Cu films have a residual resistance ratio of about 1.5, which is limited by disorder in the film \cite{Qian}. The deduced value of $\kappa_0=0.9$ WK$^{-2}$m$^{-1}$ ($\kappa_N=\kappa_0T$) is in agreement with the tabulated value in \cite{GiazottoRPM06}. We use $\kappa_D=0.47 \kappa_N$ \cite{AlMn} and $\Sigma^D=10^9$ WK$^{-5}$m$^{-3}$ \cite{ClarkAPL}. We take into account the exponential decay of $\kappa_S$ with temperature \cite{Timofeev}. For the cooler barrier, we use $R_N$ = 500 $\Omega \mu$m$^2$, close to the measured value for sample A (400 $\Omega \mu$m$^2$) and B (700 $\Omega \mu$m$^2$). Based on the different properties of samples A and B (Table 1), we use a drain barrier resistivity $R_D$ = 500 $\Omega \mu$m$^2$ for sample A and $R_D$ = 10 $\Omega \mu$m$^2$ for sample B. Note that $R_D$ is the only differing parameter between samples A and B. This guess (based on the value of the cooler barrier and the prediction in \cite{Kauppila}) is necessary as we cannot measure $R_D$ directly. In solving for sample F, equation (6) and the terms for $T_D$ in equation (5) are ignored, as F does not have a quasi-particle drain. Solving these equations gives a complete temperature profile $T_N$, $T_S$, $T_D$, and $T_{ST}$ of the device. Figure 3b compares $T_N$ as a function of bias voltage from the modeling results (dashed lines) with the experimental data (solid lines) at 300 mK. The good match between the two confirms that our simple model captures the essential physics of the device: a thinner drain barrier is the single parameter that enhances the performance of the cooler. Figure 3c shows the calculated temperature profile in the superconductor (solid lines) and in the drain (dashed lines) at optimum bias. It consistently shows that the superconducting electrodes get overheated over a typical length scale of about 50 $\mu$m from the junction. With a weak drain barrier transparency, sample A has a local superconductor temperature $T_S$ well above the drain temperature $T_D$. With an improved barrier transparency in sample B, its superconducting electrodes are well thermalized by the drains: one obtains $T_S\approx T_D$ at distances $x$ of about 20 $\mu$m away from the junction. The behavior is consistent with the magnitude of the electronic cooling observed, thus demonstrating that an improved drain barrier transparency is the key to an efficient quasi-particle drain. \section*{Conclusions} We have designed and studied electronic coolers capable of cooling an electronic bath from 250 mK, the base temperature of a He$^3$ cryostat, down to below 100 mK, the working regime of a dilution cryostat. With a fine-tuned barrier, the quasi-particle drain is efficient in thermalizing NIS junctions. The related geometry does not impose any limit on making the junction larger, thus opening the possibility to obtain a cooling power well above the present level of 400 pW. The fabrication is low-cost and involves only photolithography; the devices are of high quality and robust. On this basis, we are developing a platform that integrates coolers and sensors on a single chip, which has great potential for astrophysics and other low-temperature applications. \section*{Acknowledgments} We acknowledge the support of the European Community Research Infrastructures under the FP7 Capacities Specific Programme, MICROKELVIN project number 228464, the EPSRC grant EP/F040784/1, and the Academy of Finland through its LTQ CoE grant (project no. 250280).
Samples are fabricated in the Micronova Nanofabrication Center of Aalto University. We thank D. Gunnarsson for help with the sputtering system.
\section{Introduction and preliminaries} Let $\mathbb Z$, $\mathbb N$, $\mathbb N_{0}$ and $\mathbb C$ denote the set of integers, positive integers, nonnegative integers and complex numbers, respectively. The well-known polylogarithm function is defined as $$ Li_{p}(x):=\sum_{n=1}^\infty \frac{x^n}{n^p}\quad (\lvert x \rvert \leq 1,\quad p \in \mathbb N_{0})\,. $$ Note that when $p=1$, $-Li_{1}(x)$ is the logarithm function $\log(1-x)$. Furthermore, $Li_{n}(1)=\zeta(n)$, where $\zeta(s)$ denotes the Riemann zeta function, which is defined as $\zeta(s):=\sum_{n=1}^\infty n^{-s}$. The famous generalized harmonic numbers of order $m$ are defined by the partial sums of the Riemann zeta function $\zeta(m)$ as: $$ H_n^{(m)}:=\sum_{j=1}^n \frac{1}{j^m} \quad (n, m \in \mathbb N)\,. $$ Before going further, we introduce some notations. Let $p \in \mathbb N$ and $m \in \mathbb N_{0}$, define \begin{align*} &J_{0}(m,p):=\int_{0}^{1}x^{m} Li_{p}(x)\mathrm{d}x\,. \end{align*} Let $p, m \in \mathbb N_{0}$ and $0 \leq x \leq 1$, define \begin{align*} &J_{0}(m,p,x):=\int_{0}^{x}t^{m} Li_{p}(t)\mathrm{d}t\,, \quad J_{1}(m,p,x):=\int_{0}^{x}\log^{m}(t) Li_{p}(t)\mathrm{d}t\,. \end{align*} Let $p, q \in \mathbb N$ and $m \in \mathbb Z$ with $m \geq -2$, define \begin{align*} &J(m,p,q):=\int_{0}^{1}x^{m} Li_{p}(x)Li_{q}(x)\mathrm{d}x\,. \end{align*} Let $p, q \in \mathbb N_{0}$ with $p+q \geq 1$, $r \in \mathbb N$, define \begin{align*} &K(r,p,q):=\int_{0}^{1} \frac{\log^{r}(x) Li_{p}(x)Li_{q}(x)}{x}\mathrm{d}x\,. \end{align*} Freitas \cite{Freitas} showed that the integrals $J_{0}(m,p)$, $J(m,p,q)$ and $K(r,p,q)$ satisfy the following recurrence relations: \begin{align*} &J_{0}(m,q)=\frac{\zeta(q)}{m+1}-\frac{1}{m+1}J_{0}(m,q-1)\quad (q \geq 2, m \geq 0)\,,\\ &J(m,p,q)=\frac{\zeta(p)\zeta(q)}{m+1}-\frac{1}{m+1}\bigg(J(m,p-1,q)+J(m,p,q-1)\bigg)\\ &(p, q\geq 2, m \in \mathbb N_{0}\cup\{-2\})\,,\\ &K(r,p,q)=-\frac{1}{r+1}\bigg(K(r+1,p-1,q)+K(r+1,p,q-1)\bigg)\quad (p, q, r \in \mathbb N)\,. \end{align*} From this, Freitas proved that the integrals $K(r,p,q)$ with $p+q+r$ even and $J(m,p,q)$ can be reduced to zeta values. Note that although the proof was constructive, Freitas did not give explicit evaluations for these integrals. By contrast, Freitas \cite{Freitas} gave explicit evaluations for $J(-1,p,q)$ and $K(r,0,q)$ with $r+q$ even. An anonymous reviewer told the author that Sofo \cite{Sofo2016,Sofo2018,Sofo2020} and Xu \cite{xu1,xu2,xu3} had made much progress in the area of integrals involving polylogarithm functions, of which the author was initially unaware. It is well known that polylogarithmic functions are intrinsically connected with sums of harmonic numbers. For instance, Sofo \cite{Sofo2016} developed closed form representations for infinite series containing generalized harmonic numbers of type $$ \sum_{n=1}^\infty\frac{(-1)^{n+1}H_n^{(3)}}{n^{p}\binom{n+k}{k}}\quad (p=0, 1)\,. $$ The author \cite{Lirusensen} gave closed form representations for generalized hyperharmonic number sums with reciprocal binomial coefficients, which greatly extend Sofo's result. Sofo \cite{Sofo2016} also obtained explicit evaluations for some integrals involving polylogarithm functions. Motivated by the work of Freitas \cite{Freitas}, Sofo \cite{Sofo2018} investigated the representations of integrals of polylogarithms with negative argument of the type $$ \int_{0}^{1}x^{m} Li_{p}(-x)Li_{q}(-x)\mathrm{d}x\, $$ for $m \ge -2$, and for integers $p$ and $q$.
For $m=-2, -1, 0$, Sofo also gave explicit representations of the integral in terms of Euler sums, and for $m \ge 0$ he obtained a recurrence relation for the integral. More generally, Sofo \cite{Sofo2020} considered integrals of polylogarithms with alternating argument of the type $$ \int_{0}^{1}x^{m} Li_{p}(x)Li_{q}(-x)\mathrm{d}x\, $$ for integers $p$ and $q$. Similarly, for $m=-2, -1, 0$, Sofo gave explicit representations of the integral in terms of Euler sums. Some more integrals involving polylogarithms were obtained. Xu \cite{xu1} showed that quadratic Euler sums of the form \begin{align*} \sum_{n=1}^\infty \frac{H_n H_n^{(m)}}{n^{p}}\quad(m+p\leq 8)\,, \end{align*} and some integrals of polylogarithm functions of the form \begin{align*} \int_{0}^{1} \frac{Li_{r}(x) Li_{p}(x) Li_{q}(x)}{x}\mathrm{d}x\quad(r+p+q\leq 8)\, \end{align*} can be written in terms of Riemann zeta values. It is interesting that integrals of polylogarithm functions can be related to multiple zeta (star) values. By using integrals of polylogarithm functions, Xu \cite{xu2} gave explicit expressions for some restricted multiple zeta (star) values. Some of the lemmas used by Xu \cite{xu2} were also re-discovered by the author in different forms. Furthermore, by using the iterated integral representation of multiple polylogarithm functions, Xu \cite{xu3} proved some conjectures proposed by J. M. Borwein, D. M. Bradley and D. J. Broadhurst \cite{Borwein}. Xu also obtained numerous formulas for alternating multiple zeta values. In this paper, we mainly give explicit expressions for integrals of types $J_{0}(m,p,x)$, $J_{1}(m,p,x)$, $J(m,p,q)$ and $K(r,p,q)$. In addition, some more explicit formulas for integrals involving the logarithm function of types \begin{align*} \int_{0}^{x}\frac{\log^{m}(1-t)}{t^{n}}\mathrm{d}t\,,\quad \int_{0}^{x}\frac{\log^{m}(1+t)}{t^{n}}\mathrm{d}t\,,\quad \int_{0}^{x}\frac{\log^{m}(t)}{(1-t)^{n}}\mathrm{d}t\,\quad (m, n \in \mathbb N, m\geq n) \end{align*} will also be derived. \section{Integrals involving logarithm function} De Doelder \cite{Doelder} used the integral $\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t$ to evaluate infinite series of type $\sum_{n=1}^\infty \frac{H_{n}}{n^{2}}x^{n}$. As a natural continuation, the author \cite{LIRUSEN} gave explicit evaluations for infinite series involving generalized (alternating) harmonic numbers of types $\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}x^{n}$, $\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}(-x)^{n}$, $\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}x^{n}$, $\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}(-x)^{n}$, $\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}x^{n}$, $\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}(-x)^{n}$, $\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n}x^{n}$, $\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}x^{n}$, $\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}(-x)^{n}$ in terms of polylogarithm functions. However, it seems difficult to give explicit expressions for infinite series of types $\sum_{n=1}^\infty \frac{H_{n}}{n^{4}}x^{n}$ and $\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{3}}x^{n}$, since the integrals $\int_{0}^{x}\frac{\log^{2}(t)\log^{2}(1-t)}{t}\mathrm{d}t$ and $\int_{0}^{x}\frac{\log^{2}(t)Li_{2}(t)}{1-t}\mathrm{d}t$ are not known to be related to the polylogarithm functions, even with the help of a mathematical package. It is interesting to evaluate similar types of integrals, e.g., $\int_{0}^{x}\frac{\log^{m}(1-t)}{t^{n}}\mathrm{d}t$. Before going further, we introduce some notations.
\begin{Definition}\label{def1} For $m, n \in \mathbb N$ with $m \geq n$ and $0 \leq x \leq 1$, define the quantities $A(m,n,x)$, $B(m,n,x)$ and $C(m,n,x)$ as \begin{align*} &A(m,n,x):=\int_{0}^{x}\frac{\log^{m}(1-t)}{t^{n}}\mathrm{d}t\,,\quad B(m,n,x):=\int_{0}^{x}\frac{\log^{m}(1+t)}{t^{n}}\mathrm{d}t\,,\\ &C(m,n,x):=\int_{0}^{x}\frac{\log^{m}(t)}{(1-t)^{n}}\mathrm{d}t\,. \end{align*} \end{Definition} \begin{Lem}\label{lem9} Let $m \in \mathbb N$ and $0 \leq x \leq 1$, then we have \begin{align*} &\quad A(m,1,x)\\ &=\log(x)\log^{m}(1-x)+\sum_{k=0}^{m-2}(-1)^{k}(m-k)_{k+1}\log^{m-k-1}(1-x)Li_{k+2}(1-x)\\ &\quad +(-1)^{m-1}m!Li_{m+1}(1-x)+(-1)^{m}m!\zeta(m+1)\,. \end{align*} In particular, we have $A(m,1,1)=(-1)^{m}m!\zeta(m+1)$. \end{Lem} \begin{proof} From the definition of $A(m,1,x)$, by using integration by parts, we can write \begin{align*} &\quad A(m,1,x)\\ &=\log(x)\log^{m}(1-x)-m \int_{1}^{1-x}\frac{\log(1-t)\log^{m-1}(t)}{t}\mathrm{d}t\\ &=\log(x)\log^{m}(1-x)+m \log^{m-1}(1-x)Li_{2}(1-x)\\ &\quad -m(m-1)\int_{1}^{1-x}\frac{Li_{2}(t)\log^{m-2}(t)}{t}\mathrm{d}t\\ &=\log(x)\log^{m}(1-x)+\sum_{k=0}^{m-2}(-1)^{k}(m-k)_{k+1}\log^{m-k-1}(1-x)Li_{k+2}(1-x)\\ &\quad +(-1)^{m-1}m!Li_{m+1}(1-x)+(-1)^{m}m!Li_{m+1}(1)\,. \end{align*} \end{proof} \begin{Lem}[\cite{xu2}]\label{lem10} Let $m \in \mathbb N$ and $x \geq 0$, then we have \begin{align*} B(m,1,x) &=\log(x)\log^{m}(1+x)-\frac{m}{m+1}\log^{m+1}(1+x)+m!\zeta(m+1)\\ &\quad -\sum_{i=1}^{m}\binom{m}{i}i!\log^{m-i}(1+x)Li_{i+1}(\frac{1}{1+x})\,. \end{align*} In particular, we have $$ B(m,1,1) =-\frac{m}{m+1}\log^{m+1}(2)+m!\zeta(m+1)-\sum_{i=1}^{m}\binom{m}{i}i!\log^{m-i}(2)Li_{i+1}(\frac{1}{2})\,. $$ \end{Lem} \begin{proof} From the definition of $B(m,1,x)$, by using integration by parts, we can write \begin{align*} &\quad B(m,1,x)\\ &=\log(x)\log^{m}(1+x)-m \int_{0}^{x}\frac{\log(t)\log^{m-1}(1+t)}{1+t}\mathrm{d}t\\ &=\log(x)\log^{m}(1+x)-m \int_{0}^{x}\log^{m-1}(1+t)\bigg(\frac{\mathrm{d}Li_{2}(\frac{1}{1+t})}{\mathrm{d}t} +\frac{\log(1+t)}{1+t}\bigg)\mathrm{d}t\\ &=\log(x)\log^{m}(1+x)-\frac{m}{m+1}\log^{m+1}(1+x)-m\log^{m-1}(1+x)Li_{2}(\frac{1}{1+x})\\ &\quad +m(m-1) \int_{0}^{x}\frac{Li_{2}(\frac{1}{1+t})\log^{m-2}(1+t)}{1+t}\mathrm{d}t\\ &=\log(x)\log^{m}(1+x)-\frac{m}{m+1}\log^{m+1}(1+x)+m!Li_{m+1}(1)\\ &\quad -\sum_{i=1}^{m}\binom{m}{i}i!\log^{m-i}(1+x)Li_{i+1}(\frac{1}{1+x})\,. \end{align*} \end{proof} \begin{Prop}\label{prop} Let $m \in \mathbb N$ and $0 \leq x \leq 1$, then we have \begin{align*} &\quad C(m,1,x)\\ &=-\log(1-x)\log^{m}(x)+m\sum_{i=2}^{m+1}(-1)^{i-1}\binom{m-1}{i-2}(i-2)!\log^{m+1-i}(x)Li_{i}(x)\,. \end{align*} In particular, we have $C(m,1,1)=(-1)^{m}m!\zeta(m+1)$. \end{Prop} \begin{proof} From the definition of $C(m,1,x)$, by using integration by parts, we can write \begin{align*} &\quad C(m,1,x)\\ &=-\log(1-x)\log^{m}(x)+m \int_{0}^{x}\frac{\log(1-t)\log^{m-1}(t)}{t}\mathrm{d}t\\ &=-\log(1-x)\log^{m}(x)-m \log^{m-1}(x)Li_{2}(x)+m(m-1)\int_{0}^{x}\frac{Li_{2}(t)\log^{m-2}(t)}{t}\mathrm{d}t\\ &=-\log(1-x)\log^{m}(x)+m\sum_{i=2}^{m+1}(-1)^{i-1}\binom{m-1}{i-2}(i-2)!\log^{m+1-i}(x)Li_{i}(x)\,. \end{align*} \end{proof} Now we develop explicit expressions for $A(m,n,x)$, $B(m,n,x)$ and $C(m,n,x)$. 
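Closed forms of this kind are straightforward to spot-check numerically. For instance, Lemma \ref{lem9} can be compared against direct quadrature with the \texttt{mpmath} Python library; the chosen values of $m$ and $x$ below are arbitrary test points.
\begin{verbatim}
# Spot-check of Lemma lem9: A(m,1,x) = int_0^x log^m(1-t)/t dt versus the
# stated closed form; the values of m and x are arbitrary test points.
from mpmath import mp, mpf, quad, log, polylog, zeta, factorial

mp.dps = 30

def A_closed(m, x):
    s = log(x) * log(1 - x)**m
    for k in range(m - 1):                      # k = 0, ..., m-2
        poch = mpf(1)
        for j in range(k + 1):                  # (m-k)_{k+1} = (m-k)...m
            poch *= m - k + j
        s += (-1)**k * poch * log(1 - x)**(m - k - 1) * polylog(k + 2, 1 - x)
    s += (-1)**(m - 1) * factorial(m) * polylog(m + 1, 1 - x)
    s += (-1)**m * factorial(m) * zeta(m + 1)
    return s

for m, x in [(2, mpf('0.3')), (4, mpf('0.7'))]:
    direct = quad(lambda t: log(1 - t)**m / t, [0, x])
    print(direct - A_closed(m, x))              # ~1e-30 in both cases
\end{verbatim}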
\begin{theorem}\label{maintheorem5} Let $m, n \in \mathbb N$ with $m \geq n \geq 2$ and $0 < x \leq 1$, then we have \begin{align*} &\quad A(m,n,x)\\ &=\sum_{y=0}^{n-2}\binom{m}{y}y!(-1)^{y+1}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \cdot\frac{\log^{m-y}(1-x)}{x^{i_{y}-1}}\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y}y!(-1)^{y}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \log^{m-y}(1-x)\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y+1}(y+1)!(-1)^{y+1}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times A(m-y-1,1,x)\,, \end{align*} and \begin{align*} &\quad C(m,n,x)\\ &=\frac{(-1)^{m}m!}{n-1}\sum_{y=0}^{n-2}\zeta(m-y) \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} +\sum_{y=0}^{n-2}\binom{m}{y}y!(-1)^{y}\\ &\quad \times \sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \bigg(\frac{\log^{m-y}(x)}{(1-x)^{i_{y}-1}}-\log^{m-y}(x)\bigg)\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y+1}(y+1)!(-1)^{y}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times A(m-y-1,1,1-x)\,, \end{align*} where $A(m-y-1,1,x)$ and $A(m-y-1,1,1-x)$ are given in Lemma \ref{lem9}. In particular, we have \begin{align*} A(m,n,1)=C(m,n,1)=\frac{(-1)^{m}m!}{n-1}\sum_{y=0}^{n-2}\zeta(m-y) \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\,. \end{align*} \end{theorem} \begin{proof} From the definition of $A(m,n,x)$, when $n \geq 2$, by using integration by parts, we can write \begin{align*} &\quad A(m,n,x)\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1-x)}{x^{n-1}} -\frac{m}{n-1}\int_{0}^{x}\frac{\log^{m-1}(1-t)}{t^{n-1}(1-t)}\mathrm{d}t\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1-x)}{x^{n-1}}-\frac{m}{n-1}\sum_{i=1}^{n-1}\int_{0}^{x}\frac{\log^{m-1}(1-t)}{t^{i}}\mathrm{d}t\\ &\quad -\frac{m}{n-1}\int_{0}^{x}\frac{\log^{m-1}(1-t)}{1-t}\mathrm{d}t\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1-x)}{x^{n-1}}+\frac{1}{n-1}\log^{m}(1-x) -\frac{m}{n-1}\sum_{i=1}^{n-1}A(m-1,i,x)\,. \end{align*} Successive application of the above relation $n-2$ times then gives \begin{align*} &\quad A(m,n,x)\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1-x)}{x^{n-1}}+\frac{1}{n-1}\log^{m}(1-x)-\frac{m}{n-1}A(m-1,1,x)\\ &\quad +\frac{m}{n-1}\sum_{i_{1}=2}^{n-1}\frac{1}{i_{1}-1}\cdot\frac{\log^{m-1}(1-x)}{x^{i_{1}-1}} -\frac{m}{n-1}\sum_{i_{1}=2}^{n-1}\frac{1}{i_{1}-1}\log^{m-1}(1-x)\\ &\quad +\frac{m}{n-1}\sum_{i_{1}=2}^{n-1}\frac{m-1}{i_{1}-1}A(m-2,1,x) +\frac{m}{n-1}\sum_{i_{1}=2}^{n-1}\frac{m-1}{i_{1}-1}\sum_{i_{2}=2}^{i_{1}-1}A(m-2,i_{2},x)\\ &=\sum_{y=0}^{n-2}\binom{m}{y}y!(-1)^{y+1}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \cdot\frac{\log^{m-y}(1-x)}{x^{i_{y}-1}}\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y}y!(-1)^{y}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \log^{m-y}(1-x)\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y+1}(y+1)!(-1)^{y+1}\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times A(m-y-1,1,x)\,.
\end{align*} Note that $C(m,n,x)=A(m,n,1)-A(m,n,1-x)$, thus we get the desired result. \end{proof} \begin{theorem}\label{maintheorem6} Let $m, n \in \mathbb N$ with $m \geq n \geq 2$ and $x > 0$, then we have \begin{align*} &\quad B(m,n,x)\\ &=\sum_{y=0}^{n-2}\binom{m}{y}y!\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \cdot\frac{(-1)^{n+i_{y}+y+1}\log^{m-y}(1+x)}{x^{i_{y}-1}}\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y}y!\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1} \cdot(-1)^{n+y+1}\log^{m-y}(1+x)\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y+1}(y+1)!\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times (-1)^{n+y}B(m-y-1,1,x)\,, \end{align*} where $B(m-y-1,1,x)$ is given in Lemma \ref{lem10}. In particular, we have \begin{align*} B(m,n,1) &=\sum_{y=0}^{n-2}\binom{m}{y}y!\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times\log^{m-y}(2)(-1)^{n+y+1}((-1)^{i_{y}}+1)\\ &\quad +\sum_{y=0}^{n-2}\binom{m}{y+1}(y+1)!\sum_{i_{0}=n}^{n}\frac{1}{i_{0}-1} \sum_{i_{1}=2}^{i_{0}-1}\frac{1}{i_{1}-1}\cdots\sum_{i_{y}=2}^{i_{y-1}-1}\frac{1}{i_{y}-1}\\ &\qquad \times (-1)^{n+y}\bigg\{-\frac{m-y-1}{m-y}\log^{m-y}(2)+(m-y-1)!\zeta(m-y)\\ &\qquad -\sum_{i=1}^{m-y-1}\binom{m-y-1}{i}i!\log^{m-y-1-i}(2)Li_{i+1}(\frac{1}{2})\bigg\}\,. \end{align*} \end{theorem} \begin{proof} From the definition of $B(m,n,x)$, when $n \geq 2$, by using integration by parts, we can write \begin{align*} B(m,n,x) &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1+x)}{x^{n-1}} +\frac{m}{n-1}\int_{0}^{x}\frac{\log^{m-1}(1+t)}{t^{n-1}(1+t)}\mathrm{d}t\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1+x)}{x^{n-1}} +\frac{m}{n-1}\sum_{i=1}^{n-1}(-1)^{n-1-i}\int_{0}^{x}\frac{\log^{m-1}(1+t)}{t^{i}}\mathrm{d}t\\ &\quad +\frac{m}{n-1}\int_{0}^{x}\frac{(-1)^{n-1}\log^{m-1}(1+t)}{1+t}\mathrm{d}t\\ &=-\frac{1}{n-1}\cdot\frac{\log^{m}(1+x)}{x^{n-1}}+\frac{(-1)^{n-1}}{n-1}\log^{m}(1+x)\\ &\quad +\frac{m}{n-1}\sum_{i=1}^{n-1}(-1)^{n-1-i}B(m-1,i,x)\,, \end{align*} similarly to Theorem \ref{maintheorem5}, successive application of the above relation $n-2$ times gives the desired result. \end{proof} \section{Integrals involving polylogarithms} In this section, we develop explicit expressions for integrals of types $J_{0}(m,p,x)$, $J_{1}(m,p,x)$, $J(m,p,q)$ and $K(r,p,q)$. Before going further, we introduce some notations and lemmata. Following Flajolet-Salvy's paper \cite{Flajolet}, we write the classical linear Euler sums as $$ S_{p,q}^{+,+}:=\sum_{n=1}^\infty \frac{H_n^{(p)}}{{n}^{q}}\,. $$ \begin{Lem}[\cite{xu2,Lirusensen}]\label{lem1} Let $n, m \in \mathbb N_{0}$ and $x \geq 0$, defining \begin{align*} L(n, m, x):=\int_{0}^{x}y^{n}\log^{m}(y) \mathrm{d}y\,, \end{align*} then we have \begin{align*} L(n, m, x)=\frac{x^{n+1}}{n+1}\sum_{j=0}^{m}\frac{(m+1-j)_{j}}{(n+1)^j}(-1)^{j}\log^{m-j}(x)\,, \end{align*} where $(t)_{n}=t(t+1)\cdots(t+n-1)$ is the Pochhammer symbol. In particular, we have $L(n, m, 1)=\frac{m!(-1)^m}{(n+1)^{m+1}}$.
\end{Lem} \begin{Lem}[\cite{Lirusensen}]\label{lem2} Let $n, m \in \mathbb N_{0}$ and $0 \leq x \leq 1$, defining \begin{align*} M(n, m, x):=\int_{x}^{1}y^{n}\log^{m} (1-y) \mathrm{d}y\,, \end{align*} then we have \begin{align*} M(n, m, x)=\sum_{j=0}^{n}\binom{n}{j}(-1)^{j}\frac{(1-x)^{j+1}}{j+1} \sum_{i=0}^{m}\frac{(m+1-i)_{i}}{(j+1)^i}(-1)^{i}\log^{m-i} (1-x)\,. \end{align*} In particular, we have $M(n, m, 0)=(-1)^{m}m!\sum_{j=0}^{n}\binom{n}{j}\frac{(-1)^{j}}{(j+1)^{m+1}}$. \end{Lem} \begin{Lem}\label{lem3} Let $n, m \in \mathbb N_{0}$ and $0 \leq x \leq 1$, then we can obtain that \begin{align*} &\int_{0}^{x}y^{n}\log^{m} (1-y) \mathrm{d}y\\ &=(-1)^{m}m!\sum_{j=0}^{n}\binom{n}{j}\frac{(-1)^{j}}{(j+1)^{m+1}}\\ &\quad-\sum_{j=0}^{n}\binom{n}{j}(-1)^{j}\frac{(1-x)^{j+1}}{j+1} \sum_{i=0}^{m}\frac{(m+1-i)_{i}}{(j+1)^i}(-1)^{i}\log^{m-i} (1-x)\,. \end{align*} \begin{proof} Note that $$ \int_{0}^{x}y^{n}\log^{m} (1-y) \mathrm{d}y=M(n, m, 0)-M(n, m, x)\,, $$ with the help of Lemma \ref{lem2}, we get the desired result. \end{proof} \end{Lem} \begin{theorem}\label{maintheorem1} Let $p \in \mathbb N$ and $m \in \mathbb N_{0}$, then we have \begin{align*} &J_{0}(m,p,x)\\ &=\sum_{j=2}^{p}\frac{(-1)^{p-j}}{(m+1)^{p+1-j}}x^{m+1}Li_{j}(x) +\frac{(-1)^{p-1}}{(m+1)^{p-1}}\bigg(\sum_{j=0}^{m}\binom{m}{j}\frac{(-1)^{j}}{(j+1)^{2}}\\ &\quad+\sum_{j=0}^{m}\binom{m}{j}(-1)^{j}\frac{(1-x)^{j+1}}{j+1} \sum_{i=0}^{1}\frac{(2-i)_{i}}{(j+1)^i}(-1)^{i}\log^{1-i} (1-x)\bigg)\,. \end{align*} In particular, $J_{0}(m,p)$ can be reduced to zeta values and harmonic numbers: $$ J_{0}(m,p)=J_{0}(m,p,1) =\sum_{j=2}^{p}\frac{(-1)^{p-j}}{(m+1)^{p+1-j}}\zeta(j)+\frac{(-1)^{p-1}}{(m+1)^{p}}H_{m+1}\,. $$ Note that we have used the fact \cite{Lirusensen} $$ H_{m+1}=(m+1)\sum_{j=0}^{m}\binom{m}{j}\frac{(-1)^{j}}{(j+1)^{2}}\,. $$ \end{theorem} \begin{proof} By using integration by parts, we have \begin{align*} &\quad J_{0}(m,p,x)\\ &=\frac{1}{m+1}x^{m+1} Li_{p}(x)-\frac{1}{m+1}\int_{0}^{x}t^{m} Li_{p-1}(t)\mathrm{d}t\\ &=\frac{1}{m+1}x^{m+1} Li_{p}(x)-\frac{1}{m+1}J_{0}(m,p-1,x)\\ &=\frac{1}{m+1}x^{m+1} Li_{p}(x)-\frac{1}{(m+1)^{2}}x^{m+1} Li_{p-1}(x)+\frac{1}{(m+1)^{2}}J_{0}(m,p-2,x)\\ &=\sum_{j=2}^{p}\frac{(-1)^{p-j}}{(m+1)^{p+1-j}}x^{m+1}Li_{j}(x)+\frac{(-1)^{p-1}}{(m+1)^{p-1}}J_{0}(m,1,x)\,. \end{align*} Note that $$ J_{0}(m,1,x)=-\int_{0}^{x}t^{m} \log(1-t)\mathrm{d}t\,, $$ with the help of Lemma \ref{lem3}, we get the desired result. \end{proof} \begin{Lem}\label{lem4} Let $m \in \mathbb N_{0}$, then we have \begin{align*} J_{1}(m,0,x) &=x\sum_{j=0}^{m}(m+1-j)_{j}(-1)^{j+1}\log^{m-j}(x)-\log(1-x)\log^{m}(x)\\ &\quad+m\sum_{j=2}^{m+1}(-1)^{j-1}\binom{m-1}{j-2}(j-2)!Li_{j}(x)\log^{m+1-j}(x)\,. \end{align*} \begin{proof} Note that \begin{align*} J_{1}(m,0,x) &=-\int_{0}^{x}\log^{m}(t)\mathrm{d}t+\int_{0}^{x}\frac{\log^{m}(t)}{1-t}\mathrm{d}t\,. \end{align*} with the help of Lemma \ref{lem1} and Proposition \ref{prop}, we get the desired result. 
\end{proof} \end{Lem} \begin{theorem}\label{maintheorem2} Let $p \in \mathbb N$ and $m \in \mathbb N_{0}$, then we have \begin{align*} &J_{1}(m,p,x)\\ &=\sum_{y=1}^{p}\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{y}=0}^{m-i_{1}-\cdots-i_{y-1}-1} m(m-1)\cdots(m-i_{1}-\cdots-i_{y}+1)\\ &\qquad \times(-1)^{i_{1}+\cdots+i_{y}+y-1} x Li_{p-y+1}(x)\log^{m-i_{1}-\cdots-i_{y}}(x)\\ &\quad +\sum_{y=1}^{p}\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{y-1}=0}^{m-i_{1}-\cdots-i_{y-2}-1} m!(-1)^{i_{1}+\cdots+i_{y-1}+y-1}J_{1}(0,p-y+1,x)\\ &\quad +\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{p}=0}^{m-i_{1}-\cdots-i_{p-1}-1} m(m-1)\cdots(m-i_{1}-\cdots-i_{p}+1)\\ &\qquad \times(-1)^{i_{1}+\cdots+i_{p}+p}J_{1}(m-i_{1}-\cdots-i_{p},0,x)\,, \end{align*} where $J_{1}(0,p-y+1,x)=J_{0}(0,p-y+1,x)$ is given in Theorem \ref{maintheorem1} and $J_{1}(m-i_{1}-\cdots-i_{p},0,x)$ is given in Lemma \ref{lem4}. \end{theorem} \begin{proof} By using integration by parts, we can obtain that \begin{align*} &\quad J_{1}(m,p,x)\\ &=x Li_{p}(x)\log^{m}(x)-\int_{0}^{x}\log^{m}(t) Li_{p-1}(t)\mathrm{d}t-m \int_{0}^{x}\log^{m-1}(t) Li_{p}(t)\mathrm{d}t\\ &=x Li_{p}(x)\log^{m}(x)-J_{1}(m,p-1,x)-m J_{1}(m-1,p,x)\\ &=x Li_{p}(x)\log^{m}(x)-m x Li_{p}(x)\log^{m-1}(x)-J_{1}(m,p-1,x)\\ &\quad +m J_{1}(m-1,p-1,x)+m(m-1) J_{1}(m-2,p,x)\\ &=\sum_{i=0}^{m-1}\binom{m}{i}i!(-1)^{i} x Li_{p}(x)\log^{m-i}(x)+(-1)^{m}m!J_{1}(0,p,x)\\ &\quad +\sum_{i=0}^{m-1}\binom{m}{i}i!(-1)^{i+1} J_{1}(m-i,p-1,x)\\ &=\sum_{i_{1}=0}^{m-1}\binom{m}{i_{1}}i_{1}!(-1)^{i_{1}} x Li_{p}(x)\log^{m-i_{1}}(x)+(-1)^{m}m!J_{1}(0,p,x)\\ &\quad +\sum_{i_{1}=0}^{m-1}\binom{m}{i_{1}}i_{1}!(-1)^{i_{1}+1}\bigg\{(-1)^{m-i_{1}}(m-i_{1})!J_{1}(0,p-1,x)\\ &\quad +\sum_{i_{2}=0}^{m-i_{1}-1} \binom{m-i_{1}}{i_{2}}i_{2}!(-1)^{i_{2}} x Li_{p-1}(x)\log^{m-i_{1}-i_{2}}(x)\\ &\quad +\sum_{i_{2}=0}^{m-i_{1}-1} \binom{m-i_{1}}{i_{2}}i_{2}!(-1)^{i_{2}+1} J_{1}(m-i_{1}-i_{2},p-2,x)\bigg\}\\ &=\sum_{y=1}^{p}\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{y}=0}^{m-i_{1}-\cdots-i_{y-1}-1} m(m-1)\cdots(m-i_{1}-\cdots-i_{y}+1)\\ &\qquad \times(-1)^{i_{1}+\cdots+i_{y}+y-1} x Li_{p-y+1}(x)\log^{m-i_{1}-\cdots-i_{y}}(x)\\ &\quad +\sum_{y=1}^{p}\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{y-1}=0}^{m-i_{1}-\cdots-i_{y-2}-1} m!(-1)^{i_{1}+\cdots+i_{y-1}+y-1}J_{1}(0,p-y+1,x)\\ &\quad +\sum_{i_{1}=0}^{m-1}\cdots\sum_{i_{p}=0}^{m-i_{1}-\cdots-i_{p-1}-1} m(m-1)\cdots(m-i_{1}-\cdots-i_{p}+1)\\ &\qquad \times(-1)^{i_{1}+\cdots+i_{p}+p}J_{1}(m-i_{1}-\cdots-i_{p},0,x)\,. \end{align*} \end{proof} \begin{Lem}[{{Abel's lemma on summation by parts} \cite{Abel,Chu}}]\label{lem5} Let $\{f_k\}$ and $\{g_k\}$ be two sequences, and define the forward difference and backward difference, respectively, as $$ \Delta\tau_k=\tau_{k+1}-\tau_k\quad\hbox{and}\quad \nabla\tau_k=\tau_k-\tau_{k-1}\,, $$ then, there holds the relation: \begin{align*} \sum_{k=1}^\infty f_k\nabla g_k=\lim_{n\to\infty}f_n g_n-f_1 g_0-\sum_{k=1}^\infty g_k \Delta f_k \,. \end{align*} \end{Lem} We now provide a criterion concerning the exchange of summation and integral for improper integrals. \begin{Lem}\label{exchange} Let $\sum_{n=1}^\infty u_{n}(x)$, $a\leq x \leq b$, be a series of functions with $u_{n}(x)\geq 0$, such that $\sum_{n=1}^\infty u_{n}(b)=\infty$ and $\sum_{n=1}^\infty u_{n}(x)$ converges for $a \leq x < b$. Suppose $u_{n}(x)$ is integrable (Riemann integrable or integrable as an improper integral) on $[a, b]$, the improper integral $\int_{a}^{b}\sum_{n=1}^\infty u_{n}(x)\mathrm{d}x$ converges, and $\sum_{n=1}^\infty u_{n}(x)$ converges uniformly on every closed subinterval of $[a, b)$, i.e.
for any $a\leq c < d <b$, $\sum_{n=1}^\infty u_{n}(x)$ converges uniformly on $[c, d]$, then we can exchange summation and integral, i.e. $$ \int_{a}^{b}\sum_{n=1}^\infty u_{n}(x)\mathrm{d}x=\sum_{n=1}^\infty \int_{a}^{b} u_{n}(x)\mathrm{d}x\,. $$ \end{Lem} \begin{proof} Since the improper integral $\int_{a}^{b}\sum_{n=1}^\infty u_{n}(x)\mathrm{d}x$ converges, then for any $\epsilon >0$, there exists $\delta >0$, s.t. $\lvert \int_{b-\delta}^{b}\sum_{n=1}^\infty u_{n}(x)\mathrm{d}x \rvert\leq\frac{\epsilon}{3}$. It is not hard to see that we can choose $\delta >0$ such that $b-\delta-a>0$. Note that, $\sum_{n=1}^\infty u_{n}(x)$ converges uniformly on $[a, b-\delta]$, then for fixed $\epsilon, \delta >0$, there exists $N > 0$, for any $M \geq N$, we have $$ \left\lvert \sum_{n=M+1}^\infty u_{n}(x) \right\rvert\leq\frac{\epsilon}{3(b-\delta-a)}\quad x\in[a, b-\delta]\,. $$ Thus we can obtain that \begin{align*} &\quad \left\lvert \sum_{n=1}^{M} \int_{a}^{b}u_{n}(x)\mathrm{d}x-\int_{a}^{b} \sum_{n=1}^\infty u_{n}(x)\mathrm{d}x \right\rvert\\ &\leq\left\lvert \int_{a}^{b-\delta}\sum_{n=1}^{M} u_{n}(x)-\sum_{n=1}^{\infty} u_{n}(x)\mathrm{d}x\right\rvert+\left\lvert\int_{b-\delta}^{b} \sum_{n=1}^{M} u_{n}(x)\mathrm{d}x \right\rvert+\left\lvert\int_{b-\delta}^{b} \sum_{n=1}^{\infty} u_{n}(x)\mathrm{d}x \right\rvert\\ &\leq(b-\delta-a)\frac{\epsilon}{3(b-\delta-a)}+\frac{\epsilon}{3}+\frac{\epsilon}{3}\\ &=\epsilon\,, \end{align*} which completes the proof. \end{proof} \begin{Lem}\label{lem6} Let $m \in \mathbb N_{0}$ and $p \in \mathbb N$, then we have \begin{align*} &\quad J(m,p,1)\\ &=\sum_{j=2}^{p}(-1)^{p-j}\zeta(j)\bigg(\sum_{i=2}^{p+1-j}\frac{-1}{(m+1)^{p+2-j-i}}\zeta(i) +\sum_{i=1}^{p+1-j}\frac{1}{(m+1)^{p+2-j-i}}H_{m+1}^{(i)}\bigg)\\ &\quad +(-1)^{p-1}\bigg\{\sum_{i=2}^{p}\frac{-1}{(m+1)^{p-i+1}}\bigg((1+\frac{i}{2})\zeta(i+1) -\frac{1}{2}\sum_{k=1}^{i-2}\zeta(k+1)\zeta(i-k)\\ &\qquad -\sum_{n=1}^{m+1}\frac{H_{n}}{n^{i}}\bigg)+\frac{1}{(m+1)^{p}} \bigg(H_{m+1}^{2}+\sum_{b=0}^{m}\frac{H_{m+1}-H_{b}}{m+1-b}\bigg)\bigg\}\,. \end{align*} \end{Lem} \begin{proof} From the definition of $J(m,p,1)$, we can write \begin{align*} J(m,p,1) &=\sum_{n=1}^\infty \frac{1}{n} \int_{0}^{1}x^{m+n} Li_{p}(x)\mathrm{d}x\\ &=\sum_{j=2}^{p}(-1)^{p-j}\zeta(j)\sum_{n=1}^\infty \frac{1}{n(m+n+1)^{p+1-j}}+\sum_{n=1}^\infty \frac{(-1)^{p-1}H_{m+n+1}}{n(m+n+1)^{p}}\,. \end{align*} For the first part, by using fraction expansion, we have \begin{align*} &\quad \sum_{n=1}^\infty \frac{1}{n(m+n+1)^{p+1-j}}\\ &=\sum_{n=1}^\infty \bigg(\sum_{i=2}^{p+1-j}\frac{-1}{(m+1)^{p+2-j-i}}\cdot\frac{1}{(n+m+1)^{i}} +\frac{1}{(m+1)^{p-j}}\cdot\frac{1}{n(n+m+1)}\bigg)\\ &=\sum_{i=2}^{p+1-j}\frac{-1}{(m+1)^{p+2-j-i}}\bigg(\zeta(i)-H_{m+1}^{(i)}\bigg) +\frac{1}{(m+1)^{p-j+1}}H_{m+1}\\ &=\sum_{i=2}^{p+1-j}\frac{-1}{(m+1)^{p+2-j-i}}\zeta(i) +\sum_{i=1}^{p+1-j}\frac{1}{(m+1)^{p+2-j-i}}H_{m+1}^{(i)}\,. \end{align*} For the second part, by using fraction expansion, we have \begin{align*} &\quad \sum_{n=1}^\infty \frac{H_{m+n+1}}{n(m+n+1)^{p}}\\ &=\sum_{n=1}^\infty H_{m+n+1}\bigg(\sum_{i=2}^{p}\frac{-1}{(m+1)^{p-i+1}}\cdot\frac{1}{(n+m+1)^{i}} +\frac{1}{(m+1)^{p-1}}\cdot\frac{1}{n(n+m+1)}\bigg)\\ &=\sum_{i=2}^{p}\frac{-1}{(m+1)^{p-i+1}}\bigg(\sum_{n=1}^\infty \frac{H_{n}}{n^{i}}-\sum_{n=1}^{m+1} \frac{H_{n}}{n^{i}}\bigg)+\frac{1}{(m+1)^{p-1}}\sum_{n=1}^\infty\frac{H_{m+n+1}}{n(n+m+1)}\,. 
\end{align*} Note that \begin{align*} S_{1,i}^{+,+}=\sum_{n=1}^\infty \frac{H_{n}}{n^{i}}=(1+\frac{i}{2})\zeta(i+1) -\frac{1}{2}\sum_{k=1}^{i-2}\zeta(k+1)\zeta(i-k)\,.\quad \cite{Flajolet} \end{align*} Set $$ f_{n}:=H_{n+m+1}\quad\hbox{and}\quad g_{n}:=\frac{1}{n+1}+\cdots+\frac{1}{n+m+1}\,. $$ Then, by using Lemma \ref{lem5}, we have \begin{align*} &\quad -\sum_{n=1}^\infty \frac{(m+1) H_{n+m+1}}{n(n+m+1)}\\ &=\sum_{n=1}^\infty H_{n+m+1}\left(\bigg(\frac{1}{n+1}+\cdots+\frac{1}{n+m+1}\bigg)-\bigg(\frac{1}{n}+\cdots+\frac{1}{n+m}\bigg)\right)\\ &=-H_{m+2}\bigg(\frac{1}{1}+\cdots+\frac{1}{1+m}\bigg)-\sum_{n=1}^\infty \bigg(\frac{1}{n+1}+\cdots+\frac{1}{n+m+1}\bigg)\frac{1}{n+m+2}\\ &=-H_{m+1}^{2}-\sum_{n=1}^\infty \sum_{b=0}^{m}\frac{1}{n+b}\cdot\frac{1}{n+m+1}\\ &=-H_{m+1}^{2}-\sum_{b=0}^{m}\frac{1}{m+1-b}\bigg(H_{m+1}-H_{b}\bigg)\,. \end{align*} Combining the above results, we get the desired result. \end{proof} \begin{Remark} In the proof of the above lemma, we exchange the order of summation and integration, i.e. \begin{align*} \int_{0}^{1}x^{m}\sum_{n=1}^\infty \frac{x^{n}}{n} Li_{p}(x)\mathrm{d}x=\sum_{n=1}^\infty \frac{1}{n} \int_{0}^{1}x^{m+n} Li_{p}(x)\mathrm{d}x\,. \end{align*} To verify this, we only need to note that for any $0<\delta<1$, $\frac{x^{m+n}}{n} Li_{p}(x)$ is monotonically increasing on the interval $[0, 1-\delta]$ and the series $\sum_{n=1}^\infty \frac{x^{m+n}}{n} Li_{p}(x)$ converges when $x=1-\delta$. With the help of Lemma \ref{exchange}, we get the desired result. The other cases can be checked in a similar manner. \end{Remark} Now we provide another formula for $J(m,p,1)$. \begin{Lem}\label{lem7} Let $m \in \mathbb N_{0}$ and $p \in \mathbb N$, then we have \begin{align*} J(m,p,1) &=\sum_{i=2}^{p}\frac{(-1)^{p-i}}{(m+1)^{p-i+1}}\bigg((1+\frac{i}{2})\zeta(i+1) -\frac{1}{2}\sum_{k=1}^{i-2}\zeta(k+1)\zeta(i-k)\\ &\quad +\sum_{k=2}^{i}(-1)^{i-k}\zeta(k)H_{m+1}^{(i-k+1)} +\sum_{j=1}^{m+1}\frac{(-1)^{i-1}}{j^{i}}H_{j}\bigg)\\ &\quad +\frac{(-1)^{p-1}}{(m+1)^{p}} \bigg(H_{m+1}^{2}+\sum_{b=0}^{m}\frac{H_{m+1}-H_{b}}{m+1-b}\bigg)\,. \end{align*} \end{Lem} \begin{proof} From the definition of $J(m,p,1)$, we can write \begin{align*} J(m,p,1) &=-\sum_{n=1}^\infty \frac{1}{n^{p}} \int_{0}^{1}x^{m+n} \log(1-x)\mathrm{d}x\\ &=\sum_{n=1}^\infty \frac{1}{n^{p}} \frac{H_{m+n+1}}{m+n+1}\\ &=\sum_{n=1}^\infty H_{m+n+1}\bigg(\sum_{i=2}^{p}\frac{(-1)^{p-i}}{(m+1)^{p-i+1}}\cdot\frac{1}{n^{i}} +\frac{(-1)^{p-1}}{(m+1)^{p-1}}\cdot\frac{1}{n(n+m+1)}\bigg)\,. \end{align*} For the first part, by using partial fraction expansion, we have \begin{align*} &\quad \sum_{n=1}^\infty \frac{H_{m+n+1}}{n^{i}}\\ &=\sum_{n=1}^\infty \frac{H_{n}}{n^{i}} +\sum_{j=1}^{m+1}\sum_{n=1}^\infty \frac{1}{n^{i}(n+j)}\\ &=\sum_{n=1}^\infty \frac{H_{n}}{n^{i}}+\sum_{j=1}^{m+1}\sum_{n=1}^\infty \bigg(\sum_{k=2}^{i}\frac{(-1)^{i-k}}{j^{i-k+1}}\cdot\frac{1}{n^{k}} +\frac{(-1)^{i-1}}{j^{i-1}}\cdot\frac{1}{n(n+j)}\bigg)\\ &=\sum_{n=1}^\infty \frac{H_{n}}{n^{i}}+\sum_{k=2}^{i}(-1)^{i-k}\zeta(k)H_{m+1}^{(i-k+1)} +\sum_{j=1}^{m+1}\frac{(-1)^{i-1}}{j^{i}}H_{j}\,. \end{align*} For the second part, from the previous lemma we know that \begin{align*} &\quad \sum_{n=1}^\infty \frac{(m+1) H_{n+m+1}}{n(n+m+1)} =H_{m+1}^{2}+\sum_{b=0}^{m}\frac{1}{m+1-b}\bigg(H_{m+1}-H_{b}\bigg)\,. \end{align*} Combining the above results, we get the desired result. \end{proof} When $p=1$, we have $$ J(m,1,1)=\frac{1}{m+1} \bigg(H_{m+1}^{2}+\sum_{b=0}^{m}\frac{H_{m+1}-H_{b}}{m+1-b}\bigg)\,. $$ It is known that \cite{Devoto} $$ J(m,1,1)=\frac{2}{m+1}\bigg(H_{m+1}^{(2)}+\sum_{k=1}^{m}\frac{H_{k}}{k+1}\bigg)\,, $$ thus we have the following proposition: \begin{Prop} \begin{align*} H_{m+1}^{(2)} &=\frac{1}{2}\bigg(\sum_{j=0}^{m}\binom{m+1}{j+1}\frac{(-1)^{j}}{j+1}\bigg)^{2} +\frac{1}{2}\sum_{b=0}^{m}\frac{1}{m+1-b}\bigg\{\sum_{j=0}^{m}\binom{m+1}{j+1}\frac{(-1)^{j}}{j+1}\\ &\quad -\sum_{j=0}^{b-1}\binom{b}{j+1}\frac{(-1)^{j}}{j+1}\bigg\} -\sum_{k=1}^{m}\frac{1}{k+1}\sum_{j=0}^{k-1}\binom{k}{j+1}\frac{(-1)^{j}}{j+1}\,. \end{align*} \end{Prop}
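As a quick numerical sanity check (ours, not needed for the proofs above), the two closed forms of $J(m,1,1)$ can be compared with the defining integral $J(m,1,1)=\int_{0}^{1}x^{m}\log^{2}(1-x)\,\mathrm{d}x$, which follows from $Li_{1}(x)=-\log(1-x)$. A minimal Python/mpmath sketch, with function names of our own choosing:
\begin{verbatim}
# Sanity check (ours): compare the two closed forms of J(m,1,1)
# with the defining integral J(m,1,1) = int_0^1 x^m log^2(1-x) dx,
# which follows from Li_1(x) = -log(1-x).
from mpmath import mp, quad, log

mp.dps = 25

def H(n, r=1):
    # generalized harmonic number H_n^(r); H(0) = 0
    return sum(mp.mpf(1) / mp.mpf(k)**r for k in range(1, n + 1))

def J_direct(m):
    return quad(lambda x: x**m * log(1 - x)**2, [0, 1])

def J_lemma7(m):
    # (1/(m+1)) * ( H_{m+1}^2 + sum_{b=0}^m (H_{m+1}-H_b)/(m+1-b) )
    h = H(m + 1)
    return (h**2 + sum((h - H(b)) / (m + 1 - b)
                       for b in range(m + 1))) / (m + 1)

def J_devoto(m):
    # (2/(m+1)) * ( H_{m+1}^(2) + sum_{k=1}^m H_k/(k+1) )
    return 2 * (H(m + 1, 2) + sum(H(k) / (k + 1)
                                  for k in range(1, m + 1))) / (m + 1)

for m in range(5):
    print(m, J_direct(m), J_lemma7(m), J_devoto(m))
# all three agree; e.g. m = 0 gives J(0,1,1) = 2
\end{verbatim}
For example, $m=0$ returns the value $2$ for all three evaluations.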
Now we give an explicit expression for $J(-2,p,1)$. \begin{Lem}\label{lem8} Let $p \in \mathbb N$, then we have \begin{align*} J(-2,p,1) &=\zeta(p+1)+\zeta(2)+\sum_{j=2}^{p}\zeta(j)\bigg(1+\sum_{i=2}^{p+1-j}(-1)^{1+i}\zeta(i)\bigg)\\ &\quad +\sum_{i=2}^{p}(-1)^{1+i}\bigg((1+\frac{i}{2})\zeta(i+1) -\frac{1}{2}\sum_{k=1}^{i-2}\zeta(k+1)\zeta(i-k)\bigg)\\ &=2\zeta(2)-\sum_{i=2}^{p}\frac{i}{2}\zeta(i+1)+\frac{1}{2}\sum_{i=3}^{p}\sum_{k=1}^{i-2}\zeta(k+1)\zeta(i-k)\,. \end{align*} \end{Lem} \begin{proof} From the definition of $J(-2,p,1)$, we can write \begin{align*} J(-2,p,1) &=\sum_{n=1}^\infty \frac{1}{n} \int_{0}^{1}x^{n-2} Li_{p}(x)\mathrm{d}x\\ &=\zeta(p+1)+\sum_{n=1}^\infty \frac{1}{n+1}\bigg(\sum_{j=2}^{p}\frac{(-1)^{p-j}}{n^{p+1-j}}\zeta(j) +\frac{(-1)^{p-1}}{n^{p}}H_{n}\bigg)\\ &=\zeta(p+1)+\sum_{j=2}^{p}(-1)^{p-j}\zeta(j)\bigg(\sum_{i=2}^{p+1-j}(-1)^{p+1-j-i}\zeta(i)+(-1)^{p-j}\bigg)\\ &\quad +(-1)^{p-1}\sum_{n=1}^\infty H_{n}\bigg(\sum_{i=2}^{p}(-1)^{p-i}\frac{1}{n^{i}}+(-1)^{p-1}\frac{1}{n(n+1)}\bigg)\,. \end{align*} It is known that \cite{Sofo} $$ \sum_{n=1}^\infty \frac{H_{n}}{n(n+1)}=\zeta(2)\,, $$ thus we get the first result. On the other hand, \begin{align*} J(-2,p,1) &=-\sum_{n=1}^\infty \frac{1}{n^{p}} \int_{0}^{1}x^{n-2} \log(1-x)\mathrm{d}x\\ &=\zeta(2)+\sum_{n=2}^\infty H_{n-1}\bigg(\sum_{i=2}^{p}\frac{-1}{n^{i}}+\frac{1}{n(n-1)}\bigg)\\ &=\zeta(2)+\sum_{n=1}^\infty \frac{H_{n}}{n(n+1)}-\sum_{i=2}^{p}\sum_{n=1}^\infty \frac{H_{n}}{n^{i}} +\sum_{i=2}^{p}\zeta(i+1)\,, \end{align*} thus we get the second result. \end{proof}
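Lemma~\ref{lem8} can be verified numerically in the same manner, using the integral representation $J(-2,p,1)=-\int_{0}^{1}x^{-2}\,Li_{p}(x)\log(1-x)\,\mathrm{d}x$ implicit in its proof; the sketch below is again ours and purely illustrative.
\begin{verbatim}
# Sanity check (ours) of Lemma 8, using the integral representation
# J(-2,p,1) = -int_0^1 Li_p(x) log(1-x) / x^2 dx.
from mpmath import mp, quad, polylog, log, zeta, mpf

mp.dps = 25

def J_direct(p):
    return quad(lambda x: -polylog(p, x) * log(1 - x) / x**2, [0, 1])

def J_lemma8(p):
    # 2*zeta(2) - sum_{i=2}^p (i/2) zeta(i+1)
    #   + (1/2) sum_{i=3}^p sum_{k=1}^{i-2} zeta(k+1) zeta(i-k)
    s = 2 * zeta(2) - sum(mpf(i) / 2 * zeta(i + 1)
                          for i in range(2, p + 1))
    s += mpf(1) / 2 * sum(zeta(k + 1) * zeta(i - k)
                          for i in range(3, p + 1)
                          for k in range(1, i - 1))
    return s

for p in (1, 2, 3, 4):
    print(p, J_direct(p), J_lemma8(p))
# p = 1 gives 2*zeta(2); p = 2 gives 2*zeta(2) - zeta(3); etc.
\end{verbatim}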
Now we derive an explicit expression for $J(m,p,q)$. \begin{theorem}\label{maintheorem3} Let $p, q \in \mathbb N$ with $p \geq q$ and $m \in \mathbb N_{0}\cup\{-2\}$, then we have \begin{align*} &\quad J(m,p,q)\\ &=\sum_{x=1}^{q-1}\sum_{i_{1}=0}^{p-2}\frac{(-1)^{i_{1}+1}}{(m+1)^{i_{1}+1}}\cdots \sum_{i_{x-1}=0}^{p-i_{1}-\cdots-i_{x-2}-2}\frac{(-1)^{i_{x-1}+1}}{(m+1)^{i_{x-1}+1}}\\ &\quad \times \sum_{i_{x}=0}^{p-i_{1}-\cdots-i_{x-1}-2}\frac{(-1)^{i_{x}}}{(m+1)^{i_{x}+1}} \zeta(p-i_{1}-\cdots-i_{x})\zeta(q-x+1)\\ &\quad +\sum_{x=1}^{q-1}\frac{(-1)^{p-2+x}}{(m+1)^{p-2+x}}J(m,1,q-x+1) \sum_{i_{1}=0}^{p-2}\cdots\sum_{i_{x-1}=0}^{p-i_{1}-\cdots-i_{x-2}-2}1\\ &\quad +\sum_{i_{1}=0}^{p-2}\frac{(-1)^{i_{1}+1}}{(m+1)^{i_{1}+1}}\cdots \sum_{i_{q-1}=0}^{p-i_{1}-\cdots-i_{q-2}-2}\frac{(-1)^{i_{q-1}+1}}{(m+1)^{i_{q-1}+1}} J(m,p-i_{1}-\cdots-i_{q-1},1)\,, \end{align*} where $J(m,1,q-x+1)$ and $J(m,p-i_{1}-\cdots-i_{q-1},1)$ are given in Lemmata \ref{lem6}, \ref{lem7} and \ref{lem8}. Therefore $J(m,p,q)$ can be reduced to zeta values and generalized harmonic numbers. \end{theorem} \begin{proof} It is known that \cite{Freitas} $$ J(m,p,q)=\frac{\zeta(p)\zeta(q)}{m+1}-\frac{1}{m+1}\bigg(J(m,p-1,q)+J(m,p,q-1)\bigg)\,; $$ successive application of the above relation $p-1$ times yields the following recurrence relation: \begin{align*} J(m,p,q) &=\sum_{i_{1}=0}^{p-2}\frac{(-1)^{i_{1}}}{(m+1)^{i_{1}+1}} \bigg(\zeta(p-i_{1})\zeta(q)-J(m,p-i_{1},q-1)\bigg)\\ &\quad +\frac{(-1)^{p-1}}{(m+1)^{p-1}}J(m,1,q)\,. \end{align*} Successive application of the new relation gives \begin{align*} &\quad J(m,p,q)\\ &=\sum_{x=1}^{q-1}\sum_{i_{1}=0}^{p-2}\frac{(-1)^{i_{1}+1}}{(m+1)^{i_{1}+1}}\cdots \sum_{i_{x-1}=0}^{p-i_{1}-\cdots-i_{x-2}-2}\frac{(-1)^{i_{x-1}+1}}{(m+1)^{i_{x-1}+1}}\\ &\quad \times \sum_{i_{x}=0}^{p-i_{1}-\cdots-i_{x-1}-2}\frac{(-1)^{i_{x}}}{(m+1)^{i_{x}+1}} \zeta(p-i_{1}-\cdots-i_{x})\zeta(q-x+1)\\ &\quad +\sum_{x=1}^{q-1}\frac{(-1)^{p-2+x}}{(m+1)^{p-2+x}}J(m,1,q-x+1) \sum_{i_{1}=0}^{p-2}\cdots\sum_{i_{x-1}=0}^{p-i_{1}-\cdots-i_{x-2}-2}1\\ &\quad +\sum_{i_{1}=0}^{p-2}\frac{(-1)^{i_{1}+1}}{(m+1)^{i_{1}+1}}\cdots \sum_{i_{q-1}=0}^{p-i_{1}-\cdots-i_{q-2}-2}\frac{(-1)^{i_{q-1}+1}}{(m+1)^{i_{q-1}+1}} J(m,p-i_{1}-\cdots-i_{q-1},1)\,. \end{align*} \end{proof} Freitas \cite{Freitas} gave the following recurrence relation for $K(r,0,q)$: for $r\geq 1$, $q\geq 2$, one has \begin{align*} K(r,0,q) =(-1)^{r+q}\frac{r!}{(q-1)!}K(q-1,0,r+1)+(-1)^{r}r!\bigg(\zeta(r+1)\zeta(r+q+1)\bigg)\,. \end{align*} From this, Freitas showed that $K(r,p,q)$ can be reduced to zeta values when $p+q+r$ is even. We now provide an explicit formula for $K(r,p,q)$. \begin{theorem}\label{maintheorem4} Let $p, q, m \in \mathbb N$ with $p \geq q$, then we have \begin{align*} &\quad K(m,p,q)\\ &=\sum_{x=1}^{q}K(m+p+x-1,0,q-x+1)\frac{(-1)^{p+x-1}}{(m+1)_{p+x-1}}\sum_{i_{1}=1}^{p}\cdots \sum_{i_{x-1}=1}^{p-i_{1}-\cdots-i_{x-2}+x-2}1\\ &\quad +\sum_{i_{1}=1}^{p}\frac{(-1)^{i_{1}}}{(m+1)_{i_{1}}}\cdots\sum_{i_{q}=1}^{p-i_{1}-\cdots-i_{q-1}+q-1} \frac{(-1)^{i_{q}}}{(m+i_{1}+\cdots+i_{q-1}+1)_{i_{q}}}\\ &\quad \times K(m+i_{1}+\cdots+i_{q},p-i_{1}-\cdots-i_{q}+q,0)\,. \end{align*} Therefore when $m+p+q$ is even, $K(m,p,q)$ can be reduced to zeta values and generalized harmonic numbers. \end{theorem} \begin{proof} It is known that \cite{Freitas} $$ K(m,p,q)=-\frac{1}{m+1}\bigg(K(m+1,p-1,q)+K(m+1,p,q-1)\bigg)\,; $$ successive application of the above relation $p-1$ times yields the following recurrence relation: \begin{align*} K(m,p,q) =\sum_{i_{1}=1}^{p}\frac{(-1)^{i_{1}}}{(m+1)_{i_{1}}}K(m+i_{1},p-i_{1}+1,q-1) +\frac{(-1)^{p}}{(m+1)_{p}}K(m+p,0,q)\,. \end{align*} Successive application of the new relation gives \begin{align*} &\quad K(m,p,q)\\ &=\sum_{x=1}^{q}K(m+p+x-1,0,q-x+1)\frac{(-1)^{p+x-1}}{(m+1)_{p+x-1}}\sum_{i_{1}=1}^{p}\cdots \sum_{i_{x-1}=1}^{p-i_{1}-\cdots-i_{x-2}+x-2}1\\ &\quad +\sum_{i_{1}=1}^{p}\frac{(-1)^{i_{1}}}{(m+1)_{i_{1}}}\cdots\sum_{i_{q}=1}^{p-i_{1}-\cdots-i_{q-1}+q-1} \frac{(-1)^{i_{q}}}{(m+i_{1}+\cdots+i_{q-1}+1)_{i_{q}}}\\ &\quad \times K(m+i_{1}+\cdots+i_{q},p-i_{1}-\cdots-i_{q}+q,0)\,.
\end{align*} Note that \begin{align*} K(m,0,q)=m!(-1)^{m}\bigg(S_{q,m+1}^{+,+}-\zeta(m+q+1)\bigg)\,,\quad \cite{Freitas} \end{align*} and \begin{align*} S_{p,q}^{+,+} &=\zeta(p+q)\bigg(\frac{1}{2}-\frac{(-1)^{p}}{2}\binom{p+q-1}{p}-\frac{(-1)^{p}}{2}\binom{p+q-1}{q}\bigg)\\ &\quad +\frac{1-(-1)^{p}}{2}\zeta(p)\zeta(q)+(-1)^{p}\sum_{k=1}^{\lfloor \frac{p}{2} \rfloor}\binom{p+q-2k-1}{q-1}\zeta(2k)\zeta(p+q-2k)\\ &\quad +(-1)^{p}\sum_{k=1}^{\lfloor \frac{q}{2} \rfloor}\binom{p+q-2k-1}{p-1}\zeta(2k)\zeta(p+q-2k)\,,\quad (p+q \quad \hbox{odd})\ \cite{Flajolet} \end{align*} where $\zeta(1)$ should be interpreted as $0$ whenever it occurs and $\lfloor x \rfloor$ denotes the floor function. Combining the above results, we get the desired result. \end{proof} \section{Acknowledgements} The author is grateful to the referee for her/his useful comments and suggestions. The author is also grateful to Dr. Wanfeng Liang and Dr. Ke Wang for some useful discussions.
\section{Introduction} Studying galaxy evolution throughout cosmic time is a key element of modern astrophysics, and is crucial for our understanding of the life cycle of the progenitors of the passive elliptical galaxies that we observe in the local Universe. Evidence suggests that the star formation rate (SFR) density peaked around a redshift of $z \approx 2$ \citep[e.g.,][]{Hopkins06,Madau14,Bethermin2017,Gruppioni2020}, making this epoch (the cosmic noon) particularly interesting. Moreover, the cosmic noon is the epoch when dusty star-forming galaxies (DSFGs) \citep[e.g.,][]{Smail1997,Blain2002,Weiss2013,Casey2014,Donevski2020} contributed significantly to the star formation activity of the Universe \citep[e.g.,][]{Chapman2003, Chapman2005}. Furthermore, dust-obscured star formation activity plays an important role at even higher redshifts \citep[e.g.,][]{Takeuchi2005, Murphy2011, Bethermin2015, Bourne2017, Whitaker2017}. It is therefore crucial to study massive DSFGs at high redshift.\smallbreak The affluence of multiwavelength data, especially the far-infrared (FIR) detections from \emph{Herschel}, played a central role in understanding how DSFGs evolve as a function of redshift. However, there are still controversies regarding how these galaxies build up their stellar masses. These controversies arise from the systematic uncertainties caused by the heavy dust attenuation in this type of object \citep[e.g.,][]{Hainline2011, Michalowski2012}, which makes the stellar mass estimate sensitive to the type of star formation history (SFH), the choice of the synthetic stellar population (SSP), and the assumed initial mass function (IMF). The debate about the accuracy of the derived stellar masses of DSFGs is also discussed in detail in \citet{Casey2014}.\smallbreak On the other hand, the growing number of ALMA observations in recent years is providing unparalleled help in constraining the evolution of DSFGs. These data are allowing us to build a comprehensive view of the role of these giant IR-bright sources by tracing their molecular gas and dust content \citep[e.g.,][]{Donevski2020}. The wealth of multiwavelength data has also contributed to significantly improving the estimation of the physical properties that govern such galaxies, through the modeling of their spectral energy distribution \citep[SED, e.g.,][]{Burgarella2005,daCunha2008,Noll2009,Conroy2013,Ciesla2014}. \smallbreak To build an SED, different aspects of a galaxy must be considered: most importantly the star formation history (SFH), the choice of which has a strong impact on the derived SFR \citep[e.g.][]{Buat14,Ciesla17}, stellar populations of varied ages and metallicities, dust emission with different dust grain sizes and temperatures, nebular and synchrotron emission, etc. Extinction caused by dust is critically important in any spectrum fitting of a galaxy, since it alters the shape of the SED the most, by absorbing a significant amount of the UV photons and thermally re-emitting them in the IR. This behavior can be modeled by assuming that dust absorbs the shorter-wavelength spectrum of galaxies following attenuation laws, which are typically described by simple power laws of varying complexity and are able to reproduce the observed extinction in galaxies of different redshifts and types. However, dust attenuation laws are not universal \citep[e.g.,][]{Wild2011,Buat2018,Kasia2018,Salim2020}.
The need for different attenuation recipes is inevitable in order to reproduce the spectra of galaxies of different masses, IR luminosities and, naturally, redshifts. This makes it challenging to interpret some of the physical features, especially when different attenuation laws can each reproduce a good SED of a galaxy \citep{buat2019}. \smallbreak A non-negligible fraction of galaxies exhibits a misalignment, and sometimes a total disconnection, between the dust continuum and the stellar population \citep{Dunlop2017,Elbaz2018}. This directly challenges SED fitting techniques that rely on the energy balance between the UV and the IR, since the key assumption of such techniques is that any physical property derived from one part of the spectrum should be valid for the entire galaxy. Several approaches were investigated to test the validity of this strategy; \citet{buat2019} suggested decoupling the stellar continuum from the IR emission by modeling their fluxes separately and comparing the derived parameters, such as the SFRs, dust luminosities and stellar masses, with the ones derived using full SEDs. Statistical samples of such massive and dusty galaxies \citep[e.g.,][]{Dunlop2017, Elbaz2018, buat2019, Donevski2020} offer an important insight into the evolution of dust and gas mass through cosmic time. However, the nature of these giants is not fully understood.\smallbreak The interstellar medium (ISM) is the most important element in understanding the physical processes of star formation itself, since it contains the building material for future stars, most importantly hydrogen. The hydrogen density was found to be tightly correlated with the SFR, as suggested by \citet{Schmidt59} and investigated by \citet{Kennicutt98}. This correlation is known as the Schmidt-Kennicutt law, and it takes into account the gas in its molecular and atomic forms, with the molecular gas having the biggest impact. The mass of this gas can be estimated from the emission of the easily excited CO molecules \citep[e.g.,][]{Carilli2013,Weiss2013b,Decarli2019,Riechers2020}. Tracing the molecular gas with CO emission relies entirely on abundances established in galaxies of the local Universe. Large interferometers such as ALMA offer unique opportunities to detect these emission lines with an unprecedented accuracy; their luminosity can give an estimate of the molecular hydrogen mass of a galaxy, typically through a conversion factor. On the other hand, conversion factors in high-redshift galaxies are highly debated (see \citealp{Bolatto13} for a comprehensive review). Nonetheless, an estimate of the molecular gas reservoir of galaxies at the high-mass end of the main sequence is crucial to characterise their star formation activity. For instance, \citet{Elbaz2018} showed that some galaxies exhibit a starburst (SB)-like gas depletion time scale despite residing on the main sequence (MS). \smallbreak Despite the growing number of detections of such heavily dust-obscured ultra-massive objects at high redshift, the progress of SED modeling, and the better comprehension of the high-redshift ISM, we still lack a full picture of how these galaxies form and quench. Were they always steadily forming stars along the MS? Or are they former SBs transiting to the red sequence through the main sequence? To answer these questions, it is essential to understand how the star formation is fueled by the gas in massive objects, and why this activity ceases.
Quenching mechanisms are still not fully understood; they may range from AGN feedback or outflows \citep[e.g.][]{Cattaneo2009, Dubois2013,Combes2017} to environmental effects that can lead to gas stripping \citep[e.g.][]{Coil2008,Mendez2011}.\smallbreak In this paper, and motivated by the aforementioned questions, we analyze and interpret the multiwavelength observations of a pair of galaxies at z$\sim$2, with original COSMOS2015 catalog \citep{laigle16} IDs 647980 and 648299, hereafter Astarte and Adonis. Astarte is an ultra-massive (M$_{\star} > 10^{11} M_{\odot}$), IR-bright galaxy, for which the CO emission is serendipitously detected with ALMA. Adonis is a low-mass galaxy bright in the near-UV (NUV) and optical bands.\smallbreak The structure of this paper is as follows: in Section \ref{section2} we describe the data of the two galaxies analysed in this work. In Section \ref{section3.1} we probe the molecular gas of Astarte using its ALMA-detected CO emission line, and in Section \ref{section3.2} we investigate the morphology of this line compared to the multiwavelength detections. In Section \ref{section3.3} we derive the physical properties of the two galaxies using SED fitting. The discussion and the conclusion are presented in Sections \ref{discussion} and \ref{conclusion}, respectively.\newline Throughout this paper, we adopt the stellar initial mass function (IMF) of \citet{Chabrier03} and $\Lambda$CDM cosmological parameters (WMAP7, \citealp{Komatsu2011}): H$_0$ = 70.4 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}$ = 0.272, and $\Omega_{\Lambda}$ = 0.728. \section{Observations}\label{section2} The system of Astarte and Adonis studied in this paper was initially part of a selection of z$\sim$2 galaxies at the high-mass end (M$_{\star}>10^{11}\,M_\odot$) of the main sequence (MS) of star-forming galaxies \citep[e.g.,][]{Noeske2007,Daddi2010,Rodighiero2011,Schreiber15} detected by the \emph{Herschel}/PACS observations of the COSMOS field (PEP survey, \citealp{Lutz2011}). In the COSMOS2011 catalog, from which the system was selected, it is not deblended even in the optical and near-IR, and appears as a single source. This is probably because this early catalog is mainly built using the $i$ band, where Astarte is particularly faint. The zCOSMOS survey \citep{Lilly2009} measured the spectroscopic redshift at the position of the HST/ACS source and found z$_{spec}=2.140$. In the more recent COSMOS catalog \citep{laigle16}, both the z and near-infrared bands were used to detect and deblend the objects. Adonis and Astarte thus have individual measurements of their fluxes in the optical bands of Subaru, the NIR bands of VISTA, and the mid-IR with \emph{Spitzer}/IRAC. Astarte is detected at 250 and 350 $\mu$m with \emph{Herschel}/SPIRE using a 24$\,\mu$m prior \citep{Oliver2012}. The aforementioned deblending, coupled with the FIR detection of Astarte, results in a low-mass, low-SFR object (SFR = 37 $M_{\odot}$ $yr^{-1}$ with a stellar mass of 9.46$\times 10^{9}$ $M_{\odot}$) and a dust-obscured ultra-massive object (SFR = 131 $M_{\odot}$ $yr^{-1}$ with a stellar mass of 1.41$\times 10^{11}$ $M_{\odot}$), as estimated initially using LePhare \citep{Arnouts2011}. \smallbreak Astarte and Adonis were observed by ALMA as part of a program (2013.1.00914.S, PI: Bethermin) targeting a pilot sample of four massive z$\sim$2 main-sequence galaxies in band-7 continuum and their CO emission.
The goal was to measure their gas and dust content and to compare their short-wavelength morphology with their CO and continuum morphologies. \subsection{NUV-IR observations} The NUV (rest-frame FUV) detections of our two galaxies are provided by the Canada France Hawaii Telescope (CFHT) in the \emph{u} band. Visible and NIR detections (rest-frame mid-UV to NUV) are obtained via the broad-band Suprime-Cam of Subaru in the \emph{B, V, r, i$^+$} bands, and the mid-IR data (rest-frame NIR) are from the IRAC camera of \emph{Spitzer}. The IR-bright Astarte has a MIPS detection at 24\,$\mu$m with a signal-to-noise ratio (hereafter S/N) > 20 and is very bright (S/N $\sim 20$) in the IR detections of \emph{Herschel}, where the beam size is large. The 100\,$\mu$m observation from PACS does not detect either of the two galaxies, but provides an upper limit, which is taken into account in the SED fitting of Astarte since it constrains the far-IR part of the spectrum. The radio continuum of Astarte is tentatively detected with the Karl G. Jansky Very Large Array (VLA) in the S band at $\nu=$3\,GHz \citep{Smolcic2017}. This tentative detection was not included in the initial catalog of \citet{Smolcic2017}, since it falls just below their detection threshold of 5$\sigma$ (S/N=4.3). Adonis does not have any detection from the VLA at 3 GHz. We thus estimated a 3$\sigma$ upper limit from the standard deviation in the cutout image around our two sources. The beam width of the VLA detection is $0.75\arcsec$, and the continuum is shown in Figure~\ref{fig:Figure1}.\smallbreak The \citet{Jin2018} catalog provides JCMT fluxes at 850\,$\mu$m for both of our galaxies (2440$\pm$2519 $\mu$Jy for Astarte and 3910$\pm$2516 $\mu$Jy for Adonis). We refrain from using these super-deblended fluxes due to the high uncertainties, probably caused by the degeneracies in the deblending of such a close pair, and because the majority of the flux is unexpectedly attributed to the smaller, less IR-bright Adonis. Table \ref{tab:Table1} presents a summary of the available photometric data of the two galaxies from the different instruments. \subsection{ALMA observation} Astarte was observed at 2.7\,mm with ALMA (band 3) with a time-on-source of 45 minutes, using 32 antennas, on September 5, 2015, during cycle 2 (P.I. M. B\'ethermin). We use the Common Astronomy Software Applications package and pipeline (CASA) v5.4\footnote{\url{https://casa.nrao.edu/}} \citep{McMullin2007} for flagging and for reducing the visibility data. The deconvolution was performed with the CLEAN algorithm using natural weighting for an optimal S/N. Multi-frequency synthesis imaging of the line-free channels showed no significant continuum emission; its subtraction was therefore not needed. In the deconvolution process, the cell size was set to $0.1\arcsec$. The achieved synthesized beam size is $0.78\arcsec \times 0.56\arcsec$, the velocity resolution of the cube is 21.36\,km\,s$^{-1}$, and the rms is 0.47\,mJy\,beam$^{-1}$\,km\,s$^{-1}$ per channel. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{RGB.pdf} \caption{\footnotesize Integrated flux of the ALMA-detected CO emission (green contours), along with the VLA-detected radio continuum at 3 GHz (magenta contours), on the RGB image (VISTA's Ks, H and J) of Astarte and Adonis. The beam size of ALMA is $0.78\arcsec\times0.50\arcsec$ (lower left beam). The beam FWHM of the VLA is $0.75\arcsec$. The outermost contour of the CO integrated flux (green) is at 2$\sigma$ significance.
The subsequent contours are in steps of 1$\sigma$ with the innermost contour showing 5$\sigma$. The magenta contours show 2 and 3$\sigma$ significance. The blue cross is centered on Adonis, the red cross is centered on Astarte.} \label{fig:Figure1} \end{figure} \begin{table}[!htbp] \begin{center} \tiny \begin{tabular}{l c c c c} \hline\hline & & & Astarte & Adonis \\ \hline COSMOS15 ID & & & 647980 & 648299 \\ \hline redshift & & & z$_{phot} = 2.153$ & z$_{spec} = 2.140$\\ \hline\hline Telescope/ & Filter & $\lambda$ & $S_{\nu}$ & $S_{\nu}$\\ Instrument& & ($\mu$m) & ($\mu$Jy) & ($\mu$Jy) \\ \hline\hline CFHT/ & u & 0.383 & $0.104\pm0.032$ & $0.492\pm0.032$\\ MegaCam & & & & \\ \hline Subaru/ & B & 0.446 & $0.127\pm0.018$ & $0.596\pm0.032$ \\ Suprime-Cam & V & 0.548 & $0.252\pm0.033$ & $0.938\pm0.049$ \\ & r & 0.629 & $0.246\pm0.029$ & $0.904\pm0.041$ \\ & i$^+$ & 0.768 & $0.331\pm0.035$ & $0.973\pm0.042$ \\ & z$^{++}$ & 0.910 & $0.719\pm0.062$ & $1.329\pm0.063$ \\ \hline VISTA/ & Y & 1.02 & $0.836\pm0.155$ & $1.519\pm0.162$ \\ VIRCam & J & 1.25 & $2.691\pm0.175$ & $2.682\pm0.181$ \\ & H & 1.65 & $4.234\pm0.241$ & $3.243\pm0.254$ \\ & Ks & 2.15 & $9.536\pm0.351 $ & $4.776\pm0.362$\\ \hline \textit{Spitzer} & IRAC1 & 3.6 & $18.60\pm0.07$ & $3.70\pm0.10$ \\ & IRAC2 & 4.5 & $25.10\pm0.10$ & $2.80\pm0.13$\\ & IRAC3 & 5.8 & $25.10\pm2.00$ & $3.60\pm2.60$ \\ & IRAC4 & 8.0 & $15.30\pm3.30$ & - \\ \hline \textit{Spitzer} & MIPS1 & 24 & \multicolumn{2}{c}{$351\pm17$} \\ \hline \textit{Herschel} & PACS & 100 & \multicolumn{2}{c}{$<6734$} \\ \hline \textit{Herschel} & SPIRE & 250 & $17792\pm744$ & - \\ & SPIRE & 350 & $16058\pm1026$& - \\ \hline ALMA & band 3 & 3100 & \multicolumn{2}{c}{$<117$}\\ \hline VLA & S & 1.3$\times10^5$ & $9.9\pm2.3$ & $<7.3$ \\ \hline \end{tabular} \caption{\footnotesize Summary of the data of the two sources observed with the different instruments. $S_{\nu}$ is the flux in $\mu$Jy. $\lambda$ is the central wavelength of the filter band.} \label{tab:Table1} \end{center} \end{table} \section{Results} \subsection{Probing the molecular gas of Astarte}\label{section3.1} In the data cube we find only one significant line and no significant continuum source in the field of view. The line extraction procedure, along with the derivation of the luminosity and the gas mass, is described in the following subsections. \subsubsection{Line extraction} The ALMA-detected emission line of Astarte corresponds to the CO(3-2), with a peak at an observed frequency of $\nu_{obs} = 109.65$ GHz implying $z_{CO(3-2)} = 2.154$, which agrees with the photometric redshift $z_{phot, Astarte}=2.153^{+0.051}_{-0.058}$ \citep{laigle16}. This validates Astarte as the origin of the detected CO emission. We do not detect Astarte in the continuum and measured a 3-$\sigma$ upper limit from the map of 0.117\,mJy. The expected flux densities from the SED modeling discussed in Sect.~\ref{SED_results} are 0.007\,mJy and 0.049\,mJy for Adonis and Astarte, respectively. It is thus not surprising that neither of our two sources is detected. The flux uncertainty was determined by deriving the standard deviation in the source-free pixels in the non primary-beam corrected map, since it has similar noise levels across the emission-free pixels (the noise in the central region is $\sim$2$\%$ higher than in the outermost region of the map). The achieved S/N is 5.2 for the brightest channel of the CO(3-2) of Astarte. The emission line was extracted by fitting a Gaussian over the profile, as sketched below.
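A minimal sketch of such a fit (ours, not the actual extraction pipeline; \texttt{velocity} and \texttt{flux} are placeholders for the extracted spectrum) could read:
\begin{verbatim}
# Minimal Gaussian line-fit sketch (ours, not the actual pipeline);
# `velocity` and `flux` are placeholders for the extracted spectrum.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# popt, pcov = curve_fit(gaussian, velocity, flux, p0=(1.7, 0.0, 65.0))
# fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[2]  # ~153 km/s here
\end{verbatim}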
The goodness of the Gaussian fit was verified with a $\chi^2$ test; its properties are summarized in Figure \ref{fig:Figure2} along with the redshifted CO(3-2) line. The full width at half maximum (FWHM) of the Gaussian is found to be 152.74$\pm$33.21 km s$^{-1}$. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{freq_velo.pdf} \caption{\footnotesize Spectral profile of Astarte with the redshifted CO(3$\rightarrow$2) line (dashed vertical grey line) and the Gaussian fit (red) with its properties.} \label{fig:Figure2} \end{figure} The spectroscopic redshift of the system at the position of the HST detection, found by the zCOSMOS survey \citep{Lilly2009}, is $z_{spec}=2.140$. This $z_{spec}$ corresponds to that of Adonis, since only this UV-bright galaxy is detected with HST/ACS. Taking into account the redshift difference of Astarte and Adonis, the corresponding radial velocity difference $\Delta V$ is 1335$\, km\,s^{-1}$. This velocity difference is greater than what is found in interacting pairs of galaxies, which is typically $\Delta V < 350\, km\,s^{-1}$ \citep{Lambas2003, Alonso2004}. However, outflows and absorption in the UV lines could account for a few hundred km\,s$^{-1}$ \citep{Cassata2020}, as could a combination of the Hubble flow and peculiar motions, which could contribute a significant velocity component if aligned with the line of sight. Therefore, this does not eliminate a possible interaction between Astarte and Adonis. \subsubsection{Line integrated flux and luminosity} The intensity is calculated by integrating over the Gaussian fit of the line, and is then converted to the apparent line luminosity (L$^{\prime}$) by using the expression from \cite{Solomon1997}, which expresses $L^\prime$ in terms of the integrated source brightness temperature in units of \text{$K\ km\ s^{-1}\ pc^2$}: \[ L^{\prime}_{line} = 3.25 \times 10^7 \times I \times\frac{D_L^2}{(1 + z)^3 \nu_{obs}^2}, \] where $D_L$ is the luminosity distance in Mpc, $I$ is the intensity in Jy\,km\,s$^{-1}$, and $\nu_{obs}$ is the observed frequency in GHz. As a consistency test, we also estimated the integrated flux of the line using the moment-zero map, which was obtained by summing the channels where the emission line is detected. The line flux is measured in the moment-0 map using a two-dimensional Gaussian fit of the source. As shown in \citet{Bethermin2020}, there is no significant difference between this method and a fit in the uv plane for faint compact sources observed by ALMA. The resulting flux densities of the two methods are presented in Table\,\ref{tab:Table2}. The intensities derived with the two methods differ at the 1.2\,$\sigma$ level. The spectrum is extracted at a single point assuming a point source, while the two-dimensional fit can recover the flux from an extended source. This small difference of flux suggests that our source could be marginally resolved. Hereafter, we use the flux from the moment-0 map, which takes this into account. However, we cannot formally exclude another faint and diffuse component at larger scale considering the depth of our data. Figure \ref{fig:Figure1} shows the flux-integrated moment-0 map of Astarte represented by confidence-level contours. The size of the CO disk is $\sim$74\,kpc.
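For reference, the velocity offset quoted above and the line luminosity reported in Table~\ref{tab:Table2} can be reproduced in a few lines (a sketch of ours using the adopted WMAP7 cosmology, not the scripts actually used in this work):
\begin{verbatim}
# Sketch (ours): velocity offset and L'_CO(3-2) from the quantities
# quoted above, with the adopted WMAP7 cosmology.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.4, Om0=0.272)   # flat, Omega_Lambda = 0.728

c_kms = 299792.458
dv = c_kms * (2.154 - 2.140) / (1 + 2.154)
print(f"Delta V ~ {dv:.0f} km/s")           # ~1330 km/s

z, nu_obs, I_co = 2.154, 109.65, 0.328      # GHz, Jy km/s (moment-0)
D_L = cosmo.luminosity_distance(z).value    # Mpc
L_prime = 3.25e7 * I_co * D_L**2 / ((1 + z)**3 * nu_obs**2)
print(f"L'_CO(3-2) ~ {L_prime:.2e} K km/s pc^2")   # ~8.5e9, cf. Table 2
\end{verbatim}
The small difference with the $\sim$1335\,km\,s$^{-1}$ quoted above comes from the rounding of the redshifts.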
\begin{table}[h] \begin{center} \begin{tabular}{ccc@{\extracolsep{\fill}}c} \hline\hline \small Peak flux & \small$I_{CO(3-2)}^{spec}$ & \small$I_{CO(3-2)}^{mom}$ & \small$L^{\prime}_{CO(3-2)}$ \\ \small density (mJy) & \multicolumn{2}{c}{(\small Jy km s$^{-1}$)} & \small (10$^9$ K km s$^{-1}$ pc$^2)$ \\ \hline \small $1.690\pm0.277$ & \small$0.251\pm0.062$ & \small$0.328\pm0.047$ &\small $8.508\pm1.219$ \\ \hline \end{tabular} \caption{\footnotesize Summary of the CO(3-2) emission line properties of Astarte. $I_{CO(3-2)}^{spec}$ is obtained by integrating over the Gaussian fit of the emission line. $I_{CO(3-2)}^{mom}$ is the intensity derived from the moment-0 map.} \label{tab:Table2} \end{center} \end{table} \subsubsection{Deriving the molecular gas mass}\label{alpha} To derive the total mass of the molecular gas in a galaxy, we assume that the H$_2$ mass is proportional to the CO(1-0) line luminosity, which is the commonly used tracer of the cold star-forming molecular clouds thanks to its small excitation requirement. The H$_2$ mass can be derived using a conversion factor $\alpha_{CO}$ \citep[e.g.][]{Downes2003, Greve2005, Tacconi2006, Carilli2013, Bothwell2013}: \[M_{H_2} = \alpha_{CO}\ L^{\prime}_{CO(1-0)},\] where M$_{H_2}$ is the mass of the molecular hydrogen in $M_\odot$, $\alpha_{CO}$ is the conversion factor, and $L^{\prime}_{CO(1-0)}$ is the line luminosity in \text{$K\ km\ s^{-1}\ pc^2$}. This method of deriving the H$_2$ mass is very common, especially for galaxies at high redshift, where data and spatial resolution are often limited. Our CO(3-2) line luminosity has to be converted to a CO(1-0) luminosity using a line luminosity ratio $r_{31} = L^{\prime}_{CO(3-2)}/L^{\prime}_{CO(1-0)}$. We use $r_{31}=0.42\pm0.07$, which is the average ratio found for $z=1.5$ SFGs by \citet{Daddi2015}. This results in: \[L^{\prime}_{CO(1-0)} = (2.03\pm0.59) \times 10^{10}\,K\,km\,s^{-1}\,pc^2.\] To convert this luminosity into a hydrogen mass, we use two conversion factors: $\alpha_{CO}=0.8$ and a galactic conversion factor of $\alpha_{CO}=4.36$. The first one is adopted to recover the molecular gas mass in starbursts (SBs) and submillimeter galaxies (SMGs), where the gas is efficiently heated by dust. The galactic conversion factor is suitable for normal main-sequence galaxies \citep{DS98,Bolatto13,Carilli2013}. For $\alpha_{CO} = 0.8$, the mass of the molecular hydrogen is: \[M_{H_2 (\alpha=0.8)} = (1.62\pm0.47) \times 10^{10}\ M_{\odot},\] whereas $\alpha_{CO} = 4.36$ results in $M_{H_2} = (8.85\pm2.57)\times 10^{10}\ M_{\odot}$, a gas reservoir five times larger than the one derived with $\alpha_{CO} = 0.8$. \subsubsection{Dynamical mass} With the velocity FWHM of the CO(3-2) line, we use the method described in \citet{Bothwell13} to estimate the dynamical mass of Astarte. Assuming that a rotating disk is the origin of the detected line, the dynamical mass can be written as in \citet{Neri03}: \[M_{dyn}\ \ (M_{\odot}) = 4\times10^4\ \Delta V^2\ R / \sin^2 (i),\] where $\Delta V$ is the FWHM of the line velocity, $i$ is the inclination angle of the disk, and R is the radius of the disk in kpc. For a random inclination of \(\langle i \rangle = 57.3^{\circ}\) \citep{Law2009}, the dynamical mass is found to be (1.11 $\pm${0.23}) $\times 10^{11}\,$M$_{\odot}$.\newline The gas-mass to dynamical-mass ratio for a galactic conversion factor is therefore $M_{H_{2}}/M_{dyn}=0.76\pm0.33$, while for $\alpha_{CO} = 0.8$, $M_{H_{2}}/M_{dyn}=0.15\pm0.07$.\smallbreak The hydrogen mass derived assuming a galactic conversion factor is able to trace the dynamical mass of Astarte, despite the relatively low FWHM of the CO line, as is the case for similar FWHM values in \citet{Bothwell2013}.
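The conversion-factor arithmetic of the last two subsections is compact enough to be summarized explicitly (a sketch of ours, using the values quoted above):
\begin{verbatim}
# Sketch (ours): the r_31 / alpha_CO bookkeeping of this section,
# with the numbers quoted in the text.
L_co32 = 8.508e9                 # L'_CO(3-2) [K km/s pc^2], Table 2

L_co10 = L_co32 / 0.42           # r_31 of Daddi et al. (2015): ~2.0e10
for alpha in (0.8, 4.36):        # SB-like and Galactic factors
    print(f"alpha_CO = {alpha}: M_H2 = {alpha * L_co10:.2e} Msun")
# -> ~1.6e10 and ~8.8e10 Msun, matching the values above to rounding
\end{verbatim}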
\subsubsection{CO emission morphology}\label{section3.2} We investigate the morphology of the CO(3-2) emission line of Astarte in relation to the detections of the system at other wavelengths, to closely study the association of the CO component with the UV, optical and IR components, as shown in Figure \ref{fig:Figure3}.\newline The HST observation in the I band (rest-frame mid-UV) does not show Astarte, due to its heavy dust obscuration. However, the young stellar population of the less-dusty Adonis is visible in the HST I band, and is bright in the u-band detection of CFHT (rest-frame far-UV) and in the J band of VISTA (rest-frame NUV). In the Ks bands of CFHT and VISTA, which correspond to rest-frame visible light, Adonis becomes less bright, and it is very faint in the longer-wavelength observations of ALMA and the VLA.\smallbreak The dusty Astarte is not visible in the u band of CFHT (rest-frame FUV). It is however detected in the Ks bands of VISTA and CFHT, showing a stellar population that is bright at rest-frame visible wavelengths. A spatial offset (of $\sim0.39\arcsec$) is visible between the ALMA-detected CO emission and the emission of the stellar population (observed in the Ks bands) of Astarte. \citet{Faisst2020} found an average offset between the COSMOS2015 catalogue and the Gaia reference frame of $\Delta (RA) = -63.9^{+70.7}_{-60.2}$ milliarcsec and $\Delta (Dec) = -1.4^{+80.4}_{-67.3}$ milliarcsec. This systematic offset cannot explain the visible offset between the CO emission and the rest-frame optical counterpart of Astarte. Moreover, the continuum of Astarte detected by the VLA at 3 GHz and its CO emission detected by ALMA are aligned, eliminating the possibility of a systematic error due to ALMA's synthesized beam size. \smallbreak Although the original spectroscopic redshift of 2.140 (for both sources) found by zCOSMOS \citep{Lilly2009} was derived from the visible range of the HST observation, where Astarte is not observed, ALMA offers a spectroscopic redshift for the latter (z$_{ALMA}$ = 2.154). This shows the importance of long-wavelength detections, especially for dust-obscured galaxies where the UV-NIR emission is heavily attenuated \citep{Schreiber18a, Wang2019}. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{CUTOUTS.pdf} \caption{\footnotesize ALMA-detected CO(3-2) emission line contour map (red contours) of Astarte overlaid on detections from different telescopes/instruments at different bands, as specified in each panel. From upper left to lower right: CFHT u band at 0.383 $\mu m$, HST I band at 0.805 $\mu m$, VISTA J band at 1.252 $\mu m$, CFHT Ks band at 2.146 $\mu m$, VISTA Ks band at 2.147 $\mu m$, and the VLA detection at 3\,GHz. The outermost contour is 3$\sigma$, and the subsequent contours are in steps of 1$\sigma$, with the red innermost contour showing 5$\sigma$. The beam size is $0.78\arcsec\times0.50\arcsec$. The white bar shows a 1 arcsecond scale. } \label{fig:Figure3} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{SED.pdf} \caption{\footnotesize Best fits of the constructed SEDs of Astarte and Adonis along with their relative residuals. The SED of Astarte (left) is produced using the LF17 attenuation law.
The SED of Adonis (right) is produced using the CF00 attenuation law. The best fit is shown in black. The unattenuated stellar emission is shown with the blue line. The filled region shows the difference between the unattenuated and the attenuated stellar emission, absorbed by dust. Red dots are the best-fit values of the observations, which are shown with the purple boxes. Upper limits are shown as purple triangles. } \label{fig:Figure4} \end{figure*} \subsection{SFRs, stellar masses and dust luminosities}\label{section3.3} \begin{table*}[t] \begin{center} \begin{tabular}{l c c} \hline\hline Parameter & & Values \\ \hline\hline \multicolumn{3}{c}{Star formation history \citep{Ciesla17}} \\ \hline Stellar age$^{(i)}$ & $age_{main}$ & 0.8 - 3.2\,Gyr in bins of 0.2\,Gyr \\ e-folding time$^{(ii)}$ & $\tau_{main}$ &0.8, 1, 3, 5, 8\,Gyr \\ Age of burst/quench episode & $t_{flex}$ & 5, 10, 50, 100\,Myr \\ SFR ratio after/before & $r_{SFR}$ & 10$^{-2}$, 10$^{-1}$, 0, 10$^1$, 10$^2$, 10$^3$ \\ \hline\hline \multicolumn{3}{c}{Stellar population synthesis \citep{bc03}} \\ \hline Initial mass function & IMF & \citep{Chabrier03}\\ Metallicity & $Z$ & 0.02 \\ Separation age & & 10\,Myr\\ \hline\hline \multicolumn{3}{c}{Dust attenuation laws} \\ \hline \multicolumn{3}{c}{\citep{Calzetti00}} \\ \hline Colour excess of young stars & E(B-V) & 0.1 - 1 in bins of 0.1 \\ Reduction factor$^{(iii)}$ & f$_{att}$ & 0.3, 0.5, 0.8, 1.0\\ \hline \multicolumn{3}{c}{\citep{CharlotFall00}, \textbf{\citep{lofaro17}}} \\ \hline V-band attenuation in the ISM & A$_V^{ISM}$ & 0.3 - 3 in bins of 0.1 \\ A$_V^{ISM}$ / (A$_V^{BC}$+A$_V^{ISM}$) & $\mu$ & 0.3, 0.5, 0.8, 1 \\ Power law slope of the ISM & & -0.7, \textbf{-0.48} \\ Power law slope of the BC & & -0.7 \\ \hline\hline \multicolumn{3}{c}{Dust emission model \citep{dl2014}} \\ \hline Mass fraction of PAH & q$_{PAH}$ & 1.77, 2.50, 3.19 \\ Minimum radiation field & U$_{min}$ & 10, 25, 30, 40 \\ Power law slope & $\alpha$ & 2 \\ \hline\hline \multicolumn{3}{c}{Synchrotron emission} \\ \hline FIR/radio correlation coefficient & & 2.3 - 2.9 in bins of 0.1 \\ Power law slope & $\alpha_{synchrotron}$ & 0.4 - 0.9 in bins of 0.1 \\ \hline \end{tabular} \caption{Input parameters used to fit the SEDs of Astarte and Adonis with CIGALE. (i) The stellar age is the age of the main stellar population. (ii) The e-folding time is the time required for the assembly of the majority of the stellar population. (iii) The reduction factor f$_{att}$ is the colour excess of the old stars relative to the young ones.} \label{tab:Table3} \end{center} \end{table*} \begin{table}[t] \begin{center} \begin{tabular}{c c c c c} \hline\hline & Attenuation law & $\chi^2$ & reduced $\chi^2$ & BIC \\ \hline & C00 & 43.22 & 2.12 & 73.18 \\ Astarte & CF00 & 18.34 & 0.97 & 51.30 \\ & LF17 & 16.06 & 0.84 & 49.01 \\ \hline & C00 & 16.28 & 1.10 & 46.78\\ Adonis & CF00 & 10.51 & 0.70 & 41.01 \\ & LF17 & 13.99 & 0.93 & 44.49 \\ \hline \end{tabular} \caption{\footnotesize Comparison of the quality of the fits of Astarte and Adonis produced with CIGALE using the three attenuation laws.} \label{tab:Table4} \end{center} \end{table} \subsubsection{The SED modeling} We use the SED modeling code CIGALE\footnote{\url{http://cigale.lam.fr}} \citep{Boquien19} to derive the physical properties of our sources. The code allows one to model galaxy SEDs from the UV to radio wavelengths, taking into account the energy balance between the emission absorbed by dust in the UV-visible range and the IR emission.
CIGALE offers a variety of modules for each physical process a galaxy may undergo. The modules that we use in our SED fitting procedures are described below. \subsubsection{Stellar component} To model the stellar component of Astarte and Adonis, we use the stellar population synthesis of \citet{bc03}. This stellar library computes the direct stellar contribution to the spectrum (UV-NIR range) by populating the galaxy with young and old stars of different masses, as well as the gas mass required to produce such a population. This model was developed based on observations of nearby stellar populations, and it describes well the various stellar emissions that one expects to encounter in any galaxy. These models depend on the metallicity and the separation age\footnote{Age of the separation between the young stellar population and the old one.}. We use a solar-like metallicity, and we take into account nebular emission since it contributes to the total SED model from the UV to the NIR.\smallbreak The different stellar demographics must be modeled with an appropriate SFH in any SED modeling, since it is critical to estimate the contribution of the young and old stars to the total flux. An appropriate SFH is key to deriving the SFR of a galaxy, as the latter strongly depends on the assumptions made \citep{Ciesla17}. CIGALE offers different SFH scenarios, varying from the simple delayed SFH to more complex ones containing episodes of bursts or sudden drops in SFR. We use the SFH proposed by \citet{Ciesla17}, which combines a smooth delayed buildup of the stellar population, modeling the long-term SFH of a galaxy, with a flexibility in the last few hundred Myr that allows for recent and drastic SFR variations (burst or quench). This SFH model has been proven to limit biases by decoupling the estimation of the stellar mass, mainly constrained by rest-frame NIR data, from the SFR, constrained by UV and IR data \citep[e.g.,][]{Ciesla16,Ciesla18,Schreiber18b,Schreiber18a}. This kind of SFH was used in the study of high-redshift ($z<3$) passive galaxies to model their SED \citep[e.g.,][]{Schreiber18b,Schreiber18a,Merlin18}. We limit the recent burst/quench episodes to the last 100 Myr of the life of our sources. The recent burst is motivated by the ALMA detection; it is, however, important to note that this burst makes it difficult to constrain the past SFH. The burst part of the SFH is usually responsible for fitting the UV data, whereas the earlier (delayed) part is driven by the older stellar population, manifested in the visible part of the SED. \subsubsection{Attenuation laws} Two prominent attenuation laws are the ones of \citet{Calzetti00} (hereafter C00) and \citet{CharlotFall00} (hereafter CF00). They are widely used in the literature and, along with their variants, can describe the behavior of the extinction in the UV to NIR caused by dust.\newline C00 and its derived recipes are at their core equivalent to reducing the short-wavelength flux coming from a stellar population by an opaque screen, with the opacity depending on the total extinction of the stellar emission in the B and V bands.\newline Another approach is the CF00 power law, which is fundamentally different from C00: it attributes different attenuations to the ISM and to the birth clouds (hereafter BC). This makes the dust more effective at absorbing the UV light, since the young stellar emission has to pass through the dust in the BC and the ISM. Stars that are older are attenuated only by the ISM dust.
The CF00 approach is slightly more complex and more physical than C00 for high-redshift, ultra-dusty galaxies, embodying different dust distributions and densities throughout a galaxy. The efficiency with which C00 and CF00 attenuate the stellar population is set by the slopes of their power laws. The power-law slopes for the BCs and the ISM in CF00 were originally fixed at -0.7 each. The recipe of \citet{lofaro17} (hereafter LF17), a variant of CF00, was tuned by assuming a power-law slope of the attenuation in the ISM equal to -0.48. This recipe provides a steeper attenuation curve at shorter wavelengths. In this work we use these three attenuation laws and compare their best fits and their effects on the derived physical properties of our sources.\smallbreak To assess which attenuation law to use when different modules can produce good and comparable fits, we employ the Bayesian information criterion (BIC), defined as $\chi^2 + k\ln{n}$, where $\chi^2$ is the non-reduced goodness of the fit, $k$ is the number of degrees of freedom of the model, and $n$ is the total number of photometric fluxes used in the fit of the galaxy. We then evaluate the preference of one model over the other by calculating the difference between their BICs: $\Delta BIC > 2$ translates into a notable difference between the two laws, and the fit with the lowest BIC is preferred. This method was used by \citet{Ciesla18} for carefully choosing successful scenarios of SFHs of quenching galaxies, and by \citet{buat2019} for assessing SEDs of $z\sim2$ ALMA-detected galaxies. \subsubsection{Dust emission} To model the dust emission we use the \citet{dl2014} IR emission models, which were calibrated using high-resolution observations of the Andromeda galaxy. \citet{dl2014} considers a variety of dust grains heated by different intensities coming from the stars and the photodissociation regions, and is an improved version of the older \citet{dl2007} model, with the dust opacity varying across the radius of a galaxy. This IR model was successful in reproducing the dust emission of millions of \emph{Herschel}-detected galaxies as a part of the HELP project \citep{Kasia2018}. \subsubsection{Synchrotron emission} The VLA detection at 3\,GHz of Astarte and Adonis allows us to model the synchrotron emission of our objects, taking into account a non-thermal power law for the synchrotron spectrum and the ratio of the FIR/radio correlation. The different parameters used to build our SEDs are shown in Table \ref{tab:Table3}. \subsubsection{SED fitting results}\label{SED_results} In the case of Astarte, the CF00 and LF17 attenuation laws result in better SED fits than C00. The BIC of every model was calculated and is shown in Table \ref{tab:Table4}, along with the other quality-of-fit assessments. $\Delta BIC_{(C00, CF00)}$ = 21.88 and $\Delta BIC_{(CF00, LF17)}$ = 2.29; this privileges the best fit produced with the LF17 attenuation law, which was therefore adopted for deriving the physical properties. Despite the uncertainties on any assumed SFH model, the adopted SFH fitted the short-wavelength data best. \smallbreak For the less-massive Adonis, CF00 gave overall better fits than C00 and LF17, with $\Delta BIC_{(LF17, CF00)}$ = 3.48 and $\Delta BIC_{(C00, CF00)}$ = 5.77. The best SEDs of Astarte and Adonis are shown in Figure \ref{fig:Figure4}. The signature of dust attenuation is clear in the two SEDs, with the heavily dust-obscured Astarte showing more attenuation of its overall stellar emission than is the case for Adonis.
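The $\Delta BIC$ selection described above amounts to the following (an illustrative sketch of ours; $k$ and $n$ are set per fit):
\begin{verbatim}
# Sketch (ours) of the Delta-BIC selection described above; k and n
# are set per fit, and any example values here are only illustrative.
import math

def bic(chi2, k, n):
    # chi2: non-reduced goodness of fit; k: degrees of freedom of the
    # model; n: number of photometric fluxes used in the fit
    return chi2 + k * math.log(n)

def preferred(name_a, bic_a, name_b, bic_b):
    if abs(bic_a - bic_b) > 2:          # notable difference
        return name_a if bic_a < bic_b else name_b
    return "no notable preference"

# With the BIC values of Table 4 for Astarte:
print(preferred("CF00", 51.30, "LF17", 49.01))   # -> LF17
\end{verbatim}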
The derived properties of both galaxies are shown in Table \ref{tab:Table5}. The L$_{IR}$ of Astarte, of the order of $10^{12}\,L_{\odot}$, qualifies it as an Ultra Luminous IR Galaxy (hereafter ULIRG), while Adonis's poor dust content is manifested in its weaker IR luminosity and lower dust mass. \smallbreak To closely inspect the visible dissociation of the gas and the stellar population in Astarte, we follow the method used in \citet{buat2019} by modeling the stellar continuum and the IR emission separately and comparing the derived properties with the ones obtained using full SEDs. Taking into account the UV-NIR data (0.3 - 8 $\mu m$), the best fit for the stellar continuum was obtained with the C00 law, with $\Delta BIC_{(CF00,C00)} = 8.3$ and $\Delta BIC_{(LF17,C00)} = 2.7$. The better-quality fit of the stellar continuum produced using C00 is expected, since this power law effectively attenuates the young stellar population, while the other two laws can be equally efficient in attenuating the older stars, a behavior that translates into a rise in the absorbed near-IR light and therefore a rise in the total IR emission. The IR luminosity derived from the stellar continuum gives $L_{dust}=(2.43\pm1.01)\times 10^{12}\ L_{\odot}$, relatively close to the $L_{dust}$ derived from the full SED. The stellar mass derived from the stellar emission gives $(1.3\pm0.2)\times10^{11}\ M_{\odot}$, and SFR$_{(UV-NIR)}=430\ M_{\odot}\,yr^{-1}$. From the IR data (MIPS - ALMA continua), we get $L_{dust}=(3.25\pm0.08)\times 10^{12}\ L_{\odot}$, consistent with the one derived with the full SED. This result is in agreement with the results of \citet{buat2019}, where consistent dust luminosities were derived from both the full SED and the IR data, while the $L_{dust}$ deduced from the stellar continuum was underestimated. \begin{table}[t] \begin{center} \begin{tabular}{l c c} \hline\hline Physical property & Astarte & Adonis\\ \hline redshift & $z_{CO}=2.154$ & $z_{spec}=2.140$ \\ L$_{IR}\ (10^{12}\ L_{\odot})$ & $3.16\pm 0.06$ & $0.62\pm0.04$ \\ SFR$\ (M_{\odot}/yr)$ & $395\pm20$ & $129\pm59$\\ M$_{\star} (M_{\odot})$& $(3.74\pm0.19)\times10^{11}$ & $(9.37\pm1.76)\times 10^{9}$\\ M$_{dust} (10^{9} M_{\odot})$ & $1.01\pm0.11$ & $0.86\pm0.13$ \\ \hline \end{tabular} \caption{\footnotesize Summary of the physical properties of Astarte and Adonis obtained with CIGALE.} \label{tab:Table5} \end{center} \end{table} \section{Discussion}\label{discussion} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{sSFR.pdf} \caption{\footnotesize Position of Astarte in the sSFR (SFR/M$_{\star}$) versus stellar mass plane, using the three attenuation laws, relative to the main sequence of \citet{Schreiber15} at z=2. The yellow star shows the position of Adonis relative to the MS. Magenta squares denote PHIBSS CO-detected SFGs at z$\approx2$ \citep{tacconi13}. Turquoise circles are ULIRGs at 2<z<2.5 from \citet{Genzel10}. The solid line shows the MS of \citet{Schreiber15}. The dashed and dotted lines are $MS \times 4$ and $MS \times 10$ respectively.} \label{fig:Figure5} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Mmol_sSFR.pdf} \caption{\footnotesize Molecular gas masses derived with the CO conversion factor versus the sSFR. The magenta squares are from T13. Turquoise circles are ULIRGs at 2<z<2.5 from G10. The red circle shows the position of Astarte with the used $\alpha_{CO}=0.8$, and the associated error bar shows the variation of the molecular gas mass using $0.3<\alpha_{CO}<1$.
The blue triangle shows the molecular gas of Astarte for a galactic CO conversion factor of 4.36.} \label{fig:Figure6} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{LIR_LCO.pdf} \caption{\footnotesize Correlation between the CO(1-0) luminosities and the total IR luminosity. The magenta-filled squares are from T13 and the turquoise-filled circles are the sources from G10. The solid black line is the linear regression for MS galaxies \citep{Sargent14} and the dashed line is that for SBs (the regression lines are from the complete sample in \citealt{Sargent14}). } \label{fig:Figure7} \end{figure} Figure \ref{fig:Figure5} shows the position of our galaxies relative to the MS of \citet{Schreiber15}. Adonis lies at 10$\times$MS, qualifying it as a strong starburst, despite its relatively low SFR. While being a starburst, this type of source cannot be detected even by the deepest 3\,mm ALMA survey (expected flux of 7\,$\mu$Jy), which has a 1-$\sigma$ noise of 9.7\,$\mu$Jy \citep{Gonzales2020}.\newline Astarte is a MS galaxy with all the different attenuation recipes used. However, there is a clear difference concerning the position of Astarte relative to the MS as a result of the three attenuation laws. This is attributed to the significant difference in the derived stellar masses, with the CF00 and LF17 attenuation laws resulting in larger stellar masses than C00 due to their higher attenuation in the NIR. This contributes to a lower specific SFR (sSFR = SFR/M$_{\star}$), since the SFRs do not differ significantly between the three laws.\smallbreak Host halo masses of such z$\sim$2 \emph{Herschel}-detected massive MS galaxies were investigated in \citet{Bethermin2014} using clustering and X-ray stacking; these galaxies were found to reside in halos of $>$10$^{13}$ M$_\odot$. Such halo masses are also expected from the stellar mass to halo mass relation \citep{Behroozi13, Durkalec18, Behroozi19}. Astarte is $\sim$4 times less massive than the average central galaxies at z$\sim$1 of \citet{Hilton2013} and \citet{Burg2013}, which indicates that such giant MS galaxies continue to grow, either through in situ star formation or through the accretion of other galaxies, across cosmic time down to lower redshifts.\smallbreak We compare Astarte with CO-detected samples in the same redshift range from \citet{Genzel10} (labeled G10) and \citet{tacconi13} (labeled T13). These samples of SFGs have constrained CO detections and well-investigated physical parameters (in \citealp{Sargent14}).\smallbreak We compare the molecular gas mass of Astarte derived from the CO emission line with that of the G10 and T13 galaxies in Figure~\ref{fig:Figure6}. G10 uses a galactic conversion factor for SFGs and $\alpha_{CO}=1$ for ULIRGs, while T13 adopts a galactic conversion factor for all their sources. Our choice of $\alpha_{CO}=0.8$ yields a lower molecular gas mass for Astarte than for the galaxies of G10 and T13. However, $\alpha_{CO}=4.36$ produces a molecular gas mass that is high with respect to its sSFR.\smallbreak In Figure~\ref{fig:Figure7} we show the correlation between the CO luminosities and the total infrared luminosities. The IR luminosities were derived from the SFRs of all the sources (G10, T13 and Astarte) using the Kennicutt relation \citep{Kennicutt98}. The initial choice of $r_{31} = 0.42\pm0.07$ (the average in \citealp{Daddi2015}) places Astarte on the SB line from \citet{Sargent14}, contradicting its SED result. We therefore investigate the lowest excitation ratio from \citet{Daddi2015} of $r_{31} = 0.27\pm0.07$.
This lower ratio moves Astarte closer to the MS, within the error bars.\smallbreak Using the total molecular gas mass of Astarte derived with the least-excited CO(3-2) ratio from \citet{Daddi2015} ($r_{31} = 0.27\pm0.07$), and assuming a galactic conversion factor, we estimate a gas fraction $f_{gas} = M_{gas} / (M_{gas}+M_{\star}) = 0.27\pm0.07$. Although this falls within the lower limits of the typical molecular gas fractions found in SFGs at z$\approx$2 in \citet{Santini2014} and \citet{Bethermin2015}, an $\alpha_{CO}$ adapted for SBs, combined with a higher r$_{31}$ ratio, reduces the gas fraction significantly.\smallbreak The gas mass derived with a galactic conversion factor gives Astarte a rather small depletion time of $0.22\pm0.07$ Gyr, making it very efficient at forming stars (for comparison, SBs have a depletion time of the order of $\sim100$ Myr; see Fig. 10 in \citealt{Bethermin2015}). Recently, \citet{Elbaz2018} found that compact SFGs on the MS with a relatively low depletion time are not uncommon. These active ultra-massive objects can be hidden at the higher end of the tail of the MS. The average depletion time for the \citet{Elbaz2018} galaxies is around 0.25 Gyr, and although we do not detect the continuum of Astarte with ALMA, its CO emission is compact, as is the case for the continuum of the ALMA-detected galaxies from \citet{Elbaz2018}. This is also confirmed in \citet{Puglisi2019}, where compact massive galaxies at the top of the MS exhibit high SFRs at their cores following their SB epoch.\smallbreak \section{Conclusion}\label{conclusion} In this paper, we analyzed two galaxies, Astarte and Adonis, at the peak of the SFR density, using a multiwavelength dissection combining ALMA observations with UV-submm SED modeling. We investigated the molecular gas content of Astarte through the ALMA detection of its CO(3-2) emission, relying on different excitation ratios $L^{\prime}_{CO(3-2)}/L^{\prime}_{CO(1-0)}$ and different CO conversion factors. A galactic conversion factor, used along with the lowest excitation ratio from \citet{Daddi2015}, confirmed the position of Astarte relative to the MS, as found from its SED modeling. Although the obtained gas fraction is at the lower limit of that in MS galaxies \citep{Santini2014, Bethermin2015}, a possible explanation might be that the instantaneous CO(3-2) emission does not fully recover the molecular mass and the dynamics of Astarte, due to its weak excitation \citep{Daddi2015}. Detections of other transition levels of CO would be helpful to better constrain the molecular mass of Astarte, and therefore its physical characteristics.\smallbreak The physical dissociation of the CO line and the rest-frame stellar population in Astarte was also investigated, as done in \citet{buat2019}, by deriving physical properties from the stellar emission (UV-NIR) and the IR emission separately. As in \citet{buat2019}, the dust luminosity derived from the full SED is in agreement with the one derived from the IR emission, while $L_{dust}$ derived from the stellar emission is slightly underestimated. Furthermore, the C00 attenuation law was preferred when fitting the stellar continuum only. This is consistent with the results of \citet{buat2019} for galaxies with the same radial extension of the rest-frame stellar emission and the ALMA-detected emission. The LF17 attenuation law, which was tuned for ULIRGs at $z\sim2$, succeeds the most in mimicking the dust attenuation of Astarte when fitting the whole UV-submm SED; this, however, results in a higher stellar mass.
The molecular mass of Astarte, obtained with a galactic conversion factor and the lowest excitation ratio from \citet{Daddi2015}, accounts for $\sim$0.9 of its total dynamical mass, a larger contribution than what is found in the ULIRGs of \citet{Neri03}. The SFRs and stellar masses derived from the SED fits show that Adonis is a SB galaxy, while Astarte is on the MS of SFGs. However, its small gas fraction makes Astarte very efficient at forming stars, with a depletion time an order of magnitude shorter than expected for typical MS galaxies \citep{Bethermin2015}. Such SB-like star formation activity on the MS was found for massive compact SFGs in \citet{Puglisi2019}, in their post-SB phase. Short depletion times of massive MS galaxies were also found in \citet{Elbaz2018}, supporting the picture that Astarte is caught in the middle of quenching following an earlier SB episode.\smallbreak The central galaxies at z$\sim$1 from \citet{Hilton2013} and \citet{Burg2013} are $\sim$4 times more massive than Astarte. This indicates that even massive objects at the high end of the MS, at the time when the Universe was undergoing its peak of star formation rate density, continue their mass assembly down to lower redshifts. \begin{acknowledgements} M.H. and K.M. have been supported by the National Science Centre (UMO-2018/30/E/ST9/00082). We acknowledge and thank the referee for a thorough and constructive report, which helped improve this work. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00914.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \subsection{Chemical introduction to the subject} When speaking about chemical bonding, it is useful to make a distinction between effects that are discussed at the collective (molecular, crystalline) level, and those associated with a fragment. For example, the energy lowering obtained when atoms form a molecule provides information that we qualify here as collective. However, there are many quantities that are obtained when looking at pairs of atoms forming a molecule, for example: \begin{itemize} \item lines drawn connecting atoms, e.g., C-H, since the 19th century, \item the energy attributed to a pair of them (e.g., that of the CC or CH bonds in saturated hydrocarbons), \item nearly invariant distances between types of atoms, e.g., the length of the CH bond, \item patterns to understand spectra, e.g., attributed to the CH stretching frequency, \item \dots \end{itemize} In this paper we are interested in describing the grouping of electrons in some spatial domain, $\Omega$. We use quantum mechanical calculations, and start with the Schrödinger equation. The Hamiltonian gives a natural partitioning, and it is reasonable to use it (see, e.g.~\cite{Hel-33,Rue-62}), as well as an energy partitioning resulting from it. In many cases the physical origin of the formation of groups is the Pauli principle. This directs us toward analyzing the wave function. Due to the complicated structure of the wave function, its reduction to three-dimensional objects is desirable. It is worth mentioning in this direction the work of Artmann~\cite{Art-46} and that of Daudel~\cite{Dau-53}. Using the electron density, $\rho$, as proposed by Bader for the Quantum Theory of Atoms in Molecules (QTAIM)~\cite{Bad-90} had, and still has, great success. The Pauli principle is hidden in the density. It is made more explicit~\cite{SavJepFlaAndPreSch-92} in the Electron Localization Function (ELF) of Becke and Edgecombe~\cite{BecEdg-90}. The Maximum Probability Domains, MPDs~\cite{Sav-02} (or their simplified variants), of interest in this paper, originate from Daudel's idea of partitioning 3D space (into so-called "loges"), using the wave function squared. However, instead of making a partition of the whole molecular (or crystalline) space, with MPDs one concentrates on specific spatial regions, thus reducing the computational effort, and avoiding the propagation of errors produced in a region different from that of interest. \subsection{Quantum mechanical introduction to the subject} We speak about having two electrons in a bond, eight electrons in the valence shell of Ne, atomic charges, and so on. The operator that gives the number of electrons in a spatial domain, $\Omega$, is \begin{equation} \label{eq:n-hat} \hat{N}(\Omega) = \int_\Omega \hat{\rho}({\bf r}) \, d {\bf r} \end{equation} where \begin{equation} \hat{\rho}({\bf r}) = \sum_{i=1}^N \delta({\bf r}_i-{\bf r}) \end{equation} is the density {\em operator}, $N$ the total number of electrons in the system, $\delta$ is Dirac's $\delta$ function, ${\bf r}_i$ are the positions of the electrons, and ${\bf r}$ refers to an arbitrary position in three-dimensional space. The eigenfunctions of the Hamiltonian operator are not, in general, eigenfunctions of $\hat{N}(\Omega)$.
\footnote{The operators do not commute for arbitrary $\Omega$, while they trivially commute for $\Omega$ being the whole space, as in this case $\hat{N}$ becomes $N$, the number of electrons in the system.} As a result, we cannot specify a given number of electrons in $\Omega$. However, we can specify a {\em probability} to have a given number of electrons in $\Omega$. In this paper we choose a number of electrons, $n$. It is provided by chemical intuition, e.g., that of having eight electrons in the valence shell of the Ne atom. We are interested in the spatial region that maximizes the probability of having that chosen number of electrons in it. This is a {\em Maximum Probability Domain} (MPD, see appendix~\ref{app:mpd} for details). Note that the probabilities of finding an arbitrary number of particles can be obtained for any spatial region, $\Omega$, for example in the basins of the electron density as provided by QTAIM~\cite{Bad-90}, or those of the electron localization function, ELF.~\cite{ChaFueSav-03} Note also that an approximation to an MPD produces only second order errors in the probabilities, because the probability is maximal for an MPD. As with localized orbitals, one may consider electron pairs, and obtain spatial regions that can be associated with one or more nuclei (lone pairs, two-center bonds, three-center bonds, \dots). However, the number of electrons considered for an MPD, $n$, can be adapted to the question of interest. For example, one may want to search for a given ion in a crystal and choose $n$ equal to the number of electrons in that ion.~\cite{CauSav-ion-11} Furthermore, one may consider spatially disconnected regions, for example when considering spin couplings of electrons on different centers. The definition of the probabilities and of the MPDs uses the wave function squared. Thus, there is no restriction to ground states. The same definitions can be applied to time-dependent processes.~\cite{Sav-18} One can consider probabilities for multiple domains, e.g., to establish connections to resonant structures.~\cite{PenFraBla-b-07,PenFraBla-c-07} It is possible to define a joint probability (of having $n_A$ electrons in $\Omega_A$, and $n_B$ electrons in $\Omega_B$), or a conditional probability (of having $n_A$ electrons in $\Omega_A$ given that there are $n_B$ electrons in $\Omega_B$).~\cite{Dau-53, GalCarLodCanSav-05, PenFraBla-a-07, PenFra-19, SceSav-22} \section{How numbers are obtained} \subsection{Probabilities} \subsubsection{Choosing the relevant quantities to be obtained} Traditionally, one looks at the population of a spatial region. We can see from Eq.~\ref{eq:n-hat} that the expectation value of the number of particles in the domain $\Omega$ is just its population, \begin{equation} \label{eq:pop-mean} \langle \Psi | \hat{N}(\Omega) | \Psi \rangle = \int_\Omega \rho({\bf r}) d{\bf r} \end{equation} where $\Psi$ is the wave function of the system and $\rho$ its one-particle density. Being a mean value, the population can be expressed in terms of probabilities, \begin{equation} \label{eq:pop-p} \mu(\Omega) = \langle \Psi | \hat{N}(\Omega) | \Psi \rangle = \sum_{n=0}^N n \, P(n,\Omega) \end{equation} where $P(n,\Omega)$ is the probability to have $n$ electrons in $\Omega$. Its expression is given in appendix~\ref{app:mpd}.
In the same way, we can express the variance, \begin{equation} \label{eq:sigma} \sigma^2(\Omega) = \sum_{n=0}^N (n-\mu)^2 P(n,\Omega) \end{equation} \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\textwidth]{p-h2o-aim.pdf} \end{center} \caption{Probability distribution for the atomic (QTAIM) basins of O and H in the water molecule (dots) and the normal distributions (dashed lines) with the same $\mu$ and $\sigma$.~\cite{ChaFueSav-03} The results for H are in red, for O in blue.} \label{fig:p-h2o-aim-be} \end{figure} It is tempting to indicate just $\mu$ and $\sigma$. If the distribution of probabilities were normal, $\mu$ and $\sigma$ would be sufficient to recover all information about it. However, the distribution is not normal. As $n$ is always an integer, we do not have a continuous probability distribution, so it cannot be a normal distribution. To circumvent this objection, one can use a Gaussian function of a real variable, but read it only at integer values of $n$ to obtain the values of the probabilities. In many cases, this is expected to work well (cf. Ref.~\cite{PfiBohFul-85}). However, there is another aspect to consider with normal distributions: a normal probability distribution function is non-zero for arguments that extend to negative values and to values larger than $N$. This is physically impossible. We should restrict the reading of the Gaussian curve to the values $n=0,1,\dots,N$. Let us take a numerical example, where $\Omega$ is an atomic basin (QTAIM) in the water molecule, Fig.~\ref{fig:p-h2o-aim-be}.~\cite{ChaFueSav-03} Choosing just the points at integer values on a normal distribution gives the absurd interpretation that there is a significant probability ($\approx 0.2$) to have $-1$ electrons in the H atom basin. Furthermore, there is a similar probability to have 11 electrons in the O atom basin (10 being present in the water molecule). Fig.~\ref{fig:p-h2o-aim-be} can induce us to believe that the probability to find $n$ electrons in $\Omega$ could be read (up to a precision of about 0.1) at admissible values of a normal probability distribution function with the same mean and variance as the physical probability distribution. However, let us now consider the dissociation of the H$_2$ molecule. For $\Omega$ being the half space containing one H atom, the covalent (ground state) dissociation produces $P(n=1,\Omega)=1$, yielding $\mu=1,\sigma=0$. In this case, indicating $\mu$ and $\sigma$ is sufficient. A different situation arises when we consider a state that dissociates into the ionic form, H$^+$~\dots~H$^- \leftrightarrow$~H$^-$~\dots~H$^+$. We get $P(n=0,\Omega)=P(n=2,\Omega)=1/2$, $P(n=1,\Omega)=0$. The mean (the population) is the same as in the covalent case, $\mu=1$, while the variance is different, $\sigma^2=1/2$. A Gaussian form with mean $\mu=1$ yields a maximum (0.56) at $n=1$, where $P$ is 0, and too low estimates at $n=0,2$, namely 0.21 instead of 0.5. In statistics, more information about probability distributions is summarized by introducing higher order (standardized) moments, e.g., the third power of $(n-\mu)$ (skewness) or the fourth power (kurtosis). However, if we look at the data, we see that the number of cases where the probability $P(n,\Omega)$ differs significantly from zero is small, and these few values already contain all the relevant information. Thus, we may use directly the significant $P(n,\Omega)$ instead of statistical summaries.
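As a check on the numbers just quoted, the short sketch below (Python; added here as an illustration, not part of the original presentation) reconstructs $\mu$ and $\sigma^2$ from the ionic-dissociation probabilities and reads the corresponding normal distribution at the integers:

\begin{verbatim}
import math

def gauss(n, mu, sigma2):
    """Normal pdf with mean mu and variance sigma2, read at n."""
    return math.exp(-(n - mu)**2 / (2*sigma2)) / math.sqrt(2*math.pi*sigma2)

# Ionic dissociation of H2: P(0) = P(2) = 1/2, P(1) = 0.
P = {0: 0.5, 1: 0.0, 2: 0.5}
mu = sum(n*p for n, p in P.items())               # population, Eq. (pop-p)
var = sum((n - mu)**2 * p for n, p in P.items())  # variance, Eq. (sigma)
for n in (0, 1, 2):
    print(n, P[n], round(gauss(n, mu, var), 2))
# prints 0.21 at n = 0, 2 and 0.56 at n = 1, as quoted in the text
\end{verbatim}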
If needed, the latter can easily be obtained once the $P(n,\Omega)$ are known, as, for example, in Eqs.~\eqref{eq:pop-p} and \eqref{eq:sigma}. \subsubsection{Computing $P(n,\Omega)$} For Slater determinants (as obtained from Hartree-Fock or Kohn-Sham calculations), $P(n,\Omega)$ can be computed from the overlap integrals \begin{equation} S_{ij}(\Omega) = \int_\Omega \phi_i({\bf r}) \phi_j({\bf r}) d{\bf r} \end{equation} where $\phi_i, \phi_j$ are the orbitals present in the Slater determinant.~\cite{Sav-02,CanKerLodSav-04} Note that here the integration is not performed over $\R^3$, but over $\Omega \subset \R^3$. For large systems, it is convenient to use localized orbitals in order to neglect the $S_{ij}$ between distant orbitals (that do not overlap in $\Omega$). Multi-determinant wave functions can also be used.~\cite{FraPenBla-07, FraPenBla-08,PenFraBla-a-07} Quantum Monte Carlo calculations are very flexible in the choice of wave functions and are convenient for estimating $P(n,\Omega)$.~\cite{SceCafSav-07} Moreover, computing the probabilities with samples drawn from a Monte Carlo sampling of the squared wave function is particularly simple: one simply counts the number of electrons present in $\Omega$ for each configuration generated during the calculation. The ratio between the number of configurations presenting $n$ electrons in $\Omega$ and the total number of configurations is an estimator of $P(n,\Omega)$. As a rule, the number of configurations needed to obtain a reasonable probability is much lower than that needed to obtain a reasonable energy, simply because the number of digits needed is much lower for probabilities. \subsubsection{Sensitivity to the choice of the wave function} All interpretative methods raise the question whether a refinement of the method (such as a change of the basis set) significantly changes the conclusions. The simplest wave function capturing the physics should be sufficient. In the cases we discuss, the Pauli principle is already described by a single determinant wave function, so methods like Hartree-Fock or the Kohn-Sham method should be sufficient in most cases. For example, it is known that the density can be reasonably obtained with relatively low level methods (defining atomic basins in QTAIM). As a counter-example, consider $|\nabla \rho|/\rho$. It is a good detector of the shell structure in atoms, but its topology is sensitive to the (Gaussian) basis set used.~\cite{KohSavPre-91} In many cases, using a correlated wave function does not change the probabilities significantly. MPDs often show little sensitivity to the wave function used -- as long as the Pauli principle is the underlying cause of the property studied. Nevertheless, there are cases (of near-degeneracy) where correlation effects are felt, and multi-determinant wave functions that describe the situation correctly should preferably be used. For example, at dissociation, the \ch{H2} molecule yields at the Hartree-Fock level $P(n=1,\Omega)=1/2$, where $\Omega$ is the half-space defined by a plane perpendicular to the H-H axis, at the midpoint between the nuclei. Correlation effects can be seen also at the equilibrium distance. For example, let us consider the \ch{F2} molecule. We divide again the space between the two atoms by choosing a plane perpendicular to the molecular axis, at equal distance from the two nuclei.
For the correlated wave function, we obtain for $\Omega$ corresponding to the half-space $P(n=9)=0.61$, $P(n=8)=P(n=10)=0.19$, while from the Hartree-Fock wave function we find a higher importance of the ionic structures, $P(n=9)=0.47$, $P(n=8)=P(n=10)=0.25$. Details concerning the wave functions used below can be found in appendix~\ref{app:comput}. Some of the wave functions used are at the Hartree-Fock level, some can be considered quite accurate. We do not expect qualitative changes in our discussion upon further improvement of the wave function. \subsection{Spatial domains} \subsubsection{MPDs are not basins} When using $\rho$ or ELF, one has functions defined in 3D. The spatial regions are obtained by constructing basins.~\cite{Bad-90,SilSav-94} To obtain MPDs one constructs the spatial regions directly, by choosing to maximize the probability to have $n$ electrons in them. \subsubsection{Known spatial regions} There are limiting cases where MPDs are known. \begin{itemize} \item The probability to have all $N$ electrons in $\Omega$ is maximal when $\Omega$ is the whole space. We are sure that all electrons are in it, $P=1$. \item When the volume of $\Omega$ vanishes, we know that there are no electrons in it. In this case, $P(0,\Omega) \rightarrow 1$. \item Spatial regions can be equivalent by symmetry. If we have determined the MPD for one of the elements that are equivalent by symmetry, we can obtain all the others by performing symmetry operations. \item If we have determined the MPD for a given $n$, $\Omega_n$, the MPD for $N-n$ electrons is the remaining space; $P(n,\Omega_n)=P(N-n,\R^3\backslash\Omega_n)$. \end{itemize} \subsubsection{Shape optimization} Algorithms to deform $\Omega$ so as to maximize the probability exist (see, e.g.,~\onlinecite{CanKerLodSav-04, BraDalDapFre-20}). Such (shape-optimization) algorithms even allow for spatial regions that are not connected. One starts with a given spatial domain, $\Omega$, and computes $P(n,\Omega)$. $\Omega$ is slightly deformed to increase $P(n,\Omega)$, until the latter is maximized. Unfortunately, there are some drawbacks. \begin{itemize} \item The programs to compute the MPDs are not widely distributed. \item The algorithms are computationally demanding, in spite of the fact that $\Omega$ has to be considered only in a restricted region of space. \item For algorithms based on quantum Monte Carlo sampling, there are difficulties when the number of configurations is low at the separating surface of $\Omega$, introducing some uncertainty. \end{itemize} One may treat the last two problems by using smooth boundaries instead of sharp ones. The smoothing functions can depend on parameters that can be optimized directly. One should keep in mind that smoothing the borders can lower the probabilities.~\cite{SceSav-22} At first, this may seem counter-intuitive. To understand it, one can imagine smoothing the boundaries of the MPD for $n$ electrons, $\Omega_n$, as mixing in to some degree spatial regions for which $P(n,\Omega)$ is lower. \subsubsection{Partial optimization of the domain} Recall that -- when we are close to the maximizing domain, $\Omega_n$ -- the errors in $P(n,\Omega)$ are only of second order in the change between $\Omega$ and $\Omega_n$. Thus, instead of smoothing the boundaries, we can stop before reaching the full optimization of $\Omega$.~\cite{SceSav-22} One way to do this is to define specific shapes, and optimize a reduced number of parameters. Let us give some examples of such incomplete optimization.
\begin{itemize} \item In a molecule, the atomic core is not identical to the spherical one obtained for the isolated atom. However, we can assume that it can be transferred from the atom. In all cases treated so far, the difference observed is at most in the second decimal of the probabilities. \item One can define points in space that are used as "centers" around which the MPDs are constructed as Voronoi cells. The positions of the centers can be varied, in order to maximize the probabilities. More flexibility may be gained by modifying the definition of distances, e.g., by introducing weights. \item Often, one can use a good guess for MPDs, e.g., ELF basins. \end{itemize} Let us take as an example the construction of domains in the \ch{H2O} molecule. We first determine the core domain, by maximizing the probability to have 2 electrons within a sphere around the O nucleus. For a radius of 0.36 bohr, we obtain the maximal probability, 0.73. We choose four points; two lie in the plane of the nuclei, two in the plane perpendicular to it. For example, we may start with a tetrahedron having two vertices on the H nuclei, and the O nucleus in the center. The centers define Voronoi cells. We further exclude the core domain, and compute the probability of having 2 electrons in one of the regions containing a H atom. We then change the positions of the points, respecting the symmetry of the molecule, to maximize the probability. The maximum is reached with $P=0.45$. We can repeat this procedure to maximize the probability of having two electrons in the region that corresponds to one of the lone pairs (one of the centers in the plane perpendicular to the HOH plane). The maximum is reached with $P=0.41$. The probabilities obtained this way do not differ by more than 0.01 from those obtained after full optimization. The domains obtained are shown in Fig.~\ref{fig:voronoi}. \begin{figure} \centering \includegraphics{voronoi-h2o.pdf} \caption{Spatial domains that maximize the probability for a domain associated with the OH bond electron pair (orange), or the O lone pair (brown), obtained by excluding a region associated with the O core, and by defining Voronoi cells with moving centers. The positions of the nuclei are indicated by small spheres (red: O, white: one of the H atoms, the other being hidden behind the orange surface).} \label{fig:voronoi} \end{figure} It is also interesting to consider several spatial regions, e.g., for analyzing statistical correlations between them~\cite{GalCarLodCanSav-05, PenFraBla-a-07, PenFra-19} or electrons distributed over disconnected spatial regions.~\cite{Sav-21} A problem that appears when considering joint probabilities is that they are lower than the individual ones. Recall that for independent events, the probability of the joint event is the product of the probabilities; as the probabilities are between 0 and 1, their product is lower than each of the individual probabilities. In this context, it is more reasonable to consider conditional probabilities, e.g., the probability to have $n_A$ electrons in $\Omega_A$ given that there are $n_B$ electrons in $\Omega_B$~\cite{Dau-53, GalCarLodCanSav-05, PenFra-19,SceSav-22}, \begin{equation} P_{A|B}= \frac{P_{A \cap B}}{P_B}. \end{equation} \subsubsection{Describing the spatial domains} Of course, the spatial extension of an MPD can be shown graphically, and this is consistent with the pictorial attitude existing in chemistry.
Some numbers can also be used to describe them when the parametrization of the domain is simple. For example, the core region of the Ne atom can be represented by a sphere of radius $r$ maximizing the probability of having two electrons in it, and the valence region is its complement. The probability is maximal for $r \approx 0.27$~bohr. Similarly, for diatomic molecules a single number is sufficient to describe the position of the plane perpendicular to the molecular axis which defines the boundary between the two atomic domains. \subsubsection{Multiplicity of MPDs} \label{multiplicity} For a given molecule, the MPDs are defined by indicating a number $n$ of electrons in them, and are obtained by optimization of the spatial domain. The latter process can lead to several solutions (several local maxima may exist). In this respect, the MPDs behave in a way similar to localized orbitals: equivalent solutions exist. There are trivial cases. For example, if we search in the H$_2$O molecule for an MPD for $n=2$ electrons, we can find a domain corresponding to the core, or to any of the OH bonds, or to any of the two lone pairs. Note that some of these solutions have chemically different significance, e.g., core \textit{vs} lone pairs. Other solutions may be equivalent by symmetry, e.g., the MPDs corresponding to the two OH bonds. Notice the analogy to localized orbitals. These also can lower the symmetry, and equivalent solutions exist. A simple example is that of the $\pi$ orbitals in benzene, where one has three localized orbitals for a six-fold axis. Sometimes one needs a moment of reflection to discover this effect. For example, in trans-HSiSiH one has three electron pairs connected to the two Si atoms. However, the system is invariant under the inversion operation (${\bf r} \rightarrow -{\bf r}$), an invariance that is lost once a set of three MPDs is found; the symmetry operation produces another, equivalent set.~\cite{SceCafSav-07} In general, we expect a displacement of the MPD to lower the probability associated with it. For example, transforming the MPD into another $\Omega$ by inversion through the position of the C nucleus lowers the probability from 0.55 to 0.36. However, an infinite set of equivalent solutions can be produced by symmetry. This also presents an analogy to localized molecular orbitals.~\cite{Eng-71,EngSalRue-71} For example, in the HCCH molecule, we can find a banana bond between the two C atoms, but the cylindrical symmetry dictates that any rotation around the molecular axis yields an equivalent solution. The same type of situation appears in atoms, e.g., the Ne atom. When searching for a pair of electrons we will find a domain avoiding the core region and resembling an $sp^3$ hybrid, pointing in an arbitrary direction. However, any rotation centered on the Ne nucleus produces an equivalent MPD. In the uniform electron gas, any translation produces equivalent MPDs. One expects that in metals deformations of the MPD have little effect on the probability. A study of the Kronig-Penney model gives a hint in this direction.~\cite{Sav-21} Methods like ELF give in such cases solutions that average out the effect of the different solutions: one gets a single bonding region for the CC bond in acetylene, a valence shell for the Ne atom, a constant value throughout the uniform electron gas. However, in the case of HSiSiH discussed above, this "averaging out" produces four basins.
This is disturbing, because there are only three bonds.~\cite{SceCafSav-07} In the case of MPDs, one can consider larger groups, e.g., $n=6$ electrons for the triple CC bond in HCCH, or $n=8$ electrons for the valence shell of Ne. \section{What the numbers tell us: comforting and disturbing results} \subsection{Comforting results} \subsubsection{Conceptual advantage} The MPDs are simple to explain, and applicable to simple or complicated wave functions. As the number of electrons in a spatial domain is user-defined, one is not compelled to study a given object. For example, one can use MPDs to find electron pairs, bonding regions in diamond, as with ELF~\cite{CauSav-11}, or to find ions in crystals, as with QTAIM~\cite{CauSav-ion-11}. In many instances MPDs give results that are consistent with those obtained with other methods, such as QTAIM or ELF. This can be seen, for example, when looking at crystals with the rock salt structure (when recognizing ions)~\cite{CauSav-ion-11}, or at crystals with the diamond structure (for covalent bonds)~\cite{CauSav-11}. This is very encouraging, taking into account the wide success of QTAIM and ELF. \subsubsection{Producing reasonable numbers} We are used to looking at populations (as defined in Eqs.~\ref{eq:pop-mean} and \ref{eq:pop-p}). While numbers obtained with different approaches can differ slightly, there is some consensus about what we should expect from some populations: that of "standard" bonds should be close to 2, that of atomic shells, etc. The population alone cannot be used to define a spatial region; one cannot define atomic shells by requesting that the number of electrons integrate to a specified value. For example, we can find in Be an infinity of spherical shells, between $r_{min} \in(0,r_{core})$ and $r_{max} \in (r_{core}, \infty)$, where $r_{core} \approx 1$~bohr, just by requesting that the integral of the density between $r_{min}$ and $r_{max}$ equals two. One may ask whether all methods give equivalent results. For example, it would not be worth computing the MPD if ELF and the Laplacian of the density gave the same result. Very often, the MPDs are close to other spatial regions, e.g., the ELF basins, when searching for regions characterizing electron pairs. However, it is known that the shell structure of atoms is not always correctly reproduced by the Laplacian of the density; for example, the last shell of the Zn atom is merged with the penultimate shell.~\cite{Bad-90} ELF separates them, but the population of the valence shell is 2.2 instead of 2.~\cite{KohSav-96} With MPDs, the difference between the expected population of the valence shell and that obtained is in the second decimal, 1.96 with the Hartree-Fock wave function. (This is also roughly the accuracy we expect for the data discussed in this article.) The last shell is also separated in atoms like Nb or Mo, yielding in these cases a population of 1.0.~\cite{Sav-02} In molecules like \ch{CH4} or \ch{H2O}, MPDs define regions of space that are conventionally attributed to the bonds or the lone pairs. The populations are very close to the expected number, 2. Even when the electrons are "crowded", as in the \ch{N2} molecule, the population is not too far from 2 (it is 2.2). There can also be qualitative differences between, say, ELF and MPD results, in particular when several alternative classical bonding situations exist~\cite{SceCafSav-07}.
Differences may appear because ELF produces spatial domains that respect symmetry, e.g., the spherical shells in atoms, while there may be several ways in which MPDs can be produced. In this respect, MPDs resemble localized orbitals. \subsubsection{Different structures, similar MPDs} \begin{figure}[htb] \begin{center} \includegraphics[width=0.75\textwidth]{c2h2pse.jpg} \\ \includegraphics[width=0.75\textwidth]{si2h2but.jpg} \end{center} \caption{MPDs in acetylene, \ch{C2H2}, top, and \ch{Si2H2}, bottom. The MPDs containing the H nuclei are silver-colored; one of the triple banana bond MPDs in \ch{C2H2} is shown in purple; the lone pair of Si is shown in dark blue, while the bent Si-Si bond MPD is shown in red.} \label{fig:c2h2-si2h2} \end{figure} Fig.~\ref{fig:c2h2-si2h2} shows MPDs for \ch{C2H2} and \ch{Si2H2}. Some of the MPDs are not shown for clarity; they can easily be obtained by symmetry operations. We know that the most stable structure of \ch{C2H2} is linear, while for \ch{Si2H2} we have a "butterfly" structure~\cite{LeiKraFre-05}. With MPDs, however, we get a different perspective. In both cases, we find three electron pairs between the heavy atoms, and one electron pair pointing outward from each heavy atom. In \ch{C2H2}, the first three correspond to the three "banana" bonds, and the last to the CH bond. In \ch{Si2H2}, the first three correspond to the two three-center SiHSi bonds and one SiSi bond, while the latter corresponds to a lone pair. It is as if the electron pairs prefer a tetrahedral arrangement, and the nuclei arrange themselves to fit into it. The probabilities are 0.4 for the bent bond regions (CC and SiSi), and 0.5 for the regions corresponding to the other electron pairs. Apparently there is an extra localization provided by the protons. \clearpage \subsubsection{Selecting the relevant region} \begin{figure}[htb] \begin{center} \includegraphics[width=0.75\textwidth]{p-h4.pdf} \end{center} \caption{Probability to have 2 electrons in the lower (or upper) half-space (blue), or in the left (or right) half-space (red), as a function of the deformation of the rectangular arrangement of the nuclei; the abscissa gives the ratio of the sides of the rectangle.} \label{fig:p-h4} \end{figure} During a chemical reaction MPDs evolve. Let us consider, for example, the potential energy surface of the following reaction \begin{equation} \begin{array}{ccc} \text{H} & - & \text{H} \\ \vdots & & \vdots \\ \vdots & & \vdots \\ \text{H} & - & \text{H} \end{array} \; \longrightarrow \; \begin{array}{ccc} \text{H} & - & \text{H} \\ | & & | \\ \text{H} & - & \text{H} \end{array} \; \longrightarrow \; \begin{array}{ccc} \text{H} & \dots\;\; \dots & \text{H} \\ | & & | \\ \text{H} & \dots\;\; \dots& \text{H} \end{array} \end{equation} For a rectangular arrangement of the nuclei, we divide the space symmetrically into a region containing the upper two H nuclei, $\Omega_{\text{up}}$, and one containing the lower two H nuclei, $\Omega_{\text{down}}$. We can also divide it into a left and a right region ($\Omega_{\text{left}}, \Omega_{\text{right}}$). The probability to find two electrons is higher for the division in which the H nuclei within a region are closer to each other than for the other division.
For the first structure indicated above on the left, we have $P(n=2,\Omega_{\text{up}}) = P(n=2,\Omega_{\text{down}}) > P(n=2,\Omega_{\text{left}}) = P(n=2,\Omega_{\text{right}})$, while for the structure shown on the right, $P(n=2,\Omega_{\text{up}}) = P(n=2,\Omega_{\text{down}}) < P(n=2,\Omega_{\text{left}}) = P(n=2,\Omega_{\text{right}})$. The transition between the two "best" choices occurs at the square arrangement. The evolution of the probabilities is shown in Figure~\ref{fig:p-h4}. It shows that passing through the square region can be associated with a change of the chemical description. \subsubsection{The effect of the Pauli principle} $P(n,\Omega)$ can be significantly larger than that obtained using a binomial distribution, \begin{equation} P_{ind}(n,p)= \frac{N!}{n! (N-n)!} p^n (1-p)^{N-n} \end{equation} This distribution is obtained when considering that each of the $N$ electrons would have the probability $p$ to be in $\Omega$. For some choices of $n$ and $\Omega$, \[ P(n,\Omega_n) > \max_p P_{ind}(n,p) \] i.e., the wave function produces a larger probability than the largest that could be produced by statistically independent particles. The main reason behind this is the Pauli principle. \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\textwidth]{p-be.pdf} \end{center} \caption{Probability distribution for the MPD corresponding to the valence shell of Be (full circles) compared to that of independent particles (empty circles).} \label{fig:p-be} \end{figure} Let us consider as an example the Be atom. We separate it into two regions, an inner sphere, corresponding to the core, and the rest of the space, corresponding to the valence. All interpretative models give the sphere a radius of $\approx$1~bohr. Fig.~\ref{fig:p-be} shows the probability distribution obtained when making a core/valence separation with the MPDs. It is compared with what would be obtained for independent particles, namely a binomial distribution (with $p=1/2$, producing the highest possible value for 2 electrons in each of the regions). One clearly sees a higher probability of having 2 electrons in each of the shells when using the Hartree-Fock wave function, which satisfies the Pauli principle. \subsection{Disturbing results} In addition to the potential of providing "chemical" answers from quantum mechanical calculations, the more detailed character of the probability distributions also raises some questions. \subsubsection{Low probabilities} When we define a spatial domain, $\Omega \subset \R^3$, quantum mechanics tells us that electrons can cross its boundary. Even when the average number of electrons in the region corresponds to our expectation, we know that electrons can enter and leave the domain. In most cases, there is a non-negligible probability to find a number of electrons different from the chemically expected one. Let us recall the procedure used. When constructing an MPD we consider $P(n,\Omega)$, where $n$ corresponds to chemical intuition, and find $\Omega_n$, the region that maximizes $P(n,\Omega)$. For example, choosing $n=2$ we can find in the methane molecule a region for a CH bond, which gives $P(n=2,\Omega_2)$. Although this is the best (highest) probability we can get, we find numerically, even for good wave functions, that it is only slightly above 1/2. This means that quantum mechanical fluctuations induce almost the same probability to have a smaller or a larger number of electrons in this spatial domain.
The dominant contribution comes from having $n\pm1$ electrons in $\Omega_n$. In the water molecule, the probability to have two electrons in the lone pair (or the OH bond) is even smaller than 1/2 (the probability to have a number of electrons $<2$, or $>2$, is larger than that of having exactly 2 electrons in it). Nevertheless, the population of the MPD is close to 2, because the probabilities to have $n<2$ or $n>2$ electrons are nearly equal. Moreover, the Slater determinant can be built from orbitals having nodes in the same spatial domain. When we cut out a spatial region, say, a spherical shell in an atom, it is possible for different orbitals to coexist in it. For example, the $3d$ orbitals can penetrate the region mainly occupied by the $4s$ orbitals, and this can explain the increased fluctuations between the M and N shells in Zn. Let us look more closely at the numbers. \subsubsection{Strong fluctuations} Let us construct the MPD of an atomic valence shell, i.e., find $\Omega$ extending from some radius to infinity, such that $P(n=N_{val},\Omega)$ is maximal, $N_{val}$ being the expected number of electrons in the valence shell. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth]{p-ne.pdf} \includegraphics[width=0.45\textwidth]{p-ar.pdf} \\ \includegraphics[width=0.45\textwidth]{p-kr.pdf} \includegraphics[width=0.45\textwidth]{p-xe.pdf} \end{center} \caption{Probability distributions for the MPDs corresponding to the valence shells of the noble gas atoms (full circles). For comparison, the probability distributions for independent particles, obtained for 8 electrons in the valence shell being able to exchange electrons with the next deeper shell (empty circles).} \label{fig:p-noble} \end{figure} Let us consider the noble gas atoms, Fig.~\ref{fig:p-noble}. For Ne and Ar, only the probability of having $N_{val}=8$ electrons in the valence shell is clearly higher than that for independent particles. This changes for Kr and Xe: we note that finding $N_{val} \pm 1$ electrons in it is more probable than what one expects for independent particles (exchanging with the deeper shell). We can attribute the increase in $P(n=8\pm1,\Omega_8)$ to the penetration of the $d$ orbitals of the deeper shell into the valence shell. The Pauli principle is satisfied not by spatially separating the electrons into regions, but within the same region of space. \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\textwidth]{ppp.pdf} \end{center} \caption{Probability to have a number of electrons equal to the expected one (red), larger by one (blue), or smaller by one (black) in the MPD corresponding to the valence shell of atoms with nuclear charge $Z \le 54$; H, He, and Pd ($Z=1, 2$, or $46$) are not shown, as the results correspond to trivial expectations: the MPDs correspond to the whole space (for $1s^n$), or vanish ($5s^0$).} \label{fig:ppp} \end{figure} Let us analyze the probability of having $N_{val} \pm 1$ electrons across the periodic table (Li-Xe) for Hartree-Fock wave functions~\cite{BunBarBun-93}, cf. Fig.~\ref{fig:ppp}. One notices a certain symmetry of the distribution: the probabilities of having $N_{val}-1$ or $N_{val}+1$ electrons are, in general, quite close. If one considers the probability to have not only $N_{val}-1$ or $N_{val}+1$ electrons, but $n < N_{val}$ or $n > N_{val}$ electrons, one obtains in the worst case studied (Xe) almost equal numbers for the three probabilities: $P(N > N_{val}) \approx P(N < N_{val}) \approx P(N_{val})$.
As the MPD is the spatial region yielding the highest possible value for $P(N_{val})$, this casts a shadow of doubt on our image of spatially separated valence shells. In analogy with valence bond theory, let us consider the atom as formed by a core, $C$, and a valence shell, $V$. In this spirit, one could write \[ C^+V^- \leftrightarrow C V \leftrightarrow C^-V^+ \leftrightarrow \dots \] to indicate that electrons can leave and enter a specific region. The charges indicated are produced by the separation into shells. In contrast to valence bond methods, this does not invoke changing orbital occupancies. (Recall that our results are obtained from a Hartree-Fock wave function with a prescribed orbital occupancy.) In analogy to valence bond methods, it is possible to indicate weights. Here these are given by the probabilities; e.g., that of $C^+V^-$ is $P(N_{val}+1,\Omega_{N_{val}})$. For example, we see in Fig.~\ref{fig:p-noble} that the probability to have 9 electrons in the valence shell of Kr is around 1/4, which we can interpret as a "weight" of $C^+ V^-$. Such strong fluctuations do not occur only in atoms. For example, the probabilities obtained for the MPDs corresponding to the six electrons of the triple bond in HCCH or N$_2$ are quite comparable to those obtained for the valence shell of Xe. In HCCH the probability to have six electrons between the two C atoms is around 1/3, while that to have five (or seven) electrons in the same region is around 1/4.~\cite{SceSav-22} In the \ch{N2} molecule, the probability to find 2 electrons in the lone pair is around 0.45, and in a banana bond only 0.34. The probability to have only one electron in the banana bond is slightly lower (0.32), and that of having three electrons in it is 0.17. \subsubsection{Choosing the relevant object of study} \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\textwidth]{kr-choices.pdf} \end{center} \caption{Probabilities to have $n=34,35,36$, or $37$ electrons in the domain associated to \ch{Kr^{++}} (blue rhombus), \ch{Kr+} (yellow triangle), Kr (green rhombus), or \ch{Kr-} (orange square); the points are connected by lines to guide the eye.} \label{fig:kr-choices} \end{figure} Sometimes chemical intuition guides us well in guessing a good number of electrons. For example, when we are interested in describing an atom, we know its nuclear charge, and it seems natural to choose the same number of electrons. However, would it not sometimes be better to choose an ion? Let us consider the \ch{KrF2} molecule. For the Kr atom, we should choose $n=36$, while for \ch{Kr-}, \ch{Kr+}, and \ch{Kr^{++}} we should choose $n=37$, $35$, or $34$, respectively. We take two planes perpendicular to the molecular axis, at equal distance $z$ from the Kr nucleus.~\footnote{For a discussion of the choice of the domains, see Ref.~\cite{SceSav-22}.} Figure~\ref{fig:kr-choices} shows the probabilities to have $n=34,35,36$, or 37 electrons between the two planes. We see that there is no clear-cut preference for choosing Kr as a reference: the best (highest) probability we can get is not better than the one obtained for the separation into \ch{Kr+} or \ch{Kr^{++}}. Once we have made the choice, the answers are different. If we choose the Kr domain, we obtain a probability of 0.36 to describe the region as a Kr atom ($n=36$), and 0.22 as a \ch{Kr+} ion ($n=35$). If we choose the \ch{Kr+} domain, we obtain a probability of 0.40 to describe the region as a \ch{Kr+} ion, and 0.28 as a Kr atom.
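For a single-determinant wave function, the machinery behind probabilities of the kind quoted in this section is compact: the number of electrons in $\Omega$ is distributed as a sum of independent Bernoulli variables whose parameters are the eigenvalues of the overlap matrix $S_{ij}(\Omega)$ introduced above. The sketch below (Python; added here as an illustration, with a toy diagonal overlap matrix rather than data from this work) computes $P(n,\Omega)$ in this way and compares it with the independent-particle binomial reference:

\begin{verbatim}
import numpy as np
from math import comb

def count_distribution(S):
    """P(n, Omega), n = 0..N, for a single Slater determinant of N
    orthonormal (spin-)orbitals, from S_ij = <phi_i|phi_j>_Omega.
    P is Poisson-binomial in the eigenvalues of S."""
    lam = np.linalg.eigvalsh(S)        # eigenvalues, all in [0, 1]
    poly = np.array([1.0])             # coefficients of prod_k (1-l_k + l_k x)
    for l in lam:
        poly = np.convolve(poly, [1.0 - l, l])
    return poly                        # poly[n] = P(n, Omega)

def binomial(n, N, p):
    """Independent-particle reference P_ind(n, p)."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

# Toy core/valence split: two orbitals mostly inside Omega, two mostly outside.
S = np.diag([0.95, 0.95, 0.05, 0.05])
P = count_distribution(S)
print(P[2])                  # ~0.82: strongly peaked at n = 2
print(binomial(2, 4, 0.5))   # 0.375: the best independent-particle value
\end{verbatim}

The enhancement of $P(2)$ over the best binomial value parallels the core/valence comparison for Be in Fig.~\ref{fig:p-be}.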
\section{Questions of attitude} \subsubsection{Three practical questions} \begin{itemize} \item Are MPDs ready for "mass production"? \\ To obtain MPDs there is a need for new algorithms and programs. Progress is being made, but slowly. Furthermore, the existing programs take some time for the optimization of the spatial domain, and this is opposed by all those who think that it is worth running a long quantum mechanical calculation, but not spending time on producing an interpretation from it. Evidently, the present authors do not share this opinion. \item How much input is needed from the user? \\ Some methods (QTAIM, ELF) just let the program work (maybe with a little help). With MPDs, the users have to specify the number of particles they are interested in, and an initial guess of the region where the MPD is expected. \item When is an interpretative method that we, theoreticians, propose successful? \\ When experimentalists use it. With MPDs we are not yet that far. \end{itemize} \subsubsection{Do we need MPDs?} We could imagine that our computers could give, e.g., by machine learning, all the answers to the questions we would like to ask. Would it be sufficient? One would like the interpretative methods to give tools that let us think independently of the computer. The next question is whether we should accept the computer helping us to think about chemistry. Maybe a common answer is that given by Prof. C. Pisani (University of Torino) when he criticized ELF: ``With MO theory, you can help yourself using the back of an envelope''. Here is a philosophical support for this attitude of independence from external support. \begin{quote} Socrates. At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth [Toth]; \dots \, he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus [Amun] was the king of the whole country of Egypt \dots. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. \dots But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: \dots this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. \\ Plato, \emph{Phaedrus}\footnote{Plato, Phaedrus 274b, translated by B. Jowett, \url{http://classics.mit.edu/Plato/phaedrus.html}} \end{quote} The present authors are full of admiration for those who are able to use only the back of the envelope. However, \begin{itemize} \item experience accumulated using computers may help develop such methods, \item nowadays, we live with Wikipedia in our pocket, and it is a good starting point for our thinking; the computers can give us ideas we can think about.
\end{itemize} \section{Conclusion} The present paper considers the probabilities to have a chosen number of electrons in a spatial domain. If these domains are optimized in the sense of maximizing the probability, they can be associated with classical chemical concepts. For example, one would consider two electrons for defining the region of a lone pair, or that of a single bond. As the probabilities \begin{enumerate}[label=\roman*] \item are not needed to high accuracy, and \item have second order errors when the departure from the optimal domain is of first order, \end{enumerate} high accuracy in the optimization is not needed. Sometimes the chemical question is not well posed. For example, should we define an atomic region, and look at the probability of having different numbers of electrons in it, or should we start by first defining an ionic region? The results obtained are not the same. Furthermore, the highest probability to have a chemically significant electron number in a given spatial domain is often not far from that of having a different number of electrons in the same region; in some cases the latter is even higher. For example, the probability of having 2 electrons in the core and the rest in the valence decreases, for the atoms from Li to Ne, from 0.9 to 0.7. However, one is used to considering a statistical event relevant if its probability is higher than 0.95 ($\approx 2\sigma$ for a normal distribution). Such values were never observed in the systems presented here. Does this mean that we should give up the chemical concepts associated with a given number of electrons in a spatial region? The main argument for not doing so is the success of the chemical concepts. Did we not look at the right quantities? After all, they seem to be recovered in an average sense: because the distributions of probabilities are often symmetric around the maximum, mean values, {\it i.e.}, populations, are most often used in discussions. However, we should not forget that the quantum nature of electrons produces more information, and it may be worth looking into its implications. \section*{Acknowledgement} The authors are grateful to Pascal Pernot (Université Paris-Saclay) for stimulating discussions, and to the editor of the present volume, Paul Popelier (University of Manchester), for improving our manuscript.
\section{Methods} \noindent $\textbf{Crystal growth and magnetic characterizations.}$ $\rm{(MnBi_2Te_4)(Bi_2Te_3)_{\emph{n}}(\emph{n} = 1, 2,...)}$ single crystals were grown by the flux method \cite{N33}. Mn powder, Bi lumps and Te lumps were weighed in the ratio Mn: Bi: Te = 1: 8: 13 (MnTe: $\rm{Bi_2Te_3}$ = 1: 4). The mixture was loaded into a corundum crucible and sealed in a quartz tube. The tube was then placed in a furnace and heated to 1100 °C for 20 h to allow sufficient homogenization. After a rapid cooling to 600 °C at 5 °C/h, the mixture was cooled slowly to 585 °C (581 °C) at 0.5 °C/h for $\rm{MnBi_4Te_7}$ ($\rm{MnBi_6Te_{10}}$) and kept at this temperature for 2 days. Finally, the single crystals were obtained after centrifuging. The centimeter-scale plate-like $\rm{MnBi_4Te_7}$ and $\rm{MnBi_6Te_{10}}$ single crystals can be easily exfoliated. The magnetic properties of the $\rm{MnBi_4Te_7}$ and $\rm{MnBi_6Te_{10}}$ single crystals were measured with the vibrating sample magnetometer (VSM) option of a Quantum Design Physical Properties Measurement System (PPMS-9 T). The temperature-dependent magnetization measurements are described in detail in the Supplementary Materials. \bigskip \noindent $\textbf{Preparation of the ultra-thin samples.}$ The $\rm{MnBi_4Te_7}$ and $\rm{MnBi_6Te_{10}}$ flakes with different thicknesses were first mechanically exfoliated onto a polydimethylsiloxane (PDMS) substrate by the Scotch tape method. The exfoliated samples on the PDMS substrates were then dry-transferred onto 285 nm $\rm{SiO_2}$/Si substrates with evaporated gold films. Then, a layer of PMMA was spin-coated on the thin flakes for protection. \bigskip \noindent $\textbf{AFM characterization.}$ The thickness of the ultra-thin samples was verified by atomic force microscopy using the Oxford Cypher S system in tapping mode. According to the height line profiles, $\rm{MnBi_4Te_7}$ and $\rm{MnBi_6Te_{10}}$ were confirmed to possess alternating lattice structures of BT (1 nm) + MBT (1.4 nm) and BT (1 nm) + BT (1 nm) + MBT (1.4 nm), respectively. See more details in the Supplementary Materials. \bigskip \noindent $\textbf{RMCD measurements.}$ The RMCD measurements were performed in an Attocube closed-cycle cryostat (attoDRY2100) down to 1.6 K and up to 9 T in the out-of-plane direction. The linearly polarized light of a 633 nm HeNe laser was modulated between left and right circular polarization by a photoelastic modulator (PEM) and focused on the sample through a high numerical aperture (0.82) objective. The reflected light was detected by a photomultiplier tube (THORLABS PMT1001/M). The magnetic reversal under an external magnetic field was detected through the RMCD signal, determined by the ratio of the a.c. component at the PEM frequency of 50.052 kHz to the a.c. component at the chopper frequency of 779 Hz (processed by a two-channel lock-in amplifier, Zurich HF2LI). The errors in the ratios of the FM and AFM components are determined by the instability of the data acquired during the RMCD measurements. \bigskip \noindent $\textbf{STEM characterization.}$ Samples for cross-sectional investigations were prepared by standard lift-out procedures using an FEI Helios NanoLab G3 CX focused ion beam system. To minimize sidewall damage and make the samples sufficiently thin to be electron-transparent, final milling was carried out at a voltage of 5 kV, with a fine milling at 2 kV. Aberration-corrected STEM imaging was performed using a Nion HERMES-100 operated at an acceleration voltage of 60 kV with a probe-forming semi-angle of 32 mrad.
HAADF images were acquired using an annular detector with a collection semi-angle of 75-210 mrad. EELS measurements were performed using a collection semi-angle of 75 mrad, an energy dispersion of 0.3 eV per channel, and a probe current of $\sim$20 pA. The Mn-$L$ (640 eV) and Te-$M$ (572 eV) absorption edges were integrated for elemental mapping after background subtraction. The original spectrum images were processed to reduce random noise using a principal component analysis (PCA) tool. HAADF image simulations were computed using the STEM\_CELL software simulation package, matching the microscope experimental settings described above and using a supercell with a thickness of $\sim$20 nm. \section{\label{sec:level1}DATA AVAILABILITY} The data that support the findings of this study will be made available in an open-access repository with a doi link upon acceptance for publication. \section{\label{sec:level3}ACKNOWLEDGEMENT} This work was supported by the National Key R\&D Program of China (Grants No. 2018YFA0306900, No. 2017YFA0206301, No. 2019YFA0308602, No. 2019YFA0308000, and No. 2018YFA0305800), the National Natural Science Foundation of China (Grants No. 62022089, No. 12074425, and No. 11874422), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB33000000), the Beijing Natural Science Foundation (Grants No. JQ21018 and No. BJJWZYJH01201914430039), and the Fundamental Research Funds for the Central Universities (E1E40209). \section{Author contributions} Y.Y., S.Y., and X.X. conceived the project, designed the experiments, analyzed the results and wrote the manuscript. S.Y. and X.X. conducted the RMCD measurements. H.W. and T.X. grew the $\rm{MnBi_4Te_7}$ and $\rm{MnBi_6Te_{10}}$ bulk crystals. M.X., S.T., and H.L. grew the $\rm{MnSb_2Te_4}$ bulk crystal. Y.H. prepared the few-layer samples. Y.P. and J.Y. performed the magnetic characterizations of the bulk crystals. R.G. performed the STEM characterization under the supervision of W.Z. Y.Z. and Z.L. helped with the analysis of the results. All authors discussed the results and contributed to the manuscript. \section{ADDITIONAL INFORMATION} Competing interests: The authors declare no competing financial interests.
\section{Introduction} Linear models are widely used and provide a versatile approach for analyzing correlated responses, such as longitudinal data, growth data, or repeated measurements. In such models, each subject~$i$, $i=1,\ldots,n$, is observed at $k_i$ occasions, and the vector of responses~$\by_i$ is assumed to arise from the model \[ \by_i=\bX_i\bbeta+\bu_i, \] where $\bX_i$ is the design matrix for the $i$th subject and $\bu_i$ is a vector whose covariance matrix can be used to model the correlation between the responses. One possibility is the linear mixed effects model, in which the random effects together with the measurement error yield a specific covariance structure depending on a vector $\btheta$ consisting of some unknown covariance parameters. Other covariance structures may arise, for example if the $\bu_i$ are the outcome of a time series. See, e.g.,~\cite{jennrich&schluchter1986} or~\cite{fitzmaurice-laird-ware2011} for several possible covariance structures. Maximum likelihood estimation of $\bbeta$ and $\btheta$ has been studied, e.g., in~\cite{hartley&rao1967,rao1972,laird&ware1982}; see also~\cite{fitzmaurice-laird-ware2011,demidenko2013}. To be resistant against outliers, robust methods have been investigated for linear mixed effects models, e.g., in~\cite{pinheiro-liu-wu2001,copt2006high,copt&heritier2007,heritier-cantoni-copt-victoriafeser2009,agostinelli2016composite,chervoneva2014}. This mostly concerns S-estimators, originally introduced in the multiple regression context by Rousseeuw and Yohai~\cite{rousseeuw-yohai1984} and extended to multivariate location and scatter in~\cite{davies1987,lopuhaa1989}, to multivariate linear regression in~\cite{vanaelst&willems2005}, and to linear mixed effects models in~\cite{copt2006high,heritier-cantoni-copt-victoriafeser2009,chervoneva2014}. A unified approach to S-estimation in balanced linear models with structured covariances can be found in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}. S-estimators are well-known smooth versions of the minimum volume ellipsoid estimator~\cite{rousseeuw1985} that are highly resistant against outliers and are asymptotically normal at $\sqrt{n}$-rate. Unfortunately, the choice of the tuning constant corresponding to an S-estimator forces a trade-off between robustness and efficiency. For this reason, S-estimators may also serve as initial estimators, from which the efficiency of the regression estimator is further improved in a second step. One possibility are the MM-estimators introduced by Yohai~\cite{yohai1987} in the multiple regression setup. Extensions to multivariate location and scatter can be found in Lopuha\"a~\cite{lopuhaa1992highly}, Tatsuoka and Tyler~\cite{tatsuoka&tyler2000}, and Salibi\'an-Barrera \emph{et al}~\cite{SalibianBarrera-VanAelst-Willems2006}. An extension to linear mixed effects models was discussed in Copt and Heritier~\cite{copt&heritier2007}, and to multivariate linear regression by Kudraszow and Maronna~\cite{kudraszow-maronna2011}. \begin{comment} The approaches for multivariate location and scatter in~\cite{tatsuoka&tyler2000} and~\cite{SalibianBarrera-VanAelst-Willems2006} are similar to the one in~\cite{yohai1987}. One first obtains initial high breakdown estimators of location and of the shape of the scatter matrix, and uses these to determine an auxiliary univariate M-estimator of scale. The estimators of location and shape are then updated, after which the latter is combined with the M-estimator of scale to determine the final estimator of scatter.
We will extend the approaches in~\cite{lopuhaa1992highly} and~\cite{copt&heritier2007} to balanced linear models with structured covariance matrices, and postpone MM-estimation for unbalanced models to a future manuscript. The balanced setup is already quite flexible and includes several specific multivariate statistical models. Of main interest are high breakdown estimators with high normal efficiency for linear mixed effects models, but our approach also covers high breakdown estimators in several other standard multivariate models, such as multiple regression, multivariate linear regression, and multivariate location and scatter. We provide sufficient conditions for the existence of the estimators and corresponding functionals, establish their asymptotic properties, such as consistency and asymptotic normality, and derive their robustness properties in terms of breakdown point and influence function. All results are obtained for a large class of identifiable covariance structures, and are established under very mild conditions on the distribution of the observations, which go far beyond models with elliptically contoured densities. In this way, some of our results are new and others are more general than existing ones in the literature. The paper is organized as follows. In Section~\ref{sec:structured covariance model}, we explain the model in detail and provide some examples of standard multivariate models that are included in our setup. In Section~\ref{subsec:def MM Yohai} we define the regression M-estimator and M-functional, and in Section~\ref{subsec:Existence MM Yohai} we give conditions under which they exist. In Section~\ref{subsec:continuity general} we establish continuity of the regression M-functional, which is then used to obtain consistency of the regression M-estimator. Section~\ref{subsec:BDP Yohai} deals with the breakdown point. Section~\ref{sec:score equations} provides the preparation for Sections~\ref{subsec:IF MM Yohai} and~\ref{sec:asymptotic normality}, in which we determine the influence function and establish asymptotic normality. \section{Balanced linear models with structured covariances} \label{sec:structured covariance model} We consider independent observations $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$, for which we assume the following model: \begin{equation} \label{def:model} \by_i = \bX_i\bbeta+\bu_i, \quad i=1,\ldots,n, \end{equation} where $\by_i\in\R^{k}$ contains repeated measurements for the $i$-th subject, $\bbeta\in\R^q$ is an unknown parameter vector, $\bX_i\in\R^{k\times q}$ is a known design matrix, and the $\mathbf{u}_i\in\R^{k}$ are unobservable independent mean zero random vectors with covariance matrix $\bV\in\text{PDS}(k)$, the class of positive definite symmetric $k\times k$ matrices. The model is balanced in the sense that all~$\mathbf{y}_i$ have the same dimension. Furthermore, we consider a structured covariance matrix, that is, the matrix $\bV=\bV(\btheta)$ is a known function of unknown covariance parameters combined in a vector $\btheta\in\R^l$. We first discuss some examples that are covered by this setup. An important case of interest is the (balanced) linear mixed effects model.
For a general formulation covered by our setup, see~\cite{lopuhaa-gares-ruizgazenARXIVE2022}. A specific example is the model \begin{equation} \label{def:linear mixed effects model Copt} \by_i=\bX_i\bbeta+\sum_{j=1}^r \bZ_j\gamma_{ij}+\beps_i, \quad i=1,\ldots,n, \end{equation} considered in~\cite{copt&heritier2007}. This model arises from $\bu_i=\sum_{j=1}^r \bZ_j\gamma_{ij}+\beps_i$, for $i=1,\ldots,n$, where the~$\bZ_j$'s are known $k\times g_j$ design matrices and the $\gamma_{ij}\in\R^{g_j}$ are independent mean zero random vectors with covariance matrix $\sigma_j^2\bI_{g_j}$, for $j=1,\ldots,r$, independent from $\beps_i$, which has mean zero and covariance matrix~$\sigma_0^2\bI_k$. In this case, $\bV(\btheta)=\sum_{j=1}^r\sigma_j^2\bZ_j\bZ_j^T+\sigma_0^2\bI_k$ and~$\btheta=(\sigma_0^2,\sigma_1^2,\ldots,\sigma_r^2)$. Another example of~\eqref{def:model} is the multivariate linear regression model \begin{equation} \label{def:multivariate linear regression model} \by_i=\bB^T\bx_i+\bu_i, \qquad i=1,\ldots,n, \end{equation} considered in~\cite{kudraszow-maronna2011}, where $\bB\in\R^{q\times k}$ is a matrix of unknown parameters, $\bx_i\in\R^q$ is known, and the~$\mathbf{u}_i$, for $i=1,\ldots,n$, are independent mean zero random vectors with covariance matrix~$\bV(\btheta)=\bC\in\text{PDS}(k)$. In this case, the vector of unknown covariance parameters is given by \begin{equation} \label{def:theta for unstructured} \btheta=\vch(\bC)=(c_{11},\ldots,c_{1k},c_{22},\ldots,c_{kk})^T\in\R^{\frac12k(k+1)}. \end{equation} The model can be obtained as a special case of~\eqref{def:model} by taking $\bX_i=\bx_i^T\otimes \bI_k$ and $\bbeta=\vc(\bB^T)$, where $\vc(\cdot)$ stacks the columns of a matrix on top of each other into a single vector. Clearly, the multiple linear regression model considered in~\cite{yohai1987} is the special case with $k=1$. Also the multivariate location-scale model, as considered in~\cite{lopuhaa1992highly} (see also~\cite{SalibianBarrera-VanAelst-Willems2006,tatsuoka&tyler2000}), can be obtained as a special case of~\eqref{def:model}, by taking $\bX_i=\bI_k$, the $k\times k$ identity matrix. In this case, $\bbeta\in\R^k$ is the unknown location parameter and the covariance matrix is $\bV(\btheta)=\bC\in\text{PDS}(k)$, with $\btheta$ as in~\eqref{def:theta for unstructured}. Model~\eqref{def:model} also includes examples for which $\bu_1,\ldots,\bu_n$ are generated by a time series. An example is the case where $\bu_i$ has a covariance matrix with elements \begin{equation} \label{def:autoregressive order 1 covariance} v_{st}=\sigma^2\rho^{|s-t|}, \quad s,t=1,\ldots,k. \end{equation} This arises when the $\bu_i$'s are generated by an autoregressive process of order one. The vector of unknown covariance parameters is $\btheta=(\sigma^2,\rho)\in(0,\infty)\times(-1,1)$. A general stationary process leads to \begin{equation} \label{def:stationary covariance} v_{st}=\theta_{|s-t|+1}, \quad s,t=1,\ldots,k, \end{equation} in which case $\btheta=(\theta_1,\ldots,\theta_k)^T\in\R^k$, where $\theta_{|s-t|+1}$ represents the autocovariance at lag~$|s-t|$. Throughout the manuscript we will assume that the parameter $\btheta$ is identifiable, in the sense that \begin{equation} \label{def:identifiable} \bV(\btheta_1)=\bV(\btheta_2) \quad\Rightarrow\quad \btheta_1=\btheta_2. \end{equation} This is true for all examples mentioned above.
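For concreteness, the following Python sketch (using only \texttt{numpy}; the function names are ours and not taken from any of the cited works) constructs $\bV(\btheta)$ for the random-effects structure in~\eqref{def:linear mixed effects model Copt} and for the AR(1) structure, and checks that the result belongs to $\text{PDS}(k)$.
\begin{verbatim}
import numpy as np

def V_mixed_effects(theta, Z_list):
    # theta = (sigma_0^2, sigma_1^2, ..., sigma_r^2);
    # V(theta) = sum_j sigma_j^2 Z_j Z_j^T + sigma_0^2 I_k
    k = Z_list[0].shape[0]
    V = theta[0] * np.eye(k)
    for sigma2_j, Z_j in zip(theta[1:], Z_list):
        V += sigma2_j * (Z_j @ Z_j.T)
    return V

def V_ar1(theta, k):
    # theta = (sigma^2, rho); entries v_st = sigma^2 * rho^{|s-t|}
    sigma2, rho = theta
    s, t = np.indices((k, k))
    return sigma2 * rho ** np.abs(s - t)

k = 4
Z_list = [np.ones((k, 1))]                    # a single random intercept
V1 = V_mixed_effects((0.5, 1.0), Z_list)
V2 = V_ar1((1.0, 0.6), k)
for V in (V1, V2):
    assert np.all(np.linalg.eigvalsh(V) > 0)  # V belongs to PDS(k)
\end{verbatim}
Identifiability~\eqref{def:identifiable} is a property of the map $\btheta\mapsto\bV(\btheta)$ itself and is not visible in such a sketch; it has to be checked separately for each structure.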
\section{Definitions} \label{subsec:def MM Yohai} The definition of MM-estimators involves either a single real-valued function $\rho$, or two real-valued functions $\rho_0$ and $\rho_1$. Moreover, depending on the specific statistical model of interest, the breakdown behavior of the corresponding MM-estimator may depend on whether the $\rho$-functions are bounded or unbounded. Since we intend to include both possibilities, we first discuss both. \subsection{Bounded and unbounded $\rho$-functions} \label{subsec:bounded rho} Yohai~\cite{yohai1987} defines the regression MM-estimator in multiple stages. By means of a function $\rho_0$, an M-estimator of scale is determined from residuals that are obtained from an initial high breakdown regression estimator. Given the M-estimator of scale, a final regression M-estimator is determined by means of a function~$\rho_1$. The conditions imposed on the two $\rho$-functions are similar to the following one. \begin{quote} \begin{itemize} \item[(R-BND)] $\rho$ is symmetric around zero with $\rho(0)=0$ and $\rho$ is continuous at zero. There exists a finite constant $c>0$, such that $\rho$ is strictly increasing on $[0,c]$ and constant on~$[c,\infty)$; put $a=\sup\rho=\rho(c)$. \end{itemize} \end{quote} In addition, the two $\rho$-functions are related. Suitable tuning of the bounded function $\rho_0$ ensures a high breakdown point of the scale M-estimator, and by imposing the relationship between $\rho_0$ and~$\rho_1$, the final regression M-estimator inherits the high breakdown point from the scale M-estimator. Typical choices for bounded $\rho_0$ and $\rho_1$ that satisfy (R-BND) can be obtained from Tukey's biweight, defined as \begin{equation}\label{def:biweight} \rho_{\mathrm{B}}(s;c) = \begin{cases} \displaystyle{\frac{s^2}2-\frac{s^4}{2c^2}+\frac{s^6}{6c^4}}, & |s|\leq c,\\ \\[-10pt] \dfrac{c^2}{6}, & |s|>c, \end{cases} \end{equation} by taking $\rho_0(d)=\rho_{\mathrm{B}}(d;c_0)$ and $\rho_1(d)=\rho_{\mathrm{B}}(d;c_1)$, where the cut-off constants are chosen such that $0<c_0<c_1<\infty$. The cut-off constant $c_0$ can be tuned such that the MM-estimator inherits the breakdown point of the initial regression estimator, whereas the constant $c_1$ can be tuned such that the MM-estimator has high efficiency at the model with Gaussian errors.
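As a small illustration, a Python implementation of the biweight~\eqref{def:biweight} and of the pair $(\rho_0,\rho_1)$; the cut-off values below are merely illustrative and not recommended tuning constants.
\begin{verbatim}
import numpy as np

def rho_biweight(s, c):
    # Tukey's biweight; clamping |s| at c automatically yields the
    # constant value a = c^2/6 on [c, infinity)
    s = np.minimum(np.abs(np.asarray(s, dtype=float)), c)
    return s**2 / 2 - s**4 / (2 * c**2) + s**6 / (6 * c**4)

c0, c1 = 2.0, 5.0                      # illustrative, with 0 < c0 < c1
rho0 = lambda d: rho_biweight(d, c0)   # tuned for a high breakdown point
rho1 = lambda d: rho_biweight(d, c1)   # tuned for high normal efficiency
\end{verbatim}
One may verify that, for any $0<c_0<c_1$, these choices satisfy $\rho_1(s)/\sup\rho_1\leq\rho_0(s)/\sup\rho_0$, a relation between the two $\rho$-functions that will be imposed in Section~\ref{subsec:BDP Yohai}.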
In~\cite{lopuhaa1992highly} this idea has been extended to multivariate location and scatter by determining a location M-estimator after first obtaining a high breakdown covariance estimator. After rescaling the observations with the initial covariance estimator, a location M-estimator is obtained by minimizing an objective function that involves only a single $\rho$-function, which satisfies the following condition. \begin{quote} \begin{itemize} \item[(R-UNB)] $\rho$ is symmetric, $\rho(0)=0$ and $\rho(s)\to\infty$, as $s\to\infty$. The functions $\rho'$ and $u(s)=\rho'(s)/s$ are continuous, $\rho'\geq 0$ on $[0,\infty)$, and there exists an $s_0$ such that $\rho'$ is nondecreasing on $(0,s_0)$ and nonincreasing on $(s_0,\infty)$. \end{itemize} \end{quote} In view of the results found by Huber~\cite{huber1984}, an unbounded $\rho$-function is used in~\cite{lopuhaa1992highly} to prevent the breakdown point of the location M-estimator from depending on the configuration of the sample, which is the case for bounded $\rho$-functions. With an unbounded $\rho$-function, the location M-estimator is shown to inherit the breakdown point of the initial covariance estimator. A typical choice of an unbounded $\rho$-function that satisfies~(R-UNB) is \begin{equation}\label{def:huber psi} \rho_{\mathrm{H}}(s;c) = \begin{cases} \dfrac{s^2}2, & |s|\leq c,\\ \\[-10pt] -\dfrac{c^2}2+c|s|, & |s|>c, \end{cases} \end{equation} whose derivative $\psi_\mathrm{H}=\rho_\mathrm{H}'$ is a bounded monotone function known as Huber's $\psi$-function. The constant $c$ can be tuned such that the location M-estimator has high efficiency at the multivariate normal distribution. Tatsuoka and Tyler~\cite{tatsuoka&tyler2000} and Salibi\'an-Barrera \emph{et al}~\cite{SalibianBarrera-VanAelst-Willems2006} propose a different version of location MM-estimators, also using bounded $\rho$-functions. Instead of using the entire covariance matrix as auxiliary statistic, they estimate the shape of the scatter matrix along with the location parameter and only use a univariate auxiliary estimator for the scale of the scatter matrix. In~\cite{SalibianBarrera-VanAelst-Willems2006} it is shown that the location and shape estimators in the second step inherit the breakdown point of the initial estimators used in the first step. Kudraszow and Maronna~\cite{kudraszow-maronna2011} use a similar version for multivariate linear regression and also establish that the regression and shape estimators in the second step inherit the breakdown point of the initial estimators used in the first step. Copt and Heritier~\cite{copt&heritier2007} treat regression MM-estimators in the context of linear mixed effects models. They allow both bounded and unbounded $\rho$-functions and briefly discuss the pros and cons, but do not explicitly derive the breakdown point. \subsection{The regression M-estimator and corresponding M-functional} Extending the approach in~\cite{lopuhaa1992highly} to the regression parameter $\bbeta$ in the current setup~\eqref{def:model} seems straightforward: first obtain a high breakdown structured covariance estimator, and then determine a regression M-estimator from the re-scaled observations. However, in order to make sure that the resulting M-estimator inherits the breakdown point from the initial covariance estimator, the use of an unbounded $\rho$-function, as in~\cite{lopuhaa1992highly}, does not seem to be suitable. The presence of the design matrices $\bX_i$ in the objective function to be minimized makes things more complex than for multivariate location. Alternatively, one could minimize a single objective function based on a bounded $\rho$-function. However, in view of the results in Huber~\cite{huber1984}, in this case it seems difficult to ensure that the resulting regression M-estimator inherits the breakdown point from the initial covariance estimator. We will show that an approach similar to~\cite{yohai1987}, using two bounded $\rho$-functions that are suitably related, turns out to be helpful. For the moment, we intend to include both bounded and unbounded $\rho$-functions in our approach. In order to do so, the estimator for $\bbeta$ is defined in two stages as follows. \begin{definition} \label{def:MM-estimator general} Let $\bV_{0,n}$ be a (high breakdown) positive definite symmetric covariance estimator. For a function $\rho_1:\R\to[0,\infty)$, define $\bbeta_{1,n}$ as the vector that minimizes \begin{equation} \label{eq:MM estimator general} R_n(\bbeta) = \frac{1}{n} \sum_{i=1}^{n} \rho_1\left( \sqrt{(\by_i-\bX_i\bbeta)^T\bV_{0,n}^{-1}(\by_i-\bX_i\bbeta)} \right). \end{equation} \end{definition} At this point, $\rho_1$ can be either bounded or unbounded.
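To make Definition~\ref{def:MM-estimator general} concrete, the following Python sketch minimizes $R_n(\bbeta)$ numerically. It is a naive illustration only: a general-purpose optimizer stands in for a dedicated iteratively reweighted least squares routine, and all names are ours. For bounded $\rho_1$ the objective is non-convex, so in practice one would start the optimizer at an initial high breakdown regression estimate.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mm_regression(y, X, V0, rho1, beta_init):
    # y: (n, k) responses; X: (n, k, q) stacked design matrices;
    # V0: (k, k) initial covariance estimate; rho1: vectorized
    # rho-function. Minimizes the objective R_n(beta) above.
    V0_inv = np.linalg.inv(V0)

    def R_n(beta):
        r = y - np.einsum('ikq,q->ik', X, beta)      # residuals y_i - X_i beta
        d2 = np.einsum('ik,kl,il->i', r, V0_inv, r)  # squared distances
        return np.mean(rho1(np.sqrt(d2)))

    return minimize(R_n, beta_init, method='Nelder-Mead').x
\end{verbatim}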
Later on, we will further specify under which conditions on $\rho_1$ several properties hold for~$\bbeta_{1,n}$. Note that one may choose any initial (high breakdown) covariance estimator, but in our setup we typically think of a structured covariance estimator~$\bV_{0,n}=\bV(\btheta_{0,n})$, where $\btheta_{0,n}$ is an initial estimator for the vector of covariance parameters. This means that $\bV_{0,n}$ is not necessarily affine equivariant, and similarly for~$\bbeta_{1,n}$. However, it is not difficult to see that $\bbeta_{1,n}$ is regression equivariant, i.e., \[ \bbeta_{1,n}(\{(\by_i+\bX_i\bb,\bX_i),i=1,\ldots,n\})=\bbeta_{1,n}(\{(\by_i,\bX_i),i=1,\ldots,n\})+\bb, \] for all $\bb\in\R^q$. The corresponding functional is defined similarly. \begin{definition} \label{def:MM-functional general} Let $\bV_0(P)$ be a positive definite symmetric covariance functional. For a function $\rho_1:\R\to[0,\infty)$, define $\bbeta_1(P)$ as the vector that minimizes \begin{equation} \label{eq:MM-functional general} R_{P}(\bbeta) = \int \rho_1\left( \sqrt{(\by-\bX\bbeta)^T\bV_0(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX). \end{equation} \end{definition} The functional $\bbeta_1(P)$ is regression equivariant in the sense that \[ \bbeta_1(P_{\by+\bX\bb,\bX})=\bbeta_1(P_{\by,\bX})+\bb, \] for all $\bb\in\R^q$, where $P_{\by,\bX}$ denotes the distribution of $(\by,\bX)$. Clearly, if one takes $P=\mathbb{P}_n$, the empirical measure of the sample $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$, then $\bbeta_1(\mathbb{P}_n)$ is equal to the estimator~$\bbeta_{1,n}$. As before, $\bV_0(P)$ can be any covariance functional, but in our setup we typically think of a structured covariance $\bV_0(P)=\bV(\btheta_0(P))$, where $\btheta_0(P)$ is an initial functional representing the vector of covariance parameters. An example of an estimator $\btheta_{0,n}$ and corresponding functional~$\btheta_0(P)$ that yield a high breakdown structured covariance estimator~$\bV(\btheta_{0,n})$ is the S-estimator $\btheta_{0,n}$ and its corresponding functional proposed in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}. Definitions~\ref{def:MM-estimator general} and~\ref{def:MM-functional general} coincide with the ones for the multivariate location M-estimator in~\cite{lopuhaa1992highly} when we choose $\bX_i=\bI_k$ and $\bV(\btheta)=\bC\in\text{PDS}(k)$, with $\btheta$ as in~\eqref{def:theta for unstructured}. For the multiple linear regression model, i.e., the case $k=1$ of~\eqref{def:multivariate linear regression model}, it follows that if $\bbeta_{1,n}$ exists, then it satisfies score equations~(2.6) and~(2.7) in~\cite{yohai1987}. Similarly, for the linear mixed effects model~\eqref{def:linear mixed effects model Copt}, it follows that if $\bbeta_{1,n}$ exists, then it satisfies a score equation similar to equation~(8) in~\cite{copt&heritier2007}. We should emphasize that score equations like (2.6) in~\cite{yohai1987} and (8) in~\cite{copt&heritier2007} are useful to obtain asymptotic properties, but they do not guarantee that $\bbeta_{1,n}$ inherits the breakdown point of the estimators used in the first step. Breakdown behavior is typically established from the minimization problem in Definition~\ref{def:MM-estimator general} itself. If this minimization problem has a solution $\bbeta_{1,n}$, and if this solution inherits the high breakdown point from $\bV_{0,n}$, then~$\bbeta_{1,n}$ will be a zero of the corresponding score equation with a high breakdown point. But just being a zero of the score equation does not ensure a high breakdown point.
Indeed, the breakdown point of the MM-estimators in the multiple linear regression model~\cite{yohai1987} and the multivariate location-scale model~\cite{lopuhaa1992highly} has been obtained from the respective minimization problems, and similarly for the MM-estimators in~\cite{SalibianBarrera-VanAelst-Willems2006} and~\cite{kudraszow-maronna2011}. For the MM-estimator in the linear mixed effects model~\cite{copt&heritier2007}, the robustness properties have not been investigated. In view of the fact that the use of bounded or unbounded $\rho$-functions may lead to different breakdown behavior, the breakdown point of MM-estimators for the linear mixed effects model considered in~\cite{copt&heritier2007} will be investigated in Section~\ref{subsec:BDP Yohai}. \section{Existence} \label{subsec:Existence MM Yohai} Consider the functional $\bbeta_1(P)$, as defined in Definition~\ref{def:MM-functional general}. We will establish existence of~$\bbeta_1(P)$, where we allow both bounded and unbounded $\rho_1$. Existence of the corresponding estimator $\bbeta_{1,n}$ will follow from this. Also of interest is the special case in which $P$ is such that $\by\mid\bX$ has an elliptically contoured density of the form \begin{equation} \label{eq:elliptical} f_{\bmu,\bSigma}(\by) = \text{det}(\bSigma)^{-1/2} h\left( (\by-\bmu)^T \bSigma^{-1} (\by-\bmu) \right), \end{equation} with $\bmu=\bX\bbeta\in\R^k$ and $\bSigma=\bV(\btheta)\in\text{PDS}(k)$, and $h:[0,\infty)\to[0,\infty)$. For the linear mixed effects model in~\cite{copt&heritier2007}, it is assumed that $\by\mid\bX$ has a multivariate normal distribution, which is a special case of~\eqref{eq:elliptical} with $h(t)=(2\pi)^{-k/2}\exp(-t/2)$. For bounded $\rho_1$, we want to rule out the pathological case where $P$ has all of its mass outside the ellipsoid centered around the origin with covariance structure $\bV_0(P)$ and radius~$c_1$. To this end we require the following condition on~$P$. \begin{quote} \begin{itemize} \item[(A)] Suppose that \[ R_P(\textbf{0}) = \int \rho_1\left( \sqrt{\by^T\bV_0(P)^{-1}\by} \right)\,\dd P(\by,\bX) < \sup\rho_1. \] \end{itemize} \end{quote} Clearly, if $\by\mid\bX\sim \mathcal{N}(\bX\bbeta,\bSigma)$ and $\rho_1$ satisfies (R-BND), this condition is trivially fulfilled. We then have the following theorem for bounded~$\rho_1$. \begin{theorem} \label{th:existence MM bounded rho} Let $\rho_1:\R\to[0,\infty)$ satisfy condition (R-BND), and suppose that $\bX$ has full rank with probability one. \begin{itemize} \item[(i)] If $P$ satisfies~(A), then there is at least one vector $\bbeta_1(P)$ that minimizes $R_P(\bbeta)$. \item[(ii)] When $P$ is such that $\by\mid\bX$ has an elliptically contoured density from~\eqref{eq:elliptical} with parameters $\bmu=\bX\bbeta$ and $\bSigma$, and if $\bV_0(P)=\bSigma$, then $R_P(\bb)\geq R_P(\bbeta)$, for all~$\bb\in\R^q$. When $h$ in~\eqref{eq:elliptical} and $-\rho_1$ have a common point of decrease, then $R_P(\bb)$ is uniquely minimized by $\bbeta_1(P)=\bbeta$. \end{itemize} \end{theorem} \begin{proof} (i) Let $0<\lambda_1<\infty$ be the largest eigenvalue of $\bV_0(P)$, and let $\lambda_k(\bX^T\bX)>0$ denote the smallest eigenvalue of $\bX^T\bX$. Let $\|\cdot\|$ denote the Euclidean norm.
Then we have that \begin{equation} \label{eq:lower bound d Yohai} \begin{split} \sqrt{(\by-\bX\bb)^T\bV_0(P)^{-1}(\by-\bX\bb)} &\geq \frac{\|\by-\bX\bb\|}{\sqrt{\lambda_1}} \geq \frac{\|\bX\bb\|-\|\by\|}{\sqrt{\lambda_1}}\\ &\geq \frac{1}{\sqrt{\lambda_1}} \left( \|\bb\|\sqrt{\lambda_k(\bX^T\bX)}-\|\by\| \right). \end{split} \end{equation} Then by dominated convergence and (R-BND), it follows that \begin{equation} \label{eq:limit RP} \lim_{\|\bb\|\to\infty} R_P(\bb) = \int \lim_{\|\bb\|\to\infty} \rho_1\left( \sqrt{(\by-\bX\bb)^T\bV_0(P)^{-1}(\by-\bX\bb)} \right) \,\dd P(\by,\bX) = \sup\rho_1. \end{equation} According to condition~(A), this means that there exists a constant $M>0$, such that \begin{equation} \label{eq:def M Yohai} R_P(\bb) > R_P(\mathbf{0}), \quad \text{for all }\|\bb\|>M. \end{equation} Therefore, for minimizing~$R_P(\bb)$ we may restrict ourselves to the set $K=\{\bb\in\R^q:\|\bb\|\leq M\}$. By dominated convergence and (R-BND), it also follows that $R_P(\bb)$ is continuous on the compact set~$K$, and therefore it must attain at least one minimum~$\bbeta_1(P)$. (ii) Write \[ R_P(\bb) = \E_\bX \left[ \E_{\by\mid\bX} \left[ \rho_1\left( \sqrt{(\by-\bX\bb)^T\bSigma^{-1}(\by-\bX\bb)} \right) \right] \right]. \] By the change of variables $\by=\bSigma^{1/2}\bz+\bmu$, the inner conditional expectation can be written as \[ \int \rho_1\left( \|\bSigma^{-1/2}(\by-\bX\bb)\| \right) f_{\bmu,\bSigma}(\by)\,\dd\by = \int \rho_1\left( \|\bz-\bSigma^{-1/2}\bX(\bb-\bbeta)\| \right) h(\bz^T\bz)\,\dd\bz. \] Next, we apply Lemma~4 from Davies~\cite{davies1987} to the function $\xi(d)=1-\rho_1(\sqrt{d})/a_1$, with $a_1=\sup\rho_1$, taking $g=h$ and $\Lambda=\bI_k$. This yields, for all $\bX$, \[ \int \rho_1\left( \|\bz-\bSigma^{-1/2}\bX(\bb-\bbeta)\| \right) h(\bz^T\bz)\,\dd\bz \geq \int \rho_1\left( \|\bz\| \right) h(\bz^T\bz)\,\dd\bz, \] and when $-\rho_1$ and $h$ have a common point of decrease, the inequality is strict unless $\bSigma^{-1/2}\bX(\bb-\bbeta)=\mathbf{0}$, i.e., unless $\bb=\bbeta$, since $\bX$ has full rank with probability one. Finally, with the same change of variables $\bz=\bSigma^{-1/2}(\by-\bmu)$, the right hand side can be written as \[ \int \rho_1\left( \|\bz\| \right) h(\bz^T\bz)\,\dd\bz = \int \rho_1\left( \|\bSigma^{-1/2}(\by-\bX\bbeta)\| \right) f_{\bmu,\bSigma}(\by)\,\dd\by. \] After taking expectations $\E_\bX$, we conclude that $R_P(\bb)\geq R_P(\bbeta)$, with a strict inequality unless~$\bb=\bbeta$. This proves the theorem. \end{proof} For bounded $\rho_1$, the function $R_P(\bbeta)$ in~\eqref{eq:MM-functional general} is well defined. This is not necessarily true for unbounded $\rho_1$. However, it will be the case when $P$ has a first moment. For unbounded $\rho_1$ we have the following result. \begin{theorem} \label{th:existence MM unbounded rho} Let $\rho_1:\R\to[0,\infty)$ satisfy condition (R-UNB). Suppose that $\E_P\|\bs\|<\infty$ and that~$\bX$ has full rank with probability one. \begin{itemize} \item[(i)] For every $\bbeta\in\R^q$ fixed, $R_P(\bbeta)<\infty$. \item[(ii)] There is at least one vector $\bbeta_1(P)$ that minimizes $R_P(\bbeta)$. When $\rho_1$ is also strictly convex, then $\bbeta_1(P)$ is uniquely defined. \item[(iii)] When $P$ is such that $\by\mid\bX$ has an elliptically contoured density from~\eqref{eq:elliptical} with parameters $\bmu=\bX\bbeta$ and $\bSigma$, and if $\bV_0(P)=\bSigma$, then $R_P(\bb)\geq R_P(\bbeta)$, for all $\bb\in\R^q$.
When $h$ in~\eqref{eq:elliptical} is strictly decreasing, then $R_P(\bb)$ is uniquely minimized by $\bbeta_1(P)=\bbeta$. \end{itemize} \end{theorem} \begin{proof} Let $0<\lambda_k\leq \lambda_1<\infty$ be the smallest and largest eigenvalues of $\bV_0(P)$. (i) Condition (R-UNB) implies that $\rho_1(s)\leq\rho_1(s_0)$, for $s\in[0,s_0]$, and that for $s>s_0$, \begin{equation} \label{eq:prop rho general} \rho_1(s)=\int_{0}^{s_0}\rho_1'(t)\,\dd t+\int_{s_0}^{s}\rho_1'(t)\,\dd t \leq \rho_1(s_0)+(s-s_0)\rho_1'(s_0). \end{equation} Hence, for $\|\bV_0(P)^{-1/2}(\by-\bX\bbeta)\|>s_0$, we have that \begin{equation} \label{eq:bound rho1} \begin{split} \rho_1\left(\|\bV_0(P)^{-1/2}(\by-\bX\bbeta)\|\right) &\leq \rho_1(s_0)+\|\by-\bX\bbeta\|\lambda_k^{-1/2}\rho_1'(s_0)-s_0\rho_1'(s_0)\\ &\leq \rho_1(s_0)+(\|\by\|+\|\bX\|\cdot\|\bbeta\|)\lambda_k^{-1/2}\rho_1'(s_0). \end{split} \end{equation} Since $\E_P\|\bs\|<\infty$, we find that for any $\bbeta\in\R^q$ fixed, \[ \int \rho_1\left( \|\bV_0(P)^{-1/2}(\by-\bX\bbeta)\| \right) \,\dd P(\bs) \leq \rho_1(s_0)+(\E_P\|\by\|+\|\bbeta\|\E_P\|\bX\|)\lambda_k^{-1/2}\rho_1'(s_0)<\infty, \] which proves part~(i). (ii) We first argue that for minimizing $R_P(\bbeta)$, we can restrict ourselves to a compact set. Note that $R_P(\textbf{0})<\infty$, according to part~(i). Now, suppose that $\|\bb\|>M$. Then from~\eqref{eq:lower bound d Yohai}, \begin{equation}\label{eq:lower bound quadratic form} \sqrt{(\by-\bX\bb)^T\bV_0(P)^{-1}(\by-\bX\bb)} \geq \frac{1}{\sqrt{\lambda_1}} \left( \|\bb\|\sqrt{\lambda_k(\bX^T\bX)}-\|\by\| \right) \geq \frac{\sqrt{M}}{2\sqrt{\lambda_1}}, \end{equation} on the set \begin{equation}\label{def:AM} A_M= \left\{ (\by,\bX)\in\R^{k+qk}: \lambda_k(\bX^T\bX)\geq 1/M;\,\|\by\|\leq \sqrt{M}/2 \right\}. \end{equation} Since $\bX$ has full rank with probability one, $P(A_M)\to1$, as $M\to\infty$, and $\rho_1(\sqrt{M}/(2\sqrt{\lambda_1}))\to\infty$, according to (R-UNB). This implies that for $M$ sufficiently large, \[ R_P(\bb) \geq \rho_1(\sqrt{M}/(2\sqrt{\lambda_1}))P(A_M) > R_P(\textbf{0}). \]
Therefore, there exists a constant $M>0$, such that for minimizing $R_P(\bbeta)$ we may restrict ourselves to the compact set $K=\{\bbeta\in\R^q:\|\bbeta\|\leq M\}$. Since $\E_P\|\bs\|<\infty$, from~\eqref{eq:bound rho1} and dominated convergence, it follows that $R_P(\bbeta)$ is continuous on~$K$, and therefore it must attain at least one minimum $\bbeta_1(P)$ on the compact set $K$. It is easily seen that strict convexity of $\rho_1$ implies strict convexity of $R_P$, which means that $\bbeta_1(P)$ is unique. (iii) Because $\bbeta_1(\cdot)$ is regression equivariant, we may assume that $\bbeta=\mathbf{0}$. Write \[ R_P(\bb) = \E_\bX \left[ \E_{\by|\bX} \left[ \rho_1\left( \|\bV_0(P)^{-1/2}(\by-\bX\bb)\| \right) \right]\right]. \] Since $\bz=\bV_0(P)^{-1/2}\by=\bSigma^{-1/2}\by$ has an elliptically contoured density with parameters~$(\mathbf{0},\bI_k)$, the inner conditional expectation can be written as \[ \iint \mathds{1}\big\{0\leq s\leq \rho_1(\|\bz-\bV_0(P)^{-1/2}\bX\bb\|)\big\} h(\bz^T\bz) \,\dd s\,\dd\bz. \] From here on, we can copy the proof of Theorem~2.1 in~\cite{lopuhaa1992highly} and conclude that \begin{equation} \label{eq:ineq RP general} \E_{\by|\bX} \left[ \rho_1\left( \|\bV_0(P)^{-1/2}(\by-\bX\bb)\| \right) \right] \geq \E_{\by|\bX} \left[ \rho_1\left( \|\bV_0(P)^{-1/2}\by\| \right) \right], \quad \bX-\text{a.s.} \end{equation} It follows that $R_P(\bb)\geq R_P(\mathbf{0})$. When $h$ is strictly decreasing, similar to the proof of Theorem~2.1 in~\cite{lopuhaa1992highly}, it follows that inequality~\eqref{eq:ineq RP general} is strict unless $\bb=\mathbf{0}$, which yields $R_P(\bb)>R_P(\mathbf{0})$, for $\bb\neq\mathbf{0}$. \end{proof} A direct consequence of Theorems~\ref{th:existence MM bounded rho} and~\ref{th:existence MM unbounded rho} is the existence of~$\bbeta_{1,n}$. \begin{corollary} \label{cor:existence MM estimator} Let $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$ be a sample, such that $\bX_i$ has full rank for each $i=1,\ldots,n$. \begin{enumerate} \item[(i)] If $\rho_1:\R\to[0,\infty)$ satisfies condition (R-BND) and $R_n(\textbf{0})<\sup\rho_1$, then there exists at least one $\bbeta_{1,n}$ that minimizes~$R_n(\bbeta)$. \item[(ii)] If $\rho_1:\R\to[0,\infty)$ satisfies condition (R-UNB), then there exists at least one $\bbeta_{1,n}$ that minimizes~$R_n(\bbeta)$. When $\rho_1$ is also strictly convex, then $\bbeta_{1,n}$ is uniquely defined. \end{enumerate} \end{corollary} \begin{proof} Take $P$ equal to the empirical measure $\mathbb{P}_n$ of the sample $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$. If~$\bX_i$ has full rank for each $i=1,\ldots,n$, and $\rho_1:\R\to[0,\infty)$ satisfies (R-BND) or (R-UNB), the corollary follows from Theorems~\ref{th:existence MM bounded rho}(i) and~\ref{th:existence MM unbounded rho}(ii), respectively. \end{proof} The condition $R_n(\mathbf{0})<\sup\rho_1$ is not very restrictive.
It rules out the pathological case of all observations lying outside the ellipsoid centered around the origin with covariance structure $\bV_{0,n}$ and radius~$c_1$. \section{Continuity and Consistency} \label{subsec:continuity general} Consider a sequence $P_t$, $t\geq0$, of probability measures on $\R^k\times\R^{kq}$ that converges weakly to~$P$, as $t\to\infty$. By continuity of the functional $\bbeta_1(P)$ we mean that $\bbeta_1(P_t)\to\bbeta_1(P)$, as $t\to\infty$. An example of such a sequence is the sequence of empirical measures $\mathbb{P}_n$, $n=1,2,\ldots$, which converges weakly to $P$, almost surely. Continuity of the functional~$\bbeta_1(P)$ for this sequence would then mean that the estimator $\bbeta_{1,n}$ is consistent, i.e., $\bbeta_{1,n}=\bbeta_1(\mathbb{P}_n)\to\bbeta_1(P)$, almost surely. Furthermore, continuity of $\bbeta_1(P)$ also provides a first step in deriving the influence function, in the sense that $\bbeta_1(P_{\epsilon,\bs_0})\to\bbeta_1(P)$, as $\epsilon\downarrow0$, where \begin{equation} \label{def:perturbed P} P_{\epsilon,\bs_0}=(1-\epsilon)P+\epsilon\delta_{\bs_0}, \end{equation} with $\delta_{\bs_0}$ representing the Dirac measure at $\bs_0=(\by_0,\bX_0)$. When $\rho_1$ is bounded, we can obtain continuity of the functional $\bbeta_1(P)$ for general weakly convergent sequences $P_t$, $t\geq0$. When $\rho_1$ is unbounded, this becomes more complicated, but we can still establish continuity for the sequence of empirical measures $\mathbb{P}_n$, $n=1,2,\ldots$, and for the sequence $P_{\epsilon,\bs_0}$, as $\epsilon\downarrow0$. For bounded~$\rho_1$ we have the following theorem. \begin{theorem} \label{th:continuity MM bounded rho} Let $P_t$, $t\geq0$, be a sequence of probability measures on $\R^k\times\R^{kq}$ that converges weakly to~$P$, as $t\to\infty$. Suppose that $\rho_1:\R\to[0,\infty)$ satisfies (R-BND), and suppose that~$P$ is such that (A) holds and that $\bX$ has full rank with probability one. Suppose that for $t$ sufficiently large, $\bV_0(P_t)$ exists and that \begin{equation} \label{eq:conv Vt Yohai} \lim_{t\to\infty}\bV_0(P_t)=\bV_0(P). \end{equation} Then for $t$ sufficiently large, there exists at least one $\bbeta_1(P_t)$ that minimizes $R_{P_t}(\bbeta)$. If~$\bbeta_1(P)$ is the unique minimizer of $R_P(\bbeta)$, then for any sequence~$\bbeta_1(P_t)$, $t\geq 0$, it holds that \[ \lim_{t\to\infty}\bbeta_1(P_t)=\bbeta_1(P). \] \end{theorem} \begin{proof} Similar to Lemma~B.1 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, one can show that \begin{equation} \label{eq:prop Lemma 2 Yohai} \lim_{t\to\infty} \int \rho_1\left( d(\bs,\bbeta_{t},\bV_{t}) \right) \,\dd P_t(\bs) = \int \rho_1\left( d(\bs,\bbeta_{L},\bV_{L}) \right) \,\dd P(\bs), \end{equation} for any sequence $(\bbeta_{t},\bV_{t})\to(\bbeta_{L},\bV_{L})$, where \begin{equation} \label{def:mahalanobis} d^2(\bs,\bbeta,\bV)=(\by-\bX\bbeta)^T\bV^{-1}(\by-\bX\bbeta). \end{equation} In particular, this yields that, for every $\bbeta\in\R^q$ fixed, \begin{equation}\label{eq:conv Rt} R_{P_t}(\bbeta)\to R_P(\bbeta), \end{equation} as $t\to\infty$. We first show that there exists $M>0$, such that for minimizing $R_{P_t}(\bbeta)$, we can restrict ourselves to $\|\bbeta\|\leq M$, for $t$ sufficiently large. Consider the set $A_M$ defined in~\eqref{def:AM}. Due to condition~(A) and the fact that $\bX$ has full rank with probability one, we can find $\eta>0$ and $M>0$, such that \[ \rho_1(\sqrt{M}/(2\sqrt{\lambda_1}))P(A_M)>R_P(\textbf{0})+2\eta.
\] On the other hand, by~\eqref{eq:conv Rt}, \begin{equation} \label{eq:bound difference} |R_{P_t}(\mathbf{0})-R_P(\mathbf{0})|\leq \eta, \end{equation} for $t$ sufficiently large. If $\bbeta$ minimizes $R_{P_t}(\bbeta)$, then for $t$ sufficiently large we must have $\|\bbeta\|\leq M$, since otherwise, according to~\eqref{eq:lower bound quadratic form}, \[ R_{P_t}(\bbeta) \geq \rho_1(\sqrt{M}/(2\sqrt{\lambda_1}))P(A_M) > R_P(\textbf{0})+2\eta \geq R_{P_t}(\textbf{0})+\eta > R_{P_t}(\textbf{0}). \] Hence, for minimizing $R_{P_t}(\bbeta)$, we can restrict ourselves to the compact set $K=\{\bbeta\in\R^q:\|\bbeta\|\leq M\}$. Furthermore, as in the proof of Theorem~\ref{th:existence MM bounded rho}, the function $R_{P_t}(\bbeta)$ is continuous on the compact set~$K$, and must therefore attain a minimum $\bbeta_{1}(P_t)$. According to Theorem~\ref{th:existence MM bounded rho}, there exists at least one $\bbeta_1(P)$ that minimizes $R_P(\bbeta)$. Now, suppose that $\bbeta_1(P)$ is unique. Because $\bbeta_1(P)$ is regression equivariant, we may assume that $\bbeta_1(P)=\mathbf{0}$. For the sake of brevity, let us write $\bbeta_{1,t}=\bbeta_1(P_t)$, $\bV_{0,t}=\bV_0(P_t)$, and~$R_t=R_{P_t}$. From~\eqref{eq:conv Vt Yohai} it follows that for $t$ sufficiently large, \begin{equation} \label{eq:bounds Vt Yohai} 0<\lambda_k(\bV_0(P))/4\leq \lambda_k(\bV_{0,t})\leq\lambda_1(\bV_{0,t})\leq 4\lambda_1(\bV_0(P))<\infty. \end{equation} Now, consider a sequence $\{(\bbeta_{1,t},\bV_{0,t})\}$, such that $\|\bbeta_{1,t}\|\leq M$ and $\bV_{0,t}$ satisfies~\eqref{eq:bounds Vt Yohai}. The sequence $\{(\bbeta_{1,t},\bV_{0,t})\}$ lies in a compact set, so it has a convergent subsequence $(\bbeta_{1,t_j},\bV_{0,t_j})\to(\bbeta_{1,L},\bV_0(P))$. According to~\eqref{eq:prop Lemma 2 Yohai}, it follows that \[ \begin{split} \lim_{j\to\infty} R_{t_j}(\bbeta_{1,t_j}) &= \lim_{j\to\infty} \int \rho_1\left( d(\bs,\bbeta_{1,t_j},\bV_{0,t_j}) \right) \,\dd P_{t_j}(\bs)\\ &= \int \rho_1\left( d(\bs,\bbeta_{1,L},\bV_0(P)) \right) \,\dd P(\bs) = R_P(\bbeta_{1,L}). \end{split} \] Now, suppose that $\bbeta_{1,L}\neq \mathbf{0}$. Since $R_P(\bbeta)$ is uniquely minimized at $\bbeta=\mathbf{0}$, we have $R_P(\bbeta_{1,L})>R_P(\mathbf{0})$. Because $R_{t_j}(\bbeta_{1,t_j})\to R_P(\bbeta_{1,L})$ and $R_{t_j}(\mathbf{0})\to R_P(\mathbf{0})$, by~\eqref{eq:conv Rt}, this would imply that $R_{t_j}(\bbeta_{1,t_j})>R_{t_j}(\mathbf{0})$, for $t_j$ sufficiently large, so that $\bbeta_{1,t_j}$ would not be a minimizer of $R_{t_j}(\bbeta)$. We conclude that $\bbeta_{1,L}=\mathbf{0}$, which proves the theorem. \end{proof} There are several examples of covariance functionals that satisfy~\eqref{eq:conv Vt Yohai}, such as the Minimum Covariance Determinant functional (see~\cite{cator-lopuhaa2012}) and the covariance S-functional (see~\cite{lopuhaa1989}), including the Minimum Volume Ellipsoid functional. For a structured covariance functional~$\bV(\btheta_0(P))$ to satisfy~\eqref{eq:conv Vt Yohai}, it is required that the mapping $\btheta\mapsto\bV(\btheta)$ is continuous. This is true for all the examples mentioned in Section~\ref{sec:structured covariance model}. In addition, the functional $\btheta_0(P)$ needs to be continuous. An example is the S-functional $\btheta_0(P)$ defined in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}.
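As a toy numerical check of this continuity/consistency mechanism, consider the location model ($\bX_i=\bI_k$) with Gaussian errors. The sketch below (all names ours) uses the sample covariance matrix as a non-robust stand-in for $\bV_{0,n}$, Huber's $\rho$ with an arbitrary cut-off, and lets $n$ grow; one should see the minimizer approach the true location, in line with the consistency results stated below.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
k, beta_true = 3, np.array([1.0, -2.0, 0.5])
rho1 = lambda d: np.where(d <= 2.0, d**2 / 2, 2.0 * d - 2.0)  # Huber, c = 2

for n in (100, 1000, 10000):
    y = beta_true + rng.standard_normal((n, k))        # errors with V = I_k
    V0_inv = np.linalg.inv(np.cov(y, rowvar=False))    # stand-in for V_{0,n}
    Rn = lambda b: np.mean(rho1(np.sqrt(
        np.einsum('ik,kl,il->i', y - b, V0_inv, y - b))))
    print(n, minimize(Rn, np.zeros(k), method='Nelder-Mead').x)
\end{verbatim}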
A direct corollary of $\bbeta_1(P)$ being continuous is the consistency of the estimator~$\bbeta_{1,n}$. \begin{corollary} \label{cor:consistency MM-estimator bounded} Suppose that $\rho_1:\R\to[0,\infty)$ satisfies (R-BND) and suppose that $P$ is such that~(A) holds and that $\bX$ has full rank with probability one. Suppose that $\bV_{0,n}\to\bV_0(P)$, with probability one. Then for $n$ sufficiently large, there is at least one~$\bbeta_{1,n}$ that minimizes~$R_{n}(\bbeta)$, with probability one. If~$\bbeta_1(P)$ is the unique minimizer of $R_P(\bbeta)$, then for any sequence~$\bbeta_{1,n}$, $n=1,2,\ldots$, it holds that \[ \lim_{n\to\infty}\bbeta_{1,n}=\bbeta_1(P), \] with probability one. \end{corollary} \begin{proof} We apply Theorem~\ref{th:continuity MM bounded rho} to the sequence $\mathbb{P}_n$, $n=1,2,\ldots$, of probability measures, where~$\mathbb{P}_n$ is the empirical measure corresponding to $(\mathbf{y}_1,\mathbf{X}_1),\ldots,(\mathbf{y}_n,\mathbf{X}_n)$. According to the Portmanteau Theorem (e.g., see Theorem~2.1 in~\cite{billingsley1968}), $\mathbb{P}_n$ converges weakly to $P$, with probability one. The corollary then follows from Theorem~\ref{th:continuity MM bounded rho}. \end{proof} For unbounded $\rho_1$, we cannot obtain continuity of the functional $\bbeta_1(P)$ for all sequences~$\{P_t\}$ that converge weakly to $P$. However, we can establish strong consistency for the estimator~$\bbeta_{1,n}$. \begin{theorem} \label{th:consistency MM unbounded rho1} Let $\rho_1:\R\to[0,\infty)$ satisfy (R-UNB). Suppose that $\E_P\|\bs\|<\infty$ and that $\bX$ has full rank with probability one. Suppose that $\bV_{0,n}\to\bV_0(P)$, with probability one, and let~$\bbeta_{1,n}$ minimize~$R_{n}(\bbeta)$. If $\bbeta_1(P)$ is the unique minimizer of $R_P(\bbeta)$, then \[ \lim_{n\to\infty}\bbeta_{1,n}=\bbeta_1(P), \] with probability one. \end{theorem} \begin{proof} For the sake of brevity, write $\bV_0$ instead of $\bV_0(P)$. Since $\bV_{0,n}\to \bV_0$, with probability one, there exist \[ 0<L_1=\lambda_k(\bV_0)/4\leq 4\lambda_1(\bV_0)=L_2<\infty, \] such that, for $n$ sufficiently large, all eigenvalues of $\bV_{0,n}$ are between $L_1$ and $L_2$, with probability one. Let $h(\bs;\bbeta,\bV)=\rho_1(\|\bV^{-1/2}(\by-\bX\bbeta)\|)$ and define \[ \begin{split} H(\bbeta,\bV) &= \int h(\bs;\bbeta,\bV)\,\dd P(\bs)\\ H_n(\bbeta,\bV) &= \int h(\bs;\bbeta,\bV)\,\dd \mathbb{P}_n(\bs). \end{split} \] For $M>0$, consider the class of functions \[ \mathcal{F} = \left\{ h(\cdot;\bbeta,\bV): \|\bbeta\|\leq M;\, \lambda_k(\bV)\geq L_1 \right\}. \] Then, according to~\eqref{eq:prop rho general} and~\eqref{eq:bound rho1}, the class $\mathcal{F}$ has envelope \[ \rho_1(s_0)+(\|\by\|+M\|\bX\|)L_1^{-1/2}\rho_1'(s_0), \] which is integrable, due to $\E_P\|\bs\|<\infty$. Hence, by dominated convergence, $H(\bbeta,\bV)$ is continuous on the set $K_M=\{(\bbeta,\bV): \|\bbeta\|\leq M;\, \lambda_k(\bV)\geq L_1\}$. Moreover, the graphs of functions in $\mathcal{F}$ have polynomial discrimination. This can be shown similarly to the proof of Lemma~B.6 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}.
From Theorem~24 in~\cite{pollard1984}, we may then conclude that \begin{equation} \label{eq:uniform strong law} \sup_{(\bbeta,\bV)\in K_M} \left| H_n(\bbeta,\bV)-H(\bbeta,\bV) \right| \to0, \end{equation} with probability one. As a first consequence, we find that \begin{equation} \label{eq:bound R-difference} \left| R_P(\mathbf{0}) - R_n(\mathbf{0}) \right| \leq \left| H_n(\mathbf{0},\bV_{0,n}) - H(\mathbf{0},\bV_{0,n}) \right| + \left| H(\mathbf{0},\bV_{0,n}) - H(\mathbf{0},\bV_0) \right| \to 0, \end{equation} with probability one, due to~\eqref{eq:uniform strong law} and continuity of~$H(\bbeta,\bV)$. Next, we argue that there exists~$M>0$, such that $\|\bbeta_{1,n}\|\leq M$, for $n$ sufficiently large. Since $\E_P\|\bs\|<\infty$, it follows as in the proof of Theorem~\ref{th:existence MM unbounded rho}(i) that \[ R_P(\mathbf{0}) = \int \rho_1\left(\|\bV_0^{-1/2}\by\|\right)\,\dd P(\bs)<\infty. \] Then, consider the set $A_M$ defined in~\eqref{def:AM} and choose $M>0$, such that \[ \rho_1(\sqrt{M}/(2\sqrt{L_2}))P(A_M)>R_P(\textbf{0}). \] Then, for $n$ sufficiently large, we must have $\|\bbeta_{1,n}\|\leq M$, since otherwise, according to~\eqref{eq:lower bound quadratic form}, \[ R_{n}(\bbeta_{1,n}) \geq \rho_1(\sqrt{M}/(2\sqrt{L_2}))\mathbb{P}_n(A_M) \to \rho_1(\sqrt{M}/(2\sqrt{L_2}))P(A_M) > R_P(\mathbf{0}), \] as $n\to\infty$, with probability one, which together with~\eqref{eq:bound R-difference} would imply that for $n$ sufficiently large, $R_{n}(\bbeta_{1,n})>R_{n}(\mathbf{0})$, with probability one. Then suppose that $\bbeta_1(P)$ is the unique minimizer of $R_P(\bbeta)$. Because $\bbeta_1(P)$ is regression equivariant, we may assume that $\bbeta_1(P)=\mathbf{0}$. This means that for any $\delta>0$, there exists an $\alpha>0$, such that \[ \inf_{\|\bbeta\|>\delta} \int \rho_1\left(\frac{\|\bV_0^{-1/2}(\by-\bX\bbeta)\|}{1+\alpha}\right)\,\dd P(\bs) > R_P(\mathbf{0}). \] Because $\bV_{0,n}^{-1/2}\bV_0^{1/2}\to \bI_k$, with probability one, we can choose $n$ sufficiently large, such that $\lambda_k(\bV_0^{1/2}\bV_{0,n}^{-1}\bV_0^{1/2})\geq 1/(1+\alpha)^2$.
Then, since \[ \begin{split} \|\bV_{0,n}^{-1/2}(\by-\bX\bbeta)\|^2 &= \|\bV_{0,n}^{-1/2}\bV_0^{1/2}\bV_0^{-1/2}(\by-\bX\bbeta)\|^2\\ &\geq \lambda_k(\bV_0^{1/2}\bV_{0,n}^{-1}\bV_0^{1/2}) \|\bV_0^{-1/2}(\by-\bX\bbeta)\|^2\\ &\geq \frac{\|\bV_0^{-1/2}(\by-\bX\bbeta)\|^2}{(1+\alpha)^2}, \end{split} \] we find, in view of the uniform strong law~\eqref{eq:uniform strong law}, that with probability one, for $n$ sufficiently large, \[ \inf_{\|\bbeta\|>\delta} H_n(\bbeta,\bV_{0,n}) \geq \inf_{\|\bbeta\|>\delta} \int \rho_1\left(\frac{\|\bV_0^{-1/2}(\by-\bX\bbeta)\|}{1+\alpha}\right)\,\dd P(\bs) > R_P(\mathbf{0}). \] Furthermore, $\left| R_P(\mathbf{0}) - H_n(\mathbf{0},\bV_{0,n}) \right| \to0$, with probability one, as $n\to\infty$, according to~\eqref{eq:bound R-difference}. Hence, for every $\delta>0$ and $n$ sufficiently large, \[ \inf_{\|\bbeta\|>\delta} H_n(\bbeta,\bV_{0,n}) > H_n(\mathbf{0},\bV_{0,n}). \] Therefore, for all $\delta>0$, we must have $\|\bbeta_{1,n}\|\leq \delta$, for $n$ sufficiently large, with probability one. This means that $\bbeta_{1,n}\to\mathbf{0}$, with probability one. \end{proof} \section{Global robustness: breakdown point} \label{subsec:BDP Yohai} Consider a collection of points $\mathcal{S}_n=\{\bs_i=(\by_i,\bX_i),i=1,\ldots,n\}\subset \R^k\times \R^{kq}$. To emphasize the dependence on the collection $\mathcal{S}_n$, we sometimes denote the estimators in Definition~\ref{def:MM-estimator general} by~$\bbeta_{1,n}(\mathcal{S}_n)$ and $\bV_{0,n}(\mathcal{S}_n)$. To investigate the global robustness of $\bbeta_{1,n}$, we compute the finite-sample (replacement) breakdown point. For a given collection $\mathcal{S}_n$, the finite-sample breakdown point (see Donoho and Huber~\cite{donoho&huber1983}) of the regression estimator $\bbeta_{1,n}$ is defined as the smallest proportion of points from~$\mathcal{S}_n$ that one needs to replace in order to carry the estimator beyond all bounds. More precisely, \begin{equation} \label{def:BDP beta} \epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n) = \min_{1\leq m\leq n} \left\{ \frac{m}{n}: \sup_{\mathcal{S}_m'} \left\| \bbeta_{1,n}(\mathcal{S}_n)-\bbeta_{1,n}(\mathcal{S}_m') \right\| =\infty \right\}, \end{equation} where the minimum runs over all possible collections $\mathcal{S}_m'$ that can be obtained from $\mathcal{S}_n$ by replacing~$m$ points of $\mathcal{S}_n$ by arbitrary points in $\R^k\times \R^{kq}$. The finite-sample (replacement) breakdown point of a covariance estimator $\bV_{0,n}$ at a collection~$\mathcal{S}_n$ is defined as \begin{equation} \label{def:BDP V} \epsilon_n^*(\bV_{0,n},\mathcal{S}_n) = \min_{1\leq m\leq n} \left\{ \frac{m}{n}: \sup_{\mathcal{S}_m'} \text{dist}(\bV_{0,n}(\mathcal{S}_n),\bV_{0,n}(\mathcal{S}_m')) =\infty \right\}, \end{equation} with $\text{dist}(\cdot,\cdot)$ defined as $\text{dist}(\bA,\mathbf{B}) = \max\left\{ \left|\lambda_1(\bA)-\lambda_1(\mathbf{B})\right|, \left|\lambda_k(\bA)^{-1}-\lambda_k(\mathbf{B})^{-1}\right| \right\}$, where the minimum runs over all possible collections $\mathcal{S}_m'$ that can be obtained from $\mathcal{S}_n$ by replacing~$m$ points of $\mathcal{S}_n$ by arbitrary points in $\R^k\times \R^{kq}$. So the breakdown point of $\bV_{0,n}$ is the smallest proportion of points from~$\mathcal{S}_n$ that one needs to replace in order to make the largest eigenvalue of~$\bV_{0,n}(\mathcal{S}_m')$ arbitrarily large (explosion), or to make the smallest eigenvalue of~$\bV_{0,n}(\mathcal{S}_m')$ arbitrarily small (implosion).
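The measure $\text{dist}(\cdot,\cdot)$ in~\eqref{def:BDP V} is straightforward to evaluate; a short Python sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def dist(A, B):
    # dist(A,B) = max(|l_1(A)-l_1(B)|, |l_k(A)^{-1}-l_k(B)^{-1}|), with
    # l_1 the largest and l_k the smallest eigenvalue; it becomes
    # arbitrarily large when the largest eigenvalue explodes or the
    # smallest eigenvalue implodes
    eig_A = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    eig_B = np.linalg.eigvalsh(B)
    return max(abs(eig_A[-1] - eig_B[-1]),
               abs(1.0 / eig_A[0] - 1.0 / eig_B[0]))
\end{verbatim}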
When we estimate a structured covariance matrix $\bV(\btheta)$ by $\bV(\btheta_{0,n})$, we need to specify what the breakdown point of $\btheta_{0,n}$ is. Since the estimator $\btheta_{0,n}$ determines the covariance estimator $\bV(\btheta_{0,n})$, it seems natural to let the breakdown point of $\btheta_{0,n}$ correspond to the breakdown point of the covariance estimator (see also~\cite{lopuhaa-gares-ruizgazenARXIVE2022}). The breakdown behavior of $\bbeta_{1,n}$ depends on whether the function $\rho_1$ in Definition~\ref{def:MM-estimator general} is bounded or unbounded. Huber~\cite{huber1984} pointed out that the breakdown point of location M-estimators constructed with a bounded $\rho$-function not only depends on the function $\rho$, but also on the configuration of the sample. Depending on this configuration, the breakdown point can be any value between 0 and $1/2$. For this reason, the multivariate location M-estimator (see~\cite{lopuhaa1992highly}) is constructed with an unbounded $\rho$-function. In this way, the location M-estimator inherits the breakdown point of the initial covariance estimator. Unfortunately, the use of an unbounded function $\rho_1$ in Definition~\ref{def:MM-estimator general} does not seem suitable for the breakdown behavior of the regression MM-estimator. The presence of the design matrices~$\bX_i$ makes things more complicated than in the multivariate location case. Nevertheless, for unbounded~$\rho_1$, we can establish a result similar to the one in~\cite{lopuhaa1992highly} when all design matrices are the same. An example is of course when all $\bX_i=\bI_k$, as in the location-scale model, but another example occurs in linear mixed effects models in which all subjects have the same design matrix $\bX$, representing particular contrasts for the fixed effects. \begin{proposition} \label{prop:BDP MM unbounded} Suppose that $\rho_1$ satisfies (R-UNB). Let $\mathcal{S}_n\subset \R^{k}\times\R^{kq}$ be a collection of~$n$ points $\bs_i=(\by_i,\bX_i)$, $i=1,\ldots,n$. Suppose that $\bX_i=\bX$, for all $i=1,\ldots,n$, where $\bX$ is fixed and has full rank. Then for any $\bbeta_{1,n}$ that minimizes $R_n(\bbeta)$, it holds that \[ \epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n) \geq \epsilon_n^*(\bV_{0,n},\mathcal{S}_n). \] \end{proposition} \begin{proof} For $\bt\in\R^k$, let \[ \widetilde{R}_n(\bt) = \frac1n \sum_{i=1}^{n} \left\{ \rho_1\left( \sqrt{(\by_i-\bt)^T\bV_{0,n}^{-1}(\by_i-\bt)} \right) - \rho_1\left( \sqrt{\by_i^T\bV_{0,n}^{-1}\by_i} \right) \right\} \] be the objective function for the location M-estimator in~\cite{lopuhaa1992highly}. Then we can write \[ R_n(\bbeta) = R_n(\mathbf{0}) + \widetilde{R}_n(\bX\bbeta). \] Because $\bX$ has full rank, $\bbeta_{1,n}$ minimizes $R_n(\bbeta)$ if and only if $\bt_{1,n}=\bX\bbeta_{1,n}$ minimizes~$\widetilde{R}_n(\bt)$ over the column space of~$\bX$, in which case $\bbeta_{1,n}=(\bX^T\bX)^{-1}\bX^T\bt_{1,n}$. As $\bX$ is considered to be fixed, this means that $\bbeta_{1,n}$ breaks down precisely when~$\bt_{1,n}$ does. Hence, from Theorem~4.1 in~\cite{lopuhaa1992highly} we conclude that $\epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n)\geq\epsilon_n^*(\bV_{0,n},\mathcal{S}_n)$. \end{proof} The use of a bounded function $\rho_1$ in Definition~\ref{def:MM-estimator general} also does not seem very suitable, in view of the results found by Huber~\cite{huber1984}.
However, the approach followed by Yohai~\cite{yohai1987}, which relates the bounded function $\rho_1$ to another bounded function $\rho_0$ used in the first stage, turns out to be adequate. Let $\bbeta_{0,n}$ be an initial regression estimator, such that together with the covariance estimator $\bV_{0,n}$, it holds that \begin{equation} \label{eq:cond stage 1} \frac1n \sum_{i=1}^n \rho_0 \left( \sqrt{(\by_i-\bX_i\bbeta_{0,n})^T\bV_{0,n}^{-1}(\by_i-\bX_i\bbeta_{0,n})} \right) = b_0, \end{equation} for a function $\rho_0$ that satisfies (R-BND), and suppose that $\rho_1$ satisfies (R-BND), such that \begin{equation} \label{eq:ineq rho} \frac{\rho_1(s)}{a_1} \leq \frac{\rho_0(s)}{a_0}, \end{equation} where $a_j=\sup\rho_j$, for $j=0,1$, as in (R-BND). Next, we proceed as in Definition~\ref{def:MM-estimator general}, i.e., we define $\bbeta_{1,n}$ as the vector that minimizes~\eqref{eq:MM estimator general}. The estimators~$\bbeta_{0,n}$ and $\bV_{0,n}$ can be any two (high breakdown) estimators satisfying~\eqref{eq:cond stage 1}. However, natural candidates for our setup are the S-estimators $(\bbeta_{0,n},\btheta_{0,n})$, with $\bV_{0,n}=\bV(\btheta_{0,n})$, defined in~\cite{lopuhaa-gares-ruizgazenARXIVE2022} by means of the function $\rho_0$. In order to formulate the breakdown point of $\bbeta_{1,n}$ using bounded $\rho$-functions, we first need to discuss the following. Recall that $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$ are represented as points in $\R^k\times\R^{kq}$. Note, however, that for linear models with an intercept, the first column of each $\bX_i$ consists of 1's.
In order to formulate the breakdown point of $\bbeta_{1,n}$ using bounded $\rho$-functions, we first need to discuss the following. Recall that $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$ are represented as points in $\R^k\times\R^{kq}$. Note, however, that for linear models with intercept the first column of each $\bX_i$ consists of 1's. This means that the points $(\by_i,\bX_i)$ are concentrated in a lower dimensional subset of $\R^k\times\R^{kq}$. A similar situation occurs when all $\bX_i$ are equal to the same design matrix. In view of this, define~$\mathcal{X}\subset\R^{kq}$ as the subset with the lowest dimension $p=\text{dim}(\mathcal{X})\leq kq$ satisfying
\begin{equation}
\label{def:Xspace}
P(\bX\in \mathcal{X})=1.
\end{equation}
Hence, $P$ is concentrated on the subset $\R^k\times \mathcal{X}$ of $\R^k\times\R^{kq}$, which may be of lower dimension~$k+p$ than $k+kq$. Let $\mathcal{S}_n=\{\bs_1,\ldots,\bs_n\}$, with $\bs_i=(\by_i,\bX_i)$, be a collection of $n$ points in~$\R^k\times \mathcal{X}$. Define
\begin{equation}
\label{def:k(S)}
\kappa(\mathcal{S}_n)
=
\text{maximal number of points of $\mathcal{S}_n$ lying on the same hyperplane in~$\R^k\times \mathcal{X}$.}
\end{equation}
For example, if the distribution $P$ is absolutely continuous, then $\kappa(\mathcal{S}_n)\leq k+p$ with probability one. We then have the following theorem.
\begin{theorem}
\label{th:BDP MM Yohai}
Suppose that $(\bbeta_{0,n},\bV_{0,n})$ satisfy~\eqref{eq:cond stage 1}, with $\rho_0$ such that~\eqref{eq:ineq rho} holds, where~$\rho_0$ and~$\rho_1$ satisfy (R-BND). Let $\mathcal{S}_n\subset \R^{k}\times \mathcal{X}$ be a collection of~$n$ points $\bs_i=(\by_i,\bX_i)$, $i=1,\ldots,n$. Let $r_0=b_0/a_0$ and suppose that $0<r_0\leq (n-\kappa(\mathcal{S}_n))/(2n)$, where $\kappa(\mathcal{S}_n)$ is defined by~\eqref{def:k(S)}. Then for any $\bbeta_{1,n}$ that minimizes $R_n(\bbeta)$, it holds that
\[
\epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n)
\geq
\min\left(
\epsilon_n^*(\bV_{0,n},\mathcal{S}_n),\frac{\lceil{nr_0}\rceil}n
\right).
\]
\end{theorem}
\begin{proof}
Suppose we replace $m$ points, where $m$ is such that
\[
m\leq \lceil nr_0\rceil-1
\quad\text{and}\quad
m\leq n\epsilon_n^*(\bV_{0,n},\mathcal{S}_n)-1.
\]
Let $\mathcal{S}_m'$ be the corrupted collection of points. Write $\bbeta_{0,m}=\bbeta_{0,n}(\mathcal{S}_m')$, and $\bV_{0,m}=\bV_{0,n}(\mathcal{S}_m')$. Then $\bV_{0,m}$ does not break down, so that there exist constants $0<L_1\leq L_2<\infty$, not depending on $\mathcal{S}_m'$, such that
\[
0<L_1\leq\lambda_k(\bV_{0,m})\leq \lambda_1(\bV_{0,m})\leq L_2<\infty.
\]
For any $\bbeta\in\R^q$, define the cylinder
\[
\mathcal{C}_{1,m}(\bbeta)
=
\left\{
(\by,\bX)\in\R^{k}\times\R^{kq}:
(\by-\bX\bbeta)^T
\bV_{0,m}^{-1}
(\by-\bX\bbeta)
\leq
c_1^2
\right\}.
\]
Consider the function
\[
R_m(\bbeta)
=
\frac{1}{n}
\sum_{\bs_i\in \mathcal{S}_m'}
\rho_1\left(
\sqrt{(\by_i-\bX_i\bbeta)^T\bV_{0,m}^{-1}(\by_i-\bX_i\bbeta)}
\right),
\]
for the corrupted sample $\mathcal{S}_m'$. For any $\bbeta$ that minimizes $R_m(\bbeta)$, it holds that $R_m(\bbeta)\leq R_m(\bbeta_{0,m})$. Therefore, for such $\bbeta$, according to~\eqref{eq:ineq rho} and~\eqref{eq:cond stage 1}, we have that
\[
\begin{split}
R_m(\bbeta)
&\leq
\frac{1}{n}
\sum_{\bs_i\in \mathcal{S}_m'}
\rho_1\left(
\sqrt{(\by_i-\bX_i\bbeta_{0,m})^T\bV_{0,m}^{-1}(\by_i-\bX_i\bbeta_{0,m})}
\right)\\
&\leq
\frac{a_1}{a_0}
\frac{1}{n}
\sum_{\bs_i\in \mathcal{S}_m'}
\rho_0\left(
\sqrt{(\by_i-\bX_i\bbeta_{0,m})^T\bV_{0,m}^{-1}(\by_i-\bX_i\bbeta_{0,m})}
\right)
=
r_0a_1.
\end{split}
\]
Let $\mathbb{P}_m'$ be the empirical measure corresponding to the corrupted collection $\mathcal{S}_m'$.
Then it holds that
\[
\begin{split}
\mathbb{P}_m'(\mathcal{C}_{1,m}(\bbeta))
&=
\frac1n
\sum_{\bs_i\in \mathcal{S}_m'}
\mathds{1}
\left\{
\bs_i\in \mathcal{C}_{1,m}(\bbeta)
\right\}\\
&\geq
1-
\frac{1}{na_1}
\sum_{\bs_i\in \mathcal{S}_m'}
\rho_1\left(
\sqrt{(\by_i-\bX_i\bbeta)^T\bV_{0,m}^{-1}(\by_i-\bX_i\bbeta)}
\right)
\geq
1-r_0.
\end{split}
\]
It follows that the cylinder $\mathcal{C}_{1,m}(\bbeta)$ must contain at least $\lceil n-nr_0\rceil$ points from the corrupted collection $\mathcal{S}'_m$. Furthermore, since $r_0\leq (n-\kappa(\mathcal{S}_n))/(2n)$, any such subset of $\mathcal{S}'_m$ contains at least
\[
\lceil n-nr_0\rceil-m
\geq
n-\lfloor{nr_0}\rfloor-\lceil nr_0\rceil+1
\geq
\kappa(\mathcal{S}_n)+1
\]
points of the original collection $\mathcal{S}_n$. Let $J_0$ be a subset of $\kappa(\mathcal{S}_n)+1$ points from the original collection $\mathcal{S}_n$ contained in $\mathcal{C}_{1,m}(\bbeta)$. By definition of $\kappa(\mathcal{S}_n)$, these $\kappa(\mathcal{S}_n)+1$ original points cannot all lie on the same hyperplane, so that
\[
\gamma_n
=
\inf_{J\subset \mathcal{S}_n}
\inf_{\|\bgamma\|=1}\max_{\bs\in J}
\|\bX\bgamma\|>0,
\]
where the first infimum runs over all subsets $J\subset \mathcal{S}_n$ of $\kappa(\mathcal{S}_n)+1$ points. By definition of~$\gamma_n$, there exists an original point~$\bs_0=(\by_0,\bX_0)\in J_0\subset \mathcal{S}_n\cap \mathcal{C}_{1,m}(\bbeta)$, such that $\|\bX_0\bbeta\|\geq \gamma_n\|\bbeta\|$, and hence
\[
\|\bbeta\|
\leq
\frac{1}{\gamma_n}
\|\bX_0\bbeta\|.
\]
Because $\bs_0\in \mathcal{C}_{1,m}(\bbeta)$, it follows that
\[
\|\by_0-\bX_0\bbeta\|^2
\leq
(\by_0-\bX_0\bbeta)^T\bV_{0,m}^{-1}(\by_0-\bX_0\bbeta)\,
\lambda_1(\bV_{0,m})
\leq
c_1^2L_2,
\]
and because $\bs_0\in \mathcal{S}_n$, we have that
\[
\|\bX_0\bbeta\|
\leq
c_1\sqrt{L_2}
+
\max_{(\by_i,\bX_i)\in \mathcal{S}_n} \|\by_i\|
<\infty.
\]
We conclude that for minimizing $R_m(\bbeta)$ we can restrict ourselves to a compact set $K_n$, only depending on the original collection~$\mathcal{S}_n$. Firstly, since $R_m(\bbeta)$ is continuous, this implies that there exists at least one $\bbeta_{1,n}(\mathcal{S}'_m)$ that minimizes $R_m(\bbeta)$. Secondly, since any $\bbeta_{1,n}(\mathcal{S}'_m)$ must be in~$K_n$, which only depends on the original collection~$\mathcal{S}_n$, the estimate $\bbeta_{1,n}(\mathcal{S}'_m)$ does not break down.
\end{proof}
Theorem~\ref{th:BDP MM Yohai} is comparable to Theorem~1 in~\cite{SalibianBarrera-VanAelst-Willems2006} and Theorem~3 in~\cite{kudraszow-maronna2011}. Together with Proposition~\ref{prop:BDP MM unbounded}, the result in Theorem~\ref{th:BDP MM Yohai} provides sufficient conditions for the MM-estimators in the linear mixed effects model used in~\cite{copt&heritier2007} to inherit the breakdown point from the initial covariance estimate. It can be shown that if $\bV_{0,n}$ satisfies~\eqref{eq:cond stage 1} for some $\bbeta_{0,n}$, it must have a breakdown point that is less than or equal to $\lceil nr_0\rceil/n$ (e.g., see the proof of Theorem~4 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}). This means that from Theorem~\ref{th:BDP MM Yohai}, we have
\[
\epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n)
\geq
\epsilon_n^*(\bV_{0,n},\mathcal{S}_n).
\]
Moreover, if $(\bbeta_{0,n},\btheta_{0,n})$ are S-estimators, as defined in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, such that $\bV_{0,n}=\bV(\btheta_{0,n})$, then $\epsilon_n^*(\bV_{0,n},\mathcal{S}_n)=\lceil nr_0\rceil/n$ according to Theorem~4 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, so that
\[
\epsilon_n^*(\bbeta_{1,n},\mathcal{S}_n)
\geq
\frac{\lceil{nr_0}\rceil}n.
\]
The largest possible value of the breakdown point occurs when $r_0=(n-\kappa(\mathcal{S}_n))/(2n)$, in which case $\lceil nr_0\rceil/n=\lceil (n-\kappa(\mathcal{S}_n))/2\rceil/n=\lfloor (n-\kappa(\mathcal{S}_n)+1)/2\rfloor/n$. When the collection~$\mathcal{S}_n$ is in general position, then $\kappa(\mathcal{S}_n)= k+p$. In that case the breakdown point is at least~$\lfloor (n-k-p+1)/2\rfloor/n$. When all $\bX_i$ are equal to the same $\bX$, one has $p=0$ and $\kappa(\mathcal{S}_n)=k$. In that case, the breakdown point is at least $\lfloor (n-k+1)/2\rfloor/n$. This coincides with the maximal breakdown point for affine equivariant estimators for $k\times k$ covariance matrices (see~\cite{davies1987}). The small sketch below makes these bounds concrete.
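For illustration purposes only, the following Python sketch (with our own helper names and illustrative sizes) evaluates the two expressions for the bound in Theorem~\ref{th:BDP MM Yohai}, confirming numerically that at $r_0=(n-\kappa(\mathcal{S}_n))/(2n)$ the lower bound $\lceil nr_0\rceil/n$ equals $\lfloor (n-\kappa(\mathcal{S}_n)+1)/2\rfloor/n$.
\begin{verbatim}
# Sketch: breakdown bounds of the theorem as functions of n, r0 and
# kappa(S_n).  For a sample in general position kappa = k + p.
import math

def bdp_lower_bound(n, r0):
    """Lower bound ceil(n*r0)/n on the breakdown point of beta_{1,n}."""
    return math.ceil(n * r0) / n

def bdp_maximal(n, kappa):
    """Value of the bound at the largest admissible r0 = (n - kappa)/(2n)."""
    return math.floor((n - kappa + 1) / 2) / n

n, k, p = 100, 4, 8                  # illustrative sizes
kappa = k + p                        # general position
r0 = (n - kappa) / (2 * n)           # largest admissible r0
assert bdp_lower_bound(n, r0) == bdp_maximal(n, kappa)   # both equal 0.44
\end{verbatim}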
\section{Score equations}
\label{sec:score equations}
Recall the definition of the functional $\bbeta_1(P)$ in Section~\ref{subsec:def MM Yohai}, which minimizes $R_{P}(\bbeta)$. Then $\bbeta_1(P)$ is also a solution of $\partial R_P(\bbeta)/\partial \bbeta=\mathbf{0}$. In order to allow changing the order of integration and differentiation in $R_P(\bbeta)$, we require an additional condition on $\rho_1$.
\begin{quote}
\begin{itemize}
\item[(R-CD1)]
$\rho_1$ is continuously differentiable and $u_1(s)=\rho_1'(s)/s$ is continuous.
\end{itemize}
\end{quote}
If $\rho_1$ satisfies (R-CD1), then
\[
\frac{\partial}{\partial \bbeta}
\rho_1\left(
\sqrt{(\by-\bX\bbeta)^T\bV_0(P)^{-1}(\by-\bX\bbeta)}
\right)
=
-u_1(d_0)\bX^T\bV_0(P)^{-1}(\by-\bX\bbeta),
\]
where $d_0=d(\bs,\bbeta,\bV_0(P))$, as defined in~\eqref{def:mahalanobis}. This means that
\[
\left\|
\frac{\partial}{\partial \bbeta}
\rho_1\left(
\sqrt{(\by-\bX\bbeta)^T\bV_0(P)^{-1}(\by-\bX\bbeta)}
\right)
\right\|
\leq
\frac{|u_1(d_0)d_0|\cdot\|\bX\|}{\sqrt{\lambda_k(\bV_0(P))}}.
\]
When $\rho_1$ satisfies either (R-BND) or (R-UNB), the function $u_1(s)s$ is uniformly bounded. This means that in both cases the right hand side is bounded by a constant times $\|\bX\|$. Hence, if~$\E_P\|\bX\|<\infty$, then by dominated convergence
\[
\begin{split}
&
\frac{\partial}{\partial \bbeta}
\int
\rho_1\left(
\sqrt{(\by-\bX\bbeta)^T\bV_0(P)^{-1}(\by-\bX\bbeta)}
\right)
\,\dd P(\by,\bX)\\
&=
\int
\frac{\partial}{\partial \bbeta}
\rho_1\left(
\sqrt{(\by-\bX\bbeta)^T\bV_0(P)^{-1}(\by-\bX\bbeta)}
\right)
\,\dd P(\by,\bX)\\
&=
-\int
u_1(d_0)\bX^T\bV_0(P)^{-1}(\by-\bX\bbeta)
\,\dd P(\by,\bX).
\end{split}
\]
We conclude that, if $\E_P\|\bX\|<\infty$, the functional $\bbeta_1(P)$ satisfies the score equation
\begin{equation}
\label{eq:score psi MM Yohai}
\int
\Psi_1(\bs,\bbeta,\bV_0(P))
\,\dd P(\by,\bX)
=
\mathbf{0},
\end{equation}
where
\begin{equation}
\label{def:psi beta MM general}
\Psi_1(\bs,\bbeta,\bV)
=
u_1(d)\bX^T\bV^{-1}(\by-\bX\bbeta),
\end{equation}
with $d=d(\bs,\bbeta,\bV)$, as defined in~\eqref{def:mahalanobis}. Score equation~\eqref{eq:score psi MM Yohai} coincides with equation (3.8) in~\cite{lopuhaa1992highly} for the multivariate location-scale model. If $P$ is the empirical measure $\mathbb{P}_n$ corresponding to $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$, then~\eqref{eq:score psi MM Yohai} coincides with equation (2.6) for the multiple regression model in~\cite{yohai1987} and with equation~(8) for the linear mixed effects model~\eqref{def:linear mixed effects model Copt} in~\cite{copt&heritier2007}. Furthermore, for the empirical measure~$\mathbb{P}_n$, equation~\eqref{eq:score psi MM Yohai} is also similar to equation~(16) for the location MM-estimator in~\cite{SalibianBarrera-VanAelst-Willems2006} and to equation~(2.10) for the multivariate linear regression MM-estimator in~\cite{kudraszow-maronna2011}.
Let
\begin{equation}
\label{def:Lambda beta MM general}
\Lambda_1(\bbeta,\bV)
=
\int \Psi_1(\bs,\bbeta,\bV)\,\dd P(\bs).
\end{equation}
A vector $\bb\in\R^q$ is called a \emph{point of symmetry} of $P$, if for almost all $\bX$, it holds that
\[
P\left(\bX\bb+A\mid\bX\right)
=
P\left(\bX\bb-A\mid\bX\right)
\]
for all measurable sets $A\subset\R^k$, where for $\lambda\in\R$ and $\bb\in\R^q$, $\bX\bb+\lambda A$ denotes the set~$\{\bX\bb+\lambda \by:\by\in A\}$. If $\bb$ is a point of symmetry of $P$, it has the property that
\begin{equation}
\label{eq:point of symmetry}
\Lambda_1(\bb,\bV)=\mathbf{0}
\quad
\text{for all non-singular symmetric matrices }\bV.
\end{equation}
This will become very useful in determining asymptotic properties, such as the influence function for $\bbeta_1(P)$ and asymptotic normality of $\bbeta_{1,n}$. Note that if $P$ is such that $\by\mid\bX$ has an elliptically contoured density as defined in~\eqref{eq:elliptical} with location $\bmu=\bX\bbeta$, then the vector $\bbeta$ is a point of symmetry. At the empirical measure, the score equation is easy to check numerically; see the sketch below.
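As an informal numerical check, with the illustrative helpers of the two-stage sketch above, the empirical average of $\Psi_1(\bs_i,\bbeta_{1,n},\bV_{0,n})$ should vanish, up to the IRLS tolerance, at the fitted estimate. The code below is our own sketch, not part of the formal development.
\begin{verbatim}
# Sketch: the empirical score at the fitted MM-estimate.  With beta1 from
# mm_second_stage(...) above, the average of Psi_1(s_i, beta1, V0) over
# the sample should be numerically zero.
import numpy as np

def psi1(y, X, beta, Vinv, c1=4.685):
    """Psi_1(s, beta, V) = u_1(d) X^T V^{-1} (y - X beta), biweight u_1."""
    r = y - X @ beta
    d = np.sqrt(r @ Vinv @ r)
    u = (1.0 - (d / c1) ** 2) ** 2 if d <= c1 else 0.0
    return u * (X.T @ Vinv @ r)

def empirical_score(ys, Xs, beta, V0, c1=4.685):
    Vinv = np.linalg.inv(V0)
    return sum(psi1(y, X, beta, Vinv, c1) for y, X in zip(ys, Xs)) / len(ys)

# np.linalg.norm(empirical_score(ys, Xs, beta1, V0)) ~ 0 at the minimizer
\end{verbatim}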
\section{Local robustness: the influence function}
\label{subsec:IF MM Yohai}
For $0<\epsilon<1$ and $\bs_0=(\by_0,\bX_0)\in\R^{k}\times \R^{kq}$ fixed, consider $P_{\epsilon,\bs_0}=(1-\epsilon)P+\epsilon\delta_{\bs_0}$, as defined in~\eqref{def:perturbed P}. The \emph{influence function} of the functional~$\bbeta_1(\cdot)$ at probability measure $P$, is defined as
\begin{equation}
\label{def:IF regression Yohai}
\text{IF}(\bs_0;\bbeta_1,P)
=
\lim_{\epsilon\downarrow0}
\frac{\bbeta_1((1-\epsilon)P+\epsilon\delta_{\bs_0})-\bbeta_1(P)}{\epsilon},
\end{equation}
if this limit exists (see Hampel~\cite{hampel1974}). We intend to include both bounded and unbounded functions $\rho_1$ in Definition~\ref{def:MM-functional general}. For bounded~$\rho_1$, it follows from Theorem~\ref{th:continuity MM bounded rho} that, under suitable conditions, the functional~$\bbeta_1(P)$ is continuous. In particular, this means that
\begin{equation}
\label{eq:cont beta1}
\lim_{\epsilon\downarrow0}
\bbeta_1(P_{\epsilon,\bs_0})=\bbeta_1(P).
\end{equation}
For unbounded $\rho_1$, the functional $\bbeta_1(P)$ is not necessarily continuous, but we can still establish~\eqref{eq:cont beta1}.
\begin{lemma}
\label{lem:beta1 continuous}
Let $\rho_1:\R\to[0,\infty)$ satisfy (R-UNB). Suppose that $\E_P\|\bs\|<\infty$ and that $\bX$ has full rank with probability one. Suppose that $\bV_0(P_{\epsilon,\bs_0})$ exists and that $\bV_0(P_{\epsilon,\bs_0})\to\bV_0(P)$, as $\epsilon\downarrow0$. Suppose that~$\bbeta_{1}(P_{\epsilon,\bs_0})$ minimizes $R_{P_{\epsilon,\bs_0}}(\bbeta)$. If $\bbeta_1(P)$ is the unique minimizer of~$R_P(\bbeta)$, then
\[
\lim_{\epsilon\downarrow0}
\bbeta_1(P_{\epsilon,\bs_0})=\bbeta_1(P).
\]
\end{lemma}
\begin{proof}
Define the functions $h(\bs;\bbeta,\bV)$ and $H(\bbeta,\bV)$ as in the proof of Theorem~\ref{th:consistency MM unbounded rho1}, and let
\[
H_{\epsilon,\bs_0}(\bbeta,\bV)=\int h(\bs;\bbeta,\bV)\,\dd P_{\epsilon,\bs_0}(\bs).
\]
Let $K_M$ be the same set of pairs $(\bbeta,\bV)$ as in the proof of Theorem~\ref{th:consistency MM unbounded rho1}. Then for $(\bbeta,\bV)\in K_M$,
\[
|h(\bs;\bbeta,\bV)|
\leq
\rho_1(s_0)+(\|\by\|+M\|\bX\|)L_1^{-1/2}\rho_1'(s_0).
\]
Instead of~\eqref{eq:uniform strong law}, we now have
\[
\sup_{(\bbeta,\bV)\in K_M}
\left|
H_{\epsilon,\bs_0}(\bbeta,\bV)-H(\bbeta,\bV)
\right|
\leq
\epsilon
\left(
\sup_{(\bbeta,\bV)\in K_M} |H(\bbeta,\bV)|
+
\sup_{(\bbeta,\bV)\in K_M} h(\bs_0;\bbeta,\bV)
\right).
\]
Because $\E_P\|\bs\|<\infty$, the first supremum on the right hand side is bounded, and similarly, the second supremum is bounded by a constant depending on $\bs_0$ and $\rho_1$. Therefore,
\[
\sup_{(\bbeta,\bV)\in K_M}
\left|
H_{\epsilon,\bs_0}(\bbeta,\bV)-H(\bbeta,\bV)
\right|
\to 0,\quad
\text{as }\epsilon\downarrow0.
\]
From here on, one can mimic the proof of Theorem~\ref{th:consistency MM unbounded rho1} and show that $\bbeta_1(P_{\epsilon,\bs_0})\to\bbeta_1(P)$.
\end{proof}
Now that we have established~\eqref{eq:cont beta1} for both bounded and unbounded $\rho_1$, we have the following general result for the influence function.
\begin{theorem}
\label{th:IF MM}
Suppose that $\rho_1$ either satisfies (R-BND) and (R-CD1) or satisfies~(R-UNB), and suppose that $\E_P\|\bs\|<\infty$. Let $\bbeta_1(P_{\epsilon,\bs_0})$ and $\bbeta_1(P)$ be a solution to the minimization problem in Definition~\ref{def:MM-functional general} at $P_{\epsilon,\bs_0}$ and $P$, respectively. Suppose that $(\bbeta_1(P_{\epsilon,\bs_0}),\bV_0(P_{\epsilon,\bs_0}))\to (\bbeta_1(P),\bV_0(P))$, as $\epsilon\downarrow0$, and suppose that $\bbeta_1(P)$ is a point of symmetry of $P$. Suppose that~$\Lambda_1$, as defined in~\eqref{def:Lambda beta MM general}, has a partial derivative $\partial\Lambda_1/\partial\bbeta$ that is continuous at $(\bbeta_1(P),\bV_0(P))$ and that $\mathbf{D}_1=(\partial\Lambda_1/\partial\bbeta)(\bbeta_1(P),\bV_0(P))$ is non-singular. Then for $\bs_0\in\R^{k}\times\R^{kq}$,
\[
\mathrm{IF}(\bs_0;\bbeta_1,P)
=
-\mathbf{D}_1^{-1}\Psi_1(\bs_0,\bbeta_1(P),\bV_0(P)),
\]
where $\Psi_1$ is defined in~\eqref{def:psi beta MM general}.
\end{theorem}
\begin{proof}
Denote
\[
\begin{split}
\bxi_{\epsilon,\bs_0}
&=
(\bbeta_{1,\epsilon,\bs_0},\bV_{0,\epsilon,\bs_0})=(\bbeta_1(P_{\epsilon,\bs_0}),\bV_0(P_{\epsilon,\bs_0})),\\
\bxi_P
&=
(\bbeta_{1,P},\bV_{0,P})=(\bbeta_1(P),\bV_0(P)).
\end{split}
\]
Since $\bV_{0,\epsilon,\bs_0}\to \bV_{0,P}$ and $\E_P\|\bs\|<\infty$, it follows from Theorem~\ref{th:continuity MM bounded rho} and Lemma~\ref{lem:beta1 continuous} that there exists $\bbeta_1(P_{\epsilon,\bs_0})$ minimizing $R_{P_{\epsilon,\bs_0}}(\bbeta)$. According to Section~\ref{sec:score equations}, this means that~$\bxi_{\epsilon,\bs_0}$ satisfies the score equation~\eqref{eq:score psi MM Yohai} for the regression M-functional $\bbeta_1$ at $P_{\epsilon,\bs_0}$, that is
\[
\int \Psi_1(\bs,\bxi_{\epsilon,\bs_0})\,\dd P_{\epsilon,\bs_0}(\bs)=\mathbf{0}.
\]
We decompose as follows:
\[
\begin{split}
\mathbf{0}
&=
\int \Psi_1(\bs,\bxi_{\epsilon,\bs_0})\,\dd P_{\epsilon,\bs_0}(\bs)\\
&=
(1-\epsilon)\int \Psi_1(\bs,\bxi_{\epsilon,\bs_0})\,\dd P(\bs)+\epsilon\Psi_1(\bs_0,\bxi_{\epsilon,\bs_0})\\
&=
(1-\epsilon)\Lambda_1(\bxi_{\epsilon,\bs_0})+\epsilon\Psi_1(\bs_0,\bxi_{\epsilon,\bs_0}).
\end{split}
\]
We first determine the order of $\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}$, as $\epsilon\downarrow0$. Because $\bxi\mapsto\Psi_1(\bs_0,\bxi)$ is continuous, it follows that
\[
\Psi_1(\bs_0,\bxi_{\epsilon,\bs_0})=\Psi_1(\bs_0,\bxi_P)+o(1),
\qquad
\text{as } \epsilon\downarrow0.
\]
Furthermore, because $\Lambda_1$ has a partial derivative $\partial\Lambda_1/\partial\bbeta$ that is continuous at $\bxi_P=(\bbeta_{1,P},\bV_{0,P})$, we have that
\[
\begin{split}
\Lambda_1(\bxi_{\epsilon,\bs_0})
&=
\Lambda_1(\bbeta_{1,P},\bV_{0,\epsilon,\bs_0})
+
\frac{\partial \Lambda_1}{\partial \bbeta}
(\bbeta_{1,P},\bV_{0,\epsilon,\bs_0})
(\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P})
+
o(\|\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\|)\\
&=
\Lambda_1(\bbeta_{1,P},\bV_{0,\epsilon,\bs_0})
+
\Big(
\bD_1+o(1)
\Big)
\Big(\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\Big)
+
o(\|\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\|).
\end{split}
\]
Since $\bbeta_{1,P}$ is a point of symmetry of $P$, according to~\eqref{eq:point of symmetry} it holds that $\Lambda_1(\bbeta_{1,P},\bV_{0,\epsilon,\bs_0})=\mathbf{0}$. It follows that
\[
\mathbf{0}=
(1-\epsilon)
\bD_1
\Big(\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\Big)
+
o(\|\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\|)
+
o(\epsilon)
+
\epsilon\Psi_1(\bs_0,\bxi_{P}).
\]
Because $\bD_1$ is non-singular and $\Psi_1(\bs_0,\bxi_P)$ is fixed, this implies $\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}=O(\epsilon)$. After inserting this in the previous equality, it follows that
\[
\begin{split}
\mathbf{0}
&=
(1-\epsilon)
\bD_1
\Big(\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\Big)
+
\epsilon\Psi_1(\bs_0,\bxi_P)
+o(\epsilon)\\
&=
\bD_1
\Big(\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}\Big)
+
\epsilon\Psi_1(\bs_0,\bxi_P)
+o(\epsilon).
\end{split}
\]
We conclude
\[
\frac{\bbeta_{1,\epsilon,\bs_0}-\bbeta_{1,P}}{\epsilon}
=
-\bD_1^{-1}\Psi_1(\bs_0,\bxi_P)+o(1).
\]
This means that the limit of the left hand side exists and
\[
\text{IF}(\bs_0;\bbeta_1,P)
=
\lim_{\epsilon\downarrow0}
\frac{\bbeta_1((1-\epsilon)P+\epsilon\delta_{\bs_0})-\bbeta_1(P)}{\epsilon}
=
-\bD_1^{-1}\Psi_1(\bs_0,\bxi_P).
\]
\end{proof}
When $P$ is such that $\by\mid\bX$ has an elliptically contoured density~\eqref{eq:elliptical} we can obtain a more detailed expression for the influence function. This requires the following additional condition on the function~$\rho_1$.
\begin{quote}
\begin{itemize}
\item[(R-CD2)]
$\rho_1$ is twice continuously differentiable.
\end{itemize}
\end{quote}
We then have the following corollary.
\begin{corollary}
\label{cor:IF MM}
Suppose that $P$ is such that $\by\mid\bX$ has an elliptically contoured density~$f_{\bmu,\bSigma}$ from~\eqref{eq:elliptical}, with $(\bX\bbeta_1(P),\bV_0(P))=(\bmu,\bSigma)$. Let $\bbeta_1(P_{\epsilon,\bs_0})$ and $\bbeta_1(P)$ be a solution to the minimization problem in Definition~\ref{def:MM-functional general} at $P_{\epsilon,\bs_0}$ and $P$, respectively, and suppose that $(\bbeta_1(P_{\epsilon,\bs_0}),\bV_0(P_{\epsilon,\bs_0}))\to (\bbeta_1(P),\bV_0(P))$, as $\epsilon\downarrow0$. Suppose that $\rho_1:\R\to[0,\infty)$ either satisfies (R-BND), (R-CD1) and (R-CD2), or satisfies (R-UNB) and (R-CD2), such that~$u_1(s)$ and $u_1'(s)s$ are bounded. Let
\begin{equation}
\label{def:alpha}
\alpha_1
=
\mathbb{E}_{\mathbf{0},\bI_k}
\left[
\left(1-\frac{1}{k}\right)
\frac{\rho_1'(\|\bz\|)}{\|\bz\|}
+
\frac1k
\rho_1''(\|\bz\|)
\right],
\end{equation}
and suppose that $\mathbb{E}_{\mathbf{0},\bI_k} \left[ \rho_1''(\|\bz\|) \right]>0$. If $\bX$ has full rank with probability one, then for $\bs_0=(\by_0,\bX_0)$ we have
\[
\mathrm{IF}(\bs_0;\bbeta_1,P)
=
\frac{u_1(d_0)}{\alpha_1}
\Big(
\E\left[\bX^T\bSigma^{-1}\bX\right]
\Big)^{-1}
\bX_0^T\bSigma^{-1}(\by_0-\bX_0\bbeta_1(P)),
\]
where $d_0=d(\bs_0,\bbeta_1(P),\bSigma)$, as defined in~\eqref{def:mahalanobis}.
\end{corollary}
\begin{proof}
Consider $\partial \Lambda_{1}/\partial\bbeta$. We have
\begin{equation}
\label{eq:derivative psi1 wrt beta}
\frac{\partial\Psi_1(\bs,\bbeta,\bV)}{\partial\bbeta}
=
-
\frac{u_1'(d)}{d}
\bX^T\bV^{-1}(\by-\bX\bbeta)
(\by-\bX\bbeta)^T
\bV^{-1}\bX
-
u_1(d)\bX^T\bV^{-1}\bX,
\end{equation}
where $d=d(\bs,\bbeta,\bV)$, as defined in~\eqref{def:mahalanobis}. Since $u_1(s)$ and $u_1'(s)s=\rho_1''(s)-u_1(s)$ are bounded, similar to the proof of Lemma~B.3 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, it follows that for $(\bbeta,\bV)$ in a neighborhood of~$(\bbeta_1(P),\bV_0(P))$, it holds that
\begin{equation}
\label{eq:derivative Lambda1}
\frac{\partial\Lambda_1(\bbeta,\bV)}{\partial\bbeta}
=
\int
\frac{\partial\Psi_1(\bs,\bbeta,\bV)}{\partial \bbeta}\,\dd P(\bs),
\end{equation}
and that $\partial\Lambda_1/\partial\bbeta$ is continuous at $(\bbeta_1(P),\bV_0(P))$. Then, similar to the first part of the proof of Lemma~2 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, it can be shown that
\[
\bD_1
=
\frac{\partial\Lambda_1(\bbeta_1(P),\bV_0(P))}{\partial \bbeta}
=
-\alpha_1
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right].
\]
The corollary then follows from Theorem~\ref{th:IF MM}.
\end{proof}
Since $u_1(s)s$ is bounded, the influence function is uniformly bounded in $\by_0$, but not in~$\bX_0$. This illustrates the phenomenon in linear regression that leverage points can have a large effect on the regression estimator. The expression found in Corollary~\ref{cor:IF MM} is the same as the one found for the regression S-functional in~\cite{lopuhaa-gares-ruizgazenARXIVE2022} defined with the function $\rho_1$ (see their Corollary~5). For the multivariate location-scale model, for which $\bX=\bI_k$, Theorem~\ref{th:IF MM} coincides with Theorem~4.2 in~\cite{lopuhaa1992highly} and Corollary~\ref{cor:IF MM} coincides with the results found in~\cite{SalibianBarrera-VanAelst-Willems2006}. Furthermore, the expressions found in Theorem~\ref{th:IF MM} and Corollary~\ref{cor:IF MM} are similar to the ones obtained for the regression MM-functionals in~\cite{yohai1987} and~\cite{kudraszow-maronna2011}, respectively. For the linear mixed effects model~\eqref{def:linear mixed effects model Copt} the expression for the influence function follows from Theorem~\ref{th:IF MM}. For the special case of this model with multivariate normal errors, as considered in~\cite{copt&heritier2007}, the expression for the influence function can be obtained from Corollary~\ref{cor:IF MM}; a numerical sketch of this expression is given below.
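For illustration, the following Python sketch evaluates the expression in Corollary~\ref{cor:IF MM} for the biweight $\rho$-function, approximating the constant $\alpha_1$ from~\eqref{def:alpha} by Monte Carlo under multivariate normal errors. The helper names, the cut-off value, and the matrix $\E[\bX^T\bSigma^{-1}\bX]$ supplied by the user are all illustrative assumptions.
\begin{verbatim}
# Sketch: influence function of the corollary with biweight rho_1.
# alpha_1 is approximated by Monte Carlo for the standard k-variate normal.
import numpy as np

rng = np.random.default_rng(0)

def alpha1(k, c1=4.685, n_mc=200_000):
    d = np.linalg.norm(rng.standard_normal((n_mc, k)), axis=1)
    x = (d / c1) ** 2
    u = np.where(d <= c1, (1 - x) ** 2, 0.0)            # rho_1'(d)/d
    r2 = np.where(d <= c1, (1 - x) * (1 - 5 * x), 0.0)  # rho_1''(d)
    return np.mean((1 - 1 / k) * u + r2 / k)

def influence(y0, X0, beta, Sigma, EXtSX, c1=4.685):
    """(u_1(d0)/alpha_1) (E[X^T S^-1 X])^{-1} X0^T S^-1 (y0 - X0 beta)."""
    Sinv = np.linalg.inv(Sigma)
    r = y0 - X0 @ beta
    d0 = np.sqrt(r @ Sinv @ r)
    u = (1 - (d0 / c1) ** 2) ** 2 if d0 <= c1 else 0.0
    return (u / alpha1(len(y0), c1)) * np.linalg.solve(EXtSX, X0.T @ Sinv @ r)
\end{verbatim}
In line with the discussion above, the returned vector is bounded in $\by_0$, since the weight $u_1(d_0)$ vanishes for large residuals, but it grows with the leverage $\bX_0$.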
\begin{remark}
\label{rem:IF multivariate linear regression}
The multivariate linear regression model~\eqref{def:multivariate linear regression model} is obtained from~\eqref{def:model} by taking~$\bX=\bx^T\otimes\bI_k$ and $\bbeta=\vc(\bB^T)$. For this model, the expression in Corollary~\ref{cor:IF MM} for~$\bbeta_1(P)=\mathbf{0}$ is similar to, but slightly different from, the one in Theorem~4 in~\cite{kudraszow-maronna2011}. It seems that Theorem~4 in~\cite{kudraszow-maronna2011} contains some typos. When~$\bT_1$ is the regression MM-functional considered in~\cite{kudraszow-maronna2011}, then in our notation $\bbeta_1=\vc(\bT_1^T)$. We believe that at a distribution~$P$, which is such that $\by\mid\bx$ has an elliptically contoured density with parameters~$\bmu_0=\bB_0^T\bx$ and $\bSigma_0$, the correct expression for the influence function of $\bT_1^T$ should be
\[
\text{\rm IF}(\by_0,\bx_0;\bT_1^T,P)
=
\frac{1}{\alpha_1}
u_1\left(
\sqrt{(\by_0-\bB_0^T\bx_0)^T\bSigma_0^{-1}(\by_0-\bB_0^T\bx_0)}
\right)
(\by_0-\bB_0^T\bx_0)\bx_0^T
\left(
\mathbb{E}\left[\bx\bx^T\right]
\right)^{-1},
\]
with $\alpha_1$ defined in~\eqref{def:alpha} and $u_1(s)=\rho_1'(s)/s$.
\end{remark}
\section{Asymptotic Normality}
\label{sec:asymptotic normality}
Corollary~\ref{cor:consistency MM-estimator bounded} and Theorem~\ref{th:consistency MM unbounded rho1} provide conditions under which $\bbeta_{1,n}\to\bbeta_1(P)$, with probability one, for $\rho_1$ satisfying either (R-BND) or (R-UNB). The next theorem establishes asymptotic normality for $\bbeta_{1,n}$ defined with either a bounded or an unbounded function $\rho_1$.
\begin{theorem}
\label{th:asymp normal MM}
Suppose that $\rho_1:\R\to[0,\infty)$ either satisfies (R-BND) and (R-CD1), or satisfies (R-UNB). Suppose that $u_1$ is of bounded variation and that $\E\|\mathbf{s}\|^2<\infty$. Let $\bbeta_{1,n}$ and~$\bbeta_1(P)$ be solutions to the minimization problems in Definitions~\ref{def:MM-estimator general} and~\ref{def:MM-functional general}, respectively. Suppose that $(\bbeta_{1,n},\bV_{0,n})\to (\bbeta_1(P),\bV_0(P))$, in probability, and suppose that~$\bbeta_1(P)$ is a point of symmetry of $P$. Suppose that $\Lambda_1$, as defined in~\eqref{def:Lambda beta MM general}, has a partial derivative~$\partial\Lambda_1/\partial\bbeta$ that is continuous at $(\bbeta_1(P),\bV_0(P))$ and that $\mathbf{D}_1=(\partial\Lambda_1/\partial\bbeta)(\bbeta_1(P),\bV_0(P))$ is non-singular. Then $\sqrt{n}(\bbeta_{1,n}-\bbeta_1(P))$ is asymptotically normal with mean zero and covariance matrix
\[
\bD_1^{-1}
\E
\left[
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))^T
\right]
\bD_1^{-1}.
\]
\end{theorem}
\begin{proof}
Recall that the estimator can be written as $\bbeta_{1,n}=\bbeta_1(\mathbb{P}_n)$. This means that it satisfies~\eqref{eq:score psi MM Yohai} for $P=\mathbb{P}_n$:
\begin{equation}
\label{eq:score psi MM estimator Yohai}
\int
\Psi_1(\bs,\bbeta_{1,n},\bV_{0,n})
\,\dd \mathbb{P}_n(\bs)
=
\mathbf{0},
\end{equation}
where $\Psi_1$ is defined in~\eqref{def:psi beta MM general}. Writing $\bxi_n=(\bbeta_{1,n},\bV_{0,n})$ and $\bxi_P=(\bbeta_1(P),\bV_0(P))$, we decompose~\eqref{eq:score psi MM estimator Yohai} as follows
\begin{equation}
\label{eq:decomposition MM estimator Yohai}
\begin{split}
\mathbf{0}=
\int \Psi_1(\mathbf{s},\bxi_n)\,\dd P(\mathbf{s})
&+
\int \Psi_1(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s})\\
&+
\int
\left(
\Psi_1(\mathbf{s},\bxi_n)-\Psi_1(\mathbf{s},\bxi_P)
\right)
\,\dd (\mathbb{P}_n-P)(\mathbf{s}).
\end{split}
\end{equation}
According to~Lemma~B.8 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, the third term is of the order $o_P(1/\sqrt{n})$, whereas according to the central limit theorem the second term is of the order $O_P(1/\sqrt{n})$. This means we can write
\[
\mathbf{0}=\Lambda_1(\bxi_n)+O_P(1/\sqrt{n}),
\]
where
\[
\begin{split}
\Lambda_1(\bxi_n)
&=
\Lambda_1(\bbeta_1(P),\bV_{0,n})
+
\frac{\partial \Lambda_1}{\partial \bbeta}
(\bbeta_1(P),\bV_{0,n})
(\bbeta_{1,n}-\bbeta_1(P))
+
o_P(\|\bbeta_{1,n}-\bbeta_1(P)\|)\\
&=
\Lambda_1(\bbeta_1(P),\bV_{0,n})
+
\Big(
\bD_1+o_P(1)
\Big)
\Big(\bbeta_{1,n}-\bbeta_1(P)\Big)
+
o_P(\|\bbeta_{1,n}-\bbeta_1(P)\|).
\end{split}
\]
Since $\bbeta_1(P)$ is a point of symmetry of $P$, according to~\eqref{eq:point of symmetry} it holds that $\Lambda_1(\bbeta_1(P),\bV_{0,n})=\mathbf{0}$. It follows that
\[
\mathbf{0}=
\bD_1
(\bbeta_{1,n}-\bbeta_1(P))
+
o_P(\|\bbeta_{1,n}-\bbeta_1(P)\|)
+
O_P(1/\sqrt{n}).
\]
Because $\bD_1$ is non-singular, this implies $\bbeta_{1,n}-\bbeta_1(P)=O_P(1/\sqrt{n})$. After inserting this in~\eqref{eq:decomposition MM estimator Yohai}, it follows that
\[
\mathbf{0}
=
\bD_1
\Big(\bbeta_{1,n}-\bbeta_1(P)\Big)
+
\int \Psi_1(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s})
+o_P(1/\sqrt{n}).
\]
We conclude
\[
\sqrt{n}(\bbeta_{1,n}-\bbeta_1(P))
=
-\bD_1^{-1}
\sqrt{n}\int \Psi_1(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s})
+
o_P(1).
\]
Since $\E[\Psi_1(\mathbf{s},\bxi_P)]=\Lambda_1(\bxi_P)=\mathbf{0}$, it follows that
\[
\sqrt{n}\int \Psi_1(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s})
=
\frac{1}{\sqrt{n}}
\sum_{i=1}^{n}
\Psi_1(\mathbf{s}_i,\bxi_P)
\]
converges in distribution to a multivariate normal random vector with mean zero and covariance
\[
\E
\left[
\Psi_1(\mathbf{s},\bxi_P)\Psi_1(\mathbf{s},\bxi_P)^T
\right].
\]
This finishes the proof.
\end{proof}
In practice, the limiting covariance matrix can be estimated by its empirical sandwich-type counterpart; a sketch is given below.
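The following Python sketch is our own construction, under the biweight choice of $\rho_1$ used in the earlier sketches: it estimates the sandwich covariance $\bD_1^{-1}\E[\Psi_1\Psi_1^T]\bD_1^{-1}$ by replacing both factors with their empirical counterparts, using the derivative formula~\eqref{eq:derivative psi1 wrt beta} above.
\begin{verbatim}
# Sketch: plug-in sandwich estimate of the asymptotic covariance
# D_1^{-1} E[Psi_1 Psi_1^T] D_1^{-1}, divided by n to approximate the
# covariance of beta_{1,n} itself (biweight rho_1, cut-off c1).
import numpy as np

def sandwich_cov(ys, Xs, beta, V0, c1=4.685):
    n, q = len(ys), len(beta)
    Vinv = np.linalg.inv(V0)
    B = np.zeros((q, q))   # empirical E[Psi_1 Psi_1^T]
    D = np.zeros((q, q))   # empirical d Lambda_1 / d beta
    for y, X in zip(ys, Xs):
        r = y - X @ beta
        d = np.sqrt(r @ Vinv @ r)
        x2 = (d / c1) ** 2
        u = (1 - x2) ** 2 if d <= c1 else 0.0            # u_1(d)
        du = -4 * (1 - x2) / c1**2 if d <= c1 else 0.0   # u_1'(d)/d
        a = X.T @ Vinv @ r
        B += np.outer(u * a, u * a) / n
        D -= (du * np.outer(a, a) + u * (X.T @ Vinv @ X)) / n
    Dinv = np.linalg.inv(D)
    return Dinv @ B @ Dinv.T / n
\end{verbatim}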
When $P$ is such that $\by\mid\bX$ has an elliptically contoured density~\eqref{eq:elliptical} we can obtain a more detailed expression for the asymptotic covariance.
\begin{corollary}
\label{cor:asymp norm MM}
Suppose that $P$ is such that $\by\mid\bX$ has an elliptically contoured density~$f_{\bmu,\bSigma}$ from~\eqref{eq:elliptical} with parameters $(\bmu,\bSigma)$. Suppose that $\bbeta_1(P)$ is the unique minimizer of $R_P(\bbeta)$, such that $\bX\bbeta_1(P)=\bmu$, and suppose that $\bV_0(P)=\bSigma$. Let $\bbeta_{1,n}$ and $\bbeta_1(P)$ be solutions to the minimization problems in Definitions~\ref{def:MM-estimator general} and~\ref{def:MM-functional general}, respectively, and suppose that $(\bbeta_{1,n},\bV_{0,n})\to (\bbeta_1(P),\bV_0(P))$, in probability. Suppose that $\rho_1:\R\to[0,\infty)$ either satisfies (R-BND), (R-CD1) and (R-CD2), or satisfies (R-UNB) and (R-CD2), such that $u_1(s)$ is of bounded variation and $u_1'(s)s$ is bounded, and suppose that $\E\|\mathbf{s}\|^2<\infty$. Let~$\alpha_1$ be defined in~\eqref{def:alpha} and suppose that $\mathbb{E}_{\mathbf{0},\bI_k} \left[ \rho_1''(\|\bz\|) \right]>0$. If $\bX$ has full rank with probability one, then $\sqrt{n}(\bbeta_{1,n}-\bbeta_1(P))$ is asymptotically normal with mean zero and covariance matrix
\[
\frac{\E_{\mathbf{0},\bI_k}\left[\rho_1'(\|\bz\|)^2\right]}{k\alpha_1^2}
\left(
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right]
\right)^{-1}.
\]
\end{corollary}
\begin{proof}
Similar to the proof of Corollary~\ref{cor:IF MM} it can be shown that
\[
\bD_1
=
\frac{\partial\Lambda_1(\bbeta_1(P),\bV_0(P))}{\partial \bbeta}
=
-\alpha_1
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right].
\]
According to Theorem~\ref{th:asymp normal MM}, it follows that $\sqrt{n}(\bbeta_{1,n}-\bbeta_1(P))$ is asymptotically normal with mean zero and covariance matrix
\[
\frac1{\alpha_1^2}
\left(
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right]
\right)^{-1}
\E
\left[
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))^T
\right]
\left(
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right]
\right)^{-1},
\]
where $\Psi_1$ is defined in~\eqref{def:psi beta MM general}. Similar to the proof of Corollary~6 in~\cite{lopuhaa-gares-ruizgazenARXIVE2022}, we find that
\[
\E
\left[
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))
\Psi_1(\mathbf{s},\bbeta_1(P),\bV_0(P))^T
\right]
=
\frac{\E_{\mathbf{0},\bI_k}\left[\rho_1'(\|\bz\|)^2\right]}{k}
\mathbb{E}\left[\mathbf{X}^T\bSigma^{-1}\mathbf{X}\right].
\]
This proves the corollary.
\end{proof}
The expression found in Corollary~\ref{cor:asymp norm MM} coincides with the one for the regression S-estimator in~\cite{lopuhaa-gares-ruizgazenARXIVE2022} defined with the function $\rho_1$ (see their Corollary~6). For the multivariate location-scale model, for which $\bX=\bI_k$, Theorem~\ref{th:asymp normal MM} for unbounded $\rho_1$ coincides with Theorem~3.2 in~\cite{lopuhaa1992highly}. The expression for the asymptotic variance in Corollary~\ref{cor:asymp norm MM} for bounded $\rho_1$ coincides with the results mentioned at the beginning of Section~2.4 in~\cite{SalibianBarrera-VanAelst-Willems2006}. Furthermore, the results in Theorem~\ref{th:asymp normal MM} and Corollary~\ref{cor:asymp norm MM} are similar to the ones obtained in~\cite{yohai1987} and~\cite{kudraszow-maronna2011} for regression MM-estimators in the multiple and multivariate linear regression model, respectively. For the linear mixed effects model~\eqref{def:linear mixed effects model Copt} with $\bX_i=\bX$ being the same for each subject and assuming multivariate normal measurement errors, Theorem~1 in~\cite{copt&heritier2007} provides asymptotic normality of the regression MM-estimator. Our Theorem~\ref{th:asymp normal MM} and Corollary~\ref{cor:asymp norm MM} are extensions of this result to a larger class of linear mixed effects models, also allowing error distributions much more general than the multivariate normal.
\begin{remark}
\label{rem:Asymp norm multivariate linear regression}
Similar to Remark~\ref{rem:IF multivariate linear regression}, the expression in Corollary~\ref{cor:asymp norm MM} for the multivariate linear regression model slightly differs from the one in Proposition~7 in~\cite{kudraszow-maronna2011}. It seems that the expression in equation~(6.4) in~\cite{kudraszow-maronna2011} contains a small typo. When~$\widehat{\bB}_n$ is the regression MM-estimator considered in~\cite{kudraszow-maronna2011}, then in our notation $\bbeta_{1,n}=\vc(\widehat{\bB}_n^T)$. We believe that at a distribution $P$, which is such that $\by\mid\bx$ has an elliptically contoured density with parameters~$\bmu_0=\bB_0^T\bx$ and $\bSigma_0$, the correct expression for the asymptotic variance of $\bbeta_{1,n}=\vc(\widehat{\bB}_n^T)$ should be
\[
\frac{\E_{\mathbf{0},\bI_k}\left[\rho_1'(\|\bz\|)^2\right]}{k\alpha_1^2}
\left(
\left(
\E[\bx\bx^T]
\right)^{-1}
\otimes
\bSigma_0
\right),
\]
with $\alpha_1$ defined in~\eqref{def:alpha}.
\end{remark}
Note that for $\rho_{\text{B}}(s;c)$ and $\rho_{\text{H}}(s;c)$, as defined in~\eqref{def:biweight} and~\eqref{def:huber psi}, respectively, it holds that they both converge to $s^2/2$, as $c\to\infty$. This means that the least squares estimator can be obtained as a limiting case of the regression M-estimator defined with $\rho_1$ equal to either $\rho_{\text{B}}(s;c_1)$ or $\rho_{\text{H}}(s;c_1)$, for $c_1\to\infty$. In both cases, the scalar
\[
\lambda=
\frac{\E_{\mathbf{0},\bI_k}\left[\rho_1'(\|\bz\|)^2\right]}{k\alpha_1^2}
\to
\frac{\E_{\mathbf{0},\bI_k}\left[\|\bz\|^2\right]}{k},
\quad
\text{as }c_1\to\infty.
\]
For the multivariate normal $\E_{\mathbf{0},\bI_k}\left[\|\bz\|^2\right]=k$, so that the scalar $1/\lambda$ may serve as an index for the asymptotic efficiency relative to the least squares estimator in all models that are included in our setup.
When using the (bounded) biweight function from~\eqref{def:biweight}, Table~1 in~\cite{kudraszow-maronna2011} gives the cut-off values $c_0$ for which the initial estimators $(\bbeta_{0,n},\bV_{0,n})$ in Theorem~\ref{th:BDP MM Yohai} defined with $\rho_0(s)=\rho_{\text{B}}(s;c_0)$ have breakdown point 0.5. For the regression M-estimator $\bbeta_{1,n}$ defined with $\rho_1(s)=\rho_{\text{B}}(s;c_1)$, information on cut-off values $c_1$ and asymptotic relative efficiencies can be found at several places in the literature. Table~1 in~\cite{lopuhaa1989} provides values of $\lambda$ that correspond to the location S-estimator defined with $\rho_{\text{B}}(s;c_1)$, for varying dimensions $k=1,2,10$ and varying breakdown points $\epsilon^*=10\%,20\%,\ldots,50\%$, from which $c_1$ can be determined from
\[
\frac{6\E_{\Phi} \left[\rho_{\text{B}}(\|\bz\|;c_1) \right]}{c_1^2}
=
\epsilon^*,
\]
where the expectation is with respect to the standard multivariate normal distribution. Table~2 in~\cite{kudraszow-maronna2011} provides values of $c_1$ for the multivariate regression MM-estimator, for asymptotic efficiencies $1/\lambda=80\%,90\%,95\%$ and varying dimensions $k=1,\ldots,5,10$. Finally, Table~3.1 in~\cite{vanaelst&willems2005} gives asymptotic efficiencies that correspond to the multivariate regression S-estimator for varying dimensions $k=1,2,3,5,10,30,50$ and breakdown points $\epsilon^*=25\%,50\%$.
When using the (unbounded) Huber function from~\eqref{def:huber psi}, Proposition~\ref{prop:BDP MM unbounded} shows that $\bbeta_{1,n}$ inherits the breakdown point of the initial covariance estimator $\bV_{0,n}$, as long as all $\bX_i=\bX$ are the same and of full rank. For example, this applies to the linear mixed effects model considered in~\cite{copt&heritier2007}. For the regression M-estimator $\bbeta_{1,n}$ defined with $\rho_1(s)=\rho_{\text{H}}(s;c_1)$, information on cut-off values $c_1$ and asymptotic relative efficiencies for the multivariate location M-estimator can be found in Table~1 in~\cite{maronna1976} for varying dimensions $k=2,4,6,10$ and ``winsorizing proportions'' $w=0\%, 10\%,20\%,30\%$, from which $c_1$ can be determined via
\[
P_{\Phi}(\|\bz\|>c_1)=w.
\]
Table~1 in~\cite{lopuhaa1989} provides values of $\lambda$ that correspond to the location S-estimator defined with $\rho_{\text{H}}(s;c_1)$, for varying dimensions $k=1,2,10$ and the same values for $w$. Such calibrations are straightforward to reproduce numerically; a sketch is given below.
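For illustration, the following Python sketch (with our own helper names and illustrative constants) calibrates the biweight cut-off $c_1$ for a prescribed breakdown point via the relation displayed above, and evaluates the corresponding efficiency index $1/\lambda$ by Monte Carlo.
\begin{verbatim}
# Sketch: Monte Carlo calibration of the biweight cut-off c1 and the
# efficiency index 1/lambda for the standard k-variate normal.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
k = 3
d = np.linalg.norm(rng.standard_normal((500_000, k)), axis=1)

def rho_b(s, c):   # Tukey's biweight, constant c^2/6 beyond the cut-off
    s = np.minimum(s, c)
    return s**2 / 2 - s**4 / (2 * c**2) + s**6 / (6 * c**4)

def breakdown(c):  # 6 E[rho_B(||z||; c)] / c^2
    return 6 * np.mean(rho_b(d, c)) / c**2

c1 = brentq(lambda c: breakdown(c) - 0.25, 1.0, 50.0)  # 25% breakdown

def lam(c):        # lambda = E[rho_1'(||z||)^2] / (k alpha_1^2)
    x = (d / c) ** 2
    psi = np.where(d <= c, d * (1 - x) ** 2, 0.0)        # rho_B'
    u = np.where(d <= c, (1 - x) ** 2, 0.0)              # rho_B'(d)/d
    r2 = np.where(d <= c, (1 - x) * (1 - 5 * x), 0.0)    # rho_B''
    a1 = np.mean((1 - 1 / k) * u + r2 / k)
    return np.mean(psi**2) / (k * a1**2)

print(c1, 1 / lam(c1))   # cut-off and asymptotic relative efficiency
\end{verbatim}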
Note that one may choose any (robust) covariance estimator, but in our setup we typically think of a structured covariance estimator $\bV_n=\bV(\btheta_n)$. This means that $\bV_n$ is not necessarily affine equivariant, and similarly for $\bbeta_n$. However, it is not difficult to see that $\bbeta_n$ is regression equivariant, i.e., \[ \bbeta_n(\{(\by_i+\bX_i\bb,\bX_i),i=1,\ldots,n\})=\bbeta_n(\{(\by_i,\bX_i),i=1,\ldots,n\})+\bb. \] for all $\bb\in\R^q$. This can be seen as follows. Suppose that for observations $(\by_i,\bX_i)$, $i=1,\ldots,n$, a minimizer $\bbeta_n$ exists. For observations $(\by_i+\bX_i\bb,\bX_i)$, for $i=1,\ldots,n$, we can write \[ \sum_{i=1}^{n} \rho\left( \|\bV_n^{-1/2}(\by_i+\bX_i\bb-\bX_i\bbeta)\| \right) = \sum_{i=1}^{n} \rho\left( \|\bV_n^{-1/2}(\by_i-\bX_i(\bbeta-\bb))\| \right). \] Minimizing the right hand side over all $\bbeta\in\R^q$ is equivalent by minimizing over all $\widetilde{\bbeta}=\bbeta-\bb\in\R^q$. The latter has solution $\widetilde{\bbeta}=\bbeta_n$, so that minimizing the left hand side over all $\bbeta\in\R^q$ leads to minimizer $\bbeta_n+\bb$. The corresponding functional is defined as the vector $\bbeta(P)$ that minimizes \begin{equation}\label{def:MM functional} R_P(\bbeta) = \int \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX), \end{equation} where $\bV(P)$ is a positive definite symmetric covariance functional. Again, we typically think of a structured covariance functional $\bV(P)=\bV(\btheta(P))$, which may not be affine equivariant. Hence, $\bbeta(P)$ may not be affine equivariant, but it is regression equivariant, i.e., $\bbeta(P_{\by+\bX\bb,\bX})=\bbeta(P_{\by,\bX})+\bb$, for all $\bb\in\R^q$. Clearly, when we take $P=\mathbb{P}_n$, the empirical measure corresponding to the observations $(\by_i,\bX_i)$, $i=1,\ldots,n$, definition~\eqref{def:MM functional} reduces to~\eqref{def:MM estimator}. \subsection{Existence of the MM-functional} \label{subsec:existence MM} The conditions on the function $\rho:\R\to[0,\infty)$ differ in the various versions of MM-estimators in the literature. Yohai~\cite{yohai1987} and Tatsuoka and Tyler~\cite{tatsuoka&tyler2000} consider bounded $\rho$-functions. Similar to Huber~\cite{huber1984}, Lopuha\"a~\cite{lopuhaa1992highly} requires $\rho(s)\to\infty$, as $s\to\infty$, to include the potential use of Huber's $\psi$-function to determine the location MM-estimator and to avoid that the breakdown point of $\bbeta_n$ depends on the configuration of the sample. Copt and Heritier~\cite{copt&heritier2007} consider both possibilities, without paying attention to the robustness properties. In view of our current setting, we consider $\rho$-functions of the same type as the ones used for S-estimators. Since $\rho$ is bounded, clearly $R_P(\bbeta)$ is well defined. We then have the following theorem about the existence of the MM-functional. \begin{theorem} \label{th:existence MM} Let $\rho:\R\to[0,\infty)$ satisfy conditions (R1)-(R3), and suppose that $\bX$ has full rank with positive probability. \begin{itemize} \item[(i)] There is at least one vector $\bbeta(P)$ that minimizes $R_P(\bbeta)$. \item[(ii)] When $P$ is such that $\by\mid\bX$ has an elliptically contoured density from~\eqref{eq:elliptical} with parameters $\bmu=\bX\bbeta_0$ and $\bSigma$, and if $\bV_P=\bSigma$, then $R_P(\bbeta)\geq R_P(\bbeta_0)$, for all $\bbeta\in\R^q$. 
When $h$ in~\eqref{eq:elliptical} has a common point of decrease with $a_0-\rho$, then $R_P(\bbeta)$ is uniquely minimized by $\bbeta(P)=\bbeta_0$. \end{itemize} \end{theorem} \begin{proof} Let $0<\lambda_k\leq \lambda_1<\infty$ be the smallest and largest eigenvalue of $\bV(P)$. (i) We have that \begin{equation} \label{eq:lower bound d} \begin{split} \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} &\geq \frac{\|\by-\bX\bbeta\|}{\sqrt{\lambda_1}} \geq \frac{\|\bX\bbeta\|-\|\by\|}{\sqrt{\lambda_1}}\\ &\geq \frac{1}{\sqrt{\lambda_1}} \left( \|\bbeta\|\sqrt{\lambda_k(\bX^T\bX)}-\|\by\| \right). \end{split} \end{equation} Let $A\subset\R^{p}$ be a subset on which $\bX$ has full rank and such that $P(A)>0$. Then by dominated convergence it follows that \[ \begin{split} \lim_{\|\bbeta\|\to\infty} R_P(\bbeta) &\geq \lim_{\|\bbeta\|\to\infty} \int_A \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX)\\ &= \int_A \lim_{\|\bbeta\|\to\infty} \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX) = \sup\rho>0. \end{split} \] This means that there exists a constant $M>0$, such that \begin{equation} \label{eq:def M} R_P(\bbeta)>0=R_P(\mathbf{0}), \quad \text{for all }\|\bbeta\|>M. \end{equation} Therefore, for minimizing~$R_P(\bbeta)$ we may restrict ourselves to the set $K=\{\bbeta\in\R^q:\|\bbeta\|\leq M\}$. By dominated convergence, it follows that $R_P(\bbeta)$ is continuous and therefore it must attain at least one minimum~$\bbeta(P)$ on the compact set $K$. (ii) Write \[ R_P(\bbeta) = \E_\bX \left[ \E_{\by\mid\bX} \left[ \rho\left( \sqrt{(\by-\bX\bbeta)^T\bSigma^{-1}(\by-\bX\bbeta)} \right) \right] \right]. \] By change of variables $\by=\bSigma^{1/2}\bz+\bmu$, the inner conditional expectation can be written as \[ \int \rho\left( \|\bSigma^{-1/2}(\by-\bX\bbeta)\| \right) f_{\bmu,\bSigma}(\by)\,\dd\by = \int \rho\left( \|\bz-\bSigma^{-1/2}\bX(\bbeta-\bbeta_0)\| \right) h(\bz^T\bz)\,\dd\bz. \] Next, we apply Lemma~4 from Davies~\cite{davies1987} to the functions $\xi(d^2)=1-\rho(d)/a_0$ and $g=h$ and taking $\Lambda=\bI_k$. Since $\xi$ and $a_0-\rho$ have a common point of decrease, it follows that \[ \int \rho\left( \|\bz-\bSigma^{-1/2}\bX(\bbeta-\bbeta_0)\| \right) h(\bz^T\bz)\,\dd\bz \leq \int \rho\left( \|\bz\| \right) h(\bz^T\bz)\,\dd\bz \] with a strict inequality unless $\bSigma^{-1/2}\bX(\bbeta-\bbeta_0)=\mathbf{0}$, i.e., unless $\bbeta=\bbeta_0$, since $\bX$ has full rank with probability one. Finally, with the same change of variables $\bz=\bSigma^{-1/2}(\by-\bmu)$, the right hand side can be written as \[ \int \rho\left( \|\bz\| \right) h(\bz^T\bz)\,\dd\bz = \int \rho\left( \|\bSigma^{-1/2}(\by-\bX\bbeta_0)\| \right) f_{\bmu,\bSigma}(\by)\,\dd\by. \] After taking expectations $\E_\bx$, we conclude that $R_P(\bbeta)\leq R_P(\bbeta_0)$, with a strict inequality, unless~$\bbeta=\bbeta_0$. This proves the theorem. \end{proof} A direct consequence is the existence of the MM-estimator. Let $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$ be a sample, for which $\bX_i$ has full rank for each $i=1,\ldots,n$. Then if $\rho:\R\to[0,\infty)$ satisfies conditions (R1)-(R3), if follows from Theorem~\ref{th:existence MM}(i) with $P$ equal to the empirical measure $\mathbb{P}_n$ of the sample, that there exists at least one $\bbeta_n$ that minimizes~$R_n(\bbeta)$. 
\subsection{Breakdown point of MM estimators} \label{subsec:bdp MM} When the function $\rho$ is bounded, then the breakdown point not only depends on the shape of $\rho$ and the scaling constant, but also on the sample configuration. Depending on the configuration the (replacement) breakdown point can become arbitrarily close to 0 or $1/2$. This has also been observed by Huber~\cite{huber1984} for location M-estimators; see also the comments on page~408 in~\cite{lopuhaa1992highly}. Note that Huber~\cite{huber1984} uses addition breakdown, which results in a lower and upper bound that lie close together. For the replacement breakdown point these bounds lie further apart. \begin{theorem} \label{th:bdp MM bounded rho} Let $\mathcal{S}_n\subset \R^{p}$ be a collection of~$n$ points $\bs_i=(\by_i,\bX_i)$, $i=1,\ldots,n$, such that $\bX_i$ has full rank for all $i=1,\ldots,n$. Let $\rho:\R\to[0,\infty)$ satisfy (R1)-(R3) and let $\bV_n$ be a positive definite symmetric covariance estimator, with $\epsilon_n^*(\bV_n,\mathcal{S}_n)\leq \lfloor(n+1)/2\rfloor/n$. Let $\bbeta_n$ be any mimimizer of of $R_n(\bbeta)$, such that \[ \sum_{i=1}^n \rho(d(\bs_i,\bbeta_n,\bV_n))=nb_0. \] Then \[ \epsilon_n^*(\bbeta_n,\mathcal{S}_n) \geq \min\left( \epsilon_n^*(\bV_n,\mathcal{S}_n),\frac{\lceil n(1-r)/2\rceil}{n} \right), \] where $r=b_0/a_0$. When $\lfloor n(1-r)\rfloor\leq n\epsilon_n^*(\bV_n,\mathcal{S}_n)-2$, then \[ \epsilon_n^*(\bbeta_n,\mathcal{S}_n) \leq \frac{\lfloor n(1-r)\rfloor+1}{n} \] \end{theorem} \begin{proof} Let $\mathcal{S}'_m$ be a corrupted collection obtained from the original collection $\mathcal{S}_n$ by replacing at most \[ m<\min\left(n\epsilon_n^*(\bV_n,\mathcal{S}_n),n(1-r)/2\right) \leq \min\left(n\epsilon_n^*(\bV_n,\mathcal{S}_n)-1,\lceil n(1-r)/2\rceil-1\right) \] points. Then $\bV_n(\mathcal{S}'_m)$ does not break down and we must show that $\|\bbeta_n(\mathcal{S}'_m)\|$ stays bounded. Let $R_n^*(\bbeta)$ denote the function from~\eqref{def:MM estimator} corresponding to the corrupted collection and write $\bV_m'=\bV_n(\mathcal{S}_m')$ for the covariance estimate based on the corrupted collection. Since $\mathcal{S}_m'$ has at least one point from the original sample, for which $\bX_i$ has full rank, according to Theorem~\ref{th:existence MM}, there exists a minimizer of $R_n^*(\bbeta)$. Since $\bV_m'$ does not break down, there exists $0<\lambda_k\leq\lambda_1<\infty$, such that all its eigenvalues are between $\lambda_k$ and $\lambda_1$. Furthermore, since $m<n(1-r)/2$, there exists $0<\delta<1$, such that $2m+(n-m)\delta<n(1-r)=n(1-b_0/a_0)$, and let $c>0$ be such that $\rho(d)>a_0(1-\delta)$, for $|d|>c$. Now, let $\bb$ be any vector such that \begin{equation}\label{eq:prop bb} \|\by_i-\bX_i\bb\|\geq c\sqrt{\lambda_1}, \text{ for all }(\by_i,\bX_i)\in \mathcal{S}_n. 
\end{equation} Then, \[ \begin{split} \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m')) &= \sum_{\mathcal{S}_m'\cap\mathcal{S}_n}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m')) + \sum_{\mathcal{S}_m'\setminus\mathcal{S}_n}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m'))\\ &\leq \sum_{\mathcal{S}_n}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m'))+ma_0\\ &= \textcolor[rgb]{1.00,0.00,0.00}{nb_0}+ma_0 \end{split} \] \textcolor[rgb]{1.00,0.00,0.00}{\textbf{NOT true, since we have $\bV_m'$ instead of $\bV_n$}}\\ and \[ \begin{split} \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb,\bV_m')) &= \sum_{\mathcal{S}_m'\cap\mathcal{S}_n}\rho(d(\bs_i,\bb,\bV_m')) + \sum_{\mathcal{S}_m'\setminus\mathcal{S}_n}\rho(d(\bs_i,\bb,\bV_m'))\\ &\geq \sum_{\mathcal{S}_m'\cap\mathcal{S}_n} \rho\left(\frac{\|\by_i-\bX_i\bb\|}{\sqrt{\lambda_1}}\right)\\ &\geq (n-m)a_0(1-\delta). \end{split} \] We conclude that \[ \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m')) \leq nb_0+ma_0 < (n-m)a_0(1-\delta) \leq \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb,\bV_m')). \] This means that any $\bb$ that minimizes $R_n^*(\bbeta)$ cannot satisfy~\eqref{eq:prop bb} and therefore must satisfy $\|\by_i-\bX_i\bb\|\leq c\sqrt{\lambda_1}$ for some $(\by_i,\bX_i)\in \mathcal{S}_n$. This means that $\|\bbeta_n(\mathcal{S}'_m)\|$ stays bounded. On the other hand, let $\mathcal{S}'_m$ be a corrupted collection obtained from the original collection $\mathcal{S}_n$ by replacing $m=\lfloor n(1-r)\rfloor+1$ points. Note that $m\leq n\epsilon_n^*(\bV_m',\mathcal{S}_n)-1$, so that $\bV_m'$ does not break down. Hence, there exists $0<\lambda_k\leq\lambda_1<\infty$, such that all eigenvalues of $\bV_m'$ are between $\lambda_k$ and~$\lambda_1$. Furthermore, because $m>n(1-r)$ there exists a $\delta>0$, such that $m-m\delta>n(1-r)=n(1-b_0/a_0)$. Let $c>0$ be such that $\rho(d)\geq a_0(1-\delta)$ for $|d|\geq c$. Let $\bb^*$ be any $q$-vector and let $(\by^*,\bX^*)$ be such that $\bX^*$ is of full rank and $\by^*=\bX^*\bb^*$. Assume that all contaminated points are equal to $(\by^*,\bX^*)$. Then for all $\bb\in\R^q$ such that $\|\by^*-\bX^*\bb\|\geq c\sqrt{\lambda_1}$, we obtain \[ \begin{split} \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb,\bV_m')) &= \sum_{\mathcal{S}_m'\cap\mathcal{S}_n}\rho(d(\bs_i,\bb,\bV_m')) + \sum_{\mathcal{S}_m'\setminus\mathcal{S}_n}\rho(d(\bs_i,\bb,\bV_m'))\\ &\geq \sum_{\mathcal{S}_n}\rho(d(\bs_i,\bbeta_n(\mathcal{S}_n),\bV_m'))-ma_0 + m \rho\left(\|\by^*-\bX^*\bb\|/\sqrt{\lambda_1}\right)\\ &\geq nb_0-ma_0 + ma_0(1-\delta) = nb_0-ma_0\delta, \end{split} \] and \[ \begin{split} \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb^*,\bV_m')) &= \sum_{\mathcal{S}_m'\cap\mathcal{S}_n}\rho(d(\bs_i,\bb^*,\bV_m')) + \sum_{\mathcal{S}_m'\setminus\mathcal{S}_n}\rho(d(\bs_i,\bb^*,\bV_m'))\\ &\leq (n-m)a_0+m\rho(0) = (n-m)a_0. \end{split} \] We conclude that \[ \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb^*,\bV_m')) \leq (n-m)a_0<nb_0-ma_0\delta \leq \sum_{\mathcal{S}_m'}\rho(d(\bs_i,\bb,\bV_m')). \] This means that $\|\by^*-\bX^*\bbeta_n(\mathcal{S}_m')\|<c\sqrt{\lambda_1}$, which means that $\|\bX^*(\bb^*-\bbeta_n(\mathcal{S}_m'))\|<c\sqrt{\lambda_1}$. Since~$\bX^*$ has full rank, this means that by sending $\bb^*$ to infinity, $\bbeta_n$ breaks down. \end{proof} Both inequalities in Theorem~\ref{th:bdp MM bounded rho} are obtained for situations in which the covariance estimate~$\bV_n$ does not break down. For types of contamination for which $\bV_n$ does break down, it is hard to say anything about whether $\bbeta_n$ breaks down or not. 
Even if it is know that breakdown is the result of $\lambda_k(\bV_n(\mathcal{S}_m'))\downarrow 0$ and/or $\lambda_1(\bV_n(\mathcal{S}_m'))\to\infty$, it is not clear what sort of contamination causes breakdown. The breakdown point of $\bbeta_n$ depends on the configuration of the sample. There are two extreme cases. Suppose that all points in the sample are equal to the same point $(\by_0,\bX_0)$, which is such that there exists a $\bbeta_0$, with $\by_0=\bX_0\bbeta_0$. Then clearly $R_P(\bbeta)$ is minimized by $\bbeta_n=\bbeta_0$ and for any $\bV_n$ one has \[ \sum_{i=1}^n \rho(d(\bs_i,\bbeta_n,\bV_n)) = \sum_{i=1}^n \rho(0) = 0. \] In that case $r=0$, so that \[ \epsilon_n^*(\bbeta_n,\mathcal{S}_n) \geq \min\left( \epsilon_n^*(\bV_n,\mathcal{S}_n),\frac{\lceil n/2\rceil}{n} \right). \] \textcolor[rgb]{1.00,0.00,0.00}{\textbf{This seems to contradict the maximal breakdown point for regression equivariant estimators?}}\\ On the other hand, suppose that all design matrices are equal to the same $\bX$ and that all $\by$-observations are far apart, i.e., \[ \min_{i\ne j} \left| d(\bs_i,\bb,\bV_n)-d(\bs_j,\bb,\bV_n) \right| > 4c. \] Let $\bbeta_n$ be the minimizer of $R_P(\bbeta)$. Because the Mahalanobis distances are so far apart, we have $\rho(d(\bs_i,\bbeta_n,\bV_n))=a_0$, for at least $n-1$ points. That means that \[ \sum_{i=1}^n \rho(d(\bs_i,\bbeta_n,\bV_n))=nb_0\geq (n-1)a_0, \] so that $r\geq 1-1/n$, or $n(1-r)\leq 1$. If $n\epsilon_n^*(\bV_n,\mathcal{S}_n)\geq 3$, this implies that the breakdown point \[ \epsilon_n^*(\bbeta_n,\mathcal{S}_n)\leq 1/n. \] \subsection{Continuity and consistency} \label{subsec:continuity MM} \begin{theorem} \label{th:continuity MM} Let $P_t$, $t\geq0$ be a sequence of probability measures on $\R^p$ that converges weakly to~$P$, as $t\to\infty$. Let that $\rho:\R\to[0,\infty)$ satisfies (R1)-(R3). Suppose that $\bV(P_t)$ exists and satisfies \begin{equation} \label{eq:conv Vt} \lim_{t\to\infty}\bV(P_t)=\bV(P). \end{equation} Then for $t$ sufficiently large, there is at least one $\bbeta(P_t)$ that minimizes $R_{P_t}(\bbeta)$. If $\bbeta(P)$ is the unique minimizer of $R_P(\bbeta)$, then for any sequence minimizers $\bbeta(P_t)$, $t\geq 0$, it holds that \[ \lim_{t\to\infty}\bbeta(P_t)=\bbeta(P). \] \end{theorem} \begin{proof} For the sake of brevity, let us write $\bbeta_t=\bbeta(P_t)$, $\bV_t=\bV(P_t)$, and $R_t=R_{P_t}$. First note that similar to Lemma~\ref{lem:billingsley structured}, one can show that \begin{equation} \label{eq:prop Lemma 2} \lim_{t\to\infty} \int \rho\left( d(\bs,\bbeta_t,\bV_t) \right) \,\dd P_t(\bs) = \int \rho\left( d(\bs,\bbeta_L,\bV_L) \right) \,\dd P(\bs) \end{equation} for any sequence, $(\bbeta_t,\bV_t)\to(\bbeta_L,\bV_L)$, where $d^2(\bs,\bbeta,\bV)=(\by-\bX\bbeta)^T\bV^{-1}(\by-\bX\bbeta)$. This yields that, for every $\bbeta\in\R^q$ fixed, it holds that \[ R_t(\bbeta)\to R_P(\bbeta), \] as $t\to\infty$. Choose $M>0$ as in~\eqref{eq:def M}. Then it must hold that, for $t$ sufficiently large $\|\bbeta_t\|\leq M$, since otherwise $R_t(\bbeta_t)>0=R_t(\mathbf{0})$. We conclude that for minimizing $R_t(\bbeta)$, we can restrict ourselves to the compact set $K=\{\bbeta\in\R^q:\|\bbeta\|\leq M\}$. Furthermore, as in the proof of Theorem~\ref{th:existence MM}, the function $\bbeta\mapsto R_t(\bbeta)$ is continuous, and must therefore attain a minimum $\bbeta_t$ on the set $K$. Next, suppose that $\bbeta(P)$ is the unique minimizer of $R_P(\bbeta)$. 
Because $\bbeta(P)$ is regression equivariant, we may assume that $\bbeta(P)=\mathbf{0}$. From~\eqref{eq:conv Vt} it follows that for $t$ sufficiently large, \begin{equation} \label{eq:bounds Vt} 0<\lambda_k(\bV(P))/4\leq \lambda_k(\bV_t)\leq\lambda_1(\bV_t)\leq 4\lambda_1(\bV(P))<\infty. \end{equation} Now, consider a sequence $\{(\bbeta_t,\bV_t)\}$, such that $\|\bbeta_t\|\leq M$ and $\bV_t$ satisfies~\eqref{eq:bounds Vt}. Then the sequence $\{(\bbeta_t,\bV_t)\}$ lies in a compact set, so it has a convergent subsequence $(\bbeta_{t_j},\bV_{t_j})\to(\bbeta_L,\bV(P))$. According to~\eqref{eq:prop Lemma 2}, it follows that \[ \lim_{j\to\infty} R_{t_j}(\bbeta_{t_j}) = \int \rho\left( d(\bs,\bbeta_{t_j},\bV_{t_j}) \right) \,\dd P_{t_j}(\bs) = \int \rho\left( d(\bs,\bbeta_L,\bV(P)) \right) \,\dd P(\bs) = R_P(\bbeta_L). \] Now, suppose that $\bbeta_L\neq \mathbf{0}$. Then, since $R_P(\bbeta)$ is uniquely minimized at $\bbeta=\mathbf{0}$, this means that \[ R_{t_j}(\bbeta_{t_j})>R_P(\mathbf{0})=0=R_{t_j}(\mathbf{0}), \] which would means that $\bbeta_{t_j}$ is not the minimizer of $R_{t_j}(\bbeta)$. We conclude that $\bbeta_L=\mathbf{0}$, which proves the theorem. \end{proof} A direct corollary is consistency of the MM-estimator for $\bbeta$. \begin{corollary} \label{cor:consistency MM-estimator} Let that $\rho:\R\to[0,\infty)$ satisfies (R1)-(R3). Suppose that $\bV_n=\bV(\mathbb{P}_n)$ exists and suppose that $\bV(\cdot)$ is continuous at $P$. Then for $n$ sufficiently large, there is at least one~$\bbeta_n$ that minimizes $R_n(\bbeta)$. If $\bbeta(P)$ is the unique minimizer of $R_P(\bbeta)$, then for any sequence minimizers~$\bbeta_n$, $n=1,2\ldots$, it holds that \[ \lim_{n\to\infty}\bbeta_n=\bbeta(P), \] with probability one. \end{corollary} \begin{proof} We apply Theorem~\ref{th:continuity MM} to the sequence $\mathbb{P}_n$, $n=1,2,\ldots$, of probability measures, where~$\mathbb{P}_n$ is the empirical measure corresponding to $(\mathbf{y}_1,\mathbf{X}_1),\ldots,(\mathbf{y}_n,\mathbf{X}_n)$. According to the Portmanteau Theorem (e.g., see Theorem~2.1 in~\cite{billingsley1968}), $\mathbb{P}_n$ converges weakly to $P$, with probability one. The corollary then follows from Theorem~\ref{th:continuity MM}. \end{proof} \subsection{Influence function} \label{subsec:IF MM} Recall the definition of the MM-functional defined in~\eqref{def:MM functional} as the vector $\bbeta(P)$ that minimizes \[ R_P(\bbeta) = \int \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX), \] where $\bV(P)$ is a positive definite symmetric covariance functional. Then $\bbeta(P)$ is also a solution to \[ \frac{\dd R_P(\bbeta)}{\dd \bbeta}=\mathbf{0}. \] If we impose condition (R4) and require that $\E_P\|\bX\|^2<\infty$, then we may interchange differentiation and integration with respect to $P$, so that \[ \int \frac{\dd }{\dd \bbeta} \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) \,\dd P(\by,\bX) = \mathbf{0}, \] or equivalently \begin{equation} \label{eq:score psi MM} \int \Psi_{\text{MM}}(\bs,\bbeta,\bV(P)) \,\dd P(\by,\bX) = \mathbf{0}, \end{equation} where \begin{equation} \label{def:psi beta MM} \Psi_{\text{MM}}(\bs,\bbeta,\bV) = u(d)\bX^T\bV^{-1}(\by-\bX\bbeta), \end{equation} with $d^2=(\by-\bX\bbeta)^T\bV^{-1}(\by-\bX\bbeta)$. For $0<h<1$ and $\bs=(\by,\bX)\in\R^{p}$ fixed, define the perturbed probability measure \[ P_{h,\bs}=(1-h)P+h\delta_{\bs}, \] where $\delta_{\bs}$ denotes the Dirac measure at $\bs\in\R^{p}$. 
The \emph{influence function} of the functional $\bbeta(\cdot)$ at the probability measure $P$ is defined as \begin{equation} \label{def:IF regression} \text{IF}(\bs;\bbeta,P) = \lim_{h\downarrow0} \frac{\bbeta((1-h)P+h\delta_{\bs})-\bbeta(P)}{h}, \end{equation} if this limit exists. \begin{theorem} \label{th:IF MM} Suppose that $\rho:\R\to[0,\infty)$ satisfies (R1)-(R4) and that $\bV(P)$ satisfies~\eqref{eq:conv Vt}. Suppose that $\E_P\|\bX\|^2<\infty$ and that $\bbeta(P)$ uniquely minimizes $R_P(\bbeta)$. Let $\Psi_{\text{MM}}$ be defined in~\eqref{def:psi beta MM} and for $\bxi=(\bbeta,\bV)$, let \begin{equation} \label{def:Lambda beta MM} \Lambda_{\text{MM}}(\bxi) = \int \Psi_{\text{MM}}(\bs,\bbeta,\bV)\,\dd P(\bs). \end{equation} Suppose that $\Lambda_{\text{MM}}$ has a partial derivative $\partial\Lambda_{\text{MM}}/\partial\bbeta$ that is continuous at $\bxi(P)=(\bbeta(P),\bV(P))$ and suppose that $\mathbf{D}_{\text{MM}}(P)=(\partial\Lambda_{\text{MM}}/\partial\bbeta)(\bxi(P))$ is non-singular. Then for $\bs_0\in\R^p$, \[ \mathrm{IF}(\bs_0;\bbeta,P) = -\mathbf{D}_{\text{MM}}(P)^{-1}\Psi_{\text{MM}}(\bs_0,\bbeta(P),\bV(P)). \] \end{theorem} \begin{proof} First consider the regression functional at $P_{h,\bs_0}$. From the Portmanteau theorem~\cite[Theorem 2.1]{billingsley1968} it can easily be seen that $P_{h,\bs_0}\to P$ weakly, as~$h\downarrow0$. Therefore, from~\eqref{eq:conv Vt} and Theorem~\ref{th:continuity MM}, it follows that for~$h$ sufficiently small, there exists at least one~$\bbeta(P_{h,\bs_0})$ that minimizes~$R_{P_{h,\bs_0}}(\bbeta)$, and that \[ (\bbeta(P_{h,\bs_0}),\bV(P_{h,\bs_0})) \to (\bbeta(P),\bV(P)), \quad \text{as } h\downarrow0. \] Denote $\bxi_{h,\bs_0}=(\bbeta_{h,\bs_0},\bV_{h,\bs_0})=(\bbeta(P_{h,\bs_0}),\bV(P_{h,\bs_0}))$ and $\bxi_P=(\bbeta_P,\bV_P)=(\bbeta(P),\bV(P))$. Then $\bxi_{h,\bs_0}$ satisfies the score equation~\eqref{eq:score psi MM} for the MM-functional at $P_{h,\bs_0}$, that is \[ \int \Psi_{\text{MM}}(\bs,\bxi_{h,\bs_0})\,\dd P_{h,\bs_0}(\bs)=\mathbf{0}. \] We decompose as follows: \[ \begin{split} \mathbf{0} &= \int \Psi_{\text{MM}}(\bs,\bxi_{h,\bs_0})\,\dd P_{h,\bs_0}(\bs)\\ &= (1-h)\int \Psi_{\text{MM}}(\bs,\bxi_{h,\bs_0})\,\dd P(\bs)+h\Psi_{\text{MM}}(\bs_0,\bxi_{h,\bs_0})\\ &= (1-h)\Lambda_{\text{MM}}(\bxi_{h,\bs_0})+h\Psi_{\text{MM}}(\bs_0,\bxi_{h,\bs_0}). \end{split} \] We first determine the order of $\bbeta_{h,\bs_0}-\bbeta_P$, as $h\downarrow0$. Because $\bxi\mapsto\Psi_{\text{MM}}(\bs_0,\bxi)$ is continuous, it follows that \[ \Psi_{\text{MM}}(\bs_0,\bxi_{h,\bs_0})=\Psi_{\text{MM}}(\bs_0,\bxi_P)+o(1), \qquad \text{as } h\downarrow0. \] Furthermore, because $\Lambda_{\text{MM}}$ has a partial derivative $\partial\Lambda_{\text{MM}}/\partial\bbeta$ that is continuous at $\bxi_P=(\bbeta_P,\bV_P)$, we have that \[ \begin{split} \Lambda_{\text{MM}}(\bxi_{h,\bs_0}) &= \Lambda_{\text{MM}}(\bxi_P) + \frac{\partial \Lambda_{\text{MM}}}{\partial \bbeta} (\bbeta_P,\bV_{h,\bs_0}) (\bbeta_{h,\bs_0}-\bbeta_P) + o(\|\bbeta_{h,\bs_0}-\bbeta_P\|)\\ &= \Lambda_{\text{MM}}(\bxi_P) + \Big( \bD_{\text{MM}}(P)+o(1) \Big) \Big(\bbeta_{h,\bs_0}-\bbeta_P\Big) + o(\|\bbeta_{h,\bs_0}-\bbeta_P\|). \end{split} \] Since $\bbeta_P$ is the MM-functional at $P$, it is a zero of the corresponding score equation, i.e., $\Lambda_{\text{MM}}(\bxi_P)=\mathbf{0}$. It follows that \[ \mathbf{0}= (1-h) \bD_{\text{MM}}(P) \Big(\bbeta_{h,\bs_0}-\bbeta_P\Big) + o(\|\bbeta_{h,\bs_0}-\bbeta_P\|) + o(h) + h\Psi_{\text{MM}}(\bs_0,\bxi_P). 
\] Because $\bD_{\text{MM}}(P)$ is non-singular and $\Psi_{\text{MM}}(\bs_0,\bxi_P)$ is fixed, this implies $\bbeta_{h,\bs_0}-\bbeta_P=O(h)$. After inserting this in the previous equality, it follows that \[ \begin{split} \mathbf{0} &= (1-h) \bD_{\text{MM}}(P) \Big(\bbeta_{h,\bs_0}-\bbeta_P\Big) + h\Psi_{\text{MM}}(\bs_0,\bxi_P) +o(h)\\ &= \bD_{\text{MM}}(P) \Big(\bbeta_{h,\bs_0}-\bbeta_P\Big) + h\Psi_{\text{MM}}(\bs_0,\bxi_P) +o(h). \end{split} \] We conclude \[ \frac{\bbeta_{h,\bs_0}-\bbeta_P}{h} = -\bD_{\text{MM}}(P)^{-1}\Psi_{\text{MM}}(\bs_0,\bxi_P)+o(1). \] This means that the limit of the left hand side exists and \[ \text{IF}(\bs_0;\bbeta,P) = \lim_{h\downarrow0} \frac{\bbeta((1-h)P+h\delta_{\bs_0})-\bbeta_P}{h} = -\bD_{\text{MM}}(P)^{-1}\Psi_{\text{MM}}(\bs_0,\bxi_P). \] \end{proof} \subsection{Asymptotic normality} \label{subsec:asymp norm MM} Recall that the MM-estimator $\bbeta_n$ is defined as the vector that minimizes \[ R_n(\bbeta) = \frac{1}{n} \sum_{i=1}^{n} \rho\left( \sqrt{(\by_i-\bX_i\bbeta)^T\bV_n^{-1}(\by_i-\bX_i\bbeta)} \right). \] Then $\bbeta_n$ is a solution of the following score equation \begin{equation} \label{eq:score psi MM estimator} \int \Psi_{\text{MM}}(\bs,\bbeta,\bV_n) \,\dd \mathbb{P}_n(\bs) = \mathbf{0}, \end{equation} where $\Psi_{\text{MM}}$ is defined in~\eqref{def:psi beta MM}. \begin{theorem} \label{th:asymp normal MM} Suppose that $\rho:\R\to[0,\infty)$ satisfies (R1)-(R4) and that $\bV(P)$ satisfies~\eqref{eq:conv Vt}. Let $\bbeta_n=\bbeta(\mathbb{P}_n)$ be a minimizer of $R_n(\bbeta)$. Suppose that $\E\|\mathbf{s}\|^4<\infty$ and that~$\bbeta(P)$ is the unique minimizer of~$R_P(\bbeta)$. Let $\Psi_{\text{MM}}$ be defined in~\eqref{def:psi beta MM} and $\Lambda_{\text{MM}}$ by~\eqref{def:Lambda beta MM}. Suppose that $\Lambda_{\text{MM}}$ has a partial derivative $\partial\Lambda_{\text{MM}}/\partial\bbeta$ that is continuous at $\bxi(P)=(\bbeta(P),\bV(P))$ and suppose that $\mathbf{D}_{\text{MM}}(P)=(\partial\Lambda_{\text{MM}}/\partial\bbeta)(\bxi(P))$ is non-singular. Then $\sqrt{n}(\bbeta_n-\bbeta(P))$ is asymptotically normal with mean zero and covariance matrix \[ \bD_{\text{MM}}(P)^{-1} \E \left[ \Psi_{\text{MM}}(\mathbf{s},\bxi(P)) \Psi_{\text{MM}}(\mathbf{s},\bxi(P))^T \right] \bD_{\text{MM}}(P)^{-1}. \] \end{theorem} \begin{proof} Writing $\bxi_P=\bxi(P)$, we decompose~\eqref{eq:score psi MM estimator} as follows: \begin{equation} \label{eq:decomposition MM estimator} \begin{split} \mathbf{0}= \int \Psi_{\text{MM}}(\mathbf{s},\bxi_n)\,\dd P(\mathbf{s}) &+ \int \Psi_{\text{MM}}(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s})\\ &+ \int \left( \Psi_{\text{MM}}(\mathbf{s},\bxi_n)-\Psi_{\text{MM}}(\mathbf{s},\bxi_P) \right) \,\dd (\mathbb{P}_n-P)(\mathbf{s}). \end{split} \end{equation} According to~\eqref{eq:stoch equi vector}, the third term is of the order $o_P(1/\sqrt{n})$, whereas according to the central limit theorem the second term is of the order $O_P(1/\sqrt{n})$. This means we can write \[ \mathbf{0}=\Lambda_{\text{MM}}(\bxi_n)+O_P(1/\sqrt{n}), \] where \[ \begin{split} \Lambda_{\text{MM}}(\bxi_n) &= \Lambda_{\text{MM}}(\bxi_P) + \frac{\partial \Lambda_{\text{MM}}}{\partial \bbeta} (\bbeta_P,\bV_n) (\bbeta_n-\bbeta_P) + o(\|\bbeta_n-\bbeta_P\|)\\ &= \Lambda_{\text{MM}}(\bxi_P) + \Big( \bD_{\text{MM}}(P)+o_P(1) \Big) \Big(\bbeta_n-\bbeta_P\Big) + o(\|\bbeta_n-\bbeta_P\|). \end{split} \] Since $\bbeta_P$ is the MM-functional at $P$, it is a zero of the corresponding score equation, i.e., $\Lambda_{\text{MM}}(\bxi_P)=\mathbf{0}$. 
It follows that \[ \mathbf{0}= \bD_{\text{MM}}(P) (\bbeta_n-\bbeta_P) + o(\|\bbeta_n-\bbeta_P\|) + O_P(1/\sqrt{n}). \] Because $\bD_{\text{MM}}(P)$ is non-singular, this implies $\bbeta_n-\bbeta_P=O_P(1/\sqrt{n})$. After inserting this in~\eqref{eq:decomposition MM estimator}, it follows that \[ \mathbf{0} = \bD_{\text{MM}}(P) (\bbeta_n-\bbeta_P) + \int \Psi_{\text{MM}}(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s}) +o_P(1/\sqrt{n}). \] We conclude \[ \sqrt{n}(\bbeta_n-\bbeta_P) = -\bD_{\text{MM}}(P)^{-1} \sqrt{n}\int \Psi_{\text{MM}}(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s}) + o_P(1). \] Since $\E_P\Psi_{\text{MM}}(\mathbf{s},\bxi_P)=\Lambda_{\text{MM}}(\bxi_P)=\mathbf{0}$, it follows that \[ \sqrt{n}\int \Psi_{\text{MM}}(\mathbf{s},\bxi_P)\,\dd (\mathbb{P}_n-P)(\mathbf{s}) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \Psi_{\text{MM}}(\mathbf{s}_i,\bxi_P) \] converges in distribution to a multivariate normal random vector with mean zero and covariance \[ \E_P \left[ \Psi_{\text{MM}}(\mathbf{s},\bxi_P)\Psi_{\text{MM}}(\mathbf{s},\bxi_P)^T \right], \] which finishes the proof. \end{proof} \newpage \section{MM estimator with unbounded $\rho$} \begin{definition} Let $\bV_n$ be a positive definite symmetric (structured) covariance matrix. Define $\bbeta_n$ as the vector that minimizes \begin{equation} \label{def:MM estimator unbounded} R_n(\bbeta) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \rho\left( \sqrt{(\by_i-\bX_i\bbeta)^T\bV_n^{-1}(\by_i-\bX_i\bbeta)} \right) - \rho\left( \sqrt{\by_i^T\bV_n^{-1}\by_i} \right) \right\}. \end{equation} \end{definition} The idea is that $\bV_n$ is a robust covariance estimator with a high breakdown point. The $\rho$-function can then be tuned such that the regression estimator $\bbeta_n$ inherits the high breakdown point from $\bV_n$ and at the same time achieves high efficiency at the normal distribution. Note that one may choose any (robust) covariance estimator, but in our setup we typically think of a structured covariance estimator $\bV_n=\bV(\btheta_n)$. This means that $\bV_n$ is not necessarily affine equivariant, and similarly for $\bbeta_n$. However, it is not difficult to see that $\bbeta_n$ is regression equivariant, i.e., \[ \bbeta_n(\{(\by_i+\bX_i\bb,\bX_i),i=1,\ldots,n\})=\bbeta_n(\{(\by_i,\bX_i),i=1,\ldots,n\})+\bb, \] for all $\bb\in\R^q$. This can be seen as follows. Suppose that for observations $(\by_i,\bX_i)$, $i=1,\ldots,n$, a minimizer $\bbeta_n$ exists. Consider observations $(\by_i+\bX_i\bb,\bX_i)$, for $i=1,\ldots,n$. Then \[ \sum_{i=1}^{n} \rho\left( \|\bV_n^{-1/2}(\by_i+\bX_i\bb-\bX_i\bbeta)\| \right) = \sum_{i=1}^{n} \rho\left( \|\bV_n^{-1/2}(\by_i-\bX_i(\bbeta-\bb))\| \right). \] Minimizing the right hand side over all $\bbeta\in\R^q$ is equivalent to minimizing over all $\widetilde{\bbeta}=\bbeta-\bb\in\R^q$. The latter has solution $\widetilde{\bbeta}=\bbeta_n$, so that minimizing the left hand side over all $\bbeta\in\R^q$ leads to the minimizer $\bbeta_n+\bb$. The corresponding functional is defined as the vector $\bbeta_P$ that minimizes \begin{equation} \label{def:MM functional unbounded} R_P(\bbeta) = \int \left\{ \rho\left( \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} \right) - \rho\left( \sqrt{\by^T\bV(P)^{-1}\by} \right) \right\} \,\dd P(\by,\bX), \end{equation} where $\bV(P)$ is a positive definite symmetric covariance functional. Again, we typically think of a structured covariance functional $\bV(P)=\bV(\btheta(P))$, which may not be affine equivariant. 
Hence, $\bbeta_P$ may not be affine equivariant, but it is regression equivariant, i.e., $\bbeta(P_{\by+\bX\bb,\bX})=\bbeta(P_{\by,\bX})+\bb$, for all $\bb\in\R^q$. Clearly, when we take $P=\mathbb{P}_n$, the empirical measure corresponding to the observations $(\by_i,\bX_i)$, $i=1,\ldots,n$, definition~\eqref{def:MM functional unbounded} reduces to~\eqref{def:MM estimator unbounded}. \subsection{Existence} Lopuha\"a~\cite{lopuhaa1992highly} requires the following condition on the function $\rho:\R\to[0,\infty)$. \begin{quote} \begin{itemize} \item[(R)] $\rho$ is symmetric, $\rho(0)=0$ and $\rho(s)\to\infty$, as $s\to\infty$. The functions $\rho'$ and $u(s)=\rho'(s)/s$ are continuous, $\rho'\geq 0$ on $[0,\infty)$ and there exists an $s_0$ such that $\rho'$ is nondecreasing on $(0,s_0)$ and nonincreasing on $(s_0,\infty)$. \end{itemize} \end{quote} This condition corresponds to unbounded $\rho$-functions and includes the function \[ \rho_H(s) = \begin{cases} \frac12s^2, & |s|\leq k\\ -\frac12k^2+k|s|, & |s|\geq k, \end{cases} \] whose derivative $\psi_H=\rho_H'$ is known as Huber's $\psi$-function. In order that $R_P(\bbeta)$ is well defined, we impose the following condition. \begin{quote} \begin{itemize} \item[(C)] $\E_P\|\bX\|<\infty$, where $\|\bX\|^2=\sum_{i=1}^{k}\sum_{j=1}^{q}x_{ij}^2$. \end{itemize} \end{quote} The following theorem is an extension of Theorem~2.1 in~\cite{lopuhaa1992highly}. \begin{theorem} Let $\rho:\R\to[0,\infty)$ satisfy (R) and $P$ satisfy (C), and suppose that $\bX$ has full rank with probability one. \begin{itemize} \item[(i)] For every $\bbeta\in\R^q$ fixed, $R_P(\bbeta)$ is well defined. \item[(ii)] There is at least one vector $\bbeta_P$ that minimizes $R_P(\bbeta)$. When $\rho$ is also strictly convex, then $\bbeta_P$ is uniquely defined. \item[(iii)] When $P$ is such that $\by\mid\bX$ has an elliptically contoured density with parameters $\bmu=\bX\bbeta_0$ and $\bSigma$, and if $\bV_P=\bSigma$, then $R_P(\bbeta)\geq R_P(\bbeta_0)$, for all $\bbeta\in\R^q$. When $h$ in~\eqref{eq:elliptical} is strictly decreasing, then $R_P(\bbeta)$ is uniquely minimized by $\bbeta_P=\bbeta_0$. \end{itemize} \end{theorem} \begin{proof} Let $0<\lambda_k\leq \lambda_1<\infty$ be the smallest and largest eigenvalue of $\bV(P)$. (i) Condition (R) implies that $\rho(s)\leq\rho(s_0)$, for $s\in[0,s_0]$, and for $s>s_0$, \begin{equation} \label{eq:prop rho} \rho(s)=\int_{0}^{s_0}\rho'(t)\,\dd t+\int_{s_0}^{s}\rho'(t)\,\dd t \leq \rho(s_0)+(s-s_0)\rho'(s_0). \end{equation} Together with Lemma~2.1 in~\cite{lopuhaa1992highly}, it follows that \[ \begin{split} \left| \rho\left( \|\bV(P)^{-1/2}(\by-\bX\bbeta)\| \right) - \rho\left( \|\bV(P)^{-1/2}\by\| \right) \right| &\leq s_0\rho'(s_0) + \rho\left(\|\bV(P)^{-1/2}\bX\bbeta\|\right)\\ &\leq s_0\rho'(s_0) + \rho\left(\|\bX\bbeta\|\lambda_k^{-1/2}\right). \end{split} \] Either $\|\bX\bbeta\|\lambda_k^{-1/2}\leq s_0$ or \[ \begin{split} \rho\left(\|\bX\bbeta\|\lambda_k^{-1/2}\right) &\leq \rho(s_0)+\|\bX\bbeta\|\lambda_k^{-1/2}\rho'(s_0)-s_0\rho'(s_0)\\ &\leq \rho(s_0)+\|\bX\|\|\bbeta\|\lambda_k^{-1/2}\rho'(s_0)-s_0\rho'(s_0). \end{split} \] We conclude that, if $P$ satisfies (C), then \[ \begin{split} & \int \left| \rho\left( \|\bV(P)^{-1/2}(\by-\bX\bbeta)\| \right) - \rho\left( \|\bV(P)^{-1/2}\by\| \right) \right| \,\dd P(\bs)\\ &\quad\leq s_0\rho'(s_0) + \rho(s_0) + \|\bbeta\|\lambda_k^{-1/2}\rho'(s_0) \E_P\|\bX\|<\infty. 
\end{split} \] (ii) We have that \begin{equation} \label{eq:lower bound d} \begin{split} \sqrt{(\by-\bX\bbeta)^T\bV(P)^{-1}(\by-\bX\bbeta)} &\geq \frac{\|\by-\bX\bbeta\|}{\sqrt{\lambda_1}} \geq \frac{\|\bX\bbeta\|-\|\by\|}{\sqrt{\lambda_1}}\\ &\geq \frac{1}{\sqrt{\lambda_1}} \left( \|\bbeta\|\sqrt{\lambda_k(\bX^T\bX)}-\|\by\| \right). \end{split} \end{equation} Then by Fatou's lemma, together with the fact that $\lambda_k(\bX^T\bX)>0$ with probability one, it follows that \[ \lim_{\|\bbeta\|\to\infty} R_P(\bbeta) = \infty. \] This means that there exists a constant $M>0$, such that \begin{equation} \label{eq:def M unbounded} R_P(\bbeta)>0=R_P(\mathbf{0}), \quad \text{for all }\|\bbeta\|>M. \end{equation} Therefore, for minimizing $R_P(\bbeta)$ we may restrict ourselves to the set $K=\{\bbeta\in\R^q:\|\bbeta\|\leq M\}$. By Lemma~2.1 in~\cite{lopuhaa1992highly} and dominated convergence it follows that $R_P(\bbeta)$ is continuous on $K$ and therefore it must attain at least one minimum $\bbeta(P)$ on the compact set $K$. It is easily seen that strict convexity of $\rho$ implies strict convexity of $R_P$, which means that $\bbeta(P)$ is unique. (iii) Because $\bbeta(\cdot)$ is regression equivariant, we may assume that $\bbeta_0=\mathbf{0}$. Write \[ R_P(\bbeta) = \E_\bX\left[ \E_{\by|\bX} \left\{ \rho\left( \|\bV_0(P)^{-1/2}(\by-\bX\bbeta)\| \right) - \rho\left( \|\bV_0(P)^{-1/2}\by\| \right) \right\} \right]. \] Since $\bz=\bV_0(P)^{-1/2}\by=\bSigma^{-1/2}\by$ has an elliptically contoured density with parameters $(\mathbf{0},\bI_k)$, the inner conditional expectation can be written as \[ \iint \left( \big\{0\leq s\leq \rho(\|\bz-\bV_0(P)^{-1/2}\bX\bbeta\|)\big\} - \big\{0\leq s\leq \rho(\|\bz\|)\big\} \right) h(\|\bz\|) \,\dd s\,\dd\bz. \] From here on, we can follow the proof of Theorem~2.1 in~\cite{lopuhaa1992highly} and conclude that \begin{equation} \label{eq:ineq RP} \E_{\by|\bX} \left\{ \rho\left( \|\bV_0(P)^{-1/2}(\by-\bX\bbeta)\| \right) - \rho\left( \|\bV_0(P)^{-1/2}\by\| \right) \right\} \geq 0, \quad \bX-\text{a.s.} \end{equation} It follows that $R_P(\bbeta)\geq 0=R_P(\mathbf{0})$. When $h$ is strictly decreasing, similar to the proof of Theorem~2.1 in~\cite{lopuhaa1992highly} it follows that inequality~\eqref{eq:ineq RP} is strict, which yields $R_P(\bbeta)>0=R_P(\mathbf{0})$. \end{proof} A direct consequence is the existence of the MM-estimator. Let $(\by_1,\bX_1),\ldots,(\by_n,\bX_n)$ be a sample, for which $\bX_i$ has full rank for each $i=1,\ldots,n$. Then, if $\rho:\R\to[0,\infty)$ satisfies condition~(R), it follows from Theorem~\ref{th:existence MM unbounded rho}(ii), with $P$ equal to the empirical measure $\mathbb{P}_n$ of the sample, that there exists at least one $\bbeta_n$ that minimizes~$R_n(\bbeta)$. \subsection{Breakdown Point} The following result on the breakdown point of the MM-estimator is an extension of Theorem~4.1 in~\cite{lopuhaa1992highly}. For the MM-estimator for multivariate location, the use of an unbounded $\rho$-function has the advantage that the breakdown point of the MM-estimator does not depend on the configuration of the sample and is at least as large as that of the covariance estimator used in the first step. However, in our setup, contamination of the design matrices may still cause breakdown. This can be seen from the following extension of Lemma~4.1 in~\cite{lopuhaa1992highly}. \begin{lemma} \label{lem:Lemma41} Let $Q$ and $H$ be probability measures and $0\leq \epsilon\leq 1$. Define \[ Q_{\epsilon,H}=(1-\epsilon)Q+\epsilon H. 
\] Suppose that $\E_Q\|\bs\|<\infty$ and $\E_H\|\bX\|<\infty$ and suppose that there exist $k_1$ and $k_2$, such that \[ 0<k_1^2\leq \inf_H\lambda_k(\bV(Q_{\epsilon,H}))\leq \lambda_1(\bV(Q_{\epsilon,H}))\leq k_2^2<\infty. \] Then there exists a constant $0<K<\infty$ independent of $H$, such that \[ R_{Q_{\epsilon,H}}(\bbeta) \geq (1-\epsilon) \int \rho\left(\frac{\|\bX\bbeta\|}{k_2}\right)\dd Q(\bs) -\epsilon \int \rho\left(\frac{\|\bX\bbeta\|}{k_1}\right)\dd H(\bs) - K. \] \end{lemma} \begin{proof} Write $\bV_{\epsilon}=\bV(Q_{\epsilon,H})$. We then have \[ \begin{split} & \rho(\|\bV_{\epsilon}^{-1/2}(\by-\bX\bbeta)\|)-\rho(\|\bV_{\epsilon}^{-1/2}\by\|)\\ &= \rho(\|\bV_{\epsilon}^{-1/2}\bX\bbeta\|) + \rho(\|\bV_{\epsilon}^{-1/2}(\by-\bX\bbeta)\|)-\rho(\|\bV_{\epsilon}^{-1/2}\bX\bbeta\|) - \rho(\|\bV_{\epsilon}^{-1/2}\by\|). \end{split} \] The third term on the right hand side is bounded from below by $-\rho(\|\by\|/k_1)$. By Lemma~2.1 in~\cite{lopuhaa1992highly}, the second term on the right hand side is bounded from below by $-s_0\rho'(s_0)-\rho(\|\by\|/k_1)$. It follows that \[ \begin{split} R_{Q_{\epsilon,H}}(\bbeta) &\geq (1-\epsilon) \left\{ \int \rho(\|\bV_{\epsilon}^{-1/2}\bX\bbeta\|)\,\dd Q(\bs) - 2 \int \rho(\|\by\|/k_1)\,\dd Q(\bs) - s_0\rho'(s_0) \right\}\\ &\quad+ \epsilon\int \left\{ \rho(\|\bV_{\epsilon}^{-1/2}(\by-\bX\bbeta)\|)-\rho(\|\bV_{\epsilon}^{-1/2}\by\|) \right\}\,\dd H(\bs)\\ &\geq (1-\epsilon) \left\{ \int \rho(\|\bX\bbeta\|/k_2)\,\dd Q(\bs) - 2 \int \rho(\|\by\|/k_1)\,\dd Q(\bs) - s_0\rho'(s_0) \right\}\\ &\quad- \epsilon \left\{ s_0\rho'(s_0) + \int \rho(\|\bV_{\epsilon}^{-1/2}\bX\bbeta\|)\,\dd H(\bs) \right\}\\ &\geq (1-\epsilon) \int \rho\left(\frac{\|\bX\bbeta\|}{k_2}\right)\dd Q(\bs) -\epsilon \int \rho\left(\frac{\|\bX\bbeta\|}{k_1}\right)\dd H(\bs) - K, \end{split} \] where \[ K = s_0\rho'(s_0) + 2\int \rho(\|\by\|/k_1)\,\dd Q(\bs) <\infty, \] which is independent of $H$. By property~\eqref{eq:prop rho}, for $j=1,2$, \[ \rho\left(\frac{\|\bX\bbeta\|}{k_j}\right) \leq \rho(s_0) + \left(\frac{\|\bX\bbeta\|}{k_j}-s_0\right)\rho'(s_0) \leq \rho(s_0) + \frac{\|\bX\|\cdot\|\bbeta\|}{k_j}\,\rho'(s_0). \] Since $\E_Q\|\bs\|<\infty$ and $\E_H\|\bX\|<\infty$, the integrals on the right hand side are well defined, and the same bound shows that $K$ is finite. \end{proof} \begin{theorem} Let $\mathcal{S}_n\subset \R^{p}$ be a collection of~$n$ points $\bs_i=(\by_i,\bX_i)$, $i=1,\ldots,n$. Let $\rho:\R\to[0,\infty)$ satisfy (R) and let $\bV_n$ be a positive definite symmetric covariance estimator, with $\epsilon_n^*(\bV_n,\mathcal{S}_n)\leq \lfloor(n+1)/2\rfloor/n$. Then for any minimizer $\bbeta_n$ of $R_n(\bbeta)$, it holds that \[ \epsilon_n^*(\bbeta_n,\mathcal{S}_n) \geq \epsilon_n^*(\bV_n,\mathcal{S}_n). \] \end{theorem} \end{comment}
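To make the estimator of~\eqref{def:MM estimator unbounded} concrete, the following is a minimal numerical sketch in Python. All names and dimensions are illustrative, the covariance matrix is a placeholder for (the inverse of) a robust structured estimate $\bV_n$ computed in a preliminary step, and a generic optimizer stands in for whatever algorithm one would use in practice.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def rho_huber(s, k=1.345):
    # Huber's rho: quadratic near zero, linear growth in the tails
    s = np.abs(s)
    return np.where(s <= k, 0.5 * s**2, k * s - 0.5 * k**2)

def R_n(beta, ys, Xs, V_inv, k=1.345):
    # empirical MM objective, including the beta-independent
    # centring term rho(||V^{-1/2} y_i||)
    total = 0.0
    for y, X in zip(ys, Xs):
        r = y - X @ beta
        total += rho_huber(np.sqrt(r @ V_inv @ r), k)
        total -= rho_huber(np.sqrt(y @ V_inv @ y), k)
    return total / len(ys)

# toy data: n observations y_i in R^k with design matrices X_i (k x q)
rng = np.random.default_rng(0)
n, kdim, q = 50, 3, 2
beta_true = np.array([1.0, -2.0])
Xs = [rng.normal(size=(kdim, q)) for _ in range(n)]
ys = [X @ beta_true + rng.normal(size=kdim) for X in Xs]
V_inv = np.eye(kdim)   # placeholder for the inverse of a robust estimate V_n

fit = minimize(R_n, x0=np.zeros(q), args=(ys, Xs, V_inv))
print(fit.x)           # close to beta_true for clean data
\end{verbatim}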
\section{Introduction} Hamiltonian systems describe conservative dynamics and non-dissipative phenomena in, for example, classical mechanics, transport problems, fluids and kinetic models. We consider Hamiltonian systems that depend on a set of parameters associated with the geometric configuration of the problem or which represent physical properties of the problem. The development of numerical methods for the solution of parametric Hamiltonian systems in many-query and long-time simulations is challenged by two major factors: the high computational cost required to achieve sufficiently accurate approximations, and the possible onset of numerical instabilities resulting from failing to satisfy the conservation laws underlying non-dissipative dynamics. Model order reduction (MOR) and reduced basis methods (RBM) provide an effective procedure to reduce the computational cost of such simulations by replacing the original high-dimensional problem with models of reduced dimensionality without compromising the accuracy of the approximation. The success of RBM relies on the assumption that the problem possesses a low-rank nature, i.e. that the set of solutions, obtained as time and parameters vary, is of low dimension. However, non-dissipative phenomena do not generally exhibit such global low-rank structure and are characterized by slowly decaying Kolmogorov $n$-widths. This implies that traditional reduced models derived via linear approximations are generally not effective. In recent years, there has been a growing interest in the development of model order reduction techniques for transport-dominated problems to overcome the limitations of linear global approximations. A large class of methods consists of constructing nonlinear transformations of the solution manifold and recasting it in a coordinate framework where it admits a low-rank structure, e.g. \cite{OR13,IL14,W17,RSSM18,ELMV19,CMS19,LC20,T20}. A second family of MOR techniques focuses on online adaptive methods that update local reduced spaces depending on parameter and time, e.g. \cite{C15,PW15,RPM19}. To the best of our knowledge, none of the aforementioned methods provide any guarantee on the preservation of the physical properties and the geometric structure of the problem considered, and they might therefore be unsuitable to treat non-dissipative phenomena. This work concerns the design of reduced order models that preserve the geometric structure of Hamiltonian problems and, furthermore, accommodate the \emph{local} low-rank nature of the dynamics by adapting in time the reduced space and its dimension. In greater detail, we consider finite-dimensional parametric Hamiltonian systems in canonical symplectic form. For their model order reduction we adopt the dynamical reduced basis method introduced in \cite{P19} and inspired by dynamical low-rank matrix approximations \cite{KL07}. The gist of the method is to approximate the full model solution in a low-dimensional manifold that evolves in time and possesses the geometric structure of the full phase-space. The reduced dynamics is then derived via a symplectic projection of the Hamiltonian vector field onto the tangent space of the reduced symplectic manifold at each reduced state. While the reduced basis evolves in time, its dimension is usually fixed and decided at the beginning of the simulation. However, it frequently happens that the dimension does not correctly reflect the effective rank of the solution at all times. 
Consider, as an example, the linear advection problem in 1D, where the parameter represents the transport velocity. The solution is represented by a matrix whose columns are the solution vectors associated with different parameter values. It is clear that, if the initial condition does not depend on the parameter, its rank is equal to one. However, as the initial condition is advected in time with different velocities, its rank rapidly increases. Approximating such dynamics with a time-dependent sequence of reduced manifolds of rank-1 matrices yields poor approximations. An overapproximation of the initial condition, and possibly of the solution at other times, could improve the accuracy but will inevitably yield situations of rank-deficiency, as observed in \cite[Section 5.3]{KL07}. This example demonstrates that, in a dynamical reduced basis approach, it is crucial to accurately capture the rank of the full model solution at each time. In this work, we propose a novel adaptive dynamical scheme where not only the reduced space evolves, but also its dimension may change over time. The proposed rank-adaptive algorithm can be summarized as follows. We consider the structure-preserving temporal discretization of the reduced dynamics introduced in \cite{P19}. At the end of any given temporal interval we compute a surrogate error for all tested parameters and check the ratio of the norms of the error indicators at consecutive rank updates. If this error slope exceeds a chosen tolerance, we extract the singular vector associated with the most relevant mode of the dynamics via SVD of the error indicator. This vector, together with its symplectic dual, provides the direction in which the reduced space is augmented. Since the initial condition in the updated manifold is rank-deficient, we also propose an algorithm to perform a regularization of the velocity field describing the reduced flow, such that the resulting vector belongs to the tangent space of the updated reduced manifold and the Hamiltonian structure is preserved. The remainder of the paper is organized as follows. In \Cref{sec:pbm}, we introduce parametrized Hamiltonian systems and describe their geometric structure and physical properties. The dynamical reduced basis method proposed in \cite{P19}, which we adopt here, is summarized in \Cref{sec:DLR}. The problem of overapproximation and rank-deficiency is discussed in \Cref{sec:evol-rank-deficient}, where the regularization algorithm is introduced. \Cref{sec:pRK} deals with the numerical temporal integration of the reduced dynamics: first, we summarize the structure-preserving methods introduced in \cite{P19} for the evolution of the reduced basis and expansion coefficients, and then we design partitioned RK schemes of order 2 and 3 that preserve the geometric structure of each evolution problem. \Cref{sec:rank-adaptivity} pertains to the rank-adaptive algorithm. We describe the major steps: computation of the error indicator, criterion for the rank update, and update of the reduced state. The computational complexity of the adaptive dynamical reduced basis algorithm is thoroughly analyzed in \Cref{sec:cost}. \Cref{sec:numerical_tests} is devoted to extensive numerical simulations of the proposed algorithm and its numerical comparison with global reduced basis methods. Finally, \Cref{sec:conclusions} concludes with a few remarks. 
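The rank growth in the advection example above is easy to observe numerically. The following is a minimal sketch in Python (discretization parameters and the Gaussian initial datum are arbitrary choices for illustration): it assembles the matrix of exact solutions for a sample of transport velocities and prints its numerical rank at a few times.
\begin{verbatim}
import numpy as np

# 1D linear advection u_t + mu * u_x = 0: exact solution u(x,t) = u0(x - mu*t)
x = np.linspace(0.0, 1.0, 512)
u0 = lambda z: np.exp(-200.0 * (z - 0.3)**2)   # parameter-independent initial datum
mus = np.linspace(0.5, 1.5, 20)                # sampled transport velocities

for t in (0.0, 0.05, 0.2):
    # columns are the solutions associated with the different velocities
    S = np.column_stack([u0(x - mu * t) for mu in mus])
    print(t, np.linalg.matrix_rank(S, tol=1e-8))
# rank 1 at t = 0; it grows quickly as the profiles drift apart
\end{verbatim}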
\section{Problem formulation}\label{sec:pbm} Let $\ensuremath{\mathcal{T}}:=(t_0,T]$ be a temporal interval and let $\Sprm\subset \mathbb{R}^d$, with $d\geq 1$, be a compact set of parameters. For each $\prm\in\Sprm$, we consider the initial value problem: For $u_0(\prm)\in\Vd{\Nf}$, find $u(\cdot,\prm)\in C^1(\ensuremath{\mathcal{T}},\Vd{\Nf})$ such that \begin{equation}\label{eq:HamSystem} \left\{ \begin{array}{ll} \dot{u}(t;\prm) = \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(u(t;\prm);\prm), & \quad \quad\mbox{for }\;t\in\ensuremath{\mathcal{T}},\\ u(t_0;\prm) = u_0(\prm),& \end{array}\right. \end{equation} where the dot denotes the derivative with respect to time $t$, $\Vd{\Nf}$ is a $\Nf$-dimensional vector space and $C^1(\ensuremath{\mathcal{T}},\Vd{\Nf})$ denotes continuously differentiable functions in time taking values in $\Vd{\Nf}$. In this work we focus on evolution problems that can be expressed as Hamiltonian systems in canonical form. The phase-space of a canonical Hamiltonian system is a symplectic manifold. \begin{definition}[Symplectic structure] Let $\Vd{\Nf}$ be a $\Nf$-dimensional vector space over $\r{}$. A \emph{symplectic structure} on $\Vd{\Nf}$ is a skew-symmetric, non-degenerate bilinear form $\omega:\Vd{\Nf}\times\Vd{\Nf}\rightarrow\r{}$, namely \begin{equation*} \omega(v_1,v_2) = -\omega(v_2,v_1), \qquad \omega(v_1,v_2) = 0,\;\forall v_2\in\Vd{\Nf}\; \Rightarrow\; v_1=0. \end{equation*} A vector space $\Vd{\Nf}$ endowed with a symplectic structure $\omega$ is called a symplectic vector space, denoted as $(\Vd{\Nf},\omega)$. \end{definition} A result by Darboux \cite{Darb82} ensures that, on the symplectic vector space $(\Vd{\Nf},\omega)$, there exist local coordinates, called \emph{canonical coordinates}, in which the symplectic form $\omega$ has the canonical form, namely $\omega(v_1,v_2) = v_1^\top \J{\Nf} v_2$, for all $v_1, v_2\in\Vd{\Nf}$, where $\J{\Nf}$ is the \emph{Poisson tensor}, defined as \begin{equation}\label{eq:J} \J{\Nf} := \begin{pmatrix} 0_{\Nfh} & \ensuremath{I}_{\Nfh} \\ -\ensuremath{I}_{\Nfh} & 0_{\Nfh} \\ \end{pmatrix}\in\R{\Nf}{\Nf}, \end{equation} with $\ensuremath{I}_{\Nfh}, 0_{\Nfh} \in\R{\Nfh}{\Nfh}$ denoting the identity and zero matrices, respectively. Canonical coordinates on a symplectic vector space allow one to define a global basis that is symplectic and orthonormal. \begin{definition}[Orthosymplectic basis]\label{def:ortsym} Let $(\Vd{\Nf},\omega)$ be a $\Nf$-dimensional symplectic vector space. Then, the set of vectors $\{e_i\}_{i=1}^{\Nf}$ is said to be \emph{orthosymplectic} in $\Vd{\Nf}$ if \begin{equation* \omega(e_i,e_j)=(\J{\Nf})_{i,j}\,,\quad\mbox{ and }\quad (e_i,e_j)=\delta_{i,j}\,,\qquad\forall i,j=1,\ldots,\Nf, \end{equation*} where $(\cdot,\cdot)$ is the Euclidean inner product and $\J{\Nf}$ is the canonical symplectic tensor \eqref{eq:J} on $\Vd{\Nf}$. \end{definition} An evolution problem \eqref{eq:HamSystem} is Hamiltonian if the vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\cdot,\prm)\in\Vd{\Nf}$ can be written in canonical coordinates as \begin{equation}\label{eq:HamVector} \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(u(t;\prm);\prm)=\J{\Nf}\nabla_u\ensuremath{\Hcal}(u(t;\prm);\prm), \qquad \forall\, u\in\Vd{\Nf}, \;\mbox{and}\; \prm\in\Sprm\;\mbox{fixed}, \end{equation} where $\ensuremath{\Hcal}:\Vd{\Nf}\times\Sprm\rightarrow\r{}$ is the Hamiltonian function, $\J{\Nf}$ is the canonical symplectic tensor \eqref{eq:J}, and $\nabla_u$ is the gradient with respect to the state variable $u$. 
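In coordinates, these objects are straightforward to assemble. The following sketch in Python (the quadratic Hamiltonian is chosen purely for illustration) builds the canonical Poisson tensor \eqref{eq:J}, checks the skew-symmetry of $\omega$, and verifies that the flow map of the associated linear Hamiltonian vector field is symplectic, a property discussed next.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def poisson_tensor(two_N):
    # canonical Poisson tensor: [[0, I], [-I, 0]]
    N = two_N // 2
    J = np.zeros((two_N, two_N))
    J[:N, N:] = np.eye(N)
    J[N:, :N] = -np.eye(N)
    return J

N = 2
J = poisson_tensor(2 * N)

# omega(v1, v2) = v1^T J v2 is skew-symmetric
v1, v2 = np.random.default_rng(1).normal(size=(2, 2 * N))
assert np.isclose(v1 @ J @ v2, -(v2 @ J @ v1))

# toy Hamiltonian H(u) = ||u||^2 / 2, so X_H(u) = J u and the exact
# flow is u(t) = expm(t J) u0; the flow map satisfies Phi^T J Phi = J
Phi = expm(0.1 * J)
assert np.allclose(Phi.T @ J @ Phi, J)
\end{verbatim}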
For any function $\ensuremath{\Hcal}$, the associated vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$, defined in \eqref{eq:HamVector}, is unique and is called the \emph{Hamiltonian vector field}. Hamiltonian dynamical systems in canonical symplectic form are characterized by symplectic flows. Specifically, for any fixed parameter $\prm\in\Sprm$, the vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$ determines a phase flow, i.e. a one-parameter group of diffeomorphisms $\Phi^t_{\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}}:\Vd{\Nf}\rightarrow\Vd{\Nf}$ satisfying $d_t\Phi^t_{\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}}(u)=\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}(\Phi^t_{\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}}(u);\prm)$ for all $t\in\ensuremath{\mathcal{T}}$ and $u\in\Vd{\Nf}$, with $\Phi^0_{\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}}(u)=u$. A vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\mathcal{H}}}\in \Vd{\Nf}$ is Hamiltonian if and only if its flow map $\Phi^t_{\ensuremath{\mathcal{X}}_\ensuremath{\mathcal{H}}}$ is a symplectic diffeomorphism on its domain, i.e., for each $t\in\ensuremath{\mathcal{T}}$, the pullback of the flow map satisfies $(\Phi^t_{\ensuremath{\mathcal{X}}_\ensuremath{\Hcal}})^*\omega=\omega$, where $(\cdot)^*$ denotes the pullback. \section{Dynamical reduced basis method for Hamiltonian systems} \label{sec:DLR} We are interested in solving the Hamiltonian system \eqref{eq:HamSystem} for a given set of $\Np$ vector-valued parameters $\{\prm_j\}_{j=1}^{\Np}\subset\Sprm$, that, with a small abuse of notation, we denote $\prmh\in \Sprmh$. Then, the state variable $u$ in \eqref{eq:HamSystem} can be thought of as a matrix-valued map $u(\cdot;\prmh):\ensuremath{\mathcal{T}}\rightarrow \Vd{\Nf}^{\Np}\subset\R{\Nf}{\Np}$ where $\Vd{\Nf}^{\Np}:=\Vd{\Nf}\times\ldots\times\Vd{\Nf}$. Throughout, for a given matrix $\ensuremath{\mathcal{R}}\in\R{\Nf}{\Np}$, we denote with $\ensuremath{\mathcal{R}}_{j}\in\r{\Nf}$ the vector corresponding to the $j$-th column of $\ensuremath{\mathcal{R}}$, for any $j=1,\ldots,\Np$. The Hamiltonian system \eqref{eq:HamSystem}, evaluated at $\prmh$, can be recast as a set of ordinary differential equations in a $\Nf\times\Np$ matrix unknown in $\Vd{\Nf}^{\Np}$ as follows. For $\ensuremath{\mathcal{R}}_0(\prm_h):=\big[u_0(\prm_1)|\ldots|u_0(\prm_{\Np})\big]\in\Vd{\Nf}^{\Np}$, find $\ensuremath{\mathcal{R}}\in C^1(\ensuremath{\mathcal{T}},\Vd{\Nf}^{\Np})$ such that \begin{equation}\label{eq:HamSystemMatrix} \left\{ \begin{array}{ll} \dot{\ensuremath{\mathcal{R}}}(t) = \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\ensuremath{\mathcal{R}}(t),\prm_h) = \J{\Nf}\nabla\ensuremath{\Hcal}(\ensuremath{\mathcal{R}}(t);\prmh),&\quad\quad\mbox{for }\; t\in\ensuremath{\mathcal{T}},\\ \ensuremath{\mathcal{R}}(t_0) = \ensuremath{\mathcal{R}}_0(\prm_h), & \end{array}\right. \end{equation} where $\ensuremath{\Hcal}:\Vd{\Nf}^{\Np}\rightarrow\r{\Np}$ and, for any $\ensuremath{\mathcal{R}}\in\Vd{\Nf}^{\Np}$, its gradient $\nabla\ensuremath{\Hcal}(\ensuremath{\mathcal{R}};\prmh)\in\Vd{\Nf}^{\Np}$ is defined as $(\nabla\ensuremath{\Hcal}(\ensuremath{\mathcal{R}};\prmh))_{i,j}=\frac{\partial \ensuremath{\Hcal}_j}{\partial \ensuremath{\mathcal{R}}_{i,j}}$, for any $i=1,\ldots,\Nf$, $j=1,\ldots,\Np$. 
The function $\ensuremath{\Hcal}_j$ is the Hamiltonian of the dynamical system \eqref{eq:HamSystem} corresponding to the parameter $\prm_j$, for $j=1,\ldots,\Np$. We assume that, for a fixed sample of parameters $\prmh\in\Sprmh$, the vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\cdot;\prmh)\in\Vd{\Nf}^{\Np}$ is Lipschitz continuous in the Frobenius norm $\norm{\cdot}$ uniformly with respect to time, so that \eqref{eq:HamSystemMatrix} is well-posed. For the model order reduction of \eqref{eq:HamSystemMatrix} we consider the dynamical reduced basis method introduced in \cite{P19} and based on dynamical low-rank matrix approximations. Here we propose an adaptive dynamical scheme where not only the reduced space evolves, but also its dimension may change over time. Let us first split the time domain $\ensuremath{\mathcal{T}}$ into the union of intervals $\ensuremath{{\Tcal_{\tau}}}:=(t^{\tau-1},t^{\tau}]$, $\tau=1,\ldots,N_{\tau}$, with $t^0:=t_0$ and $t^{N_{\tau}}:=T$, and we define the local time step as $\dt_{\tau}=t^{\tau}-t^{\tau-1}$ for every $\tau$. Let $\Nrht\in\mathbb{N}$ be given in each temporal interval $\ensuremath{{\Tcal_{\tau}}}$ under the assumptions that $\Nrt\leq \Np$ and $\Nrht\ll\Nfh$. We consider a local approximation of the solution of \eqref{eq:HamSystemMatrix} of the form \begin{equation}\label{eq:Rrb} \ensuremath{\mathcal{R}}(t)\approx R(t) = \sum_{i=1}^{\Nrt} U_i(t) Z_i(t,\prm_h) = U(t)Z(t), \qquad\forall\, t\in\ensuremath{{\Tcal_{\tau}}}, \end{equation} where $U(t)=\big[U_1|\ldots|U_{\Nrt}\big]\in\R{\Nf}{\Nrt}$, and $Z\in\R{\Nrt}{\Np}$ is such that $Z_{i,j}(t)=Z_i(t,\prm_j)$ for $i=1,\ldots,\Nrt$, $j=1,\ldots,\Np$, and any $t\in\ensuremath{{\Tcal_{\tau}}}$. With this notation, we introduce the collection of reduced spaces of $\Nf\times\Np$ matrices having rank at most $\Nrt$, characterized as \begin{equation* \ensuremath{\mathcal{M}}_{\Nrt} := \{R\in\R{\Nf}{\Np}:\; R = UZ\;\mbox{ with }\; U\in\ensuremath{{\Ucal_{\tau}}},\, Z\in \ensuremath{{\Zcal_{\tau}}} \},\qquad\forall\,\tau=1,\ldots,N_{\tau}, \end{equation*} where $U$ represents the reduced basis and is taken to be orthogonal and symplectic, while $Z$ collects the expansion coefficients in the reduced basis, i.e. \begin{equation}\label{eq:ManUZ} \begin{aligned} \ensuremath{{\Ucal_{\tau}}} & :=\{U\in\R{\Nf}{\Nrt}:\;U^\top U=\ensuremath{I}_{\Nrt},\; U^\top \J{\Nf} U = \J{\Nrt}\}, \\ \ensuremath{{\Zcal_{\tau}}} & :=\{Z\in\R{\Nrt}{\Np}:\;\rank{ZZ^\top + \J{\Nrt}^\top ZZ^\top \J{\Nrt}} = \Nrt\}. \end{aligned} \end{equation} To approximate the Hamiltonian system \eqref{eq:HamSystemMatrix} in $\ensuremath{{\Tcal_{\tau}}}$ with an evolution problem on the reduced space $\ensuremath{\mathcal{M}}_{\Nrt}$ we need to prescribe evolution equations for the reduced basis $U(t)\in\ensuremath{{\Ucal_{\tau}}}$ and the expansion coefficients $Z(t)\in\ensuremath{{\Zcal_{\tau}}}$. For this, we follow the approach proposed in \cite{MN17} and \cite{P19}, and derive the reduced flow describing the dynamics of the reduced state $R$ in \eqref{eq:Rrb} by applying to the Hamiltonian vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$ the symplectic projection $\Pi_{\T{R(t)}{\ensuremath{\mathcal{M}}_{\Nrt}}}$ onto the tangent space of the reduced manifold at the current state. 
The resulting local evolution problem reads: Find $R\in C^1(\ensuremath{{\Tcal_{\tau}}},\ensuremath{\mathcal{M}}_{\Nrt})$ such that \begin{equation}\label{eq:dynn} \dot{R}(t) = \Pi_{\T{R}{\ensuremath{\mathcal{M}}_{\Nrt}}}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R(t),\prm_h),\qquad\quad\mbox{for }\; t\in\ensuremath{{\Tcal_{\tau}}}, \end{equation} where we assume, for the time being, that the initial condition of \eqref{eq:dynn} at time $t^{\ensuremath{{\tau-1}}}$, $\tau\geq 1$, is given, and we refer to \Cref{sec:rank-update} for a complete description of how such an initial condition is prescribed. By exploiting the characterization of the projection operator $\Pi_{\T{R(t)}{\ensuremath{\mathcal{M}}_{\Nrt}}}$ in \cite[Proposition 4.2]{P19}, we obtain the local evolution equations for the factors $U$ and $Z$ in the modal decomposition of the reduced solution \eqref{eq:Rrb}, as in \cite[Proposition 6.9]{MN17} and \cite[Equation (4.10)]{P19}. In more detail, for any $\tau\geq 1$, given $(U(t^{\ensuremath{{\tau-1}}}),Z(t^{\ensuremath{{\tau-1}}}))\in \ensuremath{{\Ucal_{\tau}}}\times \ensuremath{{\Zcal_{\tau}}}$ we seek $(U,Z)\in C^1(\ensuremath{{\Tcal_{\tau}}},\ensuremath{{\Ucal_{\tau}}})\times C^1(\ensuremath{{\Tcal_{\tau}}},\ensuremath{{\Zcal_{\tau}}})$ such that \begin{subequations}\label{eq:UZred} \begin{empheq}[left = \empheqlbrace\,]{align} & \dot{Z}(t) = \J{\Nr}\nabla_Z \ensuremath{\Hcal_{U}}(Z,\prmh), &\mbox{for }\; t\in\ensuremath{{\Tcal_{\tau}}},\label{eq:Z}\\ & \dot{U}(t) = (\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^\top - YZ^\top\J{\Nrt}^\top) (ZZ^\top+\J{\Nrt}^\top ZZ^\top\J{\Nrt})^{-1}, &\mbox{for }\; t\in\ensuremath{{\Tcal_{\tau}}}, \label{eq:U} \end{empheq} \end{subequations} where $Y(t):=\nabla\ensuremath{\Hcal}(R(t);\prmh)\in\Vd{\Nf}^{\Np}$, and $R(t)=U(t)Z(t)$ for all $t\in\ensuremath{{\Tcal_{\tau}}}$. Observe that the local expansion coefficients $Z\in\ensuremath{{\Zcal_{\tau}}}$ satisfy a Hamiltonian system \eqref{eq:Z} of reduced dimension $\Nrt$, where the reduced Hamiltonian is defined as $\ensuremath{\Hcal_{U}}(Z;\prmh):=\ensuremath{\Hcal}(UZ;\prmh)$. To compute the initial condition of the reduced problem at time $t_0$, we perform the complex SVD \cite[Section 4.2]{peng2016symplectic} of $\ensuremath{\mathcal{R}}_0(\prm_h)\in\R{\Nf}{\Np}$ in \eqref{eq:HamSystemMatrix}, truncated at the $\Nrh_1$-th mode. Then, the initial reduced basis $U_0\in\ensuremath{\mathcal{U}}_1$ can be derived from the unitary matrix of left singular vectors of $\ensuremath{\mathcal{R}}_0(\prm_h)$, via the isomorphism between $\ensuremath{\mathcal{U}}_1$ and the Stiefel manifold of unitary $\Nfh\times\Nrh_1$ complex matrices, \emph{cf.} \cite[Lemma 6.1]{MN17}. The expansion coefficients matrix is initialized as $Z_0 = U_0^\top \ensuremath{\mathcal{R}}_0(\prm_h)$. \begin{comment} \cp{I need to introduce the complex equivalent to be able to discuss the approximation properties of the scheme later.} We consider the reduced space $\ensuremath{\mathcal{M}}_{\Nrt}$ endowed with the Frobenius inner product induced by the ambient space $\C{\Nf}{\Nrt}$, namely $\langle A,B\rangle:=\tr(A^\ensuremath{\mathsf{H}} B)$, where $A^\ensuremath{\mathsf{H}}$ denotes the conjugate transpose of the complex matrix $A$, and we will denote with $\norm{\cdot}$ the Frobenius norm. \begin{theorem Let $\ensuremath{\mathcal{C}}\in C^1(\ensuremath{\mathcal{T}},\C{\Nfh}{\Np})$ denote the exact solution and let $C\in C^1(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{M}}_{\Nrh})$ be the solution of ... 
at time $t\in\ensuremath{\mathcal{T}}$. Assume that no crossing of the singular values of $\ensuremath{\mathcal{C}}$ occurs, namely \begin{equation*} \sigma_n(\ensuremath{\mathcal{C}}(t)) > \sigma_{n+1}(\ensuremath{\mathcal{C}}(t)),\qquad \forall\, t\in\ensuremath{\mathcal{T}}. \end{equation*} Let $\Pi_{\ensuremath{\mathcal{M}}_{\Nrh}}$ be the $\norm{\cdot}$-orthogonal projection onto $\ensuremath{\mathcal{M}}_{\Nrh}$. Then, at any time $t\in\ensuremath{\mathcal{T}}$, the error between the approximate solution $C(t)$ and the best rank-$\Nrh$ approximation of $\ensuremath{\mathcal{C}}(t)$ can be bounded as \begin{equation*} \norm{C(t)-\Pi_{\ensuremath{\mathcal{M}}_{\Nrh}}\ensuremath{\mathcal{C}}(t)} \leq \int_{\ensuremath{\mathcal{T}}}\bigg(L_{\ensuremath{\mathcal{X}}}+\dfrac{\norm{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\ensuremath{\mathcal{C}}(s),\prmh)}}{\sigma_{n}(\ensuremath{\mathcal{C}}(s))-\sigma_{n+1}(\ensuremath{\mathcal{C}}(s))}\bigg) \norm{\ensuremath{\mathcal{C}}(s)-\Pi_{\ensuremath{\mathcal{M}}_{\Nrh}}\ensuremath{\mathcal{C}}(s)} e^{\mu(t-s)}\,ds, \end{equation*} where $L_{\ensuremath{\mathcal{X}}}\in\r{}$ denotes the Lipschitz continuity constant of $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$ and $\mu\in\r{}$ is defined as \begin{equation*} \mu:=L_{\ensuremath{\mathcal{X}}} + 2\, \sup_{t\in\ensuremath{\mathcal{T}}} \dfrac{\norm{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\ensuremath{\mathcal{C}}(t),\prmh)}}{\sigma_{n}(\ensuremath{\mathcal{C}}(t))}. \end{equation*} \end{theorem} \end{comment} \section{Partitioned Runge--Kutta methods}\label{sec:pRK} Partitioned Runge--Kutta (RK) methods were originally introduced to deal with stiff evolution problems by splitting the dynamics into a stiff and a nonstiff part so that the two subsystems could be treated with different temporal integrators. There are many other situations where a dynamical system possesses a natural partitioning, for example Hamiltonian or singularly perturbed problems, or nonlinear systems with a linear part. In our setting, the factorization of the reduced solution \eqref{eq:Rrb} into the basis $U$ and the coefficients $Z$ provides the natural splitting expressed in \eqref{eq:UZred}. In this Section we first consider structure-preserving numerical approximations of the evolution problems \eqref{eq:U} and \eqref{eq:Z}, treated separately, by recalling the methods proposed in \cite{P19}. Then, for the numerical integration of the coupled system \eqref{eq:UZred}, we design partitioned RK schemes that are of order $2$ and $3$ and preserve the geometric structure of each evolution problem. For the temporal approximation of \eqref{eq:Z} for $Z$, we rely on symplectic methods, \emph{cf.} e.g. \cite{HaLuWa06}. The evolution equation \eqref{eq:U} for the reduced basis is approximated using tangent methods that we briefly summarize here. Tangent methods allow one to obtain, at a computational cost linear in $\Nfh$, a discrete reduced basis that is orthogonal and symplectic. We refer to \cite[Section 5.3]{P19} for further details. The tangent space of the manifold $\ensuremath{{\Ucal_{\tau}}}$ of orthosymplectic $\Nf\times\Nrt$ matrices, at a point $Q\in\ensuremath{{\Ucal_{\tau}}}$, can be characterized as $\TsUt{Q}:=\{V\in\R{\Nf}{\Nrt}:\;Q^\top V\in \fr{so}(\Nrt),\;V\J{\Nrt}=\J{\Nf}V\}$, where $\fr{so}(\Nrt)$ denotes the space of skew-symmetric real $\Nrt\times\Nrt$ matrices. 
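To make these objects concrete, the following sketch in Python (dimensions are illustrative) generates an orthosymplectic matrix through the complex isomorphism recalled in \Cref{sec:DLR} and verifies the two conditions characterizing $\TsUt{Q}$ on a horizontal tangent vector.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def Jmat(m):
    # canonical symplectic tensor of even size m
    k = m // 2
    return np.block([[np.zeros((k, k)), np.eye(k)],
                     [-np.eye(k), np.zeros((k, k))]])

def random_orthosymplectic(two_N, two_n):
    # a complex Stiefel point C = A + iB with C^H C = I maps to the
    # orthosymplectic matrix [[A, -B], [B, A]]
    N, n = two_N // 2, two_n // 2
    C, _ = np.linalg.qr(rng.normal(size=(N, n)) + 1j * rng.normal(size=(N, n)))
    A, B = C.real, C.imag
    return np.block([[A, -B], [B, A]])

Q = random_orthosymplectic(20, 4)
assert np.allclose(Q.T @ Q, np.eye(4))           # orthonormal columns
assert np.allclose(Q.T @ Jmat(20) @ Q, Jmat(4))  # symplectic

# a horizontal tangent vector at Q: V = (I - Q Q^T) W, with W chosen
# in the same block form so that W J_{2n} = J_{2N} W
W1, W2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
W = np.block([[W1, -W2], [W2, W1]])
V = (np.eye(20) - Q @ Q.T) @ W
assert np.allclose(Q.T @ V, -(Q.T @ V).T)        # Q^T V skew (here zero)
assert np.allclose(V @ Jmat(4), Jmat(20) @ V)    # V J_{2n} = J_{2N} V
\end{verbatim}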
Assume that, in each temporal interval $\ensuremath{{\Tcal_{\tau}}}$, the approximate solution $Q:=U_{\tau-1}\in\ensuremath{\mathcal{U}}_{\tau}$ of $U(t^{\tau-1})$ is known. Then, any element of $\ensuremath{{\Ucal_{\tau}}}$, in a neighborhood of $Q$, can be expressed as the image of a vector $V\in\TsUt{Q}$ via the retraction \begin{equation}\label{eq:retraction} \begin{aligned} \ensuremath{\mathcal{R}}_Q:\TsUt{Q} & \longrightarrow \ensuremath{{\Ucal_{\tau}}}\\ V & \longmapsto \ensuremath{\mathrm{cay}}(\Theta_Q(V)Q^\top-Q \Theta_Q(V)^\top) Q, \end{aligned} \end{equation} where $\ensuremath{\mathrm{cay}}$ is the Cayley transform and $2\Theta_Q(V):=(2\ensuremath{I}_{\Nf}-QQ^\top) V\in\R{\Nf}{\Nrt}$. We refer to \cite[Section 5.3]{P19} for further details on the derivation of the map \eqref{eq:retraction}. Since $\ensuremath{\mathcal{R}}_Q$ is a retraction by construction, rather than solving \eqref{eq:U} for $U$, one can derive the local behavior of $U$ in a neighborhood of $Q$ by evolving $V(t)$, with $U(t)=\ensuremath{\mathcal{R}}_Q(V(t))$, in the tangent space $\TsUt{Q}$. By computing the local inverse of the tangent map of the retraction $\ensuremath{\mathcal{R}}_Q$, the evolution problem for the vector $V$ reads: for any $t\in\ensuremath{{\Tcal_{\tau}}}$, \begin{equation}\label{eq:EvolTM} \dot{V}(t) = f_{\tau}(V(t),Z(t);\prmh) := -Q(\ensuremath{\mathcal{R}}_Q(V)^{\top} Q+\ensuremath{I}_{\Nrt})^{-1}(\ensuremath{\mathcal{R}}_Q(V)+Q)^{\top} \Phi+ \Phi - Q\Phi^\top Q, \end{equation} where $\Phi:=\big(2\ensuremath{\mathcal{F}}(\ensuremath{\mathcal{R}}_Q(V),Z;\prmh) - (\Theta_Q(V)Q^\top-Q \Theta_Q(V)^\top)\ensuremath{\mathcal{F}}(\ensuremath{\mathcal{R}}_Q(V),Z;\prmh)\big) (Q^{\top}\ensuremath{\mathcal{R}}_Q(V)+\ensuremath{I}_{\Nrt})^{-1}$, and $\ensuremath{\mathcal{F}}$ is defined in \eqref{eq:Fcal}, \emph{cf.} \cite[Section 5.3.1]{P19}. The resulting set of evolution equations describes the reduced dynamics in each temporal interval $\ensuremath{{\Tcal_{\tau}}}$ as: given $(U_{\tau-1},Z_{\tau-1})\in\ensuremath{{\Ucal_{\tau}}}\times\ensuremath{{\Zcal_{\tau}}}$, find $Z(t)\in\ensuremath{{\Zcal_{\tau}}}$ and $V(t)\in\TsUt{U_{\tau-1}}$ such that $U(t)=\ensuremath{\mathcal{R}}_{U_{\tau-1}}(V(t))$ for all $t\in\ensuremath{{\Tcal_{\tau}}}$ and \begin{equation}\label{eq:PRK-ZV} \left\{ \begin{array}{ll} \dot{Z}(t) = \ensuremath{\mathcal{G}}(\ensuremath{\mathcal{R}}_{U_{\tau-1}}(V(t)),Z(t);\prmh), &\quad\quad\mbox{for }\;t\in\ensuremath{{\Tcal_{\tau}}},\\ \dot{V}(t) = f_{\tau}(V(t),Z(t);\prmh), &\quad\quad\mbox{for }\;t\in\ensuremath{{\Tcal_{\tau}}},\\ V(t^{\tau-1}) = 0\in\TsUt{U_{\tau-1}}, &\\ Z(t^{\tau-1}) = Z_{\tau-1}\in\ensuremath{{\Zcal_{\tau}}}, & \end{array} \right. \end{equation} where $\ensuremath{\mathcal{G}}:=\J{\Nr}\nabla \ensuremath{\Hcal_{U}}(Z,\prmh)$ from \eqref{eq:Z} and $f_{\tau}$ is defined in \eqref{eq:EvolTM}. For the numerical approximation of \eqref{eq:PRK-ZV}, we rely on partitioned Runge--Kutta methods. Let $P_Z=(\{b_i\}_{i=1}^{\Ns},\{a_{ij}\}_{i,j=1}^{\Ns})$ be the collection of coefficients of the Butcher tableau describing an $\Ns$-stage \emph{symplectic} RK method, and let $\widehat{P}_U=(\{\widehat{b}_i\}_{i=1}^{\Ns},\{\widehat{a}_{ij}\}_{1\leq j<i\leq \Ns})$ be the set of coefficients of an $\Ns$-stage \emph{explicit} RK method. 
Then, the numerical approximation of \eqref{eq:PRK-ZV} via partitioned RK integrators reads \begin{equation}\label{eq:PRK-ZVh} \begin{array}{ll} & Z_{\tau} = Z_{\tau-1}+\dt\sum\limits_{i=1}^\Ns b_i k_i, \qquad V_{\tau} = \dt\sum\limits_{i=1}^\Ns \widehat{b}_i \widehat{k}_i, \\[0.5em] & \qquad k_1 = \ensuremath{\mathcal{G}}(U_{\tau-1}, Z_{\tau-1} + \dt\sum\limits_{j=1}^{\Ns} a_{1,j}k_j;\prmh), \qquad \widehat{k}_1 = \ensuremath{\mathcal{F}}(U_{\tau-1}, Z_{\tau-1} + \dt\sum\limits_{j=1}^{\Ns} a_{1,j}k_j;\prmh), \\ & \qquad k_i = \ensuremath{\mathcal{G}}\bigg( \ensuremath{\mathcal{R}}_{U_{\tau-1}}\big(\dt\sum\limits_{j=1}^{i-1} \widehat{a}_{i,j}\widehat{k}_j\big), Z_{\tau-1} + \dt\sum\limits_{j=1}^{\Ns} a_{i,j}k_j;\prmh\bigg), \qquad i=2,\ldots,\Ns,\\ & \qquad \widehat{k}_i = f_{\tau}\bigg(\dt\sum\limits_{j=1}^{i-1} \widehat{a}_{i,j}\widehat{k}_j, Z_{\tau-1} + \dt\sum\limits_{j=1}^{\Ns} a_{i,j}k_j;\prmh\bigg), \qquad\qquad\quad\quad\!\!\! i=2,\ldots,\Ns,\\ & U_{\tau} = \ensuremath{\mathcal{R}}_{U_{\tau-1}}(V_{\tau}). \end{array} \end{equation} Runge--Kutta methods of order 2 and 3 with the aforementioned properties can be characterized in terms of the coefficients $P_Z$ and $\widehat{P}_U$ as in the following result. \begin{lemma}\label{lemma:pRKcond} Consider the numerical approximation of \eqref{eq:PRK-ZV} with the $\Ns$-stage partitioned Runge--Kutta method \eqref{eq:PRK-ZVh} obtained by coupling the Runge--Kutta methods $P_Z=(\{b_i\}_{i=1}^{\Ns},\{a_{ij}\}_{i,j=1}^{\Ns})$ and $\widehat{P}_U=(\{\widehat{b}_i\}_{i=1}^{\Ns},\{\widehat{a}_{ij}\}_{1\leq j<i\leq \Ns})$. Then, the following statements hold. \begin{itemize} \item \emph{Symplectic condition} \cite[Theorem VI.4.3]{HaLuWa06}. The Runge--Kutta method $P_Z$ is symplectic if \begin{equation}\label{eq:symCond} b_ia_{ij}+b_ja_{ji}=b_ib_j,\qquad \forall\;i,j=1,\ldots,\Ns. \end{equation} \item \emph{Order condition} \cite[Theorem II.2.13]{HNW93}. The Runge--Kutta method $P_Z$ has order $k$, with \begin{align} & k=2\quad\mbox{iff}\qquad\sum_{i=1}^{\Ns} b_i=1,\quad \sum_{i,j=1}^{\Ns} b_i a_{ij}=\dfrac12;\label{eq:RKp2}\\ & k=3\quad\mbox{iff}\qquad\sum_{i=1}^{\Ns} b_i=1,\quad \sum_{i,j=1}^{\Ns} b_i a_{ij}=\dfrac12,\quad \sum_{i=1}^{\Ns} b_i \bigg(\sum_{j=1}^{\Ns} a_{ij}\bigg)^2 = \dfrac13,\quad \sum_{i,j,\ell=1}^{\Ns}b_i a_{ij}a_{j\ell} = \dfrac16.\label{eq:RKp3} \end{align} \item \emph{Coupling condition} \cite[Section III.2.2]{HaLuWa06}. The partitioned Runge--Kutta method $(P_Z,\widehat{P}_U)$ has order $k$, if $P_Z$ and $\widehat{P}_U$ are both of order $k$ and \begin{align} & k=2\quad\mbox{if}\qquad\sum_{i=1}^{\Ns}\sum_{j=1}^{i-1} b_i \widehat{a}_{ij} = \dfrac12,\qquad \sum_{i=1}^{\Ns}\sum_{j=1}^{\Ns} \widehat{b}_i a_{ij} = \dfrac12; \label{eq:pRKp2}\\ & k=3\quad\mbox{if}\qquad\sum_{j=1}^{\Ns} a_{ij} = \sum_{j=1}^{i-1} \widehat{a}_{ij},\qquad \sum_{i,\ell=1}^{\Ns}\sum_{j=1}^{i-1} b_i \widehat{a}_{ij}a_{j\ell} = \dfrac16,\qquad \sum_{i,j,\ell=1}^{\Ns} \widehat{b}_i a_{ij}a_{j\ell} = \dfrac16.\label{eq:pRKp3} \end{align} \end{itemize} \end{lemma} Partitioned Runge--Kutta methods of order 2 and 3 can be derived as described in \Cref{app:pRK}. \section{Reduced dynamics under rank-deficiency} \label{sec:evol-rank-deficient} By rank-deficient reduced dynamics we indicate the evolution problem resulting from model order reduction in situations of overapproximation. 
More specifically, as pointed out in \cite[Section 5.3]{KL07}, this might happen when a full model solution with effective rank $r<\Nrh$ is approximated, via a dynamical low-rank technique, by a rank-$\Nrh$ matrix. In this situation it is not clear how the effective rank of the reduced solution will evolve over time: in each temporal interval $\ensuremath{{\Tcal_{\tau}}}$, the dynamics may not remain on the reduced manifold $\ensuremath{\mathcal{M}}_{\Nrt}$ and the matrix $S(Z):=ZZ^\top+\J{\Nrt}^\top ZZ^\top\J{\Nrt}$ may become singular or severely ill conditioned. This happens, for example, when the full model state at time $t_0$ is approximated with a rank-deficient matrix, or, as we will see in the rank-adaptive algorithm in \Cref{sec:rank-adaptivity}, when the reduced solution at a fixed time is used as initial condition to evolve the reduced system on a manifold of states with increased rank. In this Section, we propose an algorithm to deal with the overapproximation while maintaining the geometric structure of the Hamiltonian dynamics and of the factors $U$ and $Z$ in \eqref{eq:Rrb}. \begin{lemma}[Characterization of the matrix $S$]\label{lem:S} Let $S:=ZZ^\top+\J{\Nr}^\top ZZ^\top\J{\Nr}\in\R{\Nr}{\Nr}$ with $Z\in\R{\Nr}{\Np}$ and $\Np\geq \Nr$. Then $S$ is symmetric positive semi-definite and skew-Hamiltonian, namely $S\J{\Nr}-\J{\Nr}S^\top=0$. Moreover, if $S$ has rank $\Nr$ then $S$ is non-singular and $S^{-1}$ is also skew-Hamiltonian. In particular, the null space of $S$ is even dimensional and contains all pairs of vectors $(v,\J{\Nr}v)\in\r{\Nr}\times\r{\Nr}$ such that both $v$ and $\J{\Nr}v$ belong to the null space of $Z^\top$. \end{lemma} \begin{proof} It can be easily verified that $S$ is symmetric positive semi-definite and skew-Hamiltonian. Any eigenvalue of a skew-Hamiltonian matrix has even multiplicity, hence the null space of $S$ has even dimension. Since $S$ is positive semi-definite, $v\in\ker{S}$ if and only if $ZZ^\top v=0$ and $ZZ^\top\J{\Nr}v=0$, that is $\ker{S}=\ker{Z^\top}\cap\ker{Z^\top\J{\Nr}}$. Observe that all the elements $v$ of the kernel of $Z^\top$ are such that $\J{\Nr}^\top v\in\ker{Z^\top\J{\Nr}}$. \end{proof} \begin{comment} \smallskip \noindent \textbf{Under which conditions on $Z$ the full-rank condition is satisfied?}\\ \textbf{Fact}: Let $\Np\geq \Nr$. If $\rank{Z}=\Nr$ i.e. $Z$ is full rank, then the full rank condition is satisfied.\\ Indeed the matrix $C:=Z^\top Z\in\R{\Nr}{\Nr}$ is symmetric positive semi-definite, and if $Z$ is full rank then $C$ is positive definite. The same holds for $\J{\Nr}^\top C\J{\Nr}$ and for the sum $S:=C+\J{\Nr}^\top C\J{\Nr}$. A symmetric positive definite matrix is full rank.\\[1ex] % To have $S$ full rank one could show/impose that $\ker{C}\cap\ker{\J{\Nr}^\top C\J{\Nr}}=\emptyset$. Alternatively, using some isomorphism with the complex matrices, we have that $\rank{S}=\Nr$ if and only if $\rank{Y}=\Nrh$ where $Y:=Z_q+ i Z_p\in\C{\Nrh}{\Nrh}$ and $Z=[Z_q | Z_p]$. \cp{How is the reduced solution characterized when the full rank condition is not satisfied?} \end{comment} In addition to the algebraic limitations associated with the solution of a rank-deficient system, the fact that the matrix $S$ might be singular or ill conditioned prevents the reduced basis from evolving on the manifold of the orthosymplectic matrices. 
To show this, let $\ensuremath{\mathcal{F}}(\cdot,\cdot;\prmh):\R{\Nf}{\Nrt}\times \ensuremath{{\Zcal_{\tau}}}\rightarrow\R{\Nf}{\Nrt}$ denote the velocity field of the evolution \eqref{eq:U} of the reduced basis, namely \begin{equation}\label{eq:Fcal} \ensuremath{\mathcal{F}}(U,Z;\prmh) := (\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top)S^{-1}, \qquad\forall\, U\in\R{\Nf}{\Nrt},\, Z\in\ensuremath{{\Zcal_{\tau}}}. \end{equation} As shown in \cite[Proposition 4.3]{P19}, if $U(t^{\ensuremath{{\tau-1}}})\in\ensuremath{{\Ucal_{\tau}}}$ then the solution $U(t)\in\R{\Nf}{\Nrt}$ of \eqref{eq:U} in $\ensuremath{{\Tcal_{\tau}}}$ satisfies $U(t)\in \ensuremath{{\Ucal_{\tau}}}$ for all $t\in\ensuremath{{\Tcal_{\tau}}}$, owing to the fact that $\ensuremath{\mathcal{F}}(U,Z;\prmh)$ belongs to the horizontal space $H_U:=\{X_U\in\R{\Nf}{\Nrt}:\; X_U^\top U=0,\,X_U\J{\Nrt}=\J{\Nf}X_U\}$. \begin{lemma}\label{lem:FinTM} The function $\ensuremath{\mathcal{F}}(\cdot,\cdot;\prmh):\R{\Nf}{\Nrt}\times \ensuremath{{\Zcal_{\tau}}}\rightarrow\R{\Nf}{\Nrt}$ defined in \eqref{eq:Fcal} is such that $\ensuremath{\mathcal{F}}(U,Z;\prmh)\in H_U$ if and only if $U\in\ensuremath{{\Ucal_{\tau}}}$ and $Z\in \ensuremath{{\Zcal_{\tau}}}$. \end{lemma} \begin{proof} Let $X_U:=\ensuremath{\mathcal{F}}(U,Z;\prmh)=(\ensuremath{I}_{\Nf}-UU^\top) AS^{-1}$, where $A:=\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top$. The condition $X_U^\top U=0$ is satisfied for every $U\in\R{\Nf}{\Nrt}$ orthogonal and $Z\in\R{\Nrt}{\Np}$. Concerning the second condition, it can be easily shown, \emph{cf.} \cite[Proposition 4.3]{P19}, that $A=\J{\Nf}A\J{\Nrt}^\top$ and $\J{\Nf}(\ensuremath{I}_{\Nf}-UU^\top) = (\ensuremath{I}_{\Nf}-UU^\top)\J{\Nf}$. Hence, $\J{\Nf}X_U = (\ensuremath{I}_{\Nf}-UU^\top)A \J{\Nrt}S^{-1}$ and this is equal to $X_U\J{\Nrt}$ if and only if $\J{\Nrt}S^{-1}=S^{-1}\J{\Nrt}$. This condition follows from \Cref{lem:S}. \end{proof} \Cref{lem:FinTM} can be equivalently stated by considering the velocity field $\ensuremath{\mathcal{F}}$ as a function of the triple $(U,Z,S(Z))$. Then $\ensuremath{\mathcal{F}}(U,Z,S(Z);\prmh)$ belongs to $H_U$ if and only if $U\in\ensuremath{{\Ucal_{\tau}}}$, $Z\in\R{\Nrt}{\Np}$ and $S(Z)$ is non-singular, symmetric and skew-Hamiltonian. If the matrix $S$ is not invertible, i.e. $Z\notin\ensuremath{{\Zcal_{\tau}}}$, its inverse needs to be replaced by some approximation $S^\dagger$. By \Cref{lem:FinTM}, if $S^\dagger$ is not symmetric skew-Hamiltonian, then $\ensuremath{\mathcal{F}}^\dagger(U,Z;\prmh):=(\ensuremath{I}_{\Nf}-UU^\top) AS^\dagger$ no longer belongs to the horizontal space $H_U$. If, for example, $S^\dagger$ is the pseudoinverse of $S$, then the above condition is theoretically satisfied, but in numerical computations only up to a small error, because, if $S$ is rank-deficient, then its pseudoinverse corresponds to the pseudoinverse of the truncated SVD of $S$. To overcome these issues in the numerical solution of the reduced dynamics \eqref{eq:UZred}, we introduce two approximations: first we replace the rank-deficient matrix $S$ with an $\ensuremath{\varepsilon}$-regularization that preserves the skew-Hamiltonian structure of $S$ and then, in finite precision arithmetic, we set as velocity field for the evolution of the reduced basis $U$ an approximation of $\ensuremath{\mathcal{F}}$ in the space $H_{U(t)}$, for all $t\in\ensuremath{{\Tcal_{\tau}}}$. 
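The onset of this degeneracy is easy to reproduce numerically before turning to the regularization described next. The following sketch in Python (dimensions are illustrative) assembles $S(Z)$ for a full-rank and for a rank-one coefficient matrix, checks the structural properties of \Cref{lem:S}, and shows the blow-up of the condition number in the overapproximation regime.
\begin{verbatim}
import numpy as np

def Jmat(m):
    k = m // 2
    return np.block([[np.zeros((k, k)), np.eye(k)],
                     [-np.eye(k), np.zeros((k, k))]])

def S_of_Z(Z):
    # S(Z) = Z Z^T + J^T Z Z^T J: symmetric, positive semi-definite
    # and skew-Hamiltonian
    J = Jmat(Z.shape[0])
    C = Z @ Z.T
    return C + J.T @ C @ J

rng = np.random.default_rng(0)
two_n, p = 6, 40
Z_full = rng.normal(size=(two_n, p))                            # full effective rank
Z_rank1 = np.outer(rng.normal(size=two_n), rng.normal(size=p))  # overapproximation

J = Jmat(two_n)
for Z in (Z_full, Z_rank1):
    S = S_of_Z(Z)
    assert np.allclose(S, S.T) and np.allclose(S @ J, J @ S.T)
    print(np.linalg.cond(S))   # moderate vs. huge: S is numerically singular
\end{verbatim}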
To overcome these issues in the numerical solution of the reduced dynamics \eqref{eq:UZred}, we introduce two approximations: first, we replace the rank-deficient matrix $S$ with an $\ensuremath{\varepsilon}$-regularization that preserves the skew-Hamiltonian structure of $S$; then, in finite precision arithmetic, we set as velocity field for the evolution of the reduced basis $U$ an approximation of $\ensuremath{\mathcal{F}}$ in the space $H_{U(t)}$, for all $t\in\ensuremath{{\Tcal_{\tau}}}$.

The $\ensuremath{\varepsilon}$-regularization consists in diagonalizing $S$ and then replacing, in the resulting diagonal factor, the elements below a certain threshold with a fixed value $\ensuremath{\varepsilon}\in\r{}$. This is possible since (real) symmetric matrices are always diagonalizable by orthogonal transformations. However, a generic orthogonal transformation does not preserve the skew-Hamiltonian structure. We therefore consider the following Paige--Van Loan (PVL) decomposition, based on symplectic similarity transformations. \begin{lemma}[{\cite{VanLoan84}}]\label{lem:PVL} Given a skew-Hamiltonian matrix $S\in\R{\Nr}{\Nr}$ there exists a symplectic orthogonal matrix $W\in\R{\Nr}{\Nr}$ such that $W^\top S W$ has the PVL form
\begin{equation}\label{eq:PVL}
W^\top S W = \begin{pmatrix} S_{\Nrh} & R\\ & S_{\Nrh}^\top \end{pmatrix},
\end{equation}
where $S_{\Nrh}\in\R{\Nrh}{\Nrh}$ is an upper Hessenberg matrix. \end{lemma} In our case, since the matrix $S$ is symmetric, its PVL decomposition \eqref{eq:PVL} yields a block-diagonal matrix with identical symmetric tridiagonal blocks $S_{\Nrht}=S_{\Nrht}^\top$. We further diagonalize $S_{\Nrht}$ using orthogonal transformations to obtain $S_{\Nrht}= T^\top D_{\Nrht}T$, with $T^\top T=\ensuremath{I}_{\Nrht}$ and diagonal $D_{\Nrht}\in\R{\Nrht}{\Nrht}$. Hence,
\begin{equation*}
S = W\begin{pmatrix} T^\top D_{\Nrht}T & \\ & T^\top D_{\Nrht}T \end{pmatrix} W^\top=:QDQ^\top,\;\mbox{ with } \; Q:=W\begin{pmatrix} T^\top &\\ & T^\top \end{pmatrix}, \quad D:=\begin{pmatrix} D_{\Nrht} & \\ & D_{\Nrht} \end{pmatrix}.
\end{equation*}
It can be easily verified that $Q\in\R{\Nrt}{\Nrt}$ is orthogonal and symplectic. The PVL factorization of \Cref{lem:PVL} can be implemented as in, e.g., \cite[Algorithms 1 and 2]{BKM05}, with arithmetic complexity $O(\Nrht^3)$. The factorization is based on orthogonal symplectic transformations obtained from Givens rotations and symplectic Householder matrices, defined as the direct sum of Householder reflections \cite{PVL81}. Once the matrix $S$ has been brought into PVL form, we perform the $\ensuremath{\varepsilon}$-regularization. Introduce the diagonal matrix $D_{\Nrht,\ensuremath{\varepsilon}}\in\R{\Nrht}{\Nrht}$ defined as
\begin{equation*}
(D_{\Nrht,\ensuremath{\varepsilon}})_i = \left\{ \begin{array}{ll} (D_{\Nrht})_i & \mbox{if}\; (D_{\Nrht})_i>\ensuremath{\varepsilon}\\ \ensuremath{\varepsilon} & \mbox{otherwise}, \end{array}\right. \qquad \forall\, 1\leq i\leq \Nrht,
\end{equation*}
and let us denote by $D_{\ensuremath{\varepsilon}}\in\R{\Nrt}{\Nrt}$ the diagonal matrix composed of two blocks, both equal to $D_{\Nrht,\ensuremath{\varepsilon}}$. The matrix $\ensuremath{S_{\varepsilon}}:=Q D_{\ensuremath{\varepsilon}} Q^\top \in\R{\Nrt}{\Nrt}$ is symmetric positive definite and skew-Hamiltonian. Its distance to $S$ is bounded, in the Frobenius norm, as $\norm{S-\ensuremath{S_{\varepsilon}}} = \norm{Q(D-D_{\ensuremath{\varepsilon}})Q^\top} = \norm{D-D_{\ensuremath{\varepsilon}}}\leq \sqrt{2m_{\ensuremath{\varepsilon}}}\,\ensuremath{\varepsilon}$, where $m_{\ensuremath{\varepsilon}}$ is the number of elements of $D_{\Nrht}$ that are smaller than $\ensuremath{\varepsilon}$ (each such element appears in both diagonal blocks of $D$). Since the $\ensuremath{\varepsilon}$-regularized matrix $\ensuremath{S_{\varepsilon}}$ is symmetric positive definite, its inverse exists and is skew-Hamiltonian.
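In the following simplified sketch we mimic the $\ensuremath{\varepsilon}$-regularization with a standard symmetric eigendecomposition in place of the PVL factorization: since the clamped matrix is a spectral function of $S$, it commutes with $\J{\Nrt}$ whenever $S$ does, so symmetry and skew-Hamiltonianity are preserved in exact arithmetic (the PVL route additionally enforces the paired block structure in floating-point arithmetic). All names are illustrative.
\begin{verbatim}
# Simplified eps-regularization: clamp the spectrum of S from below.
import numpy as np

def poisson(n):
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

def regularize(S, eps):
    """Return (S_eps, number of clamped eigenvalues)."""
    d, Q = np.linalg.eigh(S)
    return (Q * np.maximum(d, eps)) @ Q.T, int(np.sum(d < eps))

rng = np.random.default_rng(2)
n, p, eps = 4, 10, 1e-6
J = poisson(n)
v = rng.standard_normal(2 * n)
B = np.column_stack([v, J @ v])
Z = (np.eye(2 * n) - B @ np.linalg.pinv(B)) @ rng.standard_normal((2 * n, p))
S = Z @ Z.T + J.T @ Z @ Z.T @ J                # singular by construction

S_eps, m = regularize(S, eps)
assert np.allclose(S_eps, S_eps.T)             # symmetric positive definite
assert np.allclose(S_eps @ J, J @ S_eps.T)     # still skew-Hamiltonian
assert np.linalg.norm(S - S_eps) <= np.sqrt(m) * eps + 1e-12
\end{verbatim}
Here the counter $m$ includes both copies of each clamped eigenvalue, so the last check is the code-level analogue of the bound $\sqrt{2m_{\ensuremath{\varepsilon}}}\,\ensuremath{\varepsilon}$ above.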
Since $\ensuremath{S_{\varepsilon}}$ is invertible and its inverse is skew-Hamiltonian, we can construct the vector field $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}:=(\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top)\ensuremath{S_{\varepsilon}}^{-1}\in\R{\Nf}{\Nrt}$, which belongs to the tangent space of the manifold of orthosymplectic $\Nf\times\Nrt$ matrices. To gauge the error introduced by approximating the velocity field $\ensuremath{\mathcal{F}}$ in \eqref{eq:Fcal} with $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$, let us denote by $\ensuremath{\mathcal{L}}$ the operator $\ensuremath{\mathcal{L}}:=(\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top)$, so that \eqref{eq:U} reads $\dot{U}S=\ensuremath{\mathcal{L}}$. Then, the error introduced in the evolution of the reduced basis \eqref{eq:U} by the $\ensuremath{\varepsilon}$-regularization is
\begin{equation*}
\begin{aligned}
\norm{\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}S-\ensuremath{\mathcal{L}}} & = \norm{\ensuremath{\mathcal{L}}(\ensuremath{S_{\varepsilon}}^{-1}S-\ensuremath{I}_{\Nrt})} = \norm{\ensuremath{\mathcal{L}} Q(D_{\ensuremath{\varepsilon}}^{-1}D-\ensuremath{I}_{\Nrt})Q^\top}\\
& \leq\norm{\ensuremath{\mathcal{L}}}\norm{D_{\ensuremath{\varepsilon}}^{-1}D-\ensuremath{I}_{\Nrt}} = \frac{\sqrt{2}}{\ensuremath{\varepsilon}}\,\norm{\ensuremath{\mathcal{L}}}\, \sqrt{\sum_{j=\Nrht-m_{\ensuremath{\varepsilon}}+1}^{\Nrht}|D_j-\ensuremath{\varepsilon}|^2}\,.
\end{aligned}
\end{equation*}
Observe that the resulting vector field $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$ belongs to the space $H_U$ by construction. However, in finite precision arithmetic, the distance of the computed $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$ from $H_U$ might be affected by a small error that depends on the norms of the operators $\ensuremath{\mathcal{L}}$ and $\ensuremath{S_{\varepsilon}}$. This rounding error can affect the symplecticity of the reduced basis over time whenever the matrix $S$ is severely ill-conditioned. To guarantee that the evolution of the reduced basis computed in finite precision remains on the manifold of orthosymplectic matrices with an error of the order of machine precision, we introduce a correction of the velocity field $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$. Observe that any $X_U\in H_U$ is of the form $X_U=[F|\J{\Nf}^\top F]$, with $F\in\R{\Nf}{\Nrht}$ satisfying $U^\top F=0_{\Nrt\times \Nrht}$. Let us write $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$ as $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}=[F|G]$, with $F^\top=[F_1^\top|F_2^\top]\in\R{\Nrht}{\Nf}$ and $G^\top=[G_1^\top|G_2^\top]\in\R{\Nrht}{\Nf}$. Since $U^\top \ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}=[U^\top F|U^\top G]=0_{\Nrt\times\Nrt}$, we can take $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}:=[F|\J{\Nf}^\top F]$. Alternatively, we can define $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}:=[W|\J{\Nf}^\top W]$, where $W^\top=[X^\top|-Y^\top]\in\R{\Nrht}{\Nf}$ and $2 X:=F_1+G_2$, $2Y:=G_1-F_2$; this second choice is the Frobenius-orthogonal projection of $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}$ onto the linear space of matrices $X$ satisfying $X\J{\Nrt}=\J{\Nf}X$. A minimal sketch of this correction is given below.
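The projection-based variant admits the compact form $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}=\tfrac12(\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}+\J{\Nf}\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}\J{\Nrt}^\top)$, which the following sketch implements and tests (names and data are illustrative).
\begin{verbatim}
# Enforce the pairing structure: project M = [F|G] onto {X : X J_2n = J_2N X}.
import numpy as np

def poisson(n):
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[O, I], [-I, O]])

def enforce_pairing(M, n):
    """Frobenius-orthogonal projection of M onto the paired subspace."""
    N = M.shape[0] // 2
    return 0.5 * (M + poisson(N) @ M @ poisson(n).T)

rng = np.random.default_rng(3)
N, n = 20, 3
M = rng.standard_normal((2 * N, 2 * n))       # stand-in for F_eps = [F|G]
X = enforce_pairing(M, n)
JN, Jn = poisson(N), poisson(n)
assert np.allclose(X @ Jn, JN @ X)            # pairing constraint satisfied
assert np.allclose(enforce_pairing(X, n), X)  # projection is idempotent
\end{verbatim}
Note that the projection also preserves the condition $U^\top X_U=0$ whenever $U$ is orthosymplectic, since $U^\top\J{\Nf}=\J{\Nrt}U^\top$.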
It easily follows that, with either definition, $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}$ belongs to $H_U$. The error in the Frobenius norm is
\begin{equation*}
\norm{\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}-\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}}^2 = \norm{G-\J{\Nf}^\top F}^2
\end{equation*}
for the first choice, while the projection-based choice yields
\begin{equation*}
\norm{\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}-\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}}^2 = \dfrac14 \norm{\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}\J{\Nrt}-\J{\Nf}\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}}^2 = \dfrac12\norm{G-\J{\Nf}^\top F}^2.
\end{equation*}
We summarize the regularization scheme in \Cref{algo:reg}.
\begin{algorithm}
\caption{$\ensuremath{\varepsilon}$-regularization}\label{algo:reg}
\begin{algorithmic}[1]
\Procedure{\textsc{Regularization}}{$U\in\ensuremath{{\Ucal_{\tau}}}, Z\in\R{\Nrt}{\Np},\ensuremath{\varepsilon}$}
\State Compute $S \gets ZZ^\top+\J{\Nrt}^\top ZZ^\top\J{\Nrt}$
\If{$\rank{S}<\Nrt$}
\State Compute the PVL factorization $Q D Q^\top = S$
\State Set $\ensuremath{S_{\varepsilon}}\gets QD_{\ensuremath{\varepsilon}}Q^\top$ where $D_{\ensuremath{\varepsilon}}$ is the $\ensuremath{\varepsilon}$-regularization of $D$
\State Compute $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon}}\gets(\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top)\ensuremath{S_{\varepsilon}}^{-1}$ \label{line:Fe}
\State Compute $\ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}$ by enforcing the skew-Hamiltonian constraint
\State Set $\ensuremath{\mathcal{F}}\gets \ensuremath{\mathcal{F}}_{\ensuremath{\varepsilon},\star}$
\Else
\State Compute $\ensuremath{\mathcal{F}}\gets(\ensuremath{I}_{\Nf}-UU^\top)(\J{\Nf}YZ^{\top} - YZ^{\top}\J{\Nrt}^\top)S^{-1}$ \label{line:F}
\EndIf
\State \Return velocity field $\ensuremath{\mathcal{F}}\in H_U$
\EndProcedure
\end{algorithmic}
\end{algorithm}

\section{Rank-adaptivity}\label{sec:rank-adaptivity}
In this Section, we propose a rank-adaptive algorithm that enables the update of the size of the reduced manifold at the end of each temporal interval. The method is summarized in \Cref{algo:rank-update}. Here we focus on the case where the current rank of the reduced solution is too small to accurately reproduce the full model solution. In cases where the rank is too large, one can perform an $\ensuremath{\varepsilon}$-regularization following \Cref{algo:reg}, or decrease the rank by inspecting the spectrum of the reduced state and removing the modes associated with the smallest singular values.

\subsection{Error indicator}\label{sec:err-indicator}
Error bounds for parabolic problems are long-established and have been widely used to certify global reduced basis methods, \emph{cf.} e.g. \cite{grepl2005posteriori,urban2014improved}. However, their extension to noncoercive problems often results in pessimistic bounds that cannot be used to properly assess the quality of the reduced approximation. Few works have focused on the development of error estimates (rather than bounds) for reduced solutions of advection-dominated problems. In this work, we propose an error indicator based on the linearized residual of the full model. A related approach, known as the Dual-Weighted Residual (DWR) method \cite{meyer2003efficient}, consists in estimating the approximation error via the dual full model and the linearization of the error in a certain functional of interest (e.g. a surface integral of the solution, stress, displacement, ...). Despite the promising results of this approach, the arbitrariness in the choice of the functional clashes with the goal of having a procedure as general as possible.
We begin with the continuous full model \eqref{eq:HamSystemMatrix} and, for its time integration, we consider the implicit RK scheme used in the temporal discretization of the dynamical system for the expansion coefficients $Z$ in \eqref{eq:PRK-ZVh}, with coefficients $(\{b_i\}_{i=1}^{\Ns},\{a_{ij}\}_{i,j=1}^{\Ns})$. Then, assuming that $\ensuremath{\mathcal{R}}_{\ensuremath{{\tau-1}}}\in\R{\Nf}{\Np}$ is known,
\begin{equation}\label{eqn:start_error_equation}
\begin{array}{ll}
& \ensuremath{\mathcal{R}}_{\tau}=\ensuremath{\mathcal{R}}_{\tau-1} + \dt \sum\limits_{i=1}^{\Ns} b_i k_i, \\[0.5em]
&\qquad k_i = \J{\Nf}\nabla_{\ensuremath{\mathcal{R}}}\ensuremath{\Hcal}\bigg( \ensuremath{\mathcal{R}}_{\tau-1}+\dt \sum\limits_{j=1}^{\Ns} a_{ij}k_j; \prmh \bigg), \qquad i=1,\dots, \Ns.
\end{array}
\end{equation}
The discrete residual operator in the temporal interval $\ensuremath{{\Tcal_{\tau}}}$ is
\begin{equation}\label{eqn:residual_operator}
\ensuremath{\rho}_{\tau}(\ensuremath{\mathcal{R}}_{\tau},\ensuremath{\mathcal{R}}_{\tau-1};\prmh) = \ensuremath{\mathcal{R}}_{\tau}-\ensuremath{\mathcal{R}}_{\tau-1}-\dt \sum_{i=1}^{\Ns}b_i k_i,
\end{equation}
which vanishes when evaluated at the full model solution. We consider the linearization of the residual operator \eqref{eqn:residual_operator} at $\left(R_{\tau},R_{\tau-1}\right)$, where $R_{\tau}$ is the approximate reduced solution at time $t^{\tau}$, obtained from \eqref{eq:PRK-ZVh} as $R_{\tau}=U_{\tau}Z_{\tau}$; this yields
\begin{equation}\label{eqn:linearization_residual}
\begin{array}{lll}
\ensuremath{\rho}_{\tau}(\ensuremath{\mathcal{R}}_{\tau},\ensuremath{\mathcal{R}}_{\tau-1};\prmh) = & \ensuremath{\rho}_{\tau}(R_{\tau},R_{\tau-1};\prmh) + \dfrac{\partial \ensuremath{\rho}_{\tau}}{\partial \ensuremath{\mathcal{R}}_{\tau}} \biggr\rvert_{\left(R_{\tau},R_{\tau-1}\right)} \left( \ensuremath{\mathcal{R}}_{\tau}-R_{\tau}\right)\\[0.5em]
& + \dfrac{\partial \ensuremath{\rho}_{\tau}}{\partial \ensuremath{\mathcal{R}}_{\tau-1}} \biggr\rvert_{\left(R_{\tau},R_{\tau-1}\right)} \left( \ensuremath{\mathcal{R}}_{\tau-1}-R_{\tau-1}\right) + \mathcal{O}\left(\left\| \ensuremath{\mathcal{R}}_{\tau}-R_{\tau}\right\|^2 + \left\| \ensuremath{\mathcal{R}}_{\tau-1}-R_{\tau-1}\right\|^2\right).
\end{array}
\end{equation}
Similar procedures have been adopted in the formulation of piecewise linear methods for the approximation of nonlinear operators, providing accurate approximations in the case of low-order nonlinearities. From the residual operator, an approximation of the local error $\ensuremath{\mathcal{R}}_{\tau}-R_{\tau}$ is given by the matrix-valued quantity $\mathbf{E}_{\tau}$ defined as
\begin{equation}\label{eqn:error_equation}
\mathbf{E}_{\tau} := -\bigg(\dfrac{\partial \ensuremath{\rho}_{\tau}}{\partial \ensuremath{\mathcal{R}}_{\tau}} \biggr\rvert_{\left(R_{\tau},R_{\tau-1}\right)}\bigg)^{-1} \bigg(\ensuremath{\rho}_{\tau}(R_{\tau},R_{\tau-1};\prmh)+ \dfrac{\partial \ensuremath{\rho}_{\tau}}{\partial \ensuremath{\mathcal{R}}_{\tau-1}} \biggr\rvert_{\left(R_{\tau},R_{\tau-1}\right)} \left(\ensuremath{\mathcal{R}}_{\tau-1}-R_{\tau-1}\right)\bigg).
\end{equation}
The quantity defined by \eqref{eqn:error_equation} is a first-order approximation of the error between the reduced and the full model solution. In particular, it quantifies the discrepancy due to the local approximation \eqref{eq:Rrb}.
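To make the construction concrete, the following sketch evaluates $\mathbf{E}_{\tau}$ for the implicit midpoint rule ($\Ns=1$, $a_{11}=1/2$, $b_1=1$) applied to a linear Hamiltonian vector field $x\mapsto \J{\Nf}Ax$ with $A$ symmetric, so that residual and Jacobians are available in closed form; the previous indicator is used as a proxy for the unknown difference $\ensuremath{\mathcal{R}}_{\ensuremath{{\tau-1}}}-R_{\ensuremath{{\tau-1}}}$. All data and names are illustrative, not the implementation used in the paper.
\begin{verbatim}
# First-order error indicator for the implicit midpoint rule (sketch).
import numpy as np

def poisson(N):
    I, O = np.eye(N), np.zeros((N, N))
    return np.block([[O, I], [-I, O]])

def error_indicator(R_new, R_old, E_prev, J, A, dt):
    """E_tau from the linearized residual of the midpoint scheme."""
    JA = J @ A
    I = np.eye(JA.shape[0])
    rho = R_new - R_old - dt * JA @ (R_new + R_old) / 2.0
    d_new = I - 0.5 * dt * JA        # d rho / d R_tau
    d_old = -I - 0.5 * dt * JA       # d rho / d R_{tau-1}
    return -np.linalg.solve(d_new, rho + d_old @ E_prev)

rng = np.random.default_rng(4)
N, p, dt = 5, 8, 1e-2
J = poisson(N)
A = rng.standard_normal((2 * N, 2 * N)); A = A + A.T
R_old = rng.standard_normal((2 * N, p))
# exact midpoint step, then a rank-one defect mimicking the reduction error
step = np.linalg.solve(np.eye(2 * N) - 0.5 * dt * J @ A,
                       (np.eye(2 * N) + 0.5 * dt * J @ A) @ R_old)
defect = 1e-3 * np.outer(rng.standard_normal(2 * N), rng.standard_normal(p))
E = error_indicator(step + defect, R_old, np.zeros((2 * N, p)), J, A, dt)
print(np.linalg.norm(E + defect))  # ~ 0: linear problem, exact linearization
\end{verbatim}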
Even if the linearization error is negligible, the computational cost related to the assembly of the entire full-order residual $\ensuremath{\rho}_{\tau}$ and its Jacobian, together with the solution of a linear system for each of the $\Np$ instances of the parameter $\prmh$, makes the indicator unappealing if used in the context of highly efficient reduced approximations. In \cite{meyer2003efficient}, a hierarchical approach has been proposed to alleviate this computational bottleneck, but it relies on the offline phase to capture the dominant modes of the exact error. Instead, in this work, we solve \eqref{eqn:error_equation} on a subset of the $\Np$ vector-valued parameters $\prmh$ of cardinality $\widetilde{\Np}\ll \Np$, and only at a few time steps during the simulation. Although the assembly and solution of the sparse linear system in \eqref{eqn:error_equation} has, for example, arithmetic complexity $\mathcal{O}(\Nfh^{3/2})$ \cite{george1981computer} for problems originating from the discretization of two-dimensional PDEs, this sampling strategy reduces the computational cost of the error estimator compared to the cost of evolving the reduced basis and the coefficients, as discussed in \Cref{sec:numerical_tests}.

\subsection{Criterion for rank update}\label{sec:criterion_rank_update}
Let $\mathbf{E}_{\tau}\in\R{\Nf}{\Np}$ be the error indicator matrix obtained in \eqref{eqn:error_equation}. To decide when to activate the rank update algorithm, we take into account that, for advection-dominated and hyperbolic problems discretized using spectral methods, the error accumulates, and the effect of unresolved modes on the resolved dynamics contributes to this accumulation \cite{couplet2003intermodal}. Moreover, it has been observed \cite{spantini2013preconditioning} that, for many problems of practical interest, the modes associated with initially negligible singular values might become relevant over time, potentially causing a loss of accuracy if a reduced manifold of fixed dimension is employed. Let $t^{\tau}$ denote the current time, let $t^{*}$ be the last time at which the dimension of the reduced basis $U$ was updated, and let $\lambda_{\tau}$ be the number of past updates at time $t^{\tau}$. At the beginning of the simulation, $t^{*}=t^{0}$ and $\lambda_{0}=0$. The rank update is performed if the ratio between the norms of the error indicators at $t^{\tau}$ and $t^{*}$ satisfies the criterion
\begin{equation}\label{eq:ratio_tmp}
\dfrac{\norm{\mathbf{E}_{\tau}}}{\norm{\mathbf{E}_{*}}} > r c^{\lambda_{\tau}}\,,
\end{equation}
where $r, c\in\mathbb{R}$ are control parameters. The ratio of the norms of the error indicators gives a qualitative indication of how the error grows in time, and \eqref{eq:ratio_tmp} fixes a maximum acceptable growth rate. Deciding what represents an acceptable growth rate is a problem-dependent task, but the numerical results in \Cref{sec:numerical_tests} show little sensitivity of the algorithm with respect to $r$ and $c$. Moreover, the variable $\lambda_{\tau}$ induces frequent rank updates when $n_{\tau}$ is small, and less frequent updates when $n_{\tau}$ is large, hence controlling both the efficiency and the accuracy of the updating algorithm. A minimal sketch of this activation test is given below.
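In code, the activation test reads as follows (a sketch; the default values of the control parameters $r$ and $c$ are illustrative, not the values used in the experiments).
\begin{verbatim}
import numpy as np

def rank_update_needed(E_now, E_ref, lam, r=1.5, c=1.1):
    """True when ||E_tau|| / ||E_*|| > r * c**lam (Frobenius norms).

    lam is the number of rank updates performed so far (lambda_tau in the
    text); r and c are the control parameters of the criterion.
    """
    return np.linalg.norm(E_now) > r * c**lam * np.linalg.norm(E_ref)
\end{verbatim}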
Note that other (combinations of) criteria are possible: one alternative is to check that the norm of the error indicator remains below a fixed threshold; another possibility is to control the norm of some approximate gradient of the error indicator. By numerically testing these various criteria, we observed that, at least in the numerical simulations performed, the criterion \eqref{eq:ratio_tmp} based on the ratio of error indicators is reliable and robust, and offers the greatest flexibility.

\subsection{Update of the reduced state}\label{sec:rank-update}
If criterion \eqref{eq:ratio_tmp} is satisfied, the rank-adaptive algorithm updates the current reduced solution to a new state having a different rank. Specifically, assume that, in the time interval $\ensuremath{\mathcal{T}}_{\ensuremath{{\tau-1}}}$, we have solved the discrete reduced problem \eqref{eq:PRK-ZVh} to obtain the reduced solution $R_{\ensuremath{{\tau-1}}}=U_{\ensuremath{{\tau-1}}} Z_{\ensuremath{{\tau-1}}}$ in $\ensuremath{\mathcal{M}}_{\Nrh_{\ensuremath{{\tau-1}}}}$. As a first step, we derive an updated basis $U\in\ensuremath{{\Ucal_{\tau}}}$ from $U_{\ensuremath{{\tau-1}}}\in\ensuremath{\mathcal{U}}_{\tau-1}$, with $\Nrht=\Nrh_{\tau-1}+1$. To this aim, we enlarge $U_{\ensuremath{{\tau-1}}}$ with two extra columns derived from an approximation of the error, analogously to a greedy strategy; a sketch of this construction is given after the remark below. In greater detail, with the algorithm described in \Cref{sec:err-indicator}, we derive the error matrix $\mathbf{E}_{\tau}$ associated with the reduced solution at the current time. Via a thin SVD, we extract the left singular vector associated with the principal component of the error matrix, normalize it in the $2$-norm to obtain the vector $e\in\r{\Nf}$, and enlarge the basis $U_{\ensuremath{{\tau-1}}}$ with the two columns $[e\,|\,\J{\Nf}^\top e]\in \R{\Nf}{2}$. The rationale for this choice is that we seek to increase the accuracy of the low-rank approximation by adding to the reduced basis the direction that is worst approximated by the current reduced space. Numerical evidence of the improved quality of the updated basis in approximating the full model solution is provided in \Cref{sec:SW1D}. From the updated matrix $[U_{\ensuremath{{\tau-1}}}|\,e\,|\,\J{\Nf}^\top e]\in\R{\Nf}{\Nrt}$, we construct an orthosymplectic basis in the sense of \Cref{def:ortsym} by performing a QR-like decomposition based on symplectic orthogonal transformations. In particular, we employ a symplectic (modified) Gram-Schmidt algorithm \cite{Salam05}, with the possibility of adding reorthogonalization \cite{Giraud03} to enhance the stability and robustness of the algorithm. Once the updated reduced basis $U\in\ensuremath{{\Ucal_{\tau}}}$ is computed, we derive the matrix $Z\in\R{\Nrt}{\Np}$ by expanding the current reduced solution $R_{\ensuremath{{\tau-1}}}$ in the updated basis: the updated $Z$ satisfies $UZ = R_{\ensuremath{{\tau-1}}}$, which results in $Z = U^\top R_{\ensuremath{{\tau-1}}}$.
\begin{remark}
Since the updated reduced state coincides with the reduced solution $R_{\ensuremath{{\tau-1}}}$ at time $t^{\ensuremath{{\tau-1}}}$, all invariants of \eqref{eq:HamSystemMatrix} preserved by the partitioned Runge--Kutta scheme \eqref{eq:PRK-ZVh} are conserved during the rank update.
\end{remark}
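The following sketch illustrates the basis update: the dominant left singular vector $e$ of the error indicator is appended, together with $\J{\Nf}^\top e$, and the result is reorthonormalized by a symplectic (modified) Gram-Schmidt sweep. The column ordering $U=[V\,|\,\J{\Nf}^\top V]$ and all data are assumptions of this illustration.
\begin{verbatim}
# Rank update of the reduced basis via symplectic Gram-Schmidt (sketch).
import numpy as np

def poisson(N):
    I, O = np.eye(N), np.zeros((N, N))
    return np.block([[O, I], [-I, O]])

def sgs_append(V, w, J):
    """Orthogonalize w against span{V, J^T V} and append it to V."""
    for u in np.hstack([V, J.T @ V]).T:
        w = w - (u @ w) * u
    return np.column_stack([V, w / np.linalg.norm(w)])

rng = np.random.default_rng(5)
N, n, p = 30, 3, 12
J = poisson(N)
V = np.zeros((2 * N, 0))
for _ in range(n):                        # toy orthosymplectic basis, 2n columns
    V = sgs_append(V, rng.standard_normal(2 * N), J)

E = rng.standard_normal((2 * N, p))       # stand-in for the error indicator
e = np.linalg.svd(E, full_matrices=False)[0][:, 0]   # principal error direction
V = sgs_append(V, e, J)                   # enlargement: 2n -> 2n + 2 columns
U = np.hstack([V, J.T @ V])               # updated reduced basis
assert np.allclose(U.T @ U, np.eye(2 * (n + 1)))     # orthonormal columns
assert np.allclose(U @ poisson(n + 1), J @ U)        # symplectic compatibility
\end{verbatim}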
Observe that, even if the current reduced state $R_{\ensuremath{{\tau-1}}}$ is in $\ensuremath{\mathcal{M}}_{\ensuremath{{2n_{\tau}{-}2}}}$, it does not belong to the manifold $\ensuremath{\mathcal{M}}_{\Nrt}$. Indeed, one easily shows that $Z=U^\top R_{\ensuremath{{\tau-1}}}\in\R{\Nrt}{\Np}$ does not satisfy the full-rank condition:
\begin{equation*}
\begin{aligned}
\rank{S(Z)} & =\rank{U^\top U_{\ensuremath{{\tau-1}}} [Z_{\ensuremath{{\tau-1}}} Z_{\ensuremath{{\tau-1}}}^\top + \J{\ensuremath{{2n_{\tau}{-}2}}}^\top Z_{\ensuremath{{\tau-1}}} Z_{\ensuremath{{\tau-1}}}^\top \J{\ensuremath{{2n_{\tau}{-}2}}}] U_{\ensuremath{{\tau-1}}}^\top U}\\
&\leq \min\{\rank{U^\top U_{\ensuremath{{\tau-1}}}}, \rank{Z_{\ensuremath{{\tau-1}}} Z_{\ensuremath{{\tau-1}}}^\top + \J{\ensuremath{{2n_{\tau}{-}2}}}^\top Z_{\ensuremath{{\tau-1}}} Z_{\ensuremath{{\tau-1}}}^\top \J{\ensuremath{{2n_{\tau}{-}2}}}}\} \leq \ensuremath{{2n_{\tau}{-}2}}.
\end{aligned}
\end{equation*}
As shown in \Cref{lem:FinTM}, the fact that $Z\notin\ensuremath{{\Zcal_{\tau}}}$ implies that the velocity field $\ensuremath{\mathcal{F}}$ in \eqref{eq:Fcal}, describing the evolution of the reduced basis, is not well-defined. Therefore, we need to introduce an approximate velocity field for the solution of the reduced problem \eqref{eq:UZred} in the temporal interval $\ensuremath{\mathcal{T}}_{\tau}$ with initial conditions $(U,Z)\in\ensuremath{{\Ucal_{\tau}}}\times \R{\Nrt}{\Np}$. We refer to \Cref{sec:evol-rank-deficient} for a discussion of this issue and a description of the algorithm designed to solve the rank-deficient reduced dynamics ensuing from the rank update.
\begin{algorithm}
\caption{Rank update}\label{algo:rank-update}
\begin{algorithmic}[1]
\Procedure{\textsc{Rank\_update}}{$U_{\ensuremath{{\tau-1}}}, Z_{\ensuremath{{\tau-1}}}, \mathbf{E}_*$}
\State Compute the error indicator matrix $\mathbf{E}_{\ensuremath{{\tau-1}}}\in\R{\Nf}{\Np}$ via \eqref{eqn:error_equation}
\If{criterion \eqref{eq:ratio_tmp} is satisfied}
\State Compute $Q\Sigma V^\top = \mathbf{E}_{\ensuremath{{\tau-1}}}$ via thin SVD \label{line:thinSVD}
\State Set $e\gets Q_1/\norm{Q_1}_2$ where $Q_1\in\r{\Nf}$ is the first column of the matrix $Q$
\State Construct the enlarged basis $\overline{U}\gets [U_{\ensuremath{{\tau-1}}}|\,e\,|\,\J{\Nf}^\top e]\in\R{\Nf}{(2\Nrh_{\ensuremath{{\tau-1}}}+2)}$
\State Compute $U$ via symplectic orthogonalization of $\overline{U}$ with symplectic Gram-Schmidt
\State Compute the coefficients $Z\gets U^\top U_{\ensuremath{{\tau-1}}}Z_{\ensuremath{{\tau-1}}}$ \label{line:Z}
\State Set $\Nrht=\Nrh_{\ensuremath{{\tau-1}}}+1$
\Else
\State $U\gets U_{\ensuremath{{\tau-1}}}$, $Z\gets Z_{\ensuremath{{\tau-1}}}$ and $\Nrht=\Nrh_{\ensuremath{{\tau-1}}}$
\EndIf
\State \Return updated factors $(U,Z)\in\ensuremath{{\Ucal_{\tau}}}\times \R{\Nrt}{\Np}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Approximation properties of the rank-adaptive scheme}
To gauge the local approximation properties of the rank-adaptive scheme for the solution of the reduced dynamical system \eqref{eq:UZred}, we consider the temporal interval $\ensuremath{{\Tcal_{\tau}}}$ where the first rank update is performed. In other words, assume that $R_{\ensuremath{{\tau-1}}}=U_{\ensuremath{{\tau-1}}}Z_{\ensuremath{{\tau-1}}}$, with $(U_{\ensuremath{{\tau-1}}},Z_{\ensuremath{{\tau-1}}})\in\ensuremath{\mathcal{U}}_{\ensuremath{{\tau-1}}}\times\ensuremath{\mathcal{Z}}_{\ensuremath{{\tau-1}}}$, is the numerical approximation of the solution $R(t^{\ensuremath{{\tau-1}}})\in\ensuremath{\mathcal{M}}_{\Nr_{\ensuremath{{\tau-1}}}}$ of the reduced dynamical system \eqref{eq:dynn} at time $t^{\ensuremath{{\tau-1}}}$, with $\Nrh_{\ensuremath{{\tau-1}}}=\Nrh_{\tau-2}=\ldots=\Nrh_1$.
After the rank update at time $t^{\ensuremath{{\tau-1}}}$, the reduced state $R$ satisfies the local evolution problem
\begin{equation}\label{eq:R}
\left\{
\begin{array}{ll}
\dot{R}(t) =\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R(t),\prm_h),\qquad\quad\mbox{for }\; t\in\ensuremath{{\Tcal_{\tau}}},\\
R(t^{\ensuremath{{\tau-1}}}) = R_{\ensuremath{{\tau-1}}}=U_{\ensuremath{{\tau-1}}}^{\Nrht}Z_{\ensuremath{{\tau-1}}}^{\Nrht},
\end{array}\right.
\end{equation}
where $(U_{\ensuremath{{\tau-1}}}^{\Nrht},Z_{\ensuremath{{\tau-1}}}^{\Nrht})\in\ensuremath{{\Ucal_{\tau}}}\times\R{\Nrt}{\Np}$ are the rank-updated factors, and
\begin{equation*}
\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}:= (\ensuremath{I}_{\Nf}-UU^\top)(\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}Z^\top + \J{\Nf}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}Z^\top\J{\Nrt}^\top)\ensuremath{S_{\varepsilon}}(Z)^{-1}Z+ UU^\top \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}},\qquad\forall\, R=UZ\in\R{\Nf}{\Np}.
\end{equation*}
We make the assumption that the reduced problem \eqref{eq:dynn} is well-posed. Let $\ensuremath{\mathcal{R}}(t)\in\ensuremath{\mathcal{V}}^{\Np}_{\Nf}$ be the full model solution of problem \eqref{eq:HamSystemMatrix} in the temporal interval $\ensuremath{{\Tcal_{\tau}}}$ with given initial condition $\ensuremath{\mathcal{R}}(t^{\ensuremath{{\tau-1}}})$. The error between the approximate reduced solution of \eqref{eq:R} and the full model solution at time $t^{\tau}\in\ensuremath{\mathcal{T}}$ is given by
\begin{equation*}
R_{\tau}-\ensuremath{\mathcal{R}}(t^{\tau}) = \big(R_{\tau}-R(t^{\tau})\big) + \big(R(t^{\tau})-\ensuremath{\mathcal{R}}(t^{\tau})\big).
\end{equation*}
The quantity $e^{\tau}_{\textrm{A}}:=R_{\tau}-R(t^{\tau})$ is the approximation error associated with the partitioned Runge--Kutta discretization scheme, and can be treated using standard convergence analysis techniques, in light of the fact that the retraction map is Lipschitz continuous in the Frobenius norm, as shown in \cite[Proposition 5.7]{P19}. The term $e_{\textrm{RA}}(t):=R(t)-\ensuremath{\mathcal{R}}(t)$, for any $t\in\ensuremath{{\Tcal_{\tau}}}$, is associated with the rank update and can be bounded as
\begin{equation*}
\begin{aligned}
d_t\norm{e_{\textrm{RA}}} & \leq \norm{\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R) - \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\ensuremath{\mathcal{R}})} \leq \norm{\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R}\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R) - \ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R)} + \norm{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R)-\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(\ensuremath{\mathcal{R}})}\\
& \leq L_{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}}\norm{e_{\textrm{RA}}} + \norm{(\ensuremath{I}_{\Nf}-\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R})\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R)},
\end{aligned}
\end{equation*}
where $L_{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}}$ is the Lipschitz continuity constant of $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$.
Gronwall's inequality \cite{Gro19} then gives, for all $t\in\ensuremath{{\Tcal_{\tau}}}$,
\begin{equation}\label{eq:err}
\norm{e_{\textrm{RA}}(t)} \leq \norm{e_{\textrm{RA}}(t^{\ensuremath{{\tau-1}}})}\,e^{L_{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}}(t-t^{\ensuremath{{\tau-1}}})}+ \int_{t^{\ensuremath{{\tau-1}}}}^{t} e^{L_{\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}}(t-s)}\norm{(\ensuremath{I}_{\Nf}-\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R})\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}(R(s))}\,ds.
\end{equation}
Observe that the estimate \eqref{eq:err} depends on the distance between the Hamiltonian vector field at the reduced state and its image under the map $\ensuremath{\mathcal{P}}^{\ensuremath{\varepsilon}}_{R}$ that approximates the orthogonal projection operator onto the tangent space of $\ensuremath{\mathcal{M}}_{\Nrt}$. Although a rigorous bound for this term is not available, we expect that it can be controlled arbitrarily well by increasing the size of the reduced basis, as will also be demonstrated in \Cref{sec:numerical_tests}. Moreover, the estimate \eqref{eq:err}, extended to the whole temporal interval $\ensuremath{\mathcal{T}}$, depends exponentially on the final time $T$. A linear dependence on $T$ can be obtained only in special cases, for example when $\nabla_{\ensuremath{\mathcal{R}}} \ensuremath{\Hcal}$ is uniformly negative monotone.
\section{Computational complexity of the rank-adaptive algorithm}\label{sec:cost}
In this Section, we discuss the computational cost required for the numerical solution of the reduced problem \eqref{eq:UZred} with the rank-adaptive algorithm introduced in \Cref{sec:rank-adaptivity}. In each temporal interval $\ensuremath{{\Tcal_{\tau}}}$, the algorithm consists of two main steps: the evolution step, which entails the repeated evaluation of the velocity fields $\ensuremath{\mathcal{F}}$ and $\ensuremath{\mathcal{G}}$ in \eqref{eq:PRK-ZVh} at each stage of the Runge--Kutta temporal integrator, and the rank update step, which requires the evaluation of the error indicator and the update of the approximate reduced solution at the current time step. The rank update strategy introduced in \Cref{sec:rank-adaptivity}, and summarized in \Cref{algo:rank-update}, has an arithmetic complexity of $O(\Nfh\Np^2)+ O(\Nfh\Nrht^2) + O(\Nfh\Np\Nrht)$, and its computational bottleneck is the computation of the error indicator. As suggested in \Cref{sec:err-indicator}, sub-sampling techniques can be employed to overcome this limitation. The evolution step consists in solving the discrete reduced system \eqref{eq:PRK-ZVh} in each temporal interval.
To understand the computational complexity of this step, we neglect the number of nonlinear iterations required by the implicit temporal integrators for the evolution of the coefficients $Z$. The solution of \eqref{eq:PRK-ZVh} requires the evaluation of four operators: the velocity fields $\ensuremath{\mathcal{G}}$ and $\ensuremath{\mathcal{F}}$, the retraction $\ensuremath{\mathcal{R}}$ and its inverse tangent map $f_{\tau}$. The algorithms proposed in \cite[Section 5.3.1]{P19} for the computation of $\ensuremath{\mathcal{R}}$ and $f_{\tau}$ have arithmetic complexity $O(\Nfh\Nrht^2)$. We denote by $C_{\ensuremath{\Hcal}}=C_{\ensuremath{\Hcal}}(\Nfh,\Nrht,\Np)$ the computational cost of evaluating the gradient of the reduced Hamiltonian at the reduced solution. Finally, the velocity field $\ensuremath{\mathcal{F}}$ is computed via \Cref{algo:reg} with a computational complexity of $O(\Nfh\Nrht\Np)+O(\Nfh\Nrht^2)+ O(\Np\Nrht^2)+O(\Nrht^3)$, plus the cost $C_{\ensuremath{\Hcal}}$ of evaluating the factor $Y$. It follows that the rank-adaptive algorithm for the solution of the reduced system \eqref{eq:PRK-ZV} with a partitioned Runge--Kutta scheme has a computational complexity at most linear in the dimension $\Nfh$ of the full model, provided the cost $C_{\ensuremath{\Hcal}}$ of evaluating the Hamiltonian vector field at the reduced solution is comparable. Concerning the latter, observe that the assembly of the reduced state $R$ from the factors $U$ and $Z$ and the matrix-vector multiplication $U^\top\nabla_R\ensuremath{\Hcal}(R;\prmh)$ require $O(\Nfh\Np\Nrht)$ operations. Therefore, the computational bottleneck of the algorithm is associated with the evaluation of the Hamiltonian gradient at the reduced state $R$. This problem is well-known in model order reduction and emerges whenever reduced models involve non-affine and nonlinear operators, \emph{cf.} e.g. \cite[Chapters 10 and 11]{QMN16}. Several hyper-reduction techniques have been proposed to mitigate or overcome this limitation, resulting in approximations of nonlinear operators that can be evaluated at a cost independent of the size of the full model. However, we are not aware of any hyper-reduction method able to \emph{exactly} preserve the Hamiltonian phase space structure during model reduction. Furthermore, hyper-reduction methods entail an offline phase to learn the low-rank structure of the nonlinear operators by means of snapshots of the full model solution. Compared to traditional \emph{global} model order reduction, in a dynamical reduced basis approach the constraints on the computational complexity of the reduced operators are less severe, since we allow the dimension of the full model to enter, albeit at most linearly, the computational cost of the operations involved. This means that dynamical model order reduction can accommodate Hamiltonian gradients where each vector entry depends only on a few, say $k\ll\Nfh$, components of the reduced solution, with a resulting computational cost of $C_{\ensuremath{\Hcal}}=O(\Nfh\Np\Nrht)+O(k\Nfh\Np)$. This is the case when, for example, the dynamical system \eqref{eq:HamSystem} ensues from a \emph{local} discretization of a partial differential equation in Hamiltonian form. Note that this assumption is also required for the effective application of discrete empirical interpolation methods (DEIM) \cite{ChSo10}.
When dealing with low-order polynomial nonlinearities of the Hamiltonian vector field, we can use tensorial techniques to perform the most expensive operations only once, rather than at each instance of the parameter, as discussed in the following.

\subsection{Efficient treatment of polynomial nonlinearity}\label{sec:nonlinear}
Let us consider the explicit expression of the cost $C_{\ensuremath{\Hcal}}$ for different Hamiltonian functions $\ensuremath{\Hcal}$. If the Hamiltonian vector field $\ensuremath{\mathcal{X}}_{\ensuremath{\Hcal}}$ in \eqref{eq:HamSystemMatrix} is linear, then
\begin{equation*}
\ensuremath{\mathcal{G}}(U,Z;\prmh) = \J{\Nr} U^\top \nabla_R\ensuremath{\Hcal}(R;\prmh) = \J{\Nr} U^\top A UZ,\qquad \forall\, R=UZ\in\ensuremath{\mathcal{M}}_{\Nrt},
\end{equation*}
where $A\in\R{\Nf}{\Nf}$ is a given linear operator, associated with the spatial discretization of the Hamiltonian function $\ensuremath{\Hcal}$. Standard matrix-matrix multiplication to compute $\ensuremath{\mathcal{G}}$ has arithmetic complexity $O(\Nfh\Nrht^2) + O(\Np\Nrht^2) + O(\Nrht k)$, where $k$ is the number of nonzero entries of the matrix $A$. The computational complexity of the algorithm is therefore still linear in $\Nfh$, provided the matrix $A$ is sparse. This is the case in the applications we are interested in, where the Hamiltonian system \eqref{eq:HamSystemMatrix} ensues from a local spatial approximation of a partial differential equation. In the case of low-order polynomial nonlinearities, we use the tensorial representation \cite{cstefuanescu2014comparison} of the nonlinear function and rearrange the order of the computations. The gist of this approach is to exploit the structure of the polynomial nonlinearities to separate the quantities that depend on the dimension of the full model from the reduced variables, by manipulating the order in which the various factors are computed. Consider the evolution equation for the coefficients $Z$ in \eqref{eq:Z} for a single value $\prm_j$ of the parameter $\prmh\in\Sprmh$. The corresponding reduced Hamiltonian vector field can be expressed in the form
\begin{equation}\label{eqn:polynomial_nonlinearity}
\J{\Nr}\nabla_{Z_j}\ensuremath{\Hcal}_U(Z_j;\prm_j)=U^{\top} \J{\Nf}G^{\{q\}}\bigg (\mathop{\bigotimes}\limits_{i=1}^{q}A_{i}UZ_j\bigg) =\underbrace{U^{\top} \J{\Nf}G^{\{q\}}\bigg (\mathop{\bigotimes}\limits_{i=1}^{q}A_{i}U\bigg)}_{\mathcal{G}_U}\underbrace{\bigg( \mathop{\bigotimes}\limits_{i=1}^{q} Z_j \bigg)}_{\mathcal{Z}},
\end{equation}
where $Z_j\in\ensuremath{{\Zcal_{\tau}}}$ with $\Np=1$, $q\in\mathbb{N}$ is the polynomial degree of the nonlinearity, $A_i\in\mathbb{R}^{\Nf\times\Nf}$ are sparse discrete differential operators, $G^{\{q\}}$ represents the matricized $q$-order tensor, and $\otimes$ denotes the Kronecker product. The last expression in \eqref{eqn:polynomial_nonlinearity} separates the computations involving factors of size $\Nfh$ from the reduced coefficients $Z$, so that the matrix $\mathcal{G}_U\in\mathbb{R}^{\Nrt\times (\Nrt)^{q}}$ can be precomputed and reused across parameter instances and nonlinear iterations; a minimal sketch of this separation for a quadratic nonlinearity is given below. In the proposed dynamical reduced basis method, we employ the tensorial POD approach to reduce the computational complexity of the evaluation of $\ensuremath{\mathcal{G}}$, the right-hand side of \eqref{eq:Z}, and of its Jacobian, needed by the implicit symplectic integrator at each time step of the numerical integration.
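As an illustration of this separation, consider a quadratic ($q=2$) nonlinearity of Hadamard type, $f(u)=(A_1u)\circ(A_2u)$: the reduced term $U^\top f(UZ_j)$ equals a small precomputed matrix applied to $Z_j\otimes Z_j$, so the online cost is independent of $\Nf$. The matrices and the form of $f$ are illustrative (the symplectic factor $\J{\Nf}$ can be absorbed into the projection).
\begin{verbatim}
# Tensorial precomputation for a quadratic Hadamard nonlinearity (sketch).
import numpy as np

rng = np.random.default_rng(6)
Nf, n2 = 200, 6                       # full dimension, reduced dimension
U = np.linalg.qr(rng.standard_normal((Nf, n2)))[0]
A1 = rng.standard_normal((Nf, Nf))    # stand-ins for sparse operators A_i
A2 = rng.standard_normal((Nf, Nf))

# offline-type step: the only O(Nf)-sized products, done once per basis U
W1, W2 = A1 @ U, A2 @ U
G_U = np.stack([U.T @ (W1[:, i] * W2[:, j])
                for i in range(n2) for j in range(n2)], axis=1)

# online step: cost independent of Nf, one small product per vector z
z = rng.standard_normal(n2)
online = G_U @ np.kron(z, z)
direct = U.T @ ((A1 @ (U @ z)) * (A2 @ (U @ z)))
assert np.allclose(online, direct)
\end{verbatim}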
We start by noticing that a straightforward evaluation of the second expression in \eqref{eqn:polynomial_nonlinearity} requires $O(c\Nfh\Np\Nrht)+O(c\Np q k)+O(c \Nfh \Np q)$ operations, where the first term is due to the reduced basis ansatz and the Galerkin projection, the second to the multiplication by the sparse matrices $A_i$, and the third to the evaluation of a polynomial of degree $q$ for each entry of an $\Nf\times\Np$ matrix. The constant $c$ represents the number of iterations of the Newton solver and $k:=\max_i k_i$, where $k_i$ is the number of nonzero entries of $A_i$. Moreover, in each iteration we evaluate not only the nonlinear term but also its Jacobian, with an additional cost of $O(c\Nfh\Np(q-1))+O(c\Np k_{\mathcal{G}}\Nrht)+O(c\Nfh\Np\Nrht^2)$ operations, with $k_{\mathcal{G}}$ being the number of nonzero entries of the full-order Jacobian. These terms represent, respectively, the operations required to evaluate the polynomial functions in the Jacobian, the assembly of the Jacobian matrix, and its Galerkin projection onto the reduced basis. This high computational cost can again be mitigated by resorting to the second formula in \eqref{eqn:polynomial_nonlinearity}, where the term $\mathcal{G}_U$ is computed once per stage of the partitioned RK integrator \eqref{eq:PRK-ZVh} and then reused across Newton iterations and parameter instances. To estimate the computational cost of the procedure, we resort to multi-index notation: introducing $\mathbf{n}:=\left(n_{\tau},\dots,n_{\tau}\right)\in\mathbb{R}^{n}$, the product $\mathcal{G}_U\mathcal{Z}$ in \eqref{eqn:polynomial_nonlinearity} can be recast as
\begin{equation}\label{eqn:matrix_tensorial_POD}
\mathcal{G}_U\mathcal{Z} = \underbrace{ U^\top\J{\Nf} \sum_{\ell\leq 2\mathbf{n}}}_{\text{(III)}} \prod_{1<i\leq q} \overbrace{ \text{diag}\underbrace{\left( A_{i}U_{\ell} \right)}_{\text{(I)}} \underbrace{A_1 U_{\ell}}_{\text{(I)}}}^{\text{(II)}} Z_{j}^{\ell}.
\end{equation}
The arithmetic complexity of this step is $O(qk\Nrht)+O((q-1)\Nfh\Nrht^{q})+O(\Nfh\Nrht^{q+1})$, where the first term is due to the multiplication of the $q$ matrices $A_iU$ in (I), the second to the pointwise and diagonal matrix multiplications involved in the computation of (II), and the third to the multiplication by $U^\top\J{\Nf}$ in (III). We stress that the cost required to assemble $\mathcal{G}_U$ is independent of the number of parameters $\Np$ and of the number of iterations of the nonlinear solver. Once $\mathcal{G}_U$ has been precomputed, the evaluation of the reduced right-hand side has a computational cost of $O(c\Np\Nrht^{q+1})$ \cite{cstefuanescu2014comparison}. The same splitting technique is exploited for each evaluation of the reduced Jacobian, and most of the precomputed terms in \eqref{eqn:matrix_tensorial_POD} can be reused. The proposed treatment of polynomial nonlinearities results in an effective reduction of the computational cost in the case of low-order polynomial nonlinearities $(q=2,3)$, a large set of vector-valued parameters $(\Np\gg 10)$, and a moderate number $\Nrht$ of basis vectors.

\section{Numerical tests}\label{sec:numerical_tests}
To assess the performance of the proposed adaptive dynamical structure-preserving reduced basis method, we consider finite-dimensional parametrized Hamiltonian dynamical systems arising from the spatial approximation of partial differential equations.
Let $\Omega\subset\r{d}$ be a spatial domain and let $u:\ensuremath{\mathcal{T}}\times \Omega\times\Sprm \rightarrow \r{m}$ belong to a Sobolev space $\ensuremath{\mathcal{V}}$ endowed with the inner product $\big<\cdot,\cdot\big>$. A parametric evolutionary PDE in Hamiltonian form can be written as
\begin{equation}\label{eq:HamPDE}
\left\{
\begin{aligned}
& \dfrac{\partial u}{\partial t}(t,x;\prm) = \ensuremath{\mathcal{J}} \dfrac{\delta\ensuremath{\mathcal{H}}}{\delta u}(u;\prm), & \qquad\mbox{in}\;\Omega\times \ensuremath{\mathcal{T}},\\
& u(0,x;\prm) = u^0(x;\prm), &\qquad\mbox{in}\;\Omega,
\end{aligned}\right.
\end{equation}
with suitable boundary conditions prescribed on $\partial\Omega$. Here, $\delta$ denotes the variational derivative of the Hamiltonian $\ensuremath{\mathcal{H}}$, defined as
\begin{equation*}
\dfrac{d}{d\epsilon} \ensuremath{\mathcal{H}}(u+\epsilon v;\prm) \bigg|_{\epsilon=0} = \bigg<\dfrac{\delta\ensuremath{\mathcal{H}}}{\delta u},v\bigg>,\qquad\forall\, u,v\in\ensuremath{\mathcal{V}},
\end{equation*}
so that, for $\ell=1,\ldots,m$ and $u_{\ell,k}:=\partial_{x_k} u_{\ell}$, it holds
\begin{equation*}
\dfrac{\delta\ensuremath{\mathcal{H}}}{\delta u_{\ell}} = \dfrac{\partial H}{\partial u_{\ell}} - \sum_{k=1}^d \dfrac{\partial}{\partial x_k}\left(\dfrac{\partial H}{\partial u_{\ell,k}}\right) + \ldots,\qquad \mbox{with}\qquad \ensuremath{\mathcal{H}}(u;\prm) = \int_{\Omega} H(x,u,\partial_x u,\partial_{xx} u,\ldots;\prm)\,dx.
\end{equation*}
In the numerical tests we consider, for any fixed value of the parameter $\prm_j\in\Sprmh$, numerical spatial approximations of \eqref{eq:HamPDE} that yield an $\Nf$-dimensional Hamiltonian system in canonical form
\begin{equation}\label{eq:HamODE}
\left\{
\begin{aligned}
& \dfrac{d u_h}{d t}(t;\prm_j) = \J{\Nf} \nabla\ensuremath{\mathcal{H}}_h(u_h;\prm_j), &\qquad \mbox{in}\;\ensuremath{\mathcal{T}},\\
& u_h(0;\prm_j) = u_h^0(\prm_j), &
\end{aligned}\right.
\end{equation}
where $u_h$ belongs to an $\Nf$-dimensional subspace of $\ensuremath{\mathcal{V}}$, $\nabla$ denotes the gradient with respect to the state variable $u_h$, and $\ensuremath{\mathcal{H}}_h:\r{\Nf}\rightarrow\r{}$ is such that $\Delta x_1\ldots\Delta x_d\,\ensuremath{\mathcal{H}}_h$ is a suitable approximation of $\ensuremath{\mathcal{H}}$. Solving \eqref{eq:HamODE} for $\Np$ values $\prmh = \{\prm_j\}_{j=1}^{\Np}$ of the parameter yields a matrix-valued ODE of the form \eqref{eq:HamSystemMatrix}, where the $j$-th column of the unknown matrix $\ensuremath{\mathcal{R}}(t)\in\R{\Nf}{\Np}$ is equal to $u_h(t,\prm_j)$ for all $j=1,\ldots,\Np$. We validate our adaptive dynamical reduced basis method on several representative Hamiltonian systems of the form \eqref{eq:HamODE}, of increasing complexity, and compare the quality of the adaptive dynamical approach with that of a reduced model with a global basis. For the global model, we consider the method proposed in \cite[Section 4.2]{peng2016symplectic}, where a reduced basis is built via a complex SVD of a suitable matrix of snapshots, and the reduced model is derived via symplectic Galerkin projection onto the space spanned by the global basis. We analyze and compare the accuracy, conservation properties, and efficiency of the reduced models by monitoring the following quantities.
To assess the approximation properties of the reduced model, we track the error, in the Frobenius norm, between the full model solution $\ensuremath{\mathcal{R}}$ and the reduced solution $R$ at any time $t\in\ensuremath{\mathcal{T}}$, namely
\begin{equation}\label{eqn:error_metric}
E(t)=\left \| \ensuremath{\mathcal{R}}(t) - R(t) \right \|.
\end{equation}
Moreover, we study the conservation of the Hamiltonian via the relative error in the $\ell^1$-norm over the parameter space $\Sprmh$, that is
\begin{equation}\label{eqn:relative_error_Hamiltonian}
E_{\mathcal{H}_h}(t) = \sum_{i=1}^{\Np} \left | \dfrac{\mathcal{H}\left( U_{\tau}Z_{\tau}^i;\prm_i\right)-\mathcal{H}\left( U_{0}Z_{0}^i;\prm_i\right)}{\mathcal{H}\left( U_{0}Z_{0}^i;\prm_i\right)}\right |.
\end{equation}
Finally, we monitor the computational cost of the different reduction strategies. Throughout, the runtime is defined as the sum of the durations of the offline and online phases in the case of the complex SVD (global method); for the dynamical approaches, it is the time required to evolve the basis and the coefficients \eqref{eq:PRK-ZV} plus, in the adaptive case, the time required to compute the error indicator and update the dimension of the approximating manifold.
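For reference, a minimal sketch of these two metrics, with an illustrative quadratic Hamiltonian $\mathcal{H}(x)=\tfrac12 x^\top A x$ evaluated column-wise over the parameter instances:
\begin{verbatim}
import numpy as np

def state_error(R_full, R_red):
    """E(t): Frobenius distance between full and reduced snapshot matrices."""
    return np.linalg.norm(R_full - R_red)

def hamiltonian_error(R_now, R_init, A):
    """Relative Hamiltonian deviation, summed over the parameter columns."""
    H = lambda X: 0.5 * np.einsum('ij,ik,kj->j', X, A, X)  # H per column
    H0, Ht = H(R_init), H(R_now)
    return np.sum(np.abs((Ht - H0) / H0))

rng = np.random.default_rng(7)
Nf, p = 50, 8
A = rng.standard_normal((Nf, Nf)); A = A @ A.T          # positive definite
R0 = rng.standard_normal((Nf, p))
print(state_error(R0, R0), hamiltonian_error(R0, R0, A))  # both 0.0
\end{verbatim}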
\subsection{Shallow water equations}\label{sec:SW1D} The shallow water equations (SWE) describe the kinematic behaviour of a thin inviscid single fluid layer flowing over a variable topography. In the setting of irrotational flows and flat bottom topography, the fluid is described by a scalar potential $\phi$ and the canonical Hamiltonian formulation \eqref{eq:HamPDE} is recovered \cite{sultana2013hamiltonian}. The resulting time-dependent nonlinear system of PDEs is defined as \begin{equation}\label{eq:SWE} \left\{ \begin{aligned} & \dfrac{\partial h}{\partial t} + \nabla\cdot(h\nabla\phi)=0, &\qquad \mbox{in}\;\Omega\times \ensuremath{\mathcal{T}},\\ & \dfrac{\partial \phi}{\partial t} + \dfrac12|\nabla\phi|^2 + h =0, &\qquad \mbox{in}\;\Omega\times \ensuremath{\mathcal{T}},\\ & h(0,x;\prmh) = h^0(x;\prmh), &\qquad\mbox{in}\;\Omega,\\ & \phi(0,x;\prmh) = \phi^0(x;\prmh), &\qquad\mbox{in}\;\Omega, \end{aligned}\right. \end{equation} with spatial coordinates $x\in\Omega$, time $t\in\ensuremath{\mathcal{T}}$, and state variables $h,\phi:\Omega\times\ensuremath{\mathcal{T}}\rightarrow \mathbb{R}$, where $\nabla \cdot$ and $\nabla$ denote the divergence and gradient operators in $x$, respectively. The variable $\phi$ is the scalar potential of the fluid and $h$ represents the height of the free surface, normalized by its mean value. The system is supplemented with periodic boundary conditions for both state variables. The evolution problem \eqref{eq:SWE} admits a canonical symplectic Hamiltonian form \eqref{eq:HamPDE} with the Hamiltonian \begin{equation} \ensuremath{\Hcal}(h,\phi;\prm) = \dfrac12 \int_{\Omega} \big( h|\nabla\phi|^2+ h^2 \big)\,dx. \end{equation} We consider numerical simulations in $d=1$ and $d=2$ dimensions on rectangular spatial domains. The domain $\Omega$ is partitioned by a Cartesian mesh into $M-1$ equispaced intervals per dimension, with mesh width $\Delta x$ and, when $d=2$, $\Delta y$. As degrees of freedom of the problem we consider the nodal values of the height and of the potential, i.e. $u_h(t;\prmh):=(h_h,\phi_h)=(h_1,\dots,h_{\Nfh},\phi_1,\dots,\phi_{\Nfh})$, for all $t\in\mathcal{T}$ and $\prmh\in\Sprmh$, where $\Nfh:=M^d$ and, in 2D, $h_m=h_{i,j}$ with $m:=(j-1)M+i$ and $i,j=1,\ldots,M$. In 1D, $\Nfh=M$ and the index $j$ is dropped. We consider second-order accurate central finite difference schemes to discretize the differential operators in \eqref{eq:SWE}, and denote by $D_x$ and $D_y$ the discrete differential operators acting in the $x$- and $y$-direction, respectively.
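Before passing to the semi-discrete system, it is worth recording how this Hamiltonian generates \eqref{eq:SWE}. Using the definition of the variational derivative and integrating by parts (the boundary terms vanish by periodicity), one finds \begin{equation*} \dfrac{\delta\ensuremath{\Hcal}}{\delta h} = \dfrac12|\nabla\phi|^2 + h, \qquad \dfrac{\delta\ensuremath{\Hcal}}{\delta \phi} = -\nabla\cdot\big( h\nabla\phi \big), \end{equation*} so that the first two equations of \eqref{eq:SWE} are precisely $\partial_t h = \delta\ensuremath{\Hcal}/\delta\phi$ and $\partial_t \phi = -\delta\ensuremath{\Hcal}/\delta h$, i.e., $(h,\phi)$ is a canonical pair.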
The semi-discrete formulation of \eqref{eq:SWE} represents a canonical Hamiltonian system with the gradient of the Hamiltonian function with respect to $u_h$ given by \begin{equation}\label{eq:grad_Hamiltonian} \nabla\ensuremath{\Hcal}_h(u_h;\prmh) = \begin{pmatrix} \dfrac{1}{2}\left [ \left ( D_x \phi_h \right )^2 + \left ( D_y \phi_h \right )^2 \right ] + h_h \\ -D_x\left ( h_h \odot D_x\phi_h \right )-D_y\left ( h_h \odot D_y\phi_h \right ) \end{pmatrix}, \end{equation} where $\odot$ denotes the Hadamard (entry-wise) product of two vectors and the squares in the first block are likewise applied entry-wise. The discrete Hamiltonian is \begin{equation}\label{eq:SW-1D_Ham} \ensuremath{\Hcal}_h(u_h;\prmh) = \dfrac12 \sum_{i,j=1}^{M} \bigg(h_{i,j} \left[ \left( \dfrac{\phi_{i+1,j}-\phi_{i-1,j}}{2\Delta x} \right)^2 + \left( \dfrac{\phi_{i,j+1}-\phi_{i,j-1}}{2\Delta y} \right)^2 \right ] + h_{i,j}^2\bigg). \end{equation} In the one-dimensional case, the terms involving $D_y$ are absent. \subsubsection{One-dimensional shallow water equations (SWE-1D)} For this example, we set $\Omega=\left [ -10, 10 \right ]$ and we consider the parameter domain $\Sprm = \left [ \frac{1}{10},\frac{1}{7} \right ] \times \left [ \frac{2}{10},\frac{15}{10} \right ]$. The discrete set of parameters $\Sprmh$ is obtained by uniformly sampling $\Sprm$ with $10$ samples per dimension, for a total of $\Np=100$ different configurations. Problem \eqref{eq:SWE} is completed with the initial condition \begin{equation}\label{eq:init_cond_SWE1D} \begin{cases} h^{0}(x;\prmh) = 1+\alpha e^{-\beta x^2},\\ \phi^{0}(x;\prmh) = 0, \end{cases} \end{equation} with $\prmh=(\alpha,\beta)$, where $\alpha$ controls the amplitude of the initial hump in the depth $h$ and $\beta$ describes its width. We consider a partition of the spatial domain $\Omega$ into $\Nfh-1$ equispaced intervals with $\Nfh=1000$. The full model solution $u_h(t;\prmh)$ is computed using a uniform step size $\Delta t = 10^{-3}$ in the time interval $\mathcal{T}=(0,T]$ with $T:=7$. We use the implicit midpoint rule as the time integrator because, being symplectic, it preserves the geometric properties of the flow of the semi-discrete equation associated with \eqref{eq:grad_Hamiltonian}. To study the reducibility properties of the problem, we explore the solution manifold and collect the solutions to the high-fidelity model in different matrices. The global snapshot matrix $\mathcal{S}\in\mathbb{R}^{\Nf\times(N_{\tau}\Np)}$ contains the snapshots associated with all sampled parameters $\prmh$ and time steps, while, for any $\tau=1,\ldots,N_{\tau}$, the matrix $\mathcal{S}_{\tau}\in\mathbb{R}^{\Nf\times \Np}$ collects the full model solutions at the fixed time $t^{\tau}$.
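The reducibility study summarized in the next figure amounts to a few lines of linear algebra; a minimal sketch follows, again in illustrative Python/NumPy, where we adopt as a working convention that the $\epsilon$-rank is the number of singular values exceeding $\epsilon$ after normalization by the largest one (the precise definition used in the earlier sections may differ).
\begin{verbatim}
import numpy as np

def normalized_singular_values(S):
    """Singular values of a snapshot matrix, normalized by the largest."""
    s = np.linalg.svd(S, compute_uv=False)
    return s / s[0]

def eps_rank(S, eps):
    """epsilon-rank under the normalization convention stated above."""
    return int(np.sum(normalized_singular_values(S) > eps))

# S_local[tau] is the N_f x N_p matrix S_tau at time t^tau; the global
# snapshot matrix stacks all local matrices column-wise:
# S_global = np.hstack(S_local)
# avg_sv = np.mean([normalized_singular_values(S) for S in S_local], axis=0)
# ranks  = [eps_rank(S, 1e-5) for S in S_local]
\end{verbatim}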
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 1, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[xlabel={index}, ylabel={singular values}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, ymode=log, every axis plot/.append style={ultra thick}, xmin = 1, ymin = 1e-14, ymax = 1, legend style={at={(0.37,0.25)},anchor=north}, ylabel near ticks, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/SW1D_singular_values/full_singular_values_SW1D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/SW1D_singular_values/avg_singular_values_SW1D.txt}; \legend{global,local}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.08,1.05) {\subcaption{\label{fig:singular_values_SW1D_a}}}; \coordinate (spypoint) at (axis cs:2,0.01); \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\epsilon$-rank}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, legend style={at={(1.16,1)},anchor=north}, xmin = 0, xmax = 7, ymin = 0, ymax = 60, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[] table[x=Time,y=Rank] {figures/data/SW1D_epsilon_rank/0.1_epsilon_rank_SW1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW1D_epsilon_rank/0.001_epsilon_rank_SW1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW1D_epsilon_rank/1e-05_epsilon_rank_SW1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW1D_epsilon_rank/1e-07_epsilon_rank_SW1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW1D_epsilon_rank/1e-09_epsilon_rank_SW1D.txt}; \legend{$10^{-1}$,$10^{-3}$,$10^{-5}$,$10^{-7}$,$10^{-9}$}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:singular_values_SW1D_b}}}; \end{groupplot} \node[pin={[pin distance=3.25cm]368:{% \begin{tikzpicture}[baseline,trim axis right] \begin{axis}[ axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot post/.append style={ultra thick}, ymode=log, xmin=0,xmax=22, ymin=0.0001,ymax=1, width=4cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/SW1D_singular_values/full_singular_values_SW1D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/SW1D_singular_values/avg_singular_values_SW1D.txt}; \end{axis} \end{tikzpicture}% }},draw,circle,minimum size=1cm] at (spypoint) {}; \end{tikzpicture} \caption{SWE-1D: \ref{fig:singular_values_SW1D_a}) Singular values of the global snapshots matrix $\mathcal{S}$ and time average of the singular values of the local trajectories matrix $\mathcal{S}_{\tau}$. The singular values are normalized using the largest singular value for each case. 
\ref{fig:singular_values_SW1D_b}) $\epsilon$-rank of the local trajectories matrix $\mathcal{S}_{\tau}$ for different values of $\epsilon$.} \label{fig:singular_values_SWE1D} \end{figure} In Figure \ref{fig:singular_values_SW1D_a}, we compare the normalized singular values of $\ensuremath{\mathcal{S}}$ and $\ensuremath{\mathcal{S}}_{\tau}$, averaged over time for the latter. Although in both cases the exponential decay of the spectrum suggests the existence of accurate reduced approximation spaces, the singular values of the averaged $\mathcal{S}_{\tau}$ decay roughly $5$ times faster than those of $\mathcal{S}$. This difference suggests that a low-rank dynamical approach may be beneficial to reduce the computational cost and to increase the accuracy of the solution of the reduced model compared to a method with a global basis. Furthermore, the evolution of the numerical rank of $\mathcal{S}_{\tau}$ over time, reported in Figure \ref{fig:singular_values_SW1D_b}, shows a rapid growth during the first steps, followed by a mild increase in the remaining part of the simulation. This is compatible with the observations, made in Section \ref{sec:criterion_rank_update}, about the behavior of the singular value spectrum for advection-dominated problems. To compare the performance of local and global model order reduction, we consider, as the global reduced method, the complex SVD approach \cite{peng2016symplectic} with reduced dimension $\Nr\in\left \{ 10, 20, 30, 40, 60, 80 \right \}$. The symplectic reduced basis is generated from snapshots of the high-fidelity solution of \eqref{eq:SWE} collected every $10$ time steps, with $\Sprm$ uniformly sampled using $4$ samples per dimension. The reduced system is solved using the implicit midpoint rule with the same time step $\Delta t$ used for the full order model. The quadratic operator describing the evolution of \eqref{eq:SWE} is reduced using the approach described in Section \ref{sec:nonlinear}, and the reduced operators are computed once during the offline stage. Concerning the adaptive dynamical reduced model, we evaluate the initial condition \eqref{eq:init_cond_SWE1D} at all values $\prmh\in\Sprmh$ and collect the evaluations as the columns of the matrix $\mathcal{S}_1\in\mathbb{R}^{\Nf\times \Np}$. As initial condition for the reduced system \eqref{eq:UZred}, we use \begin{equation}\label{eqn:initialization_local_low_rank} \begin{cases} U(0) = U_0,\\ Z(0) = U_0^T\mathcal{S}_1, \end{cases} \end{equation} where $U_0\in\mathbb{R}^{\Nf\times \Nr_1}$ is obtained by applying the complex SVD to the snapshot matrix $\mathcal{S}_1$. System \eqref{eq:UZred} is then evolved using the 2-stage partitioned Runge--Kutta method described in \Cref{lem:2_stage_PRK}. For the following numerical experiments, we consider $\Nr_1\in\left \{ 6,8,10,12 \right \}$ as initial dimensions of the approximating reduced manifolds. As control parameters for the rank update criterion of \Cref{algo:rank-update}, we fix the value $c=1.2$ and study examples with $r\in\left\{ 1.02, 1.05, 1.1, 1.2 \right \}$. Moreover, we examine the case in which the rank-updating algorithm is never triggered, i.e., the basis $U(t)$ evolves in time but its dimension is fixed ($\Nrht=\Nrh_1$ for all $\tau$). In the adaptive case, the error indicator $\mathbf{E}_{\tau}$ in \eqref{eqn:error_equation} is computed every $100$ iterations on the subset $\Sprm_{I}\subset\Sprmh$ obtained by sampling $5$ parameters per dimension from $\Sprmh$.
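For completeness, we sketch the construction of the symplectic basis via the complex SVD of \cite[Section 4.2]{peng2016symplectic}, which enters both the global reduced model and the initialization \eqref{eqn:initialization_local_low_rank}. The snapshot matrix is split into its $(q,p)$ blocks; the Python/NumPy rendering and the variable names are ours, and the snippet is a schematic reading of the cited construction rather than the code used in the experiments.
\begin{verbatim}
import numpy as np

def complex_svd_basis(Sq, Sp, n):
    """Orthosymplectic basis A in R^{2N x 2n} from the snapshot blocks
    Sq, Sp (each N x n_s): with E + iF the n leading left singular
    vectors of Sq + 1j*Sp, the matrix A = [[E, -F], [F, E]] satisfies
    A^T A = I and A^T J A = J (J the canonical symplectic matrix)."""
    U, _, _ = np.linalg.svd(Sq + 1j * Sp, full_matrices=False)
    E, F = U[:, :n].real, U[:, :n].imag
    return np.block([[E, -F], [F, E]])

# Initialization of the dynamical reduced model, cf. the equation above
# (S1 collects the initial snapshots, split into its two N-blocks):
# U0 = complex_svd_basis(S1[:N, :], S1[N:, :], n)
# Z0 = U0.T @ S1
\end{verbatim}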
In Figure \ref{fig:error_final_SWE1D}, we compare the global reduced model, the dynamical models for different values of $r$, and the high-fidelity model in terms of total runtime and accuracy at the final time $T$ by monitoring the error \eqref{eqn:error_metric}. The results show that, as we increase the dimension of the global reduced basis, the global reduced model provides accurate approximations but its runtime becomes larger than that required to solve the high-fidelity problem. Hence, the global method ceases to be efficient. The adaptive dynamical reduced approach outperforms the global reduced method by reaching comparable levels of accuracy in a computational time one order of magnitude smaller than that required by the global reduction. Compared to the high-fidelity solver, the adaptive dynamical reduced method achieves an accuracy of $E(T)=2.55\cdot10^{-5}$ with a speedup of $42$, in the best-case scenario. For this numerical experiment, the effectiveness of the rank update algorithm is limited by the error introduced in the approximation of the initial condition via a reduced basis. While, compared to the non-adaptive method, the error is reduced by a factor of $4$ for $\Nr_1=8$ and by a factor of $20$ for $\Nr_1=12$, the accuracy is not significantly improved when $\Nr_1=6$. We note that, when the adaptive algorithm is effective, the additional computational cost associated with the evaluation of the error indicator and the evolution of a larger basis is balanced by a considerable error reduction. \begin{figure}[H] \centering \begin{tikzpicture}[spy using outlines={rectangle, width=4.15cm, height=5cm, magnification=1.45, connect spies}] \begin{axis}[xlabel={runtime $\left[s\right]$}, ylabel={$E(T)$}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, xmode=log, ymode=log, ymax = 0.02, ymin = 0.000012, xmax = 30000, xmin = 80, every axis plot/.append style={thick}, width = 9cm, height = 6cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, ylabel near ticks, yticklabel pos=right, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[mark=x,color=black,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW1D_Pareto/error_final_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[mark=x,color=red,mark size=2pt, every node near coord/.append style={xshift=0.65cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/SW1D_Pareto/error_final_local_reduced_model_indicator_SW1D_1.02_1.2_2.txt}; \addplot+[mark=x,color=cyan,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW1D_Pareto/error_final_local_reduced_model_indicator_SW1D_1.2_1.2_2.txt}; \addplot+[mark=x,color=magenta,mark size=2pt, every node near coord/.append style={xshift=0.65cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/SW1D_Pareto/error_final_global_reduced_model_SW1D.txt}; \addplot+[color=black,dashed] table[x=Timing,y=DummyError] {figures/data/SW1D_Pareto/error_final_full_SW1D.txt}; \coordinate (spypoint) at (axis cs:290,0.00025); \coordinate (magnifyglass) at
(14,0.00015); \legend{{Non adaptive}, {$r=1.02, \quad c=1.2$}, {$r=1.2, \quad c=1.2$}, {Global method}, {Full model}}; \end{axis} \spy [black] on (spypoint) in node[fill=white] at (magnifyglass); \end{tikzpicture} \caption{SWE-1D: Error \eqref{eqn:error_metric}, at time $T=7$, as a function of the runtime for the complex SVD method ({\color{magenta}{\rule[.5ex]{1em}{1.2pt}}}), the dynamical RB method ({\color{black}{\rule[.5ex]{1em}{1.2pt}}}) and the adaptive dynamical RB method for different values of the control parameters $r$ and $c$ ({\color{red}{\rule[.5ex]{1em}{1.2pt}}},{\color{cyan}{\rule[.5ex]{1em}{1.2pt}}}). For the sake of comparison, we report the runtime required by the high-fidelity solver ({\color{black}{\rule[.5ex]{0.4em}{1.2pt}}} {\color{black}{\rule[.5ex]{0.4em}{1.2pt}}}) to compute the numerical solutions for all values of the parameter $\prmh\in\Sprmh$.} \label{fig:error_final_SWE1D} \end{figure} To better gauge the accuracy properties of the adaptive dynamical reduced basis method, we compare its error with the error of the high-fidelity solver started from the same initial condition. The solution to the full model, with the projection of \eqref{eq:init_cond_SWE1D} onto the column space of $U_{0}$ as the initial condition, is the target of the adaptive reduced procedure, which aims at correctly representing the high-fidelity solution space at every time step. The importance of having a reduced space that accurately reproduces the initial condition can be inferred from Figure \ref{fig:time_error_SWE1D_6}: the error associated with a poorly resolved initial condition dominates over the remaining sources of error, and adapting the dimension of the reduced basis is not beneficial in terms of accuracy. As noted above, increasing $\Nr_{1}$ not only improves the performance of the non-adaptive reduced dynamical procedure but also boosts the potential gain, in terms of relative error reduction, of the adaptive method, as can be seen in Figure \ref{fig:time_error_SWE1D_10}. In Figure \ref{fig:error_basis_time_SWE1D}, we report the growth of the dimension of the reduced basis for different initial dimensions $\Nr_1$. Concerning the evolution of the error, we do not notice any significant difference as the parameter $r$ of the adaptive criterion \eqref{eq:ratio_tmp} varies.
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 3, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymin=0.0007, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend style={at={(axis cs:0.76,0.0007)},anchor=south west}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Time,y=Error_6] {figures/data/SW1D_error_time_basis/error_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_6] {figures/data/SW1D_error_time_basis/error_full_model_corrupted_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE1D_6}}}; \legend{{Non adaptive}, {Target}, {$r=1.02, \, c=1.2$}, {$r=1.05, \, c=1.2$}, {$r=1.1, \, c=1.2$}, {$r=1.2, \, c=1.2$}}; \nextgroupplot[ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_6] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1_02] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1_05] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1_10] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1_20] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE1D_6}}}; \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_8] {figures/data/SW1D_error_time_basis/error_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_8] 
{figures/data/SW1D_error_time_basis/error_full_model_corrupted_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE1D_8}}}; \nextgroupplot[ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_8] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1_02] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1_05] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1_10] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1_20] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE1D_8}}}; \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_10] {figures/data/SW1D_error_time_basis/error_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_10] {figures/data/SW1D_error_time_basis/error_full_model_corrupted_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_error_time_basis/error_local_reduced_model_indicator_SW1D_10.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE1D_10}}}; \nextgroupplot[ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label 
style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_10] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_no_indicator_SW1D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1_02] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1_05] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1_10] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1_20] {figures/data/SW1D_error_time_basis/basis_local_reduced_model_indicator_SW1D_10.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE1D_10}}}; \end{groupplot} \end{tikzpicture} \caption{SWE-1D: In the left column, we report the evolution of the error $E(t)$ \eqref{eqn:error_metric} for the adaptive and non-adaptive dynamical RB methods for different values of the control parameter $r$ and different dimensions $\Nr_1$ of the approximating manifold of the initial condition. The target error is obtained by solving the full model with the initial condition obtained by projecting \eqref{eq:init_cond_SWE1D} onto a symplectic manifold of dimension $\Nr_1$. In the right column, we report the evolution of the dimension of the dynamical reduced basis over time. The adaptive algorithm is driven by the error indicator \eqref{eq:ratio_tmp}, while in the non-adaptive setting the dimension does not change with time. We consider the cases $\Nr_1=6$ (Figs. \ref{fig:time_error_SWE1D_6}, \ref{fig:time_basis_SWE1D_6}), $\Nr_1=8$ (Figs. \ref{fig:time_error_SWE1D_8}, \ref{fig:time_basis_SWE1D_8}), $\Nr_1=10$ (Figs. \ref{fig:time_error_SWE1D_10}, \ref{fig:time_basis_SWE1D_10}).} \label{fig:error_basis_time_SWE1D} \end{figure} Ideally, within each temporal interval, the reduced solution is close, in the Frobenius norm, to the best rank-$\Nrt$ approximation of the full model solution. To verify this property for the adaptive dynamical reduced basis method, we monitor the evolution of the error $E_{\perp}$ between the full model solution $\ensuremath{\mathcal{R}}$, at the current time and for all $\prmh\in\Sprmh$, and its projection onto the space spanned by the current reduced basis evolved following \eqref{eq:U}, namely $E_{\perp}(t)=\norm{\ensuremath{\mathcal{R}}(t) - U(t) U(t)^{\top}\ensuremath{\mathcal{R}}(t)}$. In Figure \ref{fig:SW1D_projection_error}, the projection error is shown for different values of $\Nr_1$ (Figures \ref{fig:projection_error_SWE1D_8} and \ref{fig:projection_error_SWE1D_10}) and the corresponding evolution of the reduced basis dimension is reported (Figures \ref{fig:basis_evolution_SWE1D_8} and \ref{fig:basis_evolution_SWE1D_10}). We notice that, when the dimension of the basis $U$ is not adapted, the projection error tends to increase in time. This can be ascribed to the fact that the effective rank of the high-fidelity solution grows while the reduced basis is no longer large enough to capture the rank-increasing solution. Adapting $\Nr_{\tau}$ during the simulation keeps the projection error from growing, with local drops whenever the basis is enlarged. This indicates that the strategy of enlarging the reduced manifold in the direction of the largest error (see \Cref{sec:rank-update}) yields a considerable improvement of the approximation.
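In practice, $E_{\perp}$ is a one-line computation; a minimal sketch in the same illustrative Python/NumPy style, assuming the evolved basis $U(t)$ has orthonormal columns, reads:
\begin{verbatim}
import numpy as np

def projection_error(R_full, U):
    """E_perp(t) = || R - U U^T R ||_F for U with orthonormal columns.
    If orthonormality is lost numerically, re-orthonormalize first,
    e.g. U, _ = np.linalg.qr(U)."""
    return np.linalg.norm(R_full - U @ (U.T @ R_full), ord='fro')
\end{verbatim}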
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 2, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E_{\perp}(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, enlargelimits=false, xticklabel=\empty, legend style={at={(axis cs:0.79,0.00052)},anchor=south west}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error] {figures/data/SW1D_projection_error/projection_error_8_9999999_SW1D.txt}; \addplot+[color=green] table[x=Time,y=Error] {figures/data/SW1D_projection_error/projection_error_8_1.1_SW1D.txt}; \addplot+[thin,color=black,ycomb,dashed] table[x=Time, y expr=0.000385*\thisrow{Spikes}] {figures/data/SW1D_projection_error/nbasis_8_1.1_SW1D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:projection_error_SWE1D_8}}}; \legend{{Non adaptive}, {$r=1.1, \, c=1.2, \, \Nr_1=8$}}; \nextgroupplot[ylabel={$E_{\perp}(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, enlargelimits=false, xticklabel=\empty, legend style={at={(axis cs:0.79,0.000275)},anchor=south west}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error] {figures/data/SW1D_projection_error/projection_error_10_9999999_SW1D.txt}; \addplot+[color=cyan] table[x=Time,y=Error] {figures/data/SW1D_projection_error/projection_error_10_1.2_SW1D.txt}; \addplot+[thin,color=black,ycomb,dashed] table[x=Time, y expr=0.000205*\thisrow{Spikes}] {figures/data/SW1D_projection_error/nbasis_10_1.2_SW1D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:projection_error_SWE1D_10}}}; \legend{{Non adaptive}, {$r=1.2, \, c=1.2, \, \Nr_1=10$}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymin=6, ymax=20, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=1cm] \addplot[color=green] table[x=Time,y=Basis] {figures/data/SW1D_projection_error/nbasis_8_1.1_SW1D.txt}; \addplot+[thin,color=black,ycomb,dashed] table[x=Time, y expr=12*\thisrow{Spikes}] {figures/data/SW1D_projection_error/nbasis_8_1.1_SW1D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:basis_evolution_SWE1D_8}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymin=8, ymax=22, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=1cm] 
\addplot[color=cyan] table[x=Time,y=Basis] {figures/data/SW1D_projection_error/nbasis_10_1.2_SW1D.txt}; \addplot+[thin,color=black,ycomb,dashed] table[x=Time, y expr=12*\thisrow{Spikes}] {figures/data/SW1D_projection_error/nbasis_10_1.2_SW1D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:basis_evolution_SWE1D_10}}}; \end{groupplot} \end{tikzpicture} \caption{SWE-1D: In Figs. \ref{fig:projection_error_SWE1D_8} and \ref{fig:projection_error_SWE1D_10}, we report the evolution of the projection error $E_{\perp}(t)$ for different values of the initial dimension $\Nr_1$ of the reduced manifold. In Figs. \ref{fig:basis_evolution_SWE1D_8} and \ref{fig:basis_evolution_SWE1D_10}, we report the corresponding evolution of the dimension of the reduced manifolds.} \label{fig:SW1D_projection_error} \end{figure} \begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 4, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E_{\mathcal{H}_h}(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend style={at={(axis cs:0.76,0.0007)},anchor=south west}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot+[color=black] table[x=Timing,y=Error] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_SW1D_6.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_SW1D_6}}}; \nextgroupplot[axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend pos = south east, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot+[color=black] table[x=Timing,y=Error] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_SW1D_8.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] 
{figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_SW1D_8}}}; \legend{{Non adaptive}, {$r=1.02, \, c=1.2$}, {$r=1.05, \, c=1.2$}, {$r=1.1, \, c=1.2$}, {$r=1.2, \, c=1.2$}}; \nextgroupplot[ylabel={$E_{\mathcal{H}_h}(t)$}, xlabel={$t$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[color=black] table[x=Timing,y=Error] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_SW1D_10.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_10.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_10.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_SW1D_10}}}; \nextgroupplot[xlabel={$t$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[color=black] table[x=Timing,y=Error] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_SW1D_12.txt}; \addplot+[color=red] table[x=Timing,y=Error_1_02] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_12.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1_05] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_12.txt}; \addplot+[color=green] table[x=Timing,y=Error_1_10] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_12.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1_20] {figures/data/SW1D_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_SW1D_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_SW1D_12}}}; \end{groupplot} \end{tikzpicture} \caption{SWE-1D: Relative error \eqref{eqn:relative_error_Hamiltonian} in the conservation of the discrete Hamiltonian \eqref{eq:SW-1D_Ham} for the dynamical reduced basis method with initial reduced dimensions $\Nr_1=6$ (Fig. \ref{fig:error_Hamiltonian_SW1D_6}), $\Nr_1=8$ (Fig. \ref{fig:error_Hamiltonian_SW1D_8}), $\Nr_1=10$ (Fig. \ref{fig:error_Hamiltonian_SW1D_10}) and $\Nr_1=12$ (Fig. 
\ref{fig:error_Hamiltonian_SW1D_12}).} \label{fig:error_Hamiltonian_SWE1D} \end{figure} In Figure \ref{fig:error_Hamiltonian_SWE1D} we show the relative error in the conservation of the Hamiltonian for different dimensions of the reduced manifold and different values of the control parameters $r$ and $c$. Since the Hamiltonian \eqref{eq:SW-1D_Ham} is a cubic quantity, we do not expect exact conservation under the proposed partitioned Runge--Kutta temporal integrators. However, the preservation of the symplectic structure both in the reduction and in the discretization yields good control of the Hamiltonian error, as can be observed in Figure \ref{fig:error_Hamiltonian_SWE1D}. \subsubsection{Two-dimensional shallow water equations (SWE-2D)} We set $\Omega=[-4,4]^2$ as the spatial domain and $\Sprm=\left [ \frac{1}{5}, \frac{1}{2}\right ]\times\left[ \frac{11}{10},\frac{17}{10} \right]$ as the parameter domain. We consider $10$ uniformly spaced values of the parameter per dimension of $\Sprm$ to define the discrete subset $\Sprmh$. As initial condition, we consider \begin{equation}\label{eq:init_cond_SWE2D} \begin{cases} h^{0}(x,y;\prmh) = 1+\alpha e^{-\beta (x^2+y^2)},\\ \phi^{0}(x,y;\prmh) = 0, \end{cases} \end{equation} where $\prmh=(\alpha,\beta)$ is the natural two-dimensional extension of the parameter used in the previous example. The domain $\Omega$ is partitioned using $M=50$ points per dimension, so that the resulting mesh width is $\Delta x= \Delta y = 8\cdot 10^{-2}$. The time domain $\mathcal{T}=\left [ 0, T \right ]$ with $T:=20$ is split into $N_{\tau}=10000$ uniform intervals of length $\Delta t = 2\cdot 10^{-3}$. The symplectic implicit midpoint rule is employed as the time integrator in the high-fidelity solver, while the reduced dynamics \eqref{eq:UZred} is integrated using the 2-stage partitioned Runge--Kutta method. The spatial and temporal domains considered for this numerical experiment are chosen so that the solution of the high-fidelity model is characterized by circular waves that interact and overlap because of the periodic boundary conditions, as shown in Figure \ref{fig:surf_comparison_SW2D}. The increased complexity of the two-dimensional dynamics is reflected in the singular value spectra of the snapshot matrices. In Figure \ref{fig:singular_values_SW2D_a}, we show the normalized singular values of the global snapshot matrix $\mathcal{S}\in\mathbb{R}^{\Nf\times(N_{\tau} \Np)}$ and the average of the $N_{\tau}$ local-in-time snapshot matrices $\mathcal{S}_{\tau}\in\mathbb{R}^{\Nf\times \Np}$. The decay of the singular values of the local trajectories is one order of magnitude faster than that of the global (in time) snapshots, suggesting that there exists an underlying \emph{local} low-rank structure that can be exploited to improve the efficiency of the reduced model. The evolution of the numerical rank of $\mathcal{S}_{\tau}$, reported in Figure \ref{fig:singular_values_SW2D_b}, indicates that, while the matrix-valued initial condition is exactly represented using an extremely small basis, the full model solution at times $t\geq 2$ requires a relatively large basis to be properly approximated, and hence adapting the dimension of the reduced manifold becomes crucial.
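As an aside on the implementation, the ordering $m=(j-1)M+i$ of the degrees of freedom makes the assembly of the two-dimensional difference operators particularly convenient; the following sketch (one possible construction, shown for illustration and not necessarily the one used in our code) builds $D_x$ and $D_y$ as Kronecker products of the one-dimensional periodic central-difference matrix.
\begin{verbatim}
import numpy as np

def d_central_periodic(M, dx):
    """1D second-order central-difference matrix with periodic BCs."""
    D = np.zeros((M, M))
    for i in range(M):
        D[i, (i + 1) % M] = 1.0 / (2.0 * dx)
        D[i, (i - 1) % M] = -1.0 / (2.0 * dx)
    return D

def dx_dy_2d(M, dx, dy):
    """With m = (j-1)M + i (the x-index i varying fastest), the 2D
    operators read D_x = kron(I, D) and D_y = kron(D, I)."""
    I = np.eye(M)
    Dx = np.kron(I, d_central_periodic(M, dx))
    Dy = np.kron(d_central_periodic(M, dy), I)
    return Dx, Dy
\end{verbatim}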
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 1, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[xlabel={index}, ylabel={singular values}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, ymode=log, every axis plot/.append style={ultra thick}, xmin = 1, ymin = 1e-14, ymax = 1, ylabel near ticks, legend style={at={(0.31,0.26)},anchor=north}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/SW2D_singular_values/full_singular_values_SW2D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/SW2D_singular_values/avg_singular_values_SW2D.txt}; \legend{global,local}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.06,1.05) {\subcaption{\label{fig:singular_values_SW2D_a}}}; \coordinate (spypoint) at (axis cs:2,0.001); \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\epsilon$-rank}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin = 0, xmax = 20, ymin = 0, ymax = 60, legend style={at={(1.16,1)},anchor=north}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[] table[x=Time,y=Rank] {figures/data/SW2D_epsilon_rank/0.1_epsilon_rank_SW2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW2D_epsilon_rank/0.001_epsilon_rank_SW2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW2D_epsilon_rank/1e-05_epsilon_rank_SW2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW2D_epsilon_rank/1e-07_epsilon_rank_SW2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/SW2D_epsilon_rank/1e-09_epsilon_rank_SW2D.txt}; \legend{$10^{-1}$,$10^{-3}$,$10^{-5}$,$10^{-7}$,$10^{-9}$}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:singular_values_SW2D_b}}}; \end{groupplot} \node[pin={[pin distance=3.2cm]373:{% \begin{tikzpicture}[baseline,trim axis right] \begin{axis}[ axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot post/.append style={ultra thick}, tiny, ymode=log, xmin=0,xmax=22, ymin=0.00002,ymax=1, width=4cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/SW2D_singular_values/full_singular_values_SW2D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/SW2D_singular_values/avg_singular_values_SW2D.txt}; \end{axis} \end{tikzpicture}% }},draw,circle,minimum size=1cm] at (spypoint) {}; \end{tikzpicture} \caption{SWE-2D: \ref{fig:singular_values_SW2D_a}) Singular values of the global snapshots matrix $\mathcal{S}$ and time average of the singular values of the local trajectories matrix $\mathcal{S}_{\tau}$. The singular values are normalized using the largest singular value for each case. \ref{fig:singular_values_SW2D_b}) $\epsilon$-rank of the local trajectories matrix $\mathcal{S}_{\tau}$ for different values of $\epsilon$. 
} \end{figure} We employ the complex SVD method to build a global reduced order model, using the same sampling rates in time and parameter space as in the 1D test case. For none of the dimensions considered, i.e., $2n\in\{10,20,40,60,80,120\}$, do we obtain results that are both accurate (error smaller than $10^{-1}$) and computationally cheaper than solving the high-fidelity model. Hence, for this two-dimensional test, we only compare the performance of the adaptive and non-adaptive dynamical reduced basis methods in terms of accuracy and computational time. As initial condition for the reduced dynamics \eqref{eq:UZred} we consider the initialization \eqref{eqn:initialization_local_low_rank}, where $\mathcal{S}_1$ collects the evaluations of \eqref{eq:init_cond_SWE2D} at all sampled parameters. Moreover, for the adaptive method, we compute the error indicator every $10$ iterations on a subset $\Sprm_I\subset\Sprmh$ of $25$ uniformly sampled parameters. Different combinations of the initial reduced manifold dimension $\Nr_{1}\in\{4,6,8\}$ and control parameters $r\in\{1.1,1.2,1.3\}$ and $c\in\{1.1,1.2,1.3\}$ are considered to study their impact on the accuracy of the method. Figure \ref{fig:surf_comparison_SW2D} shows the high-fidelity solution for $(\alpha,\beta)=(\frac{1}{3},\frac{17}{10})$ together with its adaptive reduced approximation at different times. The two solutions are qualitatively indistinguishable. \begin{figure}[H] \centering \begin{tikzpicture} \pgfplotsset{ colormap={parula}{ rgb=(0.208100000000000,0.166300000000000,0.529200000000000) rgb=(0.211623809523810,0.189780952380952,0.577676190476191) rgb=(0.212252380952381,0.213771428571429,0.626971428571429) rgb=(0.208100000000000,0.238600000000000,0.677085714285714) rgb=(0.195904761904762,0.264457142857143,0.727900000000000) rgb=(0.170728571428571,0.291938095238095,0.779247619047619) rgb=(0.125271428571429,0.324242857142857,0.830271428571429) rgb=(0.0591333333333334,0.359833333333333,0.868333333333333) rgb=(0.0116952380952381,0.387509523809524,0.881957142857143) rgb=(0.00595714285714286,0.408614285714286,0.882842857142857) rgb=(0.0165142857142857,0.426600000000000,0.878633333333333) rgb=(0.0328523809523810,0.443042857142857,0.871957142857143) rgb=(0.0498142857142857,0.458571428571429,0.864057142857143) rgb=(0.0629333333333333,0.473690476190476,0.855438095238095) rgb=(0.0722666666666667,0.488666666666667,0.846700000000000) rgb=(0.0779428571428572,0.503985714285714,0.838371428571429) rgb=(0.0793476190476190,0.520023809523810,0.831180952380952) rgb=(0.0749428571428571,0.537542857142857,0.826271428571429) rgb=(0.0640571428571428,0.556985714285714,0.823957142857143) rgb=(0.0487714285714286,0.577223809523810,0.822828571428572) rgb=(0.0343428571428572,0.596580952380952,0.819852380952381) rgb=(0.0265000000000000,0.613700000000000,0.813500000000000) rgb=(0.0238904761904762,0.628661904761905,0.803761904761905) rgb=(0.0230904761904762,0.641785714285714,0.791266666666667) rgb=(0.0227714285714286,0.653485714285714,0.776757142857143) rgb=(0.0266619047619048,0.664195238095238,0.760719047619048) rgb=(0.0383714285714286,0.674271428571429,0.743552380952381) rgb=(0.0589714285714286,0.683757142857143,0.725385714285714) rgb=(0.0843000000000000,0.692833333333333,0.706166666666667) rgb=(0.113295238095238,0.701500000000000,0.685857142857143) rgb=(0.145271428571429,0.709757142857143,0.664628571428572) rgb=(0.180133333333333,0.717657142857143,0.642433333333333) rgb=(0.217828571428571,0.725042857142857,0.619261904761905) rgb=(0.258642857142857,0.731714285714286,0.595428571428571)
rgb=(0.302171428571429,0.737604761904762,0.571185714285714) rgb=(0.348166666666667,0.742433333333333,0.547266666666667) rgb=(0.395257142857143,0.745900000000000,0.524442857142857) rgb=(0.442009523809524,0.748080952380952,0.503314285714286) rgb=(0.487123809523809,0.749061904761905,0.483976190476191) rgb=(0.530028571428571,0.749114285714286,0.466114285714286) rgb=(0.570857142857143,0.748519047619048,0.449390476190476) rgb=(0.609852380952381,0.747314285714286,0.433685714285714) rgb=(0.647300000000000,0.745600000000000,0.418800000000000) rgb=(0.683419047619048,0.743476190476191,0.404433333333333) rgb=(0.718409523809524,0.741133333333333,0.390476190476190) rgb=(0.752485714285714,0.738400000000000,0.376814285714286) rgb=(0.785842857142857,0.735566666666667,0.363271428571429) rgb=(0.818504761904762,0.732733333333333,0.349790476190476) rgb=(0.850657142857143,0.729900000000000,0.336028571428571) rgb=(0.882433333333333,0.727433333333333,0.321700000000000) rgb=(0.913933333333333,0.725785714285714,0.306276190476191) rgb=(0.944957142857143,0.726114285714286,0.288642857142857) rgb=(0.973895238095238,0.731395238095238,0.266647619047619) rgb=(0.993771428571429,0.745457142857143,0.240347619047619) rgb=(0.999042857142857,0.765314285714286,0.216414285714286) rgb=(0.995533333333333,0.786057142857143,0.196652380952381) rgb=(0.988000000000000,0.806600000000000,0.179366666666667) rgb=(0.978857142857143,0.827142857142857,0.163314285714286) rgb=(0.969700000000000,0.848138095238095,0.147452380952381) rgb=(0.962585714285714,0.870514285714286,0.130900000000000) rgb=(0.958871428571429,0.894900000000000,0.113242857142857) rgb=(0.959823809523810,0.921833333333333,0.0948380952380953) rgb=(0.966100000000000,0.951442857142857,0.0755333333333333) rgb=(0.976300000000000,0.983100000000000,0.0538000000000000) }, } \begin{groupplot}[ group style={group size=4 by 2, horizontal sep=2cm}, width=3cm, height=3cm ] \captionsetup{labelfont={color=white,bf}} \nextgroupplot[ylabel={{\footnotesize $u\left (t;\left( \frac{1}{3}, \frac{17}{10}\right)\right)$}}, scale only axis, enlargelimits=false, axis on top, axis equal image, xticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, ytick={12.5,37.5,62.5,87.5}, yticklabels={$-3$,$-1$,$1$,$3$}] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/surf_full_1.png}; \coordinate (start) at (axis cs:0,120); \coordinate (first) at (axis cs:50.5,120); \coordinate (first1) at (axis cs:50.5,125); \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_full_SW2D_1}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, xshift=-1.7cm] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/surf_full_2.png}; \coordinate (second) at (axis cs:50.5,120); \coordinate (second2) at (axis cs:50.5,125); \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_full_SW2D_2}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label 
style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, xshift=-1.7cm] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/surf_full_3.png}; \coordinate (third) at (axis cs:50.5,120); \coordinate (third2) at (axis cs:50.5,125); \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_full_SW2D_3}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, xshift=-1.7cm, colorbar sampled, colorbar style={height=6.4cm,anchor=colorbar_pos}, point meta min=0.9417, point meta max=1.3333] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/surf_full_5.png}; \coordinate (end) at (axis cs:100,120); \coordinate (colorbar_pos) at (axis cs:110,313); \coordinate (fourth) at (axis cs:50.5,120); \coordinate (fourth2) at (axis cs:50.5,125); \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_full_SW2D_4}}}; \nextgroupplot[ylabel={{\footnotesize{$\sum_{i=1}^{\Nr_{\tau}}U_i(t) Z_i\left (t;\left( \frac{1}{3}, \frac{17}{10}\right)\right)$}}}, scale only axis, enlargelimits=false, axis on top, axis equal image, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=0.6cm, xtick={12.5,37.5,62.5,87.5}, xticklabels={$-3$,$-1$,$1$,$3$}, ytick={12.5,37.5,62.5,87.5}, yticklabels={$-3$,$-1$,$1$,$3$}] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/loc_red_full_1_6_11_13.png}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_red_SW2D_1}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=0.6cm, xtick={12.5,37.5,62.5,87.5}, xticklabels={$-3$,$-1$,$1$,$3$}] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/loc_red_full_2_6_11_13.png}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_red_SW2D_2}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=0.6cm, xtick={12.5,37.5,62.5,87.5}, xticklabels={$-3$,$-1$,$1$,$3$}] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] {figures/data/SW2D_fancy_plot/loc_red_full_3_6_11_13.png}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_red_SW2D_3}}}; \nextgroupplot[scale only axis, enlargelimits=false, axis on top, axis equal image, yticklabels={,,}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, yshift=0.6cm, xtick={12.5,37.5,62.5,87.5}, xticklabels={$-3$,$-1$,$1$,$3$}] \addplot graphics[xmin=0,xmax=100,ymin=0,ymax=100] 
{figures/data/SW2D_fancy_plot/loc_red_full_5_6_11_13.png}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:surf_red_SW2D_4}}}; \end{groupplot} \draw [->,line width=0.25mm] (start) -- (end); \node[circle,fill=black,inner sep=0pt,minimum size=3pt] (0) at (first) {}; \node[inner sep=0pt,minimum size=3pt,label=above:{$t=0s$}] (0) at (first1) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt] (5) at (second) {}; \node[inner sep=0pt,minimum size=3pt,label=above:{$t=5s$}] (5) at (second2) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt] (15) at (third) {}; \node[inner sep=0pt,minimum size=3pt,label=above:{$t=15s$}] (15) at (third2) {}; \node[circle,fill=black,inner sep=0pt,minimum size=3pt] (20) at (fourth) {}; \node[inner sep=0pt,minimum size=3pt,label=above:{$t=20s$}] (20) at (fourth2) {}; \end{tikzpicture} \caption{SWE-2D: High-fidelity solution (Figs. \ref{fig:surf_full_SW2D_1}, \ref{fig:surf_full_SW2D_2}, \ref{fig:surf_full_SW2D_3} and \ref{fig:surf_full_SW2D_4}) and adaptive dynamical reduced solution (Figs. \ref{fig:surf_red_SW2D_1}, \ref{fig:surf_red_SW2D_2}, \ref{fig:surf_red_SW2D_3} and \ref{fig:surf_red_SW2D_4}) for the parameter $\left(\alpha,\beta\right)=\left(\frac{1}{3},\frac{17}{10}\right)$ and $t=0,5,15$ and $20s$. In the adaptive reduced approach, we set $r=1.1$, $c=1.3$ and $\Nr_1=6$.} \label{fig:surf_comparison_SW2D} \end{figure} Figure \ref{fig:error_runtime_SW2D} reports the error $E(T)$ vs. the runtime required to compute the solution for all $\prmh\in\Sprmh$ by means of the adaptive and non-adaptive dynamical reduced methods, for different values of $\Nr_1$, $r$ and $c$. Observe that the runtime of the high-fidelity solver is $3.29\cdot 10^{5}s$. The results show that both reduction methods are able to accurately approximate the high-fidelity solution, with speed-ups of $261$ for the non-adaptive approach and $113$ for the adaptive approach. The exceptional efficiency of the dynamical reduced approach in this context is a result of the combination of three main factors: the low-degree polynomial nonlinearity, the large number of degrees of freedom needed to represent the high-fidelity solution, and the compact dimension of the local reduced manifold. Despite the small computational overhead of the adaptive method, due to the error estimation, the basis updates and the larger approximating spaces used, the adaptive algorithm leads to approximations that are one ($\Nr_1=4$) to two ($\Nr_1=10$) orders of magnitude more accurate than the approximations obtained by the non-adaptive method.
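To make the role of the adaptive machinery concrete, the following is a minimal, illustrative-only skeleton (in Python) of a rank-adaptive reduced integration loop. The growth and contraction tests, as well as the helper \texttt{grow}, are hypothetical placeholders: they do not reproduce the error indicator \eqref{eq:ratio_tmp} nor the structure-preserving basis update of this work, where new columns must be added in ortho-symplectic pairs.
\begin{verbatim}
import numpy as np

def grow(U, Z):
    # Hypothetical helper: append one orthonormalized random column to U
    # (the actual method augments the basis with a symplectic pair).
    v = np.random.default_rng(0).standard_normal((U.shape[0], 1))
    v -= U @ (U.T @ v)
    U = np.hstack([U, v / np.linalg.norm(v)])
    Z = np.vstack([Z, np.zeros((1, Z.shape[1]))])
    return U, Z

def adaptive_loop(step, indicator, U, Z, n_steps, r=1.1, c=1.3, every=10):
    # step: advances (U, Z) by one time step of the reduced dynamics.
    # indicator: cheap error surrogate evaluated on a parameter subset.
    prev = None
    for k in range(n_steps):
        U, Z = step(U, Z)
        if k % every == 0:
            est = indicator(U, Z)
            if prev is not None and est > r * prev:    # error grows too fast
                U, Z = grow(U, Z)
            elif prev is not None and est < prev / c:  # basis likely oversized
                U, Z = U[:, :-1], Z[:-1, :]
            prev = est
    return U, Z
\end{verbatim}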
\begin{figure}[H] \centering \begin{tikzpicture}[spy using outlines={rectangle, width=4.4cm, height=5cm, magnification=1.9, connect spies}] \begin{axis}[xlabel={runtime $\left[s\right]$}, ylabel={$E(T)$}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, xmode=log, ymode=log, xmin = 1000, xmax = 10^(3.8), ymax = 2, ymin = 0.00001, every axis plot/.append style={thick}, width = 9cm, height = 6cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, ylabel near ticks, yticklabel pos=right, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[mark=x,color=black,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW2D_Pareto/error_final_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[mark=x,color=red,mark size=2pt, every node near coord/.append style={xshift=0.65cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/SW2D_Pareto/error_final_local_reduced_model_indicator_SW2D_1.1_1.2.txt}; \addplot+[mark=x,color=blue,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW2D_Pareto/error_final_local_reduced_model_indicator_SW2D_1.2_1.3.txt}; \addplot+[mark=x,color=green,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW2D_Pareto/error_final_local_reduced_model_indicator_SW2D_1.3_1.1.txt}; \addplot+[mark=x,color=cyan,mark size=2pt] table[x=Timing,y=Error] {figures/data/SW2D_Pareto/error_final_local_reduced_model_indicator_SW2D_1.1_1.3.txt}; \legend{Non adaptive, $r=1.1 \quad c=1.2$, $r=1.2 \quad c=1.3$, $r=1.3 \quad c=1.1$, $r=1.1 \quad c=1.3$, Full Model}; \end{axis} \end{tikzpicture} \caption{SWE-2D: Error \eqref{eqn:error_metric}, at time $T=20$, as a function of the runtime for the dynamical RB method ({\color{black}{\rule[.5ex]{1em}{1.2pt}}}) and the adaptive dynamical RB method for different values of the control parameters $r$ and $c$ ({\color{red}{\rule[.5ex]{1em}{1.2pt}}},{\color{blue}{\rule[.5ex]{1em}{1.2pt}}},{\color{green}{\rule[.5ex]{1em}{1.2pt}}},{\color{cyan}{\rule[.5ex]{1em}{1.2pt}}}) for the simulation of all the sampled parameters in $\Sprmh$. For comparison, the high-fidelity model runtime is $3.3\cdot 10^{5}s$.} \label{fig:error_runtime_SW2D} \end{figure} The results presented in Figure \ref{fig:error_basis_time_SWE2D} on the evolution of the error $E(t)$ for $\Nr_1\in\{4,6,8\}$ corroborate the conclusions, already drawn from the 1D test case, regarding the effect of a poorly approximated initial condition on the performance of the adaptive procedure. The evolution of the basis dimension is reported in Figures \ref{fig:time_basis_SWE2D_4}, \ref{fig:time_basis_SWE2D_6} and \ref{fig:time_basis_SWE2D_8} for different values of $r$, $c$ and $\Nr_{1}$.
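For reference, the error traces can be evaluated along the lines of the sketch below, under the assumption (to be checked against the definition in \eqref{eqn:error_metric}) that $E(t)$ is the relative $\ell^2$ error averaged over the sampled parameters.
\begin{verbatim}
import numpy as np

def error_metric(u_full, u_red):
    # u_full, u_red: arrays of shape (n_params, n_times, n_dof).
    # Assumed metric: parameter-averaged relative l2 error at each time.
    num = np.linalg.norm(u_full - u_red, axis=2)
    den = np.linalg.norm(u_full, axis=2)
    return (num / den).mean(axis=0)        # E(t), shape (n_times,)
\end{verbatim}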
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 3, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_4] {figures/data/SW2D_error_time_basis/error_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.1_1.2] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.2_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.3_1.1] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.1_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_4] {figures/data/SW2D_error_time_basis/error_full_model_corrupted_SW2D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE2D_4}}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=7, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_4] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.1_1.2] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1.2_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.3_1.1] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_4.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.1_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_4.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE2D_4}}}; \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymax = 4, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend style={at={(axis cs:2.3,0.06)},anchor=south west}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Time,y=Error_6] {figures/data/SW2D_error_time_basis/error_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_6] {figures/data/SW2D_error_time_basis/error_full_model_corrupted_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.1_1.2] 
{figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.2_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.3_1.1] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.1_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE2D_6}}}; \legend{{Non adaptive}, {Target}, {$r=1.1, \, c=1.2$}, {$r=1.2, \, c=1.3$}, {$r=1.3, \, c=1.1$}, {$r=1.1, \, c=1.3$}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_6] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.1_1.2] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1.2_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.3_1.1] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.1_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE2D_6}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_8] {figures/data/SW2D_error_time_basis/error_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.1_1.2] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.2_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.3_1.1] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.1_1.3] {figures/data/SW2D_error_time_basis/error_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_8] {figures/data/SW2D_error_time_basis/error_full_model_corrupted_SW2D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_SWE2D_8}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, 
xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_8] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_no_indicator_SW2D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.1_1.2] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1.2_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.3_1.1] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.1_1.3] {figures/data/SW2D_error_time_basis/basis_local_reduced_model_indicator_SW2D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_SWE2D_8}}}; \end{groupplot} \end{tikzpicture} \caption{SWE-2D: On the left column, we report the evolution of the error $E(t)$ \eqref{eqn:error_metric} for the adaptive and non-adaptive dynamical RB methods for different values of the control parameters $r$ and $c$, and for different dimensions $\Nr_1$ of the initial reduced manifold. The target error is obtained by solving the full model with initial condition obtained by projecting \eqref{eq:init_cond_SWE2D} onto a symplectic manifold of dimension $\Nr_1$. On the right column, we report the evolution of the dimension of the dynamical reduced basis over time. The adaptive algorithm is driven by the error indicator \eqref{eq:ratio_tmp}, while in the non-adaptive setting, the dimension does not change with time. We consider the cases $\Nr_1=4$ (Figs. \ref{fig:time_error_SWE2D_4}, \ref{fig:time_basis_SWE2D_4}), $\Nr_1=6$ (Figs. \ref{fig:time_error_SWE2D_6}, \ref{fig:time_basis_SWE2D_6}) and $\Nr_1=8$ (Figs. \ref{fig:time_error_SWE2D_8}, \ref{fig:time_basis_SWE2D_8}).} \label{fig:error_basis_time_SWE2D} \end{figure} \subsection{Nonlinear Schr\"odinger equation} The nonlinear Schr\"odinger equation (NLS) is used to model, among other phenomena, the propagation of light in nonlinear optical fibers and planar waveguides, and to describe Bose--Einstein condensates, a macroscopic gaseous superfluid wave-matter state at ultra-cold temperatures. We consider the model in one and two dimensions: first, we study the approximation of the solution to the 1D nonlinear Schr\"odinger equation in the scenario of a perturbed bright soliton regime \cite{sulem2007nonlinear}. Then, in the 2D setting, we test the adaptive strategy in the case of a Fourier mode cascade, where, starting from an initial condition represented by a few low Fourier modes, the energy transfer to higher modes quickly complicates the dynamics of the problem \cite{caputo2011fourier}. More specifically, in the spatial domain $\Omega:=[-L,L]$, we consider the cubic Schr\"odinger equation \begin{equation}\label{eq:schroedinger} \left\{ \begin{aligned} & i \dfrac{\partial u}{\partial t} + \dfrac{\partial^2 u}{\partial x^2} + \gamma |u|^2 u=0, & \mbox{in}\;\Omega\times \ensuremath{\mathcal{T}},\\ & u(t_0,x;\prm) = u^0(x;\prm), &\mbox{in}\;\Omega, \end{aligned}\right. \end{equation} with periodic boundary conditions and parameters $\prm$ and $\gamma$.
By writing the complex-valued solution $u$ in terms of its real and imaginary parts as $u=q+iv$, \eqref{eq:schroedinger} can be written as a Hamiltonian system in canonical symplectic form with Hamiltonian \begin{equation*} \ensuremath{\Hcal}(q,v;\prm) = \dfrac12 \int_{-L}^{L} \bigg[ \bigg(\dfrac{\partial q}{\partial x}\bigg)^2+ \bigg(\dfrac{\partial v}{\partial x}\bigg)^2- \dfrac{\gamma}{2} (q^2+v^2)^2 \bigg]\,dx. \end{equation*} \subsubsection{One-dimensional nonlinear Schr\"odinger equation (NLS-1D)} Let us consider a partition of the spatial domain $\Omega=[-20\pi,20\pi]$ into $\Nfh$ equispaced intervals $(x_i,x_{i+1})$ with $x_i=-L+(i-1)\Delta x$ for $i=1,\ldots,\Nfh$, and $\Delta x=2L/\Nfh$, with $L=20\pi$ and $\Nfh=1000$. We consider the initial condition \begin{equation}\label{eq:init_cond_NLS1D} u^{0}(x;\prmh) = \dfrac{\sqrt{2}}{\cosh{(\alpha_h x)}}e^{i\frac{x}{2}}, \end{equation} where the parameter $\prmh:=(\alpha_h,\gamma_h)$ is uniformly sampled from $\Sprmh\subset\Sprm:=[0.98,1.1]^2$ using $10$ values per dimension. For $\prmh^{*}=(1,1)$, \eqref{eq:schroedinger} has an analytical solution in the form of the solitary wave \begin{equation*} u(t,x;\prmh^{*}) = \dfrac{\sqrt{2}}{\cosh{\left(x-t\right)}}e^{i\left(\frac{x}{2}+\frac{3}{4}t\right)}, \end{equation*} while, for $\prmh\neq\prmh^{*}$, the solution comprises an additional ensemble of solitary waves, moving either left or right. We consider a finite difference discretization where the second-order spatial derivative is approximated using a centered scheme. Let $u_h(t;\prmh):=(q_1,\ldots,q_{\Nfh},v_1,\ldots,v_{\Nfh})$, for all $t\in\ensuremath{\mathcal{T}}$ and $\prmh\in\Sprmh$, where $\{q_i\}_{i=1}^{\Nfh}$ and $\{v_i\}_{i=1}^{\Nfh}$ are the degrees of freedom associated with the nodal approximation of $u$. Then, the semi-discrete problem can be recast in the form \eqref{eq:HamODE} with the discrete Hamiltonian \begin{equation*} \ensuremath{\Hcal}_h(u_h;\prmh) = \dfrac12 \sum_{i=1}^{\Nfh} \bigg[ \bigg(\dfrac{q_{i+1}-q_i}{\Delta x}\bigg)^2 + \bigg(\dfrac{v_{i+1}-v_{i}}{\Delta x}\bigg)^2 -\dfrac{\gamma_h}{2}(q_i^2+v_i^2)^2 \bigg], \end{equation*} where $q_0=q_{\Nfh}$ and $q_{\Nfh+1}=q_1$, and similarly for $v$, owing to the periodic boundary conditions. In this experiment, we set the temporal parameters to $\mathcal{T}=[0,50]$ and $\Delta t = 10^{-3}$ for a total of $N_{\tau}=50000$ steps. The symplectic implicit midpoint rule is employed as time integrator in the high-fidelity solver, while the reduced dynamics \eqref{eq:UZred} is integrated using the 2-stage partitioned RK method. The singular values of the global snapshot matrix $\mathcal{S}\in \mathbb{R}^{\Nf\times (N_{\tau} \Np)}$ and the average singular values of the local snapshot matrix $\mathcal{S}_{\tau}\in\mathbb{R}^{\Nf\times \Np}$ are displayed in Figure \ref{fig:singular_values_NLS1D_a}. The observed decay of the singular values of the local and global snapshot matrices suggests that the solution set is not \textit{globally} reducible. Moreover, the smooth evolution of the numerical rank, shown in Figure \ref{fig:singular_values_NLS1D_b}, is compatible with the advection of the initial condition together with the development of secondary waves of smaller amplitude.
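As an illustration, a minimal Python sketch of the right-hand side of the resulting canonical system $\dot q_i=\partial\ensuremath{\Hcal}_h/\partial v_i$, $\dot v_i=-\partial\ensuremath{\Hcal}_h/\partial q_i$, with the periodic centered Laplacian, could read as follows (the actual high-fidelity solver may organize these computations differently).
\begin{verbatim}
import numpy as np

def nls_rhs(u, gamma, dx):
    # u = (q_1,...,q_N, v_1,...,v_N); periodic centered differences.
    N = u.size // 2
    q, v = u[:N], u[N:]
    lap = lambda w: (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / dx**2
    cubic = gamma * (q**2 + v**2)
    qdot = -(lap(v) + cubic * v)   # =  dH_h/dv
    vdot =   lap(q) + cubic * q    # = -dH_h/dq
    return np.concatenate([qdot, vdot])
\end{verbatim}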
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 1, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[xlabel={index}, ylabel={singular values}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, xtick={200,600,1000,1400,1800,2200}, grid style = {gray,opacity=0.2}, ymode=log, ylabel near ticks, every axis plot/.append style={ultra thick}, xmin = 1, xmax = 2200, ymin = 1e-14, ymax = 1, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/NLS1D_singular_values/full_singular_values_NLS1D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/NLS1D_singular_values/avg_singular_values_NLS1D.txt}; \legend{global,local}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.095,1.05) {\subcaption{\label{fig:singular_values_NLS1D_a}}}; \coordinate (spypoint) at (axis cs:2,0.02); \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\epsilon$-rank}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, legend style={at={(1.2,1)},anchor=north}, xmin = 0, xmax = 50, ymin = 0, ymax = 60, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[] table[x=Time,y=Rank] {figures/data/NLS1D_epsilon_rank/1_epsilon_rank_NLS1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS1D_epsilon_rank/3_epsilon_rank_NLS1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS1D_epsilon_rank/5_epsilon_rank_NLS1D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS1D_epsilon_rank/7_epsilon_rank_NLS1D.txt}; \legend{$10^{-1}$,$10^{-3}$,$10^{-5}$,$10^{-7}$}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:singular_values_NLS1D_b}}}; \end{groupplot} \node[pin={[pin distance=0.8cm]290:{% \begin{tikzpicture}[baseline,trim axis right] \begin{axis}[ axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot post/.append style={ultra thick}, ymode=log, xmin=0,xmax=22, ymin=0.000005,ymax=1, width=3.5cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, ] \addplot[color=black] table[x=Index,y=Values] {figures/data/NLS1D_singular_values/full_singular_values_NLS1D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/NLS1D_singular_values/avg_singular_values_NLS1D.txt}; \end{axis} \end{tikzpicture}% }},draw,circle,minimum size=0.9cm] at (spypoint) {}; \end{tikzpicture} \caption{NLS-1D: \ref{fig:singular_values_NLS1D_a}) Singular values of the global snapshots matrix $\mathcal{S}$ and of the time average of the local trajectories matrix $\mathcal{S}_{\tau}$. The singular values are normalized using the largest singular value for each case. 
\ref{fig:singular_values_NLS1D_b}) $\epsilon$-rank of the local trajectories matrix $\mathcal{S}_{\tau}$ for different values of $\epsilon$.} \end{figure} In light of the fact that the problem does not exhibit global reducibility properties, we only compare the adaptive and the non-adaptive dynamical low rank approximations, without taking into consideration any global reduced model. For the non-adaptive reduced model, we consider, as fixed dimensions of the reduced manifold, $\Nr_1=\left \{ 4, 6, 8, 10, 12 \right \}$. The same values are the initial dimensions used in the adaptive algorithm. Moreover, the control parameters are $r=\left \{ 1.1, 1.2\right \}$ and $c=\left \{ 1.1, 1.2, 1.3 \right \}$. The error indicator is computed every $10$ time steps, and, to further reduce the computational cost, we solve the underlying linear system on a subset $\Gamma_{I}\subset\Gamma_{h}$ of $25$ uniformly sampled parameters. Figure \ref{fig:O_JO_error_NLS1D} confirms that the evolving basis $U$ generated by the dynamical reduced basis method satisfies the orthogonality and symplecticity constraints to machine precision. \begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 3, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[ylabel={$\left \| U^{\top}(t)U(t)-I_{2N} \right \|$}, xlabel={time $\left [ s \right ]$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, ymax = 10^(-13), ymin = 10^(-16), ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend columns = 2, legend style={at={(1.2,1.5)},anchor=north}] \addplot+[color=red] table[x=Timing,y=O_1.1_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_6.txt}; \addplot+[color=red,dashed] table[x=Timing,y=O_1.1_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_12.txt}; \addplot+[color=blue] table[x=Timing,y=O_1.3_1.1] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_6.txt}; \addplot+[color=blue,dashed] table[x=Timing,y=O_1.3_1.1] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_12.txt}; \addplot+[color=green] table[x=Timing,y=O_1.2_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_6.txt}; \addplot+[color=green,dashed] table[x=Timing,y=O_1.2_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_12.txt}; \addplot+[color=cyan] table[x=Timing,y=O_1.3_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_6.txt}; \addplot+[color=cyan,dashed] table[x=Timing,y=O_1.3_1.2] {figures/data/NLS1D_O_JO_orth/error_orth_NLS1D_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:O_error_NLS1D}}}; \legend{{$r=1.1, \quad c=1.2, \quad \Nr_{1}=6 \quad$}, {$r=1.1, \quad c=1.2, \quad \Nr_{1}=12$}, {$r=1.3, \quad c=1.1, \quad \Nr_{1}=6 \quad$}, {$r=1.3, \quad c=1.1, \quad \Nr_{1}=12$}, {$r=1.2, \quad c=1.2, \quad \Nr_{1}=6 \quad$}, {$r=1.2, \quad c=1.2, \quad \Nr_{1}=12$}, {$r=1.3, \quad c=1.2, \quad \Nr_{1}=6 \quad$}, {$r=1.3, \quad c=1.2, \quad \Nr_{1}=12$}, }; \nextgroupplot[ylabel={$\left \| U^\top(t)\J{\Nf}U(t)-\J{\Nrt} \right \|$}, xlabel={time $\left [ s \right ]$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, ymax = 10^(-13), ymin = 10^(-16), ymode=log, xlabel 
style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[color=red] table[x=Timing,y=JO_1.1_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=JO_1.3_1.1] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_6.txt}; \addplot+[color=green] table[x=Timing,y=JO_1.2_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=JO_1.3_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_6.txt}; \addplot+[color=red,dashed] table[x=Timing,y=JO_1.1_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_12.txt}; \addplot+[color=blue,dashed] table[x=Timing,y=JO_1.3_1.1] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_12.txt}; \addplot+[color=green,dashed] table[x=Timing,y=JO_1.2_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_12.txt}; \addplot+[color=cyan,dashed] table[x=Timing,y=JO_1.3_1.2] {figures/data/NLS1D_O_JO_orth/error_Jorth_NLS1D_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:JO_error_NLS1D}}}; \end{groupplot} \end{tikzpicture} \caption{NLS-1D: Evolution of the error in the orthogonality \ref{fig:O_error_NLS1D}) and symplecticity \ref{fig:JO_error_NLS1D}) of the reduced basis obtained with the adaptive dynamical RB method for different choices of the control parameters $r$, $c$ and initial dimension of the reduced manifold $\Nr_1$.} \label{fig:O_JO_error_NLS1D} \end{figure} The error $E(T)$ \eqref{eqn:error_metric} at final time vs. the runtime of the dynamical reduced basis method is shown in Figure \ref{fig:error_T_NLS1D}. Contrary to our expectations, the error does not decrease monotonically as the computational cost increases, while still retaining an accuracy in the range $[1.7\cdot10^{-5},1.8\cdot10^{-4}]$ in the adaptive case and $[2.2\cdot10^{-3},9.1\cdot10^{-2}]$ in the non-adaptive case, with average speedups of $23$ and $46$, respectively. For this numerical simulation, the choice of the control parameters $r$ and $c$ has some impact on the quality of the approximation, and the optimal pair seems to depend on $\Nr_1$.
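The deviations reported in Figure \ref{fig:O_JO_error_NLS1D} can be monitored by directly evaluating the constraint residuals, as in the following sketch, where $J_{2m}$ denotes the canonical symplectic matrix (for large $\Nf$ one would exploit its block structure rather than forming it explicitly).
\begin{verbatim}
import numpy as np

def canonical_J(m):
    # Canonical symplectic matrix J_{2m} = [[0, I_m], [-I_m, 0]].
    return np.block([[np.zeros((m, m)), np.eye(m)],
                     [-np.eye(m), np.zeros((m, m))]])

def constraint_errors(U):
    # U has shape (2N, 2n); returns ||U^T U - I_{2n}|| and
    # ||U^T J_{2N} U - J_{2n}|| in the Frobenius norm.
    twoN, twon = U.shape
    e_orth = np.linalg.norm(U.T @ U - np.eye(twon))
    e_symp = np.linalg.norm(U.T @ canonical_J(twoN // 2) @ U
                            - canonical_J(twon // 2))
    return e_orth, e_symp
\end{verbatim}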
\begin{figure}[H] \centering \begin{tikzpicture}[spy using outlines={rectangle, width=4cm, height=5.6cm, magnification=1.6, connect spies}] \begin{axis}[xlabel={runtime $\left[s\right]$}, ylabel={$E(T)$}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, xmode=log, ymode=log, ymax = 0.1, ymin = 0.00001, xmax = 300000, xmin = 900, every axis plot/.append style={thick}, width = 9cm, height = 6cm, legend style={at={(1.25,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[mark=x,color=black,mark size=2pt] table[x=Timing,y=Error] {figures/data/NLS1D_Pareto/error_final_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[mark=x,color=cyan,mark size=2pt] table[x=Timing,y=Error] {figures/data/NLS1D_Pareto/error_final_local_reduced_model_indicator_NLS1D_1.2_1.2.txt}; \addplot+[mark=x,color=blue,mark size=2pt] table[x=Timing,y=Error] {figures/data/NLS1D_Pareto/error_final_local_reduced_model_indicator_NLS1D_1.1_1.1.txt}; \addplot+[mark=x,color=green,mark size=2pt] table[x=Timing,y=Error] {figures/data/NLS1D_Pareto/error_final_local_reduced_model_indicator_NLS1D_1.1_1.2.txt}; \addplot+[mark=x,color=red,mark size=2pt, every node near coord/.append style={xshift=0.95cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/NLS1D_Pareto/error_final_local_reduced_model_indicator_NLS1D_1.2_1.3.txt}; \addplot+[color=black,dashed] table[x=Timing,y=DummyError] {figures/data/NLS1D_Pareto/error_final_full_NLS1D.txt}; \legend{{Non adaptive}, {$r=1.2, \quad c=1.2$}, {$r=1.1, \quad c=1.1$}, {$r=1.1, \quad c=1.2$}, {$r=1.2, \quad c=1.3$}, {Full model}}; \end{axis} \end{tikzpicture} \caption{NLS-1D: Error \eqref{eqn:error_metric}, at final time, as a function of the runtime for the dynamical RB method ({\color{black}{\rule[.5ex]{1em}{1.2pt}}}) and the adaptive dynamical RB method for different values of the control parameters $r$ and $c$ ({\color{red}{\rule[.5ex]{1em}{1.2pt}}},{\color{blue}{\rule[.5ex]{1em}{1.2pt}}},{\color{green}{\rule[.5ex]{1em}{1.2pt}}},{\color{cyan}{\rule[.5ex]{1em}{1.2pt}}}). For the sake of comparison, we report the timing required by the high-fidelity solver ({\color{black}{\rule[.5ex]{0.4em}{1.2pt}}} {\color{black}{\rule[.5ex]{0.4em}{1.2pt}}}) to compute the numerical solution for all values of the parameter $\prmh\in\Sprmh$.} \label{fig:error_T_NLS1D} \end{figure} The evolution of the error $E(t)$, reported in Figure \ref{fig:time_error_NLS1D} for $\Nr_{1}\in\{6,8,10\}$, supports the conclusion that the control parameters might have some impact on the accuracy of the reduced model. Indeed, within the tested control parameters, there is a gap of one order of magnitude between the errors of the best and worst approximations at the final time $T$, for all values of $\Nr_1$. We postpone greedy strategies for the selection of optimal $r$ and $c$ to future investigations.
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 3, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_6] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.2_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.1_1.1] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.1_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.3_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_6] {figures/data/NLS1D_error_time_basis/error_full_model_corrupted_NLS1D.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_NLS1D_6}}}; \nextgroupplot[ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_6] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.2_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1.1_1.1] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.1_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.3_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_NLS1D_6}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, ymode=log, ymax = 10^(-2), xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Time,y=Error_8] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_8] {figures/data/NLS1D_error_time_basis/error_full_model_corrupted_NLS1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.2_1.2] 
{figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.1_1.1] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.1_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.3_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_NLS1D_8}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_8] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.2_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=blue] table[x=Timing,y=Basis_1.1_1.1] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.1_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.3_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_NLS1D_8}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=50, ymode=log, ymax = 10^(-2), xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Time,y=Error_10] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_no_indicator_NLS1D.txt}; \addplot+[color=black,dashed] table[x=Time,y=Error_10] {figures/data/NLS1D_error_time_basis/error_full_model_corrupted_NLS1D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.2_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_10.txt}; \addplot+[color=blue] table[x=Timing,y=Error_1.1_1.1] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_10.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.1_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_10.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.3_1.2] {figures/data/NLS1D_error_time_basis/error_local_reduced_model_indicator_NLS1D_10.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_NLS1D_10}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$n_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append 
style={ultra thick}, xmin=0, xmax=50, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}, legend style={at={(axis cs:5,11)},anchor=south west},] \addplot[color=black] table[x=Time,y=Basis_10] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_no_indicator_NLS1D.txt};\addlegendentry{{Non adaptive}} \addlegendimage{color=black,dashed} \addlegendentry{{Target}} \addplot+[color=red] table[x=Timing,y=Basis_1.2_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_10.txt};\addlegendentry{{$r=1.2, \, c=1.2$}} \addplot+[color=blue] table[x=Timing,y=Basis_1.1_1.1] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_10.txt}; \addlegendentry{{$r=1.1, \, c=1.1$}} \addplot+[color=green] table[x=Timing,y=Basis_1.2_1.1] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_10.txt}; \addlegendentry{{$r=1.2, \, c=1.1$}} \addplot+[color=cyan] table[x=Timing,y=Basis_1.3_1.2] {figures/data/NLS1D_error_time_basis/basis_local_reduced_model_indicator_NLS1D_10.txt};\addlegendentry{{$r=1.2, \, c=1.3$}} \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_NLS1D_10}}}; \end{groupplot} \end{tikzpicture} \caption{NLS-1D: On the left column, evolution of the error $E(t)$ \eqref{eqn:error_metric} for the adaptive and non adaptive dynamical RB methods for different values of the control parameters $r$ and $c$, and for different dimensions $\Nr_1$ of the initial reduced manifold. The target error is obtained by solving the full model with initial condition obtained by projecting \eqref{eq:init_cond_NLS1D} onto a symplectic manifold of dimension $\Nr_1$. On the right column, evolution of the dimension of the dynamical reduced basis over time. We consider the cases $\Nr_1=6$ (Figs. \ref{fig:time_error_NLS1D_6}, \ref{fig:time_basis_NLS1D_6}), $\Nr_1=8$ (Figs. \ref{fig:time_error_NLS1D_8}, \ref{fig:time_basis_NLS1D_8}) and $\Nr_1=10$ (Figs. \ref{fig:time_error_NLS1D_10}, \ref{fig:time_basis_NLS1D_10}).} \label{fig:time_error_NLS1D} \end{figure} \subsubsection{Two-dimensional nonlinear Schr\"odinger equation (NLS-2D)} Let us consider the spatial domain $\Omega=[-2\pi, 2\pi]^2$ and the set of parameters $\Gamma = \left [ 0.97,1.03 \right ]^2$. We seek the numerical solution to the 2D extension of \eqref{eq:schroedinger}, with $\gamma=1$ and for $\Np=64$ uniformly sampled parameters $\eta_h:=(\alpha,\beta)\in\Gamma_h$ entering the initial condition \begin{equation}\label{eqn:initial_condition_NLS2D} u^{0}(x,y;\prmh) = \left ( 1+\alpha\sin{x}\right )\left ( 2+\beta\sin{y}\right ). \end{equation} This problem is characterized by an energy exchange between Fourier modes. Although this process is local, it is not well understood how the energy exchange mechanism is influenced by the problem dimension and parameters. In particular, although the values of $\alpha$ and $\beta$ have a limited impact on the low-rank structure of the initial condition \eqref{eqn:initial_condition_NLS2D}, the explicit effect of their variation on the energy exchange process is not known. We use a centered finite difference scheme to discretize the Laplacian operator. The domain $\Omega$ is partitioned using $M=100$ nodes per dimension, for a total of $N=10000$ points. 
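A standard way to assemble such an operator is as a Kronecker sum of one-dimensional periodic difference matrices, sketched below under the assumption of a uniform $M\times M$ grid and the lexicographic ordering implied by the Kronecker construction (the actual implementation may differ).
\begin{verbatim}
import scipy.sparse as sp

def periodic_laplacian_2d(M, dx):
    # 1D centered second-difference matrix with periodic wrap-around.
    L1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(M, M), format="lil")
    L1[0, -1] = 1.0
    L1[-1, 0] = 1.0
    L1 = (L1 / dx**2).tocsr()
    # 2D Laplacian as a Kronecker sum on the M x M periodic grid.
    I = sp.identity(M, format="csr")
    return sp.kron(I, L1) + sp.kron(L1, I)   # sparse, size M^2 x M^2
\end{verbatim}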
Let $u_h(t;\prmh)$, for all $t\in\ensuremath{\mathcal{T}}$ and $\prmh\in\Sprmh$, be the vector collecting the degrees of freedom associated with the nodal approximation of $u$. The semi-discrete problem is canonically Hamiltonian with the discrete Hamiltonian function \begin{equation*} \begin{aligned} \ensuremath{\Hcal}_h(u_h;\prmh) = \dfrac12 \sum_{i,j=1}^{M} \bigg[ & \bigg(\dfrac{q_{i+1,j}-q_{i,j}}{\Delta x}\bigg)^2 + \bigg(\dfrac{v_{i+1,j}-v_{i,j}}{\Delta x}\bigg)^2 +\\ & \bigg(\dfrac{q_{i,j+1}-q_{i,j}}{\Delta y}\bigg)^2 + \bigg(\dfrac{v_{i,j+1}-v_{i,j}}{\Delta y}\bigg)^2 -\dfrac{\gamma}{2}(q_{i,j}^2+v_{i,j}^2)^2 \bigg], \end{aligned} \end{equation*} with periodic boundary conditions for $q_{i,j}$ and $v_{i,j}$. We consider $N_{\tau}=12000$ time steps in the interval $\ensuremath{\mathcal{T}}=\left [ 0,T \right ]$, with $T=3$, so that $\Delta t = 2.5\cdot 10^{-4}$. As in the previous examples, the implicit midpoint rule is used as the numerical integrator in the high-fidelity solver. The reduced dynamics \eqref{eq:UZred} is integrated using the 2-stage partitioned RK method. To assess the reducibility of the problem, we collect in $\mathcal{S}\in\mathbb{R}^{\Nf\times(N_{\tau}\Np)}$ the snapshots associated with all parameters $\eta_h$ and times $t^{\tau}$, and in $\mathcal{S}_{\tau}\in\mathbb{R}^{\Nf\times \Np}$ the snapshots associated with all parameters $\eta_h$ at fixed time $t^{\tau}$, with $\tau=1,\dots,N_{\tau}$. Similarly to the 1D case, the slow decay of the singular values of $\mathcal{S}$, reported in Figure \ref{fig:singular_values_NLS2D_a}, suggests that a global reduced basis approach is not viable for model order reduction. The growing complexity of the high-fidelity solution, associated with different values of $\alpha$ and $\beta$, is reflected by the growth of the numerical rank shown in Figure \ref{fig:singular_values_NLS2D_b}. Hence, despite the exponential decay of the singular values of $\mathcal{S}_{\tau}$, Figure \ref{fig:singular_values_NLS2D_b} indicates that this test represents a challenging problem even for the adaptive algorithm, and a balance between accuracy and computational cost is necessary when adapting the dimension of the reduced manifold.
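The $\epsilon$-rank used in these plots can be computed directly from the snapshot matrices, e.g.\ as in the following sketch, where the singular values are normalized by the largest one, consistently with the figures.
\begin{verbatim}
import numpy as np

def eps_rank(S, eps):
    # Number of normalized singular values of S above the threshold eps.
    s = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(s / s[0] > eps))

# Example: rank evolution over the local snapshot matrices S_tau.
# ranks = [eps_rank(S_tau, 1e-5) for S_tau in local_matrices]
\end{verbatim}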
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 1, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[xlabel={index}, ylabel={singular values}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, ymode=log, ylabel near ticks, every axis plot/.append style={ultra thick}, xmin = 1, xmax=1199, ymin = 1e-11, ymax = 1, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/NLS2D_singular_values/full_singular_values_NLS2D.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/NLS2D_singular_values/avg_singular_values_NLS2D.txt}; \legend{global,local}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.08,1.05) {\subcaption{\label{fig:singular_values_NLS2D_a}}}; \coordinate (spypoint) at (axis cs:2,0.01); \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\epsilon$-rank}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin = 0, xmax = 3, ymin = 0, ymax = 60, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend style={at={(1.2,1)},anchor=north}] \addplot+[] table[x=Time,y=Rank] {figures/data/NLS2D_epsilon_rank/1_epsilon_rank_NLS2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS2D_epsilon_rank/3_epsilon_rank_NLS2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS2D_epsilon_rank/5_epsilon_rank_NLS2D.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/NLS2D_epsilon_rank/7_epsilon_rank_NLS2D.txt}; \legend{$10^{-1}$,$10^{-3}$,$10^{-5}$,$10^{-7}$}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.16,1.05) {\subcaption{\label{fig:singular_values_NLS2D_b}}}; \end{groupplot} \end{tikzpicture} \caption{NLS-2D: \ref{fig:singular_values_NLS2D_a}) Singular values of the global snapshots matrix $\mathcal{S}$ and of the time average of the local trajectories matrix $\mathcal{S}_{\tau}$. The singular values are normalized using the largest singular value for each case. \ref{fig:singular_values_NLS2D_b}) $\epsilon$-rank of the local trajectories matrix $\mathcal{S}_{\tau}$ for different values of $\epsilon$.} \end{figure} We consider several combinations of $r\in\left \{ 1.1, 1.2 \right \}$ and $c\in\left \{ 1.05,1.1,1.2 \right \}$ and different initial dimensions of the reduced manifold $\Nr_1\in\{6,8\}$. The error indicator is computed every $10$ time steps on a subset $\Gamma_I\subset\Gamma_h$ of $16$ uniformly sampled parameters. Both adaptive and non-adaptive reduced models are initialized using \eqref{eqn:initialization_local_low_rank}, with $U_0$ obtained via a complex SVD of the snapshots matrix $\mathcal{S}_1$ of the initial condition \eqref{eqn:initial_condition_NLS2D}. In line with the fact that the full model solution has a gradually increasing rank (Figure \ref{fig:singular_values_NLS2D_b}), adapting the dimension of the basis improves the accuracy of the approximation, as shown in Figure \ref{fig:error_time_NLS2D}. In terms of the computational cost of the adaptive dynamical model, we record a speedup of at least $58$ with respect to the high-fidelity model, whose runtime is $6.2 \cdot 10^{5}s$.
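For completeness, a minimal sketch of the complex-SVD initialization mentioned above: writing the snapshots as $C=Q+\mathrm{i}V$ (the stacking of the $q$- and $v$-components into $Q$ and $V$ is an assumption on our part), the truncated left singular vectors $\Phi=A+\mathrm{i}B$ yield the real basis $U_0=\left[\begin{smallmatrix} A & -B \\ B & A \end{smallmatrix}\right]$, which is both orthogonal and symplectic.
\begin{verbatim}
import numpy as np

def complex_svd_basis(Q, V, k):
    # Q, V: (N x n_snap) real snapshot blocks of q and v.
    C = Q + 1j * V
    Phi, _, _ = np.linalg.svd(C, full_matrices=False)
    A, B = Phi[:, :k].real, Phi[:, :k].imag
    # U0 is (2N x 2k), orthogonal and symplectic by construction.
    return np.block([[A, -B], [B, A]])
\end{verbatim}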
The recorded speedups can be explained as in the 2D shallow water test: in the presence of polynomial nonlinearities the strategy proposed in \Cref{sec:nonlinear} allows for computational costs that scale linearly with $\Nfh$, instead of like $\Nfh^{\frac{3}{2}}$ and $\Nfh^{\frac{5}{3}}$ for problems ensuing from semi-discrete formulations of PDEs in $2$D and $3$D, respectively. In Figure \ref{fig:error_time_NLS2D}, we observe that, although it increases in time, the error associated with the adaptive dynamical reduced model grows with a smaller slope than the error of the non-adaptive method. \begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 2, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=3, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_6] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_no_indicator_NLS2D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.2_1.05] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_6.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.2_1.1] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.1_1.05] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_NLS2D_6}}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=3, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_6] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_no_indicator_NLS2D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.2_1.05] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_6.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.2_1.1] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_6.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.1_1.05] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_NLS2D_6}}}; \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=3, ymode=log, ymax = 10^(1), legend columns = 2, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Time,y=Error_8] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_no_indicator_NLS2D.txt}; \addplot+[color=red] table[x=Timing,y=Error_1.2_1.05]
{figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_8.txt}; \addplot+[color=green] table[x=Timing,y=Error_1.2_1.1] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Error_1.1_1.05] {figures/data/NLS2D_error_time_basis/error_local_reduced_model_indicator_NLS2D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_NLS2D_8}}}; \legend{{Non adaptive}, {$r=1.2, \, c=1.05$}, {$r=1.2, \, c=1.1$}, {$r=1.1, \, c=1.05$}, {Target}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=3, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_8] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_no_indicator_NLS2D.txt}; \addplot+[color=red] table[x=Timing,y=Basis_1.2_1.05] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_8.txt}; \addplot+[color=green] table[x=Timing,y=Basis_1.2_1.1] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_8.txt}; \addplot+[color=cyan] table[x=Timing,y=Basis_1.1_1.05] {figures/data/NLS2D_error_time_basis/basis_local_reduced_model_indicator_NLS2D_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_NLS2D_8}}}; \end{groupplot} \end{tikzpicture} \caption{NLS-2D: On the left column, we report the evolution of the error $E(t)$ \eqref{eqn:error_metric} for the adaptive and non adaptive dynamical RB methods for different values of the control parameters $r$ and $c$, and for different dimensions $\Nr_1$ of the initial reduced manifold. On the right column, we report the evolution of the dimension of the dynamical reduced basis over time. We consider the cases $\Nr_1=6$ (Figs. \ref{fig:time_error_NLS2D_6}, \ref{fig:time_basis_NLS2D_6}) and $\Nr_1=8$ (Figs. \ref{fig:time_error_NLS2D_8}, \ref{fig:time_basis_NLS2D_8}).} \label{fig:error_time_NLS2D} \end{figure} \subsection{Vlasov--Poisson plasma model with forced electric field} The Vlasov--Poisson system describes the dynamics of a collisionless magnetized plasma under the action of a self-consistent electric field. The evolution of the plasma at any time $t\in \ensuremath{\mathcal{T}}\subset\mathbb{R}$ is described in terms of the distribution function $f^s(t,x,v)$ ($s$ denotes the particle species) in the Cartesian phase space domain $(x,v)\in\Omega:=\Omega_x\times\Omega_v\subset\mathbb{R}^2$. \begin{comment} Assume that $\Omega_x:=\mathbb{T}^d=\mathbb{R}^d/(2\pi\mathbb{Z}^d)$ is the $d$-dimensional torus and $\Omega_v:=\mathbb{R}^d$. The 2D-2V Vlasov--Poisson problem ($d=2$) reads: For $f^s_0\in V_{|_{t=0}}$, find $f^s\in C^1(\ensuremath{\mathcal{T}};L^2(\Omega))\cap C^0(\ensuremath{\mathcal{T}};V)$, and the electric field $E\in C^0(\ensuremath{\mathcal{T}};L^2(\Omega_x))$ such that \begin{equation}\label{eq:VP} \begin{aligned} \partial_t f^s+v\,\cdot \nabla_x f^s + \dfrac{q^s}{m^s} E\,\cdot \nabla_v f^s=0, &\qquad\mbox{in}\;\Omega\times\ensuremath{\mathcal{T}},\;\forall s,\\ \nabla_x\cdot E=\sum_{s}q^s \int_{\Omega_v}f^s\, dv, &\qquad\mbox{in}\;\Omega_x\times\ensuremath{\mathcal{T}},\\ f^s(0,x,v)=f^s_0, &\qquad\mbox{in}\;\Omega. 
\end{aligned} \end{equation} Here $q^s$ denotes the charge and $m^s$ the particle mass, and the space $V$ is defined as $V:=\{f(t,\cdot,\cdot)\in L^2(\Omega):\;f>0,\; f(t,\cdot,v)\sim e^{-v^2}\,\mbox{ as }\,|v|\to\infty\}$. As shown in \cite[Sections 1 and 2]{CCFM17}, the Vlasov--Poisson problem admits a Hamiltonian formulation with a Lie--Poisson bracket and Hamiltonian given by the sum of the kinetic and electric energy as \begin{equation}\label{eq:VPHam} \ensuremath{\Hcal}(f) = \sum_s \dfrac{m^s}{2} \int_{\Omega} v^2 f^s(t,x,v) \,dx\,dv + \dfrac12\int_{\Omega_x} |E(t,x)|^2\,dx. \end{equation} \end{comment} In this work, we consider the one-species ($s=1$) paraxial approximation of the Vlasov--Poisson equation, used in the study of long and thin beams of particles \cite{hirstoaga2019design}. More specifically, we assume that the beam has reached a stationary state, that the longitudinal length of the beam is the predominant spatial scale, and that the velocity along the longitudinal direction is constant. Moreover, we look at the case in which the effects of the self-consistent electric field $E$ are negligible compared to the ones caused by an external electric field that we denote by $\Xi$. The external electric field is assumed to be independent of time and periodic with respect to the longitudinal dimension. Using the scaling argument proposed in \cite{frenod2009long} and the aforementioned assumptions, the problem is: For $f_0\in V_{|_{t=0}}$, find $f\in C^1(\ensuremath{\mathcal{T}};L^2(\Omega))\cap C^0(\ensuremath{\mathcal{T}};V)$ such that \begin{equation}\label{eq:VP_paraxial} \begin{aligned} \partial_t f+\dfrac{1}{\epsilon}v\,\partial_x f + \Xi\,\partial_v f=0, &\qquad\mbox{in}\;\Omega\times\ensuremath{\mathcal{T}},\\ f(0,x,v)=f_0, &\qquad\mbox{in}\;\Omega, \end{aligned} \end{equation} where the electric field $\Xi$ is prescribed at all $t\in\ensuremath{\mathcal{T}}$, $x\in\Omega_x$, the parameter $\epsilon\in\mathbb{R}$ represents a spatial scaling, and the Vlasov equation has been normalized so that mass and charge are set to $m=q=1$. In \eqref{eq:VP_paraxial}, since we are considering stationary states, the variable $t$ can be interpreted as the longitudinal coordinate and $\mathcal{T}$ as the longitudinal spatial domain. For the semi-discrete approximation of the Vlasov equation in \eqref{eq:VP_paraxial} we consider a particle method: The distribution function $f$ is approximated by the superposition of $P\in\mathbb{N}$ computational macro-particles, each having a weight $\omega_\ell$, so that \begin{equation*} f(t,x,v)\approx f_h(t,x,v) = \sum_{\ell=1}^P \omega_\ell\, S(x-X_\ell(t)) S(v-V_\ell(t)), \end{equation*} where $X(t)$ and $V(t)$ are the vectors of positions and velocities of the macro-particles, respectively, and $S$ is a compactly supported shape function, here chosen to be the Dirac delta. The idea of particle methods is to derive the time evolution of the approximate distribution function $f_h$ by advancing the macro-particles along the characteristics of the Vlasov equation. Particle methods, like particle-in-cell (PIC), are widely used in the numerical simulation of plasma problems. However, their slow convergence requires many particles to achieve sufficient accuracy, and PIC methods are therefore expensive. Model order reduction in the number of macro-particles of these semi-discrete schemes can thus be crucial and potentially extremely beneficial.
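For later reference, we make the characteristic flow explicit: for the paraxial model \eqref{eq:VP_paraxial}, the macro-particles are advanced along the system \begin{equation*} \dot{X}_{\ell}(t) = \dfrac{1}{\epsilon}\, V_{\ell}(t), \qquad \dot{V}_{\ell}(t) = \Xi(X_{\ell}(t)), \qquad \ell = 1, \ldots, P, \end{equation*} so that the semi-discrete unknowns are the vectors $X(t), V(t)\in\mathbb{R}^{P}$.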
The particle approximation of problem \eqref{eq:VP_paraxial} yields a Hamiltonian system where the unknowns are the vectors of position $X$ and velocity $V$ of the particles, and the discrete Hamiltonian reads \begin{equation}\label{eq:VPHamN} \begin{aligned} \ensuremath{\Hcal}_h(f_h) = \sum_{\ell=1}^P \dfrac{1}{2\epsilon}\, \omega_\ell V_\ell(t)^2 -\phi(X_{\ell}(t)) = \dfrac{1}{2\epsilon} V(t)^\top W_p V(t) -\phi(X(t)). \end{aligned} \end{equation} Here $\phi$ denotes the potential, defined as $\Xi(x) = -\partial_x \phi(x)$, for all $x\in\Omega_x$, $W_p := \mathrm{diag}(\omega_1,\ldots,\omega_P)$, and $\mathrm{diag}(d)$ denotes the diagonal matrix with diagonal elements given by the vector $d$. For this test we consider $\Nfh=1000$ particles with uniform unit weights, $\omega_i=1$ for all $i=1,\dots,\Nfh$. The external electric field is given as $\Xi(t,x)=-x^3$ for all $t\in\ensuremath{\mathcal{T}}$ and $x\in\Omega_x$. The entries of the initial position $X(0)$ and velocity $V(0)$ vectors are independently sampled from the perturbed Maxwellian \begin{equation}\label{eqn:VPInitial} f(0,x,v)= \left( \dfrac{1}{\sqrt{2\pi}\alpha}e^{-0.5v^2\alpha^{-2}}\right) \left( 1+\beta\cos\left( 4\pi\dfrac{x+0.8}{1.6}\right)\right), \end{equation} using the inversion sampling technique on the spatial domain $\Omega_x=[-0.8,0.8]$. The vector-valued parameter $\prm=(\alpha,\beta,\epsilon)$ takes values in the set $\Gamma_h$, derived via uniform samples of the parameter domain $\Gamma=[0.07,0.09]\times[0.02,0.03]\times[0.4,0.8]$ with $\Np=125$ values. The full model solution is computed in the interval $\mathcal{T}=[0,20]$, split into $N_{\tau}=20000$ time steps, using the symplectic midpoint rule. In this setting, particles oscillate along the longitudinal dimension with different transverse velocities and an approximate period of $2\pi\epsilon$, with a bulk of slow particles in the center of the beam spreading thin filaments of faster particles, as shown in Figure \ref{fig:init_cond_final_V1D1V}. \begin{figure}[H] \centering \includegraphics[scale=0.87]{figures/data/V1D1V_scatter_data/fancy_spiral_initial.pdf} \hspace{0.35cm} \includegraphics[scale=0.87]{figures/data/V1D1V_scatter_data/fancy_spiral_final.pdf} \begin{tikzpicture} \node[inner sep=0pt] at (-0.5,0) {(a)}; \node[inner sep=0pt] at (7,0) {(b)}; \end{tikzpicture} \caption{Vlasov 1D1V: Particle distribution associated with the initial condition (\ref{fig:init_cond_final_V1D1V}a) and the high-fidelity solution at $t=T$ (\ref{fig:init_cond_final_V1D1V}b) for all the parameter values $\prmh\in\Sprmh$.} \label{fig:init_cond_final_V1D1V} \end{figure} The reducibility of the problem is studied by computing the normalized singular values of the global snapshots matrix $\mathcal{S}\in\mathbb{R}^{2N\times(N_{\tau}p)}$ and the time average of the normalized singular values of the matrices $\mathcal{S}_{\tau}\in\mathbb{R}^{2N\times p}$ of snapshots at fixed time $t^{\tau}$ for all $\tau=1,\dots,N_{\tau}$, collecting the high-fidelity solutions corresponding to all the sampled parameters $\prmh$. The spectra, reported in Figure \ref{fig:singular_values_V1D1V_a}, suggest that the decay of the global singular values is fast enough to hint at a global low-rank structure of the problem. However, for this test case, an adaptive dynamical approach is expected to be beneficial in capturing the increasing rank of the solution (Figure \ref{fig:singular_values_V1D1V_b}) with a smaller local reduced basis.
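As an aside, the following is a minimal sketch of one step of the implicit (symplectic) midpoint rule for the characteristic system $\dot{X} = V/\epsilon$, $\dot{V} = \Xi(X)$ above; the fixed-point solver, its tolerance, the iteration cap and the usage values are illustrative choices, not necessarily those used to produce the results reported below.
\begin{verbatim}
import numpy as np

def midpoint_step(X, V, dt, eps, Xi=lambda x: -x**3, tol=1e-12, max_iter=100):
    """One implicit midpoint step for dX/dt = V/eps, dV/dt = Xi(X).

    The midpoint values (Xm, Vm) solve Xm = X + dt/2 * Vm / eps and
    Vm = V + dt/2 * Xi(Xm); here they are found by fixed-point iteration.
    """
    Xm, Vm = X.copy(), V.copy()
    for _ in range(max_iter):
        Xm_new = X + 0.5 * dt * Vm / eps
        Vm_new = V + 0.5 * dt * Xi(Xm)
        err = max(np.abs(Xm_new - Xm).max(), np.abs(Vm_new - Vm).max())
        Xm, Vm = Xm_new, Vm_new
        if err < tol:
            break
    return 2.0 * Xm - X, 2.0 * Vm - V   # endpoint values at t + dt

# Usage sketch with illustrative values (1000 particles, dt = 20/20000):
X = np.random.uniform(-0.8, 0.8, 1000)
V = 0.08 * np.random.randn(1000)
X, V = midpoint_step(X, V, dt=1e-3, eps=0.5)
\end{verbatim}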
\begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 1, horizontal sep=2cm}, width=7cm, height=5cm ] \nextgroupplot[xlabel={index}, ylabel={singular values}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, ymode=log, ylabel near ticks, every axis plot/.append style={ultra thick}, legend style={at={(0.8,0.36)},anchor=north}, xmin = 1, ymin = 1e-14, ymax = 1, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny} ] \addplot[color=black] table[x=Index,y=Values] {figures/data/V1D1V_singular_values/full_singular_values_V1D1V.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/V1D1V_singular_values/avg_singular_values_V1D1V.txt}; \legend{global,local}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.09,1.05) {\subcaption{\label{fig:singular_values_V1D1V_a}}}; \coordinate (spypoint) at (axis cs:2,0.005); \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\epsilon$-rank}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin = 0, xmax = 7, ymin = 0, ymax = 80, legend style={at={(1.16,1)},anchor=north}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[] table[x=Time,y=Rank] {figures/data/V1D1V_epsilon_rank/0.1_epsilon_rank_V1D1V.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/V1D1V_epsilon_rank/0.001_epsilon_rank_V1D1V.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/V1D1V_epsilon_rank/1e-05_epsilon_rank_V1D1V.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/V1D1V_epsilon_rank/1e-07_epsilon_rank_V1D1V.txt}; \addplot+[] table[x=Time,y=Rank] {figures/data/V1D1V_epsilon_rank/1e-09_epsilon_rank_V1D1V.txt}; \legend{$10^{-1}$,$10^{-3}$,$10^{-5}$,$10^{-7}$,$10^{-9}$}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.02,1.05) {\subcaption{\label{fig:singular_values_V1D1V_b}}}; \end{groupplot} \node[pin={[pin distance=3.2cm]372:{% \begin{tikzpicture}[baseline,trim axis right] \begin{axis}[ axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, every axis plot post/.append style={ultra thick}, tiny, ymode=log, xmin=0,xmax=22, ymin=0.0001,ymax=1, width=3.6cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, ] \addplot[color=black] table[x=Index,y=Values] {figures/data/V1D1V_singular_values/full_singular_values_V1D1V.txt}; \addplot[color=red] table[x=Index,y=Values] {figures/data/V1D1V_singular_values/avg_singular_values_V1D1V.txt}; \end{axis} \end{tikzpicture}% }},draw,circle,minimum size=1cm] at (spypoint) {}; \end{tikzpicture} \caption{Vlasov 1D1V: \ref{fig:singular_values_V1D1V_a}) Singular values of the global snapshots matrix $\mathcal{S}$ and time average of the singular values of the local trajectories matrix $\mathcal{S}_{\tau}$. The singular values are normalized using the largest singular value for each case. 
\ref{fig:singular_values_V1D1V_b}) $\epsilon$-rank of the local trajectories matrix $\mathcal{S}_{\tau}$ for different values of $\epsilon$.} \end{figure} \noindent Figure \ref{fig:Hamiltonian_error_Vlasov} shows the relative error in the conservation of the Hamiltonian for different dimensions of the reduced manifold and different values of the control parameters $r$ and $c$. Although exact Hamiltonian conservation is not guaranteed by the proposed partitioned RK methods, the conservation error remains well controlled, almost independently of the reduced dimension and of the control parameters, as a result of the preservation of the symplectic structure both in the reduction and in the discretization. The development of temporal integrators for the Vlasov--Poisson problem that are both structure and energy preserving should be a subject of future investigations. \begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 2, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E_{\mathcal{H}_h}(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Timing,y=Error] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_V1D1V_6.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_6.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_6.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_6.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_V1D1V_6}}}; \nextgroupplot[axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymin=10^(-8), ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend pos = south east, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}] \addplot[color=black] table[x=Timing,y=Error] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_V1D1V_8.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_V1D1V_8}}}; \legend{Non adaptive, $r=1.2 \quad c=1.1$, $r=1.3 \quad c=1.1$, $r=1.5 \quad c=1.1$}; \nextgroupplot[ylabel={$E_{\mathcal{H}_h}(t)$}, xlabel={$t$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2},
every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Timing,y=Error] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_V1D1V_10.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_10.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_10.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_10.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_V1D1V_10}}}; \nextgroupplot[xlabel={$t$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Timing,y=Error] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_no_indicator_V1D1V_12.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_Hamiltonian/error_Hamiltonian_local_reduced_model_indicator_V1D1V_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:error_Hamiltonian_V1D1V_12}}}; \end{groupplot} \end{tikzpicture} \caption{Vlasov 1D1V: Relative error \eqref{eqn:relative_error_Hamiltonian} in the conservation of the discrete Hamiltonian \eqref{eq:VPHamN} for the dynamical adaptive reduced basis method with initial reduced dimensions $\Nr_1=6$ (Fig. \ref{fig:error_Hamiltonian_V1D1V_6}), $\Nr_1=8$ (Fig. \ref{fig:error_Hamiltonian_V1D1V_8}), $\Nr_1=10$ (Fig. \ref{fig:error_Hamiltonian_V1D1V_10}) and $\Nr_1=12$ (Fig. \ref{fig:error_Hamiltonian_V1D1V_12}).} \label{fig:Hamiltonian_error_Vlasov} \end{figure} In Figure \ref{fig:error_final_V1D1V}, we compare the error \eqref{eqn:error_metric} and the runtime of the global reduced model, the dynamical models for different values of $r$, and the high-fidelity model. For the global reduced method, we consider the complex SVD approach with and without the tensorial representation of the RHS and of its Jacobian, \emph{cf.} \Cref{sec:nonlinear}. The results show that, as we increase the dimension of the reduced basis, the runtime cost of the global reduced model becomes larger than that required to solve the high-fidelity problem, i.e., a global reduction proves ineffective. Both the non-adaptive and the adaptive dynamical reduced approaches outperform the global reduced method, reaching comparable levels of accuracy at a much lower computational cost.
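For reference, we also include a minimal sketch of how a global ortho-symplectic basis can be assembled via the complex SVD from real snapshots, in the spirit of the global approach compared above; the snapshot layout (positions stacked over momenta) and the truncation rank $n$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

def complex_svd_basis(S, n):
    """Ortho-symplectic basis of size 2N x 2n from real snapshots S = [Q; P].

    Stack positions over momenta as the complex matrix C = Q + 1j*P, truncate
    its SVD to n modes, and unfold U = Phi + 1j*Psi into the block matrix
    A = [[Phi, -Psi], [Psi, Phi]].
    """
    N = S.shape[0] // 2
    C = S[:N, :] + 1j * S[N:, :]
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    Phi, Psi = U[:, :n].real, U[:, :n].imag
    return np.block([[Phi, -Psi], [Psi, Phi]])
\end{verbatim}
Since $U = \Phi + i\Psi$ has orthonormal columns, the returned matrix $A$ satisfies $A^{\top} J_{2N} A = J_{2n}$, i.e., it is ortho-symplectic; this is the structural property exploited by the symplectic reduction.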
In the adaptive algorithm, the additional computational cost associated with the evaluation of the error indicator and the evolution of a larger basis is balanced by a considerable error reduction. \begin{figure}[H] \centering \begin{tikzpicture}[spy using outlines={rectangle, width=6.2cm, height=5cm, magnification=2, connect spies}] \begin{axis}[xlabel={runtime $\left[s\right]$}, ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, grid style = {gray,opacity=0.2}, xmode=log, ymode=log, ymax = 0.018, ymin = 0.00001, xmax = 60001, every axis plot/.append style={ultra thick}, width = 9cm, height = 6cm, legend style={at={(1.4,1)},anchor=north}, legend cell align=left, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot+[mark=x,color=black,size=2pt, every node near coord/.append style={xshift=0.65cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/V1D1V_Pareto/error_final_local_reduced_model_no_indicator_V1D1V.txt}; \addplot+[mark=x,color=red,size=2pt] table[x=Timing,y=Error] {figures/data/V1D1V_Pareto/error_final_local_reduced_model_indicator_V1D1V_1.2_1.1.txt}; \addplot+[mark=x,color=blue,size=2pt] table[x=Timing,y=Error] {figures/data/V1D1V_Pareto/error_final_local_reduced_model_indicator_V1D1V_1.3_1.1.txt}; \addplot+[mark=x,color=green,size=2pt] table[x=Timing,y=Error] {figures/data/V1D1V_Pareto/error_final_local_reduced_model_indicator_V1D1V_1.5_1.1.txt}; \addplot+[mark=x,color=magenta,size=2pt, every node near coord/.append style={xshift=0.65cm}, every node near coord/.append style={yshift=-0.2cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/V1D1V_Pareto/error_final_global_reduced_model_V1D1V_standard.txt}; \addplot+[mark=x,color=magenta,size=2pt,dashed, every node near coord/.append style={xshift=-0.35cm}, every node near coord/.append style={yshift=-0.55cm}, nodes near coords, point meta=explicit symbolic, every node near coord/.append style={font=\footnotesize}] table[x=Timing,y=Error, meta index=2] {figures/data/V1D1V_Pareto/error_final_global_reduced_model_V1D1V_cubic_expansion.txt}; \addplot+[color=black,dashed] table[x=Timing,y=DummyError] {figures/data/V1D1V_Pareto/error_final_full_V1D1V.txt}; \legend{Non adaptive, $r=1.2 \quad c=1.1$, $r=1.3 \quad c=1.1$, $r=1.5 \quad c=1.1$, Global, Global (Tensorial POD), Full model}; \coordinate (spypoint) at (axis cs:800,0.0025); \coordinate (magnifyglass) at (80,0.0019); \end{axis} \end{tikzpicture} \caption{Vlasov 1D1V: Error \eqref{eqn:error_metric}, at final time, as a function of the runtime for the complex SVD method ({\color{magenta}{\rule[.5ex]{1em}{1.2pt}}},{\color{magenta}{\rule[.5ex]{0.45em}{1.2pt}}}{\color{white}{\rule[.5ex]{0.1em}{1.2pt}}}{\color{magenta}{\rule[.5ex]{0.45em}{1.2pt}}}), the dynamical RB method ({\color{black}{\rule[.5ex]{1em}{1.2pt}}}) and the adaptive dynamical RB method for different values of the control parameters $r$ and $c$ ({\color{red}{\rule[.5ex]{1em}{1.2pt}}},{\color{blue}{\rule[.5ex]{1em}{1.2pt}}},{\color{green}{\rule[.5ex]{1em}{1.2pt}}}).
For the sake of comparison, we report the timing required by the high-fidelity solver ({\color{black}{\rule[.5ex]{0.4em}{1.2pt}}} {\color{black}{\rule[.5ex]{0.4em}{1.2pt}}}) to compute the numerical solution for all values of the parameter $\prmh\in\Sprmh$.} \label{fig:error_final_V1D1V} \end{figure} In Figure \ref{fig:time_basis_V1D1V}, we report the growth of the dimension of the reduced manifold for different initial dimensions $\Nr_1$. As for the evolution of the error, we do not notice any significant difference as the parameter $r$ of the adaptive criterion \eqref{eq:ratio_tmp} varies. The increase of the rank of the full model solution, see Figure \ref{fig:singular_values_V1D1V_b}, is reproduced by the adaptive algorithm up to a tolerance of around $\epsilon=10^{-5}$. \begin{figure}[H] \centering \begin{tikzpicture} \begin{groupplot}[ group style={group size=2 by 3, horizontal sep=2cm}, width=7cm, height=4cm ] \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_4] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_4.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_4.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_4.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_V1D1V_4}}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_4] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] table[x=Timing,y=Basis_1_2] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_4.txt}; \addplot[color=blue] table[x=Timing,y=Basis_1_3] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_4.txt}; \addplot[color=green] table[x=Timing,y=Basis_1_5] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_4.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_V1D1V_4}}}; \nextgroupplot[ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_8]
{figures/data/V1D1V_error_time_basis/error_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_V1D1V_8}}}; \nextgroupplot[ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}, legend columns = 2, legend style={nodes={scale=0.83, transform shape}}, legend style={at={(axis cs:3.1,8.4)},anchor=south west}] \addplot[color=black] table[x=Time,y=Basis_8] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] table[x=Timing,y=Basis_1_2] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=blue] table[x=Timing,y=Basis_1_3] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_8.txt}; \addplot[color=green] table[x=Timing,y=Basis_1_5] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_8.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_V1D1V_8}}}; \legend{{Non adaptive}, {$r=1.2, c=1.1$}, {$r=1.3, c=1.1$}, {$r=1.5, c=1.1$}} \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$E(t)$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, ymode=log, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Error_12] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] table[x=Timing,y=Error_1_2] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=blue] table[x=Timing,y=Error_1_3] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=green] table[x=Timing,y=Error_1_5] {figures/data/V1D1V_error_time_basis/error_local_reduced_model_indicator_V1D1V_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_error_V1D1V_12}}}; \nextgroupplot[xlabel={time $\left [ s \right ]$}, ylabel={$\Nr_{\tau}$}, axis line style = thick, grid=both, minor tick num=2, max space between ticks=20, grid style = {gray,opacity=0.2}, every axis plot/.append style={ultra thick}, xmin=0, xmax=20, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, x tick label style={font=\footnotesize}, y tick label style={font=\footnotesize}, legend style={font=\tiny}] \addplot[color=black] table[x=Time,y=Basis_12] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_no_indicator_V1D1V.txt}; \addplot[color=red] 
table[x=Timing,y=Basis_1_2] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=blue] table[x=Timing,y=Basis_1_3] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_12.txt}; \addplot[color=green] table[x=Timing,y=Basis_1_5] {figures/data/V1D1V_error_time_basis/basis_local_reduced_model_indicator_V1D1V_12.txt}; \node [text width=1em,anchor=north west] at (rel axis cs: 0.01,1.05) {\subcaption{\label{fig:time_basis_V1D1V_12}}}; \end{groupplot} \end{tikzpicture} \caption{Vlasov 1D1V: On the left, we show the evolution of the error $E(t)$ \eqref{eqn:error_metric} for the adaptive and non adaptive dynamical RB methods for different values of the control parameters $r$ and $c$, and for different dimensions $\Nr_1$ of the initial reduced manifold. On the right, we show the evolution of the dimension of the dynamical reduced basis. We consider the cases $\Nr_1=4$ (Figs. \ref{fig:time_error_V1D1V_4}, \ref{fig:time_basis_V1D1V_4}), $\Nr_1=8$ (Figs. \ref{fig:time_error_V1D1V_8}, \ref{fig:time_basis_V1D1V_8}) and $\Nr_1=12$ (Figs. \ref{fig:time_error_V1D1V_12}, \ref{fig:time_basis_V1D1V_12}).} \label{fig:time_basis_V1D1V} \end{figure} \section{Concluding remarks}\label{sec:conclusions} We have considered parametrized non-dissipative problems in their canonical symplectic Hamiltonian formulation. For their model order reduction, we propose a nonlinear structure-preserving reduced basis method that approximates the problem solution with a modal decomposition in which both the expansion coefficients and the reduced basis evolve in time. Moreover, the dimension of the reduced basis is updated in time according to an adaptive strategy based on an error indicator. The resulting reduced models achieve stable and accurate results with a small reduced basis, even for problems characterized by a slowly decaying Kolmogorov $n$-width. Their strength lies in the combination of the dynamical adaptivity of the reduced basis with the preservation of the geometric structure underlying key physical properties of the dynamics, as illustrated by the numerical examples. The study of efficient and structure-preserving algorithms for general nonlinear Hamiltonian vector fields and the development of partitioned Runge--Kutta methods that ensure the exact preservation of (at least linear and quadratic) invariants are still open problems and provide interesting directions of investigation. Moreover, the application of our rank-adaptive reduced basis method to fully kinetic plasma models, like the Vlasov--Poisson problem, might also be the subject of future studies. \begin{acknowledgment*} This work was partially supported by AFOSR under grant FA9550-17-1-9241. \end{acknowledgment*} \printbibliography
\section{Motivation and main results} Let $(Z_t^{(q, H)})_{t \geq 0}$ be a Hermite process of order $q \geq 1$ and self-similarity parameter $H \in (\frac{1}{2}, 1)$. It is an $H$-self-similar process with stationary increments, exhibits long-range dependence and can be expressed as a multiple Wiener-It\^{o} integral of order $q$ with respect to a two-sided standard Brownian motion $(B(t))_{t \in \mathbb{R}}$ as follows: \begin{equation}\label{eq:H1} Z^{(q, H)}(t) = c(H, q) \int_{\mathbb{R}^q} \bigg( \int_0^t \prod_{j=1}^q(s- \xi_j)_+^{H_0 - \frac{3}{2}}ds\bigg) dB(\xi_1)\ldots dB(\xi_q), \end{equation} where \begin{equation}\label{eq:H2} c(H, q) = \sqrt{\frac{H(2H - 1)}{q! \beta^q(H_0 - \frac{1}{2}, 2-2H_0)}} \qquad \text{and} \qquad H_0 = 1+\frac{H-1}{q}. \end{equation} Particular examples include the fractional Brownian motion $(q=1)$ and the Rosenblatt process $(q=2)$. For $q \geq 2$, the process is no longer Gaussian. All Hermite processes share with fractional Brownian motion the same basic properties, such as self-similarity, stationary increments, long-range dependence and even the covariance structure. The Hermite process has been extensively studied in the last decade, due to its potential to be a good model for various phenomena. A theory of stochastic integration with respect to $Z^{(q, H)}$, as well as stochastic differential equations driven by this process, have been considered recently. We refer to \cite{Ivan, Nualart} for a recent account of the fractional Brownian motion and its many applications. We refer to \cite{Taqqu2, Tudor2,Tudor} for different aspects of the Rosenblatt process. Furthermore, in the direction of stochastic calculus, the construction of Wiener integrals with respect to $Z^{(q, H)}$ is studied in \cite{Tudor1}. According to this latter reference, stochastic integrals of the form \begin{equation}\label{eq:12} \int_{\mathbb{R}}f(u)dZ^{(q, H)}(u) \end{equation} are well-defined for elements of $\mathcal{H} = \{f: \mathbb{R} \to \mathbb{R}: \int_{\mathbb{R}}\int_{\mathbb{R}} f(u)f(v)|u-v|^{2H-2}dudv < \infty\}$, endowed with the norm \begin{equation}\label{eq:14} ||f||_{\mathcal{H}}^2 = H(2H-1)\int_{\mathbb{R}}\int_{\mathbb{R}} f(u)f(v)|u-v|^{2H-2}dudv. \end{equation} Moreover, when $f \in \mathcal{H}$, the stochastic integral (\ref{eq:12}) can be written as \begin{equation}\label{eq:13} \int_{\mathbb{R}}f(u)dZ^{(q, H)}(u) = c(H, q) \int_{\mathbb{R}^q}\bigg(\int_{\mathbb{R}}f(u) \prod_{j=1}^q(u- \xi_j)_+^{H_0 - \frac{3}{2}}du\bigg)dB(\xi_1)\ldots dB(\xi_q) \end{equation} where $c(H, q)$ and $H_0$ are as in (\ref{eq:H2}). Since the elements of $\mathcal{H}$ may not be functions but distributions (see \cite{Nualart}), it is more practical to work with the following subspace of $\mathcal{H}$, which is a set of functions: $$ |\mathcal{H}| = \bigg\{f: \mathbb{R} \to \mathbb{R}: \int_{\mathbb{R}}\int_{\mathbb{R}} |f(u)||f(v)||u-v|^{2H-2}dudv < \infty \bigg\}.$$ Consider the stochastic integral equation \begin{equation}\label{eq:SDE} X(t) = \xi - \lambda \int_0^tX(s)ds + \sigma Z^{(q, H)}(t), \qquad t \geq 0, \end{equation} where $ \lambda, \sigma > 0$ and where the initial condition $\xi$ can be any random variable. By \cite[Prop.
1]{Tudor1}, the unique continuous solution of (\ref{eq:SDE}) is given by $$X(t) = e^{-\lambda t} \bigg( \xi + \sigma \int_0^t e^{\lambda u} dZ^{(q, H)}(u) \bigg), \qquad t \geq 0.$$ In particular, if we choose $\xi = \sigma \int_{-\infty}^0 e^{\lambda u} dZ^{(q, H)}(u)$, then \begin{equation}\label{eq:17} X(t) = \sigma \int_{-\infty}^t e^{-\lambda(t-u)}dZ^{(q, H)}(u), \qquad t\geq 0. \end{equation} According to \cite{Tudor1}, the process $X$ defined by (\ref{eq:17}) is referred to as the Hermite Ornstein-Uhlenbeck process of order $q$. On the other hand, if the initial condition $\xi$ is set to be zero, then the unique continuous solution of (\ref{eq:SDE}) is this time given by \begin{equation}\label{eq:18} X(t) = \sigma \int_0^t e^{-\lambda(t-u)}dZ^{(q, H)}(u), \qquad t\geq 0. \end{equation} In this paper, we call the stochastic process (\ref{eq:18}) the \textit{non-stationary} Hermite Ornstein-Uhlenbeck process of order $q$. It is a particular example of a wider class of moving average processes driven by a Hermite process, of the form \begin{equation}\label{eq:Xt} X_t^{(q, H)} := \int_0^t x(t-u) dZ^{(q, H)}(u), \qquad t \geq 0. \end{equation} In many situations of interest (see, e.g., \cite{Viens1, Viens2}), we may have to analyze the asymptotic behavior of the \textit{quadratic functionals of $X_t^{(q, H)}$} for statistical purposes. More precisely, let us consider \begin{equation}\label{eq:Gt} G_T^{(q, H)}(t):=\frac{1}{T^{2H_0 - 1}}\int_0^{Tt}\Big(\big(X_s^{(q, H)}\big)^2 - E\Big[\big(X_s^{(q, H)}\big)^2\Big]\Big) ds. \end{equation} In this paper, we will show that $G_T^{(q, H)}$ converges in the sense of finite-dimensional distributions to the Rosenblatt process (up to a multiplicative constant), irrespective of the value of $q \geq 2$ and $H \in (\frac{1}{2}, 1)$. The case $q=1$ is different and is dealt with in Theorem \ref{Theorem2} below. \begin{Theorem}\label{Theorem1} Let $H \in (\frac{1}{2}, 1)$ and let $Z^{(q, H)}$ be a Hermite process of order $q \geq 2$ and self-similarity parameter $H$. Consider the Hermite-driven moving average process $X^{(q, H)}$ defined by (\ref{eq:Xt}), and assume that the kernel $x$ is a real-valued integrable function on $[0, \infty)$ satisfying, in addition, \begin{equation}\label{eq:1} \int_{\mathbb{R}_+^2} |x(u)||x(v)||u-v|^{2H-2}dudv < \infty. \end{equation} Then, as $T \to \infty$, the family of stochastic processes $G_T^{(q, H)}$ converges in the sense of finite-dimensional distributions to $b(H, q)R^{H'}$, where $R^{H'}$ is the Rosenblatt process of parameter $H' = 1 + (2H-2)/q$ (which is the \emph{second-order} Hermite process of parameter $H'$), and the multiplicative constant $b(H, q)$ is given by \begin{equation}\label{eq:19} b(H, q) = \frac{H(2H-1)}{\sqrt{(H_0 -\frac{1}{2})(4H_0 - 3)}}\int_{\mathbb{R}_+^2}x(u)x(v)|u-v|^{(q-1)(2H_0 -2)}dudv. \end{equation} (The fact that (\ref{eq:19}) is well-defined is part of the conclusion of the theorem.) \end{Theorem} Theorem \ref{Theorem1} only deals with $q \geq 2$, because the case $q=1$ is different. In this case, $Z^{(1, H)}$ is nothing but the fractional Brownian motion of index $H$ and $X^{(1, H)}$ is the fractional Volterra process, as considered by Nourdin, Nualart and Zintout in \cite{Ivan2}. In this latter reference, a Central Limit Theorem for $G_T^{(1, H)}$ has been established for $H \in (\frac{1}{2}, \frac{3}{4})$. Here, we rather study the situation where $H \in (\frac{3}{4}, 1)$ and, in contrast to \cite{Ivan2}, we show a Non-Central Limit Theorem. More precisely, we have the following theorem.
\begin{Theorem}\label{Theorem2} Let $H \in (\frac{3}{4}, 1)$. Consider the fractional Volterra process $X^{(1, H)}$ given by (\ref{eq:Xt}) with $q=1$. If the function $x$ defining $X^{(1, H)}$ is an integrable function on $[0, \infty)$ and satisfies (\ref{eq:1}), then the family of stochastic processes $G_T^{(1, H)}$ converges in the sense of finite-dimensional distributions, as $T \to \infty$, to the Rosenblatt process $R^{H''}$ of parameter $H'' = 2H-1$ multiplied by the constant $b(H, 1)$ defined in (\ref{eq:19}). \end{Theorem} It is worth pointing out that, irrespective of the self-similarity parameter $H \in (\frac{1}{2}, 1)$, the normalized quadratic functional of any non-Gaussian Hermite-driven long memory moving average process $(q \geq 2)$ converges to a limit belonging to the second Wiener chaos. This is in strong contrast with what happens in the Gaussian case $(q=1)$, where either central or non-central limit theorems may arise, depending on the value of the self-similarity parameter. We note that our Theorem \ref{Theorem2} is pretty close to Taqqu's seminal result \cite{Taqqu1975}, but cannot be obtained as a consequence of it. In contrast, the statement of Theorem \ref{Theorem1} is completely new, and provides new hints on the importance and relevance of the Rosenblatt process in statistics. Our proofs of Theorems \ref{Theorem1} and \ref{Theorem2} are based on chaotic expansions into multiple Wiener-It\^{o} integrals and on a key lemma allowing to transform classical multiple Wiener-It\^{o} integrals into multiple integrals with respect to a random spectral measure (following a strategy initiated by Taqqu in \cite{Taqqu}). Let us sketch them. Since the random variable $X_t^{(q, H)}$ is an element of the $q$-th Wiener chaos, we first rely on the product formula for multiple integrals to obtain that the quadratic functional $G_T^{(q, H)}(t)$ can be decomposed into a sum of multiple integrals of even orders from $2$ to $2q$. Secondly, we prove that the projection onto the second Wiener chaos converges in $L^2(\Omega)$ to the Rosenblatt process: we do this by using the spectral representation of multiple Wiener-It\^{o} integrals and by checking the $L^2(\mathbb{R}^2)$ convergence of its kernel. Finally, we prove that all the remaining terms in the chaos expansion are asymptotically negligible. Our findings, and the strategy we have followed to obtain them, owe a lot to, and were influenced by, several seminal papers on Non-Central Limit Theorems for functionals of Gaussian (or related) processes, including Dobrushin and Major \cite{Dobrushin Major}, Taqqu \cite{Taqqu} and, most recently, Clausel \textit{et al.} \cite{CATudor1, CATudor2} and Neufcourt and Viens \cite{Viens}. Our paper is organised as follows. Section $2$ contains preliminary key lemmas. The proofs of our two main results, namely Theorems \ref{Theorem1} and \ref{Theorem2}, are then provided in Section $3$ and Section $4$. \section{Preliminaries} Here, we mainly follow Taqqu \cite{Taqqu}. We describe a useful connection between multiple Wiener-It\^{o} integrals with respect to a random spectral measure and classical multiple Wiener-It\^{o} integrals with respect to Brownian motion. Stochastic representations of the Rosenblatt process are then provided at the end of the section.
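Before proceeding, it may be useful to record how the various self-similarity exponents fit together; the identities below follow immediately from the definition of $H_0$ in (\ref{eq:H2}) and from those of $H'$ and $H''$ in Theorems \ref{Theorem1} and \ref{Theorem2}: \begin{equation*} H' = 1 + \frac{2H-2}{q} = 2H_0 - 1, \qquad\qquad H'' = 2H - 1 = 2H_0 - 1 \quad \text{(case } q=1\text{)}. \end{equation*} Thus, in both theorems the limit is a Rosenblatt process of parameter $2H_0 - 1$, consistently with the normalization $T^{2H_0-1}$ used in (\ref{eq:Gt}). Moreover, $2H_0 - 1 \in (\frac{1}{2}, 1)$ exactly when $q > 4(1-H)$, which holds for every $H \in (\frac{1}{2}, 1)$ as soon as $q \geq 2$, but forces $H \in (\frac{3}{4}, 1)$ when $q = 1$; this explains the restriction on $H$ in Theorem \ref{Theorem2}.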
\subsection{Multiple Wiener-It\^{o} integrals with respect to Brownian motion} Let $f \in L^2(\mathbb{R}^q)$ and let us denote by $I_q^B(f)$ the $q$th multiple Wiener-It\^{o} integral of $f$ with respect to the standard two-sided Brownian motion $(B_t)_{t \in \mathbb{R}}$, in symbols $$I_q^B(f) = \int_{\mathbb{R}^q} f(\xi_1, \ldots, \xi_q) dB(\xi_1)\ldots dB(\xi_q).$$ When $f$ is symmetric, we can see $I_q^B(f)$ as the following iterated adapted It\^{o} stochastic integral: $$I_q^B(f) = q! \int_{-\infty}^{\infty}dB(\xi_1) \int_{-\infty}^{\xi_1}dB(\xi_2) \ldots \int_{-\infty}^{\xi_{q-1}}dB(\xi_q) f(\xi_1, \ldots, \xi_q).$$ Moreover, when $f$ is not necessarily symmetric one has $I_q^B(f) = I_q^B(\widetilde{f})$, where $\widetilde{f}$ is the symmetrization of $f$ defined by \begin{equation}\label{eq:dola} \widetilde{f}(\xi_1, \ldots, \xi_q) = \frac{1}{q!}\sum_{\sigma \in \mathfrak{S}_q} f(\xi_{\sigma(1)}, \ldots, \xi_{\sigma(q)}). \end{equation} The set of random variables of the form $I_q^B(f), f \in L^2(\mathbb{R}^q)$, is called the $q$th Wiener chaos of $B$. We refer to Nualart's book \cite{Nualart} (chapter 1 therein) or Nourdin and Peccati's books \cite{Ivan, PeccatiIvan} for a detailed exposition of the construction and properties of multiple Wiener-It\^{o} integrals. Here, let us only recall the product formula between two multiple integrals: if $f \in L^2(\mathbb{R}^p)$ and $g \in L^2(\mathbb{R}^q)$ are two symmetric functions then \begin{equation}\label{eq:P1} I_p^B(f)I_q^B(g) = \sum_{r=0}^{p \wedge q} r!\binom{p}{r}\binom{q}{r}I_{p+q-2r}^B(f \widetilde{\otimes}_r g), \end{equation} where the contraction $f \otimes_r g$, which belongs to $L^2(\mathbb{R}^{p+q-2r})$ for every $r = 0, 1, \ldots, p \wedge q$, is given by \begin{align}\label{eq:P2} f \otimes_r g& (y_1, \ldots, y_{p-r}, z_1, \ldots, z_{q-r}) \nonumber\\ & = \int_{\mathbb{R}^r} f(y_1, \ldots, y_{p-r}, \xi_1, \ldots, \xi_r) g(z_1, \ldots, z_{q-r}, \xi_1, \ldots, \xi_r) d\xi_1 \ldots d\xi_r \end{align} and where a tilde denotes the symmetrization, see (\ref{eq:dola}). Observe that \begin{equation}\label{eq:3} \| f \widetilde{\otimes}_r g\|_{L^2(\mathbb{R}^{p+q-2r})} \leq \| f \otimes_r g\|_{L^2(\mathbb{R}^{p+q-2r})} \leq \|f\|_{L^2(\mathbb{R}^p)}\|g\|_{L^2(\mathbb{R}^q)}, \quad r= 0, \ldots, p\wedge q \end{equation} by Cauchy-Schwarz inequality, and that $f \otimes_p g = \left\langle f, g \right\rangle_{L^2(\mathbb{R}^p)}$ when $p=q$. Furthermore, we have the orthogonality property $$E[I_p^B(f)I_q^B(g)] = \begin{cases} & p! \big\langle \widetilde{f}, \widetilde{g} \big\rangle_{L^2(\mathbb{R}^p)} \qquad\text{if } p=q\\ & 0 \qquad\qquad\qquad\quad \text{ if } p \ne q. \end{cases}$$ \subsection{Multiple Wiener-It\^{o} integrals with respect to a random spectral measure} Let $W$ be a Gaussian complex-valued random spectral measure that satisfies $E[W(A)] = 0, E[W(A)\overline{W(B)}] = \mu(A \cap B), W(A) = \overline{W(-A)} $ and $W(\bigcup_{j=1}^n A_j) = \sum_{j=1}^n W(A_j)$ for all disjoint Borel sets that have finite Lebesgue measure (denoted here by $\mu$). The Gaussian random variables $\text{Re}W(A)$ and $\text{Im}W(A)$ are then independent with expectation zero and variance $\mu(A)/2$. We now recall briefly the construction of multiple Wiener-It\^{o} integrals with respect to $W$, as defined in Major \cite{Major} or Section $4$ of Dobrushin \cite{Dobrushin}. 
To define such stochastic integrals, let us introduce the real Hilbert space $\mathscr{H}_m$ of complex-valued symmetric functions $f(\lambda_1, \ldots, \lambda_m), \lambda_j \in \mathbb{R}, j=1, 2, \ldots, m$, which are even, i.e. $f(\lambda_1, \ldots, \lambda_m) = \overline{f(-\lambda_1, \ldots, -\lambda_m)}$, and square integrable, that is, $$\|f \|^2 = \int_{\mathbb{R}^m}|f(\lambda_1, \ldots, \lambda_m)|^2 d\lambda_1\ldots d\lambda_m < \infty.$$ The scalar product is defined accordingly: namely, if $f, g \in \mathscr{H}_m$, then $$\left\langle f, g \right\rangle_{\mathscr{H}_m} = \int_{\mathbb{R}^m} f(\lambda_1, \ldots, \lambda_m)\overline{g(\lambda_1, \ldots, \lambda_m)}d\lambda_1 \ldots d\lambda_m.$$ The integrals $I_m^W$ are then defined through an isometric mapping from $\mathscr{H}_m$ to $L^2(\Omega)$: $$ f \longmapsto I_m^W(f) = \int_{\mathbb{R}^m}f(\lambda_1, \ldots, \lambda_m) W(d\lambda_1)\ldots W(d\lambda_m).$$ Following \emph{e.g.} the lecture notes of Major \cite{Major}, if $f \in \mathscr{H}_m$ and $g \in \mathscr{H}_n$, then $E[I_m^W(f)] = 0$ and \begin{equation}\label{eq:21} E[I_m^W(f)I_n^W(g)] = \begin{cases} & m!\left\langle f, g \right\rangle_{\mathscr{H}_m} \text{ if } m=n\\ & 0 \qquad\qquad\quad\text{if } m \ne n. \end{cases} \end{equation} \subsection{Preliminary lemmas} We recall a connection between the classical multiple Wiener-It\^{o} integral $I^B$ and the multiple integral $I^W$ with respect to a random spectral measure, which will play an important role in our analysis. \begin{lemma}\cite[Lemma 6.1]{Taqqu}\label{Lemma6.1} Let $A(\xi_1, \ldots, \xi_m)$ be a real-valued symmetric function in $L^2(\mathbb{R}^m)$ and let \begin{equation}\label{eq:Fourier} \mathcal{F}A(\lambda_1, \ldots, \lambda_m) = \frac{1}{(2\pi)^{m/2}}\int_{\mathbb{R}^m}e^{i\sum_{j=1}^m \xi_j\lambda_j}A(\xi_1,\ldots, \xi_m)d\xi_1\ldots d\xi_m \end{equation} be its Fourier transform. Then $$\int_{\mathbb{R}^m}A(\xi_1,\ldots, \xi_m)dB(\xi_1) \ldots dB(\xi_m) \overset{(d)}{=} \int_{\mathbb{R}^m}\mathcal{F}A(\lambda_1,\ldots, \lambda_m)W(d\lambda_1)\ldots W(d\lambda_m).$$ \end{lemma} Applying Lemma \ref{Lemma6.1}, we deduce the following lemma, which extends Lemma 6.2 in \cite{Taqqu}. \begin{lemma}\label{Lemma2} Let $$ A(\xi_1, \ldots, \xi_{m+n}) = \int_{\mathbb{R}^2} \phi(z_1, z_2)\prod_{j=1}^{m}(z_1 - \xi_j)_+^{H_0 -\frac{3}{2}}\prod_{k=m+1}^{m+n}(z_2 - \xi_k)_+^{H_0 -\frac{3}{2}}dz_1dz_2,$$ where $\frac{1}{2} < H_0 < 1$ and where $\phi$ is an integrable function on $\mathbb{R}^2$ whose Fourier transform $\mathcal{F}\phi$ is defined as in (\ref{eq:Fourier}) (with $m$ replaced by $2$). Let $$\widetilde{A}(\xi_1, \ldots, \xi_{m+n}) = \frac{1}{(m+n)!}\sum_{\sigma \in \mathfrak{S}_{m+n}}A(\xi_{\sigma(1)}, \ldots, \xi_{\sigma(m+n)})$$ be the symmetrization of $A$. Assume that $$\int_{\mathbb{R}^{m+n}}|\widetilde{A}(\xi_1, \ldots, \xi_{m+n})|^2d\xi_1\ldots d\xi_{m+n} < \infty.$$ Then, \begin{align*} &\int_{\mathbb{R}^{m+n}}\widetilde{A}(\xi_1,\ldots, \xi_{m+n})dB(\xi_1)\ldots dB(\xi_{m+n})\\ &\overset{(d)}{=} \bigg(\frac{\Gamma(H_0 - \frac{1}{2})}{\sqrt{2\pi}}\bigg)^{m+n} \int_{\mathbb{R}^{m+n}}W(d\lambda_1) \ldots W(d\lambda_{m+n}) \prod_{j=1}^{m+n} |\lambda_j|^{\frac{1}{2} - H_0}\\ &\qquad\qquad\quad\qquad\times\frac{1}{(m+n)!}\sum_{\sigma \in \mathfrak{S}_{m+n}} 2\pi\mathcal{F}\phi(\lambda_{\sigma(1)}+\ldots+ \lambda_{\sigma(m)}, \lambda_{\sigma(m+1)} + \ldots + \lambda_{\sigma(m+n)}). \end{align*} \end{lemma} \begin{proof} Thanks to Lemma \ref{Lemma6.1}, we first estimate the Fourier transform of $A(\xi_1, \ldots, \xi_{m+n})$.
Because the function $u_+^{H_0 - \frac{3}{2}}$ belongs neither to $L^1(\mathbb{R})$ nor to $L^2(\mathbb{R})$, we proceed by truncation, with similar arguments as in the proof of \cite[Lemma 6.2]{Taqqu}: let us introduce $$A_T(\xi_1, \ldots, \xi_{m+n})= \begin{cases} &A(\xi_1, \ldots, \xi_{m+n}) \text{ if } |\xi_j| < T \text{ }\forall j =1, \ldots, m+n,\\ &0 \qquad\qquad\qquad\text{ otherwise.} \end{cases} $$ Set $$B_\lambda(a, b) = \frac{1}{\sqrt{2\pi}}\int_a^b e^{-iu\lambda}u^{H_0 - \frac{3}{2}}du$$ for $0 \leq a \leq b < \infty$, and $B_\lambda(a, \infty) = \lim_{b \to \infty}B_\lambda(a, b)$. By \cite[page 80]{Taqqu}, we get $$\sup_{0 \leq a \leq b}|B_\lambda(a, b)| \leq \frac{1}{\sqrt{2\pi}}\bigg(\frac{1}{H_0 - \frac{1}{2}} + \frac{2}{|\lambda|}\bigg).$$ Now, \begin{align*} &\mathcal{F}A_T(\lambda_1,\ldots, \lambda_{m+n}) = \frac{1}{(\sqrt{2\pi})^{m+n}}\int_{\mathbb{R}^{m+n}}d\xi_1\ldots d\xi_{m+n} e^{i \sum_{j=1}^{m+n}\lambda_j\xi_j} \int_{\mathbb{R}^2}dz_1dz_2 \phi(z_1, z_2)\\ &\qquad\quad\qquad\qquad\qquad\qquad\times\prod_{j=1}^m(z_1 - \xi_j)_+^{H_0 - \frac{3}{2}}\prod_{j=m+1}^{m+n}(z_2 - \xi_j)_+^{H_0 - \frac{3}{2}}\mathbf{1}_{\{|\xi_j|<T, \forall j =1,\ldots, m+n\}}. \end{align*} The change of variables $\xi_j = z_1 - u_j$ for $j=1, \ldots, m$ and $\xi_j = z_2 - u_j$ for $j=m+1, \ldots, m+n$ yields \begin{align*} &\mathcal{F}A_T(\lambda_1,\ldots, \lambda_{m+n})\\ &= \frac{1}{(\sqrt{2\pi})^{m+n}}\int_{\mathbb{R}^{m+n}}du_1 \ldots du_{m+n} e^{-i \sum_{j=1}^{m+n}\lambda_j u_j} \int_{\mathbb{R}^2}dz_1dz_2 \phi(z_1, z_2)e^{i \sum_{j=1}^m\lambda_jz_1 }e^{i \sum_{j=m+1}^{m+n}\lambda_jz_2}\\ &\qquad\qquad\qquad\times \prod_{j=1}^{m} u_j^{H_0 - \frac{3}{2}}\mathbf{1}_{\{u_j > 0\}}\mathbf{1}_{\{ z_1 - T < u_j < z_1 + T\}}\prod_{j=m+1}^{m+n} u_j^{H_0 - \frac{3}{2}}\mathbf{1}_{\{u_j > 0\}}\mathbf{1}_{\{ z_2 - T < u_j < z_2 + T\}}. \end{align*} Suppose that $\lambda_1,\ldots, \lambda_{m+n}$ are all different from zero. Since $\phi$ is integrable on $\mathbb{R}^2$, we obtain \begin{align*} |\mathcal{F}A_T&(\lambda_1,\ldots, \lambda_{m+n})|\\ &\leq \int_{\mathbb{R}^2}dz_1dz_2 |\phi(z_1, z_2)| \prod_{j=1}^m \big|B_{\lambda_j}(\max(0, z_1-T), \max(0, z_1+T))\big|\\ &\qquad\qquad\qquad\quad\quad\times \prod_{j=m+1}^{m+n} \big|B_{\lambda_j}(\max(0, z_2-T), \max(0, z_2+T))\big|\\ & \leq \int_{\mathbb{R}^2}dz_1dz_2 |\phi(z_1, z_2)| \prod_{j=1}^{m+n}\frac{1}{\sqrt{2\pi}}\bigg(\frac{1}{H_0 -\frac{1}{2}} + \frac{2}{|\lambda_j|}\bigg), \end{align*} which is finite and uniformly bounded with respect to $T$. Thus, \begin{align*} &\mathcal{F}A(\lambda_1,\ldots, \lambda_{m+n})= \lim_{T\to\infty}\mathcal{F}A_T(\lambda_1,\ldots, \lambda_{m+n})\\ & = 2\pi \mathcal{F}\phi(\lambda_1+ \ldots+ \lambda_m, \lambda_{m+1} + \ldots + \lambda_{m+n}) \prod_{j=1}^{m+n} \bigg(\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{-iu\lambda_j}u^{H_0 -\frac{3}{2}}du \bigg). \end{align*} The integral inside the product is an improper Riemann integral.
After the change of variables $v=u|\lambda_j|$, we get \begin{align*} \mathcal{F}A(&\lambda_1,\ldots, \lambda_{m+n})\\ & = 2\pi \mathcal{F}\phi(\lambda_1+ \ldots+ \lambda_m, \lambda_{m+1} + \ldots + \lambda_{m+n})\\ &\qquad\qquad\qquad\times \prod_{j=1}^{m+n} \bigg(|\lambda_j|^{\frac{1}{2}-H_0}\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{-iu\text{sign} \lambda_j}u^{H_0 -\frac{3}{2}}du \bigg)\\ &= 2\pi \mathcal{F}\phi(\lambda_1+ \ldots+ \lambda_m, \lambda_{m+1} + \ldots + \lambda_{m+n}) \\ &\qquad\qquad\qquad\times\prod_{j=1}^{m+n} \bigg(|\lambda_j|^{\frac{1}{2}-H_0}\frac{1}{\sqrt{2\pi}}\Gamma(H_0 -\frac{1}{2})C(\lambda_j)\bigg), \end{align*} where $C(\lambda) = e^{-i\frac{\pi}{2}(H_0 - \frac{1}{2})}$ for $\lambda > 0$, $C(-\lambda) =\overline{C(\lambda)}$, and thus $|C(\lambda)|=1$ for all $\lambda \ne 0$; see the appendix for the detailed computations. Applying Lemma \ref{Lemma6.1}, noticing that $C(\lambda_j)W(d\lambda_j) \overset{(d)}{=} W(d\lambda_j)$ (see \cite[Proposition 4.2]{Dobrushin}), and symmetrizing the Fourier transform of $A$ leads to the desired conclusion. \end{proof} \subsection{Stochastic representations of the Rosenblatt process} Let $(R^H(t))_{t \geq 0}$ be the Rosenblatt process of parameter $H \in (\frac{1}{2}, 1)$. The time representation of $R^H$ is \begin{align*} R^H(t) &= a_1(D)\int_{\mathbb{R}^2}\bigg(\int_0^t (s-\xi_1)_+^{D-\frac{3}{2}}(s-\xi_2)_+^{D-\frac{3}{2}}ds\bigg)dB(\xi_1)dB(\xi_2)\\ &=A_1(H)\int_{\mathbb{R}^2}\bigg(\int_0^t (s-\xi_1)_+^{\frac{H}{2}-1}(s-\xi_2)_+^{\frac{H}{2}-1}ds\bigg)dB(\xi_1)dB(\xi_2), \end{align*} where $D = \frac{H+1}{2}$ and $$a_1(D):= \frac{\sqrt{(D-1/2)(4D-3)}}{\beta(D-1/2, 2-2D)} = \frac{\sqrt{(H/2)(2H-1)}}{\beta(H/2, 1-H)}=:A_1(H).$$ Observe also that $1/2 < H<1 \Longleftrightarrow 3/4 < D < 1$. The corresponding spectral representation of this process (see for instance \cite{Taqqu, Taqqu2}, or apply Lemma \ref{Lemma2}) is given by \begin{align*} R^H(t) &= a_2(D)\int_{\mathbb{R}^2}|\lambda_1|^{\frac{1}{2} - D}|\lambda_2|^{\frac{1}{2} - D}\frac{e^{i(\lambda_1+\lambda_2)t}-1}{i(\lambda_1+\lambda_2)}W(d\lambda_1)W(d\lambda_2)\\ &=A_2(H)\int_{\mathbb{R}^2}|\lambda_1|^{-\frac{H}{2}}|\lambda_2|^{-\frac{H}{2}}\frac{e^{i(\lambda_1+\lambda_2)t}-1}{i(\lambda_1+\lambda_2)}W(d\lambda_1)W(d\lambda_2), \end{align*} where $$a_2(D):= \sqrt{\frac{(2D-1)(4D-3)}{2[2\Gamma(2-2D)\sin (\pi (D-1/2))]^2}} = \sqrt{\frac{H(2H-1)}{2[2\Gamma(1-H)\sin (H\pi/2)]^2}}=:A_2(H).$$ \section{Proof of Theorem \ref{Theorem1}} We are now in a position to give the proof of our Theorem \ref{Theorem1}. It is divided into four steps. \subsection{Chaotic decomposition} Using (\ref{eq:13}), we can write $X^{(q, H)}$ as a $q$-th Wiener-It\^{o} integral with respect to the standard two-sided Brownian motion $(B_t)_{t \in \mathbb{R}}$ as follows: \begin{equation}\label{eq:5} X^{(q, H)}_t = \int_{\mathbb{R}^q} L(x, t)(\xi_1,\ldots, \xi_q)dB(\xi_1)\ldots dB(\xi_q) = I_q^B(L(x, t)), \end{equation} where \begin{equation}\label{eq:6} L(x, t)(\xi_1,\ldots, \xi_q): = c(H, q) \int_{\mathbb{R}}\mathbf{1}_{[0, t]}(z) x(t -z) \prod_{j=1}^q (z - \xi_j)_+^{H_0 - \frac{3}{2}}dz, \end{equation} with $c(H, q)$ and $H_0$ given by (\ref{eq:H2}). Applying the product formula (\ref{eq:P1}) for multiple Wiener-It\^{o} integrals, we easily obtain that \begin{equation}\label{eq:dola2} (X_t^{(q, H)})^2 - E[(X_t^{(q, H)})^2] = \sum_{r=0}^{q-1}r!\binom{q}{r}^2I_{2q-2r}^B(L(x, t) \widetilde{\otimes}_r L(x, t)).
\end{equation} {\allowdisplaybreaks Let us compute the contractions appearing on the right-hand side of (\ref{eq:dola2}). For every $ 0 \leq r \leq q-1$, using Fubini's theorem we first have \begin{align*} L(x,& s) \otimes_r L(x, s) (\xi_1, \ldots, \xi_{2q-2r})\\ & = \int_{\mathbb{R}^r}dy_1 \ldots dy_r L(x, s)(\xi_1, \ldots, \xi_{q-r}, y_1, \ldots, y_r)L(x, s)(\xi_{q-r+1}, \ldots, \xi_{2q-2r}, y_1, \ldots, y_r)\\ &= c(H, q)^2 \int_{\mathbb{R}^r}dy_1 \ldots dy_r \int_0^s dz_1x(s-z_1) \prod_{j=1}^{q-r}(z_1 - \xi_j)_+^{H_0 - \frac{3}{2}}\prod_{i=1}^r(z_1 - y_i)_+^{H_0 - \frac{3}{2}}\\ & \qquad\qquad\quad\qquad\qquad\times \int_0^s dz_2x(s-z_2) \prod_{j=q-r+1}^{2q-2r}(z_2 - \xi_j)_+^{H_0 - \frac{3}{2}}\prod_{i=1}^r(z_2 - y_i)_+^{H_0 - \frac{3}{2}}\\ & = c(H, q)^2 \int_{[0, s]^2}dz_1dz_2 x(s-z_1) x(s-z_2) \prod_{j=1}^{q-r}(z_1 - \xi_j)_+^{H_0 - \frac{3}{2}}\prod_{j=q-r+1}^{2q-2r}(z_2 - \xi_j)_+^{H_0 - \frac{3}{2}}\\ &\qquad\qquad\qquad\qquad\quad\times \bigg(\int_{\mathbb{R}}dy (z_1 - y)_+^{H_0 - \frac{3}{2}} (z_2 - y)_+^{H_0 - \frac{3}{2}}\bigg)^r, \end{align*} } and, since for any $z_1, z_2 \geq 0$, \begin{equation}\label{eq:2} \int_{\mathbb{R}}(z_1-y)_+^{H_0 - \frac{3}{2}}(z_2-y)_+^{H_0 -\frac{3}{2}}dy = \beta\Big(H_0 - \frac{1}{2}, 2-2H_0\Big)|z_1-z_2|^{2H_0-2}, \end{equation} we end up with the following expression: \begin{align}\label{1} &L(x, s) \otimes_r L(x, s) (\xi_1, \ldots, \xi_{2q-2r}) \nonumber \\ & = c(H, q)^2\beta\Big(H_0 - \frac{1}{2}, 2-2H_0\Big)^r\int_{[0, s]^2}dz_1dz_2 x(s-z_1) x(s-z_2)|z_1 -z_2|^{(2H_0-2)r} \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times \prod_{j=1}^{q-r}(z_1 - \xi_j)_+^{H_0 - \frac{3}{2}}\prod_{j=q-r+1}^{2q-2r}(z_2 - \xi_j)_+^{H_0 - \frac{3}{2}}. \end{align} Recall $G_T^{(q, H)}$ from (\ref{eq:Gt}). As a consequence, we can write \begin{equation}\label{eq:8} G_T^{(q, H)} (t)= F_{2q, T}(t) + c_{2q-2}F_{2q-2, T}(t) + \ldots + c_4F_{4, T}(t) + c_2F_{2, T}(t) \end{equation} where $c_{2q-2r}:= r!\binom{q}{r}^2$ and, for $ 0 \leq r \leq q-1$, \begin{equation}\label{eq:7} F_{2q-2r, T}(t): = \frac{1}{T^{2H_0 -1}}\int_0^{Tt} I_{2q-2r}^B(L(x,s) \widetilde{\otimes}_r L(x, s))ds, \end{equation} where the kernels of the Wiener integrals above are given explicitly in (\ref{1}). \subsection{Spectral representations} Recall the expression of the contractions $L(x, s) \otimes_r L(x, s), 0\leq r \leq q-1$ given in (\ref{1}). Set \begin{align*} \phi_r(s, z_1, z_2) :=&c(H, q)^2\beta\Big(H_0 - \frac{1}{2}, 2-2H_0\Big)^r\\ &\times \mathbf{1}_{[0, s]}(z_1)\mathbf{1}_{[0, s]}(z_2)x(s-z_1)x(s-z_2) |z_1-z_2|^{(2H_0-2)r}. \end{align*} It is symmetric with respect to $z_1$ and $z_2$. Furthermore, by H\"{o}lder's inequality, we have \begin{align*} &\int_{\mathbb{R}^2}\Big|\mathbf{1}_{[0, s]}(z_1)\mathbf{1}_{[0, s]}(z_2)x(s-z_1)x(s-z_2) |z_1-z_2|^{(2H_0-2)r}\Big|dz_1dz_2\\ &\leq \int_{[0, s]^2} |x(s-z_1)| |x(s-z_2)| |z_1-z_2|^{(2H_0-2)r}dz_1dz_2 \\ &= \int_{[0, s]^2} |x(z_1)| |x(z_2)| |z_1-z_2|^{r\frac{(2H-2)}{q}}dz_1dz_2\\ &\leq \bigg(\int_{[0, \infty)^2}|x(z_1)||x(z_2)||z_1 -z_2|^{2H-2}dz_1dz_2 \bigg)^{\frac{r}{q}}\bigg(\int_0^\infty |x(z)|dz\bigg)^{2(1-\frac{r}{q})}. \end{align*} Using the integrability of $x$ together with the assumption (\ref{eq:1}), it turns out that, for every $s \geq 0$, $\phi_r(s, \cdot, \cdot)$ is integrable on $\mathbb{R}^2_+$.
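For the reader's convenience, we also sketch why the beta identity (\ref{eq:2}) used above holds; this is a classical computation, valid since $\frac{1}{2} < H_0 < 1$. Assume without loss of generality that $z_1 > z_2$. The substitution $y = z_2 - (z_1-z_2)v$, $v \in (0, \infty)$, gives \begin{align*} \int_{\mathbb{R}}(z_1-y)_+^{H_0 - \frac{3}{2}}(z_2-y)_+^{H_0 - \frac{3}{2}}dy &= (z_1-z_2)^{2H_0-2}\int_0^\infty v^{H_0 - \frac{3}{2}}(1+v)^{H_0 - \frac{3}{2}}dv\\ &= (z_1-z_2)^{2H_0-2}\,\beta\Big(H_0 - \frac{1}{2}, 2-2H_0\Big), \end{align*} the last equality being the integral representation $\int_0^\infty v^{x-1}(1+v)^{-x-y}dv = \beta(x, y)$ applied with $x = H_0 - \frac{1}{2}$ and $y = 2-2H_0$.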
Applying Lemma \ref{Lemma2} with $m=n=q-r$, we get \begin{align*} F_{2q-2r, T}(t) &= \frac{1}{T^{2H_0 - 1}}\int_0^{Tt} I_{2q-2r}^B(L(x, s) \widetilde{\otimes}_r L(x, s))ds \nonumber\\ & \overset{(d)}{=} A_r(H, q) \frac{1}{T^{2H_0 -1}} \int_{\mathbb{R}^{2q-2r}}W(d\lambda_1)\ldots W(d\lambda_{2q-2r}) \prod_{j=1}^{2q-2r}|\lambda_j|^{\frac{1}{2}-H_0} \nonumber\\ &\times \frac{1}{(2q-2r)!}\sum_{\sigma \in \mathfrak{S}_{2q-2r}}\int_0^{Tt}ds \int_{[0, s]^2}d\xi_1d\xi_2 x(s-\xi_1) x(s-\xi_2)|\xi_1 -\xi_2|^{(2H_0-2)r} \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times e^{i(\lambda_{\sigma(1)} + \ldots+ \lambda_{\sigma(q-r)})\xi_1}e^{i(\lambda_{\sigma(q-r+1)} + \ldots + \lambda_{\sigma(2q-2r)})\xi_2}, \end{align*} where \begin{equation}\label{eq:Ar} A_r(H,q) := c(H, q)^2 \beta(H_0 - \frac{1}{2}, 2- 2H_0)^r \bigg( \frac{\Gamma(H_0 - \frac{1}{2})}{\sqrt{2\pi}}\bigg)^{2q-2r}. \end{equation} The change of variable $s = Ts'$ yields \begin{align*} F_{2q-2r, T}(t) & \overset{(d)}{=} A_r(H, q)T^{2-2H_0} \int_{\mathbb{R}^{2q-2r}}W(d\lambda_1)\ldots W(d\lambda_{2q-2r}) \prod_{j=1}^{2q-2r}|\lambda_j|^{\frac{1}{2}-H_0}\nonumber\\ &\times \frac{1}{(2q-2r)!}\sum_{\sigma \in \mathfrak{S}_{2q-2r}}\int_0^{t}ds \int_{[0, Ts]^2}d\xi_1d\xi_2 x(Ts-\xi_1) x(Ts-\xi_2)|\xi_1 -\xi_2|^{(2H_0-2)r} \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times e^{i(\lambda_{\sigma(1)} + \ldots+ \lambda_{\sigma(q-r)})\xi_1}e^{i(\lambda_{\sigma(q-r+1)} + \ldots + \lambda_{\sigma(2q-2r)})\xi_2}. \end{align*} Let us do a further change of variables: $\lambda'_{\sigma(j)} = T\lambda_{\sigma(j)}, j = 1, \ldots, 2q-2r $ and $\xi'_k = Ts - \xi_k, k=1,2$. Thanks to the self-similarity of $W$ with index $1/2$ (that is, $W(T^{-1}d\lambda)$ has the same law as $T^{-1/2}W(d\lambda)$) we finally obtain that \begin{align}\label{4} F_{2q-2r, T}(t) & \overset{(d)}{=} A_r(H, q)T^{-(2-2H_0)(q-1-r)} \nonumber \\ & \times\int_{\mathbb{R}^{2q-2r}}W(d\lambda_1)\ldots W(d\lambda_{2q-2r}) \prod_{j=1}^{2q-2r}|\lambda_j|^{\frac{1}{2}-H_0}\int_0^{t}ds e^{i(\lambda_1 + \ldots + \lambda_{2q-2r})s}\nonumber\\ &\times \frac{1}{(2q-2r)!}\sum_{\sigma \in \mathfrak{S}_{2q-2r}} \int_{[0, Ts]^2}d\xi_1d\xi_2 x(\xi_1) x(\xi_2)|\xi_1 -\xi_2|^{(2H_0-2)r} \nonumber\\ &\qquad\qquad\qquad\qquad\quad\times e^{-i (\lambda_{\sigma(1)} + \ldots+ \lambda_{\sigma(q-r)})\frac{\xi_1}{T}}e^{-i (\lambda_{\sigma(q-r+1)} + \ldots + \lambda_{\sigma(2q-2r)})\frac{\xi_2}{T}}. \end{align} \subsection{Reduction lemma} \begin{lemma}\label{Reduction} Fix $t$, fix $H \in (\frac{1}{2}, 1)$ and fix $q \geq 2$. Assume (\ref{eq:1}) and the integrability of the kernel $x$. Then for any $r \in \{0, \ldots, q-2 \}$, one has $$\lim_{T \to \infty} E[F_{2q-2r, T}(t)^2]= 0.$$ \end{lemma} \begin{proof} Without loss of generality, we may and will assume that $t=1$. From the spectral representation of multiple Wiener-It\^{o} integrals (\ref{4}), one has \eject \begin{align*} &E[F_{2q-2r, T}(1)^2] \\ &= T^{-2(2-2H_0)(q-1-r)} A_r^2(H, q)(2q-2r)!\int_{\mathbb{R}^{2q-2r}} d\lambda_1 \ldots d\lambda_{2q-2r} \prod_{j=1}^{2q-2r}|\lambda_j|^{1-2H_0}\\ &\times \bigg( \frac{1}{(2q-2r)!}\sum_{\sigma \in \mathfrak{S}_{2q-2r}}\int_0^{1}ds e^{i(\lambda_1 + \ldots + \lambda_{2q-2r})s} \int_{[0, Ts]^2}d\xi_1d\xi_2 x(\xi_1) x(\xi_2)|\xi_1 -\xi_2|^{(2H_0-2)r} \\ &\hspace{6cm}\times e^{-i (\lambda_{\sigma(1)} + \ldots+ \lambda_{\sigma(q-r)})\frac{\xi_1}{T}}e^{-i (\lambda_{\sigma(q-r+1)} + \ldots + \lambda_{\sigma(2q-2r)})\frac{\xi_2}{T}} \bigg)^2. 
\end{align*} Since $x$ is a real-valued integrable function on $[0, \infty)$ satisfying assumption (\ref{eq:1}), we deduce from Lebesgue dominated convergence that, as $T \to \infty$, \begin{align*} & \frac{1}{(2q-2r)!}\sum_{\sigma \in \mathfrak{S}_{2q-2r}}\int_0^{1}ds e^{i(\lambda_1 + \ldots + \lambda_{2q-2r})s} \int_{[0, Ts]^2}d\xi_1d\xi_2 x(\xi_1) x(\xi_2)|\xi_1 -\xi_2|^{(2H_0-2)r} \\ &\hspace{6cm} \times e^{-i (\lambda_{\sigma(1)} + \ldots+ \lambda_{\sigma(q-r)})\frac{\xi_1}{T}}e^{-i (\lambda_{\sigma(q-r+1)} + \ldots + \lambda_{\sigma(2q-2r)})\frac{\xi_2}{T}}\\ &\longrightarrow \int_{[0, \infty)^2}x(u)x(v)|u-v|^{(2H_0-2)r}dudv \int_0^1 e^{i(\lambda_1+\ldots + \lambda_{2q-2r})s}ds. \end{align*} Since $1- \frac{1}{2q} < H_0 < 1$ and $0 \leq r \leq q-2$, we have $ T^{-2(2-2H_0)(q-1-r)} \to 0 $ as $T \to \infty$. Moreover, $\int_0^1 e^{i(\lambda_1+\ldots + \lambda_{2q-2r})\xi}d\xi = \frac{e^{i(\lambda_1+\ldots + \lambda_{2q-2r})} -1}{i(\lambda_1 + \ldots + \lambda_{2q-2r})}$ and $$\bigg|\frac{e^{i(\lambda_1+\ldots + \lambda_{2q-2r})} -1}{i(\lambda_1 + \ldots + \lambda_{2q-2r})}\bigg|^2 \leq \min\bigg\{1,\ \frac{4}{(\lambda_1 + \ldots + \lambda_{2q-2r})^2}\bigg\},$$ so the integrand of the $d\lambda$-integral above is bounded near the origin by $\prod_{j=1}^{2q-2r}|\lambda_j|^{1-2H_0}$, which is integrable there because $1-2H_0 > -1$, and at infinity by $\frac{4}{(\lambda_1 + \ldots + \lambda_{2q-2r})^2}\prod_{j=1}^{2q-2r}|\lambda_j|^{1-2H_0}$, which is integrable there as well. We thus have $$\int_{\mathbb{R}^{2q-2r}} d\lambda_1 \ldots d\lambda_{2q-2r} \prod_{j=1}^{2q-2r}|\lambda_j|^{1-2H_0} \bigg|\frac{e^{i(\lambda_1+\ldots + \lambda_{2q-2r})} -1}{i(\lambda_1 + \ldots + \lambda_{2q-2r})}\bigg|^2 < \infty.$$ All these facts taken together imply \begin{equation}\label{eq:10} E[F_{2q-2r, T}(1)^2] \longrightarrow 0, \text{ as } T \to \infty, \text{ for all } 0 \leq r \leq q-2, \end{equation} which proves the lemma. \end{proof} \subsection{Concluding the proof of Theorem \ref{Theorem1}} Thanks to Lemma \ref{Reduction}, we only need to study the convergence of the term $F_{2, T}$ (belonging to the second Wiener chaos) corresponding to $r=q-1$. Recall from (\ref{4}) that $F_{2, T}(t)$ has the same law as the double Wiener integral with symmetric kernel given by \begin{align}\label{eq:15} f_T(t, \lambda_1, &\lambda_2):= A_{q-1}(H,q) |\lambda_1|^{\frac{1}{2} - H_0}|\lambda_2|^{\frac{1}{2} - H_0}\int_0^t ds e^{i(\lambda_1 +\lambda_2)s} \nonumber\\ &\qquad\quad \times \int_{[0, Ts]^2}d\xi_1d\xi_2 e^{-i(\lambda_1\frac{\xi_1}{T} + \lambda_2\frac{\xi_2}{T})}x(\xi_1)x(\xi_2)|\xi_1 - \xi_2|^{(q-1)(2H_0 - 2)}. \end{align} Observe that $f_T(t, .)$ is symmetric, so there is no need for symmetrization. By the isometry property of multiple Wiener-It\^{o} integrals with respect to the random spectral measure, in order to prove the $L^2(\Omega)$-convergence of $c_2F_{2, T}$ to $bR^{H'}$, we can equivalently prove that $c_2f_T(t, .)$ converges in $L^2(\mathbb{R}^2)$ to the kernel of $bR^{H'}(t)$.
First, by Lebesgue dominated convergence, as $T \to \infty$, we have \begin{align*} f_T(t, \lambda_1, \lambda_2)& \longrightarrow A_{q-1}(H, q) \int_{\mathbb{R}^2}x(u)x(v)|u-v|^{(q-1)(2H_0 - 2)}dudv\\ &\qquad\qquad\qquad\qquad\qquad\times |\lambda_1|^{\frac{1}{2} - H_0}|\lambda_2|^{\frac{1}{2} - H_0} \frac{e^{i(\lambda_1 + \lambda_2)t} -1}{i(\lambda_1 + \lambda_2)}. \end{align*} This shows that $f_T(t, .)$ converges pointwise to the kernel of $R^{H'}(t)$, up to some constant. Moreover, for all $0 < S <T $, \begin{align*} &\|f_T(t, .) - f_S(t, .)\|_{L^2(\mathbb{R}^2)}^2\\ & = A_{q-1}^2(H, q) \int_{\mathbb{R}^2}d\lambda_1 d\lambda_2 |\lambda_1|^{1 - 2H_0}|\lambda_2|^{1 - 2H_0}\\ &\quad \times \bigg( \int_0^t ds e^{i(\lambda_1 +\lambda_2)s}\int_{[0, Ts]^2 \setminus [0, Ss]^2}d\xi_1d\xi_2 e^{-i(\lambda_1\frac{\xi_1}{T} + \lambda_2\frac{\xi_2}{T})}x(\xi_1)x(\xi_2)|\xi_1 - \xi_2|^{(q-1)(2H_0 - 2)}\bigg)^2. \end{align*} By Lebesgue dominated convergence, it follows that $\|f_T(t, .) - f_S(t, .)\|_{L^2(\mathbb{R}^2)}^2 \longrightarrow 0$ as $T, S \to \infty$. Hence $(f_T(t, .))_{T \geq 0}$ is a Cauchy sequence in $L^2(\mathbb{R}^2)$. Consequently, the multiple Wiener integral $c_2F_{2, T}$ (with kernel (\ref{eq:15})) converges in $L^2(\Omega)$ to $b(H, q) \times R^{H'}$ with the explicit constant $b(H, q)$ as in (\ref{eq:19}). (Note that $c_2 =q!$.) The finite-dimensional convergence then follows from (\ref{4}). The proof of Theorem \ref{Theorem1} is complete. \qed \section{Proof of Theorem \ref{Theorem2}} We follow the same route as for the proof of Theorem \ref{Theorem1}, with some slight modifications. Here, the chaos decomposition of $G_T^{(1, H)}$ contains only the term $F_{2, T}$ obtained for $q=1$ and $ r = 0$. Its spectral representation is as follows: \begin{align*} F_{2, T}(t)&= \frac{H(2H-1)}{\beta(H-\frac{1}{2}, 2-2H)}\frac{\Gamma^2(H-\frac{1}{2})}{2\pi} \int_{\mathbb{R}^2}W(d\lambda_1)W(d\lambda_2) |\lambda_1|^{\frac{1}{2} - H}|\lambda_2|^{\frac{1}{2} - H}\nonumber\\ &\qquad\qquad\qquad \times \int_0^t ds e^{i(\lambda_1 +\lambda_2)s} \int_{[0, Ts]^2}d\xi_1d\xi_2 e^{-i(\lambda_1\frac{\xi_1}{T} + \lambda_2\frac{\xi_2}{T})}x(\xi_1)x(\xi_2). \end{align*} It is easily seen that $F_{2, T}$ is well-defined if and only if $3/4 < H <1$. The same arguments as in the proof of Theorem \ref{Theorem1} yield \begin{equation}\label{eq:d} G_T^{(1, H)}(t) = F_{2, T}(t) \longrightarrow \frac{H(2H-1)}{\sqrt{(H-1/2)(4H-3)}}\bigg(\int_0^\infty x(u)du\bigg)^2 \times R^{H''}(t) \end{equation} in $L^2(\Omega)$ as $T \to \infty$, thus completing the proof of the theorem. \qed \section*{Acknowledgements} I would like to sincerely thank my supervisor Ivan Nourdin, who led the way on this work. I greatly appreciated his advice and encouragement during my first research work. I also warmly thank Frederi Viens for interesting discussions and several helpful comments about this work. I would also like to thank my friend, Nguyen Van Hoang, for his help in proving the identity about $I$ in the appendix. Finally, I deeply thank an anonymous referee for a very careful and thorough reading of this work, and for her/his constructive remarks.
\section{Introduction} The data-parallel synchronous stochastic gradient descent (S-SGD) method is commonly used as the optimizer to train large-scale deep neural networks (DNNs) \cite{dean2012large}\cite{goyal2017accurate}. In S-SGD, the computing tasks for each mini-batch of training data are distributed to a cluster of computing nodes, and the individual results (e.g., gradients) are aggregated to update the global network model before the next iteration begins. However, with more computing nodes and the fast-growing computing power of hardware accelerators, the data communication between computing nodes gradually becomes the performance bottleneck \cite{watcharapichat2016ako}\cite{cui2016geeps}\cite{shi2018adag}\cite{wang2019impact}. For example, the computing power of Nvidia GPUs has increased by 30x in the last 10 years, whilst it took about 15 years for the network speed to improve from 10Gbps to 100Gbps. Hence it becomes a critical issue to address the imbalance between computing and communication. Some recent works try to reduce the impact of data communication at either algorithmic or system level. On one hand, gradients could be quantized or sparsified \cite{alistarh2017qsgd}\cite{lin2018deep}\cite{wen2017terngrad}\cite{shi2019adistributed}\cite{shi2019ijcai} in order to reduce the amount of data to be exchanged so that the communication time could be reduced. But these methods usually sacrifice the training convergence speed. On the other hand, the high-performance computing (HPC) community has proposed several methods to improve the communication performance of the cluster by optimizing the hardware or communication software library \cite{potluri2013efficient}\cite{chen2019roundrobin}. In terms of hardware, InfiniBand (IB) and Omni-Path networks can provide much higher communication bandwidth and lower latency, and are deployed to shorten the performance gap between communication and computation \cite{bayatpour2017scalable}. Regarding the software, the implementation of message passing interface (MPI) has been optimized to support more efficient communication in DNN training \cite{bayatpour2017scalable}\cite{awan2017s}. Nvidia's NCCL\footnote{\url{https://developer.nvidia.com/nccl}} is another highly optimized communication library for deep learning (DL) frameworks on multi-GPU settings. The scaling efficiency of distributed DL systems can be modeled as a function of the communication-to-computation ratio \cite{wen2017terngrad}. For example, training ResNet-50 \cite{he2016deep} requires about 7.8 billion floating point operations in computation, while it needs to all-reduce 102 MB of data in one iteration. Higher communication-to-computation ratio results in lower scaling efficiency.
The layered structure of DNNs makes it possible to overlap the communication and computation during the backward propagation \cite{awan2017s}\cite{zhang2017poseidon}\cite{shi2018performance}, which is known as wait-free backpropagation (WFBP). WFBP begins to exchange the gradients of a layer immediately after they have been calculated; so if the data communication time of a layer is shorter than the computation time of the gradients of its previous layer, then this communication cost can be fully hidden. However, if very fast hardware accelerators are used while the network speed is relatively slow (i.e., a high communication-to-computation ratio), there can exist many layers whose communication time is longer than the corresponding computation time. In such cases, it becomes important to optimize the communications. We observe that the layer-wise gradient communication in WFBP is suboptimal because all-reducing a small amount of data cannot fully utilize the network bandwidth in current network topologies, owing to the startup time of message transmission (i.e., the transmission latency). For example, on our 10GbE platform, all-reducing a set of 200 KB vectors across 8 nodes using MPI requires about 1.5 ms, while all-reducing a set of 400 KB vectors only requires 1.8 ms, which means that if we merge the two sets of 200 KB vectors into a single set of 400 KB vectors, then the total communication time can be reduced from 3 ms to 1.8 ms. The same phenomenon can also be found in RDMA-based networks \cite{handley2017re}\cite{guo2016rdma}. You et al. \cite{you2017scaling} have also noticed this problem, and proposed a single-layer communication (SyncEASGD) method which merges the gradients of different layers into a single tensor and then transfers it only once per iteration. Compared to the layer-wise communication in WFBP, it can eliminate most of the startup time of data communications. But in their proposed method, gradient communication can only start after the backward propagation has finished; thus it misses the opportunity of overlapping the communication with computation. We argue that minimizing the training time requires considering not only how to overlap communication with computation, but also how to improve the communication efficiency by avoiding transmitting small messages. According to the taxonomy of efficient distributed DL~\cite{tang2020communication,shi2020quantitative}, our proposed method is a scheduling solution to improve the scalability of distributed training. In this paper, we first formulate the communication scheduling problem in S-SGD as an optimization problem that aims to minimize the total training time of an iteration. We then propose a merged-gradient wait-free backward propagation (MG-WFBP) method and prove its optimality. The time complexity of MG-WFBP is $O(L^2)$, where $L$ is the number of layers (or tensors) in the DNN, and it only needs to be executed once before the whole training process. We implement MG-WFBP atop the popular DL frameworks Caffe \cite{jia2014caffe} and PyTorch\footnote{\url{https://pytorch.org}} \cite{pytorch2019}, and make it publicly available\footnote{https://github.com/HKBU-HPML/MG-WFBP}. To validate the effectiveness of our proposed MG-WFBP, we evaluate its performance using various DNNs on multi-GPU settings with both 10Gbps Ethernet (10GbE) and 56Gbps InfiniBand (56GbIB) interconnects.
On the relatively slow Nvidia Tesla K80 GPU clusters with 10GbE, MG-WFBP achieves about $1.2$x to $1.36$x improvement over the state-of-the-art communication algorithms WFBP and SyncEASGD, respectively. On the latest Nvidia Tesla V100 GPU clusters with 10GbE or 56GbIB, MG-WFBP is on average 18.8\% faster than WFBP and SyncEASGD in terms of end-to-end training time. To investigate its performance on large clusters, we resort to trace-based simulation (due to limited hardware resources) on 4-worker to 2048-worker clusters. In the 64-worker simulation, the results show that MG-WFBP achieves more than $1.7$x and $1.3$x speedups over WFBP and SyncEASGD, respectively. This paper is an extension of our previous conference publication \cite{shi2019mgwfbp}, and we make the following new contributions. \begin{itemize} \item We provide a complete proof of the optimality of MG-WFBP. \item We implement MG-WFBP on PyTorch and also make it open-source. \item We conduct extensive experiments on two Nvidia V100 GPU clusters with 10Gbps Ethernet and 56Gbps InfiniBand interconnects using six DNNs. \item We verify that MG-WFBP is also robust to mixed-precision training, which is widely used in the latest Nvidia GPUs and Google TPUs. \end{itemize} The rest of the paper is organized as follows. We present the preliminaries in Section \ref{s:pre}, followed by the formulation of the existing problem in Section \ref{s:profor}. We derive an optimal solution to the problem and then present our MG-WFBP algorithm in Section \ref{s:method}. The system implementation atop PyTorch is presented in Section \ref{s:system}. Section \ref{s:eval} presents the experimental studies on the proposed method compared to existing methods. Section \ref{s:relatedwork} introduces the related work. We discuss some limitations and possible directions in Section~\ref{s:discission}, and finally we conclude this paper in Section \ref{s:conclusion}. \section{Preliminaries}\label{s:pre} For ease of presentation, we summarize the frequently used mathematical notations in Table \ref{table:notation}. \begin{table}[!ht] \centering \caption{Frequently used notations} \label{table:notation} \begin{tabular}{|l|l|} \hline Name & Description \\\cline{1-2} \hline \hline $N$ & The number of computing nodes in the cluster. \\ $\alpha$ & Latency (startup time) of the network between two nodes. \\ $\beta$ & Transmission time per byte between two nodes. \\ $\gamma$ & Summation time of two floating point numbers in one node. \\ $a$ & Latency (startup time) of all-reduce.\\ $b$ & Transmission and computation time per byte of all-reduce. \\ $M$ & The size of a message in bytes. \\\cline{1-2} $W$ & Weights of the DNN.
\\ $D_i^g$ & The input data size for the $g^{th}$ node at the $i^{th}$ mini-batch.\\\cline{1-2} $L$ & The number of learnable layers (or tensors) of a DNN.\\ $p^{(l)}$ & The number of parameters in the learnable layer $l$.\\ $t_{iter}$ & Time of one training iteration with one batch of data.\\ $t_{f}$ & Time of the forward pass in each iteration.\\ $t_{b}$ & Time of the backward propagation in each iteration.\\ $t_{u}$ & Time of the model update in each iteration.\\ $t_{b}^{(l)}$ & Time of the backward propagation of layer $l$ in each iteration.\\ $\tau_{b}^{(l)}$ & The timestamp when layer $l$ begins to calculate gradients.\\ $\tau_{c}^{(l)}$ & The timestamp when layer $l$ begins to communicate gradients.\\ $t_{c}$ & Time of gradient aggregation in each iteration.\\ $t_{c}^{(l)}$ & Time of gradient aggregation of layer $l$ in each iteration.\\ $t_{c}^{no}$ & The non-overlapped communication cost in each iteration.\\ \hline \end{tabular} \end{table} \subsection{Mini-batch SGD} Consider an $L$-layer DNN with a loss function $\mathcal{L}(W,D)$ which defines the difference between the prediction values and the ground truth over the training data set $D$, where $W$ is the set of model weights. To minimize the loss function, the mini-batch SGD updates the parameters iteratively. Typically, the $i^{th}$ iteration of the training includes four steps: 1) A mini-batch of data $D_i$ ($D_i\subset D$) is read as inputs of the DNN. 2) $D_i$ is fed forward across the neural network from layer $1$ to layer $L$ to compute the prediction values, and finally the loss function $\mathcal{L}(W,D)$ is computed. 3) The first order gradients w.r.t. parameters and inputs are calculated and backpropagated from layer $L$ to layer $1$. 4) Finally, the parameters are updated with the layer-wise gradients. The training is terminated when some stopping criteria are satisfied. The update of $W$ can be formulated as follows: \begin{equation} W_{i+1}=W_{i}-\eta\cdot\nabla\mathcal{L}(W_{i},D_{i}), \end{equation} where $\eta$ is the learning rate of SGD, $W_{i}$ denotes the weights at the $i^{th}$ iteration, and $\nabla\mathcal{L}(W_{i},D_{i})$ denotes the gradients. The time consumed in the training process is mainly in steps 2 and 3, because step 1 of the $i^{th}$ iteration can be scheduled to overlap with the $(i-1)^{th}$ iteration, and the time of step 4 is negligible. Therefore, we can simplify the timeline of SGD as a forward pass followed by a backward pass. The time of one iteration is represented by $t_{iter}=t_f+t_b$, where $t_f$ is the time of the forward pass, and $t_b$ is the time of the backward pass. \begin{figure}[!ht] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{ssgd.pdf} \caption{Naive S-SGD.} \end{subfigure} \begin{subfigure}{0.48\textwidth} \vspace{10pt} \includegraphics[width=\linewidth]{wfbp.pdf} \caption{WFBP S-SGD.} \end{subfigure} \begin{subfigure}{0.48\textwidth} \vspace{10pt} \includegraphics[width=\linewidth]{singlelayer.pdf} \caption{Single-layer S-SGD.} \end{subfigure} \caption{The timeline of the traditional S-SGD algorithms. (a) Naive S-SGD: Layer-wise gradient communications can only be started after all gradients have been calculated. (b) WFBP S-SGD (WFBP-SGD): Gradient communication of each layer begins immediately after the backward step of that layer. 
(c) SyncEASGD: All gradients are merged into a single layer to be communicated together.} \label{fig:tranditionalSGDs} \end{figure} \subsection{Synchronized SGD} For large-scale DNNs, the synchronized SGD (S-SGD) with data-parallelism is widely applied to train a model using multiple workers (say $N$ workers, indexed by $g$). Each worker takes a different mini-batch of data $D_{i}^{g}$ and forwards it by step 2), and then follows step 3) to calculate the gradients $\nabla\mathcal{L}(W_{i},D_{i}^{g})$. In this way, each worker has a copy of the model, while the gradients calculated by different workers are different since the input data are different. At the end of each iteration of a mini-batch, S-SGD needs to average the gradients from different workers, update the model with the averaged gradients, and synchronize the model among all workers. The weights update formula of S-SGD is: \begin{equation}\label{equ:ssgd} W_{i+1}=W_{i}-\eta\cdot\frac{1}{N}\sum_{g=1}^{N}\nabla\mathcal{L}(W_{i},D_{i}^{g}). \end{equation} The averaging operation of gradients across the cluster involves extra computation and communication overheads. As a consequence, it is not easy to achieve linear scaling in distributed SGD training. The timeline of the naive S-SGD (i.e., computation and communication are not overlapped) with communication overheads is illustrated in Fig. \ref{fig:tranditionalSGDs}(a). The naive S-SGD algorithm suffers from the waiting period of data communication for model synchronization at every iteration. In practice, the gradients of a layer are stored as a tensor; hence the averaging process can be implemented by many all-reduce operations, one per layer. This layer-wise communication introduces one startup overhead per communicated layer. The iteration time of the naive S-SGD can be estimated as \begin{equation}\label{equ:tssgd} t_{iter}=t_{f}+t_{b}+t_{c}, \end{equation} where $t_{b}=\sum_{l=1}^{L}t_{b}^{(l)}$ is the layer-wise backward propagation time and $t_{c}=\sum_{l=1}^{L}t_{c}^{(l)}$ is the layer-wise gradient aggregation time, which heavily relies on the communication performance. Considering S-SGD running on $N$ workers, we define the speedup of S-SGD compared to the vanilla single-worker SGD: \begin{equation} S(N)=\frac{N|D_i^{g}|/(t_f+t_b+t_c)}{|D_i^{g}|/(t_f+t_b)}=\frac{N}{1+\frac{t_c}{t_f+t_b}}, \end{equation} where $|D_i^{g}|$ is the number of training samples per worker at the $i^{th}$ iteration. Let $r=\frac{t_c}{t_f+t_b}$ denote the communication-to-computation ratio; then \begin{equation}\label{equ:speedupsync} S(N)=\frac{N}{1+r}. \end{equation} \subsection{WFBP-SGD} In WFBP S-SGD (WFBP-SGD), the gradient communication of layer $l$ ($l>1$) can be overlapped with the backward propagation of layer $l-1$. The timeline of WFBP-SGD is illustrated in Fig. \ref{fig:tranditionalSGDs}(b). For simplicity, we assume that the start timestamp of the forward pass is $0$, and the start timestamp of the backward pass is $\tau_b^{(L)} = t_f$. Then the timestamp when layer $l$ begins to calculate the gradients, denoted by $\tau_b^{(l)}$, can be calculated by: \begin{equation}\label{equ:startcomp} \tau_b^{(l)}= \begin{cases} t_f & l=L\\ \tau_b^{(l+1)}+t_b^{(l+1)} & 1\leq l<L \end{cases}. \end{equation} Notice that the communication of gradients of layer $l$ ($l<L$) can only begin if the following two conditions are satisfied: (1) the gradients of layer $l$ have been calculated; (2) the communication of gradients of layer $l+1$ has finished.
So, the timestamp when layer $l$ begins the communication of gradients, denoted by $\tau_c^{(l)}$, can be calculated by: \begin{equation}\label{equ:startt} \tau_c^{(l)}= \begin{cases} \tau_b^{(l)}+t_b^{(l)} & l=L\\ \text{max}\{\tau_c^{(l+1)}+t_c^{(l+1)}, \tau_b^{(l)}+t_b^{(l)}\} & 1\leq l<L \end{cases}. \end{equation} The iteration time of WFBP-SGD can be calculated as \begin{equation}\label{equ:wfbpiter} \begin{split} t_{iter}&=t_f+t_b^{(L)}+(\tau_c^{(1)}-\tau_c^{(L)})+t_c^{(1)}\\ &=t_c^{(1)}+\text{max}\{\tau_c^{(2)}+t_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\}. \end{split} \end{equation} Since some communications are overlapped with the computation, the non-overlapped communication cost, $t_{c}^{no}$, becomes the bottleneck of the system. In WFBP-SGD, we redefine $r=\frac{t_c^{no}}{t_f+t_b}$. The main problem of WFBP-SGD is that when the communication cannot be fully overlapped by computation, i.e., $\tau_c^{(l+1)}+t_c^{(l+1)} >\tau_b^{(l)}+t_b^{(l)}$, $t_c^{no}$ limits the system scalability. \subsection{Single-Layer S-SGD} Layer-wise communications introduce many startup times; especially on large-scale clusters, these startup times can dominate the communication time, so that overlapping communications with computations may even lead to worse scaling efficiency. Therefore, You et al. \cite{you2017scaling} propose a single-layer communication mechanism (SyncEASGD) which merges all gradients to be communicated by a single all-reduce operation at the end of each iteration, as shown in Fig. \ref{fig:tranditionalSGDs}(c). The iteration time of SyncEASGD can be estimated as \begin{equation}\label{equ:synceaiter} t_{iter}=t_f+t_b+t_c, \end{equation} where $t_c$ is composed of the startup time and the transmission time. \subsection{Communication Model} In Eq. (\ref{equ:ssgd}), we use $\Delta W_i=\sum_{g=1}^{N}\nabla\mathcal{L}(W_{i},D_{i}^{g})$ to represent the aggregation of gradients from $N$ workers, which is an all-reduce operation\footnote{In this paper, we mainly discuss the scenario with the all-reduce collective, while our proposed method should also be applicable to the parameter server architecture.}. There are many optimized algorithms for the all-reduce operation with different numbers of processes and message sizes \cite{rabenseifner2004optimization}\cite{thakur2005optimization}\cite{hoefler2010toward}. To simplify the problem, we assume that the number of workers is a power of two, and the peer-to-peer communication cost is modeled as $\alpha+\beta M$ \cite{sarvotham2001connection}, where $\alpha$ is the latency component (also called start-up time), $\beta$ is the communication time per byte, and $M$ is the message size. Without loss of generality, we do not limit the communication model to one specific algorithm. Given $N$ workers, the time cost of all-reduce can be generalized as \begin{equation}\label{equ:tcomm} T_{ar}(M)=a+b\times M, \end{equation} where $a$ and $b$ are two constants that do not depend on $M$. Some well-optimized all-reduce algorithms are summarized in Table \ref{table:allreduce}.
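To give a concrete sense of the magnitudes involved, the following minimal Python sketch fits the two constants of Eq. (\ref{equ:tcomm}) from the two 10GbE measurements quoted in the introduction (all-reducing 200 KB in about 1.5 ms and 400 KB in about 1.8 ms across 8 nodes); the resulting $a$ and $b$ are illustrative values for that platform only, and the function names are ours:
\begin{verbatim}
# A minimal sketch of the all-reduce cost model T_ar(M) = a + b*M.
# The two (size, time) pairs are the illustrative 10GbE numbers
# quoted in the introduction; on another platform, a and b must
# be re-fitted from benchmarks.
KB, MS = 1024, 1e-3

def fit_cost_model(m1, t1, m2, t2):
    """Solve a + b*m1 = t1 and a + b*m2 = t2 for (a, b)."""
    b = (t2 - t1) / (m2 - m1)
    a = t1 - b * m1
    return a, b

a, b = fit_cost_model(200 * KB, 1.5 * MS, 400 * KB, 1.8 * MS)
t_ar = lambda M: a + b * M
# Merging two 200 KB tensors saves exactly one startup term:
# t_ar(200*KB) + t_ar(200*KB) - t_ar(400*KB) == a  (about 1.2 ms)
\end{verbatim}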
\begin{table}[!ht] \centering \caption{Cost of different all-reduce algorithms} \label{table:allreduce} \addtolength{\tabcolsep}{-2.2pt} \begin{tabular}{|l|c|c|} \hline All-reduce Algorithm & $a$ & $b$ \\\hline \hline Binary tree~\cite{rabenseifner2004optimization} & $2\alpha \log N$ & $(2\beta+\gamma)\log N$ \\\hline Recursive doubling~\cite{thakur2005optimization}& $\alpha \log N$ & $(\beta+\gamma)\log N$ \\\hline Recursive halving/doubling~\cite{thakur2005optimization}& $2\alpha \log N$ & $2\beta-\frac{1}{N}(2\beta+\gamma)+\gamma$ \\\hline Double binary trees~\cite{sanders2009two} & $2\alpha\log N$ & $\beta$+$\gamma$ \\\hline Ring~\cite{thakur2005optimization} & $2(N-1)\alpha$ & $\frac{2(N-1)}{N}\beta+\frac{(N-1)}{N}\gamma$ \\\hline \end{tabular} \end{table} With a given hardware configuration (i.e., $N, \alpha, \beta$, and $\gamma$ are fixed), the time cost of the all-reduce operation is a linear function of the message size $M$ with a y-intercept $a$ and a slope $b$. We empirically validate this linear model in Section 6.2. One important property of WFBP-SGD is that the messages are communicated layer by layer, which means that it needs to invoke many all-reduce operations. In each all-reduce operation, however, there is an extra cost of $a$ that is independent of $M$. Importantly, a linear function with a positive y-intercept has the property \begin{equation}\label{equ:pro} T_{ar}(M_{1})+T_{ar}(M_{2}) > T_{ar}(M_1+M_2). \end{equation} In other words, communicating a single message of size $M_1+M_2$ is more efficient than separately communicating one message of size $M_1$ and another of size $M_2$. \section{Problem Formulation}\label{s:profor} Eq. (\ref{equ:pro}) indicates that merging the gradients can improve the communication efficiency. If one merges all layers into one layer so that the communication is only invoked once (i.e., the single-layer communication \cite{you2017scaling}), then the overall communication time is minimal. However, the single-layer communication requires all gradients to be calculated first, which prohibits the overlap between communications and computations. Therefore, we would like to merge the layers appropriately so that we not only reduce the communication time by merging, but also exploit the pipelining between communications and computations. Before formulating the problem, we formally define the concept of merged-gradient layer as follows. \begin{definition}{(Merged-gradient layer).} A layer $l$ is called a merged-gradient layer if, at the timestamp $\tau_c^{(l)}$, instead of communicating the gradients of that layer, we merge its gradients into layer $l-1$ and postpone the communication. The operator $\oplus$ denotes the gradient merging between two consecutive layers, say $l\oplus (l-1)$. Merging more than two layers is possible by marking consecutive layers as merged-gradient layers. \end{definition} \begin{definition}{(Normal layer).} If a layer $l$ is not a merged-gradient layer, then it is called a normal layer and its gradients will not be merged into layer $l-1$. Its gradients (including those merged from other layers, if any) should be communicated as early as possible, i.e., when its own gradients have been calculated and the previously scheduled communication has finished. \end{definition} A merged-gradient layer $l$ has the following properties. \begin{itemize} \item $l>1$, since the first layer of the DNN cannot be a merged-gradient layer according to the definition.
\item There is no communication dedicated to layer $l$, i.e., \begin{equation}\label{ass:1} t_{c}^{(l)}=0. \end{equation} \item The number of updated parameters of layer $l-1$ becomes the sum of those of layer $l$ and layer $l-1$: \begin{equation}\label{ass:3} p^{(l-1)}=p^{(l-1)}+p^{(l)}. \end{equation} \item The timestamp when layer $l-1$ can begin the gradient communication is updated to \begin{equation}\label{ass:2} \tau_c^{(l-1)}=\text{max}\{\tau_c^{(l)}, \tau_b^{(l-1)}+t_b^{(l-1)}\}. \end{equation} \end{itemize} Intuitively, if merging the gradients of two consecutive layers can save time, then we should merge the two layers. In the following, we discuss a complete set of four cases of computation and communication patterns that may happen during the training process with WFBP for layer $l$. The four cases with potential merging are illustrated in Fig. \ref{fig:pipeline}. \begin{figure}[!ht] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{pipeline_a.pdf} \caption{Case 1.} \end{subfigure} \begin{subfigure}{0.48\textwidth} \vspace{10pt} \includegraphics[width=\linewidth]{pipeline_b.pdf} \caption{Case 2.} \end{subfigure} \begin{subfigure}{0.48\textwidth} \vspace{10pt} \includegraphics[width=\linewidth]{pipeline_c.pdf} \caption{Case 3.} \end{subfigure} \begin{subfigure}{0.48\textwidth} \vspace{10pt} \includegraphics[width=\linewidth]{pipeline_d.pdf} \caption{Case 4.} \end{subfigure} \caption{Four cases of gradient communication at one iteration on layer $l$ in WFBP-SGD. Note that the forward computation is not plotted as it is not related to the pipelining timeline.} \label{fig:pipeline} \end{figure} \textbf{Case 1}. In the ideal case, the communication of layer $l$ is fully hidden by its previous layer's computation, that is, \begin{equation} \tau_{c}^{(l)}+t_c^{(l)}\leq \tau_{b}^{(l-1)}+t_{b}^{(l-1)}. \end{equation} The overhead of gradient communication is totally hidden by computation, so it is not necessary to merge the gradients. \textbf{Case 2}. The communication of layer $l$ is partially overlapped with the computation of layer $l-1$, and the communication of layer $l$ begins before the end of the computation of layer $l-1$, that is, \begin{equation} \tau_{c}^{(l)}+t_c^{(l)} > \tau_{b}^{(l-1)}+t_{b}^{(l-1)} > \tau_{c}^{(l)}. \end{equation} Without merging, the communication of layer $l-1$ can begin only after the communication of layer $l$ has finished, i.e., $\tau_{c}^{(l-1)}=\tau_{c}^{(l)}+t_c^{(l)}$. On the other hand, if we want to merge layer $l$ with layer $l-1$, the communication can only happen after the gradients of layer $l-1$ have been calculated. So we should consider whether merging layers $l$ and $l-1$ brings any benefit. As shown in Fig. \ref{fig:pipeline}(b), the merged communication takes a shorter time to finish, which indicates that the time saved by merging is greater than the additional waiting time for the gradient computation of layer $l-1$. Formally, \begin{equation} \begin{split} &\tau_b^{(l-1)}+t_b^{(l-1)}-\tau_c^{(l)} \\ <& \big(T_{ar}(p^{(l)})+T_{ar}(p^{(l-1)})\big) - T_{ar}(p^{(l)}+p^{(l-1)}) = a. \end{split} \end{equation} In this case, we prefer to merge the gradients of layer $l$ into layer $l-1$, i.e., making layer $l$ a merged-gradient layer. \textbf{Case 3}. In this case, the communication of layer $l$ is also partially overlapped with the computation of layer $l-1$, as in Case 2.
However, different from Case 2, the merging operation results in a longer time because the time saved on communication is smaller than the additional waiting time. To be specific, \begin{equation} \tau_{c}^{(l)}+t_c^{(l)} > \tau_{b}^{(l-1)}+t_{b}^{(l-1)} > \tau_{c}^{(l)}, \end{equation} and \begin{equation} \begin{split} &\tau_b^{(l-1)}+t_b^{(l-1)}-\tau_c^{(l)} \\ \geq & \big(T_{ar}(p^{(l)})+T_{ar}(p^{(l-1)})\big) - T_{ar}(p^{(l)}+p^{(l-1)}) = a. \end{split} \end{equation} Therefore, we would not make layer $l$ a merged-gradient layer, because merging the gradients of layer $l$ into layer $l-1$ would decrease the time efficiency. \textbf{Case 4}. Very different from the previous cases, there is no overlap between the communication of layer $l$ and the computation of layer $l-1$, as shown in Fig. \ref{fig:pipeline}(d). This happens when the previous communication time is longer than the previous computation time. That is, \begin{equation} \tau_c^{(l)}\geq \tau_b^{(l-1)}+t_b^{(l-1)}. \end{equation} In this case, the communications of layer $l$ and layer $l-1$ do not need to wait for the end of the computation of layer $l-1$; hence merging the gradients of layer $l$ into layer $l-1$ does not introduce any waiting time for the computation, and it obviously reduces the communication time, i.e., \begin{equation} \big(T_{ar}(p^{(l)})+T_{ar}(p^{(l-1)})\big) - T_{ar}(p^{(l)}+p^{(l-1)}) = a > 0. \end{equation} Thus, we would like to make layer $l$ a merged-gradient layer in this case. From the above discussions, we can see that not all gradient merging brings the benefit of a reduced iteration time (e.g., Case 3). Therefore, our problem is to find all merged-gradient layers such that the overall iteration time is minimal. Since a layer is either a normal layer or a merged-gradient layer, we use $l_n$ and $l_m$ to denote the normal layer type and the merged-gradient layer type, respectively. Let the variable $e^{(l)}$ denote the type of layer $l$ ($l=1,2,...,L$), $e^{(l)}\in \{l_n, l_m\}$. For an $L$-layer DNN model, the set of possible configurations is \begin{equation} \mathbb{M}=\{[e^{(1)},...,e^{(l)},...,e^{(L)}]|e^{(l)}\in \{l_n, l_m\} \text{ and } 1\leq l\leq L\}. \end{equation} Obviously, the number of combinations of normal layers and merged-gradient layers is $|\mathbb{M}|=2^L$. Therefore, our goal is to find an $m\in \mathbb{M}$ such that the iteration time is minimal. Assuming the linear communication model of Eq. (\ref{equ:tcomm}), the communication time of each layer is represented by \begin{equation}\label{equ:comm} t_{c}^{(l)}=T_{ar}(p^{(l)}). \end{equation} For a given DNN trained with a specific mini-batch size on a hardware environment, the computation time of one iteration can be easily measured at the beginning of training. Since the architecture of the DNN does not change during training, the feed-forward and backward propagation computation times are very stable \cite{shi2016benchmarking}. That is, $t_b^{(l)}$ is known for $l=1,2,...,L$. However, the beginning timestamp ($\tau_c^{(l)}$) and the communication time ($t_c^{(l)}$) of layer $l$ differ depending on whether $e^{(l)}=l_n$ or $e^{(l)}=l_m$, as discussed above. Therefore, we generalize the problem as follows.
For a given $L$-layer\footnote{This is also applicable to current DL frameworks like PyTorch, in which the learnable parameters of a layer may be separated into two tensors.} DNN trained with WFBP-SGD on a specific cluster with $N$ workers, we would like to determine whether each $e^{(l)}$ should be $l_n$ or $l_m$ such that the iteration time of training is minimal. Formally, we would like to minimize the iteration time of WFBP-SGD in Eq. (\ref{equ:wfbpiter}), i.e., \begin{equation}\label{equ:problem} \text{minimize: } t_{iter}=t_c^{(1)}+\max\{\tau_c^{(2)}+t_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\}. \end{equation} \section{Solution: MG-WFBP}\label{s:method} In this section, we first perform some theoretical analysis on the optimization problem, and then propose an optimal and efficient solution named merged-gradient WFBP (MG-WFBP). \subsection{Theoretical Analysis} It is obvious that the objective function of Eq. (\ref{equ:problem}) can be rewritten as \begin{equation}\label{equ:newopt} \begin{split} t=&t_c^{(1)}+\text{max}\left\{\tau_c^{(2)}+t_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\right\}\\ =&T_{ar}(p^{(1)})+\text{max}\left\{\tau_c^{(2)}+T_{ar}(p^{(2)}), \tau_b^{(1)}+t_b^{(1)}\right\}\\ =&T_{ar}(p^{(1)})+\text{max}\left\{ \text{max}\{\tau_c^{(3)}+T_{ar}(p^{(3)}), \tau_b^{(2)}+t_b^{(2)}\} \right. \\ &\left. +T_{ar}(p^{(2)}), \tau_b^{(1)}+t_b^{(1)}\right\}. \end{split} \end{equation} It can be seen that the objective function consists of nested $\max$ functions from the first layer to the last layer. We first analyze the difference between layer $2$ being a normal layer and being a merged-gradient layer, and then we extend the analysis to a general layer $l$ to prove optimality. Assume that layers $L,L-1,...,3$ are normal layers, and layer $2$ is a merged-gradient layer; then $t_c^{(2)}=0$ and $t_c^{(1)}=T_{ar}(p^{(2)}+p^{(1)})$. Plugging these two new values into Eq. (\ref{equ:newopt}), we obtain \begin{equation}\label{equ:merged} \begin{split} \hat{t}&=T_{ar}(p^{(2)}+p^{(1)})+\text{max}\left\{\tau_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\right\}. \end{split} \end{equation} Comparing Eq. (\ref{equ:newopt}) with Eq. (\ref{equ:merged}), we want to find out under what conditions $\hat{t}< t$, i.e., when layer $2$ should be a merged-gradient layer. Specifically, we would like to derive the conditions such that \begin{equation} \begin{split} \hat{t}=&T_{ar}(p^{(2)}+p^{(1)})+\text{max}\left\{\tau_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\right\} \\ <& t = T_{ar}(p^{(1)})+\text{max}\left\{\tau_c^{(2)}+T_{ar}(p^{(2)}), \tau_b^{(1)}+t_b^{(1)}\right\}, \end{split} \end{equation} which is equivalent to \begin{equation}\label{equ:ineq} \begin{split} &b\times p^{(2)}+\text{max}\left\{\tau_c^{(2)}, \tau_b^{(1)}+t_b^{(1)}\right\} \\ <& \text{max}\left\{\tau_c^{(2)}+T_{ar}(p^{(2)}), \tau_b^{(1)}+t_b^{(1)}\right\}. \end{split} \end{equation} Since there are two max functions in the above inequality, we need to decompose them. Decomposing the two max functions explicitly corresponds to the four cases discussed in the previous section. Note that it is impossible that $\tau_c^{(2)}+T_{ar}(p^{(2)}) \leq \tau_b^{(1)}+t_b^{(1)}$ and $\tau_c^{(2)} > \tau_b^{(1)}+t_b^{(1)}$ hold simultaneously. Therefore, we decompose the two max functions under the following three conditions. \textbf{Condition 1}. $\tau_c^{(2)}+T_{ar}(p^{(2)}) \leq \tau_b^{(1)}+t_b^{(1)}$. Then $\tau_c^{(2)} \leq \tau_b^{(1)}+t_b^{(1)}$ also holds.
The inequality (\ref{equ:ineq}) becomes \begin{align*} b\times p^{(2)}+ \tau_b^{(1)}+t_b^{(1)} < \tau_b^{(1)}+t_b^{(1)}, \end{align*} which obviously does not hold as $b \times p^{(2)}>0$. Therefore, layer $2$ should be a normal layer in this case, since making layer $2$ a merged-gradient layer cannot reduce the iteration time. \textbf{Condition 2}. The condition is \begin{equation} \tau_{c}^{(2)}+T_{ar}(p^{(2)}) > \tau_{b}^{(1)}+t_{b}^{(1)} > \tau_{c}^{(2)}. \end{equation} We can decompose inequality (\ref{equ:ineq}) to \begin{equation} \begin{split} &b\times p^{(2)}+ \tau_b^{(1)}+t_b^{(1)} \\ <& \tau_c^{(2)}+T_{ar}(p^{(2)})=\tau_c^{(2)}+a+b\times p^{(2)}, \end{split} \end{equation} which is equivalent to \begin{equation}\label{equ:inq-c2} \begin{split} \tau_b^{(1)}+t_b^{(1)}<\tau_c^{(2)}+a. \end{split} \end{equation} So if inequality (\ref{equ:inq-c2}) holds, then we can make layer $2$ a merged-gradient layer to reduce the iteration time; otherwise we make it a normal layer. \textbf{Condition 3}. The condition is \begin{equation} \tau_{c}^{(2)}+T_{ar}(p^{(2)}) > \tau_{c}^{(2)}> \tau_{b}^{(1)}+t_{b}^{(1)}. \end{equation} We decompose inequality (\ref{equ:ineq}) to \begin{equation} \begin{split} b\times p^{(2)}+\tau_c^{(2)}<\tau_c^{(2)}+T_{ar}(p^{(2)}). \end{split} \end{equation} It is equivalent to \begin{equation} \begin{split} b\times p^{(2)}+\tau_c^{(2)}<\tau_c^{(2)}+a+b\times p^{(2)}, \end{split} \end{equation} which is obviously true as $a>0$. Therefore, under this condition, we prefer to make layer $2$ a merged-gradient layer. To summarize, under Condition 2 with inequality (\ref{equ:inq-c2}), and under Condition 3, making layer $2$ a merged-gradient layer reduces the iteration time. Now we extend the above analysis to a general layer $l$ with $l>1$. Considering only the end time of layer $l-1$, making layer $l$ a merged-gradient layer reduces this end time if Condition 2 together with inequality (\ref{equ:inq-c2}) holds, or if Condition 3 holds. Thus, we have the following lemma. \begin{lemma}\label{lemma:mergedlayer} Given an $L$-layer DNN which is trained with WFBP-SGD in a cluster of $N$ workers, if the gradient communication is done through all-reduce, layer $l>1$ should be a merged-gradient layer to reduce the iteration time if and only if \begin{equation}\label{equ:lemma-inq1} \tau_b^{(l-1)}+t_b^{(l-1)} < \tau_c^{(l)}+a. \end{equation} \begin{proof} As discussed in the above three conditions, if Condition 2 together with inequality (\ref{equ:inq-c2}) holds, or Condition 3 holds, layer $l$ should be a merged-gradient layer to reduce the iteration time; otherwise it should be a normal layer. The combination of Condition 2 together with inequality (\ref{equ:inq-c2}) and Condition 3 is \begin{equation} \tau_b^{(l-1)}+t_b^{(l-1)} < \tau_c^{(l)}+a, \end{equation} which concludes the proof. \end{proof} \end{lemma} From Lemma \ref{lemma:mergedlayer}, it is seen that whether layer $l$ should be a merged-gradient layer or not depends on the end of the computation time of layer $l-1$ (i.e., $\tau_b^{(l-1)}+t_b^{(l-1)}$) and its own beginning time of communication (i.e., $\tau_c^{(l)}$). Thus, the communications of higher layers are not affected by the lower layers, while the lower layers are affected by the higher ones, as the communication of a lower layer can only begin after those of the higher layers have finished. If layer $l$ is a normal layer, we can continue to determine layer $l-1$ by checking the above three conditions.
If layer $l$ is a merged-gradient layer, layer $l-1$ has an earlier end time, thanks to the benefit of merging. We then determine the type of layer $l-1$ in the same way as for layer $l$, which results in a recursive procedure from layer $L$ down to layer $2$. Consequently, we first determine whether the last layer $L$ is a merged-gradient layer or a normal layer, then layer $L-1$, and so on down to layer $2$, to find the final solution $m\in \mathbb{M}$ such that Eq. (\ref{equ:newopt}) is minimal. \begin{theorem}\label{theorem:opt} Given an $L$-layer DNN which is trained with WFBP-SGD in a cluster of $N$ workers, if the gradient communication is done through all-reduce, one can find $m\in \mathbb{M}$ such that the iteration time is minimal, and \begin{equation}\label{the:solution} m=[e^{(L)}, e^{(L-1)},...,e^{(1)}], \end{equation} where \begin{equation}\label{equ:layertype} e^{(l)}= \begin{cases} l_m & \text{if }\tau_b^{(l-1)}+t_b^{(l-1)} < \tau_c^{(l)}+a \text{ and } l>1\\ l_n & \text{otherwise} \end{cases} \end{equation} for $1 \leq l \leq L$. \end{theorem} \begin{proof} A layer $l$ is either a merged-gradient layer or a normal layer. According to Lemma \ref{lemma:mergedlayer}, for $l>1$ and $\tau_b^{(l-1)}+t_b^{(l-1)} < \tau_c^{(l)}+a$, $e^{(l)}=l_m$ gives a shorter time than $e^{(l)}=l_n$. For $l=1$ or $\tau_b^{(l-1)}+t_b^{(l-1)} \geq \tau_c^{(l)}+a$, $e^{(l)}=l_n$ gives a shorter time than $e^{(l)}=l_m$. Consequently, if $m=[e^{(L)}, e^{(L-1)},...,e^{(1)}]$ and $e^{(l)}$ is assigned by Eq. (\ref{equ:layertype}), then changing any merged-gradient layer to a normal layer, or any normal layer to a merged-gradient layer, would result in a longer iteration time, which concludes the proof. \end{proof} \subsection{Algorithms} Assuming that the $N$-node cluster is connected by an interconnect with bandwidth $B$, we can measure the all-reduce cost with respect to the message size to derive the parameters $a$ and $b$ in Eq. (\ref{equ:tcomm}). Therefore, we can estimate the communication time of all-reduce for any message size. The backward computation time can also be benchmarked for a particular GPU at the beginning of training. Thus, $t_f$, $t_b^{(l)}$ and $t_c^{(l)}$, where $1 \leq l \leq L$, are known. According to Theorem \ref{theorem:opt}, we derive the algorithm to find $m$, as shown in Algorithm \ref{algo:mgbp}. \begin{algorithm}[h] \caption{Find optimal $m\in \mathbb{M}$}\label{algo:mgbp} \small \textbf{Input: }$a$, $b$, $L$, $\bm{t_b}[1...L]$, $\bm{p}=[p^{(1)},p^{(2)},...,p^{(L)}]$.\\ \textbf{Output: $m$} \begin{algorithmic}[1] \State Initialize $\bm{t_c}[1...L]$; // Communication time cost \State Initialize $\bm{\tau_b}[1...L]$; // Backward computation start time \State Initialize $m[1...L]=\{l_n\}$; // Initialize all layers as normal layers \For{$l=1\rightarrow L$} \State $\bm{t_c}[l]=a+b\times \bm{p}[l]$; \EndFor \State $\bm{\tau_b}[L]=0$; \For{$l=L-1\rightarrow 1$} \State $\bm{\tau_b}[l]$ = $\bm{\tau_b}[l+1]$ + $\bm{t_b}[l+1]$; \EndFor \State $\bm{\tau_c}$=\Call{CalculateCommStart}{$\bm{t_c}, \bm{t_b}, \bm{\tau_b}, L$}; \For{$l=L\rightarrow 2$} \If{$\bm{\tau_b}[l-1]+\bm{t_b}[l-1]-\bm{\tau_c}[l] < a$} // Eq.
(\ref{equ:layertype}) \State \Call{Merge}{$\bm{\tau_b}, \bm{t_c}, \bm{p}, l$}; \State $\bm{\tau_c}$=\Call{CalculateCommStart}{$\bm{t_c}, \bm{t_b}, \bm{\tau_b}, L$}; \State $m[l]=l_m$; // Make $l$ a merged-gradient layer \EndIf \EndFor \State Return $m$; \Procedure{Merge}{$\bm{\tau_b}, \bm{t_c}, \bm{p}, l$} \State $\bm{t_c}[l]=0$; \State $\bm{p}[l-1]=\bm{p}[l-1]+\bm{p}[l]$; \State $\bm{t_c}[l-1]=a+b\times \bm{p}[l-1]$; \EndProcedure \Procedure{CalculateCommStart}{$\bm{t_c}, \bm{t_b}, \bm{\tau_b}, L$} \State Initialize $\bm{\tau_c}[1...L]$; // Communication start time \State $\bm{\tau_c}[L]=\bm{\tau_b}[L]+\bm{t_b}[L]$; \For{$l=L-1\rightarrow 1$} \State $\bm{\tau_c}[l]=\text{max}\{\bm{\tau_c}[l+1]+\bm{t_c}[l+1], \bm{\tau_b}[l]+\bm{t_b}[l]\}$; \EndFor \State \text{Return } $\bm{\tau_c}$; \EndProcedure \end{algorithmic} \end{algorithm} The algorithm first (lines 1-8) initializes the layer-wise gradient communication cost $t_c^{(l)}$ and the computation start time $\tau_b^{(l)}$, according to Eq. (\ref{equ:tcomm}) and Eq. (\ref{equ:startcomp}) respectively, using the system settings and benchmark results from the first several iterations. Then (line 9, lines 20-25) the layer-wise start time of communication is calculated based on Eq. (\ref{equ:startt}). After that (lines 10-14), the merged-gradient layers are found according to Eq. (\ref{equ:layertype}); whenever a layer is found to be a merged-gradient layer, the communication time of its previous layer is updated (lines 16-19) according to Eq. (\ref{ass:1}), Eq. (\ref{ass:3}) and Eq. (\ref{ass:2}). The proposed algorithm has a time complexity of $O(L^2)$: for each merged-gradient layer, the algorithm needs to re-calculate the start time of communication of each layer, which is an $O(L)$ operation, and there are at most $L-1$ merged-gradient layers. Since the algorithm is a one-time calculation at the beginning of training and need not be re-executed during the training process, the overhead of finding $m\in \mathbb{M}$ has no side effect on the training performance.
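For readers who prefer an executable description, the following Python sketch is a direct transcription of Algorithm \ref{algo:mgbp} (the function and variable names are ours and not those of the released code; array index $l-1$ stores layer $l$, and $t_f$ is set to $0$ since a constant offset does not affect the merging decisions):
\begin{verbatim}
def calc_comm_start(t_c, t_b, tau_b, L):
    # Layer-wise communication start times (CalculateCommStart).
    tau_c = [0.0] * L
    tau_c[L - 1] = tau_b[L - 1] + t_b[L - 1]
    for i in range(L - 1, 0, -1):        # layers L-1 .. 1
        tau_c[i - 1] = max(tau_c[i] + t_c[i],
                           tau_b[i - 1] + t_b[i - 1])
    return tau_c

def find_merged_gradient_layers(a, b, t_b, p):
    # t_b[l-1]: backward time of layer l; p[l-1]: its parameter
    # size in bytes; a, b: fitted all-reduce model constants.
    L = len(p)
    p = list(p)                          # copied, as it is modified
    t_c = [a + b * pl for pl in p]
    tau_b = [0.0] * L                    # backward start times
    for i in range(L - 1, 0, -1):
        tau_b[i - 1] = tau_b[i] + t_b[i]
    merged = [False] * L
    tau_c = calc_comm_start(t_c, t_b, tau_b, L)
    for i in range(L - 1, 0, -1):        # layers L .. 2
        # Merging condition of Theorem 1: extra waiting time < a.
        if tau_b[i - 1] + t_b[i - 1] - tau_c[i] < a:
            t_c[i] = 0.0                 # layer i+1 becomes merged
            p[i - 1] += p[i]
            t_c[i - 1] = a + b * p[i - 1]
            merged[i] = True
            tau_c = calc_comm_start(t_c, t_b, tau_b, L)
    return merged                        # merged[i]: layer i+1 is l_m
\end{verbatim}
Running this once at the beginning of training yields, for each layer, whether it is a merged-gradient layer.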
\begin{algorithm}[h] \caption{MG-WFBP S-SGD at worker $g$}\label{algo:gewfbp} \textbf{Input: } $\bm{D}=[\{X_1, y_1\},...,\{X_n, y_n\}]$, $I$, $net$, $N$, $bs$\\ \textbf{Output: $\bm{W}=[W^{(1)}, W^{(2)},...W^{(L)}]$} \begin{algorithmic}[1] \small \State Initialize a shared and synchronized queue $Q$; \State Obtain the parameter size $\bm{p}[1...L]$ from $net$; \State Allocate memories $\bm{W}$; \State Initialize $\bm{W}$ in all accelerators; \If{rank == 0} \State Benchmark several iterations to achieve $\bm{t_b}[1...L]$; \State Get $m$ from Algorithm \ref{algo:mgbp}; \EndIf \State Bcast($m$, root=0); // Broadcast the optimal solution to all workers \State \Call{AsyncHandleCommunication}{$Q, m$}; \For{$i=1\rightarrow I$} \State Sample a mini-batch of data from $D$ to $d$; \State \Call{AsyncHandleComputation}{$Q,d,L$}; \State WaitForLastCommunicationFinished(); \State $\bm{W}=\bm{W}-\eta\cdot\nabla \bm{W}$; \EndFor \State NotifyFinished(); // Set $isRunning$ to false \Procedure{AsyncHandleComputation}{$Q,d,L$} \State $o=d$; \For{$l=1\rightarrow L$} \State $o$=FeedForward($l,o$); \EndFor \For{$l=L\rightarrow 1$} \State BackwardPropagation($l$); \State $Q.\text{push}(l)$; \EndFor \EndProcedure \Procedure{AsyncHandleCommunication}{$Q, m$} \State Initialize $lb$; // layerBuffer \While{\textit{isRunning}} \State $l=Q.\text{pop()}$; \State $lb$.push($l$); \If{$m[l] == l_n$} \State SynchronizedAllReduce($lb$); \State $lb.$clear(); \EndIf \If{$l=1$} \State NotifyLastCommunicationFinished(); \EndIf \EndWhile \EndProcedure \end{algorithmic} \end{algorithm} We denote the WFBP algorithm integrated with the optimal solution $m$ derived from Algorithm \ref{algo:mgbp} as MG-WFBP. In MG-WFBP, the gradients of merged-gradient layers are communicated together with those of their previous layers. As a result, MG-WFBP achieves the minimal iteration time of S-SGD for known DNNs and system configurations. The MG-WFBP S-SGD algorithm is shown in Algorithm \ref{algo:gewfbp}. For each worker, the algorithm first (lines 1-7) initializes the related variables and calculates $m\in \mathbb{M}$ using Algorithm \ref{algo:mgbp}. Then the root worker (rank 0) broadcasts (line 8) the solution $m$ to all other workers. Line 9 starts a communication thread; the thread reads layer numbers from the shared queue $Q$ and decides whether the corresponding gradients should be communicated (lines 24-32). After that (lines 10-14), the algorithm starts the training loop, iteratively (lines 16-22) reading data, performing the feed-forward and backward propagation, and pushing each layer number into the shared queue. Finally, the algorithm sets \textit{isRunning} to false to finish the training. \subsection{Applicability to Parameter Server} The MG-WFBP algorithm relies on the measurement of the layer-wise computing time and on the modeling of gradient aggregation to obtain the optimal merging strategy. In the parameter server (PS) architecture, the layer-wise computation is the same as in the all-reduce case, while the gradient aggregation process becomes a two-phase operation: (1) the workers push the gradients to the PS; and (2) the workers pull the aggregated gradients (or the latest model parameters) from the PS. Each direction of communication (i.e., push or pull) can also be modeled by Eq. (\ref{equ:tcomm}). Therefore, the theoretical analysis of the optimal merging with Theorem~\ref{theorem:opt} holds, and our MG-WFBP is applicable to the PS architecture.
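For instance, assuming that the push and pull phases are not overlapped and that each is modeled as in Eq. (\ref{equ:tcomm}) with constants $a_{ps}$ and $b_{ps}$, the two-phase aggregation of a message of size $M$ costs roughly \begin{equation*} T_{ps}(M) = 2(a_{ps} + b_{ps}M), \end{equation*} which is again a linear function of $M$ with a positive y-intercept; Algorithm \ref{algo:mgbp} can thus be reused by simply substituting $a = 2a_{ps}$ and $b = 2b_{ps}$.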
\section{System Implementation}\label{s:system} As shown in Algorithm \ref{algo:gewfbp}, implementing MG-WFBP requires three main features of the system. First, the system needs to measure the backward propagation computation time of each layer (i.e., $t_b^{(l)}$) for any configured deep neural network. Second, the backward computation and gradient aggregation should be executed in parallel so as to pipeline communications and computations. Third, the merging operation for merged-gradient layers should be efficient. It is non-trivial to implement these three functions in current state-of-the-art DL frameworks (e.g., TensorFlow \cite{abadi2016tensorflow} and PyTorch \cite{pytorch2019}), which exploit a directed acyclic graph (DAG) to represent the computing operations during training. Considering that PyTorch has become increasingly popular due to its easy-to-use Pythonic programming style and high-performance operators \cite{pytorch2019}, in this section we describe the implementation of the MG-WFBP algorithm atop PyTorch. \subsection{Time Measurement of Backward Propagation} When deploying the DAG to GPUs in PyTorch, different operators can be executed concurrently due to the asynchronous nature of CUDA streams \cite{nvidia2011cuda}. Therefore, during the backward propagation, the gradients of different variables may be calculated concurrently on the same GPU, so that measuring the time of each variable is not straightforward. To correctly collect the backward propagation time, we design a lightweight profiling tool for backward propagation in PyTorch that executes the gradient computations of different variables sequentially. For each tensor that has gradients, we synchronize the tensor after it finishes its gradient computation with CUDA synchronization (\texttt{torch.cuda.synchronize}). Consequently, we can collect the gradient-computation interval between two adjacent tensors, and two adjacent tensors with gradients belong either to a single layer or to two adjacent layers. The measurement is automatically executed by our MG-WFBP training algorithm during the first several iterations; a rough code sketch of this idea is given below. \subsection{Parallelism between Gradient Computation and Aggregation} Current DL frameworks provide Python APIs for end-users. In general, one can use multi-threading or multi-processing to execute gradient computation and aggregation on two different threads or processes. On one hand, however, Python's global interpreter lock (GIL) \cite{beazley2010understanding} limits multi-threading, which results in very poor performance when parallelizing two computation tasks. On the other hand, the multi-processing mechanism requires memory copies between two processes, as the gradients are updated every iteration. Since multiple processes cannot share GPU memory addresses, when a process needs to pass its GPU data to another process, it first copies the data to host memory and then to the other process, which degrades performance. To avoid the GIL problem in Python and memory copies between processes, we implement the gradient aggregation in a C++ daemon thread, while the original training code is kept unchanged and the training process (forward and backward computation) runs in the main thread. The C++ daemon thread circumvents the Python GIL, and it shares the gradient data with the main thread so that no memory copy is required. The architecture is shown in Fig. \ref{fig:sysarch}.
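As promised above, here is a rough illustration of the per-tensor timing idea (a minimal sketch, not the actual profiling tool; it assumes a CUDA-capable GPU and uses only standard PyTorch APIs):
\begin{verbatim}
import time
import torch

def attach_backward_timers(model):
    # A hook registered on parameter w fires right after w's gradient has
    # been produced during backward(). Calling torch.cuda.synchronize()
    # there drains the asynchronous CUDA streams, so the host-side
    # timestamps are meaningful; the gap between two consecutive
    # timestamps approximates one tensor's gradient-computation time.
    stamps = []
    for name, w in model.named_parameters():
        if w.requires_grad:
            def hook(grad, name=name):
                torch.cuda.synchronize()
                stamps.append((name, time.time()))
                return grad
            w.register_hook(hook)
    return stamps
\end{verbatim}
During the first several iterations, the intervals between consecutive timestamps are accumulated per layer to estimate $t_b^{(l)}$.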
\begin{figure}[!h] \centering \includegraphics[width=\linewidth]{sysarch.pdf} \caption{Overview of system architecture.} \label{fig:sysarch} \end{figure} The key component is the communication handler implemented in a C++ daemon thread. It fetches the gradients from the queue that is shared with the computation handler. During the backpropagation, the computation handler in the main thread puts the gradients of the normal layers into the shared queue, while the communication handler pops the gradients from the queue and communicates with the other workers via an all-reduce operation. \subsection{Efficient Gradient Merging} In every iteration, if the current layer is a merged-gradient layer, we need to copy two layers' gradient data into a single segment of contiguous memory. We pre-allocate all the memory for merged-gradient layers. For example, assume layer 2 is a merged-gradient layer with $p^{(2)}$ parameters, and layer 1 has $p^{(1)}$ parameters. Note that layer 1 and layer 2 are different tensors, so their memory may not be contiguous. We then allocate a buffer whose size is $(p^{(2)}+p^{(1)})\times BytesPerElement$, where $BytesPerElement$ is 4 for single-precision floats and 2 for half-precision floats (e.g., in mixed-precision training). Therefore, every merged-gradient layer and its preceding normal layer share a pre-allocated buffer. When any buffer is full, the gradient aggregation thread invokes the all-reduce operation. The pre-allocated buffers for merged-gradient layers consume the same amount of memory as the model parameters, which is relatively small compared to the memory occupied by the temporary outputs of the hidden layers. In PyTorch, the data copy between GPU tensors is fast, as it only moves data within GPU memory without copying back to host memory; for reference, an Nvidia Tesla V100 GPU delivers a peak memory bandwidth of 900\,GB/s. \section{Experimental Studies}\label{s:eval} \subsection{Experimental Settings} We conduct extensive experimental studies to show the effectiveness of MG-WFBP. Our test-beds contain three GPU clusters with 10Gbps Ethernet (10GbE) and 56Gbps InfiniBand (56GbIB) interconnects. One is an 8-node Nvidia Tesla K80 cluster with a total of 16 GK210 GPUs (one Tesla K80 card contains two GK210 GPUs), whose 8 nodes are connected by 10GbE; the other two are 4-node Nvidia Tesla V100 clusters, in which each node contains 4 GPUs, resulting in a total of 16 GPUs, and the 4 nodes are connected with 10GbE and 56GbIB, respectively. The cluster settings are listed in Table \ref{table:clusters}. \begin{table}[!ht] \centering \caption{The hardware and software settings on one node.} \label{table:clusters} \begin{tabular}{|l|c|c|c|} \hline & Cluster 1 & Cluster 2 & Cluster 3 \\\hline \hline \# of Nodes & 8 & \multicolumn{2}{c|}{4} \\\hline GPU (Nvidia) & Tesla K80 & \multicolumn{2}{c|}{Tesla V100 PCIe x4} \\\hline Network & 10GbE & 10GbE & 56GbIB \\\hline PCIe & \multicolumn{3}{c|}{PCI Express Gen3 x16} \\\hline CPU (Intel) & Xeon E5-2650v4 Dual & \multicolumn{2}{c|}{Xeon E5-2698v3 Dual}\\\hline Memory & \multicolumn{3}{c|}{256 GB} \\\hline OS & CentOS-7.2 & \multicolumn{2}{c|}{Ubuntu 16.04} \\\hline Software & CUDA-8.0 & \multicolumn{2}{c|}{CUDA-10.0} \\\cline{2-4} & OpenMPI-3.1.1 & \multicolumn{2}{c|}{OpenMPI-4.0.0} \\\cline{2-4} & NCCL-2.2.12 & \multicolumn{2}{c|}{NCCL-2.3.7} \\\hline \end{tabular} \end{table} First, we conduct experiments to measure the communication performance on the three clusters.
Second, we evaluate the end-to-end training wall-clock time on representative real-world DNN models including GoogleNet \cite{szegedy2015going}, ResNet-50/152 \cite{he2016deep}, DenseNet-161/201 \cite{huang2017densely} and Inception-v4 \cite{szegedy2017inception} with the ImageNet dataset ILSVRC-2012 \cite{deng2009imagenet}, which contains about $1.28$ million training images and $50,000$ validation images in $1,000$ categories. The resolution of the input images is $224\times224$. The training settings of the DNN models are listed in Table \ref{table:dnns}. \begin{table}[!ht] \caption{DNNs for evaluation.} \label{table:dnns} \begin{tabular}{|l|l|l|l|l|} \hline Model& \# Tensors &\# Parameters & \# MACs & Batch Size \\\hline \hline GoogleNet &59 & \textasciitilde 13M & 1.43G & 64\\\hline ResNet-50 &161 & \textasciitilde 25.5M & 3.9G & 32 \\\hline ResNet-152 & 467 & \textasciitilde 60.1M & 11.61G & 128 \\\hline DenseNet-161 &484 & \textasciitilde 28.6M & 7.85G & 64 \\\hline DenseNet-201 &604 & \textasciitilde 20M & 4.39G & 64 \\\hline Inception-v4 &449 & \textasciitilde 42.6M & 6.16G & 128 \\\hline \end{tabular} Note: \# MACs indicates the number of multiply-accumulate operations in the forward pass with a batch size of 1. \end{table} \subsection{Measurement of All-reduce Communication} \begin{figure*}[!ht] \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{commtime-16-GK210-10GbE.pdf} \caption{Cluster 1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{commtime-16-V100-10GbE.pdf} \caption{Cluster 2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{commtime-16-V100-56GbIB.pdf} \caption{Cluster 3} \end{subfigure} \caption{The communication time of all-reduce with respect to the size of parameters on three different clusters. (a) Cluster 1 with $a=9.72\times 10^{-4},b=1.97\times 10^{-9}$; (b) Cluster 2 with $a=9.08\times 10^{-4},b=7.4\times 10^{-10}$; (c) Cluster 3 with $a=2.36\times 10^{-4},b=4.06\times 10^{-10}$.} \label{fig:commoverhead} \end{figure*} To verify the communication model in Eq. (\ref{equ:tcomm}) empirically, we first present preliminary measurements of the all-reduce time on the three configured clusters. The measured all-reduce times on cluster 1, cluster 2 and cluster 3 are shown in Fig. \ref{fig:commoverhead}(a), \ref{fig:commoverhead}(b) and \ref{fig:commoverhead}(c) respectively. Taking the size of the parameters ($4p$ bytes for single-precision floats) as the variable, we can see that the startup overheads (e.g., $2(N-1)\times \alpha$ in the ring-based all-reduce algorithm) are $972\mu s$, $908\mu s$ and $236\mu s$ on cluster 1, cluster 2 and cluster 3 respectively. We also show the statistical distributions of the layer-wise tensor sizes of different DNNs in Fig. \ref{fig:tensordistribution}, which shows that a large proportion of tensors contain only a small number of gradients. For example, ResNet-152 has 150 tensors whose size is 1024 bytes (in 32-bit precision), and DenseNet-161 has 160 tensors whose size is 768 bytes (in 32-bit precision). \begin{figure}[!h] \centering \includegraphics[width=0.95\linewidth]{tensordistribution.pdf} \caption{Tensor size distribution.
The two curves are the all-reduce communication time with respect to the tensor size, and the scatter markers indicate the number of tensors of a specific size in each DNN.} \label{fig:tensordistribution} \end{figure} \subsection{Real-world Experiments} We implement WFBP \cite{awan2017s}\cite{zhang2017poseidon}, single-layer communication Sync EASGD (SyncEASGD) \cite{you2017scaling} and our proposed MG-WFBP with PyTorch and OpenMPI, and test their performance on the 16-GPU K80 and V100 clusters with 10GbE and 56GbIB. We also compare the scaling efficiencies with TensorFlow. The compared TensorFlow version is v1.3, which uses parameter servers to perform S-SGD via the official benchmark script\footnote{https://github.com/tensorflow/benchmarks}. We also run 13 epochs to verify the convergence of model training, using the $50,000$ validation images to test the top-1 accuracy. \subsubsection{Results on Cluster 1} \begin{figure}[!h] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupgooglenetk80real.pdf} \caption{Speedup} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{googlenetconvergence.pdf} \caption{Top-1 validation accuracy} \end{subfigure} \caption{The performance of GoogleNet on the K80 cluster with 10GbE. The baseline of the speedup of SGD is on a single machine with 2 GPUs.} \label{fig:realresultsgooglenet} \end{figure} \begin{figure}[!h] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupresnetk80real.pdf} \caption{Speedup} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{resnet50convergence.pdf} \caption{Top-1 validation accuracy} \end{subfigure} \caption{The performance of ResNet-50 on the K80 cluster with 10GbE. The baseline of the speedup of SGD is on a single machine with 2 GPUs.} \label{fig:realresultsresnet} \end{figure} \begin{figure}[!h] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{commgooglenetreal.pdf} \caption{GoogleNet} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{commresnetreal.pdf} \caption{ResNet-50} \end{subfigure} \caption{Time costs of non-overlapped communication and computation. `WF.', `S.E.' and `M.W.' indicate WFBP, SyncEASGD and MG-WFBP algorithms respectively. `Comp.' refers to the computation cost (i.e., $t_f+t_b$), and `Comm.' refers to the non-overlapped communication cost (i.e., $t_c^{no}$).} \label{fig:realcomm} \end{figure} The experimental results of GoogleNet and ResNet-50 on the GK210 GPU cluster are shown in Fig. \ref{fig:realresultsgooglenet} and Fig. \ref{fig:realresultsresnet} respectively. The non-overlapped communication cost compared to the computation time is shown in Fig. \ref{fig:realcomm}. The baseline is the iteration throughput of two GPUs in a single machine, where no communication is required, and the throughput speedups on multiple workers are measured against this baseline. From Fig. \ref{fig:realcomm}, we can observe that for both GoogleNet and ResNet-50, MG-WFBP performs better than WFBP, SyncEASGD and TensorFlow. SyncEASGD does not overlap the communication with computation; hence the communication cost increases with the number of workers. As a consequence, the scaling efficiency of SyncEASGD is poor. WFBP achieves near-linear scaling on 2 and 4 nodes, on which the non-overlapped communication overheads are small.
When scaling to 8 nodes, however, WFBP shows an obvious drop in efficiency due to the increased startup time of layer-wise communication, which cannot be totally hidden by computation. Regarding the performance of TensorFlow, it uses parameter servers for model aggregation. On one hand, the centralized parameter-server-based algorithm easily suffers from bandwidth pressure at the parameter server on lower-speed networks \cite{zhang2017poseidon}. On the other hand, it takes two communication directions (workers to PS, and PS to workers) to finish the model synchronization, which introduces more overhead in the synchronization pass. Therefore, though TensorFlow exploits the WFBP technique, the PS-based method performs worse than the decentralized method. Our proposed algorithm has a very small non-overlapped communication cost even on the 8-node cluster, so the scaling efficiency stays close to linear. In summary, MG-WFBP achieves about $1.2$x and $1.36$x speedups compared to WFBP and SyncEASGD respectively on the 8-node (16 GPUs) K80 cluster on both GoogleNet and ResNet-50. \begin{figure*}[!ht] \captionsetup[subfigure]{justification=centering} \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{resnet-152-breakdown-bs128-10GbE.pdf} \caption{ResNet-152 with 10GbE \\(1\%-20\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{densenet-161-breakdown-bs64-10GbE.pdf} \caption{DenseNet-161 with 10GbE \\(7\%-70\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{densenet-201-breakdown-bs64-10GbE.pdf} \caption{DenseNet-201 with 10GbE \\(7\%-69\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{inception-v4-breakdown-bs128-10GbE.pdf} \caption{Inception-v4 with 10GbE \\(12\%-39\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{resnet-152-breakdown-bs128-56GbIB.pdf} \caption{ResNet-152 with 56GbIB \\(2\%-9\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{densenet-161-breakdown-bs64-56GbIB.pdf} \caption{DenseNet-161 with 56GbIB \\(2\%-24\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{densenet-201-breakdown-bs64-56GbIB.pdf} \caption{DenseNet-201 with 56GbIB \\(6\%-26\%)} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{inception-v4-breakdown-bs128-56GbIB.pdf} \caption{Inception-v4 with 56GbIB \\(2\%-18\%)} \end{subfigure} \caption{Time comparison of non-overlapped communication and computation on the two V100 GPU clusters (10GbE and 56GbIB). `WF.', `S.E.' and `M.W.' indicate WFBP, SyncEASGD and MG-WFBP algorithms respectively. `Comp.' refers to the computation cost (i.e., $t_f+t_b$), and `Comm.' refers to the non-overlapped communication cost (i.e., $t_c^{no}$). The values in the brackets are the range of improvements of MG-WFBP over WFBP and SyncEASGD.} \label{fig:v100results} \end{figure*} \subsubsection{Results on Cluster 2 and Cluster 3} Note that MG-WFBP has no side effect on the convergence performance (in terms of the number of iterations), as MG-WFBP produces aggregated gradients at each iteration that are consistent with those of the original S-SGD. Therefore, in the following performance evaluation, we focus on comparing the average iteration wall-clock time to demonstrate the performance improvement of MG-WFBP over WFBP and SyncEASGD.
On cluster 2 and cluster 3, in addition to the general setting with single-precision (FP32) training, we also apply our MG-WFBP algorithm to the mixed-precision training technique \cite{micikevicius2018mixed}, which is widely used on GPUs with Tensor Cores (e.g., Tesla V100) to increase the computing efficiency and reduce the communication traffic. The results are shown in Fig. \ref{fig:v100results}. Overall, it can be seen that across the different DNN models neither WFBP nor SyncEASGD always outperforms the other, as both algorithms are sensitive to the cluster configuration, while our proposed MG-WFBP algorithm achieves the fastest training speed on all evaluated DNNs. The first row of Fig. \ref{fig:v100results} shows that MG-WFBP achieves up to $70\%$ improvement over the WFBP and SyncEASGD algorithms on Cluster 2 with the 10GbE connection. The second row of Fig. \ref{fig:v100results} demonstrates that MG-WFBP outperforms WFBP and SyncEASGD by up to $26\%$ on Cluster 3 with the 56GbIB connection. On the ResNet-152 architecture, pipelining all FP32 tensors helps hide some communication overheads, so WFBP trains faster than SyncEASGD. On the DenseNet and Inception architectures, however, pipelining every tensor between communication and computation introduces many extra communication startup overheads, so WFBP trains slower than SyncEASGD. On the ResNet-152 architecture with FP32 precision, the hidden communication time exceeds the extra time introduced by each layer's startup overhead with pipelining, so WFBP is about $10\%$ faster than SyncEASGD. Our MG-WFBP algorithm further reduces the negative impact of the startup time by smartly merging some gradients, which results in an extra $10\%$ improvement. On the other hand, when pipelining all tensors introduces larger overheads than the hidden time, SyncEASGD wins; for example, SyncEASGD is $20\%$ faster than WFBP on DenseNet-161. By merging the tensors smartly, MG-WFBP performs $7\%$ faster than SyncEASGD. In summary, MG-WFBP always outperforms WFBP and SyncEASGD. In our extensive experiments, MG-WFBP achieves up to $15\%$ improvement over the better of WFBP and SyncEASGD on both the 10GbE and 56GbIB interconnects. \subsection{Simulation} \begin{figure}[!ht] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupgooglenetk80-2048.pdf} \caption{GoogleNet} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupresnetk80-2048.pdf} \caption{ResNet-50} \end{subfigure} \caption{The performance comparison on the simulated K80 cluster connected with 10GbE with the ring-based all-reduce algorithm. Baseline of the speedup of SGD is on a single K80 card.} \label{fig:simspeedupk80} \end{figure} Due to hardware limitations, we do not have access to a very large GPU cluster for larger-scale experiments, so we conduct simulations based on the measured single-GPU performance and the network performance model. First, we measure the layer-wise backward propagation time (i.e., the computation time $t_b^{(l)}$, $l=1,2,...,L$) of GoogleNet and ResNet-50 on a single K80 GPU. Second, to estimate the parameters $a$ and $b$ in the communication model of Eq.~(\ref{equ:tcomm}), we use the fitted parameters shown in Fig.~\ref{fig:commoverhead} for the K80 GPU cluster connected with 10GbE.
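The fitted values of $a$ and $b$ reported in Fig.~\ref{fig:commoverhead} come from a straight-line fit of the measured all-reduce times; a minimal sketch of such a fit (assuming NumPy and arrays of measured message sizes and times) is:
\begin{verbatim}
import numpy as np

def fit_comm_model(sizes, times):
    # Least-squares fit of t = a + b * p to measured all-reduce points,
    # where sizes are message sizes in bytes and times are in seconds.
    A = np.stack([np.ones(len(sizes)), np.asarray(sizes, float)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(times, float), rcond=None)
    return a, b
\end{verbatim}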
Based on the measured layer-wise backward propagation time on the real K80 GPU and the communication model of the K80 GPU cluster, we simulate WFBP, SyncEASGD and MG-WFBP, scaling from 4 workers to 2048 workers with the ring-based and double-binary-tree based all-reduce algorithms, both of which are implemented in NCCL. \textbf{Overall performance with ring-based all-reduce}. We simulate training GoogleNet and ResNet-50, scaling from 4 workers to 2048 workers. The scaling performance is shown in Fig. \ref{fig:simspeedupk80}, in which MG-WFBP has $n=[10, 6, 6, 5, 3, 2, 1,..., 1]$ and $n=[33,19,10,7,5,3,1,...,1]$ merged-gradient layers for GoogleNet and ResNet-50 respectively on the $p=[2^2, 2^3, ..., 2^{11}]$-worker clusters. On the 64-worker cluster, MG-WFBP outperforms WFBP and SyncEASGD by $1.78$x and $1.35$x, respectively, on GoogleNet. On ResNet-50, MG-WFBP achieves almost linear speedup, while WFBP and SyncEASGD only reach around $55\%$ scaling efficiency. It is important to notice that the curves of WFBP and SyncEASGD have a crossing point in Fig. \ref{fig:simspeedupk80}. This is because the two algorithms are sub-optimal in utilizing the network bandwidth: when the startup time of network communication is not very large (e.g., 4-16 workers in the K80 cluster), WFBP has the advantage of hiding more communication time than SyncEASGD; but when scaling to medium-size clusters (e.g., 64 workers), the startup time of communication becomes so large that it can hardly be hidden, and single-layer communication becomes the better approach. As we can see, SyncEASGD achieves better scaling efficiency than WFBP on the 64-worker cluster for both tested CNNs. In such scenarios, MG-WFBP not only overlaps the communication with computation, but also finds the optimal communication message size, so it achieves better scaling efficiency than both SyncEASGD and WFBP. Similarly, on training ResNet-50, MG-WFBP achieves about $1.75$x and $1.45$x speedups compared to WFBP and SyncEASGD respectively on the simulated 64-worker cluster. When scaling to large clusters (e.g., 256 workers or more), our MG-WFBP converges to SyncEASGD, since the startup time of each layer becomes too large to be hidden, which means that single-layer communication becomes optimal. In summary, in the simulated experiments, our proposed MG-WFBP algorithm always achieves the best speedup. However, the ring-based all-reduce algorithm has a startup time that is linear in the number of workers, which makes MG-WFBP degenerate to single-layer communication when scaling to large clusters. \textbf{Simulation with double binary trees}. The startup term in ring-based all-reduce is linear in the number of workers; hence it does not perform well in very large clusters. In recent NCCL releases (from version 2.4), the double-binary-tree all-reduce algorithm~\cite{sanders2009two} becomes an alternative, as it has a logarithmic startup overhead. We replace $a$ and $b$ with those of the double-binary-tree algorithm as shown in Table~\ref{table:allreduce} and compare SyncEASGD, WFBP, and MG-WFBP in simulations with 128 to 2048 workers. The results are shown in Fig.~\ref{fig:simspeedupk80-tree}. It can be seen that WFBP and MG-WFBP always outperform SyncEASGD, as the startup time of the double-binary-tree algorithm is relatively small. With the gradient merge solution, MG-WFBP achieves better performance than WFBP by eliminating some layers' startup times.
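The contrast between the two regimes can be read off the cost models directly. The following sketch compares the startup terms (the ring-based constants follow the standard $2(N-1)\alpha$ analysis mentioned earlier; the double-binary-tree constants are indicative only, as the exact values we use are those of Table~\ref{table:allreduce}):
\begin{verbatim}
import math

def allreduce_time(p, N, alpha, beta, algo="ring"):
    # t = a + b * p with algorithm-dependent a (startup) and b (per byte).
    if algo == "ring":
        a = 2 * (N - 1) * alpha                  # startup linear in N
        b = 2 * (N - 1) / N * beta               # near bandwidth-optimal
    else:                                        # double binary tree
        a = 2 * math.ceil(math.log2(N)) * alpha  # logarithmic startup
        b = 2 * beta
    return a + b * p
\end{verbatim}
With a logarithmic startup term, layer-wise pipelining remains profitable even at 2048 workers, which is why WFBP and MG-WFBP keep their advantage over SyncEASGD in Fig.~\ref{fig:simspeedupk80-tree}.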
\begin{figure}[!ht] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupgooglenetk80-2048-tree.pdf} \caption{GoogleNet} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{speedupresnetk80-2048-tree.pdf} \caption{ResNet-50} \end{subfigure} \caption{The performance comparison on the simulated K80 cluster connected with 10GbE using the double-binary-tree all-reduce algorithm.} \label{fig:simspeedupk80-tree} \end{figure} \section{Related Work}\label{s:relatedwork} The wait-free backward propagation (WFBP) algorithm has recently been proposed to reduce the communication impact by overlapping communication with computation \cite{awan2017s}\cite{zhang2017poseidon}. In WFBP, the backward computation operations can be started without waiting for the completion of the previous round of data communication. If the communication cost of layer $l+1$ is smaller than the gradient computation cost of layer $l$, then the communication cost can be completely hidden (except for the first layer), and as a result the scaling efficiency can be close to linear \cite{awan2017s}\cite{zhang2017poseidon}. In practice, however, many DNN models are trained on high-throughput GPUs, which results in very short computation times for each backward layer, while the gradient aggregation must complete before the next iteration starts, especially on low-bandwidth networks (e.g., 10 Gbps Ethernet). Current distributed training systems \cite{you2017scaling}\cite{hoefler2010toward}\cite{jia2018highly} exploit tensor fusion, which merges small gradients before communicating across workers, to reduce the communication overhead. The parameter server (PS) method \cite{li2014communication} was proposed for parallelism between computation and communication, but it easily suffers from communication congestion since the PS needs to collect the gradients from all the workers. In the centralized framework, Pumma et al. \cite{pumma2017parallel}\cite{pumma2019scalable} provided a detailed analysis of the data I/O bottleneck and its optimization for large-scale training. Sufficient factor broadcasting (SFB) \cite{zhang2017poseidon} uses the matrix factorization technique to reduce the volume of data that needs to be communicated for fully connected layers. Although SFB uses P2P communication to eliminate the bandwidth pressure on the PS, the number of sufficient factors grows with both the number of data samples and the number of workers. Zhang et al. \cite{zhang2017poseidon} proposed the Poseidon system with hybrid communication of PS and SFB combined with the WFBP algorithm, achieving a 15.5x speedup on 16 single-GPU (TITAN X Pascal) machines. Unfortunately, due to the drawbacks of PS, SFB and the communication scheme, Poseidon can also be far from linear scaling with a large number of workers. In the HPC community, the MPI data communication collectives have been redesigned for distributed training to improve the communication performance across multiple machines \cite{awan2017s}. Many MPI-like implementations, such as OpenMPI, NCCL, Gloo\footnote{https://github.com/facebookincubator/gloo} and MVAPICH2-GDR\footnote{https://mvapich.cse.ohio-state.edu/}, support efficient CUDA-aware communication between GPUs via the network, and many state-of-the-art DL frameworks (e.g., TensorFlow, PyTorch, Caffe2 and CNTK\footnote{\url{https://docs.microsoft.com/en-us/cognitive-toolkit/}}) integrate NCCL or Gloo in their distributed training modules.
Even though these libraries provide very efficient communication collectives, the data communication can still become the bottleneck when the communication-to-computation ratio is high, in which case S-SGD does not scale well. \section{Discussion}\label{s:discission} Our proposed MG-WFBP is an efficient solution to alleviate the impact of the startup overhead of network communications in distributed DL, but it still has the following limitations: 1) it assumes synchronized SGD with data parallelism, and 2) it requires extra GPU memory (of the same size as the model parameters) to buffer the gradients during training. The MG-WFBP algorithm mainly considers the scheduling in the backward pass of S-SGD. It would be possible to extend MG-WFBP to a more general scheduling framework. For example, in S-SGD, gradient compression~\cite{lin2018deep,shi2019adistributed} is a promising approach to improving the scalability of distributed DL~\cite{tang2020communication}. To integrate gradient compression with MG-WFBP, one should take into account the extra computational overhead of gradient compression (e.g., top-$k$ sparsification~\cite{lin2018deep}) when generating an optimal solution~\cite{shi2020communication}. Furthermore, it is also possible to pipeline the communications with the feed-forward computations so that some communication overheads can be hidden during the feed-forward pass~\cite{bao2020preemptive}. Considering both the feed-forward and backward passes to achieve an optimal gradient merge solution would be more challenging and more useful; we leave this as future work. \section{Conclusion}\label{s:conclusion} In this work, we first showed that the existing state-of-the-art communication strategies, namely wait-free backward propagation (WFBP) and single-layer communication (SyncEASGD), are sub-optimal in synchronized distributed deep learning training when the communication-to-computation ratio is high. We then formulated the pipelining of communication and computation as an optimization problem and developed an efficient algorithm that yields the optimal solution. On this basis, we proposed the merged-gradient wait-free backward propagation (MG-WFBP) strategy, which optimally merges gradients. We implemented MG-WFBP atop the popular deep learning framework PyTorch, and our implementation is publicly available. Through extensive experiments on three 16-GPU clusters, including Nvidia Tesla K80 GPUs with a 10Gbps Ethernet connection and Nvidia Tesla V100 GPUs with both 10Gbps Ethernet and 56Gbps InfiniBand, we verified that MG-WFBP achieves much better scalability than WFBP and SyncEASGD on various popular convolutional neural networks. Simulations were also conducted to further explore the advantage of MG-WFBP on large-scale clusters. \section*{Acknowledgments} The research was supported in part by Hong Kong RGC GRF grants under the contracts HKBU 12200418, HKUST 16206417 and 16207818, as well as an RGC CRF grant under the contract C7036-15G. We would also like to thank NVIDIA for providing the GPU clusters for the experiments. \bibliographystyle{IEEEtran} \Urlmuskip=0mu plus 1mu
\section{\texorpdfstring{$G$}{G}-invariance and \texorpdfstring{$G$}{G}-admissibility of finite type} \label{section:invariance and admissibility} In this section, we provide a proof of Theorem~\ref{theorem:G-invariance and G-admissibility}. \begin{lemma}\label{lemma:no weird cycles in An} Let $\clusterfont{Q}$ be a quiver of type $\dynkinfont{A}_{2n-1}$. Suppose that $\clusterfont{Q}$ is invariant under an action of $G = \Z/2\Z$ defined by \begin{align*} \tau(i)&=2n-i \end{align*} for all $i\in[2n-1]$. Here, we denote by $\tau \in \Z/2\Z$ the generator of $G$. Then there is no oriented cycle of the form \[ j\to i\to\tau(j)\to\tau(i)\to j \] for any $i,j\neq n$. \end{lemma} \begin{proof} It is well known that a quiver $\clusterfont{Q}$ of type $\dynkinfont{A}$ corresponds to a triangulation of a polygon, whose diagonals and triangles define the mutable vertices and arrows, respectively. Therefore, any minimal cycle in $\clusterfont{Q}$, if it exists, is of length $3$, which is also proved in~\cite[\S2]{BuanVatne08}. Hence if an oriented cycle $j\to i\to\tau(j)\to\tau(i)\to j$ of length $4$ exists, then there must be an edge $i-\tau(i)$ or $j-\tau(j)$ in $\clusterfont{Q}$, so that $b_{i,\tau(i)}\neq0$ or $b_{j,\tau(j)}\neq0$ for $\clusterfont{B}=(b_{k,\ell})=\clusterfont{B}(\clusterfont{Q})$. This is impossible because $\clusterfont{Q}$ is $\Z/2\Z$-invariant and so \[ b_{i,\tau(i)} = b_{\tau(i),\tau(\tau(i))} = b_{\tau(i), i} = -b_{i,\tau(i)}\quad\Longrightarrow\quad b_{i,\tau(i)}=0, \] and similarly $b_{j,\tau(j)}=0$. Therefore we are done. \end{proof} \begin{proposition}\label{proposition:admissibility for An} Let $\clusterfont{Q}$ be a quiver of type $\dynkinfont{A}_{2n-1}$, which is $\Z/2\Z$-invariant as above. Then $\clusterfont{Q}$ is $\Z/2\Z$-admissible. \end{proposition} \begin{proof} We check the conditions~\eqref{mutable},~\eqref{bii'=0}, and~\eqref{nonnegativity_of_bijbi'j} for admissibility according to Definition~\ref{definition:admissible quiver}. Let $\clusterfont{B} = (b_{i,j}) = \clusterfont{B}(\clusterfont{Q})$. \noindent \eqref{mutable} Since all vertices in $\clusterfont{Q}$ are mutable, the condition~\eqref{mutable} is obviously satisfied. \noindent \eqref{bii'=0} On the other hand, for each $i\in[2n-1]$, we have \[ b_{i,\tau(i)}=(\gamma_i, \gamma_{\tau(i)})= (\gamma_{\tau(i)}, \gamma_{\tau(\tau(i))}) = (\gamma_{\tau(i)}, \gamma_i) = -b_{i,\tau(i)}, \] which implies \[ b_{i,\tau(i)}=0. \] \noindent \eqref{nonnegativity_of_bijbi'j} Finally, we need to prove that for each $i, j$, \[ b_{i,j}b_{\tau(i),j}\ge 0. \] If $j=n$, then since $\tau(n)=n$, we have \[ b_{i,n}b_{\tau(i),n}=b_{i,n}b_{\tau(i),\tau(n)} = b_{i,n}b_{i,n}\ge 0. \] Similarly, if $i=n$, then \[ b_{n, j}b_{\tau(n),j} = b_{n,j}b_{n,j}\ge 0. \] Suppose that for some $i, j\neq n$, \[ b_{i,j}b_{\tau(i),j}<0. \] By exchanging the roles of $i$ and $\tau(i)$ if necessary, we may assume that $b_{i,j}<0<b_{\tau(i),j}$. Then we also have \[ b_{\tau(i),\tau(j)}<0<b_{i,\tau(j)}, \] which implies that there is an oriented cycle in $\clusterfont{Q}$ \[ j\to i \to \tau(j) \to \tau(i) \to j. \] However, this contradicts Lemma~\ref{lemma:no weird cycles in An}, and therefore $\clusterfont{Q}$ satisfies all the conditions in Definition~\ref{definition:admissible quiver}.
\end{proof} \begin{proposition}\label{proposition:admissibility for D4} Let $\clusterfont{Q}$ be a quiver on $[4]$ of type $\dynkinfont{D}_{4}$, which is invariant under the $\Z/3\Z$-action given by \begin{align*} 1&\stackrel{\tau}{\longleftrightarrow} 1,& 2&\stackrel{\tau}{\longrightarrow} 3\stackrel{\tau}{\longrightarrow} 4\stackrel{\tau}{\longrightarrow} 2. \end{align*} Here, we denote by $\tau$ the generator of $\Z/3\Z$. Then the quiver $\clusterfont{Q}$ is $\Z/3\Z$-admissible. \end{proposition} \begin{proof} \noindent \eqref{mutable} This is obvious as before. \noindent \eqref{bii'=0} Let $\clusterfont{B}=(b_{i,j})=\clusterfont{B}(\clusterfont{Q})$. Suppose that $b_{2,3}\neq0$. Since the quiver is $\Z/3\Z$-invariant, \[ b_{2,3}=b_{3,4}=b_{4,2}\neq0 \] and so $\clusterfont{Q}$ has a directed cycle, either \[ 2\to3\to4\to 2\quad\text{or}\quad 2\to4\to3\to 2. \] Then, according to the value of $b_{1,2}$, the underlying graph of the quiver $\clusterfont{Q}$ is either the complete graph $K_4$ or a disconnected graph. However, both are impossible, as shown in~\cite[Figure~1]{BuanTorkildsen09}. Therefore, we obtain \[ b_{2,3}=b_{3,4}=b_{4,2}=0. \] \noindent \eqref{nonnegativity_of_bijbi'j} The only entries we need to check are the $b_{1,j}$'s, which are all equal by the $\Z/3\Z$-invariance of $\clusterfont{Q}$. Therefore \[ b_{1,j}b_{1,j'}\ge 0. \] This completes the proof. \end{proof} \begin{lemma}\label{lemma:no weird cycles in E6} Let $\clusterfont{Q}$ be a quiver on $[6]$ of type $\dynkinfont{E}_6$, which is invariant under the $\Z/2\Z$-action defined by \begin{align*} i&\stackrel{\eta}{\longleftrightarrow} i, i\le 2,& 3&\stackrel{\eta}{\longleftrightarrow} 5,& 4&\stackrel{\eta}{\longleftrightarrow} 6. \end{align*} Here, we denote by $\eta$ the generator of $\Z/2\Z$. Then there is no oriented cycle of either of the forms \begin{equation}\label{eq_weird_cycles_in_E6} 3\to4\to5\to6\to3\quad\text{or}\quad 3\to6\to5\to4\to3. \end{equation} \end{lemma} \begin{proof} We first recall from~\cite[Theorem~1.8]{FZ2_2003} that \begin{equation}\label{equation_bij_le_1} |b_{i,j}|\le 1 \qquad \text{ for all }i,j\in[6]; \end{equation} otherwise, $\clusterfont{Q}$ would produce a cluster pattern of infinite type. Hence, $\clusterfont{Q}$ is a simple directed graph. Suppose that $\clusterfont{Q}$ contains an oriented cycle as in~\eqref{eq_weird_cycles_in_E6}. By relabeling if necessary, we may assume that $\clusterfont{Q}$ contains the oriented cycle $3\to4\to5\to6\to 3$. Then, since $\clusterfont{Q}$ is connected, at least one of the vertices $1$ and $2$ is joined to one of the vertices $3,4,5$ and $6$ by an edge. Without loss of generality, we may assume that such a vertex is $1$. Let $\clusterfont{Q}'$ be the quiver on $\{1,3,4,5,6\}$ obtained by forgetting the vertex $2$ of $\clusterfont{Q}$.
Then, by the invariance of $\clusterfont{Q}$ under $\Z/2\Z$-action, \begin{align*}\label{equation:squares} \clusterfont{Q}' = \begin{tikzpicture}[baseline=-.5ex] \begin{scope} \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A5); \draw[->] (A1) -- (A7); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \end{tikzpicture}, \quad \begin{tikzpicture}[baseline=-.5ex] \begin{scope}[xshift=3cm] \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A4); \draw[->] (A1) -- (A6); \draw[->] (A5) -- (A1); \draw[->] (A7) -- (A1); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \end{tikzpicture},\quad\text{ or }\quad \begin{tikzpicture}[baseline=-.5ex] \begin{scope}[xshift=6cm] \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A4); \draw[->] (A1) -- (A5); \draw[->] (A1) -- (A6); \draw[->] (A1) -- (A7); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \end{tikzpicture} \subset \clusterfont{Q} \end{align*} up to relabeling and the mutation $\mu_1$. 
Then, via further mutations, each of these quivers can be transformed to a quiver producing a cluster pattern of infinite type because of the condition~\eqref{equation_bij_le_1} as follows: \begin{align*} &\begin{tikzpicture}[baseline=-.5ex] \begin{scope} \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A5); \draw[->] (A1) -- (A7); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \begin{scope}[xshift=5cm] \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A4); \draw[->] (A1) -- (A6); \draw[->] (A5) -- (A1); \draw[->] (A7) -- (A1); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \begin{scope}[xshift=10cm] \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A4) node[above] {$4$} -- (A1) node[above right] {$1$}; \draw[->] (A1) -- (A5) node[left] {$5$}; \draw[->] (A6) node[below] {$6$} -- (A1); \draw[->] (A1) -- (A7) node[right] {$3$}; \draw[-Implies,double distance=2pt] (A5) -- (A6); \draw[-Implies,double distance=2pt] (A7) -- (A4); \end{scope} \draw[->] (2,0) -- (3,0) node[midway, above] {$\mu_5\mu_3\mu_4\mu_6$}; \draw[->] (7,0) -- (8,0) node[midway, above] {$\mu_1$}; \end{tikzpicture}\\ &\begin{tikzpicture}[baseline=-.5ex] \begin{scope} \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (-1,0) node (A5) {} circle (2pt) (0,-1) node (A6) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A1) node[above right] {$1$} -- (A4); \draw[->] (A1) -- (A5); \draw[->] (A1) -- (A6); \draw[->] (A1) -- (A7); \draw[->] (A4) -- (A5) node[left] {$5$}; \draw[->] (A5) -- (A6) node[below] {$6$}; \draw[->] (A6) -- (A7) node[right] {$3$}; \draw[->] (A7) -- (A4) node[above] {$4$}; \end{scope} \begin{scope}[xshift=10cm] \draw[fill] (0,0) node (A1) {} circle (2pt) (0,1) node (A4) {} circle (2pt) (0,-1) node (A6) {} circle (2pt); \draw[fill] (-1,0) node (A5) {} circle (2pt) (1,0) node (A7) {} circle (2pt); \draw[->] (A5) -- (A1); \draw[->] (A7) -- (A1); \draw[->] (A5) -- (A4); \draw[->] (A6) -- (A5) node[left] {$5$}; \draw[->] (A7) -- (A6); \draw[->] (A4) -- (A7) node[right] {$3$}; \draw[-Implies,double distance=2pt] (A1) -- (A4) node[above] {$4$}; \draw[-Implies,double distance=2pt] (A1) node[above right] {$1$} -- (A6) node[below] {$6$}; \end{scope} \draw[->] (2,0) -- (8,0) node[midway, above] {$\mu_5\mu_3$}; \end{tikzpicture} \end{align*} Since any subquiver of a quiver mutation equivalent to $\clusterfont{Q}$ is of finite type, we get a contradiction which completes the proof. \end{proof} \begin{remark}\label{rmk_proof_of_E6} Since there are only finitely many quivers of type $\dynkinfont{E}_6$, the above lemma can be verified by a computer but we gave here a combinatorial proof. 
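For instance, one could enumerate the (finite) mutation class of a labeled $\dynkinfont{E}_6$ exchange matrix and test the two forbidden cycles directly, along the following lines. This is a minimal Python sketch; it assumes the labeling of the $\dynkinfont{E}_6$ diagram with edges $1-2$, $2-3$, $3-4$, $2-5$, $5-6$ (any orientation of the tree lies in the same mutation class) and the convention that $b_{i,j}>0$ encodes an arrow $i\to j$.
\begin{verbatim}
def mutate(B, k):
    # Matrix mutation of a skew-symmetric exchange matrix at vertex k.
    n = len(B)
    return [[-B[i][j] if k in (i, j) else
             B[i][j] + max(B[i][k], 0) * max(B[k][j], 0)
                     - max(-B[i][k], 0) * max(-B[k][j], 0)
             for j in range(n)] for i in range(n)]

def mutation_class(B0):
    # Depth-first search; terminates since E6 is of finite cluster type.
    seen = {tuple(map(tuple, B0))}
    stack = [B0]
    while stack:
        B = stack.pop()
        for k in range(len(B0)):
            Bp = mutate(B, k)
            key = tuple(map(tuple, Bp))
            if key not in seen:
                seen.add(key)
                stack.append(Bp)
    return seen

# E6 with edges 1-2, 2-3, 3-4, 2-5, 5-6 (vertices 0-indexed below).
B0 = [[0] * 6 for _ in range(6)]
for i, j in [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]:
    B0[i][j], B0[j][i] = 1, -1
eta = [0, 1, 4, 5, 2, 3]             # the action: 3 <-> 5, 4 <-> 6
for B in mutation_class(B0):
    if all(B[eta[i]][eta[j]] == B[i][j]
           for i in range(6) for j in range(6)):
        # cycles 3->4->5->6->3 and 3->6->5->4->3 (0-indexed 2,3,4,5)
        assert min(B[2][3], B[3][4], B[4][5], B[5][2]) <= 0
        assert min(B[2][5], B[5][4], B[4][3], B[3][2]) <= 0
\end{verbatim}
The two assertions express exactly the statement of Lemma~\ref{lemma:no weird cycles in E6} for every $\Z/2\Z$-invariant quiver in the mutation class.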
\end{remark} \begin{proposition}\label{proposition:admissibility for Dn and E6} Let $\clusterfont{Q}$ be a quiver of type $\dynkinfont{Z}=\dynkinfont{D}_{n+1}$ or $\dynkinfont{E}_6$, which is invariant under the $\Z/2\Z$-action defined by \begin{align*} i&\stackrel{\eta}{\longleftrightarrow} i, i < n,& n&\stackrel{\eta}{\longleftrightarrow} n+1 \end{align*} for $\dynkinfont{Z}=\dynkinfont{D}_{n+1}$, or \begin{align*} i&\stackrel{\eta}{\longleftrightarrow} i, i\le 2,& 3&\stackrel{\eta}{\longleftrightarrow} 5,& 4&\stackrel{\eta}{\longleftrightarrow} 6, \end{align*} for $\dynkinfont{Z}=\dynkinfont{E}_6$. Here, $\eta$ is the generator of $\Z/2\Z$. Then the quiver $\clusterfont{Q}$ is $\Z/2\Z$-admissible. \end{proposition} \begin{proof} \noindent \eqref{mutable} This is obvious as before. \noindent \eqref{bii'=0} Let $\clusterfont{B}=(b_{i,j})=\clusterfont{B}(\clusterfont{Q})$. Then, by the $\Z/2\Z$-invariance of $\clusterfont{Q}$, \[ b_{i,\eta(i)}=b_{\eta(i), \eta(\eta(i))} = b_{\eta(i), i} = -b_{i, \eta(i)}\quad \Longrightarrow\quad b_{i,\eta(i)}=0. \] \noindent \eqref{nonnegativity_of_bijbi'j} If $\dynkinfont{Z}=\dynkinfont{D}_{n+1}$, then we only need to show \[ b_{i,n}b_{i,n+1}\ge 0 \] for $i<n$. This is obvious since \[ b_{i,n+1} = b_{\eta(i), \eta(n+1)} = b_{i,n}. \] If $\dynkinfont{Z}=\dynkinfont{E}_6$, then all we need to show is that the inequalities \begin{align*} b_{i,j}b_{i,j+2}&\ge 0,& b_{3,4}b_{3,6}&\ge 0 \end{align*} hold for $i=1,2$ and $j=3,4$. The first inequality is obvious since \begin{align*} b_{i,j+2}&=b_{\eta(i),\eta(j+2)} = b_{i,j}. \end{align*} Suppose that $b_{3,4}b_{3,6}<0$. Then, since $b_{3,4}=b_{5,6}$ and $b_{3,6}=b_{5,4}$, the quiver $\clusterfont{Q}$ has an oriented cycle, either \[ 3\to 4\to 5\to 6 \to 3\quad\text{or}\quad 3\to 6\to 5\to 4 \to 3, \] which yields a contradiction by Lemma~\ref{lemma:no weird cycles in E6}. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:G-invariance and G-admissibility}] This follows from Propositions~\ref{proposition:admissibility for An}, \ref{proposition:admissibility for D4}, and \ref{proposition:admissibility for Dn and E6}.
\end{proof} \section{Supplementary pictorial proofs}\label{sec:supplementary pictorial proofs} \subsection{Justifications of moves \Move{DI} and \Move{DII} for degenerate $N$-graphs}\label{appendix:DI and DII} \[ \begin{tikzcd}[row sep=-3pc, column sep=1pc] \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[red, thick] (160:2)--(-1,0) -- (2,0) (200:2)--(-1,0) (0,2)--(0,-2); \draw[thick,red, fill=red] (-1,0) circle (2pt); \draw[Dble={blue and green},line width=2] (0,0) -- (-45:2); \draw[Dble={green and blue},line width=2] (0,0) -- (45:2); \draw[Dble={blue and green},line width=2] (0,0) -- (135:2); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:2); \end{tikzpicture} \ar[rr,"{\rm perturb.}"]\arrow[dd,"\Move{DI}"'] & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (145:2) -- (-1,0) -- (1,0) -- (35:2) (-145:2) -- (-1,0) (1,0) -- (-35:2); \draw[red, thick] (165:2) -- (-1.5,0) -- (-1,0) -- (0,1) -- (0,2) (0,1)-- (1,0) -- (2,0) (-165:2) -- (-1.5,0) (-1,0) -- (0,-1) -- (0,-2) (0,-1) -- (1,0); \draw[green, thick](125:2) -- (0,1) -- (55:2) (-125:2) -- (0,-1) -- (-55:2) (0,-1) -- (0,1) ; \draw[thick,red, fill=red] (-1.5,0) circle (2pt); \draw[thick,fill=white] (-1,0) circle (2pt) (1,0) circle (2pt) (0,-1) circle (2pt) (0,1) circle (2pt); \end{tikzpicture} \ar[r, leftrightarrow,"\Move{II}"] & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (145:2) -- (-1,0.5) -- (-0.5,0) -- (1,0) -- (35:2) (-145:2) -- (-1,-0.5) -- (-0.5,0) (1,0) -- (-35:2) (-1,0.5) to[out=-135,in=135] (-1,-0.5); \draw[red, thick] (165:2) -- (-1,0.5) -- (0,1) -- (0,2) (0,1)-- (1,0) -- (2,0) (-165:2) -- (-1,-0.5) -- (0,-1) -- (0,-2) (0,-1) -- (1,0) (-1,0.5) -- (-1,-0.5); \draw[green, thick](125:2) -- (0,1) -- (55:2) (-125:2) -- (0,-1) -- (-55:2) (0,-1) -- (0,1) ; \draw[thick,blue, fill=blue] (-0.5,0) circle (2pt); \draw[thick,fill=white] (-1,0.5) circle (2pt) (-1,-0.5) circle (2pt) (1,0) circle (2pt) (0,-1) circle (2pt) (0,1) circle (2pt); \end{tikzpicture} \ar[rd, leftrightarrow,sloped,"\Move{VI}"]\\ & & & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (145:2) -- (-1,0.5) -- (0.5,0) -- (1,0) -- (35:2) (-145:2) -- (-1,-0.5) -- (0.5,0) (1,0) -- (-35:2) (-1,0.5) to[out=-135,in=135] (-1,-0.5); \draw[red, thick] (165:2) -- (-1,0.5) -- (0,1) -- (0,2) (0,1)-- (1,0) -- (2,0) (-165:2) -- (-1,-0.5) -- (0,-1) -- (0,-2) (0,-1) -- (1,0) (-1,0.5) -- (-1,-0.5); \draw[green, thick](125:2) -- (0,1) -- (55:2) (-125:2) -- (0,-1) -- (-55:2) (0,-1) -- (0,1) ; \draw[thick,blue, fill=blue] (0.5,0) circle (2pt); \draw[thick,fill=white] (-1,0.5) circle (2pt) (-1,-0.5) circle (2pt) (1,0) circle (2pt) (0,-1) circle (2pt) (0,1) circle (2pt); \end{tikzpicture} \ar[dl, leftrightarrow,sloped,"\Move{II}"] \\ \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle [radius=1]; \draw[rounded corners,thick, red](0,1)--(0,-1) (150:1)--++(5/4,0)--(3/4,0) (210:1)--++(5/4,0)--(3/4,0) (3/4,0)--(1,0); \draw[Dble={green and blue},line width=2] (120:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (60:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (-120:1) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (-60:1) -- (0,-0.5); \draw[blue,line width=2] (-0.05,0.5) to[out=-135,in=135] (-0.05,-0.5); \draw[green,line width=2] (0,0.5) to[out=-135,in=135] (0,-0.5); \draw[blue,line width=2]
(0.05,0.5) to[out=-45,in=45] (0.05,-0.5); \draw[green,line width=2] (0,0.5) to[out=-45,in=45] (0,-0.5); \draw[thick,red,fill=red] (3/4,0) circle (1pt); \end{tikzpicture} \ar[rr, "{\rm perturb}"] & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (145:2) -- (-1,1) -- (1,1) -- (35:2) (-145:2) -- (-1,-1) -- (1,-1) -- (-35:2) (-1,1) -- (-1,-1) (1,1) -- (1,-1); \draw[red, thick] (165:2) -- (-1,1) -- (0,1.5) -- (0,2) (-165:2) -- (-1,-1) -- (0,-1.5) -- (0,-2) (0,1.5) -- (1,1) -- (1.5,0) -- (2,0) (0,-1.5) -- (1,-1) -- (1.5,0) (-1,1) -- (0,0.5) -- (1,1) (-1,-1) -- (0,-0.5) -- (1,-1) (0,0.5) -- (0,-0.5); \draw[green, thick](125:2) -- (0,1.5) -- (55:2) (-125:2) -- (0,-1.5) -- (-55:2) (0,-1.5) -- (0,-0.5) (0,0.5) -- (0,1.5) (0,0.5) to[out=-135,in=135] (0,-0.5) (0,0.5) to[out=-45,in=45] (0,-0.5); \draw[thick,red, fill=red] (1.5,0) circle (2pt); \draw[thick,fill=white] (-1,1) circle (2pt) (-1,-1) circle (2pt) (1,1) circle (2pt) (1,-1) circle (2pt) (0,1.5) circle (2pt) (0,-1.5) circle (2pt) (0,0.5) circle (2pt) (0,-0.5) circle (2pt); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (145:2) -- (-1,1) -- (1,1) -- (35:2) (-145:2) -- (-1,-1) -- (1,-1) -- (-35:2) (-1,1) -- (-1,-1) (1,1) -- (1,-1); \draw[red, thick] (165:2) -- (-1,1) -- (0,1.5) -- (0,2) (-165:2) -- (-1,-1) -- (0,-1.5) -- (0,-2) (0,1.5) -- (1,1) -- (1.5,0) -- (2,0) (0,-1.5) -- (1,-1) -- (1.5,0) (-1,1) to[out=-45, in=45] (-1,-1) (1,1) to[out=-135, in =135] (1,-1); \draw[green, thick](125:2) -- (0,1.5) -- (55:2) (-125:2) -- (0,-1.5) -- (-55:2) (0,-1.5) -- (0,1.5) ; \draw[thick,red, fill=red] (1.5,0) circle (2pt); \draw[thick,fill=white] (-1,1) circle (2pt) (-1,-1) circle (2pt) (1,1) circle (2pt) (1,-1) circle (2pt) (0,1.5) circle (2pt) (0,-1.5) circle (2pt); \end{tikzpicture} \ar[from=l,leftrightarrow, "\Move{I}"] \end{tikzcd} \] \[ \begin{tikzcd}[row sep=-3pc, column sep=1pc] \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[thick, red](135:1)--(-45:1) (45:1)--(-135:1); \draw[Dble={blue and green},line width=2] (0,0) -- (-90:1); \draw[Dble={blue and green},line width=2] (0,0) -- (90:1); \draw[Dble={green and blue},line width=2] (0,0) -- (1,0); \draw[Dble={green and blue},line width=2] (0,0) -- (-0.5,0); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (155:1.2); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (205:1.2); \end{tikzpicture} \ar[rr, "{\rm perturb.}"]\arrow[dd,"\Move{DII}"'] & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (155:2) -- (-1,0.5) -- (-0.5, 0.5) -- (100:2) (-175:2) -- (-1,0.5) (-0.5, 0.5) -- (0.5,-0.5) -- (-10:2) (0.5,-0.5) -- (-80:2); \draw[red, thick] (135:2) -- (-0.5, 0.5) -- (0.5,0.5) -- (45:2) (-135:2) -- (-0.5, -0.5) -- (0.5,-0.5) -- (-45:2) (-0.5,0.5) -- (-0.5,-0.5) (0.5,0.5) -- (0.5,-0.5); \draw[green, thick] (175:2) -- (-1,-0.5) -- (-155:2) (-1,-0.5) -- (-0.5,-0.5) -- (0.5,0.5) -- (80:2) (-0.5,-0.5) -- (-100:2) (0.5,0.5) -- (10:2); \draw[thick,red, fill=red]; \draw[thick,blue, fill=blue] (-1,0.5) circle (2pt); \draw[thick,green, fill=green] (-1,-0.5) circle (2pt); \draw[thick,fill=white] (1/2,1/2) circle (2pt) (-1/2,1/2)circle (2pt) (1/2,-1/2)circle (2pt) (-1/2,-1/2) circle (2pt); \end{tikzpicture} \ar[r,leftrightarrow, "\Move{II}^2"] & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] 
(155:2) -- (-1,0.5) -- (-0.5, 1) -- (100:2) (-175:2) -- (-1,0.5) (-0.5, 1) -- (0.5,0) -- (1,-1/2) -- (-10:2) (0.5,0) to[out=-135,in=135] (0.5,-1) -- (1,-1/2) (0.5,-1) -- (-80:2); \draw[red, thick] (135:2) -- (-0.5, 1) -- (-0.5,0) -- (-0.5,-1) -- (-135:2) (45:2) -- (0.5,1) -- (0.5,-1) -- (-45:2) (-0.5,1) -- (0.5,1) (-0.5,0) -- (0.5,0) (-0.5,-1) -- (1/2,-1); \draw[green, thick] (175:2) -- (-1/2,0) to[out=-45,in=45] (-1/2,-1) -- (-155:2) (-1/2,-1) -- (-100:2) (80:2) -- (1/2,1) -- (10:2) (1/2,1) -- (-1/2,0); \draw[thick,red, fill=red]; \draw[thick,blue, fill=blue] (-1,0.5) circle (2pt) (1,-0.5) circle (2pt); \draw[thick,green, fill=green] ; \draw[thick,fill=white] (-1/2,1) circle (2pt) (-1/2,0)circle (2pt) (-1/2,-1)circle (2pt) (1/2,1)circle (2pt) (1/2,0) circle (2pt) (1/2,-1) circle (2pt); \end{tikzpicture} \ar[rd,leftrightarrow, sloped, "\Move{IV}"] \\ & & & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (155:2) -- (-1,0) -- (-0.5, 0) -- (1/2,1) -- (100:2) (-175:2) -- (-1,0) (-1/2,0) -- (0.5,-1) -- (-80:2) (1/2,1) -- (1,-1/2) -- (-10:2) (1/2,-1) -- (1,-1/2); \draw[red, thick] (135:2) -- (-0.5, 1) -- (-0.5,0) -- (-0.5,-1) -- (-135:2) (45:2) -- (0.5,1) -- (0.5,-1) -- (-45:2) (-0.5,1) -- (0.5,1) (-0.5,0) -- (0.5,0) (-0.5,-1) -- (1/2,-1); \draw[green, thick] (175:2) -- (-1/2,1) -- (80:2) (-155:2) -- (-1/2,-1) -- (-100:2) (-1/2,1) -- (1/2,0) -- (-1/2,-1) (1/2,0) -- (10:2) ; \draw[thick,red, fill=red]; \draw[thick,blue, fill=blue] (-1,0) circle (2pt) (1,-0.5) circle (2pt); \draw[thick,green, fill=green] ; \draw[thick,fill=white] (-1/2,1) circle (2pt) (-1/2,0)circle (2pt) (-1/2,-1)circle (2pt) (1/2,1)circle (2pt) (1/2,0) circle (2pt) (1/2,-1) circle (2pt); \end{tikzpicture} \ar[dl,leftrightarrow, sloped, "\Move{II}^2"] \\ \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[thick, red](120:1)--(0,0.5) to[out=-45,in=45] (0,-0.5) --(-120:1) (60:1)--(0,0.5) to[out=-135,in=135] (0,-0.5) --(-60:1); \draw[Dble={green and blue},line width=2] (0,0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,0.5) -- (0.3,0.5); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,-0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,-0.5) -- (0.3,-0.5); \draw[Dble={green and blue},line width=2] (0,-1) -- (0,-0.5); \draw[blue,line width=2] (0.3,-0.525) to[out=0,in=-135] (0.7,-0.025) (0.3,0.475) to[out=0,in=135] (0.7,-0.025) -- (1,-0.025); \draw[green,line width=2] (0.3,-0.475) to[out=0,in=-135] (0.7,0.025) (0.3,0.525) to[out=0,in=135] (0.7,0.025) -- (1,0.025); \end{tikzpicture} \ar[rr, "{\rm perturb.}"] & & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (155:2) -- (-1/2,1) -- (100:2) (-175:2) -- (-1/2,-1/2) -- (1/2,-1) -- (-80:2) (-1/2,1/2) -- (1/2, 1/2) -- (-1/2,-1/2) (1/2,1/2) -- (1,-1/2) -- (-10:2) (1/2, -1) -- (1,-1/2) (-1/2, 1) -- (1/2, 1/2) ; \draw[red, thick] (135:2) -- (-0.5,1) -- (-0.5,-1) -- (-135:2) (45:2) -- (0.5,1) -- (0.5,-1) -- (-45:2) (-0.5,1) -- (0.5,1) (-1/2,1/2) -- (0.5,1/2) (-1/2,-1/2) -- (1/2,-1/2) (-0.5,-1) -- (1/2,-1); \draw[green, thick] (175:2) -- (-1/2,1/2) -- (1/2,1) -- (80:2) (-155:2) -- (-1/2,-1) -- (-100:2) (-1/2,1/2) -- (1/2,-1/2) -- (-1/2,-1) (1/2,1) -- (1,1/2) -- (1/2,-1/2) (1,1/2) -- (10:2); \draw[thick,blue, 
fill=blue] (1,-0.5) circle (2pt); \draw[thick,red, fill=red] ; \draw[thick,green, fill=green](1,1/2) circle (2pt) ; \draw[thick,fill=white] (-1/2,1) circle (2pt) (-1/2,1/2)circle (2pt) (-1/2,-1/2)circle (2pt) (-1/2,-1)circle (2pt) (1/2,1)circle (2pt) (1/2,1/2) circle (2pt) (1/2,-1/2) circle (2pt) (1/2,-1) circle (2pt); \end{tikzpicture} \ar[r,leftrightarrow, "\Move{IV}"] & \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw [dashed] (0,0) circle [radius=2]; \draw[blue, thick] (155:2) -- (-1/2,1/2) -- (0.5, 1) -- (100:2) (-175:2) -- (-1/2,-1/2) -- (1/2,-1) -- (-80:2) (-1/2,1/2) to[out=-45,in=45] (-1/2, -1/2) (1/2,1) -- (1,-1/2) -- (-10:2) (1/2, -1) -- (1,-1/2); \draw[red, thick] (135:2) -- (-0.5,1) -- (-0.5,-1) -- (-135:2) (45:2) -- (0.5,1) -- (0.5,-1) -- (-45:2) (-0.5,1) -- (0.5,1) (-1/2,1/2) -- (0.5,1/2) (-1/2,-1/2) -- (1/2,-1/2) (-0.5,-1) -- (1/2,-1); \draw[green, thick] (175:2) -- (-1/2,1) -- (80:2) (-155:2) -- (-1/2,-1) -- (-100:2) (-1/2,1) -- (1/2,1/2) to[out=-135,in=135] (1/2,-1/2) -- (-1/2,-1) (1/2,1/2) -- (1,1/2) -- (10:2) (1/2,-1/2) -- (1,1/2); \draw[thick,red, fill=red] ; \draw[thick,blue, fill=blue] (1,-0.5) circle (2pt); \draw[thick,green, fill=green](1,1/2) circle (2pt) ; \draw[thick,fill=white] (-1/2,1) circle (2pt) (-1/2,1/2)circle (2pt) (-1/2,-1/2)circle (2pt) (-1/2,-1)circle (2pt) (1/2,1)circle (2pt) (1/2,1/2) circle (2pt) (1/2,-1/2) circle (2pt) (1/2,-1) circle (2pt); \end{tikzpicture} \end{tikzcd} \] \subsection{Justifications of Legendrian local mutations in degenerate $N$-graphs}\label{appendix:local mutations} \[ \begin{tikzcd}[row sep=-3pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[red] (-2,0) node[above] {$\gamma$}; \draw[dashed] (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[thick, red] (-3,0) -- (3,0) (0,-3) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={green and blue},line width=2] (135:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \draw[Dble={green and blue},line width=2] (-45:3) -- (0,0); \end{tikzpicture} \arrow[r, "\text{Perturb}"]\arrow[dd,"\mu_\gamma"'] & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[red, thick] (-3,0) -- (-1,0) -- (0,1) -- (1,0) -- (0,-1) -- (-1,0) (1,0) -- (3,0) (0,1) -- (0,3) (0,-1) -- (0,-3); \draw[blue, thick] (-1,0) -- +(135:3) ++(0,0) -- +(-135:3) (1,0) -- +(45:3) ++(0,0) -- +(-45:3) (-1,0) -- (1,0); \draw[green, thick] (0,1) -- +(135:3) ++(0,0) -- +(45:3) (0,-1) -- +(-135:3) ++(0,0) -- +(-45:3) (0,-1) -- (0,1); \draw[thick, fill=white] (-1,0) circle (0.1) (0,1) circle (0.1) (1,0) circle (0.1) (0,-1) circle (0.1); \end{tikzpicture} \arrow[rd, sloped, "\mu_\gamma"]\\ & & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[blue, thick] (1,1.5) to[out=-60,in=60] (1,0) to[out=-60,in=60] (1,-1.5) (-1,-1.5) to[out=120,in=-120] (-1,0) to[out=120,in=-120] (-1,1.5); \draw[red, thick] (0,2) -- (-1,1.5) to[out=-60,in=60] (-1,0) to[out=-60,in=60] (-1,-1.5) -- (0,-2) -- (1,-1.5) to[out=120,in=-120] (1,0) to[out=120,in=-120] (1,1.5) -- cycle; \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \begin{scope}[yshift=1.5cm] \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \end{scope} 
\begin{scope}[yshift=-1.5cm] \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \end{scope} \draw[blue, thick] (-1,1.5) -- +(135:3) (-1,-1.5) -- +(-135:3) (1,1.5) -- +(45:3) (1,-1.5) -- +(-45:3); \draw[green, thick] (0,2) -- +(135:3) ++(0,0) -- +(45:3) (0,-2) -- +(-135:3) ++(0,0) -- +(-45:3) (0,-2) -- (0,2); \draw[thick, fill=white] (-1,0) circle (0.1) (-1,1.5) circle (0.1) (-1,-1.5) circle (0.1) (0,2) circle (0.1) (1,0) circle (0.1) (1,1.5) circle (0.1) (1,-1.5) circle (0.1) (0,-2) circle (0.1); \end{tikzpicture} \arrow[dl, leftrightarrow, sloped, "\Move{I}^2"]\\ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[red] (2,0) node[above] {$\gamma'$}; \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[thick, red] (-3,0) -- (3,0) (-3,1) -- (3,1) (-3,-1) -- (3,-1) (0,-3) -- (0,3); \draw[Dble={green and blue},line width=2] (0,1) -- (45:3); \draw[Dble={green and blue},line width=2] (135:3) -- (0,1); \draw[Dble={green and blue},line width=2] (0,-1) -- (-135:3); \draw[Dble={green and blue},line width=2] (-45:3) -- (0,-1); \draw[blue,line width=2] (-0.1,1) to[out=-180,in=180] (-0.1,0) (0.1,1) to[out=0,in=0] (0.1,0) (-0.1,-1) to[out=-180,in=180] (-0.1,0) (0.1,-1) to[out=0,in=0] (0.1,0); \draw[green,line width=2] (0,0.95) to[out=-180,in=180] (0,0.05) (0,0.95) to[out=0,in=0] (0,0.05) (0,-0.95) to[out=-180,in=180] (0,-0.05) (0,-0.95) to[out=0,in=0] (0,-0.05); \end{tikzpicture} \arrow[r,"\text{Perturb}"'] & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[blue, thick] (1,1.5) to[out=-60,in=60] (1,0) to[out=-60,in=60] (1,-1.5) (-1,-1.5) to[out=120,in=-120] (-1,0) to[out=120,in=-120] (-1,1.5); \draw[red, thick] (0,2) -- (-1,1.5) -- (0,1) (0, 0.5) -- (-1,0) -- (0,-0.5) (0,-1) -- (-1,-1.5) -- (0,-2) -- (1,-1.5) -- (0, -1) (0, -0.5) -- (1,0) -- (0, 0.5) (0,1) -- (1,1.5) -- (0,2) (0,1) -- (0,0.5) (0,-1) -- (0,-0.5); \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \begin{scope}[yshift=1.5cm] \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \end{scope} \begin{scope}[yshift=-1.5cm] \draw[red, thick] (-3,0) -- (-1,0) (1,0) -- (3,0); \draw[blue, thick] (-1,0) --(1,0); \end{scope} \draw[blue, thick] (-1,1.5) -- +(135:3) (-1,-1.5) -- +(-135:3) (1,1.5) -- +(45:3) (1,-1.5) -- +(-45:3); \draw[green, thick] (0,2) -- +(135:3) ++(0,0) -- +(45:3) (0,-2) -- +(-135:3) ++(0,0) -- +(-45:3) (0,2) -- (0,1) (0,0.5) -- (0,-0.5) (0,-1) -- (0,-2) (0, 0.75) circle (0.25) (0, -0.75) circle (0.25); \draw[thick, fill=white] (-1,0) circle (0.1) (-1,1.5) circle (0.1) (-1,-1.5) circle (0.1) (0,2) circle (0.1) (1,0) circle (0.1) (1,1.5) circle (0.1) (1,-1.5) circle (0.1) (0,-2) circle (0.1) (0,0.5) circle (0.1) (0,1) circle (0.1) (0,-0.5) circle (0.1) (0,-1) circle (0.1); \end{tikzpicture} \end{tikzcd} \] \[ \begin{tikzcd}[row sep=1pc, column sep=1pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (-2,0) node[above] {$\gamma_I$}; \draw[thick, red] (45:3) -- (-135:3) (135:3) -- (-45:3); \draw[Dble={blue and green},line width=2] (-3,0) -- (0,0); \draw[Dble={blue and green},line width=2] (3,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) --(0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[color=cyclecolor1, 
opacity=0.5, line width=7] (-3,0) -- (3,0); \end{tikzpicture} \arrow[rr,"\text{Perturb}"]\arrow[d, " \mu_{\gamma_I}"']&& \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,0.707) -- (3,0.707); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,-0.707) -- (3,-0.707); \draw[red, thick] (45:1) -- (45:3) (-45:1) -- (-45:3) (45:-1) -- (45:-3) (-45:-1) -- (-45:-3) (45:-1) rectangle (45:1); \draw[blue, thick] (-45:1) -- +(3,0) +(0,0) -- ++(0,-3) (-45:-1) -- +(-3,0) +(0,0) -- ++(0,3) (-45:1) -- (-45:-1); \draw[green, thick] (45:1) -- +(3,0) +(0,0) -- ++(0,3) (45:-1) -- +(-3,0) +(0,0) -- ++(0,-3) (45:1) -- (45:-1); \draw[thick, fill=white] (45:1) circle (0.1) (45:-1) circle (0.1) (-45:1) circle (0.1) (-45:-1) circle (0.1); \end{tikzpicture} \arrow[r, "\mu_{\gamma_I}"]& \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,1.2) -- (3,1.2) (-3, -1.2) -- (3, -1.2); \draw[thick, blue] (-3, 2) -- (-1, 2) (1, -2) -- (3, -2) (-3, 1.2) -- (-1, 1.2) (1, -1.2) -- (3, -1.2) (-3, 0.4) -- (-1, 0.4) (1, -0.4) -- (3, -0.4) (-1, 2) -- (-1,3) (1,-2) -- (1,-3) (-1, 2) to[out=-30, in=30] (-1, 1.2) to[out=-30, in=30] (-1, 0.4) -- (1, -0.4) to[out=-150, in=150] (1, -1.2) to[out=-150, in=150] (1, -2) ; \draw[thick, green] (-3,-2) -- (-1, -2) (1, 2) -- (3, 2) (-3, -1.2) -- (-1, -1.2) (1, 1.2) -- (3, 1.2) (-3, -0.4) -- (-1, -0.4) (1, 0.4) -- (3, 0.4) (1, 2) -- (1,3) (-1,-2) -- (-1,-3) (1, 2) to[out=-150, in=150] (1, 1.2) to[out=-150, in=150] (1, 0.4) -- (-1, -0.4) to[out=-30, in=30] (-1, -1.2) to[out=-30, in=30] (-1, -2) ; \draw[thick, red] (-1, 2) -- (-1, -2) (1, 2) -- (1, -2) (-1, 2) -- ++(-45:-1) (1, 2) -- ++(45:1) (-1, -2) -- ++(45:-1) (1,-2) -- ++(-45:1); \foreach \y in {-2, -1.2, -0.4, 0.4, 1.2, 2} { \draw[thick, red] (-1, \y) -- (1, \y); \draw[thick, fill=white] (-1, \y) circle (0.1) (1,\y) circle (0.1); } \end{tikzpicture} \arrow[d, leftrightarrow, "\Move{IV}"] \\ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[thick, red, rounded corners] (45:3) -- (0,1) -- (-0.5, 0.5) -- (0.5, -0.5) -- (0,-1) -- (-135:3) (-45:3) -- (0,-1) -- (-0.5, -0.5) -- (0.5, 0.5) -- (0,1) -- (135:3); \draw[Dble={green and blue},line width=2] (0,3) --(0,1); \draw[Dble={green and blue},line width=2] (0,0.5) --(0,0); \draw[Dble={green and blue},line width=2] (0,-0.5) --(0,-1); \draw[Dble={green and blue},line width=2] (0,-3) -- (0,-1); \draw[Dble={green and blue},line width=2] (0,-0.5) --(0,0); \draw[Dble={green and blue},line width=2] (0,0.5) -- (0,1); \begin{scope} \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,0) -- (3,0); \end{scope} \begin{scope}[yshift=-1cm] \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \end{scope} \begin{scope}[yshift=1cm] \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \end{scope} \draw[color=blue!50!green!50] (2,0) node[above] {$\gamma'_I$}; \end{tikzpicture} \arrow[rr,"\text{Perturb}"'] && \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,0.4) -- (3,0.4) (-3, -0.4) -- (3, 
-0.4); \draw[thick, blue] (-3, 2) -- (-1, 2) (1, -2) -- (3, -2) (-3, -1.2) -- (-1, -1.2) (1, 1.2) -- (3, 1.2) (-3, 0.4) -- (-1, 0.4) (1, -0.4) -- (3, -0.4) (-1, 2) -- (-1,3) (1,-2) -- (1,-3) (-1, 2) -- (1, 1.2) -- (-1, 0.4) -- (1, -0.4) -- (-1, -1.2) -- (1, -2) ; \draw[thick, green] (-3,-2) -- (-1, -2) (1, 2) -- (3, 2) (-3, 1.2) -- (-1, 1.2) (1, -1.2) -- (3, -1.2) (-3, -0.4) -- (-1, -0.4) (1, 0.4) -- (3, 0.4) (1, 2) -- (1,3) (-1,-2) -- (-1,-3) (1, 2) -- (-1, 1.2) -- (1, 0.4) -- (-1, -0.4) -- (1, -1.2) -- (-1, -2) ; \draw[thick, red] (-1, 2) -- (-1, -2) (1, 2) -- (1, -2) (-1, 2) -- ++(-45:-1) (1, 2) -- ++(45:1) (-1, -2) -- ++(45:-1) (1,-2) -- ++(-45:1); \foreach \y in {-2, -1.2, -0.4, 0.4, 1.2, 2} { \draw[thick, red] (-1, \y) -- (1, \y); \draw[thick, fill=white] (-1, \y) circle (0.1) (1,\y) circle (0.1); } \end{tikzpicture} \arrow[r, leftrightarrow, "\Move{IV}^2"] & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,1.2) -- (3,1.2) (-3, -1.2) -- (3, -1.2); \draw[thick, blue] (-3, 2) -- (-1, 2) (1, -2) -- (3, -2) (-3, 1.2) -- (-1, 1.2) (1, -1.2) -- (3, -1.2) (-3, -0.4) -- (-1, -0.4) (1, 0.4) -- (3, 0.4) (-1, 2) -- (-1,3) (1,-2) -- (1,-3) (-1, 2) to[out=-30, in=30] (-1, 1.2) -- (1, 0.4) -- (-1, -0.4) -- (1, -1.2) to[out=-150, in=150] (1, -2) ; \draw[thick, green] (-3,-2) -- (-1, -2) (1, 2) -- (3, 2) (-3, -1.2) -- (-1, -1.2) (1, 1.2) -- (3, 1.2) (-3, 0.4) -- (-1, 0.4) (1, -0.4) -- (3, -0.4) (1, 2) -- (1,3) (-1,-2) -- (-1,-3) (1, 2) to[out=-150, in=150] (1, 1.2) -- (-1, 0.4) -- (1, -0.4) -- (-1, -1.2) to[out=-30, in=30] (-1, -2) ; \draw[thick, red] (-1, 2) -- (-1, -2) (1, 2) -- (1, -2) (-1, 2) -- ++(-45:-1) (1, 2) -- ++(45:1) (-1, -2) -- ++(45:-1) (1,-2) -- ++(-45:1); \foreach \y in {-2, -1.2, -0.4, 0.4, 1.2, 2} { \draw[thick, red] (-1, \y) -- (1, \y); \draw[thick, fill=white] (-1, \y) circle (0.1) (1,\y) circle (0.1); } \end{tikzpicture} \end{tikzcd} \] \subsection{Equivalence between $\ngraphfont{G}(1,b,c)$ and a stabilization of $\ngraphfont{G}(\dynkinfont{A}_n)$} \label{appendix:tripod with a=1 is of type An} A stabilization of $\ngraphfont{G}(\dynkinfont{A}_n)$ is a $3$-graph which is $\partial$-Legendrian isotopic to a $3$-graph $S(\ngraphfont{G}(\dynkinfont{A}_n))$ given as follows: \[ \begin{tikzcd} \ngraphfont{G}(\dynkinfont{A}_n)=\begin{tikzpicture}[baseline=-.5ex,xscale=0.5, yscale=0.5] \draw[thick] (0,0) circle (3cm); \foreach \i in {120,300} { \draw[blue, thick, fill] (0,0) -- (60+\i:1) circle (2pt) -- (100+\i:3) (60+\i:1) -- (50+\i:1.5) circle (2pt) -- (20+\i:3) (50+\i:1.5) -- (70+\i:1.75) circle (2pt) -- (80+\i:3) (70+\i:1.75) -- (50+\i:2) circle (2pt) -- (40+\i:3); \draw[blue, thick, dashed] (50+\i:2) -- (60+\i:3); } \curlybrace[]{135}{225}{3.2}; \draw (180:3.5) node[rotate=90] {$b+2$}; \curlybrace[]{315}{405}{3.2}; \draw (0:3.2) node[right] {$c$}; \end{tikzpicture} \arrow[r,"\Move{S}"]& \begin{tikzpicture}[baseline=-.5ex,xscale=0.5,yscale=0.5] \draw[thick] (0,0) circle (3cm); \foreach \i in {120,300} { \draw[blue, thick, fill] (0,0) -- (60+\i:1) circle (2pt) -- (100+\i:3) (60+\i:1) -- (50+\i:1.5) circle (2pt) -- (20+\i:3) (50+\i:1.5) -- (70+\i:1.75) circle (2pt) -- (80+\i:3) (70+\i:1.75) -- (50+\i:2) circle (2pt) -- (40+\i:3); \draw[blue, thick, dashed] (50+\i:2) -- (60+\i:3); } \draw[red, thick, rounded corners] (240:3) -- (270:1) -- (300:3); \draw[blue, thick] (260:3) -- (270:2) -- (280:3) (270:3) -- (270:2); \draw[blue, thick, fill] (270:2) circle (2pt); 
\end{tikzpicture} =S(\ngraphfont{G}(\dynkinfont{A}_n)) \end{tikzcd} \] Now we attach the annular $3$-graph corresponding to Legendrian isotopy from $S(\beta(\dynkinfont{A}_n))$ to~$\beta(1,b,c)$ given above. Then we have the following $3$-graph which is $\partial$-Legendrian isotopic to~$S(\ngraphfont{G}(\dynkinfont{A}_n))$. \[ \begin{tikzcd}[column sep=5pc] \begin{tikzpicture}[baseline=-.5ex,xscale=0.5,yscale=0.5] \draw[thick] (0,0) circle (3cm); \foreach \i in {120,300} { \draw[blue, thick, fill] (0,0) -- (60+\i:1) circle (2pt) -- (100+\i:3) (60+\i:1) -- (50+\i:1.5) circle (2pt) -- (20+\i:3) (50+\i:1.5) -- (70+\i:1.75) circle (2pt) -- (80+\i:3) (70+\i:1.75) -- (50+\i:2) circle (2pt) -- (40+\i:3); \draw[blue, thick, dashed] (50+\i:2) -- (60+\i:3); } \draw[red, thick, rounded corners] (240:3) -- (270:1) -- (300:3); \draw[blue, thick] (260:3) -- (270:2) -- (280:3) (270:3) -- (270:2); \draw[blue, thick, fill] (270:2) circle (2pt); \end{tikzpicture} \arrow[r,"\partial\text{-Legendrian}","\text{isotopic}"']& \begin{tikzpicture}[baseline=-.5ex,xscale=0.5,yscale=0.5] \draw[thick](0,0) circle (7cm) (0,0) circle (3cm); \draw[dashed] (0,0) circle (5cm); \foreach \i in {120,300} { \draw[blue, thick, fill] (0,0) -- (60+\i:1) circle (2pt) -- (100+\i:3) (60+\i:1) -- (50+\i:1.5) circle (2pt) -- (20+\i:3) (50+\i:1.5) -- (70+\i:1.75) circle (2pt) -- (80+\i:3) (70+\i:1.75) -- (50+\i:2) circle (2pt) -- (40+\i:3); \draw[blue, thick, dashed] (50+\i:2) -- (60+\i:3); } \draw[red, thick, rounded corners] (240:3) -- (270:1) -- (300:3); \draw[blue, thick] (260:3) -- (270:2) -- (280:3) (270:3) -- (270:2); \draw[blue, thick, fill] (270:2) circle (2pt); \draw[red, thick] (240:3) -- (240:4) -- (220:5) -- (220:7) (240:4) to[out=330,in=100] (280:5) to[out=280,in=240] (320:6); \draw[blue, thick] (220:3) -- (240:4) (260:3) -- (240:4) (240:4) -- (240:7); \draw[blue, thick] (270:3) to[out=270,in=120] (300:5) to[out=300,in=220] (320:6) -- (320:7); \draw[red, thick] (300:3) to[out=300,in=220] (320:4); \draw[blue, thick] (280:3) to[out=280,in=240] (320:4); \foreach \r in {140, 160, 200} { \begin{scope}[rotate=\r] \draw[blue, thick] (0:3) -- (0:7); \end{scope} } \draw[blue, thick, dashed] (180:3) -- (180:7); \foreach \r in {320, 0, 20} { \begin{scope}[rotate=\r] \draw[red, thick] (0:4) to[out=120,in=-100] (20:4) -- (20:6); \draw[blue, thick] (0:4) to[out=60, in=-40] (20:4) -- (20:3); \draw[blue, thick] (0:6) to[out=120,in=-100] (20:6) -- (20:7); \draw[red, thick] (0:6) to[out=60, in=-40] (20:6); \end{scope} } \begin{scope}[rotate=340, dashed] \draw[red, thick] (0:4) to[out=120,in=-100] (20:4) -- (20:6); \draw[blue, thick] (0:4) to[out=60, in=-40] (20:4) -- (20:3); \draw[blue, thick] (0:6) to[out=120,in=-100] (20:6) -- (20:7); \draw[red, thick] (0:6) to[out=60, in=-40] (20:6); \end{scope} \draw[red, thick] (320:4) -- (320:6); \draw[blue, thick] (320:4) -- (320:3); \draw[red, thick] (40:4) -- (90:5) -- (120:7); \draw[blue, thick] (40:4) -- (60:5) -- (100:7); \draw[blue, thick] (40:6) -- (80:7); \draw[red, thick] (40:6) -- (60:7); \foreach \r in {320, 340, 360, 380, 400} { \begin{scope}[rotate=\r] \draw[thick, fill=white] (0:4) circle (2pt); \draw[thick, fill=white] (0:6) circle (2pt); \end{scope} } \draw[thick, fill=white] (240:4) circle (2pt); \end{tikzpicture} \end{tikzcd} \] By applying the following generalized push-through move, \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[dashed] \boundellipse{0.75,0}{3}{2}; \draw[blue, thick](-1.2,1.5) -- (0,1.5) (-2.15,0.5) -- (0,0.5) (-2.15,-0.5) -- (0,-0.5) 
(-1.2,-1.5) -- (0,-1.5); \draw[blue, thick] (0.5,2) -- (0,1.5) to[out=-60,in=60] (0,0.5) to[out=-60,in=100] (0.1,0.2) (0.1,-0.2) to[out=-100,in=60] (0,-0.5) to[out=-60,in=60] (0,-1.5) -- (0.5,-2); \draw[blue, thick, densely dotted] (0.1, 0.2) -- (0.1, -0.2); \draw[red, thick] (-0.35,1.85) -- (0,1.5) to[out=-120,in=120] (0,0.5) to[out=-120,in=80] (-0.1,0.2) (-0.1,-0.2) to[out=-80,in=120] (0,-0.5) to[out=-120,in=120] (0,-1.5) -- (-0.35,-1.85); \draw[red, thick, densely dotted] (-0.1, 0.2) -- (-0.1, -0.2); \draw[red, thick, fill] (3.75,0) -- (2.5,0) circle (2pt) -- (0,-1.5) (2.5,0) -- (2, 0.5) circle (2pt) -- (0,1.5) (2,0.5) -- (1.5,0) circle (2pt) -- (0,-0.5) (1.5,0) -- (1, 0.5) circle (2pt) -- (0,0.5); \draw[red, thick, dashed] (1,0.5) -- (0.5,0); \draw[fill=white, thick] (0,1.5) circle (2pt) (0,0.5) circle (2pt) (0,-0.5) circle (2pt) (0,-1.5) circle (2pt); \end{tikzpicture} \arrow[leftrightarrow, r,"\Move{II^*}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[dashed] \boundellipse{1.7,0}{2.5}{2}; \draw[blue, thick, fill] (3,0) -- (2.5,0) circle (2pt) -- (0,-1.5) (2.5,0) -- (2, 0.5) circle (2pt) -- (0,1.5) (2,0.5) -- (1.5,0) circle (2pt) -- (-0.75,-0.5) (1.5,0) -- (1, 0.5) circle (2pt) -- (-0.75,0.5); \draw[blue, thick, dashed] (1,0.5) -- (0.5,0); \draw[blue, thick] (3.4,1.5) -- (3,0) -- (3.4,-1.5); \draw[red, thick] (2.5,1.9) -- (3,0) -- (2.5,-1.9) (3,0) -- (4.2,0); \draw[fill=white, thick] (3,0) circle (2pt); \end{tikzpicture} \end{tikzcd} \] we obtain the $3$-graph in the left of the following three equivalent $3$-graphs \[ \begin{tikzcd}[column sep=1.5pc] \begin{tikzpicture}[baseline=-.5ex,xscale=0.35,yscale=0.35] \draw[thick] (0,0) circle (5cm); \draw[blue, thick, fill] (-0.5,0) -- (-1,0) circle (2pt) -- (220:3) (-1,0) -- (170:2) circle (2pt) -- (140:5) (170:2) -- (190:3) circle (2pt) -- (200:5) (190:3) -- (170:4) circle (2pt) -- (160:5); \draw[blue, thick, dashed] (170:4) -- (180:5); \draw[blue, thick, fill] (0.5,0) -- (1,0) circle (2pt) -- (40:5) (1,0) -- (-10:2) circle (2pt) -- (-40:5) (-10:2) -- (10:3) circle (2pt) -- (20:5) (10:3) -- (-10:4) circle (2pt) -- (-20:5); \draw[blue, thick, dashed] (-10:4) -- (0:5); \draw[red, thick] (180:0.5) -- (120:5) (0:0.5) -- (60:5) (0:0.5) -- (180:0.5); \draw[blue, thick] (180:0.5) -- (100:5) (0:0.5) -- (80:5) (220:3) to[out=220,in=180] (270:4); \draw[blue, thick, fill] (180:0.5) -- (270:1) circle (2pt) (0:0.5) -- (270:1) (270:1) -- (270:4) (270:4) -- (300:5); \draw[red, thick] (180:0.5) -- (270:4) -- (0:0.5) (270:4) -- (270:5); \draw[fill=white, thick] (180:0.5) circle (2pt) (0:0.5) circle (2pt) (270:4) circle (2pt); \end{tikzpicture}\arrow[r,leftrightarrow,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex,xscale=0.35,yscale=0.35] \draw[thick] (0,0) circle (5cm); \draw[blue, thick, fill] (-0.5,0) -- (-1,0) circle (2pt) (-1,0) -- (170:2) circle (2pt) -- (140:5) (170:2) -- (190:3) circle (2pt) -- (200:5) (190:3) -- (170:4) circle (2pt) -- (160:5); \draw[blue, thick, dashed] (170:4) -- (180:5); \draw[blue, thick, fill] (0.5,0) -- (1,0) circle (2pt) -- (40:5) (1,0) -- (-10:2) circle (2pt) -- (-40:5) (-10:2) -- (10:3) circle (2pt) -- (20:5) (10:3) -- (-10:4) circle (2pt) -- (-20:5); \draw[blue, thick, dashed] (-10:4) -- (0:5); \draw[blue, thick] (180:1) to[out=-90,in=-180] (-0.5,-2); \draw[red, thick] (180:0.5) -- (120:5) (0:0.5) -- (60:5) (0:0.5) -- (180:0.5); \draw[blue, thick] (180:0.5) -- (100:5) (0:0.5) -- (80:5); \draw[red, thick] (-0.5,0) to[out=-105,in=105] (-0.5,-2) (0.5,0) to[out=-75,in=75] (0.5,-2) (-0.5,-2) to[out=30, in=150] 
(0.5,-2); \draw[blue, thick] (-0.5,0) to[out=-75,in=75] (-0.5,-2) (0.5,0) to[out=-105,in=105] (0.5,-2) (-0.5,-2) to[out=-30, in=-150] (0.5,-2) -- (300:5); \draw[red, thick, fill] (-0.5,-2) -- (0, -3) circle (2pt) (0,-3) -- (0.5, -2) (0,-3) -- (0,-5); \draw[fill=white, thick] (-0.5,0) circle (2pt) (0.5,0) circle (2pt) (-0.5,-2) circle (2pt) (0.5,-2) circle (2pt); \end{tikzpicture}\arrow[r,leftrightarrow,"\Move{II}^2"]& \begin{tikzpicture}[baseline=-.5ex,xscale=0.35,yscale=0.35] \draw[thick] (0,0) circle (5cm); \draw[blue, thick, fill] (-0.5,0) -- (-1,0) circle (2pt) -- (140:5) (-1,0) -- (190:2) circle (2pt) -- (200:5) (190:2) -- (170:3) circle (2pt) -- (160:5); \draw[blue, thick, dashed] (170:3) -- (180:5); \draw[blue, thick, fill] (0.5,0) -- (1,0) circle (2pt) -- (-60:5) (1,0) -- (10:1.5) circle (2pt) -- (40:5) (10:1.5) -- (-10:2) circle (2pt) -- (-40:5) (-10:2) -- (10:2.5) circle (2pt) -- (20:5) (10:2.5) -- (-10:3) circle (2pt) -- (-20:5); \draw[blue, thick, dashed] (-10:3) -- (0:5); \draw[red, thick] (-0.5,0) -- (120:5) (0.5,0) -- (60:5) (-0.5,0) to[out=30,in=150] (0.5,0); \draw[blue, thick] (-0.5,0) -- (100:5) (0.5,0) -- (80:5) (-0.5,0) to[out=-30,in=-150] (0.5,0); \draw[red, thick, fill] (-0.5,0) -- (0,-3) circle (2pt) -- (0.5,0) (0,-3) -- (0,-5); \draw[fill=white, thick] (-0.5,0) circle (2pt) (0.5,0) circle (2pt); \curlybrace[]{130}{210}{5.5}; \draw (170:6) node[rotate=80] {$b+1$}; \curlybrace[]{-70}{50}{5.5}; \draw (-10:6) node[rotate=-100] {$c+1$}; \curlybrace[]{70}{110}{5.5}; \draw (90:5.5) node[above] {$1+1$}; \end{tikzpicture} \end{tikzcd} \] where the right one is equivalent to the $3$-graph $\ngraphfont{G}(1,b,c)$ via the Move~\Move{II} as claimed. \subsection{Equivalence between $\mathscr{G}^{\mathsf{brick}}(a,b,c)$ and $\mathscr{G}(a,b,c)$}\label{appendix:Ngraph of type abc} \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw[thick, rounded corners] (-4,-1) rectangle (3,1); \begin{scope} \clip (-4,-1) rectangle (3,1); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-3.5,0.5) -- (-1, 0.5) (-2.5,-0.5) -- (-2,-0.5) (-0.5, 0.5) -- (0, 0.5) (1, -0.5) -- (1.5, -0.5) (2, -0.5) -- (2.5, -0.5); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-3,-0.5) -- (-2.5,-0.5) (-2, -0.5) -- (-1.5, -0.5) (-1, 0.5) -- (-0.5, 0.5) (0,0.5) -- (0.5,0.5) (1.5, -0.5) -- (2, -0.5); \draw[color=cyan, line cap=round, line width=5, opacity=0.5] (-1, 0.5) -- (-1, -0.5) (-1, -0.5) -- (1, -0.5) (-0.5, 0.5) -- (-0.5, -0.5) (0, 0.5) -- (0, -0.5) (0.5, -0.5) -- (0.5, 0.5) (-1.5,-0.5) -- (-1,-0.5); \draw[thick, blue] (-4,-0.5) -- (4, -0.5) (-4, 0.5) -- (4, 0.5) ; \draw[thick, blue, fill] (-3.5, -0.5) -- (-3.5, 0.5) circle (1pt) (-3, -1) -- (-3, -0.5) circle (1pt) (-2.5, -1) -- (-2.5, -0.5) circle (1pt) (-2, -1) -- (-2, -0.5) circle (1pt) (-1.5,-1) -- (-1.5,-0.5) circle (1pt) (-1, -0.5) -- (-1, 0.5) circle (1pt) (-0.5, -0.5) -- (-0.5, 0.5) circle (1pt) (0, -0.5) -- (0, 0.5) circle (1pt) (0.5, -0.5) -- (0.5, 0.5) circle (1pt) (1, -1) -- (1, -0.5) circle (1pt) (1.5, -1) -- (1.5, -0.5) circle (1pt) (2, -1) -- (2, -0.5) circle (1pt) (2.5, -1) -- (2.5,-0.5) circle (1pt); \draw[thick, red] (-3.5,-1) -- (-3.5,-0.5) (-1, -1) -- (-1, -0.5) (-0.5,-1) -- (-0.5, -0.5) (0,-0.5) -- (0, -1) (0.5, -1) -- (0.5,-0.5); \draw[thick, red, rounded corners] (-4,0) -- (-3.75,0) -- (-3.5,-0.5) (-3.5, -0.5) -- (-3.25, 0) -- (-1.25, 0) -- (-1, -0.5) (-1, -0.5) -- (-0.75, 0) -- (-0.5, -0.5) (-0.5, -0.5) -- (-0.25, 0) -- (0, -0.5) (0, -0.5) -- (0.25,0) -- (0.5,-0.5) (0.5, -0.5) -- 
(0.75, 0) -- (1, 0) -- (3,0); \foreach \x in {-3.5, -1, -0.5, 0, 0.5} { \draw[thick, fill=white] (\x,-0.5) circle (1pt) ; } \end{scope} \draw (-2.25, -1) node[below] {$\underbrace{\hphantom{\hspace{1.8cm}}}_{a}$}; \draw (-.25, -1) node[below] {$\underbrace{\hphantom{\hspace{1.8cm}}}_{b-1}$}; \draw (1.75, -1) node[below] {$\underbrace{\hphantom{\hspace{1.8cm}}}_{c}$}; \end{tikzpicture} \arrow[d, "\Move{I}^*\Move{II}^*"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw[thick, rounded corners] (-4,-1.5) rectangle (3,1); \begin{scope} \clip (-4,-1.5) rectangle (3,1); \draw[color=cyclecolor1, rounded corners, line cap=round, line width=5, opacity=0.5] (-3.5,0.5) -- (-1, 0.5) -- (-1, -1) (-2.5,-0.5) -- (-2,-0.5) (-0.5, -1) -- (0, -1) (1, -0.5) -- (1.5, -0.5) (2, -0.5) -- (2.5, -0.5); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-3,-0.5) -- (-2.5,-0.5) (-2, -0.5) -- (-1.5, -0.5) (-1, -1) -- (-0.5, -1) (0,-1) -- (0.5,-1) (1.5, -0.5) -- (2, -0.5); \draw[color=cyan, rounded corners, line cap=round, line width=5, opacity=0.5] (0.5, -0.5) -- (1, -0.5) (-1, -0.5) -- (-0.75, 0) -- (0.25, 0) -- (0.5, -0.5) (-1.5,-0.5) -- (-1,-0.5); \draw[thick, blue, rounded corners] (-4,-0.5) -- (4, -0.5) (-4, 0.5) -- (-1, 0.5) -- (-1, -0.5) (0.5, -0.5) -- (0.5, 0.5) -- (3,0.5) ; \draw[thick, blue, fill] (-3.5, -0.5) -- (-3.5, 0.5) circle (1pt) (-3, -1.5) -- (-3, -0.5) circle (1pt) (-2.5, -1.5) -- (-2.5, -0.5) circle (1pt) (-2, -1.5) -- (-2, -0.5) circle (1pt) (-1.5,-1.5) -- (-1.5,-0.5) circle (1pt) (1, -1.5) -- (1, -0.5) circle (1pt) (1.5, -1.5) -- (1.5, -0.5) circle (1pt) (2, -1.5) -- (2, -0.5) circle (1pt) (2.5, -1.5) -- (2.5,-0.5) circle (1pt); \draw[thick, red, fill] (-3.5,-1.5) -- (-3.5,-0.5) (-1, -1.5) -- (-1, -1) circle (1pt) -- (-1, -0.5) (-0.5,-1.5) -- (-0.5, -1) circle (1pt) -- (-1, -1) (-0.5, -1) -- (0, -1) (0, -1.5) -- (0,-1) circle (1pt) (0.5, -1.5) -- (0.5,-1) circle (1pt) -- (0.5, -0.5) (0.5, -1) -- (0, -1); \draw[thick, red, rounded corners] (-4,0) -- (-3.75,0) -- (-3.5,-0.5) (-3.5, -0.5) -- (-3.25, 0) -- (-1.25, 0) -- (-1, -0.5) (-1, -0.5) -- (-0.75, 0) -- (0.25, 0) -- (0.5, -0.5) (0.5, -0.5) -- (0.75, 0) -- (1, 0) -- (3,0); \foreach \x in {-3.5, -1, 0.5} { \draw[thick, fill=white] (\x,-0.5) circle (1pt) ; } \end{scope} \end{tikzpicture} \arrow[d, "\Move{II}^*"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw[thick, rounded corners] (-4.5,-1.5) rectangle (3,1); \begin{scope} \clip (-4.5,-1.5) rectangle (3,1); \draw[color=cyclecolor1, rounded corners, line cap=round, line width=5, opacity=0.5] (-1.5, -1) -- (-1, -1) (-3,-0.5) -- (-2.5,-0.5) (-0.5, -1) -- (0, -1) (0.5, 0) -- (1, 0) (1.5, 0) -- (2, 0); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-3.5,-0.5) -- (-3,-0.5) (-2.5, -0.5) -- (-2, -0.5) (-1, -1) -- (-0.5, -1) (0,-1) -- (0.5,-1) (1, 0) -- (1.5, 0); \draw[color=cyan, rounded corners, line cap=round, line width=5, opacity=0.5] (-1.5, -1) -- (-1, -0.5) -- (-0.75, 0) -- (0.5, 0) (-2,-0.5) -- (-1.5,-0.5) to[out=30, in=150] (-1, -0.5); \draw[thick, blue, rounded corners] (-4.5,-0.5) -- (4, -0.5) (-4.5, 0.5) -- (-1, 0.5) -- (-1, -0.5) (2.5, -0.5) -- (2.5, 0.5) -- (3,0.5) (-4, -0.5) -- (-4, 0.25) -- (-1.5, 0.25) -- (-1.5, -0.5) ; \draw[thick, blue, fill] (-3.5, -1.5) -- (-3.5, -0.5) circle (1pt) (-3, -1.5) -- (-3, -0.5) circle (1pt) (-2.5, -1.5) -- (-2.5, -0.5) circle (1pt) (-2, -1.5) -- (-2, -0.5) circle (1pt) ; \draw[thick, blue, rounded corners] (1, -1.5) -- (1, -1) -- (0.5, -0.5) (1.5, -1.5) -- (1.5, -1) -- (1, -0.5) (2, -1.5) -- (2, -1) -- (1.5, 
-0.5) (2.5, -1.5) -- (2.5, -1) -- (2,-0.5); \draw[thick, red, fill] (-4,-1.5) -- (-4,-0.5) (-1.5, -0.5) -- (-1.5, -1) circle (1pt) -- (-1, -1) (-1, -0.5) -- (-1.5, -1) (-1, -1.5) -- (-1, -1) circle (1pt) (-0.5,-1.5) -- (-0.5, -1) circle (1pt) -- (-1, -1) (-0.5, -1) -- (0, -1) (0, -1.5) -- (0,-1) circle (1pt) (0.5, -1.5) -- (0.5,-1) circle (1pt) -- (0.5, -0.5) (0.5, -1) -- (0, -1) (0.5, -0.5) -- (0.5, 0) circle (1pt) (1, -0.5) -- (1, 0) circle (1pt) (1.5, -0.5) -- (1.5, 0) circle (1pt) (2, -0.5) -- (2, 0) circle (1pt) (2.5, -0.5) -- (2, 0) ; \draw[thick, red, rounded corners] (-4.5,0) -- (-4.25,0) -- (-4,-0.5) (-4, -0.5) -- (-3.75, 0) -- (-1.75, 0) -- (-1.5, -0.5) (-1, -0.5) -- (-0.75, 0) -- (2, 0) (2.5, -0.5) -- (2.75, 0) -- (3,0) (0.5, -0.5) to[out=-30, in=-150] (1, -0.5) (1, -0.5) to[out=-30, in=-150] (1.5, -0.5) (1.5, -0.5) to[out=-30, in=-150] (2, -0.5) (2, -0.5) to[out=-30, in=-150] (2.5, -0.5) (-1.5, -0.5) to[out=30, in=150] (-1, -0.5) ; \foreach \x in {-4, -1.5, -1, 0.5, 1, 1.5, 2, 2.5} { \draw[thick, fill=white] (\x,-0.5) circle (1pt) ; } \end{scope} \end{tikzpicture} \arrow[d, "\Move{II}^*"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw[thick, rounded corners, fill=black!10] (-4.5,-1.5) rectangle (3,1); \fill[white, rounded corners] (-3.25, 0.25) -- (2.25, 0.25) -- (2.25, -0.25) -- (-0.75, -0.25) -- (-0.75, -0.75) -- (0.5, -0.75) -- (0.75, -1) -- (0.75, -1.25) -- (-1.75, -1.25) -- (-1.75, -1) -- (-1.25, -0.5) -- (-1.5, -0.25) -- (-3.25, -0.25) -- cycle; \begin{scope} \clip (-4.5,-1.5) rectangle (3,1); \draw[color=cyclecolor1, rounded corners, line cap=round, line width=5, opacity=0.5] (-1.5, -1) -- (-1, -1) (-2.5,0) -- (-2,0) (-0.5, -1) -- (0, -1) (0.5, 0) -- (1, 0) (1.5, 0) -- (2, 0); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-3,0) -- (-2.5,0) (-2, 0) -- (-1.5, 0) (-1, -1) -- (-0.5, -1) (0,-1) -- (0.5,-1) (1, 0) -- (1.5, 0); \draw[color=cyan, rounded corners, line cap=round, line width=5, opacity=0.5] (-1.5, -1) -- (-1, -0.5) -- (-0.75, 0) -- (0.5, 0) (-1.5,0) -- (-1, -0.5); \draw[thick, blue, rounded corners] (-4.5,-0.5) -- (4, -0.5) (-4.5, 0.5) -- (-1, 0.5) -- (-1, -0.5) (2.5, -0.5) -- (2.5, 0.5) -- (3,0.5) (-4, -0.5) -- (-4, 0.25) -- (-3.5, 0.25) -- (-3.5, -0.5) ; \draw[thick, blue, rounded corners] (-3.5, -1.5) -- (-3.5, -1) -- (-3, -0.5) (-3, -1.5) -- (-3, -1) -- (-2.5, -0.5) (-2.5, -1.5) -- (-2.5, -1) -- (-2, -0.5) (-2, -1.5) -- (-2, -1) -- (-1.5, -0.5) (1, -1.5) -- (1, -1) -- (0.5, -0.5) (1.5, -1.5) -- (1.5, -1) -- (1, -0.5) (2, -1.5) -- (2, -1) -- (1.5, -0.5) (2.5, -1.5) -- (2.5, -1) -- (2,-0.5); \draw[thick, red, fill] (-4,-1.5) -- (-4,-0.5) (-1.5, -0.5) -- (-1.5, -1) circle (1pt) -- (-1, -1) (-1, -0.5) -- (-1.5, -1) (-1, -1.5) -- (-1, -1) circle (1pt) (-0.5,-1.5) -- (-0.5, -1) circle (1pt) -- (-1, -1) (-0.5, -1) -- (0, -1) (0, -1.5) -- (0,-1) circle (1pt) (0.5, -1.5) -- (0.5,-1) circle (1pt) -- (0.5, -0.5) (0.5, -1) -- (0, -1) (0.5, -0.5) -- (0.5, 0) circle (1pt) (1, -0.5) -- (1, 0) circle (1pt) (1.5, -0.5) -- (1.5, 0) circle (1pt) (2, -0.5) -- (2, 0) circle (1pt) (2.5, -0.5) -- (2, 0) (-3.5, -0.5) -- (-3, 0) (-3, 0) -- (-1.5,0) (-3, -0.5) -- (-3, 0) circle (1pt) (-2.5, -0.5) -- (-2.5, 0) circle (1pt) (-2, -0.5) -- (-2, 0) circle (1pt) (-1.5, -0.5) -- (-1.5, 0) circle (1pt) ; \draw[thick, red, rounded corners] (-4.5,0) -- (-4.25,0) -- (-4,-0.5) (-4, -0.5) -- (-3.75, 0) -- (-3.5, -0.5) (-1, -0.5) -- (-0.75, 0) -- (2, 0) (2.5, -0.5) -- (2.75, 0) -- (3,0) (0.5, -0.5) to[out=-30, in=-150] (1, -0.5) (1, -0.5) to[out=-30, in=-150] (1.5, -0.5) 
(1.5, -0.5) to[out=-30, in=-150] (2, -0.5) (2, -0.5) to[out=-30, in=-150] (2.5, -0.5) (-1.5, 0) -- (-1, -0.5) (-3.5, -0.5) to[out=-30, in=-150] (-3, -0.5) (-3, -0.5) to[out=-30, in=-150] (-2.5, -0.5) (-2.5, -0.5) to[out=-30, in=-150] (-2, -0.5) (-2, -0.5) to[out=-30, in=-150] (-1.5, -0.5) ; \foreach \x in {-4, -3.5, -3, -2.5, -2, -1.5, -1, 0.5, 1, 1.5, 2, 2.5} { \draw[thick, fill=white] (\x,-0.5) circle (1pt) ; } \end{scope} \end{tikzpicture} \end{tikzcd} \] The innermost $N$-graph is the same as $\overline{\ngraphfont{G}(a,b,c)}$ up to Legendrian mutations, which is $\partial$-Legendrian isotopic to the Legendrian Coxeter mutation of $\ngraphfont{G}(a,b,c)$ by Proposition~\ref{proposition:effect of Legendrian Coxeter mutation}. \subsection{Equivalence between $\mathscr{G}^{\mathsf{brick}}(\widetilde{\dynD}_n)$ and $\mathscr{G}(\widetilde{\dynD}_n)$}\label{appendix:Ngraph of type affine Dn} We first show that $\ngraphfont{G}^{\mathsf{brick}}(\widetilde{\dynD}_n)$ is Legendrian mutation equivalent to the following $N$-graph up to $\partial$-Legendrian isotopy: \[ \begin{tikzcd} \ngraphfont{G}(\widetilde{\dynD}_n)'\coloneqq\begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-6.5, -2.5) rectangle (6.5, 2.5); \draw (0.5, -2.5) node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_{n-4}$}; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[thick, blue, fill] (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) ; \begin{scope}[xscale=-1] \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) ; \end{scope} \draw[thick, green, rounded corners] (-2.5, 2.5) -- (-2.5, -2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, red] (3.5, 2.5) -- (3.5, -2.5) (6.5, 0) -- (3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (3.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-5.5, -1) circle (2pt) -- (-5.5, -3) ; \begin{scope}[xscale=-1] \draw[thick, blue, fill] (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.75) circle (2pt) -- (-6.5, -1.75) ; \end{scope} \draw[thick, fill=white] (-3.5, 0) circle (2pt) (3.5, 0) circle (2pt); \end{tikzpicture} \arrow[r, "\partial\text{-Leg. 
iso.}"'] & \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -3) rectangle (8, 4); \clip[rounded corners=5] (-7.5, -3) rectangle (8, 4); \fill[opacity=0.1, rounded corners] (-3, -4) -- (-3, -1.5) -- (4, -1.5) -- (4, -4) -- cycle; \fill[opacity=0.1, rounded corners] (7, -2) -- (9,-2) -- (9, 5) -- (6, 5) -- (6, 2.5) -- (7, 2.5) -- cycle; \draw[thick, red] (-0.5, -2.5) -- (-0.5, -3) (0.5, -2.5) -- (0.5, -3) (1.5, -2.5) -- (1.5, -3) (2.5, -2.5) -- (2.5, -3) (3.5, -2.5) -- (3.5, -3) ; \draw[thick, blue, fill] (-0.5, -2.5) -- (-0.5,0) circle (2pt) (0.5, -2.5) -- (0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) (2.5, -2.5) -- (2.5, 0) circle (2pt) (3.5, -2.5) -- (3.5, 0) circle (2pt) ; \draw[thick, blue, rounded corners] (-0.5, -2.5) -- (-1.5, -2.5) -- (-1.5, -3) (-0.5, -2.5) -- (3.5, -2.5) -- (8, -2.5) ; \draw[thick, green, rounded corners] (-3.5, 4) -- (-3.5, -3); \draw[thick, red] (-4.5, -3) -- (-4.5, 4) (-7.5, 0) -- (-4.5, 0) ; \draw[thick, red, rounded corners] (4.5, 4) -- (4.5, -2) -- (3.5, -2.5) (-0.5, -2.5) to[out=45,in=135] (0.5, -2.5) (0.5, -2.5) to[out=45,in=135] (1.5, -2.5) (1.5, -2.5) to[out=45,in=135] (2.5, -2.5) (2.5, -2.5) to[out=45,in=135] (3.5, -2.5) (-0.5, -2.5) -- (-1.5, -2) -- (-2.5, -2) -- (-2.5, -3) (4.5, 0) -- (6.5, 0) -- (7.5, 1) (7.5,1) to[out=135, in=-135] (7.5,2) (7.5,2) to[out=135, in=-135] (7.5,3) (7.5,3) -- (6.5, 4) (7.5,1) -- (8, 1) (7.5,2) -- (8, 2) (7.5,3) -- (8, 3) ; \draw[thick, blue, fill] (-4.5, 0) -- (4.5, 0) (-4.5, 0) -- (-5.5, 1) circle (2pt) -- (-5.5, 4) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- (-6.5, 4) (-4.5, 0) -- (-5.5, -1) circle (2pt) -- (-5.5, -3) (-5.5, -1) -- (-7.5, -1) (-6.5, -1) circle (2pt) -- (-6.5, -3) ; \draw[thick, blue, rounded corners] (5.5, -1) -- (7.5, -1) -- (7.5, 4) (5.5, 1) -- (5.5, 3) -- (7.5, 3) (5.5, 1) -- (7.5, 1) (6.5, 1) circle (2pt) -- (6.5, 2) -- (7.5, 2) (5.5, -2.5) circle (2pt) -- (5.5, -1) ; \begin{scope}[rotate=180] \draw[thick, blue, fill] (-4.5, 0) -- (4.5, 0) (-4.5, 0) -- (-5.5, 1) circle (2pt) (-4.5, 0) -- (-5.5, -1) circle (2pt) ; \end{scope} \draw[thick, fill=white] (-4.5, 0) circle (2pt) (4.5, 0) circle (2pt) (7.5, 1) circle (2pt) (7.5, 2) circle (2pt) (7.5, 3) circle (2pt) (-0.5, -2.5) circle (2pt) (0.5, -2.5) circle (2pt) (1.5, -2.5) circle (2pt) (2.5, -2.5) circle (2pt) (3.5, -2.5) circle (2pt) ; \end{tikzpicture} \arrow[ld, sloped, "\Move{I}^*","\Move{II}^*"']\\ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -3) rectangle (8.5, 3); \clip[rounded corners=5] (-7.5, -3) rectangle (8.5, 3); \draw[thick, red, fill] (-1.5, 0) -- (-0.5, -1) circle (2pt) -- (-0.5, -3) (0.5, -1) circle (2pt) -- (0.5, -3) (1.5, -1) circle (2pt) -- (1.5, -3) (2.5, -1) circle (2pt) -- (2.5, -3) (6.5, 0) -- (4.5,0) (4.5, 0) -- (3.5, -1) circle (2pt) -- (3.5, -3) (-0.5, -1) -- (3.5, -1) (6.5, 0) -- (6.5, 3) (6.5, 0) -- (7.5, 0) circle (2pt) -- +(1,1) +(0,0) -- ++(0.5, -0.5) circle (2pt) -- +(0.5, 0.5) +(0,0) -- +(0.5, -0.5) ; \draw[thick, red, rounded corners] (-1.5, 0) -- (-2.5, -1) -- (-2.5, -3) (-4.5, 3) -- (-4.5, -3) (-4.5, 0) -- (-7.5, 0) (-1.5, 0) -- (-0.5, 1) -- (3.5, 1) -- (4.5, 0) (5.5, 3) -- (5.5, 0) ; \draw[thick, blue, fill] (-4.5, 0) -- (-5.5, 1) circle (2pt) -- (-5.5, 3.5) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- (-6.5, 3.5) (-4.5, 0) -- (-5.5, -1) circle (2pt) -- (-5.5, -3) (-5.5, -1) -- (-7.5, -1) (-6.5, -1) circle (2pt) -- (-6.5, -3) (4.5, 0) -- (6.5, -2) circle (2pt) (6.5, -1) circle (2pt) -- (5.5, 0) (6.5, -1) -- (6.5, -3) 
(6.5, -1) -- (6.5, 0) ; \draw[thick, blue, rounded corners] (-4.5, 0) -- (-2.5, 0) -- (-1.5, 0) (-1.5, 0) -- (-1.5, -3) (-1.5, 0) -- (-0.5, 0) -- (3.5, 0) -- (4.5, 0) (4.5, 0) to[out=45, in=135] (5.5, 0) (5.5, 0) to[out=45, in=135] (6.5, 0) (6.5, 0) -- (8.5, 2) ; \draw[thick, green] (-3.5, 3) -- (-3.5, -3) ; \draw[thick, fill=white] (-4.5, 0) circle (2pt) (-1.5, 0) circle (2pt) (4.5, 0) circle (2pt) (5.5, 0) circle (2pt) (6.5, 0) circle (2pt) ; \end{tikzpicture} \arrow[r,sloped, "\Move{II}^2"',"\partial\text{-Leg. iso.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -3) rectangle (6.5, 3); \clip[rounded corners=5] (-7.5, -3) rectangle (6.5, 3); \fill[opacity=0.1, rounded corners] (-5, 4) -- (-5, 3) -- (-3.5, 1.5) -- (-2, 3) -- (-2, 4) -- cycle; \fill[opacity=0.1, rounded corners] (-5, -4) -- (-5, -3) -- (-3.5, -1.5) -- (-2, -3) -- (-2, -4) -- cycle; \draw[thick, red, fill] (-1.5, 0) -- (-0.5, -1) circle (2pt) -- (-0.5, -3) (0.5, -1) circle (2pt) -- (0.5, -3) (1.5, -1) circle (2pt) -- (1.5, -3) (2.5, -1) circle (2pt) -- (2.5, -3) (4.5, 0) -- (3.5, -1) circle (2pt) -- (3.5, -3) (-0.5, -1) -- (3.5, -1) (4.5, 0) -- (5.5, 0) circle (2pt) -- +(1,1) +(0,0) -- ++(0.5, -0.5) circle (2pt) -- +(0.5, 0.5) +(0,0) -- +(0.5, -0.5) (-1.5, 0) -- (-2.25, 0.75) circle (2pt) -- (-2.75, 1.25) circle (2pt) -- (-3.5, 2) (-3.5, 2) -- (-3.5, 3) (-1.5, 0) -- (-2.5, -1) -- (-3.5, -2) (-3.5, -2) -- (-3.5, -3) ; \draw[thick, red, rounded corners] (-2.25, 0.75) -- (-1, 2) -- (2.5, 2) -- (4.5, 0) (-2.75,1.25) -- (-1.5, 2.5) -- (-1.5, 3) (-3.5, 2) -- (-4.5, 1) -- (-4.5, -1) -- (-3.5, -2) (-4.5, 0) -- (-7.5, 0) ; \draw[thick, blue, fill] (-4.5, 0) -- (-5.5, 1) circle (2pt) -- (-5.5, 3.5) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- (-6.5, 3.5) (-4.5, 0) -- (-5.5, -1) circle (2pt) -- (-5.5, -3) (-5.5, -1) -- (-7.5, -1) (-6.5, -1) circle (2pt) -- (-6.5, -3) ; \draw[thick, blue, rounded corners] (-4.5, 0) -- (-2.5, 0) -- (-1.5, 0) (-1.5, 0) -- (-1.5, -3) (-1.5, 0) -- (-0.5, 0) -- (3.5, 0) -- (4.5, 0) (4.5, 0) -- (4.5, -3) (4.5, 0) -- ++(0,3) ; \draw[thick, green] (-3.5, 2) -- (-3.5, -2) (-4.5, 3) -- (-3.5, 2) -- (-2.5, 3) (-4.5, -3) -- (-3.5, -2) -- (-2.5, -3) ; \draw[thick, fill=white] (-4.5, 0) circle (2pt) (-3.5, 2) circle (2pt) (-3.5, -2) circle (2pt) (-1.5, 0) circle (2pt) (4.5, 0) circle (2pt) (5.5, 0) circle (2pt) (6.5, 0) ; \end{tikzpicture} \arrow[ld, sloped, "\Move{II}^2"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -3) rectangle (6.5, 4); \clip[rounded corners=5] (-7.5, -3) rectangle (6.5, 4); \fill[opacity=0.1, rounded corners] (-0.5, 1.5) -- (-0.5, 5) -- (3, 5) -- (3, 1.5) -- cycle; \draw[thick, red, fill] (-1.5, 0) -- (-0.5, -1) circle (2pt) -- (-0.5, -3) (0.5, -1) circle (2pt) -- (0.5, -3) (1.5, -1) circle (2pt) -- (1.5, -3) (2.5, -1) circle (2pt) -- (2.5, -3) (4.5, 0) -- (3.5, -1) circle (2pt) -- (3.5, -3) (-0.5, -1) -- (3.5, -1) (4.5, 0) -- (5.5, 0) circle (2pt) -- +(1,1) +(0,0) -- ++(0.5, -0.5) circle (2pt) -- +(0.5, 0.5) +(0,0) -- +(0.5, -0.5) (-1.5, 0) -- (-2.5, 1) -- (-3.5, 2) (-1.5, 0) -- (-2.5, -1) -- (-3.5, -2) (-3.5, -2) -- (-3.5, -3) ; \draw[thick, red, rounded corners] (-1.5, 2) -- (-1.5, 1) -- (3.5, 1) -- (4.5, 0) (-3.5, 2) -- (-4.5, 1) -- (-4.5, -1) -- (-3.5, -2) (-4.5, 0) -- (-7.5, 0) (-3.5, 2) -- (-1.5, 2) (-1.5, 2) -- (0.5, 2) (0.5, 2) -- (0.5, 4) (0.5, 2) -- (2.5, 2) -- (2.5, 4) ; \draw[thick, blue, fill] (-4.5, 0) -- (-5.5, 1) circle (2pt) -- (-5.5, 4) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- 
(-6.5, 4) (-4.5, 0) -- (-5.5, -1) circle (2pt) -- (-5.5, -3) (-5.5, -1) -- (-7.5, -1) (-6.5, -1) circle (2pt) -- (-6.5, -3) ; \draw[thick, blue, rounded corners] (-4.5, 0) -- (-2.5, 0) -- (-1.5, 0) (-1.5, 0) -- (-1.5, -3) (-1.5, 0) -- (-0.5, 0) -- (3.5, 0) -- (4.5, 0) (4.5, 0) -- (4.5, -3) (4.5, 0) -- (4.5,4) ; \draw[thick, green] (-3.5, 2) -- (-3.5, -2) (-3.5, 4) -- (-3.5, 3) -- (-3.5, 2) to[out=-30,in=-150] (-1.5, 2) to[out=-30, in=-150] (0.5, 2) (-4.5, -3) -- (-3.5, -2) -- (-2.5, -3) ; \draw[thick, green, fill] (-3.5, 3.5) circle (2pt) (-3.5, 2.5) circle (2pt) ; \draw[thick, green, rounded corners] (0.5, 2) -- (-1.5, 3.5) -- (-3.5, 3.5) (-1.5, 2) -- (-1.5, 2.5) -- (-3.5, 2.5) (0.5, 2) -- (1.5, 3) -- (1.5, 4) ; \draw[thick, fill=white] (-4.5, 0) circle (2pt) (-1.5, 0) circle (2pt) (4.5, 0) circle (2pt) (-3.5, 2) circle (2pt) (-1.5, 2) circle (2pt) (0.5, 2) circle (2pt) (-3.5, -2) circle (2pt) ; \end{tikzpicture} \arrow[r,sloped, "\Move{IV}"',"\partial\text{-Leg. iso.}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -3) rectangle (6.5, 4); \clip[rounded corners=5] (-7.5, -3) rectangle (6.5, 4); \draw[thick, red, fill] (-0.5, 0) circle (2pt) -- (-0.5, -3) (0.5, 0) circle (2pt) -- (0.5, -3) (1.5, 0) circle (2pt) -- (1.5, -3) (2.5, 0) circle (2pt) -- (2.5, -3) (4.5, 1) -- (3.5, 0) circle (2pt) -- (3.5, -3) (-0.5, 0) -- (3.5, 0) (4.5, 1) -- (5.5, 1) circle (2pt) -- +(1,1) +(0,0) -- ++(0.5, -0.5) circle (2pt) -- +(0.5, 0.5) +(0,0) -- +(0.5, -0.5) (-2.5, 0) -- (-3.5, 1) (-2.5, 0) -- (-3.5, -1) (-3.5, -1) -- (-3.5, -3) ; \draw[thick, red, rounded corners] (-2.5, 0) -- (-1.5, 0) -- (-0.5, 0) (-2.5, 2) -- (-1.5, 2) -- (-0.5, 2) -- (3.5, 2) -- (4.5, 1) (-3.5, 1) -- (-4.5, 0) (-4.5, 0) -- (-3.5, -1) (-4.5, 0) -- (-7.5, 0) (-3.5, 1) -- (-3.5, 2) -- (-2.5, 2) (-2.5, 2) -- (-2.5, 3) -- (-2.5, 4) ; \draw[thick, blue, fill] (-3.5, 1) -- (-5.5, 1) circle (2pt) -- (-5.5, 4) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- (-6.5, 4) (-3.5, -1) -- (-5.5, -1) circle (2pt) -- (-5.5, -3) (-5.5, -1) -- (-7.5, -1) (-6.5, -1) circle (2pt) -- (-6.5, -3) ; \draw[thick, blue, rounded corners] (-3.5, -1) -- (-3.5, 1) (-1.5, -3) -- (-1.5, -2) -- (-2.5, -1) -- (-3.5, -1) (-3.5, 1) -- (-1.5, 1) -- (-0.5, 1) -- (3.5, 1) -- (4.5, 1) (4.5, 0) -- (4.5, -3) (4.5, 0) -- (4.5,4) ; \draw[thick, green] (-4.5, 0) -- (-2.5, 0) (-4.5, 4) -- (-4.5, 0) (-4.5, -3) -- (-4.5, 0) (-2.5, -3) -- (-2.5, 0) ; \draw[thick, green, fill] (-4.5, 3.5) circle (2pt) (-4.5, 2.5) circle (2pt) ; \draw[thick, green, rounded corners] (-3.5, 4) -- (-3.5, 3.5) -- (-4.5, 3.5) (-2.5, 2) -- (-3, 2.5) -- (-4.5, 2.5) (-2.5, 0) -- (-2.5, 1) -- (-2.5, 2) (-2.5, 2) -- (-1.5, 3) -- (-1.5, 4) ; \draw[thick, fill=white] (-4.5, 0) circle (2pt) (-2.5, 0) circle (2pt) (4.5, 1) circle (2pt) (-3.5, 1) circle (2pt) (-3.5, -1) circle (2pt) (-2.5, 2) circle (2pt) ; \end{tikzpicture} \arrow[ld, sloped, "\Move{II}^*"] \\ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-7.5, -4) rectangle (7.5, 4); \clip[rounded corners=5] (-7.5, -4) rectangle (7.5, 4); \fill[opacity=0.1, rounded corners] (-8, 0.5) -- (-3, 0.5) -- (-3, -2.5) -- (8,-2.5) -- (8, -5) -- (-8, -5) --cycle; ; \draw[thick, red, fill] (0.5, 0) circle (2pt) -- (0.5, -4) (1.5, 0) circle (2pt) -- (1.5, -4) (2.5, 0) circle (2pt) -- (2.5, -4) (3.5, 0) circle (2pt) -- (3.5, -4) (5.5, 1) -- (4.5, 0) circle (2pt) -- (4.5, -4) (0.5, 0) -- (4.5, 0) (5.5, 1) -- (6.5, 1) circle (2pt) -- +(1,1) +(0,0) -- ++(0.5, -0.5) circle (2pt) -- +(0.5, 0.5) +(0,0) -- +(0.5, 
-0.5) (-1.5, 0) -- (-3.5, 1) (-1.5, 0) -- (-2.5, -1) (-3.5, -3) -- (-3.5, -4) (-3.5, -1) -- (-2.5, -1) circle (2pt) (-3.5, -2) -- (-2.5, -1) (-2, -0.5) circle (2pt) ; \draw[thick, red, rounded corners] (-1.5, 0) -- (0.5, 0) (-2.5, 2) -- (-1.5, 2) -- (-0.5, 2) -- (4.5, 2) -- (5.5, 1) (-3.5, 1) -- (-4.5, 0) (-4.5, 0) -- (-3.5, -1) (-4.5, 0) -- (-7.5, 0) (-3.5, 1) -- (-3.5, 2) -- (-2.5, 2) (-1.5, 2) -- (-1.5, 4) (-3.5, -3) to[out=135, in=-135] (-3.5, -2) (-3.5, -2) to[out=135, in=-135] (-3.5, -1) (-3.5, -3) -- (-2, -1.5) -- (-2, -0.5) ; \draw[thick, blue, fill] (-3.5, 1) -- (-5.5, 1) circle (2pt) -- (-5.5, 4) (-5.5, 1) -- (-7.5, 1) (-6.5, 1) circle (2pt) -- (-6.5, 4) (-3.5, -1) -- (-7.5, -1) (-3.5, -1) -- (-3.5, -3) ; \draw[thick, blue, rounded corners] (-5.5, -4) -- (-5.5, -3) -- (-3.5, -3) (-3.5, -1) -- (-3.5, 1) (-0.5, -4) -- (-0.5, -3) -- (-2.5, -3) -- (-3.5, -3) (-3.5, 1) -- (-1.5, 1) -- (-0.5, 1) -- (3.5, 1) -- (5.5, 1) (5.5, 0) -- (5.5, -4) (5.5, 0) -- (5.5,4) (-3.5, -2) -- (-6.5, -2) -- (-6.5, -4) ; \draw[thick, green] (-4.5, 0) -- (-1.5, 0) (-4.5, 4) -- (-4.5, 0) (-4.5, -4) -- (-4.5, 0) (-1.5, -4) -- (-1.5, 0) (-1.5, 0) -- (-1.5, 2) ; \draw[thick, green, fill] (-4.5, 3.5) circle (2pt) (-4.5, 2.5) circle (2pt) ; \draw[thick, green, rounded corners] (-2.5, 4) -- (-2.5, 3.5) -- (-4.5, 3.5) (-1.5, 2) -- (-2, 2.5) -- (-4.5, 2.5) (-1.5, 2) -- (-0.5, 3) -- (-0.5, 4) ; \end{tikzpicture} \arrow[r, "\partial\text{-Leg. iso.}"] & \begin{tikzpicture}[baseline=10ex, xscale=0.4, yscale=0.4] \draw[rounded corners=5, thick] (0, 0) rectangle (14, 7); \clip[rounded corners=5] (0, 0) rectangle (14, 7); \fill[opacity=0.1, rounded corners] (-1, -1) rectangle (1.5, 4.5) (12.5, 4.5) rectangle (14.5, -1); \draw[blue,thick,rounded corners] (0,3)--(6,3)--(8,2)--(9,2)--(10,1) (10,1)--(11,2)--(12,2)--(13,1) (13,1)--(13.5,2)--(14,2) (0,5)--(6,5)--(8,6)--(14,6) (1,3)--(1,5) (4,3)--(4,5) (10,0)--(10,1) (13,0)--(13,1); \draw[red,thick,rounded corners] (0,1)--(6.5,1) (7.5,1)--(14,1) (0,4)--(0.5,4)--(1,3) (1,3)--(2,4)--(3,4)--(4,3) (4,3)--(5,4)--(9,4)--(10,3) (10,3)--(11,4)--(12,4)--(13,3) (13,3)--(13.5,4)--(14,4) (2,0)--(2,1) (3,0)--(3,1) (5,0)--(5,1) (6,0)--(6,1) (8,0)--(8,1) (9,0)--(9,1) (11,0)--(11,1) (12,0)--(12,1) (1,1)--(1,3) (4,1)--(4,3) (10,1)--(10,3) (13,1)--(13,3); \draw[red,thick] (6.5,1)--(7.5,1); \draw[red, thick] (7, 1) -- (7, 0); \draw[green,thick,rounded corners] (0,2)--(0.5,2)--(1,1) (1,1)--(2,2)--(3,2)--(4,1) (4,1)--(5,2)--(6,2)--(8,3)--(14,3) (0,6)--(6,6)--(8,5)--(14,5) (1,0)--(1,1) (4,0)--(4,1) (10,3)--(10,5) (13,3)--(13,5); \draw[fill, blue, thick] (1,5) circle (2pt) (4,5) circle (2pt); \draw[fill, red, thick] (2,1) circle (2pt) (3,1) circle (2pt) (5,1) circle (2pt) (6,1) circle (2pt) (8,1) circle (2pt) (9,1) circle (2pt) (11,1) circle (2pt) (12,1) circle (2pt); \draw[fill, green, thick] (10,5) circle (2pt) (13,5) circle (2pt); \draw[fill=white, thick] (1,1) circle (2pt) (4,1) circle (2pt) (10,1) circle (2pt) (13,1) circle (2pt) (1,3) circle (2pt) (4,3) circle (2pt) (10,3) circle (2pt) (13,3) circle (2pt); \end{tikzpicture} \end{tikzcd} \] On the other hand, we have the following move \[ \begin{tikzcd}[column sep=3pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (1,0) -- (2,1) (1,0) -- (2,-1) (1,0) -- (0,0) node[midway, above, black, 
opacity=1] {$\gamma'$}; \draw[blue, thick, fill] (0, 0) circle (2pt) -- (0, -3); \draw[red, thick] (1, 3) -- (1, -3); \draw[red, thick] (1,0) -- (4, 0); \draw[blue, thick, fill] (-0.5,0) -- (1, 0) (1,0) -- ++(1, 1) circle (2pt) ++(0,0) -- +(0, 2) ++(0,0) -- ++(1,0) circle (2pt) -- +(0,2) ++(0,0) -- +(1,0) (1,0) -- ++(1, -1) circle (2pt) ++(0,0) -- +(2,0) ++(0,0) -- ++(0,-1) circle (2pt) -- +(2,0) ++(0,0) -- ++(0,-1) ; \draw[fill=white, thick] (1,0) circle (2pt); \end{tikzpicture} \arrow[r, "\mu_{\gamma'}"] & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \fill[opacity=0.1, rounded corners] (-1, 1.75) -- (2.75, 1.75) -- (2.75, -1.75) -- (-1, -1.75) -- (-1, -4) -- (7,-4) -- (7, 4) -- (-1, 4) -- cycle; \draw[thick, red] (1,3) -- (1,2) rectangle (3,0) rectangle (1,-2) -- (1,-3) (3,2) -- (5,0) -- (3,-2) (5,0) -- (6,0) ; \draw[thick, blue, fill] (-0.5, 0) -- (0,0) circle (2pt) -- (1,0) (1,2) -- ++(2,2) (3,2) -- ++(2,2) (5,0) -- ++(2,2) (5,0) -- ++(2,-2) (3,-2) -- ++(2,-2) (1,-2) -- ++(2,-2) (1,2) -- (2, 1.25) circle (2pt) -- (3, 2) (2,1.25) -- (2, 0.75) (1,0) -- (2, 0.75) circle (2pt) -- (3, 0) (1,0) -- (1.75, -1) circle (2pt) -- (1, -2) (1.75, -1) -- (2.25, -1) (3,0) -- (2.25, -1) circle (2pt) -- (3,-2) ; \draw[thick, blue, rounded corners] (0,0) -- (0, 2) -- (1,2) (0, -4) -- (0, -2) -- (1, -2) (3, -2) to[out=60,in=-60] (3,0) (3, 2) to[out=-75, in=165] (5,0) ; \draw[thick, fill=white] (1,0) circle (2pt) (1,-2) circle (2pt) (1,2) circle (2pt) (3,0) circle (2pt) (3,-2) circle (2pt) (3,2) circle (2pt) (5,0) circle (2pt) ; \end{tikzpicture} \arrow[r, "\partial\text{-Leg. iso.}"]& \begin{tikzpicture}[baseline=-.5ex, scale=0.5, yscale=-1] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \draw[blue, thick, fill] (0, 0) circle (2pt) -- (0, -3); \draw[red, thick] (1, 3) -- (1, -3); \draw[red, thick] (1,0) -- (4, 0); \draw[blue, thick, fill] (-0.5,0) -- (1, 0) (1,0) -- ++(1, 1) circle (2pt) ++(0,0) -- +(0, 2) ++(0,0) -- ++(1,0) circle (2pt) -- +(0,2) ++(0,0) -- +(1,0) (1,0) -- ++(1, -1) circle (2pt) ++(0,0) -- +(2,0) ++(0,0) -- ++(0,-1) circle (2pt) -- +(2,0) ++(0,0) -- ++(0,-1) ; \draw[fill=white, thick] (1,0) circle (2pt); \end{tikzpicture} \end{tikzcd} \] which flips up the downward leg. Finally, the downward and upward legs can be interchanged via Legendrian mutations and therefore the $N$-graph $\ngraphfont{G}(\widetilde{\dynD}_n)'$ is Legendrian mutation equivalent to $\ngraphfont{G}(\widetilde{\dynD}_n)$ up to $\partial$-Legendrian isotopy. \subsection{Equivalence between $\tilde{\mathscr{G}}(\widetilde{\dynD}_4)$ and $\mathscr{G}(\widetilde{\dynD}_4)$}\label{appendix:affine D4} It is enough to show the equivalence between $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ and $\ngraphfont{G}^{\mathsf{brick}}(\widetilde{\dynD}_4)$ by Lemma~\ref{lemma:Ngraphs of affine Dn}. 
\[ \begin{tikzcd} (\ngraphfont{G}^{\mathsf{brick}}(\widetilde{\dynD}_4),\ngraphfont{B}^{\mathsf{brick}}(\widetilde{\dynD}_4))= \begin{tikzpicture}[baseline=10ex, xscale=0.6, yscale=0.4] \draw[rounded corners=5, thick] (0, 0) rectangle (9, 7); \clip[rounded corners=5] (0, 0) rectangle (9, 7); \fill[opacity=0.1, rounded corners] (-1, -1) rectangle (1.5, 4.5) (7.5, 4.5) rectangle (9.5, -1); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (1,5)--(4,5) (5,5)--(8,5) (2,1)--(3,1) (6,1)--(7,1); \draw (2.5, 5) node[above] {$\gamma$}; \draw[color=cyan, line cap=round, line width=5, opacity=0.5] (3,1)--(6,1) (4,1)--(4,5) (5,1)--(5,5); \draw[blue,thick,rounded corners] (0,3)--(4,3) (4,3)--(5,1) (5,1)--(6,2)--(7,2)--(8,1) (8,1)--(8.5,2)--(9,2) (0,5)--(4,5) (4,5)--(5,6)--(9,6) (1,3)--(1,5) (4,3)--(4,5) (5,0)--(5,1) (8,0)--(8,1); \draw[red,thick,rounded corners] (0,1)--(9,1) (0,4)--(0.5,4)--(1,3) (1,3)--(2,4)--(3,4)--(4,3) (4,3)--(5,3) (5,3)--(6,4)--(7,4)--(8,3) (8,3)--(8.5,4)--(9,4) (2,0)--(2,1) (3,0)--(3,1) (6,0)--(6,1) (7,0)--(7,1) (1,1)--(1,3) (4,1)--(4,3) (5,1)--(5,3) (8,1)--(8,3); \draw[green,thick,rounded corners] (0,2)--(0.5,2)--(1,1) (1,1)--(2,2)--(3,2)--(4,1) (4,1)--(5,3) (5,3)--(9,3) (0,6)--(4,6)--(5,5)--(9,5) (1,0)--(1,1) (4,0)--(4,1) (5,3)--(5,5) (8,3)--(8,5); \draw[fill, blue, thick] (1,5) circle (2pt) (4,5) circle (2pt); \draw[fill, red, thick] (2,1) circle (2pt) (3,1) circle (2pt) (6,1) circle (2pt) (7,1) circle (2pt); \draw[fill, green, thick] (5,5) circle (2pt) (8,5) circle (2pt); \draw[fill=white, thick] (1,1) circle (2pt) (4,1) circle (2pt) (5,1) circle (2pt) (8,1) circle (2pt) (1,3) circle (2pt) (4,3) circle (2pt) (5,3) circle (2pt) (8,3) circle (2pt); \end{tikzpicture} \end{tikzcd} \] By cutting out the shaded region and taking a Legendrian mutation on $\gamma$, we have a degenerate $N$-graph below, which is $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ up to $\partial$-Legendrian isotopy and Legendrian mutations. 
\[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw (0,0) circle (3); \clip (0,0) circle (3); \foreach \r in {1, -1} { \begin{scope}[yscale=-1,xscale=\r] \draw[fill, red, thick] (0,0) -- (-3,-3) (0,0) -- (45:2) circle (2pt) (45:2) -- ++(0,3) (45:2) -- ++(3,0) (45:2) ++ (0.75,0) circle (2pt) -- ++(0,2) ; \end{scope} \draw[Dble={blue and green},line width=2] (-3,0) -- (0,0); \draw[Dble={blue and green},line width=2] (3,0) -- (0,0); \draw[Dble={green and blue},line width=2] (0,-3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0); \draw[Dble={blue and green},line width=2] (0,1) -- ++(-45:-3); \draw[Dble={blue and green},line width=2] (0,1) -- ++(45:3); \draw[Dble={blue and green},line width=2] (0,1) ++(45:1) -- ++(-45:-2); } \end{tikzpicture} \arrow[r, "\Move{DII}"]& \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw[thick, rounded corners] (-3,-3) rectangle (3,3); \clip[rounded corners] (-3,-3) rectangle (3,3); \draw[thick, red, fill] (-1,0) -- ++(-45:-5) (1,0) -- ++(45:5) (-1,0) -- ++(0,-1) circle (2pt) -- +(-3,0) ++(0,0) -- +(0,-3) ++(-1,0) circle (2pt) -- +(0,-3) (1,0) -- ++(0,-1) circle (2pt) -- +(3,0) ++(0,0) -- +(0, -3) ++(1,0) circle (2pt) -- +(0,-3) ; \draw[thick, red] (-1,0) to[out=30, in=150] (1,0) (-1, 0) to[out=-30, in=-150] (1,0); \draw[Dble={blue and green},line width=2] (-3,0) -- (-1,0); \draw[Dble={blue and green},line width=2] (3,0) -- (1,0); \draw[Dble={blue and green},line width=2] (0,0) -- (-1,0); \draw[Dble={blue and green},line width=2] (0,0) -- (1,0); \draw[Dble={green and blue},line width=2] (-1,3) -- (-1,0); \draw[Dble={green and blue},line width=2] (1,2) -- (1,0); \draw[Dble={blue and green},line width=2] (1,2) -- ++(-45:-3); \draw[Dble={blue and green},line width=2] (1,2) -- ++(45:3); \draw[Dble={blue and green},line width=2] (-1,0) -- (0,-1); \draw[Dble={blue and green},line width=2] (1,0) -- (0,-1); \draw[Dble={blue and green},line width=2] (0,-1) -- (0,-3); \end{tikzpicture} \arrow[r,"\Move{DI}^2"]& \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw[thick, rounded corners, fill=black!10] (-3,-3) rectangle (4,3); \clip[rounded corners] (-3,-3) rectangle (4,3); \fill[white, rounded corners] (-1.5, 2.5) -- (3.5, 2.5) -- (3.5, -2) -- (0.5, -2) -- (0.5, 1) -- (-1.5, 1) -- cycle; \draw[thick, red, fill] (-2,0) -- ++(-45:-5) (2,0) -- ++(45:5) (-2,-3) -- (-2, 0) (-1, -3) -- (-1, 2) circle (2pt) (0, -3) -- (0, 2) circle (2pt) (-2, 0) -- (0,0) (2,0) -- ++(0,-1) circle (2pt) -- +(3,0) ++(0,0) -- +(0, -3) ++(1,0) circle (2pt) -- +(0,-3) ; \draw[thick, red, rounded corners] (-2,0) -- (-1,2) (-1,2) -- (1,2) -- (2,0); \draw[thick, red] (0, 0) to[out=-30, in=-150] (2,0); \draw[Dble={blue and green},line width=2] (-3,0) -- (-2,0); \draw[Dble={blue and green},line width=2] (5,0) -- (2,0); \draw[Dble={blue and green},line width=2] (1,0) -- (0,0); \draw[Dble={blue and green},line width=2] (1,0) -- (2,0); \draw[Dble={green and blue},line width=2] (-2,3) -- (-2,0); \draw[Dble={green and blue},line width=2] (2,2) -- (2,0); \draw[Dble={blue and green},line width=2] (2,2) -- ++(-45:-3); \draw[Dble={blue and green},line width=2] (2,2) -- ++(45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (1,-1); \draw[Dble={blue and green},line width=2] (2,0) -- (1,-1); \draw[Dble={blue and green},line width=2] (1,-1) -- (1,-3); \draw[line width=2, blue] (-2,0) to[out=45, in=135] (-1,0); \draw[line width=2, blue] (-1,0) to[out=45, in=135] (0,0); \draw[line width=2, blue] (-2,0) to[out=-45, in=-135] (-1,0); \draw[line width=2, blue] (-1,0) to[out=-45, 
in=-135] (0,0); \begin{scope}[yshift=0.1cm] \draw[line width=2, green] (-2,0) to[out=45, in=135] (-1,0); \draw[line width=2, green] (-1,0) to[out=45, in=135] (0,0); \end{scope} \begin{scope}[yshift=-0.1cm] \draw[line width=2, green] (-2,0) to[out=-45, in=-135] (-1,0); \draw[line width=2, green] (-1,0) to[out=-45, in=-135] (0,0); \end{scope} \end{tikzpicture} \end{tikzcd} \]

\section{Introduction}
\input{section1_introduction.tex}

\section{Cluster algebras}\label{sec:cluster algebras}
\input{section2_cluster_algebras.tex}

\section{Legendrians and \texorpdfstring{$N$-graphs}{N-graphs}}\label{sec:N-graph}
\input{section3_Ngraphs_and_seeds.tex}

\section{Lagrangian fillings for Legendrian links of finite or affine type}\label{sec:N-graph of finite or affine type}
\input{section4_Lagrangian_fillings.tex}

\section{Foldings}\label{section:folding}
\input{section5_folding.tex}

\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}

\subsection{Background}
Legendrian knots are central objects in the study of 3-dimensional contact manifolds. The classification of Legendrian knots is important in its own right, and it also plays a prominent role in the classification of 4-dimensional Weinstein manifolds. The classical invariants of Legendrian knots are the Thurston--Bennequin number and the rotation number~\cite{Gei2008}, which may distinguish Legendrian knots having the same underlying knot type. There are also non-classical invariants, including the Legendrian contact algebra defined via Floer-theoretic methods~\cite{EGH2000, Che2002} and the space of constructible sheaves defined via microlocal analysis~\cite{GKS2012, STZ2017}. These non-classical invariants distinguish the Chekanov pair, a pair of Legendrian knots of knot type $m5_2$ having the same classical invariants.

Recently, the study of exact Lagrangian fillings for Legendrian links has been extremely active. In the context of the Legendrian contact algebra, an exact Lagrangian filling gives an augmentation through the functorial viewpoint~\cite{EHK2016}. There are several levels of equivalence between augmentations and constructible sheaves for Legendrian links, ranging from counting results to categorical equivalence~\cite{NRSSZ2015}. Using these ideas of augmentations and constructible sheaves, infinitely many fillings have been constructed for certain Legendrian links~\cite{CG2020, GSW2020b, CZ2020}. The known methods for constructing Lagrangian fillings of Legendrian links can be summarized as follows:
\begin{enumerate}
\item Decomposable Lagrangian fillings via pinching sequences and Legendrian loops \cite{EHK2016, Kal2006, CN2021}.
\item Alternating Legendrians and their conjugate Lagrangian fillings \cite{STWZ2019}.
\item Legendrian weaves via $N$-graphs and Legendrian mutations \cite{TZ2018, CZ2020}.
\item Donaldson--Thomas transformations on augmentation varieties \cite{SW2019, GSW2020a, GSW2020b}.
\end{enumerate}

Cluster algebras, introduced by Fomin and Zelevinsky~\cite{FZ1_2002}, play a crucial role in the above constructions and applications. More precisely, the space of augmentations and the moduli of constructible sheaves of microlocal rank one adapted to Legendrian links admit the structures of a cluster pattern and a $Y$-pattern, respectively~\cite{STWZ2019, SW2019, GSW2020a}. Note that a $Y$-seed of a cluster algebra consists of a quiver whose vertices are decorated with variables, called \emph{coefficients}. An involutory operation at each vertex, called \emph{mutation}, generates all seeds of the $Y$-pattern.
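For orientation, we recall one standard form of the $Y$-seed mutation rule; sign conventions differ across the literature, so the version below, following Fomin--Zelevinsky's coefficient mutation, should be taken as illustrative. Writing $B=(b_{ij})$ for the exchange matrix of the quiver and $[b]_+=\max(b,0)$, the mutation $\mu_k$ at a vertex $k$ transforms the quiver by the usual quiver mutation and replaces the coefficients $(y_j)$ by
\[
\mu_k(y_j)=
\begin{cases}
y_k^{-1} & \text{if } j=k,\\
y_j\,y_k^{[b_{kj}]_+}(1+y_k)^{-b_{kj}} & \text{if } j\neq k.
\end{cases}
\]
In particular, applying $\mu_k$ twice recovers the original $Y$-seed, which is the involutivity mentioned above.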
The main point is to identify the mutation in the $Y$-pattern with an operation on the space of Lagrangian fillings. This geometric operation is deeply related to Lagrangian surgery~\cite{Pol1991} and the wall-crossing phenomenon~\cite{Aur2007}. Indeed, a Legendrian torus link of type $(2,n)$ admits as many exact Lagrangian fillings, up to exact Lagrangian isotopy, as the Catalan number $C_n=\frac{1}{n+1}\binom{2n}{n}$ \cite{Pan2017, STWZ2019, TZ2018}. Interestingly enough, the Catalan number $C_n$ is precisely the number of seeds in a cluster pattern of Dynkin type $\dynkinfont{A}_{n-1}$. There are also Legendrian links corresponding to the finite Dynkin types $\dynkinfont{D}\dynkinfont{E}$ and the affine Dynkin types $\widetilde{\dynD}\widetilde{\dynE}$ \cite{GSW2020b}. A conjecture of Casals \cite[Conjecture~5.1]{Cas2020} says that the number of distinct exact embedded Lagrangian fillings (up to exact Lagrangian isotopy) for Legendrian links of type $\dynkinfont{ADE}$ is exactly the same as the number of seeds of the corresponding cluster algebras. Furthermore, it is also conjectured by Casals \cite[Conjecture~5.4]{Cas2020} that for Legendrian links of type $\dynkinfont{A}_{2n-1}, \dynkinfont{D}_{n+1}, \dynkinfont{E}_6$ and $\dynkinfont{D}_4$, the Lagrangian fillings having certain $\Z/2\Z$- or $\Z/3\Z$-symmetry form the cluster patterns of type $\dynkinfont{B}_n, \dynkinfont{C}_n, \dynkinfont{F}_4$ and $\dynkinfont{G}_2$, respectively, which are the Dynkin diagrams obtained by \emph{folding} as explained in~\cite{FZ_Ysystem03}.

\subsection{The results}
\subsubsection{Lagrangian fillings for Legendrians of type $\dynkinfont{ADE}$ or $\widetilde{\dynD}\widetilde{\dynE}$}
Our main result is that there are at least as many Lagrangian fillings for Legendrian links of finite type as seeds in the corresponding cluster structures. We use $N$-graphs, introduced by Casals and Zaslow \cite{CZ2020}, to construct the Lagrangian fillings. An $N$-graph $\ngraphfont{G}$ on $\mathbb{D}^2$ gives a Legendrian surface $\Lambda(\ngraphfont{G})$ in $J^1\mathbb{D}^2$, while the boundary $\partial \ngraphfont{G}$ on $\mathbb{S}^1$ induces a Legendrian link~$\lambda(\partial \ngraphfont{G})$. The projection of $\Lambda(\ngraphfont{G})$ along the Reeb direction then becomes a Lagrangian filling of~$\lambda(\partial \ngraphfont{G})$.

As mentioned above, we interpret an $N$-graph as a $Y$-seed in the corresponding $Y$-pattern. A one-cycle in the Legendrian surface $\Lambda(\ngraphfont{G})$ corresponds to a vertex of the quiver, and a signed intersection between one-cycles gives an arrow between the corresponding vertices. From constructible sheaves adapted to $\Lambda(\ngraphfont{G})$, one can assign a monodromy to each one-cycle, which becomes the coefficient at each vertex. There is an operation, called a \emph{Legendrian mutation} $\mu_\gamma$, on an $N$-graph $\ngraphfont{G}$ along a one-cycle $[\gamma]\in H_1(\Lambda(\ngraphfont{G}))$, which is the counterpart of the mutation on the $Y$-pattern; see Proposition~\ref{proposition:equivariance of mutations}. The delicate and challenging part is that we do not know whether Legendrian mutations are always possible. Simply put, this is because the mutation on the cluster side is algebraic, whereas the Legendrian mutation is rather geometric.

The main idea of our construction is to consider $N$-graphs $\ngraphfont{G}(a,b,c)$ and $\ngraphfont{G}(\widetilde{\dynD}_n)$ bounding Legendrian links $\lambda(a,b,c)$ and $\lambda(\widetilde{\dynD}_n)$, respectively.
\begin{align*} \lambda(a,b,c)= \begin{tikzpicture}[baseline=5ex,scale=0.8] \draw[thick] (0,0) to[out=0,in=180] (1,0.5) (2,0.5) to (2.5,0.5) (3.5,0.5) to (4,0.5) (5,0.5) to (5.5,0.5); \draw[thick] (0,0.5) to[out=0,in=180] (1,0) to (2.5,0) (3.5,0) to (5.5,0); \draw[thick] (0,1) to[out=0,in=180] (1,1) (2,1) to (4,1) (5,1) to (5.5,1); \draw[thick] (1,0.4) rectangle node {$\scriptstyle a$} (2, 1.1); \draw[thick] (2.5,-0.1) rectangle node {$\scriptstyle{b-1}$} (3.5, 0.6); \draw[thick] (4,0.4) rectangle node {$\scriptstyle c$} (5, 1.1); \draw[thick] (0,1) to[out=180, in=0] (-0.5,1.25) to[out=0,in=180] (0,1.5) to (5.5,1.5) to[out=0,in=180] (6,1.25) to[out=180,in=0] (5.5,1); \draw[thick] (0,0.5) to[out=180, in=0] (-1,1.25) to[out=0,in=180] (0,1.75) to (5.5,1.75) to[out=0,in=180] (6.5,1.25) to[out=180,in=0] (5.5,0.5); \draw[thick] (0,0) to[out=180, in=0] (-1.5,1.25) to[out=0,in=180] (0,2) to (5.5,2) to[out=0,in=180] (7,1.25) to[out=180,in=0] (5.5,0); \end{tikzpicture} \end{align*} \begin{align*} \lambda({{\widetilde{\dynD}}_n})= \begin{tikzpicture}[baseline=-0.5ex,scale=0.8] \draw[thick] (0,0) to[out=0,in=180] (1,-0.5) to[out=0,in=180] (2,-1) to[out=0,in=180] (3,-0.5) to[out=0,in=180] (4,0) to[out=0,in=180] (9,0); \draw[thick] (0,-0.5) to[out=0,in=180] (1,0) to[out=0,in=180] (3,0) to[out=0,in=180] (4,-0.5) (5,-0.5) to[out=0,in=180] (6,-0.5) to[out=0,in=180] (7,-1) to[out=0,in=180] (8,-0.5) to[out=0,in=180] (9,-0.5); \draw[thick] (0,-1) to[out=0,in=180] (1,-1) to[out=0,in=180] (2,-0.5) to[out=0,in=180] (3,-1) to[out=0,in=180] (4,-1) (5,-1) to[out=0,in=180] (6,-1.5) to[out=0,in=180] (8,-1.5) to[out=0,in=180] (9,-1); \draw[thick] (0,-1.5) to[out=0,in=180] (5,-1.5) to[out=0,in=180] (6,-1) to[out=0,in=180] (7,-0.5) to[out=0,in=180] (8,-1) to[out=0,in=180] (9,-1.5); \draw[thick] (4,-0.4) rectangle node {$\scriptstyle{n-4}$} (5, -1.1); \draw[thick] (0,0) to[out=180,in=0] (-0.5,0.25) to[out=0,in=180] (0,0.5) to[out=0,in=180] (9,0.5) to[out=0,in=180] (9.5,0.25) to[out=180,in=0] (9,0); \draw[thick] (0,-0.5) to[out=180,in=0] (-1,0.25) to[out=0,in=180] (0,0.75) to[out=0,in=180] (9,0.75) to[out=0,in=180] (10,0.25) to[out=180,in=0] (9,-0.5); \draw[thick] (0,-1) to[out=180,in=0] (-1.5,0.25) to[out=0,in=180] (0,1) to[out=0,in=180] (9,1) to[out=0,in=180] (10.5,0.25) to[out=180,in=0] (9,-1); \draw[thick] (0,-1.5) to[out=180,in=0] (-2,0.25) to[out=0,in=180] (0,1.25) to[out=0,in=180] (9,1.25) to[out=0,in=180] (11,0.25) to[out=180,in=0] (9,-1.5); \end{tikzpicture} \end{align*} Note that the above Legendrians $\lambda(a,b,c)$ and $\lambda(\widetilde{\dynD}_n)$ can be obtained by ($-1$)-closure of the following braids, respectively, \begin{align*} \beta(a,b,c)&=\sigma_2\sigma_1^{a+1}\sigma_2\sigma_1^{b+1}\sigma_2\sigma_1^{c+1},& \beta(\widetilde{\dynD}_n)&=\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_1^k\sigma_3\right)\cdot\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_1^\ell\sigma_3\right), \end{align*} where $k=\lfloor \frac{n-3}2\rfloor$ and $\ell=\lfloor \frac{n-4}2\rfloor$, see Section~\ref{sec:N-graph of finite or affine type}. 
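For instance, in the smallest case $n=4$ the formula above gives $k=\lfloor\frac{1}{2}\rfloor=0$ and $\ell=\lfloor\frac{0}{2}\rfloor=0$, so that both factors coincide and
\[
\beta(\widetilde{\dynD}_4)=\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_3\right)^2.
\]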
Those braids provide boundary data of the following $N$-graphs which represent exact Lagrangian fillings of corresponding Legendrian links: \begin{figure}[ht] \subfigure[$(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$\label{N-graph(a,b,c)}]{ $ \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (50:1.5) -- (70:1.75) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \draw[red, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (100:3) (60:1) -- (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (220:3) (180:1) -- (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[blue, thick, dashed] (170:2) -- (180:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (340:3) (300:1) -- (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[blue, thick, dashed] (290:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \curlybrace[]{10}{110}{3.2}; \draw (300:3.5) node[rotate=30] {\small ${c+1}$}; \curlybrace[]{130}{230}{3.2}; \draw (180:3.5) node[rotate=90] {\small $b+1$}; \curlybrace[]{250}{350}{3.2}; \draw (60:3.5) node[rotate=-30] {\small $a+1$}; \end{tikzpicture} $} \qquad \subfigure[$(\ngraphfont{G}(\widetilde{\dynD}_{n}),\ngraphfont{B}(\widetilde{\dynD}_{n}))$\label{N-graph(Dn)}]{ $ \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[rounded corners=5, thick] (-6.5, -2.5) rectangle (6.5, 2.5); \draw (0.5, -2.5) node[above] {$\cdots$} (-0.5, -2.5) node[below] {$\underbrace{\hphantom{\hspace{3cm}}}_{k}$}; \draw (1.5, 2.5) node[below] {$\cdots$} (0.5, 2.5) node[above] {$\overbrace{\hphantom{\hspace{3cm}}}^{\ell}$}; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) (-3.5, 0) -- (-4.5, -1) (-1.5, 0) -- (-0.5, 0) (0.5, 0) -- (1.5, 0) (3.5, 0) -- (2.5, 0) (3.5, 0) -- (4.5, 1) (3.5, 0) -- (4.5, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-4.5, 1) -- (-5.5, 1) (-4.5, -1) -- (-4.5, -1.75) (-1.5, 0) -- (-2.5, 0) (-0.5, 0) -- (0.5, 0) (1.5, 0) -- (2.5, 0) (4.5, 1) -- (4.5, 1.75) (4.5, -1) -- (5.5, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \draw[thick, green] (-2.5, 2.5) -- (0,0); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (3.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} } \draw[thick, fill=white] (-3.5, 0) circle (2pt) (3.5, 0) circle (2pt); \end{tikzpicture} $} \caption{Pairs of $N$-graphs and tuples of cycles} 
\label{fig:N-graphs of (a,b,c) and Dn}
\end{figure}
\noindent Here, the \colorbox{cyclecolor1!50!}{orange}- and \colorbox{cyclecolor2!50!}{green}-shaded edges indicate a tuple of one-cycles $\ngraphfont{B}$ in the corresponding Legendrian surface. See~\S\ref{sec:1-cycles in Legendrian weaves} for details.

The Legendrians $\lambda(a,b,c)$ and $\lambda(\widetilde{\dynD}_n)$ are rainbow closures of \emph{positive braids}. By the work of Shen--Weng \cite{SW2019}, it is straightforward to check that the corresponding cluster structure of the Legendrian $\lambda(\dynkinfont{Z})$ is indeed of type $\dynkinfont{Z}$ for $\dynkinfont{Z}\in\{\dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD},\widetilde{\dynE}\}$. More precisely, the coordinate ring of the moduli space $\cM_1(\lambda(\dynkinfont{Z}))$ of microlocal rank one sheaves in $\Sh^\bullet_{\lambda(\dynkinfont{Z})}(\R^2)$ admits the aforementioned $Y$-pattern structure. On the other hand, the (candidate) Legendrians of type $\widetilde{\dynA}$ are in general not rainbow closures of positive braids. Indeed, Casals--Ng~\cite{CN2021} considered a Legendrian link of type $\widetilde{\dynA}_{1,1}$ which is not the rainbow closure of a positive braid. Hence we cannot directly apply the subsequent argument to Legendrians of type $\widetilde{\dynA}$.

To prove the realizability of each $Y$-seed in the corresponding $Y$-pattern, we use an induction argument on the rank of the type $\dynkinfont{Z}$. More precisely, for each $Y$-pattern, we consider the \emph{exchange graph}, whose vertices are the $Y$-seeds and whose edges connect the vertices related by a single mutation. It is known that the exchange graph of a $Y$-pattern is determined by the Dynkin type $\dynkinfont{Z}$ of the $Y$-pattern when $\dynkinfont{Z}$ is finite or affine (cf. Propositions~\ref{thm_exchange_graph_Dynkin} and~\ref{prop_Y-pattern_exchange_graph}). Because of this, we denote by $\exchange(\Phi(\dynkinfont{Z}))$ the exchange graph of a $Y$-pattern of type $\dynkinfont{Z}$. Here, $\Phi(\dynkinfont{Z})$ is the root system of type $\dynkinfont{Z}$. Note that when $\dynkinfont{Z}$ is of finite type, the exchange graph $\exchange(\Phi(\dynkinfont{Z}))$ becomes the one-skeleton of a polytope, called the (\emph{generalized}) \emph{associahedron} (see Figures~\ref{fig_asso_A3_intro} and~\ref{fig_asso_D4}).
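For instance, in type $\dynkinfont{A}_3$ the number of $Y$-seeds is the Catalan number $C_4=\frac{1}{5}\binom{8}{4}=14$, which matches the $14$ vertices of the three-dimensional associahedron depicted in Figure~\ref{fig_asso_A3_intro}.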
\begin{figure}[ht] \tdplotsetmaincoords{110}{-30} \begin{tikzpicture}% [tdplot_main_coords, scale=0.700000, back/.style={loosely dotted, thin}, edge/.style={color=black, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,fill=black,thick,anchor=base}, gvertex/.style={inner sep=1.2pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}] \coordinate (1) at (-0.50000, -1.50000, 2.00000); \coordinate (2) at (1.50000, 1.50000, -2.00000); \coordinate (3) at (0.50000, 0.50000, 1.00000); \coordinate (4) at (0.50000, 1.50000, 0.00000); \coordinate (5) at (1.50000, 1.50000, -1.00000); \coordinate (6) at (-0.50000, -0.50000, 2.00000); \coordinate (7) at (1.50000, 0.50000, 0.00000); \coordinate (8) at (1.50000, -1.50000, 0.00000); \coordinate (9) at (1.50000, -1.50000, -2.00000); \coordinate (10) at (-1.50000, -1.50000, -2.00000); \coordinate (11) at (-1.50000, -1.50000, 2.00000); \coordinate (12) at (-1.50000, 1.50000, -2.00000); \coordinate (13) at (-1.50000, 1.50000, 0.00000); \coordinate (14) at (-1.50000, -0.50000, 2.00000); \fill[cyclecolor1, opacity = 0.3] (10)--(12)--(13)--(14)--(11)--cycle; \fill[cyclecolor2, opacity = 0.2] (10)--(9)--(8)--(1)--(11)--cycle; \fill[yellow, opacity = 0.3] (10)--(9)--(2)--(12)--cycle; \draw[edge,back] (9) -- (10); \draw[edge,back] (10) -- (11); \draw[edge,back] (10) -- (12); \node[vertex] at (10) {}; \draw[edge] (1) -- (6); \draw[edge] (1) -- (8); \draw[edge] (1) -- (11); \draw[edge] (2) -- (5); \draw[edge] (2) -- (9); \draw[edge] (2) -- (12); \draw[edge] (3) -- (4); \draw[edge] (3) -- (6); \draw[edge] (3) -- (7); \draw[edge] (4) -- (5); \draw[edge] (4) -- (13); \draw[edge] (5) -- (7); \draw[edge] (6) -- (14); \draw[edge] (7) -- (8); \draw[edge] (8) -- (9); \draw[edge] (11) -- (14); \draw[edge] (12) -- (13); \draw[edge] (13) -- (14); \node[vertex] at (1) {}; \node[vertex] at (2) {}; \node[vertex] at (3) {}; \node[vertex] at (4) {}; \node[vertex] at (5) {}; \node[vertex] at (6) {}; \node[vertex] at (7) {}; \node[vertex] at (8) {}; \node[vertex] at (9) {}; \node[vertex] at (11) {}; \node[vertex] at (12) {}; \node[vertex] at (13) {}; \node[vertex] at (14) {}; \foreach \g in {10, 6, 5} { \node[gvertex] at (\g) {}; } \end{tikzpicture}% \hspace{1cm}% \begin{tikzpicture}% [tdplot_main_coords, scale=0.700000, back/.style={loosely dotted, thin}, edge/.style={color=black, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,fill=black,thick,anchor=base}, gvertex/.style={inner sep=1.2pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}] \coordinate (1) at (-0.50000, -1.50000, 2.00000); \coordinate (2) at (1.50000, 1.50000, -2.00000); \coordinate (3) at (0.50000, 0.50000, 1.00000); \coordinate (4) at (0.50000, 1.50000, 0.00000); \coordinate (5) at (1.50000, 1.50000, -1.00000); \coordinate (6) at (-0.50000, -0.50000, 2.00000); \coordinate (7) at (1.50000, 0.50000, 0.00000); \coordinate (8) at (1.50000, -1.50000, 0.00000); \coordinate (9) at (1.50000, -1.50000, -2.00000); \coordinate (10) at (-1.50000, -1.50000, -2.00000); \coordinate (11) at (-1.50000, -1.50000, 2.00000); \coordinate (12) at (-1.50000, 1.50000, -2.00000); \coordinate (13) at (-1.50000, 1.50000, 0.00000); \coordinate (14) at (-1.50000, -0.50000, 2.00000); \fill[cyclecolor1, opacity = 0.3] (6)--(3)--(7)--(8)--(1)--cycle; \fill[cyclecolor2, opacity = 0.2] (6)--(3)--(4)--(13)--(14)--cycle; \fill[yellow, opacity = 0.3] (6)--(1)--(11)--(14)--cycle; \draw[edge,back] (9) -- (10); 
\draw[edge,back] (10) -- (11); \draw[edge,back] (10) -- (12); \node[vertex] at (10) {}; \draw[edge] (1) -- (6); \draw[edge] (1) -- (8); \draw[edge] (1) -- (11); \draw[edge] (2) -- (5); \draw[edge] (2) -- (9); \draw[edge] (2) -- (12); \draw[edge] (3) -- (4); \draw[edge] (3) -- (6); \draw[edge] (3) -- (7); \draw[edge] (4) -- (5); \draw[edge] (4) -- (13); \draw[edge] (5) -- (7); \draw[edge] (6) -- (14); \draw[edge] (7) -- (8); \draw[edge] (8) -- (9); \draw[edge] (11) -- (14); \draw[edge] (12) -- (13); \draw[edge] (13) -- (14); \node[vertex] at (1) {}; \node[vertex] at (2) {}; \node[vertex] at (3) {}; \node[vertex] at (4) {}; \node[vertex] at (5) {}; \node[vertex] at (6) {}; \node[vertex] at (7) {}; \node[vertex] at (8) {}; \node[vertex] at (9) {}; \node[vertex] at (11) {}; \node[vertex] at (12) {}; \node[vertex] at (13) {}; \node[vertex] at (14) {}; \foreach \g in {10, 6, 5} { \node[gvertex] at (\g) {}; } \end{tikzpicture} % \hspace{1cm}% \begin{tikzpicture}% [tdplot_main_coords, scale=0.700000, back/.style={loosely dotted, thin}, edge/.style={color=black, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,fill=black,thick,anchor=base}, gvertex/.style={inner sep=1.2pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}] \coordinate (1) at (-0.50000, -1.50000, 2.00000); \coordinate (2) at (1.50000, 1.50000, -2.00000); \coordinate (3) at (0.50000, 0.50000, 1.00000); \coordinate (4) at (0.50000, 1.50000, 0.00000); \coordinate (5) at (1.50000, 1.50000, -1.00000); \coordinate (6) at (-0.50000, -0.50000, 2.00000); \coordinate (7) at (1.50000, 0.50000, 0.00000); \coordinate (8) at (1.50000, -1.50000, 0.00000); \coordinate (9) at (1.50000, -1.50000, -2.00000); \coordinate (10) at (-1.50000, -1.50000, -2.00000); \coordinate (11) at (-1.50000, -1.50000, 2.00000); \coordinate (12) at (-1.50000, 1.50000, -2.00000); \coordinate (13) at (-1.50000, 1.50000, 0.00000); \coordinate (14) at (-1.50000, -0.50000, 2.00000); \fill[cyclecolor1, opacity = 0.3] (4)--(5)--(2)--(12)--(13)--cycle; \fill[cyclecolor2, opacity = 0.2] (5)--(2)--(9)--(8)--(7)--cycle; \fill[yellow, opacity = 0.3] (3)--(4)--(5)--(7)--cycle; \draw[edge,back] (9) -- (10); \draw[edge,back] (10) -- (11); \draw[edge,back] (10) -- (12); \node[vertex] at (10) {}; \draw[edge] (1) -- (6); \draw[edge] (1) -- (8); \draw[edge] (1) -- (11); \draw[edge] (2) -- (5); \draw[edge] (2) -- (9); \draw[edge] (2) -- (12); \draw[edge] (3) -- (4); \draw[edge] (3) -- (6); \draw[edge] (3) -- (7); \draw[edge] (4) -- (5); \draw[edge] (4) -- (13); \draw[edge] (5) -- (7); \draw[edge] (6) -- (14); \draw[edge] (7) -- (8); \draw[edge] (8) -- (9); \draw[edge] (11) -- (14); \draw[edge] (12) -- (13); \draw[edge] (13) -- (14); \node[vertex] at (1) {}; \node[vertex] at (2) {}; \node[vertex] at (3) {}; \node[vertex] at (4) {}; \node[vertex] at (5) {}; \node[vertex] at (6) {}; \node[vertex] at (7) {}; \node[vertex] at (8) {}; \node[vertex] at (9) {}; \node[vertex] at (11) {}; \node[vertex] at (12) {}; \node[vertex] at (13) {}; \node[vertex] at (14) {}; \foreach \g in {10, 6, 5} { \node[gvertex] at (\g) {}; } \end{tikzpicture} \caption{The type $\dynkinfont{A}_3$ associahedron}\label{fig_asso_A3_intro} \end{figure} A (fixed) sequence of mutations corresponding to a chosen Coxeter element provides an action on the exchange graph. We call this specific sequence of mutations a \emph{Coxeter mutation} $\mu_{\clusterfont{Q}}$. The orbit of the initial seed is called \emph{bipartite belt}. 
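Concretely, if the underlying quiver $\clusterfont{Q}$ is bipartite with vertex set $I=I_+\sqcup I_-$, so that every arrow connects a vertex of $I_+$ with a vertex of $I_-$, then the Coxeter mutation may be written (up to the convention of which part is mutated first) as
\[
\mu_{\clusterfont{Q}}=\prod_{i\in I_+}\mu_i\cdot\prod_{i\in I_-}\mu_i,
\]
where each product is well defined because the vertices within one part are pairwise non-adjacent, and hence the corresponding mutations commute.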
The green dots in Figure~\ref{fig_asso_A3_intro} represent the elements of the bipartite belt. We notice that the facets meeting at the initial seed correspond to the exchange graphs $\exchange(\Phi(\dynkinfont{Z}\setminus \{i\}))$. In Figure~\ref{fig_asso_A3_intro}, there are two pentagons and one square intersecting at a green dot. Indeed, a pentagon is the type $\dynkinfont{A}_2$ generalized associahedron, and a square is the type $\dynkinfont{A}_1 \times \dynkinfont{A}_1$ generalized associahedron. Moreover, by applying the Coxeter mutation to these facets iteratively, one can obtain all facets of the associahedron. Even though we do not have a polytope model for the exchange graph of affine type, similar properties hold; that is, one can reach any $Y$-seed in the exchange graph from the initial seed by taking Coxeter mutations and then applying a certain sequence of mutations omitting at least one vertex.

The following good properties of the above pairs $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ and $(\ngraphfont{G}(\widetilde{\dynD}_{n}),\ngraphfont{B}(\widetilde{\dynD}_{n}))$ play a crucial role in interpreting the Coxeter mutation $\mu_{\quiver}$ in terms of $N$-graphs:
\begin{enumerate}
\item The geometric and algebraic intersection numbers between the chosen one-cycles coincide.
\item The corresponding quivers $\clusterfont{Q}(a,b,c)$, $\clusterfont{Q}(\widetilde{\dynD}_n)$ are bipartite; see~\S\ref{sec:N-graphs and seeds} for the details.
\end{enumerate}
Property (2) naturally splits $\ngraphfont{B}$ into two subsets $\ngraphfont{B}_+$ and $\ngraphfont{B}_-$. In Figure~\ref{fig:N-graphs of (a,b,c) and Dn}, they consist of the \colorbox{cyclecolor1!50!}{orange}- and \colorbox{cyclecolor2!50!}{green}-shaded edges, respectively. Property (1) then enables us to perform the \emph{Legendrian Coxeter mutation}, which is the $N$-graph realization of the Coxeter mutation defined by the sequence of Legendrian mutations:
\[
\mu_\ngraph=\prod_{\gamma \in \ngraphfont{B}_+} \mu_{\gamma}\cdot\prod_{\gamma\in \ngraphfont{B}_-} \mu_{\gamma}.
\]
The resulting $N$-graphs $\mu_\ngraph(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ and $\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$ then become the $N$-graphs shown in Figure~\ref{figure:intro_Legendrian Coxeter mutation} up to a sequence of moves of type~\Move{II} (see Figure~\ref{fig:move1-6}).
\begin{figure}[ht] \subfigure[$\mu_\ngraph(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$\label{ncoxeter_n(a,b,c)}]{ $ \begin{tikzpicture}[baseline=-.5ex,xscale=0.5, yscale=0.5] \draw[thick] (0,0) circle (5cm); \draw[dashed] (0,0) circle (3cm); \fill[opacity=0.1, even odd rule] (0,0) circle (3) (0,0) circle (5); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (50:1.5) -- (70:1.75); \draw[blue, thick, rounded corners] (0,0) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[red, thick, fill] (0,0) -- (60:1) circle (2pt) (60:1) -- (50:1.5) circle (2pt) -- (70:1.75) circle (2pt) -- (50:2) circle (2pt); \draw[red, thick, dashed, rounded corners] (50:2) -- (60:2.8) -- (60:3.3) to[out=0,in=220] (40:4); \draw[red, thick, rounded corners] (50:2) -- (40:2.8) -- (40:3.3) to[out=-20,in=200] (20:4) (70:1.75) -- (80:2.8) -- (80:3.3) to[out=20,in=240] (60:4) (60:1) -- (100:2.8) -- (100:3.3) to[out=40,in=260] (80:4); \draw[red, thick, rounded corners] (50:1.5) -- (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[red, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[blue, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[thick, dotted] (40:4) arc (40:60:4); \draw[blue, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[blue, thick, dashed] (40:4) -- (40:5); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \draw[fill=white, thick] (0,0) circle (2pt); } \end{tikzpicture} $} \qquad \subfigure[$\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_{4}),\ngraphfont{B}(\widetilde{\dynD}_{4}))$\label{ncoxeter_D4}]{ $ \begin{tikzpicture}[baseline=-.5ex,xscale=0.5, yscale=-0.5] \fill[opacity=0.1, rounded corners=5] (-8, -4) rectangle (8, 4); \draw[rounded corners=5, thick] (-8, -4) rectangle (8, 4); \clip[rounded corners=5] (-8, -4) rectangle (8, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[blue, thick] (-4, 1) -- ++(-1, -1) (-4, -1) -- ++(-1, 1) to[out=-120, in=120] ++(0,-3) (-5, 3) to[out=-105,in=30] (-7,0) (-2, -2.5) -- ++(1, -0.5) -- +(-1, -1) ++(0,0) -- ++(2, 0) -- ++(1, 0.5) (-3, -2.5) -- ++(-2, -0.5) -- ++(-1, -1) (-4, 1.75) -- ++(-1, 1.25) -- ++(-1, 1) (-2, 2.5) -- ++(1, 0.5) -- ++(-1, 1) (-8, 1) -- ++(1, -1) -- ++(-1, -1) ; \draw[red, thick] (-1, -2.5) -- ++(0, -0.5) -- +(0, -1) ++(0,0) -- ++(-4, 0) -- ++(0, 3) -- +(1,0) ++(0,0) -- ++(0,3) -- ++(4,0) -- +(0, 1) ++(0,0) -- ++(0, -0.5) (-5, -3) -- ++(-2, 3) -- +(-1, 0) ++(0,0) -- ++(2, 3) ; \draw[green, thick] (0, 4) -- (0, 2.5); \draw[fill=white, thick] (-5, 0) circle (2pt) (-7, 0) circle (2pt) (-5, -3) circle (2pt) (-1, -3) circle (2pt) (1, -3) circle (2pt) (-5, 3) circle (2pt) ; \end{scope} } \begin{scope}[yscale=-1] \draw[fill=white, rounded corners=5,dashed] (-4, -2.5) rectangle (4, 2.5); \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) (-2, -1) -- (-2, -1.75) (2, 1) -- (2, 1.75) (2, -1) -- (3, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] 
\begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{scope} \end{tikzpicture} $} \caption{After applying the Legendrian Coxeter mutation on the initial pair} \label{figure:intro_Legendrian Coxeter mutation} \end{figure}

After removing the gray-shaded annulus region, $(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$ and $\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$ are identical, and the only difference between $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ and $\mu_\ngraph(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ is the reversal of the colors. Note that the intersection pattern between one-cycles and the Legendrian mutability are preserved under the action of the Legendrian Coxeter mutation $\mu_\ngraph$. By an induction argument on the rank of the root system, we conclude that there is no (geometric) obstruction to realizing each seed via $N$-graphs.

Note that the $N$-graphs $\ngraphfont{G}(a,b,c)$ and $\ngraphfont{G}(\widetilde{\dynD}_{n})$ cover Lagrangian fillings of Legendrian links of type $\dynkinfont{Z}\in\{\dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD},\widetilde{\dynE}\}$; see Table~\ref{table:short notations}. In particular, $\ngraphfont{G}(a,b,c)$ is of type $\dynkinfont{ADE}$ or $\widetilde{\dynD}\widetilde{\dynE}$ if and only if $\frac 1a+\frac 1b+\frac 1c>1$ or $\frac 1a+\frac 1b+\frac 1c=1$, respectively. This guarantees that there are at least as many Lagrangian fillings as seeds for $\lambda(\dynkinfont{Z})$ for $\dynkinfont{Z}\in\{\dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD},\widetilde{\dynE}\}$.

\begin{theorem}[Theorem~\ref{theorem:seed many fillings}]\label{thm_intro_1}
Let $\lambda$ be a Legendrian knot or link of type~$\dynkinfont{ADE}$ or type $\widetilde{\dynD}\widetilde{\dynE}$. Then it admits at least as many exact embedded Lagrangian fillings as the number of seeds in the seed pattern of the same type. See Table~\ref{table_seeds_and_cluster_variables} for the number of seeds of finite type.
\end{theorem}

There are several ways of constructing exact embedded Lagrangian fillings, as mentioned above. In the $\dynkinfont{D}_4$ case, for example, there are 34 distinct Lagrangian fillings constructed by the method of alternating Legendrians \cite{BFFH2018,STWZ2019}, while the above $N$-graphs give 50 Lagrangian fillings, as many as the seeds. Most recently, for Legendrian links of type $\dynkinfont{D}_n$, Hughes \cite{Hughes2021} made use of $3$-graphs together with 1-cycles to show that every sequence of quiver mutations can be realized by Legendrian weave mutations. Compared with our strategy using structural results of the cluster pattern, he studies the $3$-graph moves arising from quivers of type $\dynkinfont{D}_n$ in a more direct and concrete way. As a corollary, he also obtained at least as many Lagrangian fillings as seeds in the cluster algebra of type $\dynkinfont{D}_n$.
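For concreteness, recall from \cite{FZ_Ysystem03} that the number of seeds in the cluster pattern of type $\dynkinfont{D}_n$ equals $\frac{3n-2}{n}\binom{2n-2}{n-1}$; for $n=4$ this gives $\frac{10}{4}\binom{6}{3}=50$, in agreement with the count above.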
There are many results showing the existence of (infinitely many) distinct Lagrangian fillings for Legendrian links; see \cite{EHK2016,Pan2017,TZ2018,STWZ2019,CG2020,CZ2020,GSW2020b,CN2021}. To the best of the authors' knowledge, Theorem~\ref{thm_intro_1} is the first result producing (infinitely many) Lagrangian fillings of Legendrian links which exhaust all seeds in the corresponding cluster pattern beyond type $\dynkinfont{A}\dynkinfont{D}$.

The gray-shaded annular $N$-graphs in the above figure can be seen as exact Lagrangian cobordisms. In particular, the annular $N$-graph in $\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_{4}),\ngraphfont{B}(\widetilde{\dynD}_{4}))$ corresponds to the cobordism from the Legendrian $\lambda(\widetilde{\dynD}_4)$ onto itself which defines the \emph{Legendrian loop} $\vartheta(\widetilde{\dynD}_4)$; see Figure~\ref{fig:legendrian loop of D_intro} for the general case. Note that this coincides with the Legendrian loop described in \cite[Figure~2]{CN2021} up to Reidemeister moves. For type~$\widetilde{\dynE}$, applying the Legendrian Coxeter mutation twice on the pair $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ gives the Legendrian loop~$\vartheta(\widetilde{\dynE})$ of $\lambda(\widetilde{\dynE})$, as shown in Figure~\ref{fig:legendrian loop of E_intro}. The Legendrian loop $\vartheta(\widetilde{\dynE})$ can be interpreted as the move of the half twist $\Delta_3$ along the three-strand braid band, whereas the Legendrian loop $\vartheta(\widetilde{\dynD}_n)$ is essentially the move of the half twist $\Delta_2$ along the two-strand braid band, as depicted in Figure~\ref{fig:legendrian loop of D_intro}.

\begin{figure}[ht] \subfigure[$\ngraphfont{C}({\widetilde{\dynD}}_{n})$]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[thick, rounded corners=5] (-9,-4) rectangle (9, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \begin{scope} \draw[violet, line width=4, opacity=0.5, rounded corners=5](2,2)--(2,3)--(1,3) (0,3)--(-1,3)--(-1,4) (-1,2)--(-1,3)--(-3,3) (-3,-3)--(-2,-3)--(-2,-4); \draw[violet, line width=4, opacity=0.5] (-3,3)--(-6,3) to[out=-105,in=30] (-8,0) --(-9,0) (-5,0)--(-6,0) to[out=-120,in=120] (-6,-3) -- (-3,-3); \draw[violet, line width=4, opacity=0.5,dashed](1,3)--(0,3); \end{scope} \begin{scope}[yscale=-1] \draw[thick, red] (-3, 4) -- ++(0, -2) (-3, -4) -- ++(0, 2) (-3, 3) -- ++(-3, 0) -- ++(0, -6) -- ++(3,0) (-6, 0) -- ++(1, 0) (-6, 3) -- ++(-2, -3) -- ++(2, -3) (-8, 0) -- ++(-1, 0) ; \draw[thick, blue] (-4, 4) -- ++(1, -1) -- ++(-1, -1) (-4, -4) -- ++(1, 1) -- ++(-1, 1) (-7, 4) -- ++(1, -1) -- ++(1.5, -1.5) (-7, -4) -- ++(1, 1) -- ++(1.5, 1.5) (-6, 0) -- ++(1, 1) (-6, 0) -- ++(1,-1) (-8, 0) -- ++(-1, 1) (-8, 0) -- ++(-1, -1) (-8, 0) to[out=-30, in=105] ++(2, -3) (-6, 3) to[out=-120, in=120] (-6, 0) ; \end{scope} \draw[thick, blue, rounded corners] (-3, 3) -- ++(2, 0) -- ++(0, -1) (-1, 4) -- ++(0, -1) -- ++(1,0) (2, 2) -- ++(0, 1) -- ++(-1, 0) (-2, -4) -- ++(0, 1) -- ++(-1, 0) ; \draw[thick, blue, dashed] (0, 3) -- ++(1, 0) ; \draw[thick, green] (-2, 4) -- ++(0,-2) ; \draw[thick, fill=white] (-3, 3) circle (2pt) (-3, -3) circle (2pt) (-6, 3) circle (2pt) (-6, 0) circle (2pt) (-6, -3) circle (2pt) (-8, 0) circle (2pt) ; \end{scope} } \draw[thick, rounded corners=5, fill=white] (-5,-2) rectangle (5, 2); \draw (0.5, 4) node[above=0ex] {$\overbrace{\hphantom{\hspace{1.6cm}}}^{\ell}$}; \draw (-0.5, -4) node[below=0ex] {$\underbrace{\hphantom{\hspace{1.6cm}}}_{k}$}; \end{tikzpicture} } \subfigure[$\vartheta_0({\widetilde{\dynD}}_{n})$]{
\begin{tikzpicture}[baseline=-.5ex, scale=0.6] \begin{scope} \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1, 1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(1, 1) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle k-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) [sharp corners]-- ++(0.25, 0.25) [rounded corners]-- ++(-0.75, 0.75) -- ++(-1, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.75) [rounded corners]-- ++(0.75, 0.75) -- ++(1, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.5, 0) -- ++(-0.25, 0.25); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) -- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \draw[thick, violet, dashed] (-4.25, -1.75) circle (0.25) (-1.75, -1.75) circle (0.25) ; \draw[thick, violet, dashed, ->, rounded corners] (-2, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.25, 0); \draw[thick, violet, dashed, ->, rounded corners] (-4.5, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.75, 0) arc (-90:-270:1.75) -- ++(14, 0) to[out=0, in=180] ++(3, -2.5) arc (-90:90:0.75) to[out=180, in=0] ++(-3, -2.5); \end{scope} \begin{scope}[xshift=7cm] \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1, 1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(1, 1) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle \ell-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) [sharp corners]-- ++(0.25, 0.25) [rounded corners]-- ++(-0.75, 0.75) -- ++(-1, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.75) [rounded corners]-- ++(0.75, 0.75) -- ++(1, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.5, 0) -- ++(-0.25, 0.25); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) 
-- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \draw[thick, violet, dashed] (-4.25, -1.75) circle (0.25) (-1.75, -1.75) circle (0.25) ; \draw[thick, violet, dashed, ->, rounded corners] (-2, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.25, 0); \draw[thick, violet, dashed, ->, rounded corners] (-4.5, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.75, 0); \end{scope} \draw[thick] (-7, 0.5) arc (90:270:0.5) (-7, 1) arc (90:270:1); \draw[thick, blue] (-7, 1.5) arc (90:270:1.5) (-7, 2) arc (90:270:2); \fill[blue, opacity=0.1] (-7, 1.5) arc (90:270:1.5) -- (-7, -2) arc (-90:-270:2); \begin{scope} \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \begin{scope}[yscale=-1] \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \draw[thick] (10, 2) arc (90:-90:2) (10, 1.5) arc (90:-90:1.5); \draw[thick, blue] (10, 1) arc (90:-90:1) (10, 0.5) arc (90:-90:0.5); \fill[blue, opacity=0.1] (10, 1) arc (90:-90:1) -- ++(0, 0.5) arc (-90:90:0.5); \end{tikzpicture} } \caption{Legendrian Coxeter padding $\ngraphfont{C}({\widetilde{\dynD}}_{n})$ and the corresponding Legendrian loop $\vartheta_0({\widetilde{\dynD}}_{n})$} \end{figure}

\begin{theorem}[Theorem~\ref{thm:legendrian loop}]\label{theorem:legendrian loop}
The Legendrian Coxeter mutation $\mu_\ngraphfont{G}$ on $(\ngraphfont{G}(\widetilde{\dynD}),\ngraphfont{B}(\widetilde{\dynD}))$ and the square $\mu_\ngraphfont{G}^{2}$ of the Legendrian Coxeter mutation on $(\ngraphfont{G}(\widetilde{\dynE}),\ngraphfont{B}(\widetilde{\dynE}))$ induce the Legendrian loops $\vartheta(\widetilde{\dynD})$ and $\vartheta(\widetilde{\dynE})$ in Figures~\ref{fig:legendrian loop of D_intro} and \ref{fig:legendrian loop of E_intro}, respectively. In particular, the Legendrian loops have infinite order as elements of the fundamental groups of the spaces of Legendrians isotopic to $\lambda(\widetilde{\dynD})$ and $\lambda(\widetilde{\dynE})$, respectively.
\end{theorem}

Note that the above idea of Coxeter mutation also works for $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ with $\frac{1}{a}+\frac{1}{b}+\frac{1}{c} < 1$. Indeed, the operation $\mu_{\quiver}$ is of infinite order and so is $\mu_\ngraph$; hence the Legendrian weaves
\[\Lambda(\mu_\ngraph^r(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c)))\]
produce infinitely many distinct Lagrangian fillings. The quiver $\clusterfont{Q}(a,b,c)$ is also bipartite, and one can perform the Legendrian Coxeter mutation $\mu_{\ngraphfont{G}}$ on the $N$-graph $\ngraphfont{G}(a,b,c)$ by stacking the gray-shaded annulus as before. Therefore, there is no obstruction to realizing the seeds obtained by the mutations $\mu_\ngraph^r$ via $N$-graphs. Since the order of the Legendrian Coxeter mutation is infinite (see Lemma~\ref{lemma:order of coxeter mutation}), we obtain infinitely many $N$-graphs and hence infinitely many exact embedded Lagrangian fillings for the Legendrian link $\lambda(a,b,c)$ with $\frac{1}{a}+\frac{1}{b}+\frac{1}{c} < 1$.
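For example, $(a,b,c)=(2,3,7)$ satisfies $\frac12+\frac13+\frac17=\frac{41}{42}<1$, so $\lambda(2,3,7)$ admits infinitely many fillings by the above discussion, while the boundary cases $(3,3,3)$, $(2,4,4)$ and $(2,3,6)$ give $\frac1a+\frac1b+\frac1c=1$ and correspond to the affine types $\widetilde{\dynE}_6$, $\widetilde{\dynE}_7$ and $\widetilde{\dynE}_8$, respectively.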
\begin{theorem}[Theorem~\ref{theorem:infinite fillings}]\label{thm_intro_infinite_fillings}
For each $a,b,c\ge 1$, the Legendrian knot or link $\lambda(a,b,c)$ has infinitely many distinct Lagrangian fillings if
\[
\frac1a+\frac1b+\frac1c < 1.
\]
\end{theorem}

Gao--Shen--Weng \cite{GSW2020b} already proved the existence of infinitely many Lagrangian fillings for a much more general class of positive braid Legendrian links. Their main idea is to use the aperiodicity of the \emph{Donaldson--Thomas transformation} (DT) on cluster varieties. An interesting observation is that the corresponding action of DT on the bipartite quivers in the $Y$-pattern becomes the Coxeter mutation. Accordingly, Theorem~\ref{thm_intro_infinite_fillings} can be interpreted as an $N$-graph analogue of the aperiodicity of DT.

\subsubsection{Lagrangian fillings for Legendrians of type $\dynkinfont{BCFG}$ or standard affine type with symmetry}
Now we move to cluster structures of type $\dynkinfont{BCFG}$ and of standard affine type with certain symmetries. They are obtained by the folding procedure from type $\dynkinfont{ADE}$ or $\widetilde{\dynD}\widetilde{\dynE}$; see Table~\ref{table:foldings}. In order to interpret those symmetries in terms of Legendrian links and surfaces, we need to introduce corresponding actions on symplectic and contact manifolds. Consider the following two actions on $\mathbb{S}^3\times \R_u$, the rotation $R_{\theta_0}$ and the conjugation $\eta$:
\begin{align*}
R_{\theta_0}(z_1, z_2,u)&=(z_1\cos\theta_0 -z_2\sin\theta_0,z_1\sin\theta_0+z_2\cos\theta_0,u);\\
\eta(z_1,z_2,u)&= (\bar z_1,\bar z_2 ,u).
\end{align*}
Here $\mathbb{S}^3$ is the unit sphere in $\bbC^2$ with coordinates $z_1=r_1 e^{i\theta_1}, z_2=r_2 e^{i\theta_2}$ satisfying $r_1^2 + r_2^2=1$. Note that $\eta$ is an anti-symplectic involution which naturally gives a $\Z/2\Z$-action on the symplectic manifold. Under certain coordinate changes, the restrictions of $R_{\theta_0}$ and $\eta$ to $J^1\mathbb{S}^1$ become
\begin{align*}
R_{\theta_0}|_{J^1\mathbb{S}^1}(\theta,p_{\theta},z)&=(\theta+\theta_0,p_{\theta},z);\\
\eta|_{J^1\mathbb{S}^1}(\theta,p_{\theta},z)&=(\theta,-p_{\theta},-z).
\end{align*}
In turn, the rotation $R_{\theta_0}$ acts on the $N$-graph $\ngraphfont{G}(\dynkinfont{Z})$ by rotating the disk $\mathbb{D}^2$, and $\eta$ acts by flipping the $z$-coordinate.

Any $Y$-pattern of non-simply-laced finite or affine type can be obtained by folding a $Y$-pattern of type $\dynkinfont{ADE}$ or $\widetilde{\dynA} \widetilde{\dynD} \widetilde{\dynE}$. In other words, such a $Y$-pattern of non-simply-laced type can be seen as a sub-pattern of one of $\dynkinfont{ADE}$- or $\widetilde{\dynA} \widetilde{\dynD} \widetilde{\dynE}$-type consisting of $Y$-seeds invariant under the action of a certain finite group $G$. We call such $Y$-seeds or $N$-graphs \emph{$G$-admissible}, and the mutation in the folded cluster structure is a sequence of mutations respecting the $G$-orbits. We say that a $Y$-seed (or an $N$-graph) is \emph{globally foldable} if it is $G$-admissible and its arbitrary mutations along $G$-orbits are again $G$-admissible. Figure~\ref{figure:N-graph with rotational symmetry} illustrates the $N$-graphs with rotational symmetry and the corresponding $Y$-patterns of folding. Indeed, they are $\ngraphfont{G}(1,n,n)$, $\ngraphfont{G}(2,2,2)$, $\ngraphfont{G}(3,3,3)$, $\ngraphfont{G}(\widetilde{\dynD}_{2n})$, and $\ngraphfont{G}(\widetilde{\dynD}_4)$, which admit a $\Z/2\Z$-, $\Z/3\Z$-, $\Z/3\Z$-, $\Z/2\Z$-, and $\Z/2\Z$-action, respectively.
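In formulas, writing $[i]=G\cdot i$ for the $G$-orbit of a vertex $i$, the mutation of the folded $Y$-pattern at $[i]$ is the composition
\[
\mu_{[i]}=\prod_{i'\in [i]}\mu_{i'},
\]
which is independent of the order of the factors for the foldings in Table~\ref{table:foldings}, since there the vertices in a common $G$-orbit are pairwise non-adjacent and the mutations $\mu_{i'}$ commute; see Section~\ref{section:folding} for details.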
\begin{figure}[ht] \[ \begin{tikzcd}[column sep=0.5pc, row sep=small] \begin{tikzpicture}[baseline=-.5ex,xscale=0.5,yscale=0.5 \draw[orange, opacity=0.2, fill] (-90:3) arc(-90:90:3) (90:3) -- (0.5,0.5) -- (-0.5,-0.5) -- (-90:3); \draw[violet, opacity=0.1, fill] (90:3) arc(90:270:3) (270:3) -- (-0.5,-0.5) -- (0.5,0.5) -- (90:3); \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5,0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,xscale=0.5,yscale=0.5 \draw[orange, opacity=0.2, fill] (0:3) arc(0:120:3) (120:3) -- (0,0) -- (0:3); \draw[violet, opacity=0.1, fill] (0:3) -- (0,0) -- (-120:3) arc(-120:0:3) (0:3); \draw[blue, opacity=0.1, fill] (120:3) arc(120:240:3) (240:3) -- (0,0) -- (120:3); \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (45:2) (180:1) -- (165:2) (300:1) -- (285:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1); \draw[red, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (90:3) (60:1) -- (45:2) circle (2pt) -- (30:3) (45:2) -- (60:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (210:3) (180:1) -- (165:2) circle (2pt) -- (150:3) (165:2) -- (180:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (330:3) (300:1) -- (285:2) circle (2pt) -- (270:3) (285:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[thick] (0,0) circle (3cm); \draw[orange, opacity=0.2, fill] (0:3) arc(0:120:3) (120:3) -- (0,0) -- (0:3); \draw[violet, opacity=0.1, fill] (0:3) -- (0,0) -- (-120:3) arc(-120:0:3) (0:3); \draw[blue, opacity=0.1, fill] (120:3) arc(120:240:3) (240:3) -- (0,0) -- (120:3); \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (45:1.5) (180:1) -- (165:1.5) (300:1) -- (285:1.5); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (45:1.5) -- (60:2) (165:1.5) -- (180:2) (285:1.5) -- (300:2); \draw[red, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (96:3) (60:1) -- (45:1.5) circle (2pt) -- (24:3) (45:1.5) -- (60:2) circle (2pt) -- (72:3) (60:2) -- (48:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (216:3) (180:1) -- (165:1.5) circle (2pt) -- (144:3) (165:1.5) -- (180:2) circle (2pt) -- (192:3) (180:2) -- (168:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (336:3) (300:1) -- (285:1.5) circle (2pt) -- (264:3) (285:1.5) -- (300:2) circle (2pt) -- (312:3) (300:2) -- (288:3); \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture} \\ \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \node[Dnode] (a1) at (-2,0.5) {}; \node[Dnode] (a2) at (-1,0.5) {}; \node[Dnode] (a3) at (0,0) {}; \node[Dnode] (a4) at (-1,-0.5) {}; 
\node[Dnode] (a5) at (-2,-0.5) {}; \node[ynode] at (a1) {}; \node[gnode] at (a2) {}; \node[ynode] at (a3) {}; \node[gnode] at (a4) {}; \node[ynode] at (a5) {}; \draw (a1)--(a2)--(a3)--(a4)--(a5); \node at (-3,0) {$\dynkinfont{A}_{2n-1}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \coordinate[Dnode] (d1) at (-2,0) {}; \coordinate[Dnode] (d2) at (-1,0) {}; \coordinate[Dnode] (d3) at (-1,0.5) {}; \coordinate[Dnode] (d4) at (-1,-0.5) {}; \foreach \y in {d1} { \node[ynode] at (\y) {}; } \foreach \g in {d2,d3,d4}{ \node[gnode] at (\g) {}; } \draw (d1)--(d2) (d1)--(d3) (d1)--(d4); \node at (-3,0) {$\dynkinfont{D}_{4}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \node[Dnode] (a1) at (0,0) {}; \node[Dnode] (a2) at (-1,0.5) {}; \node[Dnode] (a3) at (-2,0.5) {}; \node[Dnode] (a4) at (-1,0) {}; \node[Dnode] (a5) at (-2,0) {}; \node[Dnode] (a6) at (-1,-0.5) {}; \node[Dnode] (a7) at (-2,-0.5) {}; \node[ynode] at (a1) {}; \node[gnode] at (a2) {}; \node[ynode] at (a3) {}; \node[gnode] at (a4) {}; \node[ynode] at (a5) {}; \node[gnode] at (a6) {}; \node[ynode] at (a7) {}; \draw (a1)--(a2)--(a3) (a1)--(a4)--(a5) (a1)--(a6)--(a7); \node at (-3,0) {$\widetilde{\dynE}_{6}$}; \end{tikzpicture} \\[-1em] \rotatebox{-90}{$\rightsquigarrow$} & \rotatebox{-90}{$\rightsquigarrow$} &\rotatebox{-90}{$\rightsquigarrow$} \\[-2em] \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \node at (-3,0) {$\dynkinfont{B}_{n}$}; \coordinate[Dnode] (1) at (-2,0) {}; \coordinate[Dnode] (2) at (-1,0) {}; \coordinate[Dnode] (3) at (0,0) {}; \node[ynode] at (1) {}; \node[ynode] at (3) {}; \node[gnode] at (2) {}; \draw (1)--(2); \draw[double line] (2)-- ++ (3); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \node at (-3,0) {$\dynkinfont{G}_{2}$}; \coordinate[Dnode] (1) at (-2,0) {}; \coordinate[Dnode] (2) at (-1,0) {}; \node[ynode] at (1) {}; \node[gnode] at (2) {}; \draw[triple line] (2)-- ++ (1) ; \draw (1)--(2); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \node at (-3,0) {$\widetilde{\dynG}_{2}$}; \coordinate[Dnode] (3) at (-2,0) {}; \coordinate[Dnode] (2) at (-1,0) {}; \coordinate[Dnode] (1) at (0,0) {}; \node[ynode] at (1) {}; \node[gnode] at (2) {}; \node[ynode] at (3) {}; \draw (2)--(3); \draw[triple line] (2)-- ++ (1) ; \draw (1)--(2); \end{tikzpicture} \end{tikzcd} \] \[ \begin{tikzcd}[column sep=0.5pc, row sep=small] \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5, thick] (-6.5, -2.5) rectangle (6.5, 2.5); \draw (0.5, -2.5) (-0.5, -2.5) ; \draw (1.5, 2.5) (0.5, 2.5) ; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[orange, opacity=0.2, fill] (0,2.5)--(-7,2.5)--(-7,-2.5)--(0,-2.5); \draw[violet, opacity=0.1, fill] (0,2.5)--(7,2.5)--(7,-2.5)--(0,-2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) (-3.5, 0) -- (-4.5, -1) (-1.5, 0) -- (-0.5, 0) (0.5, 0) -- (1.5, 0) (3.5, 0) -- (2.5, 0) (3.5, 0) -- (4.5, 1) (3.5, 0) -- (4.5, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-4.5, 1) -- (-5.5, 1) (-4.5, -1) -- (-4.5, -1.75) (-1.5, 0) -- (-2.5, 0) (-0.5, 0) -- (0.5, 0) (1.5, 0) -- (2.5, 0) (4.5, 1) -- (4.5, 1.75) (4.5, -1) -- (5.5, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \draw[thick, green] (-2.5, 2.5) -- (0,0); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) 
; \draw[thick, blue, fill] (-3.5, 0) -- (3.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} } \draw[thick, fill=white] (-3.5, 0) circle (2pt) (3.5, 0) circle (2pt); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5, thick] (-4, -2.5) rectangle (4, 2.5); \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw[orange, opacity=0.2, fill] (0,2.5)--(-7,2.5)--(-7,-2.5)--(0,-2.5); \draw[violet, opacity=0.1, fill] (0,2.5)--(7,2.5)--(7,-2.5)--(0,-2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) (-2, -1) -- (-2, -1.75) (2, 1) -- (2, 1.75) (2, -1) -- (3, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{tikzpicture} \\ \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \coordinate[Dnode] (d1) at (0,1.5) {}; \coordinate[Dnode] (d2) at (0,0.5) {}; \coordinate[Dnode] (d3) at (1,1) {}; \coordinate[Dnode] (d4) at (2,1) {}; \coordinate[Dnode] (d5) at (3,1) {}; \coordinate[Dnode] (d6) at (0,-0.5) {}; \coordinate[Dnode] (d7) at (0,-1.5) {}; \coordinate[Dnode] (d8) at (1,-1) {}; \coordinate[Dnode] (d9) at (2,-1) {}; \coordinate[Dnode] (d10) at (3,-1) {}; \coordinate[Dnode] (d11) at (4,0) {}; \node[gnode] at (d1) {}; \node[gnode] at (d2) {}; \node[gnode] at (d4) {}; \node[gnode] at (d6) {}; \node[gnode] at (d7) {}; \node[gnode] at (d9) {}; \node[gnode] at (d11) {}; \node[ynode] at (d3) {}; \node[ynode] at (d5) {}; \node[ynode] at (d8) {}; \node[ynode] at (d10) {}; \draw (d1)--(d3)--(d4)--(d5)--(d11)--(d10)--(d9)--(d8)--(d6) (d2)--(d3) (d7)--(d8); \node at (2,-2) {$\widetilde{\dynD}_{2n}$}; \node at (4.5,-0.1) {$\rightsquigarrow$}; \begin{scope}[xshift=5cm] \node at (2,-2) {$\widetilde{\dynB}_{n}$}; \coordinate[Dnode] (1) at (0,0.5) {}; \coordinate[Dnode] (2) at (0,-0.5) {}; \coordinate[Dnode] (3) at (1,0) {}; \coordinate[Dnode] (4) at (2,0) {}; \coordinate[Dnode] (5) at (3,0) {}; \coordinate[Dnode] (6) at (4,0) {}; \node[gnode] at (1) {}; \node[gnode] at (2) {}; \node[ynode] at (3) {}; \node[gnode] at (4) {}; \node[ynode] at (5) {}; \node[gnode] at (6) {}; \draw (1)--(3)--(4)--(5) (2)--(3) ; \draw[double line] (6)-- ++ (5) ; \end{scope} \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.7] \coordinate[Dnode] (d1) at (0,0) {}; \coordinate[Dnode] (d2) at (-1,0.5) {}; \coordinate[Dnode] (d3) at (-1,-0.5) {}; \coordinate[Dnode] (d4) at (1,0.5) {}; \coordinate[Dnode] (d5) at (1,-0.5) {}; \node[ynode] at (d1) {}; \node[gnode] at (d3) {}; \node[gnode] at (d2) {}; \node[gnode] at (d4) {}; \node[gnode] at (d5) {}; \draw (d4)--(d1)--(d2) (d3)--(d1)--(d5); \node at (-2,0) {$\widetilde{\dynD}_{4}$}; \node at (0,-1) 
{\rotatebox[origin=c]{-90}{$\rightsquigarrow$}}; \node at (-2,-2) {$\widetilde{\dynC}_{2}$}; \coordinate[Dnode] (c1) at (-1,-2) {}; \coordinate[Dnode] (c2) at (-0,-2) {}; \coordinate[Dnode] (c3) at (1,-2) {}; \node[gnode] at (c1) {}; \node[ynode] at (c2) {}; \node[gnode] at (c3) {}; \draw[double line] (c3)-- ++ (c2); \draw[double line] (c1)-- ++ (c2); \end{tikzpicture} \end{tikzcd} \] \caption{Examples of $N$-graphs with rotational symmetry} \label{figure:N-graph with rotational symmetry} \end{figure}

In order to present conjugation-invariant $N$-graphs, we need to adopt a degenerate version of $N$-graphs which allows overlapping edges and cycles, as in Figure~\ref{figure:N-graph with conjugation symmetry}. They are equivalent to $\ngraphfont{G}(\widetilde{\dynD}_{n+1})$, $\ngraphfont{G}(\widetilde{\dynD}_4)$, $\ngraphfont{G}(2,3,3)$, $\ngraphfont{G}(3,3,3)$, and $\ngraphfont{G}(2,4,4)$ up to $\partial$-Legendrian isotopy (see Definition~\ref{def:boundary Legendrian isotopic}), respectively.

\begin{figure}[ht] \[ \begin{tikzcd}[column sep=0.5pc, row sep=small] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \useasboundingbox (-3, -3.5) rectangle (3, 3.5); \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5](-135:1.5)--(45:2.5) (-135:1.5) ++(-0.5,0) -- ++(0,-0.5); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5](-135:1.5)-- ++(-0.5,0) (-135:1.5) ++(-0.5,-0.5) -- ++(-0.5,0); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt) (-135:1.5) ++ (-1,-.5) -- ++(0,-1); \draw[Dble={green and blue},line width=2] (-2,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (2,0); \draw[Dble={green and blue},line width=2] (2,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (2,0) -- ++(2,2); \draw[color=cyclecolor3,line width=7,opacity=0.5,line cap=round](-2,0)--(2,0); \end{tikzpicture} & \begin{tikzpicture}[baseline=-5ex,scale=0.7] \coordinate[Dnode] (d1) at (0,0) {}; \coordinate[Dnode] (d2) at (1,0) {}; \coordinate[Dnode] (d3) at (2,0) {}; \coordinate[Dnode] (d4) at (3,0) {}; \coordinate[Dnode] (d5) at (4,0.5) {}; \coordinate[Dnode] (d6) at (4,-0.5) {}; \node[gnode] at (d1) {}; \node[ynode] at (d2) {}; \node[gnode] at (d3) {}; \node[ynode] at (d4) {}; \node[bnode] at (d5) {}; \node[bnode] at (d6) {}; \draw (d1)--(d2)--(d3)--(d4)--(d5) (d4)--(d6); \node at (2,0.5) {$\widetilde{\dynD}_{n+1}$}; \node at (2,-1) {\rotatebox[origin=c]{-90}{$\rightsquigarrow$}}; \begin{scope}[yshift=-2cm] \node at (2,-1) {$\widetilde{\dynC}_{n}$}; \coordinate[Dnode] (1) at (0,0) {}; \coordinate[Dnode] (2) at (1,0) {}; \coordinate[Dnode] (3) at (2,0) {}; \coordinate[Dnode] (4) at (3,0) {}; \coordinate[Dnode] (5) at (4,0) {}; \node[gnode] at (1) {}; \node[ynode] at (2) {}; \node[gnode] at (3) {}; \node[ynode] at (4) {}; \node[bnode] at (5) {}; \draw (1)--(2)--(3)--(4) ; \draw[double line] (5)-- ++ (4); \end{scope} \end{tikzpicture} &\hspace{6mm}& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3);
\clip (0,0) circle (3); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5](-135:2)--(45:2); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5](-135:2)-- ++(-0.75,0) (45:2)--++(0.75,0); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[fill, red, thick] (3,-3) -- (-3,3) (0,0) -- (45:2) circle (2pt) (45:2) -- ++(0,3) (45:2) -- ++(3,0) (45:2) ++ (0.75,0) circle (2pt) -- ++(0,2) ; \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (2,0); \draw[Dble={green and blue},line width=2] (2,0) -- ++(-45:2); \draw[Dble={green and blue},line width=2] (2,0) -- ++(45:2); \end{scope} } \draw[color=cyclecolor3,line width=7,opacity=0.5,line cap=round](-2,0)--(2,0); \end{tikzpicture} & \begin{tikzpicture}[baseline=-5ex,scale=0.7] \coordinate[Dnode] (d1) at (0,0) {}; \coordinate[Dnode] (d2) at (-1,0.5) {}; \coordinate[Dnode] (d3) at (-1,-0.5) {}; \coordinate[Dnode] (d4) at (1,0.5) {}; \coordinate[Dnode] (d5) at (1,-0.5) {}; \node[ynode] at (d1) {}; \node[gnode] at (d3) {}; \node[gnode] at (d2) {}; \node[bnode] at (d4) {}; \node[bnode] at (d5) {}; \draw (d4)--(d1)--(d2) (d3)--(d1)--(d5); \node at (0,0.5) {$\widetilde{\dynD}_{4}$}; \node at (0,-1) {\rotatebox[origin=c]{-90}{$\rightsquigarrow$}}; \node at (0,-3) {$\widetilde{\dynA}_{5}^{(2)}$}; \coordinate[Dnode] (c1) at (-1,-1.5) {}; \coordinate[Dnode] (c2) at (0,-2) {}; \coordinate[Dnode] (c3) at (-1,-2.5) {}; \coordinate[Dnode] (c4) at (1,-2) {}; \node[gnode] at (c1) {}; \node[ynode] at (c2) {}; \node[gnode] at (c3) {}; \node[bnode] at (c4) {}; \draw (c1)--(c2)--(c3); \draw[double line] (c4)-- ++ (c2); \end{tikzpicture} \end{tikzcd} \] \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5](-135:1.5)--(45:2.5); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5](-135:1.5)--++(-0.5,0); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \draw[color=cyclecolor3,line width=7,opacity=0.5,line cap=round](-2.5,0)--(1.5,0); \draw[color=cyclecolor4,line width=7,opacity=0.5,line cap=round](1.5,0)-- ++(45:0.5); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5](-135:1.5)--(45:2.5) (-135:1.5)++(-0.5,0)--++(0,-0.5); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5](-135:1.5)--++(-0.5,0); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2) 
(-135:1.5) ++ (-0.5,0) ++(0,-0.5) circle (2pt) -- ++(-1, 0); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \draw[color=cyclecolor3,line width=7,opacity=0.5,line cap=round](-2.5,0)--(1.5,0); \draw[color=cyclecolor4,line width=7,opacity=0.5,line cap=round](1.5,0)-- ++(45:0.5); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2,line cap=round, line width=5, opacity=0.5](-135:1.5)--(45:2.5); \draw[color=cyclecolor1,line cap=round, line width=5, opacity=0.5](-135:1.5)--++(-0.5,0); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) ++(-45:0.5) -- ++(45:1); \draw[color=cyclecolor3,line width=7,opacity=0.5,line cap=round](-2.5,0)--(1.5,0) (1.5,0)++(45:0.5) -- ++(-45:0.5); \draw[color=cyclecolor4,line width=7,opacity=0.5,line cap=round](1.5,0)-- ++(45:0.5); \end{tikzpicture} \\ \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \coordinate[Dnode] (d1) at (1,0) {}; \coordinate[Dnode] (d2) at (2,0) {}; \coordinate[Dnode] (d3) at (3,0.5) {}; \coordinate[Dnode] (d4) at (4,0.5) {}; \coordinate[Dnode] (d5) at (3,-0.5) {}; \coordinate[Dnode] (d6) at (4,-0.5) {}; \node[ynode] at (d1) {}; \node[gnode] at (d2) {}; \node[bnode] at (d3) {}; \node[vnode] at (d4) {}; \node[bnode] at (d5) {}; \node[vnode] at (d6) {}; \draw (d1)--(d2)--(d3)--(d4) (d2)--(d5)--(d6); \node at (0,0) {$\dynkinfont{E}_{6}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \coordinate[Dnode] (d0) at (0,0) {}; \coordinate[Dnode] (d1) at (1,0) {}; \coordinate[Dnode] (d2) at (2,0) {}; \coordinate[Dnode] (d3) at (3,0.5) {}; \coordinate[Dnode] (d4) at (4,0.5) {}; \coordinate[Dnode] (d5) at (3,-0.5) {}; \coordinate[Dnode] (d6) at (4,-0.5) {}; \node[gnode] at (d0) {}; \node[ynode] at (d1) {}; \node[gnode] at (d2) {}; \node[bnode] at (d3) {}; \node[vnode] at (d4) {}; \node[bnode] at (d5) {}; \node[vnode] at (d6) {}; \draw (d0)--(d1)--(d2)--(d3)--(d4) (d2)--(d5)--(d6); \node at (-1,0) {$\widetilde{\dynE}_{6}$}; \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \coordinate[Dnode] (d1) at (1,0) {}; \coordinate[Dnode] (d2) at (2,0) {}; \coordinate[Dnode] (d3) at (3,0.5) {}; \coordinate[Dnode] (d4) at 
(4,0.5) {}; \coordinate[Dnode] (d5) at (3,-0.5) {}; \coordinate[Dnode] (d6) at (4,-0.5) {}; \coordinate[Dnode] (d7) at (5,0.5) {}; \coordinate[Dnode] (d8) at (5,-0.5) {}; \node[ynode] at (d1) {}; \node[gnode] at (d2) {}; \node[bnode] at (d3) {}; \node[vnode] at (d4) {}; \node[bnode] at (d5) {}; \node[vnode] at (d6) {}; \node[bnode] at (d7) {}; \node[bnode] at (d8) {}; \draw (d1)--(d2)--(d3)--(d4)--(d7) (d2)--(d5)--(d6)--(d8); \node at (0,0) {$\widetilde{\dynE}_{7}$}; \end{tikzpicture}\\[-1em] \rotatebox{-90}{$\rightsquigarrow$} & \rotatebox{-90}{$\rightsquigarrow$} &\rotatebox{-90}{$\rightsquigarrow$} \\[-2em] \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \node at (0,0) {$\dynkinfont{F}_{4}$}; \coordinate[Dnode] (1) at (1,0) {}; \coordinate[Dnode] (2) at (2,0) {}; \coordinate[Dnode] (3) at (3,0) {}; \coordinate[Dnode] (4) at (4,0) {}; \node[ynode] at (1) {}; \node[gnode] at (2) {}; \node[bnode] at (3) {}; \node[vnode] at (4) {}; \draw (1)--(2) (3)--(4) ; \draw[double line] (3)-- ++ (2); \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \node at (-1,0) {$\widetilde{\dynE}_{6}^{(2)}$}; \coordinate[Dnode] (0) at (0,0) {}; \coordinate[Dnode] (1) at (1,0) {}; \coordinate[Dnode] (2) at (2,0) {}; \coordinate[Dnode] (3) at (3,0) {}; \coordinate[Dnode] (4) at (4,0) {}; \node[gnode] at (0) {}; \node[ynode] at (1) {}; \node[gnode] at (2) {}; \node[bnode] at (3) {}; \node[vnode] at (4) {}; \draw (0)--(1)--(2) (3)--(4) ; \draw[double line] (3)-- ++ (2) ; \end{tikzpicture} & \begin{tikzpicture}[baseline=-0.5ex,scale=0.7] \node at (0,0) {$\widetilde{\dynF}_{4}$}; \coordinate[Dnode] (1) at (1,0) {}; \coordinate[Dnode] (2) at (2,0) {}; \coordinate[Dnode] (3) at (3,0) {}; \coordinate[Dnode] (4) at (4,0) {}; \coordinate[Dnode] (5) at (5,0) {}; \node[ynode] at (1) {}; \node[gnode] at (2) {}; \node[bnode] at (3) {}; \node[vnode] at (4) {}; \node[bnode] at (5) {}; \draw (1)--(2) (3)--(4)--(5) ; \draw[double line] (3)-- ++ (2) ; \end{tikzpicture} \end{tikzcd} \] \caption{Examples of $N$-graphs with conjugation symmetry} \label{figure:N-graph with conjugation symmetry} \end{figure} \begin{theorem}[Theorem~\ref{thm:folding of N-graphs}]\label{Thm:folding of N-graphs} The following holds: \begin{enumerate} \item The Legendrian $\lambda(\dynkinfont{A}_{2n-1})$ has $\binom{2n}{n}$ Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\dynkinfont{B}_n$. \item The Legendrian $\lambda(\dynkinfont{D}_{4})$ has $8$ Lagrangian fillings which are invariant under the $2\pi/3$-rotation and admit the $Y$-pattern of type $\dynkinfont{G}_2$. \item The Legendrian $\lambda(\widetilde{\dynE}_{6})$ has Lagrangian fillings which are invariant under the $2\pi/3$-rotation and admit the $Y$-pattern of type $\widetilde{\dynG}_2$. \item The Legendrian $\lambda(\widetilde{\dynD}_{2n})$ with $n\ge 3$ has Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\widetilde{\dynB}_n$. \item The Legendrian $\lambda(\widetilde{\dynD}_4)$ has Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\widetilde{\dynC}_2$. \item The Legendrian $\tilde\lambda(\dynkinfont{E}_{6})$ has $105$ Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{F}_4$. 
\item The Legendrian $\tilde\lambda(\dynkinfont{D}_{n+1})$ has $\binom{2n}{n}$ Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{C}_n$.
\item The Legendrian $\tilde\lambda(\widetilde{\dynE}_{6})$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{E}_6^{(2)}$.
\item The Legendrian $\tilde\lambda(\widetilde{\dynE}_{7})$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\widetilde{\dynF}_4$.
\item The Legendrian $\tilde\lambda(\widetilde{\dynD}_4)$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{A}_5^{(2)}$.
\end{enumerate}
\end{theorem}
The study of Lagrangian fillings with symmetry was, again to the best of the authors' knowledge, initiated in \cite{Cas2020}. We clarify the actions on the symplectic and contact manifolds, together with the induced actions on Lagrangian fillings and Legendrian links. Items (1), (2), (6), and (7) in Theorem~\ref{Thm:folding of N-graphs} show that the conjecture \cite[Conjecture 5.4]{Cas2020} is true; furthermore, we extend our results to certain non-simply-laced affine types.
\subsection{Organization of the paper}
The rest of the paper is divided into six sections including appendices. In Section~\ref{sec:cluster algebras}, we review some basics on cluster algebras of finite and affine type. In particular, we focus on structural results about the combinatorics of exchange graphs using Coxeter mutations. In Section~\ref{sec:N-graph}, we recall how $N$-graphs and their moves encode Legendrian surfaces and Legendrian isotopies. We also introduce degenerate $N$-graphs, which will be used to construct Lagrangian fillings having conjugation symmetry. After that, we review the assignment of $Y$-seeds in the cluster structure arising from $N$-graphs together with certain flag moduli. We also discuss the Legendrian mutation on (degenerate) $N$-graphs.
In Section~\ref{sec:N-graph of finite or affine type}, we investigate Legendrian links and $N$-graphs of type $\dynkinfont{ADE}$ or $\widetilde{\dynD} \widetilde{\dynE}$. We discuss the $N$-graph realization of the Coxeter mutation and prove Theorem~\ref{theorem:legendrian loop} on the relationship between Coxeter mutations and Legendrian loops. By combining the structural results on seed patterns of cluster algebras with the $N$-graph realization of the Coxeter mutation, we construct as many Lagrangian fillings as seeds for Legendrian links of type $\dynkinfont{ADE}$ or $\widetilde{\dynD} \widetilde{\dynE}$, and hence prove Theorem~\ref{thm_intro_1}. In Section~\ref{section:folding}, we discuss rotation and conjugation actions on $N$-graphs and invariant $N$-graphs. We also prove Theorem~\ref{Thm:folding of N-graphs}. In Appendix~\ref{section:invariance and admissibility}, we argue that $G$-invariance of type $\dynkinfont{ADE}$ implies $G$-admissibility. Finally, in Appendix~\ref{sec:supplementary pictorial proofs}, we collect several equivalences between different presentations of $N$-graphs.
Readers who are already familiar with the notions of cluster algebras and $N$-graphs may skip Section~\ref{sec:cluster algebras} and Section~\ref{sec:N-graph}, respectively, and start from Section~\ref{sec:N-graph of finite or affine type}.
\subsection*{Acknowledgement}
B. An and Y.
Bae were supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1A0100320). E. Lee was supported by the Institute for Basic Science (IBS-R003-D1).
\subsection{Basics on cluster algebras}
\begin{definition}[{cf. \cite{FZ1_2002, FZ2_2003, FZ4_2007}}]\label{definition:seeds}
A seed and a $Y$-seed are defined as follows.
\begin{enumerate}
\item A \emph{seed} $(\bfx, \clusterfont{\tilde{B}})$ is a pair of
\begin{itemize}
\item a tuple $\bfx = (x_1,\dots,x_m)$ of algebraically independent generators of $\mathbb{F}$, that is, $\mathbb{F} = \bbC(x_1,\dots,x_m)$;
\item an $m \times n$ integer matrix $\clusterfont{\tilde{B}} = (b_{i,j})_{i,j}$ such that the \emph{principal part} $\clusterfont{B} \colonequals (b_{i,j})_{1\leq i,j\leq n}$ is skew-symmetrizable, that is, there exist positive integers $d_1,\dots,d_n$ such that
\[
\textrm{diag}(d_1,\dots,d_n) \cdot \clusterfont{B}
\]
is a skew-symmetric matrix.
\end{itemize}
We refer to $\bfx$ as the \emph{cluster} of a seed $(\bfx, \clusterfont{\tilde{B}})$, to elements $x_1,\dots,x_m$ as \emph{cluster variables}, and to $\clusterfont{\tilde{B}}$ as the \emph{exchange matrix}. Moreover, we call $x_1,\dots,x_n$ \emph{unfrozen} (or \emph{mutable}) variables and $x_{n+1},\dots,x_m$ \emph{frozen} variables.
\item A \emph{$Y$-seed} $(\bfy,\clusterfont{B})$ is a pair of an $n$-tuple $\bfy=(y_1,\dots,y_n)$ of elements in $\mathbb{F}$ and an $n\times n$ skew-symmetrizable matrix $\clusterfont{B}$. We call $\bfy$ the \emph{coefficient tuple} of a $Y$-seed $(\bfy, \clusterfont{B})$ and call $y_1,\dots,y_n$ \emph{coefficients}.
\end{enumerate}
\end{definition}
We say that two seeds $(\bfx, \clusterfont{\tilde{B}})$ and $(\bfx', \clusterfont{\tilde{B}}')$ are \textit{equivalent}, denoted by $(\bfx, \clusterfont{\tilde{B}})\sim(\bfx', \clusterfont{\tilde{B}}')$, if there exists a permutation $\sigma$ on $[m]$ such that $\sigma([n])=[n]$,
\[
x_i' = x_{\sigma(i)}\quad \text{ and } \quad b_{i,j}' = b_{\sigma(i),\sigma(j)} \quad \text{ for }1\le i\le m, 1\le j\le n,
\]
where $\bfx = (x_{1},\dots,x_{m})$, $\bfx' = (x_1',\dots,x_m')$, $\clusterfont{\tilde{B}} = (b_{i,j})$, and $\clusterfont{\tilde{B}}' = (b_{i,j}')$.
Similarly, two $Y$-seeds $(\bfy,\clusterfont{B})$ and $(\bfy',\clusterfont{B}')$ are \emph{equivalent}, denoted by $(\bfy,\clusterfont{B})\sim(\bfy',\clusterfont{B}')$, if there exists a permutation $\sigma$ on $[n]$ such that
\[
y_i' = y_{\sigma(i)}\quad\text{and}\quad b_{i,j}' = b_{\sigma(i),\sigma(j)}\quad\text{for }1\le i,j \le n.
\]
To define cluster algebras, we introduce mutations on exchange matrices, quivers, seeds, and $Y$-seeds as follows.
\begin{enumerate}
\item (Mutation on exchange matrices) For an exchange matrix $\clusterfont{\tilde{B}}$ and $1 \le k \le n$, the mutation $\mu_k(\clusterfont{\tilde{B}}) = (b_{i,j}')$ is defined as follows.
\[
b_{i,j}' =
\begin{cases}
-b_{i,j} & \text{ if } i = k \text{ or } j = k, \\
\displaystyle b_{i,j} + \frac{|b_{i,k}| b_{k,j} + b_{i,k} | b_{k,j}|} {2} & \text{ otherwise}.
\end{cases}
\]
We say that \emph{$\clusterfont{\tilde{B}}' =(b_{i,j}')$ is the mutation of $\clusterfont{\tilde{B}}$ at $k$}.
\item (Mutation on quivers) We call a finite directed multigraph $\clusterfont{Q}$ a \emph{quiver} if it does not have oriented cycles of length at most $2$. The adjacency matrix $\clusterfont{\tilde{B}}(\clusterfont{Q})$ of a quiver is always skew-symmetric.
Moreover, $\mu_k(\clusterfont{\tilde{B}}(\clusterfont{Q}))$ is again the adjacency matrix of a quiver $\clusterfont{Q}'$. We define $\mu_k(\clusterfont{Q})$ to be the quiver satisfying
\[
\clusterfont{\tilde{B}}(\mu_k(\clusterfont{Q})) = \mu_k(\clusterfont{\tilde{B}}(\clusterfont{Q})),
\]
and say that \emph{$\mu_k(\clusterfont{Q})$ is the mutation of $\clusterfont{Q}$ at $k$}.
\item (Mutation on seeds) For a seed $(\bfx, \clusterfont{\tilde{B}})$ and an integer $1 \leq k \leq n$, the \emph{mutation} $\mu_k(\bfx, \clusterfont{\tilde{B}}) = (\bfx', \mu_k(\clusterfont{\tilde{B}}))$ is defined as follows:
\begin{equation*}
x_i' =
\begin{cases}
x_i &\text{ if } i \neq k,\\
\displaystyle x_k^{-1}\left( \prod_{b_{j,k} > 0} x_j^{b_{j,k}} + \prod_{b_{j,k} < 0}x_j^{-b_{j,k}} \right) & \text{ otherwise}.
\end{cases}
\end{equation*}
\item (Mutation on $Y$-seeds) The \emph{$Y$-seed mutation} (or \emph{cluster $\mathcal{X}$-mutation}, \emph{$\mathcal{X}$-cluster mutation}) on a $Y$-seed $(\bfy, \clusterfont{B})$ at $k\in[n]$ is a $Y$-seed $(\bfy'=(y_1',\dots, y_n'),\clusterfont{B}'=\mu_k(\clusterfont{B}))$, where for each $1 \le i\le n$,
\[
y_i' =
\begin{cases}
\displaystyle {y}_{i} {y}_{k}^{\max\{b_{i,k},0\}}(1+{y}_{k})^{-b_{i,k}} & \text{ if }i \neq k, \\
{y}_{k}^{-1} &\text{ otherwise}.
\end{cases}
\]
\end{enumerate}
\begin{example}\label{example_mutation_skewsymmetrizable}
Let $n = m = 2$. Suppose that an initial seed is given by
\[
(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) = \left( (x_1,x_2), \begin{pmatrix} 0 & 1 \\ -3 & 0 \end{pmatrix} \right).
\]
Considering mutations $\mu_1(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ and $\mu_2\mu_1(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$, we obtain the following.
\begin{align*}
\mu_1(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) &= \left( \left( \frac{1+x_2^3}{x_1},x_2 \right), \begin{pmatrix} 0 & -1 \\ 3 & 0 \end{pmatrix} \right),\\
\mu_2\mu_1(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) &= \left( \left( \frac{1+x_2^3}{x_1}, \frac{1+x_1+x_2^3}{x_1x_2} \right), \begin{pmatrix} 0 & 1 \\ -3 & 0 \end{pmatrix} \right).
\end{align*}
\end{example}
\begin{remark}\label{rmk_mutation_on_quivers}
Let $k$ be a vertex in a quiver $\clusterfont{Q}$ on $[m]$. The mutation $\mu_k(\clusterfont{Q})$ can also be described via a sequence of three steps:
\begin{enumerate}
\item For each directed two-arrow path $i \to k \to j$, add a new arrow $i \to j$.
\item Reverse the direction of all arrows incident to the vertex $k$.
\item Repeatedly remove directed $2$-cycles until unable to do so.
\end{enumerate}
\end{remark}
\begin{remark}\label{rmk_mutation_commutes}
Let $\clusterfont{\tilde{B}} = (b_{i,j})$ be an exchange matrix of size $m \times n$. For $k,\ell \in [n]$, if $b_{k,\ell} = b_{\ell,k} = 0$, then the mutations at $k$ and $\ell$ commute with each other: $\mu_{\ell}(\mu_k(\clusterfont{\tilde{B}})) = \mu_k(\mu_{\ell}(\clusterfont{\tilde{B}}))$. Similarly, for a quiver $\clusterfont{Q}$ on $[m]$, if there does not exist an arrow connecting mutable vertices $k$ and $\ell$, then we have $\mu_{\ell}(\mu_k(\clusterfont{Q})) = \mu_k(\mu_{\ell}(\clusterfont{Q}))$.
\end{remark}
We say a quiver $\clusterfont{Q}'$ is \emph{mutation equivalent} to another quiver $\clusterfont{Q}$ if there exists a sequence of mutations $\mu_{j_1},\dots,\mu_{j_{\ell}}$ which connects $\clusterfont{Q}'$ and $\clusterfont{Q}$, that is,
\[
\clusterfont{Q}' = (\mu_{j_{\ell}} \cdots \mu_{j_1})(\clusterfont{Q}).
\]
Similarly, we say an exchange matrix $\clusterfont{\tilde{B}}'$ is \emph{mutation equivalent} to another matrix $\clusterfont{\tilde{B}}$ if $\clusterfont{\tilde{B}}'$ is obtained by applying a sequence of mutations to $\clusterfont{\tilde{B}}$.
An immediate check shows that $\mu_k(\bfx,\clusterfont{\tilde{B}})$ is again a seed, that $\mu_k(\bfy,\clusterfont{B})$ is again a $Y$-seed, and that a mutation is an involution, that is, its square is the identity. Also, note that the mutation on seeds does not change the frozen variables $x_{n+1},\dots,x_m$.
Let $\mathbb{T}_n$ denote the $n$-regular tree whose edges are labeled by $1,\dots,n$. Except for $n = 1$, there are infinitely many vertices on the tree $\mathbb{T}_n$. For example, we present the regular trees $\mathbb{T}_2$ and $\mathbb{T}_3$ in Figure~\ref{figure_regular_trees_2_and_3}.
\begin{figure}
\begin{tabular}{cc}
\begin{tikzpicture} \tikzset{every node/.style={scale=0.8}} \tikzset{cnode/.style = {circle, fill,inner sep=0pt, minimum size= 1.5mm}} \node[cnode] (1) {}; \node[cnode, right of=1 ] (2) {}; \node[cnode, right of=2 ] (3) {}; \node[cnode, right of=3 ] (4) {}; \node[cnode, right of=4 ] (5) {}; \node[cnode, right of=5 ] (6) {}; \node[left of=1] {$\cdots$}; \node[right of=6] {$\cdots$}; \draw (1)--(2) node[above, midway] {$1$}; \draw (2)--(3) node[above, midway] {$2$}; \draw (3)--(4) node[above, midway] {$1$}; \draw (4)--(5) node[above, midway] {$2$}; \draw (5)--(6) node[above, midway] {$1$}; \end{tikzpicture} &
\begin{tikzpicture} \tikzset{every node/.style={scale=0.8}} \tikzset{cnode/.style = {circle, fill,inner sep=0pt, minimum size= 1.5mm}} \node[cnode] (1) {}; \node[cnode, below right of =1] (2) {}; \node[cnode, below of =2] (3) {}; \node[cnode, above right of=2] (4){}; \node[cnode, above of =4] (5) {}; \node[cnode, below right of = 4] (6) {}; \node[cnode, below of= 6] (7) {}; \node[cnode, above right of = 6] (8) {}; \node[cnode, above of = 8] (9) {}; \node[cnode, below right of = 8] (10) {}; \node[cnode, below of = 10] (11) {}; \node[cnode, above right of = 10] (12) {}; \node[left of = 1] {$\cdots$}; \node[right of = 12] {$\cdots$}; \draw (1)--(2) node[above, midway, sloped] {$1$}; \draw (4)--(6) node[above, midway, sloped] {$1$}; \draw (8)--(10) node[above, midway, sloped] {$1$}; \foreach \x [evaluate ={ \x as \y using int(\x +2)} ] in {2, 6, 10}{ \draw (\x)--(\y) node[below, midway, sloped] {$2$}; } \foreach \x [evaluate = {\x as \y using int(\x +1)}] in {2, 4, 6, 8, 10}{ \draw (\x)--(\y) node[above, midway, sloped] {$3$}; } \end{tikzpicture}\\[2ex]
$\mathbb{T}_2$ & $\mathbb{T}_3$
\end{tabular}
\caption{The $n$-regular trees for $n=2$ and $n = 3$.}
\label{figure_regular_trees_2_and_3}
\end{figure}
A \emph{cluster pattern} (or \emph{seed pattern}) is an assignment
\[
\mathbb{T}_n \to \{\text{seeds in } \mathbb{F}\}, \quad t \mapsto (\bfx_t, \clusterfont{\tilde{B}}_t)
\]
such that if $\begin{tikzcd} t \arrow[r,dash, "k"] & t' \end{tikzcd}$ in $\mathbb{T}_n$, then $\mu_k(\bfx_t, \clusterfont{\tilde{B}}_t) = (\bfx_{t'}, \clusterfont{\tilde{B}}_{t'})$. Let $\{ (\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ be a cluster pattern with $\bfx_t = (x_{1;t},\dots,x_{m;t})$. Since the mutation does not change frozen variables, we may let $x_{n+1} = x_{n+1;t},\dots,x_m = x_{m;t}$.
\begin{definition}[{cf. \cite{FZ2_2003}}]
Let $\{ (\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ be a cluster pattern with $\bfx_t = (x_{1;t},\dots,x_{m;t})$.
The \emph{cluster algebra} $\cA(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n})$ is defined to be the $\bbC[x_{n+1},\dots,x_m]$-subalgebra of $\mathbb{F}$ generated by all the cluster variables $\bigcup_{t \in \mathbb{T}_n} \{x_{1;t},\dots,x_{n;t}\}$.
\end{definition}
If we fix a vertex $t_0 \in \mathbb{T}_n$, then a cluster pattern $\{ (\bfx_{t}, \clusterfont{\tilde{B}}_{t}) \}_{t \in \mathbb{T}_n}$ is constructed from the seed~$(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$. In this case, we call $(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ an \emph{initial seed}. For this reason, we simply denote by $\cA(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ the cluster algebra given by the cluster pattern constructed from the initial seed~$(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$.
\begin{example}\label{example_A2_example}
Let $n = m = 2$. Suppose that an initial seed is given by
\[
(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) = \left( (x_1,x_2), \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right).
\]
We present the cluster pattern obtained from the initial seed $(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$.
\begin{center}
\begin{tikzcd}
\left( (x_2,x_1), \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right) \arrow[r, color=white, "\textcolor{black}{\sim}" description] & (\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) = \left( (x_1,x_2), \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right) \arrow[d,<->, "\mu_1"] \\
\left( (\frac{1+x_1}{x_2}, x_1), \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right) \arrow[u,<->, "\mu_1"] & \left( \left(\frac{1+x_2}{x_1}, x_2\right), \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right) \arrow[d, <->,"\mu_2"]\\
\left( \left(\frac{1+x_1}{x_2}, \frac{1+x_1+x_2}{x_1x_2}\right), \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right) \arrow[u,<->, "\mu_2"] & \left( \left(\frac{1+x_2}{x_1}, \frac{1+x_1+x_2}{x_1x_2}\right), \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right) \arrow[l,<->, "\mu_1"]
\end{tikzcd}
\end{center}
Accordingly, we have
\[
\cA(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}) = \cA(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}) = \bbC\left[x_1,x_2,\frac{1+x_2}{x_1}, \frac{1+x_1+x_2}{x_1x_2}, \frac{1+x_1}{x_2}\right].
\]
We notice that there are only five seeds in this case. Indeed, it becomes a cluster pattern of type~$\dynkinfont{A}_2$ (see Example~\ref{example_root_and_A2}).
\end{example}
\begin{remark}\label{rmk_x_cluster_mutation}
One can obtain a $Y$-pattern from a given cluster pattern as follows. Let $\{ (\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ be a cluster pattern with $\bfx_t = (x_{1;t},\dots,x_{m;t})$. For $t \in \mathbb{T}_n$, we define an assignment $(\mathbf x_t, \clusterfont{\tilde{B}}_t) \stackrel{\Theta}{\mapsto} (\hat{\mathbf y}_t,\clusterfont{B}_t)$, where $\hat{\mathbf y}_t= (\hat{y}_{1;t},\dots,\hat{y}_{n;t})$ is defined by
\[
\hat{y}_{i;t} = \prod_{j\in[m]} x_{j;t}^{b^{(t)}_{j,i}} \quad \text{ for } i \in [n].
\]
Here, $\clusterfont{\tilde{B}}_t = (b^{(t)}_{i,j})$ and $\clusterfont{B}_t$ is the principal part of $\clusterfont{\tilde{B}}_t$. Then the assignment $t \mapsto (\hat{\mathbf y}_t, \clusterfont{B}_t)$ provides a $Y$-pattern which commutes with the mutation maps; indeed, we obtain $\mu_k(\Theta(\mathbf x, \clusterfont{\tilde{B}})) = \Theta(\mu_k(\mathbf x,\clusterfont{\tilde{B}}))$. We notice that if the exchange matrix $\clusterfont{\tilde{B}}_t$ has full rank, then the variables $\hat{y}_{1;t},\dots,\hat{y}_{n;t}$ are algebraically independent.
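For instance, for the initial seed $(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ of Example~\ref{example_A2_example}, this assignment gives $\hat{y}_{1;t_0} = x_2^{-1}$ and $\hat{y}_{2;t_0} = x_1$.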
Note that the mutation preserves the rank of the exchange matrix as proved in~\cite[Lemma~3.2]{BFZ3_2005}.
\end{remark}
\subsection{Cluster algebras of Dynkin type}
The number of cluster variables in Example~\ref{example_A2_example} is finite even though the number of vertices in the graph $\mathbb{T}_2$ is infinite. We call such cluster algebras \emph{of finite type}. More precisely, we recall the following definition.
\begin{definition}[{\cite{FZ2_2003}}]
A cluster algebra is said to be \emph{of finite type} if it has finitely many cluster variables.
\end{definition}
Classifying cluster algebras of finite type turns out to be closely related to studying their exchange matrices. The \emph{Cartan counterpart} $C(\clusterfont{B}) = (c_{i,j})$ of the principal part $\clusterfont{B}$ of an exchange matrix $\clusterfont{\tilde{B}}$ is defined by
\[
c_{i,j} =
\begin{cases}
2 & \text{ if } i = j, \\
-|b_{i,j}| & \text{ otherwise}.
\end{cases}
\]
Since $\clusterfont{B}$ is skew-symmetrizable, its Cartan counterpart $C(\clusterfont{B})$ is symmetrizable.
We say that a quiver $\clusterfont{Q}$ is \emph{acyclic} if it does not have directed cycles. Similarly, for a skew-symmetrizable matrix $\clusterfont{B} = (b_{i,j})$, we say that it is \emph{acyclic} if there are no sequences $j_1,j_2,\dots,j_{\ell}$ with $\ell \ge 3$ such that
\[
b_{j_1,j_2}, b_{j_2,j_3},\dots,b_{j_{\ell-1},j_{\ell}},b_{j_{\ell},j_1} > 0.
\]
We say a seed $\Sigma = (\mathbf x, \clusterfont{\tilde{B}})$ is \emph{acyclic} if so is its principal part $\clusterfont{B}$.
\begin{definition}\label{def_quiver_of_type_X}
For a finite or affine Dynkin type $\dynkinfont{Z}$, we define a quiver $\clusterfont{Q}$, a matrix $\clusterfont{\tilde{B}}$, a cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$, a $Y$-pattern $\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb{T}_n}$, or a cluster algebra $\cA(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ \emph{of type~$\dynkinfont{Z}$} as follows.
\begin{enumerate}
\item A quiver is \textit{of type~$\dynkinfont{Z}$} if it is mutation equivalent to an \emph{acyclic} quiver whose underlying graph is isomorphic to the Dynkin diagram of type $\dynkinfont{Z}$.
\item A skew-symmetrizable matrix $\clusterfont{B}$ is \textit{of type $\dynkinfont{Z}$} if it is mutation equivalent to an acyclic skew-symmetrizable matrix whose Cartan counterpart is isomorphic to the Cartan matrix of type~$\dynkinfont{Z}$.
\item A cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ or a $Y$-pattern $\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb{T}_n}$ is \textit{of type $\dynkinfont{Z}$} if for some $t \in \mathbb{T}_n$, the Cartan counterpart $C(\clusterfont{B}_t)$ is of type $\dynkinfont{Z}$.
\item A cluster algebra $\cA(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ is \textit{of type $\dynkinfont{Z}$} if its cluster pattern is of type $\dynkinfont{Z}$.
\end{enumerate}
\end{definition}
Here, we say that two matrices $C_1$ and $C_2$ are \emph{isomorphic} if they are conjugate to each other via a permutation matrix, that is, $C_2 = P^{-1} C_1 P$ for some permutation matrix~$P$. One may wonder whether there exist exchange matrices in the same seed pattern having different Dynkin types. However, it is proved in~\cite[Corollary~4]{CalderoKeller06} that if two acyclic skew-symmetrizable matrices are mutation equivalent, then there exists a sequence of mutations from one to the other such that the intermediate skew-symmetrizable matrices are all acyclic.
Indeed, if two acyclic skew-symmetrizable matrices are mutation equivalent, then their Cartan counterparts are isomorphic.
\begin{proposition}[{cf. \cite[Corollary~4]{CalderoKeller06}}]\label{prop_quiver_of_same_type_are_mutation_equivalent}
Let $\clusterfont{B}$ and $\clusterfont{B}'$ be acyclic skew-symmetrizable matrices. Then the following are equivalent:
\begin{enumerate}
\item the Cartan matrices $C(\clusterfont{B})$ and $C(\clusterfont{B}')$ are isomorphic;
\item $\clusterfont{B}$ and $\clusterfont{B}'$ are mutation equivalent.
\end{enumerate}
\end{proposition}
Accordingly, a quiver, a matrix, a cluster pattern, or a cluster algebra of type $\dynkinfont{Z}$ is well-defined. The following theorem presents a classification of cluster algebras of finite type.
\begin{theorem}[{\cite{FZ2_2003}}] \label{thm_FZ_finite_type}
Let $\{ (\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ be a cluster pattern with an initial seed $(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$. Let $\mathcal{A}(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ be the corresponding cluster algebra. Then, the cluster algebra $\mathcal{A}(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ is of finite type if and only if $\mathcal{A}(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ is of finite Dynkin type.
\end{theorem}
We provide a list of all of the irreducible finite type root systems and their Dynkin diagrams in Table~\ref{table_finite}. In Tables~\ref{table_standard_affine} and~\ref{table_twisted_affine}, we present lists of standard affine root systems and twisted affine root systems, respectively. They are the same as presented in Tables Aff 1, Aff 2, and Aff 3 of~\cite[Chapter~4]{Kac83}, and we write $\widetilde{\dynkinfont{Z}} = \dynkinfont{Z}^{(1)}$. We notice that the number of vertices of the standard affine Dynkin diagram of type $\widetilde{\dynX}_{n-1}$ is $n$, while we do not specify the vertex numbering.
We note that all Dynkin diagrams of finite or affine type except $\widetilde{\dynA}_{n-1}$ have no (undirected) cycles. Accordingly, we may omit the acyclicity condition in Definition~\ref{def_quiver_of_type_X} except for type $\widetilde{\dynA}_{n-1}$. On the other hand, if a quiver is a directed $n$-cycle, then the corresponding Cartan counterpart is of type $\widetilde{\dynA}_{n-1}$ while it is mutation equivalent to a quiver of type $\dynkinfont{D}_n$ (see~Type IV in \cite{Vatne10}). The mutation equivalence classes of acyclic quivers of type $\widetilde{\dynA}_{n-1}$ are described in~\cite[Lemma~6.8]{FST08}. Let $\clusterfont{Q}$ and $\clusterfont{Q}'$ be two $n$-cycles for $n \geq 3$. Suppose that in $\clusterfont{Q}$, there are $p$ edges of one direction and $q = n - p$ edges of the opposite direction. Also, in $\clusterfont{Q}'$, there are $p'$ edges of one direction and $q' = n - p'$ edges of the opposite direction. Then the two quivers $\clusterfont{Q}$ and $\clusterfont{Q}'$ are mutation equivalent if and only if the unordered pairs $\{p,q\}$ and $\{p',q'\}$ coincide. We say that a quiver $\clusterfont{Q}$ is of type $\widetilde{\dynA}_{p,q}$ if it has $p$ edges of one direction and $q$ edges of the opposite direction. We depict some examples of quivers of type $\widetilde{\dynA}_{p,q}$ in Figure~\ref{fig_example_Apq}.
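For instance, the directed $3$-cycle $1 \to 2 \to 3 \to 1$, for which $\{p,q\} = \{3,0\}$, is indeed of finite type: mutating at the vertex $1$ creates an arrow $3 \to 2$ from the path $3 \to 1 \to 2$, reverses the two arrows incident to $1$, and cancels the resulting directed $2$-cycle between $2$ and $3$, leaving the linear quiver $2 \to 1 \to 3$ of type $\dynkinfont{A}_3 = \dynkinfont{D}_3$. On the other hand, for an acyclic orientation of an $n$-cycle, a mutation at a sink or a source simply reverses the two incident arrows, one pointing in each direction along the cycle, so the unordered pair $\{p,q\}$ is preserved under such mutations.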
\begin{figure}[ht] \begin{tabular}{cccc} \begin{tikzpicture}[scale = 0.5] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (3) {}; \node[Dnode] (1) [below left = 0.6cm and 0.6cm of 3]{}; \node[Dnode] (2) [below right = 0.6cm and 0.6cm of 3] {}; \draw[->] (3)--(1); \draw[->] (3)--(2); \draw[->] (1)--(2); \end{tikzpicture} & \begin{tikzpicture}[scale = 0.5] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [below = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [above = of 3] {}; \draw[->] (4) -- (1); \draw[->] (4) -- (3); \draw[->] (3) -- (2); \draw[->] (2) --(1); \end{tikzpicture} & \begin{tikzpicture}[scale = 0.5] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [below = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [above = of 3] {}; \draw[->] (4) -- (1); \draw[->] (4) -- (3); \draw[->] (3) -- (2); \draw[->] (1)--(2); \end{tikzpicture} & \raisebox{4em}{ \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \tikzstyle{state}=[draw, circle, inner sep = 0.07cm] \tikzset{every node/.style={scale=0.7}} \foreach \x in {1,...,7, 9,10,11}{ \node[Dnode] (\x) at (\x*30:3) {}; } \node(12) at (12*30:3) {$\vdots$}; \node[rotate=-30] (8) at (8*30:3) {$\cdots$}; \foreach \x [evaluate={\y=int(\x+1);}] in {3,...,6,9}{ \draw[->] (\x)--(\y); } \foreach \x [evaluate={\y=int(\x-1);}] in {3,2,11}{ \draw[->] (\x)--(\y); } \curlybrace[]{100}{290}{3.5}; \draw (190:4) node[rotate=0] {$p$}; \curlybrace[]{-50}{80}{3.5}; \draw (15:4) node[rotate=0] {$q$}; \end{tikzpicture}} \\ $\widetilde{\dynA}_{1,2}$ & $\widetilde{\dynA}_{1,3}$ & $\widetilde{\dynA}_{2,2}$ & $\widetilde{\dynA}_{p,q}$ \end{tabular} \caption{Quivers of type $\widetilde{\dynA}_{p,q}$.}\label{fig_example_Apq} \end{figure} In what follows, we fix an ordering on the simple roots as in Table~\ref{table_finite}; our conventions agree with that in the standard textbook of Humphreys~\cite{Humphreys}. 
\begin{table}[t] \begin{center} \begin{tabular}{c|l } \toprule $\Phi$ & Dynkin diagram \\ \midrule $\dynkinfont{A}_n$ $(n \geq 1)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \draw (1)--(2)--(3) (4)--(5); \draw[dotted] (3)--(4); \end{tikzpicture} \\ $\dynkinfont{B}_n$ $(n \geq 2)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (4)--(5); \end{tikzpicture} \\ $\dynkinfont{C}_n$ $(n \geq 3)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (5)--(4); \end{tikzpicture} \\ $\dynkinfont{D}_n$ $(n \geq 4)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [above right= 0.3cm and 1cm of 4] {}; \node[Dnode] (6) [below right= 0.3cm and 1cm of 4] {}; \draw(1)--(2) (3)--(4)--(5) (4)--(6); \draw[dotted] (2)--(3); \end{tikzpicture} \\ $\dynkinfont{E}_6$& \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) [right=of 5]{}; \draw(1)--(3)--(4)--(5)--(6) (2)--(4); \end{tikzpicture} \\ $\dynkinfont{E}_7$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) [right=of 5]{}; \node[Dnode] (7) [right=of 6]{}; \draw(1)--(3)--(4)--(5)--(6)--(7) (2)--(4); \end{tikzpicture} \\ $\dynkinfont{E}_8$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) [right=of 5]{}; \node[Dnode] (7) [right=of 6]{}; \node[Dnode] (8) [right=of 7]{}; \draw(1)--(3)--(4)--(5)--(6)--(7)--(8) (2)--(4); \end{tikzpicture} \\ $\dynkinfont{F}_4$ & \begin{tikzpicture}[scale = .5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \draw (1)--(2) (3)--(4); \draw[double line] (2)-- (3); \end{tikzpicture} \\ $\dynkinfont{G}_2$ & \begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \draw[triple line] (2)--(1); \draw (1)--(2); \end{tikzpicture}\\ \bottomrule \end{tabular} \end{center} \caption{Dynkin diagrams of finite type}\label{table_finite} \end{table} \begin{table}[ht] \begin{center} 
\begin{tabular}{l|l} \toprule $\Phi$ & Dynkin diagram \\ \midrule $\widetilde{\dynA}_1$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex, decoration={ markings, mark=at position 0.4 with {\arrow[line width = 0.5pt,scale=1]{angle 90}}}] \tikzset{every node/.style={scale=0.7}} \node[Dnode ] (1) {}; \node[Dnode ] (2) [right = of 1] {}; \draw[double distance = 1.5pt, postaction={decorate}] (2)--(1); \draw[postaction={decorate}, draw=none] (1)--(2); \end{tikzpicture} \\ $\widetilde{\dynA}_{n-1}$ $(n \geq 3)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [above =of 3] {}; \draw (1)--(2)--(3) (4)--(5) (1)--(6)--(5); \draw[dotted] (3)--(4); \end{tikzpicture} \\ $\widetilde{\dynB}_{n-1}$ $(n \geq 4)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [below left = 0.6cm and 0.6cm of 1] {}; \node[Dnode] (7) [above left = 0.6cm and 0.6cm of 1] {}; \draw (1)--(2) (3)--(4) (6)--(1)--(7); \draw [dotted] (2)--(3); \draw[double line] (4)--(5); \end{tikzpicture} \\ $\widetilde{\dynC}_{n-1}$ $(n \geq 3)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [left =of 1] {}; \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (5)--(4); \draw[double line] (6)--(1); \end{tikzpicture} \\ $\widetilde{\dynD}_{n-1}$ $(n \geq 5)$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [ below right = 0.6cm and 0.6cm of 4] {}; \node[Dnode] (6) [above right= 0.6cm and 0.6cm of 4] {}; \node[Dnode] (7) [above left = 0.6cm and 0.6cm of 1] {}; \node[Dnode] (8) [below left=0.6cm and 0.6cm of 1] {}; \draw(1)--(2) (3)--(4)--(5) (4)--(6) (7)--(1)--(8); \draw[dotted] (2)--(3); \end{tikzpicture} \\ $\widetilde{\dynE}_6$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) [right=of 5]{}; \node[Dnode] (7) [above=of 2]{}; \draw(1)--(3)--(4)--(5)--(6) (7)--(2)--(4); \end{tikzpicture} \\ $\widetilde{\dynE}_7$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (8) [left=of 1] {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) [right=of 5]{}; \node[Dnode] (7) [right=of 6]{}; \draw (8)--(1)--(3)--(4)--(5)--(6)--(7) (2)--(4); \end{tikzpicture} \\ $\widetilde{\dynE}_8$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (3) [right=of 1] {}; \node[Dnode] (4) [right=of 3] {}; \node[Dnode] (2) [above=of 4] {}; \node[Dnode] (5) [right=of 4] {}; \node[Dnode] (6) 
[right=of 5]{}; \node[Dnode] (7) [right=of 6]{}; \node[Dnode] (8) [right=of 7]{}; \node[Dnode] (9) [right=of 8]{}; \draw(1)--(3)--(4)--(5)--(6)--(7)--(8)--(9) (2)--(4); \end{tikzpicture} \\ $\widetilde{\dynF}_4$ & \begin{tikzpicture}[scale = .5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \draw (1)--(2) (2)--(3) (4)--(5); \draw[double line] (3)--(4); \end{tikzpicture} \\ $\widetilde{\dynG}_2$ & \begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right=of 2] {}; \draw[triple line] (2)--(3); \draw (1)--(2); \draw (2)--(3); \end{tikzpicture}\\ \bottomrule \end{tabular} \end{center} \caption{Dynkin diagrams of standard affine root systems}\label{table_standard_affine} \end{table} \begin{table}[ht] \begin{tabular}{l|l} \toprule $\Phi$ & Dynkin diagram \\ \midrule $\dynkinfont{A}_2^{(2)}$ & \begin{tikzpicture}[scale=.5, baseline=-.5ex, decoration={ markings, mark=at position 0.6 with {\arrow[line width = 0.5pt,scale=1]{angle 90}}}] \tikzset{every node/.style={scale=0.7}} \node[Dnode ] (1) {}; \node[Dnode ] (2) [right = of 1] {}; \draw[double distance = 2.7pt] (1)--(2); \draw[double distance = 0.9pt, postaction={decorate}] (1)--(2); \end{tikzpicture} \\ $\dynkinfont{A}_{2(n-1)}^{(2)}$ ($n \ge 3$) & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [left =of 1] {}; \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (4)--(5); \draw[double line] (6)--(1); \end{tikzpicture} \\ $\dynkinfont{A}_{2(n-1)-1}^{(2)}$ ($n \ge 4$) & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [below left = 0.6cm and 0.6cm of 1] {}; \node[Dnode] (7) [above left = 0.6cm and 0.6cm of 1] {}; \draw (1)--(2) (3)--(4) (6)--(1)--(7); \draw [dotted] (2)--(3); \draw[double line] (5)--(4); \end{tikzpicture} \\ $\dynkinfont{D}_{n}^{(2)}$ ($n \ge 3$) & \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \node[Dnode] (6) [left =of 1] {}; \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (4)--(5); \draw[double line] (1)--(6); \end{tikzpicture} \\ $\dynkinfont{E}_6^{(2)}$ & \begin{tikzpicture}[scale = .5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \draw (1)--(2) (2)--(3) (4)--(5); \draw[double line] (4)--(3); \end{tikzpicture} \\ $\dynkinfont{D}_4^{(3)}$ & \begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right=of 2] {}; \draw[triple line] (3)--(2); \draw (1)--(2); \draw (2)--(3); \end{tikzpicture}\\ \bottomrule 
\end{tabular}
\caption{Dynkin diagrams of twisted affine root systems}\label{table_twisted_affine}
\end{table}
For a Dynkin type $\dynkinfont{Z}$, we say that $\dynkinfont{Z}$ is \emph{simply-laced} if its Dynkin diagram has only single edges; otherwise, $\dynkinfont{Z}$ is \emph{non-simply-laced}. Recall that the Cartan matrix associated to a Dynkin diagram~$\dynkinfont{Z}$ can be read directly from the diagram~$\dynkinfont{Z}$ as follows:
\begin{center}
\setlength{\tabcolsep}{20pt}
\begin{tabular}{ccccc}
\begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$i$}] (2) {}; \node[Dnode, label=below:{$j$}] (3) [right=of 2] {}; \draw (2)-- (3); \end{tikzpicture}&
\begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$i$}] (2) {}; \node[Dnode, label=below:{$j$}] (3) [right=of 2] {}; \draw[double line] (2)--(3); \end{tikzpicture}&
\begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$i$}] (2) {}; \node[Dnode, label=below:{$j$}] (3) [right=of 2] {}; \draw[triple line] (2)--(3); \draw (2)--(3); \end{tikzpicture} &
\begin{tikzpicture}[scale=.5, baseline=-.5ex, decoration={ markings, mark=at position 0.6 with {\arrow[line width = 0.5pt,scale=1]{angle 90}}}] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$i$}] (1) {}; \node[Dnode, label=below:{$j$}] (2) [right = of 1] {}; \draw[double distance = 2.7pt] (1)--(2); \draw[double distance = 0.9pt, postaction={decorate}] (1)--(2); \end{tikzpicture} &
\begin{tikzpicture}[scale=.5, baseline=-.5ex, decoration={ markings, mark=at position 0.4 with {\arrow[line width = 0.5pt,scale=1]{angle 90}}}] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$i$}] (1) {}; \node[Dnode, label=below:{$j$}] (2) [right = of 1] {}; \draw[double distance = 1.5pt, postaction={decorate}] (2)--(1); \draw[postaction={decorate}, draw=none] (1)--(2); \end{tikzpicture}\\
$c_{i,j} = -1$ &$c_{i,j} = -2$ & $c_{i,j} = -3$ & $c_{i,j} = -4$ & $c_{i,j} = -2$ \\
$c_{j,i} = -1$ &$c_{j,i} = -1$ & $c_{j,i} = -1$ & $c_{j,i} = -1$ & $c_{j,i} = -2$
\end{tabular}
\end{center}
For example, the Cartan matrix $(c_{i,j})$ of the diagram \begin{tikzpicture}[scale =.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{1}] (1) {}; \node[Dnode, label=below:{2}] (2) [right = of 1] {}; \draw[triple line] (2)--(1); \draw (1)--(2); \end{tikzpicture} of type $\dynkinfont{G}_2$ is
\begin{equation}\label{eq_Cartan_G2}
\begin{pmatrix}
2 & -1 \\ -3 & 2
\end{pmatrix}.
\end{equation}
Therefore, for each non-simply-laced Dynkin diagram $\dynkinfont{Z}$, any exchange matrix $\clusterfont{B}$ of type $\dynkinfont{Z}$ is \emph{not} skew-symmetric but skew-symmetrizable. Hence it never comes from any quiver.
\begin{assumption}\label{assumption_finite}
Throughout this paper, we assume that for any cluster algebra, the principal part $\clusterfont{B}_{t_0}$ of the initial exchange matrix is acyclic of \textit{finite or affine Dynkin} type unless mentioned otherwise.
\end{assumption}
In Table~\ref{table_seeds_and_cluster_variables}, we provide an enumeration of the numbers of seeds and cluster variables in each cluster algebra of finite (irreducible) type (cf.~\cite[Figure~5.17]{FWZ_chapter45}).
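For instance, for type $\dynkinfont{A}_2$ the formulas in Table~\ref{table_seeds_and_cluster_variables} give $\frac{1}{4}\binom{6}{3} = 5$ seeds and $\frac{2 \cdot 5}{2} = 5$ cluster variables, which matches the five seeds and five cluster variables computed explicitly in Example~\ref{example_A2_example}.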
\begin{table}[htb]
\setlength{\tabcolsep}{4pt}
\begin{tabular}{c|ccccccccc}
\toprule
$\Phi$ & $\dynkinfont{A}_n$ & $\dynkinfont{B}_n$ & $\dynkinfont{C}_n$ & $\dynkinfont{D}_n$ & $\dynkinfont{E}_6$ & $\dynkinfont{E}_7$ & $\dynkinfont{E}_8$ & $\dynkinfont{F}_4$ & $\dynkinfont{G}_2$ \\
\midrule
$\#$seeds & $\displaystyle \frac{1}{n+2}{\binom{2n+2}{n+1}}$ & $\displaystyle \binom{2n}{n}$ & $\displaystyle \binom{2n}{n}$ & $\displaystyle \frac{3n-2}{n} \binom{2n-2}{n-1}$ & $833$ & $4160$ & $25080$ & $105$ & $8$ \\[1.5em]
$\#$clvar & $\displaystyle \frac{n(n+3)}{2}$ & $n(n+1)$ & $n(n+1)$ & $n^2$ & $42$ & $70$ & $128$ & $28$ & $8$ \\
\bottomrule
\end{tabular}
\caption{Enumeration of seeds and cluster variables}\label{table_seeds_and_cluster_variables}
\end{table}
\begin{example}\label{example_root_and_A2}
Continuing Example~\ref{example_A2_example}, the Cartan counterpart of the principal part $\clusterfont{B}_{t_0}$ is given by
\[
C(\clusterfont{B}_{t_0}) = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},
\]
which is the Cartan matrix of type $\dynkinfont{A}_2$. Accordingly, by Theorem~\ref{thm_FZ_finite_type}, the cluster algebra $\cA(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ is of finite type. Indeed, there are only five seeds in the seed pattern.
\end{example}
\subsection{Folding}\label{sec:folding}
Under certain conditions, one can \textit{fold} cluster patterns to produce new ones. This procedure is used to study cluster algebras of non-simply-laced type from those of simply-laced type (see Figure~\ref{fig_folding} and Table~\ref{figure:all possible foldings}). In this section, we recall \textit{folding} of cluster algebras from~\cite{FWZ_chapter45}. We also refer the reader to~\cite{Dupont08}.
Let $\clusterfont{Q}$ be a quiver on $[m]$. Let $G$ be a finite group acting on the set $[m]$. The notation $i \sim i'$ will mean that $i$ and $i'$ lie in the same $G$-orbit. To study folding of cluster algebras, we prepare some terminology. We denote by $\clusterfont{\tilde{B}} = \clusterfont{\tilde{B}}(\clusterfont{Q})$ the submatrix $(b_{i,j})_{1\le i\le m, 1 \le j \le n}$ of the adjacency matrix $(b_{i,j})_{1 \le i,j\le m}$ of the quiver~$\clusterfont{Q}$. Also, we denote by $\clusterfont{B} = \clusterfont{B}(\clusterfont{Q})$ the principal part of $\clusterfont{\tilde{B}}(\clusterfont{Q})$. For each $g \in G$, let $\clusterfont{Q}' = g \cdot \clusterfont{Q}$ be the quiver such that $\clusterfont{\tilde{B}}(\clusterfont{Q}') = (b_{i,j}')$ is given by
\[
b_{i,j}' = b_{g(i),g(j)}.
\]
\begin{definition}[{cf.~\cite[\S4.4]{FWZ_chapter45} and~\cite[\S 3]{Dupont08}}]\label{definition:admissible quiver}
Let $\clusterfont{Q}$ be a quiver on $[m]$ and $G$ a finite group acting on the set~$[m]$.
\begin{enumerate}
\item A quiver $\clusterfont{Q}$ is \emph{$G$-invariant} if $g \cdot \clusterfont{Q} = \clusterfont{Q}$ for any $g \in G$.
\item A $G$-invariant quiver $\clusterfont{Q}$ is \emph{$G$-admissible} if \label{admissible}
\begin{enumerate}
\item for any $i \sim i'$, index $i$ is mutable if and only if so is $i'$; \label{mutable}
\item for mutable indices $i \sim i'$, we have $b_{i,i'} = 0$; \label{bii'=0}
\item for any $i \sim i'$ and any mutable $j$, we have $b_{i,j} b_{i',j} \geq 0$.\label{nonnegativity_of_bijbi'j}
\end{enumerate}
\item For a $G$-admissible quiver $\clusterfont{Q}$, we call a $G$-orbit \emph{mutable} (respectively, \emph{frozen}) if it consists of mutable (respectively, frozen) vertices.
\end{enumerate}
\end{definition}
For a $G$-admissible quiver $\clusterfont{Q}$, we define the matrix $\clusterfont{\tilde{B}}^G = \clusterfont{\tilde{B}}(\clusterfont{Q})^G = (b_{I,J}^G)$ whose rows (respectively, columns) are labeled by the $G$-orbits (respectively, mutable $G$-orbits) by
\[
b_{I,J}^G = \sum_{i \in I} b_{i,j},
\]
where $j$ is an arbitrary index in $J$. We then say $\clusterfont{\tilde{B}}^G$ is obtained from $\clusterfont{\tilde{B}}$ (or from the quiver $\clusterfont{Q}$) by \textit{folding} with respect to the given $G$-action.
\begin{remark}
We note that the $G$-admissibility and the folding can also be defined for exchange matrices.
\end{remark}
\begin{example}\label{example_D4_to_G2}
Let $\clusterfont{Q}$ be a quiver of type $\dynkinfont{D}_4$ given as follows.
\[
\begin{tikzpicture}[node distance=0.7cm]
\tikzstyle{state}=[draw, circle, inner sep = 0.07cm]
\tikzset{every node/.style={scale=0.7}}
\tikzstyle{double line} = [ double distance = 1.5pt, double=\pgfkeysvalueof{/tikz/commutative diagrams/background color} ]
\tikzstyle{triple line} = [ double distance = 2pt, double=\pgfkeysvalueof{/tikz/commutative diagrams/background color} ]
\node[state, label=left:{$1$}] (1) {};
\node[state, label =right:{$2$}] (2) [above right = 0.4cm and 0.7cm of 1] {};
\node[state, label=right:{$3$}] (3) [right = 0.7cm of 1] {};
\node[state, label= right:{$4$}] (4) [below right = 0.4cm and 0.7cm of 1] {};
\draw (3)--(1)--(4) (1)--(2);
\node[label={below:\normalsize{$\rightsquigarrow$}}] [above right = 0.1cm and 1.5cm of 3] {};
\node[ynode] at (1) {};
\node[gnode] at (2) {};
\node[gnode] at (3) {};
\node[gnode] at (4) {};
\draw[<-] (1)--(2);
\draw[<-] (1)--(3);
\draw[<-] (1)--(4);
\end{tikzpicture}
\qquad
\text{\raisebox{1.5em}{$\clusterfont{\tilde{B}}(\clusterfont{Q}) = \begin{pmatrix} 0 & -1 & -1 &-1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$}}
\]
The finite group $G = \Z / 3 \Z$ acts on $[4]$ by sending $2 \mapsto 3 \mapsto 4 \mapsto 2$ and $1 \mapsto 1$. Here, we decorate vertices of the quiver $\clusterfont{Q}$ with \colorbox{cyclecolor2!50!}{green} and \colorbox{cyclecolor1!50!}{orange} colors to indicate sources and sinks, respectively. One may check that the quiver $\clusterfont{Q}$ is $G$-admissible. By setting $I_1 = \{1\}$ and $I_2 = \{ 2,3,4\}$, we obtain
\[
\begin{split}
b_{I_1,I_2}^G &= \sum_{i \in I_1} b_{i,2} = b_{1,2} = -1, \\
b_{I_2,I_1}^G &= \sum_{i \in I_2} b_{i,1} = b_{2,1} + b_{3,1} + b_{4,1} = 3.
\end{split}
\]
Accordingly, we obtain the matrix $\clusterfont{\tilde{B}}^G = \begin{pmatrix} 0 & -1 \\ 3 & 0 \end{pmatrix}$ whose Cartan counterpart is the Cartan matrix of type $\dynkinfont{G}_2$ (cf.~\eqref{eq_Cartan_G2}).
\end{example}
For a $G$-admissible quiver $\clusterfont{Q}$ and a mutable $G$-orbit $I$, we consider a composition of mutations given by
\[
\mu_I = \prod_{i \in I} \mu_i,
\]
which is well-defined because of the definition of admissible quivers (cf. Remark~\ref{rmk_mutation_commutes}). If $\mu_I(\clusterfont{Q})$ is again $G$-admissible, then we have
\begin{equation*}
(\mu_I(\clusterfont{\tilde{B}}))^G = \mu_I(\clusterfont{\tilde{B}}^G).
\end{equation*}
We notice that the quiver $\mu_I(\clusterfont{Q})$ is \textit{not} $G$-admissible in general. Therefore, we present the following definition.
\begin{definition}
Let $G$ be a group acting on the vertex set of a quiver $\clusterfont{Q}$.
This motivates the following definition. \begin{definition} Let $G$ be a group acting on the vertex set of a quiver $\clusterfont{Q}$. We say that $\clusterfont{Q}$ is \emph{globally foldable} with respect to $G$ if $\clusterfont{Q}$ is $G$-admissible and moreover for any sequence of mutable $G$-orbits $I_1,\dots,I_\ell$, the quiver $(\mu_{I_\ell} \dots \mu_{I_1})(\clusterfont{Q})$ is $G$-admissible. \end{definition} For a globally foldable quiver, we can fold all the seeds in the corresponding seed pattern. Let $\mathbb{F}^G$ be the field of rational functions in $\# ([m]/G)$ independent variables. Let $\psi \colon \mathbb{F} \to \mathbb{F}^G$ be a surjective homomorphism. A seed $(\mathbf{x}, \clusterfont{\tilde{B}})$ or a $Y$-seed $(\bfy, \clusterfont{B})$ is called \emph{$(G, \psi)$-invariant} or \emph{$(G, \psi)$-admissible} if \begin{itemize} \item the underlying quiver $\clusterfont{Q}$ is a $G$-invariant or $G$-admissible quiver, respectively; \item for any $i \sim i'$, we have $\psi(x_i) = \psi(x_{i'})$ or $\psi(y_i) = \psi(y_{i'})$. \end{itemize} In this situation, we define a new ``folded'' seed $(\bfx,\clusterfont{\tilde{B}})^G = (\bfx^G, \clusterfont{\tilde{B}}^G)$ and $Y$-seed $(\bfy,\clusterfont{B})^G=(\bfy^G, \clusterfont{B}^G)$ in $\mathbb{F}^G$ whose exchange matrix is given as before and whose cluster variables $\bfx^G = (x_I)$ and $\bfy^G=(y_I)$ are indexed by the $G$-orbits and given by $x_I = \psi(x_i)$ and $y_I=\psi(y_i)$ for an arbitrary $i \in I$. We notice that for a $(G,\psi)$-admissible seed $(\bfx, \clusterfont{\tilde{B}})$ or a $(G,\psi)$-admissible $Y$-seed $(\bfy, \clusterfont{B})$, the folding process is equivariant under the orbit-wise mutation, that is, for any mutable $G$-orbit~$I$, we have \[ (\mu_I(\bfx,\clusterfont{\tilde{B}}))^G = \mu_{I}((\bfx,\clusterfont{\tilde{B}})^G) \quad \text{ and } \quad (\mu_I(\bfy,\clusterfont{B}))^G = \mu_{I}((\bfy,\clusterfont{B})^G). \] \begin{proposition}[{cf.~\cite[Corollary~4.4.11]{FWZ_chapter45}}]\label{proposition:folded cluster pattern} Let $\clusterfont{Q}$ be a quiver which is globally foldable with respect to a group $G$ acting on the set of its vertices. Let $(\mathbf{x}, \clusterfont{\tilde{B}})$ and $(\bfy, \clusterfont{B})$ be a seed and a $Y$-seed in the field $\mathbb{F}$ of rational functions freely generated by $\mathbf{x} = (x_1,\dots,x_m)$. Then we have the following. \begin{enumerate} \item Define $\psi \colon \mathbb{F} \to \mathbb{F}^G$ so that $(\bfx, \clusterfont{\tilde{B}})$ is a $(G, \psi)$-admissible seed. Then, for any mutable $G$-orbits $I_1,\dots,I_\ell$, the seed $(\mu_{I_\ell} \cdots \mu_{I_1})(\bfx,\clusterfont{\tilde{B}})$ is $(G, \psi)$-admissible, and moreover the folded seeds $((\mu_{I_\ell} \dots \mu_{I_1})(\bfx,\clusterfont{\tilde{B}}))^G$ form a seed pattern in $\mathbb{F}^G$ with the initial seed $(\bfx,\clusterfont{\tilde{B}})^G=(\bfx^G, \clusterfont{\tilde{B}}^G)$. \item Define $\psi \colon \mathbb{F} \to \mathbb{F}^G$ so that $(\bfy, \clusterfont{B})$ is a $(G, \psi)$-admissible $Y$-seed. Then, for any mutable $G$-orbits $I_1,\dots,I_\ell$, the $Y$-seed $(\mu_{I_\ell} \dots \mu_{I_1})(\bfy,\clusterfont{B})$ is $(G, \psi)$-admissible, and moreover the folded $Y$-seeds $((\mu_{I_\ell} \cdots \mu_{I_1})(\bfy,\clusterfont{B}))^G$ form a $Y$-pattern in $\mathbb{F}^G$ with the initial seed $(\bfy,\clusterfont{B})^G=(\bfy^G, \clusterfont{B}^G)$. \end{enumerate} \end{proposition} \begin{example}\label{example_folding_ADE} The quiver in Example~\ref{example_D4_to_G2} is globally foldable, and moreover the corresponding seed pattern is of type $\dynkinfont{G}_2$.
In fact, seed patterns of type~$\dynkinfont{BCFG}$ are obtained by folding quivers of type~$\dynkinfont{ADE}$; seed patterns of type~$\widetilde{\dynB}\widetilde{\dynC}\widetilde{\dynF}\widetilde{\dynG}$ are obtained by folding quivers of type~$\widetilde{\dynD}\widetilde{\dynE}$ (cf.~\cite{FeliksonShapiroTumarkin12_unfoldings}). In Figures~\ref{fig_folding} and~\ref{figure:G-actions}, we present the corresponding quivers of type~$\dynkinfont{ADE}$ and of type~$\widetilde{\dynD}\widetilde{\dynE}$. We decorate vertices of quivers with \colorbox{cyclecolor2!50!}{green} and \colorbox{cyclecolor1!50!}{orange} colors to indicate sources and sinks, respectively. As one may see, the edges of the Dynkin diagram have to be oriented alternately. The alternating colorings on quivers of type $\dynkinfont{ADE}$ induce those on quivers of type $\dynkinfont{BCFG}$ displayed in the right column of Figure~\ref{fig_folding}. Foldings between simply-laced and non-simply-laced finite and affine Dynkin diagrams are given in Table~\ref{figure:all possible foldings}. \end{example} \begin{figure} \begin{tikzpicture}[node distance=0.7cm] \tikzset{every node/.style={scale=0.7}} \begin{scope}[xshift=-1.5cm, yshift=-0.5cm] \node[color=white] {\textcolor{black}{{\Large $\dynkinfont{A}_{2n-1} \rightsquigarrow \dynkinfont{B}_n$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-2.5cm] \node[color=white] {\textcolor{black}{{\Large $\dynkinfont{D}_{n+1} \rightsquigarrow \dynkinfont{C}_n$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-4.5cm] \node[color=white] {\textcolor{black}{{\Large $\dynkinfont{E}_6 \rightsquigarrow \dynkinfont{F}_4$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-6.5cm] \node[color=white] {\textcolor{black}{{\Large $\dynkinfont{D}_{4} \rightsquigarrow \dynkinfont{G}_2$}}}; \end{scope} \begin{scope} \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [below right= 0.4cm and 0.7cm of 4] {}; \node[Dnode] (6) [below left = 0.4cm and 0.7cm of 5] {}; \node[Dnode] (7) [left = of 6] {}; \node[Dnode] (8) [left = of 7] {}; \node[Dnode] (9) [left = of 8] {}; \foreach \y in {3, 5, 7} { \node[ynode] at (\y) {}; } \foreach \g in {4,6} { \node[gnode] at (\g) {}; } \draw (1)--(2) (3)--(4)--(5)--(6)--(7) (8)--(9); \draw[dotted] (2)--(3) (7)--(8); \end{scope} \begin{scope}[xshift = 6cm, yshift = -0.5cm] \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \foreach \y in {3, 5} { \node[ynode] at (\y) {}; } \foreach \g in {4} { \node[gnode] at (\g) {}; } \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (4)--(5); \node[label={below:\normalsize{$\rightsquigarrow$}}] [above left = 0.1cm and 1cm of 1] {}; \end{scope} \begin{scope}[yshift= -2.5cm] \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [above right= 0.4cm and .7cm of 4] {}; \node[Dnode] (6) [below right= 0.4cm and 0.7cm of 4] {}; \foreach \y in {4} { \node[ynode] at (\y) {}; } \foreach \g in {3,5,6} { \node[gnode] at (\g) {}; } \draw(1)--(2) (3)--(4)--(5) (4)--(6); \draw[dotted] (2)--(3); \end{scope} \begin{scope}[xshift = 6cm, yshift=-2.5cm] \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \node[Dnode] (5) [right =of 4] {}; \foreach \y in {4} { \node[ynode] at (\y) {}; } \foreach \g
in {3,5} { \node[gnode] at (\g) {}; } \draw (1)--(2) (3)--(4); \draw [dotted] (2)--(3); \draw[double line] (5)--(4); \node[label={below:\normalsize{$\rightsquigarrow$}}] [above left = 0.1cm and 1cm of 1] {}; \end{scope} \begin{scope}[xshift=3.5cm, yshift=-4.5cm] \node[Dnode] (2) {}; \node[Dnode] (4) [left = of 2] {}; \node[Dnode] (3) [above left = 0.4cm and 0.7cm of 4] {}; \node[Dnode] (1) [left = of 3] {}; \node[Dnode] (5) [below left = 0.4cm and 0.7cm of 4] {}; \node[Dnode] (6) [left = of 5] {}; \foreach \y in {1,4,6} { \node[ynode] at (\y) {}; } \foreach \g in {2,3,5} { \node[gnode] at (\g) {}; } \draw(1)--(3)--(4)--(5)--(6) (2)--(4); \end{scope} \begin{scope}[xshift = 6cm, yshift = -4.5cm] \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \node[Dnode] (3) [right = of 2] {}; \node[Dnode] (4) [right =of 3] {}; \foreach \y in {1,3} { \node[ynode] at (\y) {}; } \foreach \g in {2,4} { \node[gnode] at (\g) {}; } \draw (1)--(2) (3)--(4); \draw[double line] (2)--(3); \node[label={below:\normalsize{$\rightsquigarrow$}}] [above left = 0.1cm and 1cm of 1] {}; \end{scope} \begin{scope}[xshift=2cm, yshift=-6.5cm] \node[Dnode] (1) at (0,0) {}; \node[Dnode] (2) at (60:1) {}; \node[Dnode] (3) at (180:1) {}; \node[Dnode] (4) at (300:1) {}; \foreach \y in {1} { \node[ynode] at (\y) {}; } \foreach \g in {2,3,4} { \node[gnode] at (\g) {}; } \draw (3)--(1)--(4) (1)--(2); \draw[->, dashed, thin] (70:1) arc (70:170:1) node[midway, above left] {}; \draw[->, dashed, thin] (190:1) arc (190:290:1) node[midway, below left] {}; \draw[->, dashed, thin] (310:1) arc (310:410:1) node[midway, right] {}; \end{scope} \begin{scope}[xshift = 6cm, yshift = -6.5cm] \node[Dnode] (1) {}; \node[Dnode] (2) [right = of 1] {}; \draw[triple line] (2)--(1); \draw (1)--(2); \foreach \y in {1} { \node[ynode] at (\y) {}; } \foreach \g in {2} { \node[gnode] at (\g) {}; } \node[label={below:\normalsize{$\rightsquigarrow$}}] [above left = 0.1cm and 1cm of 1] {}; \end{scope} \end{tikzpicture} \caption{Foldings in Dynkin diagrams of finite type (for seed patterns)}\label{fig_folding} \end{figure} \begin{figure} \begin{tikzpicture} \def2.5{2.5} \def7{7} \def7.7{7.7} \tikzset{every node/.style={scale=0.7}} \begin{scope}[xshift=-1.5cm, yshift= 0 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynD}_4 \rightsquigarrow \widetilde{\dynC}_2$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift= - 2.5 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynD}_4 \rightsquigarrow \dynkinfont{A}_5^{(2)}$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-2*2.5 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynD}_{2n} \rightsquigarrow \widetilde{\dynB}_n$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-3.5*2.5 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynE}_6 \rightsquigarrow \widetilde{\dynG}_2$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-5*2.5 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynE}_6 \rightsquigarrow \dynkinfont{E}_6^{(2)}$}}}; \end{scope} \begin{scope}[xshift=-1.5cm, yshift=-6*2.5 cm] \node[color=white] {\textcolor{black}{{\Large $\widetilde{\dynE}_7 \rightsquigarrow \widetilde{\dynF}_4$}}}; \end{scope} \foreach \y in {0,1,2,3.5,5,6}{ \node at (6,-\y * 2.5) {\normalsize{$\rightsquigarrow$}} ; } \begin{scope}[xshift=1cm, yshift= 0cm] \node[Dnode, ynode] (1) at (0,0) {}; \node[Dnode, gnode] (2) at (45:1) {}; \node[Dnode, gnode] (3) at (135:1) {}; \node[Dnode, gnode] (4) at (225:1) {}; \node[Dnode, gnode] (5) at (315:1) {}; \draw (1)--(2) 
(1)--(3) (1)--(4) (1)--(5); \draw[<->, dashed, thin] (-35:1) arc (-35:35:1) ; \draw[<->, dashed, thin] (145:1) arc (145:215:1) ; \end{scope} \begin{scope}[xshift= 7 cm, yshift=0 cm] \node[Dnode, gnode] (1) {}; \node[Dnode, ynode] (2) [right = of 1] {}; \node[Dnode, gnode] (3) [right = of 2] {}; \draw[double line] (1)--(2); \draw[double line] (3)--(2); \end{scope} \begin{scope}[xshift=1cm, yshift= - 2.5 cm] \node[Dnode, ynode] (1) at (0,0) {}; \node[Dnode, gnode] (2) at (45:1) {}; \node[Dnode, gnode] (3) at (135:1) {}; \node[Dnode, gnode] (4) at (225:1) {}; \node[Dnode, gnode] (5) at (315:1) {}; \draw (1)--(2) (1)--(3) (1)--(4) (1)--(5); \draw[<->, dashed, thin] (-35:1) arc (-35:35:1) ; \end{scope} \begin{scope}[xshift= 7.7 cm, yshift= - 2.5 cm] \node[Dnode, ynode] (1) at (0,0) {}; \node[Dnode, gnode] (2) at (135:1) {}; \node[Dnode, gnode] (3) at (225:1) {}; \node[Dnode, gnode] (4) at (0:1) {}; \draw (1)--(2) (1)--(3); \draw[double line] (4)--(1); \end{scope} \begin{scope}[xshift=1cm, yshift= - 2*2.5 cm] \node[Dnode, ynode] (1) at (0,0) {}; \node[Dnode, gnode] (2) at (135:1) {}; \node[Dnode, gnode] (3) at (225:1) {}; \node[Dnode, gnode] (4) at (1,0) {}; \node[Dnode, gnode] (5) at (2,0) {}; \node[Dnode, ynode] (6) at (3,0) {}; \node[Dnode, gnode] (7) at ($(3,0) + (45:1)$) {}; \node[Dnode, gnode] (8) at ($(3,0) + (-45:1)$) {}; \draw (1)--(2) (1)--(3) (1)--(4) (5)--(6) (6)--(7) (6)--(8); \draw[dotted] (4)--(5); \draw[->, dashed, thin] ($(1.5,0)+(10:1)$) arc (10:170:1) ; \draw[->, dashed, thin] ($(1.5,0)+(190:1)$) arc (190:350:1) ; \end{scope} \begin{scope}[xshift= 7.7 cm, yshift= - 2*2.5 cm] \node[Dnode, ynode] (1) at (0,0) {}; \node[Dnode, gnode] (2) at (135:1) {}; \node[Dnode, gnode] (3) at (225:1) {}; \node[Dnode, gnode] (4) at (1,0) {}; \node[Dnode] (5) at (2,0) {}; \node[Dnode] (6) at (3,0) {}; \draw (1)--(2) (1)--(3) (1)--(4); \draw[dotted] (4)--(5); \draw[double line] (5)--(6); \end{scope} \begin{scope}[xshift= 2cm, yshift= -3.5*2.5 cm] \node[ynode] (A1) at (0,0) {}; \node[ynode] (A3) at (60:2) {}; \node[ynode] (A5) at (180:2) {}; \node[ynode] (A7) at (300:2) {}; \node[gnode] (A2) at (60:1) {}; \node[gnode] (A4) at (180:1) {}; \node[gnode] (A6) at (300:1) {}; \foreach \x in {1,...,7}{ \node[Dnode] at (A\x) {}; } \draw (A1) node[above left] {} -- (A2) node[above left] {}; \draw (A1) -- (A4) node[above left] {}; \draw (A1) -- (A6) node[right] {}; \draw (A3) node[above left] {} -- (A2); \draw (A5) node[above left] {} -- (A4); \draw (A7) node[right] {} -- (A6); \draw[->, dashed, thin] (70:1) arc (70:170:1) node[midway, above left] {}; \draw[->, dashed, thin] (65:2) arc (65:175:2) node[midway, above left] {}; \draw[->, dashed, thin] (190:1) arc (190:290:1) node[midway, below left] {}; \draw[->, dashed, thin] (185:2) arc (185:295:2) node[midway, below left] {}; \draw[->, dashed, thin] (310:1) arc (310:410:1) node[midway, right] {}; \draw[->, dashed, thin] (305:2) arc (305:415:2) node[midway, right] {}; \end{scope} \begin{scope}[xshift= 7 cm, yshift= -3.5*2.5 cm] \node[ynode, label=below:{}] (1) {}; \node[gnode, label=below:{}] (2) [right = of 1] {}; \node[ynode, label=below:{}] (3) [right=of 2] {}; \foreach \x in {1,...,3}{ \node[Dnode] at (\x) {}; } \draw[triple line] (2)--(1); \draw (1)--(2); \draw (2)--(3); \end{scope} \begin{scope}[xshift=0 cm, yshift=-5*2.5 cm] \node[Dnode, ynode] (1) {}; \node[Dnode, gnode] (2) [right = of 1] {}; \node[Dnode, ynode] (3) [right = of 2] {}; \node[Dnode, gnode] (4) [above right = 0.4cm and 0.7cm of 3] {}; \node[Dnode, ynode] (5) [right = of 4] {}; 
\node[Dnode, gnode] (6) [below right = 0.4cm and 0.7cm of 3] {}; \node[Dnode, ynode] (7) [right = of 6] {}; \draw (1)--(2)--(3)--(4)--(5) (3)--(6)--(7); \end{scope} \begin{scope}[xshift=7 cm, yshift=-5*2.5 cm] \node[Dnode, ynode] (1) {}; \node[Dnode, gnode] (2) [right = of 1] {}; \node[Dnode, ynode] (3) [right = of 2] {}; \node[Dnode, gnode] (4) [right = of 3] {}; \node[Dnode, ynode] (5) [right = of 4] {}; \draw (1)--(2)--(3) (4)--(5); \draw[double line] (4)--(3); \end{scope} \begin{scope}[xshift=0cm, yshift=-6*2.5 cm] \node[Dnode, gnode] (2) {}; \node[Dnode, ynode] (3) [right = of 2] {}; \node[Dnode, gnode] (4) [above right = 0.4cm and 0.7cm of 3] {}; \node[Dnode, ynode] (5) [right = of 4] {}; \node[Dnode, gnode] (6) [right = of 5] {}; \node[Dnode, gnode] (7) [below right = 0.4cm and 0.7cm of 3] {}; \node[Dnode, ynode] (8) [right = of 7] {}; \node[Dnode, gnode] (9) [right = of 8] {}; \draw (2)--(3)--(4)--(5)--(6) (3)--(7)--(8)--(9); \end{scope} \begin{scope}[xshift=7 cm, yshift=-6*2.5 cm] \node[Dnode, gnode] (1) {}; \node[Dnode, ynode] (2) [right = of 1] {}; \node[Dnode, gnode] (3) [right = of 2] {}; \node[Dnode, ynode] (4) [right = of 3] {}; \node[Dnode, gnode] (5) [right = of 4] {}; \draw (1)--(2) (3)--(4)--(5); \draw[double line] (3)--(2); \end{scope} \end{tikzpicture} \caption{Foldings in Dynkin diagrams of affine type (for seed patterns)} \label{figure:G-actions} \end{figure} \newcolumntype{?}{!{\vrule width 0.6pt}} \begin{table}[ht] { \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c?c?c?c} \toprule $\dynkinfont{Z}$ & $\dynkinfont{A}_{2n-1}$ & $\dynkinfont{D}_{n+1}$ & $\dynkinfont{E}_6$ & $\dynkinfont{D}_4$ \\ \hline $G$ & $\Z/2\Z$& $\Z/2\Z$ & $\Z/2\Z$ & $\Z/3\Z$ \\ \hline ${\dynkinfont{Z}^G}$ & $\dynkinfont{B}_n$ & $\dynkinfont{C}_n$ & $\dynkinfont{F}_4$ & $\dynkinfont{G}_2$ \\ \bottomrule \end{tabular}} \medskip { \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c?c?c?c?c?c?c?c?c?c?c} \toprule $\dynkinfont{Z}$ & $\widetilde{\dynA}_{2,2}$ & $\widetilde{\dynA}_{n,n}$ & \multicolumn{2}{c?}{$\widetilde{\dynD}_4$ } & \multicolumn{2}{c?}{$\widetilde{\dynD}_n$ } & \multicolumn{2}{c?}{$\widetilde{\dynD}_{2n}$ } & \multicolumn{2}{c?}{$\widetilde{\dynE}_6$ } & $\widetilde{\dynE}_7$ \\ \hline $G$ & $\Z/2\Z$& $\Z/2\Z$ & $(\Z/2\Z)^2$ & $\Z/3\Z$ & $\Z/2\Z$ & $\Z/2\Z$ & $\Z/2\Z$ & $(\Z/2\Z)^2$ & $\Z/3\Z$ & $\Z/2\Z$ & $\Z/2\Z$ \\ \hline ${\dynkinfont{Z}^G}$ & $\widetilde{\dynA}_1$ & $\dynkinfont{D}_{n+1}^{(2)}$ & $\dynkinfont{A}_2^{(2)}$ & $\dynkinfont{D}_4^{(3)}$ & $\widetilde{\dynC}_{n-2}$ & $\dynkinfont{A}_{2(n-1)-1}^{(2)}$ & $\widetilde{\dynB}_n$ & $\dynkinfont{A}_{2n-2}^{(2)}$ & $\widetilde{\dynG}_2$ & $\dynkinfont{E}_6^{(2)}$ & $\widetilde{\dynF}_4$ \\ \bottomrule \end{tabular}} \caption{Foldings appearing in finite and affine Dynkin diagrams} \label{figure:all possible foldings} \end{table} For a quiver of type $\dynkinfont{ADE}$, one can prove that $G$-invariance is equivalent to $G$-admissibility as follows: \begin{theorem}\label{theorem:G-invariance and G-admissibility} Let $\clusterfont{Q}$ be a quiver of type $\dynkinfont{ADE}$, which is invariant under the $G$-action given by Figure~\ref{fig_folding}. Then $\clusterfont{Q}$ is $G$-admissible. \end{theorem} The proof of Theorem~\ref{theorem:G-invariance and G-admissibility} is given in Appendix~\ref{section:invariance and admissibility}.
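Theorem~\ref{theorem:G-invariance and G-admissibility} and the global foldability of the quiver of Example~\ref{example_D4_to_G2} can also be probed experimentally: orbit mutations applied to the corresponding exchange matrix should never leave the class of $G$-admissible matrices. A minimal randomized check, assuming \texttt{numpy} (the helpers repeat those of the earlier sketch):
\begin{verbatim}
import random
import numpy as np

def mutate(B, k):
    # standard matrix mutation, as in the earlier sketch
    C = B.copy()
    for i in range(len(B)):
        for j in range(len(B)):
            C[i, j] = (-B[i, j] if k in (i, j) else
                       B[i, j] + np.sign(B[i, k]) * max(B[i, k] * B[k, j], 0))
    return C

def is_admissible(B, orbits):
    # conditions (b) and (c); all vertices are mutable here
    return all(B[i, i2] == 0 and all(B[i, j] * B[i2, j] >= 0
                                     for j in range(len(B)))
               for I in orbits for i in I for i2 in I if i != i2)

B = np.array([[0, -1, -1, -1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
orbits = [[0], [1, 2, 3]]
random.seed(0)
for _ in range(1000):
    for k in random.choice(orbits):   # one random orbit mutation
        B = mutate(B, k)
    assert is_admissible(B, orbits)
print("G-admissibility preserved along 1000 random orbit mutations")
\end{verbatim}
For this bipartite orientation the exchange matrix simply alternates between $\pm\clusterfont{B}$ under orbit mutations, so the matrix-level check is unsurprising; the clusters attached to the seeds, however, keep changing.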
As we saw in Definition~\ref{definition:admissible quiver}, if a seed $\Sigma = (\mathbf{x}, \clusterfont{Q})$ is $(G,\psi)$-admissible, then $\Sigma$ is $(G,\psi)$-invariant. The converse holds when we consider the foldings presented in Table~\ref{figure:all possible foldings}, and moreover such seeds form the folded cluster pattern. \begin{theorem}[{\cite{AL2021}}]\label{thm_invariant_seeds_form_folded_pattern} Let $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$ be a triple given by a column of Table~\ref{figure:all possible foldings}. Let $\Sigma_{t_0} = (\mathbf{x}_{t_0},\clusterfont{Q}_{t_0})$ be a seed in the field $\mathbb{F}$. Suppose that $\clusterfont{Q}_{t_0}$ is of type $\dynkinfont{Z}$. Define $\psi \colon \mathbb{F} \to \mathbb{F}^G$ so that $\Sigma_{t_0}$ is a $(G, \psi)$-admissible seed. Then, for any seed $\Sigma = (\mathbf{x}, \clusterfont{Q})$ in the cluster pattern, if the quiver $\clusterfont{Q}$ is $G$-invariant, then it is $G$-admissible. Moreover, any $(G,\psi)$-invariant seed $\Sigma = (\mathbf{x}, \clusterfont{Q})$ can be reached from the initial seed by a sequence of orbit mutations. Indeed, the set of such seeds forms the cluster pattern of the `folded' cluster algebra $\cA(\Sigma_{t_0}^G)$ of type ${\dynkinfont{Z}^G}$. \end{theorem} \subsection{Combinatorics of exchange graphs} \label{sec_comb_of_exchange_graphs} The \emph{exchange graph} of a cluster pattern or a $Y$-pattern is the $n$-regular (finite or infinite) connected graph whose vertices are the seeds of the pattern and whose edges connect the seeds related by a single mutation. In this section, we recall the combinatorics of exchange graphs which will be used later. For more details, we refer the reader to~\cite{FZ2_2003, FZ_Ysystem03, FZ4_2007}. \begin{definition}[Exchange graphs] Exchange graphs for seed patterns or $Y$-patterns are defined as follows. \begin{enumerate} \item The \emph{exchange graph} $\exchange(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n})$ of the cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}$ is a quotient of the tree $\mathbb{T}_n$ modulo the equivalence relation on vertices defined by setting $t \sim t'$ if and only if $(\bfx_t, \clusterfont{\tilde{B}}_t) \sim (\bfx_{t'},\clusterfont{\tilde{B}}_{t'})$. \item The \emph{exchange graph} $\exchange(\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n})$ of the $Y$-pattern $\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n}$ is a quotient of the tree $\mathbb{T}_n$ modulo the equivalence relation on vertices defined by setting $t \sim t'$ if and only if $(\bfy_t, \clusterfont{B}_t) \sim (\bfy_{t'},\clusterfont{B}_{t'})$. \end{enumerate} \end{definition} For example, the exchange graph in Example~\ref{example_A2_example} is a cycle graph with~$5$~vertices. As we have already seen in Theorem~\ref{thm_FZ_finite_type}, cluster algebras of finite type are classified by Cartan matrices of finite type. Moreover, for a cluster algebra of finite or affine type, the exchange graph depends only on the exchange matrix (see Theorem~\ref{thm_exchange_graph_Dynkin}).
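The five vertices in the type $\dynkinfont{A}_2$ example can be recovered by a brute-force traversal of the exchange graph. The following sketch (in Python, assuming \texttt{sympy} for the rational-function arithmetic; the encoding of seeds is our own, and we use the fact that in finite type a seed is determined by its cluster) starts from the initial seed with trivial coefficients and mutates until no new clusters appear.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def mutate(xs, B, k):
    """Seed mutation: x_k is exchanged via
    x_k * x_k' = prod x_i^[b_ik]_+ + prod x_i^[-b_ik]_+ ."""
    n = len(xs)
    plus = sp.Mul(*[xs[i] ** max(B[i][k], 0) for i in range(n)])
    minus = sp.Mul(*[xs[i] ** max(-B[i][k], 0) for i in range(n)])
    xs2 = list(xs)
    xs2[k] = sp.cancel((plus + minus) / xs[k])
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    return xs2, B2

B0 = [[0, 1], [-1, 0]]                  # exchange matrix of type A2
frontier = [((x1, x2), B0)]
clusters = {frozenset((x1, x2))}        # clusters found so far
while frontier:
    (xs, B), frontier = frontier[0], frontier[1:]
    for k in (0, 1):
        xs2, B2 = mutate(xs, B, k)
        if frozenset(xs2) not in clusters:
            clusters.add(frozenset(xs2))
            frontier.append((tuple(xs2), B2))
print(len(clusters))                    # 5 seeds
print(len(set().union(*clusters)))      # 5 cluster variables
\end{verbatim}
Both counts agree with the $\dynkinfont{A}_2$ column of Table~\ref{table_seeds_and_cluster_variables}.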
To explain the observation above, we fix some terminology. For $\Sigma_{t_0} = (\mathbf x_{t_0}, \clusterfont{\tilde{B}}_{t_0})$, the cluster algebra $\cA(\Sigma_{t_0})$ is said to have \emph{principal coefficients} if the exchange matrix $\clusterfont{\tilde{B}}_{t_0}$ is a $(2n \times n)$-matrix of the form $\begin{pmatrix} \clusterfont{B}_{t_0} \\ \mathcal{I}_n \end{pmatrix}$, and to have \emph{trivial coefficients} if $\clusterfont{\tilde{B}}_{t_0}=\clusterfont{B}_{t_0}$. Here, $\mathcal{I}_n$ is the identity matrix of size~$n \times n$. We recall the following result on the combinatorics of exchange graphs. \begin{theorem}[{\cite[Theorem~4.6]{FZ4_2007}}]\label{thm_exchange_graph_covering} The exchange graph of an arbitrary cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}$ is covered by the exchange graph of the cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t')\}_{t \in \mathbb T_n}$ which has principal coefficients and whose exchange matrices have the same principal parts. \end{theorem} One direct consequence is that the exchange graph of the cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}$ having trivial coefficients is covered by the exchange graph of any cluster pattern whose initial exchange matrix has the principal part $\clusterfont{B}_{t_0}$. Therefore, for a fixed principal part of the exchange matrix, the cluster pattern having principal coefficients has the largest exchange graph while that having trivial coefficients has the smallest one (see~\cite[Section~4]{FZ4_2007}). However, it is unknown in general whether the largest exchange graph can be strictly larger than the smallest one. Indeed, it is conjectured in \cite[Conjecture~4.3]{FZ4_2007} that the exchange graph of a cluster pattern is determined by the initial principal part $\clusterfont{B}_{t_0}$ only. The conjecture is confirmed for cluster patterns of finite type~\cite{FZ2_2003} and for exchange matrices coming from quivers~\cite{IKLP13}. We furthermore extend this result to cluster algebras whose initial exchange matrices are of affine type. \begin{theorem}[{cf. \cite[Theorem~1.13]{FZ2_2003} and \cite[Theorem~4.6]{IKLP13}}]\label{thm_exchange_graph_Dynkin} Let $\Sigma_{t_0} = (\mathbf x_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ be an initial seed. If the principal part~$\clusterfont{B}_{t_0}$ of $\clusterfont{\tilde{B}}_{t_0}$ is \emph{of finite or affine type}, then the exchange graph of the cluster pattern $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}$ only depends on $\clusterfont{B}_{t_0}$. \end{theorem} \begin{proof} We first notice that the statement holds if the principal part $\clusterfont{B}_{t_0}$ is of finite type~\cite[Theorem~1.13]{FZ2_2003} or if the exchange matrices are obtained from quivers~\cite[Theorem~4.6]{IKLP13}. Hence it is enough to consider the case when the principal part is of \emph{non-simply-laced affine type}. Let $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$ be a triple given by a column of Table~\ref{figure:all possible foldings}. Let $\clusterfont{Q}(\dynkinfont{Z})$ be the quiver of type $\dynkinfont{Z}$ and $\clusterfont{B}(\dynkinfont{Z})=\clusterfont{B}(\clusterfont{Q}(\dynkinfont{Z}))$ be the adjacency matrix of $\clusterfont{Q}(\dynkinfont{Z})$, which is a square matrix of size $n$. Let $\clusterfont{\tilde{B}}(\dynkinfont{Z}) = \begin{pmatrix} \clusterfont{B}(\dynkinfont{Z})\\ \mathcal{I}_n \end{pmatrix}$ be the $(2n\times n)$ matrix having principal coefficients whose principal part is $\clusterfont{B}(\dynkinfont{Z})$.
On the other hand, we consider a quiver $\overline{\clusterfont{Q}}(\dynkinfont{Z})$ obtained by adding $n^G \colonequals \#([n]/G)$ frozen vertices and arrows. Here, each frozen vertex is indexed by a $G$-orbit and we draw an arrow from the frozen vertex to each mutable vertex in the corresponding $G$-orbit. For algebraically independent elements $\bfx=(x_1,\dots, x_n)$, $\overline{\bfx} = (x_1,\dots,x_n,x_{n+1},\dots,x_{n+n^G})$, and $\tilde\bfx=(x_1,\dots, x_n, x_{n+1},\dots, x_{2n})$ in $\mathbb{F}$, we obtain seeds \[ \tilde{\Sigma}_{t_0} = (\tilde\bfx, \clusterfont{\tilde{B}}(\dynkinfont{Z})), \quad \overline{\Sigma}_{t_0} = (\overline\bfx, \clusterfont{B}(\overline\clusterfont{Q}(\dynkinfont{Z}))), \quad\text{ and }\quad \Sigma_{t_0} = (\bfx,\clusterfont{B}(\dynkinfont{Z})). \] Since the exchange matrices come from quivers, the exchange graphs given by the seeds $\tilde{\Sigma}_{t_0}, \overline{\Sigma}_{t_0}, \Sigma_{t_0}$ are isomorphic. Indeed, we have \begin{equation}\label{equation_exchange_graphs_are_the_same} \{ \tilde{\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim \;\; = \{ \overline{\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim \;\; = \{ {\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim. \end{equation} Extending the action of $G$ on the quiver $\clusterfont{Q}(\dynkinfont{Z})$ to $\overline\clusterfont{Q}(\dynkinfont{Z})$ so that $G$ acts trivially on the frozen vertices, the quiver $\overline\clusterfont{Q}(\dynkinfont{Z})$ becomes a globally foldable quiver with respect to $G$ (see~\cite[Lemma~5.5.3]{FWZ_chapter45}). Moreover, via $\psi \colon \mathbb{F} \to \mathbb{F}^G$, the folded seed $\overline{\Sigma}_{t_0}^G = (\overline\bfx, \overline\clusterfont{Q}(\dynkinfont{Z}))^G$ produces the principal coefficient cluster algebra of type ${\dynkinfont{Z}^G}$. This produces the following diagram. \[ \begin{tikzcd} \{ \tilde{\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim \arrow[r,twoheadrightarrow, "="] & \{ \overline{\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim \arrow[r, twoheadrightarrow,"="] & \{ {\Sigma}_{t}\}_{t \in \mathbb T_n}/\sim \\ & \{ \text{$(G,\psi)$-admissible seeds $\overline{\Sigma}_{t}$}\}/\sim \arrow[u, hookrightarrow] \arrow[r,rightarrowtail] \arrow[d,equal] &\{ \text{$(G,\psi)$-admissible seeds ${\Sigma}_{t}$}\}/\sim \arrow[u, hookrightarrow] \arrow[d,equal] \\ & \{ \overline{\Sigma}_t^G\}_{t \in \mathbb T_n}/\sim \arrow[r,twoheadrightarrow] & \{{\Sigma}_t^G\}_{t \in \mathbb T_n}/\sim \end{tikzcd} \] The equalities on the top row are obtained by~\eqref{equation_exchange_graphs_are_the_same}. The surjectivity in the bottom row is induced by the maximality of the exchange graph of a cluster algebra having principal coefficients in Theorem~\ref{thm_exchange_graph_covering}. Moreover, the equalities connecting the second and third rows are given by Theorem~\ref{thm_invariant_seeds_form_folded_pattern}. This proves the theorem. \end{proof} We recall from~\cite{CaoHuangLi20} the relation between a cluster pattern and a $Y$-pattern having the \emph{same} initial exchange matrix. \begin{proposition}[{\cite[Theorem~2.5]{CaoHuangLi20}}]\label{prop_Y-pattern_exchange_graph} Let $(\bfy_{t_0}, \clusterfont{B}_{t_0})$ be a $Y$-seed and let $\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb{T}_n}$ be the $Y$-pattern. Let $(\bfx_{t_0},\clusterfont{\tilde{B}}_{t_0})$ be a cluster seed such that the principal part of the exchange matrix $\clusterfont{\tilde{B}}_{t_0}$ is $\clusterfont{B}_{t_0}$ and let $\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}$ be the cluster pattern.
Suppose that the initial variables $y_{1;t_0},\dots,y_{n;t_0}$ are algebraically independent. Then, we have \[ \exchange(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb{T}_n}) = \exchange(\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb{T}_n}). \] \end{proposition} Because of Assumption~\ref{assumption_finite}, Theorem~\ref{thm_exchange_graph_Dynkin}, and Proposition~\ref{prop_Y-pattern_exchange_graph}, when the initial variables $y_{1;t_0},\dots,y_{n;t_0}$ are algebraically independent, all the following exchange graphs are the same. \[ \exchange(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}) = \exchange(\{(\bfx_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n}) = \exchange(\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n}). \] We simply denote the above exchange graphs with the associated root system $\Phi$ by \begin{equation}\label{eq_exchange_graphs_are_the_same} \exchange(\Phi) = \exchange(\{(\bfx_t, \clusterfont{\tilde{B}}_t)\}_{t \in \mathbb T_n}) = \exchange(\{(\bfx_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n}) = \exchange(\{(\bfy_t, \clusterfont{B}_t)\}_{t \in \mathbb T_n}). \end{equation} Since the exchange graph of a cluster pattern and that of a $Y$-pattern of the same type coincide, we will mainly treat exchange graphs of cluster patterns of finite or affine type from now on. Let $\Phi$ be the root system defined by the Cartan counterpart of $\clusterfont{B}$. It is proved in~\cite{FZ2_2003} and~\cite{ReadingStella20} that there is a bijective correspondence between a subset $\Roots_{\ge -1} \subset \Phi$, called the set of \emph{almost positive roots}, and the set of cluster variables. \begin{equation}\label{equation_bijective_vars_alpostRoots_facets} \Roots_{\ge -1} \stackrel{1:1}{\longleftrightarrow}\{\text{cluster variables in $\cA$ of type $\dynkinfont{Z}$}\} \end{equation} More precisely, one may associate the set $-\Pi$ of negative simple roots with the set of cluster variables $x_{1;t_0},\dots,x_{n;t_0}$ in the initial seed $(\bfx_{t_0},\clusterfont{\tilde{B}}_{t_0})$; a positive root $\sum_{i=1}^n d_i \alpha_i$ is associated to a (non-initial) cluster variable of the form \[ \frac{f(\bfx_{t_0})}{x_{1;t_0}^{d_1} \cdots x_{n;t_0}^{d_n}},\qquad f(\bfx_{t_0})\in\bbC[x_{1;t_0},\dots, x_{m;t_0}]. \] Accordingly, each vertex of the exchange graph $\exchange(\Phi)$ corresponds to an $n$-subset of $\Roots_{\ge -1}$. We notice that when $\Phi$ is of finite type, the set $\Roots_{\ge -1}$ is given by $\Roots_{\ge -1}\colonequals \Phi^+ \cup -\Pi$. Here, $\Phi^+$ is the set of positive roots and $\Pi=\{\alpha_1,\dots,\alpha_n\}$ is the set of simple roots. To study the combinatorics of exchange graphs, we fix some terminology. Let $\Phi$ be a rank $n$ root system. For every subset $J \subset [n]$, let $\Phi(J)$ denote the root subsystem of $\Phi$ spanned by the set of simple roots $\{ \alpha_i \mid i \in J \}$. Indeed, the Dynkin diagram of $\Phi(J)$ is the full subdiagram on the vertices in $J$. Note that $\Phi(J)$ may not be irreducible even if $\Phi$ is. A \emph{Coxeter element} is the product of all simple reflections. The order $h$ of a Coxeter element in the Weyl group $W$ is called the \emph{Coxeter number} of $\Phi$. We present the known formula of Coxeter numbers $h$ in Table~\ref{table_Coxeter_number} (see~\cite[Appendix]{Bourbaki02}).
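The Coxeter number can be computed directly from the Cartan matrix: in the reflection representation on the root lattice, a simple reflection acts by $s_i(\alpha_j) = \alpha_j - a_{ij}\alpha_i$ (with one standard convention for the Cartan matrix $(a_{ij})$), and $h$ is the multiplicative order of the product of the simple reflections. A short numerical sketch, assuming \texttt{numpy}:
\begin{verbatim}
import numpy as np

def coxeter_number(A):
    """Order of the Coxeter element s_1 ... s_n, acting on the root
    lattice by s_i(alpha_j) = alpha_j - a_{ij} alpha_i."""
    A = np.asarray(A, dtype=int)
    n = len(A)
    c = np.eye(n, dtype=int)
    for i in range(n):
        S = np.eye(n, dtype=int)
        S[i, :] -= A[i, :]      # s_i in the basis of simple roots
        c = c @ S
    M, h = c.copy(), 1
    while not np.array_equal(M, np.eye(n, dtype=int)):
        M, h = M @ c, h + 1
    return h

print(coxeter_number([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))  # A3: h = 4
print(coxeter_number([[2, -1], [-3, 2]]))                     # G2: h = 6
\end{verbatim}
The printed values agree with Table~\ref{table_Coxeter_number}; since all Coxeter elements are conjugate, the order does not depend on the chosen ordering of the simple reflections.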
\begin{table}[b] \begin{tabular}{c|ccccccccc} \toprule $\Phi$ & $\dynkinfont{A}_n$ & $\dynkinfont{B}_n$ & $\dynkinfont{C}_n$ & $\dynkinfont{D}_n$ & $\dynkinfont{E}_6$ & $\dynkinfont{E}_7$ & $\dynkinfont{E}_8$ & $\dynkinfont{F}_4$ & $\dynkinfont{G}_2$ \\ \midrule $h$ & $n+1$ & $2n$ & $2n$ & $2n-2$ & $12$ & $18$ & $30$ & $12$ & $6$ \\ \bottomrule \end{tabular} \caption{Coxeter numbers}\label{table_Coxeter_number} \end{table} The Dynkin diagrams of finite or affine root systems do not have cycles, except for type $\widetilde{\dynA}_{n-1}$ with $n \geq 3$. We consider \emph{bipartite colorings} on Dynkin diagrams not of type $\widetilde{\dynA}$, that is, we have a function $\varepsilon \colon [n] \to \{+,-\}$, called a \emph{coloring}, such that any two vertices $i$ and $j$ connected by an edge have different colors. Since we are considering tree-shaped diagrams, they admit bipartite colorings. We notice that a bipartite coloring on a Dynkin diagram determines a \emph{bipartite} skew-symmetrizable matrix $\clusterfont{B} = (b_{i,j})$ of the same type by setting \begin{equation}\label{eq_bipartite_matrix} b_{i,j} > 0 \iff \varepsilon(i) = + \text{ and } \varepsilon(j) = -. \end{equation} Here, a skew-symmetrizable matrix is called \emph{bipartite} if there exists a coloring $\varepsilon$ satisfying~\eqref{eq_bipartite_matrix}. Moreover, for a simply-laced Dynkin diagram, a bipartite coloring defines a \emph{bipartite quiver}, that is, a quiver in which each vertex is either a source or a sink. More precisely, we let $i$ be a source if $\varepsilon(i) = +$; otherwise, a sink. \begin{example}\label{example_F4_coloring} Consider the coloring on the Dynkin diagram of $\dynkinfont{F}_{4}$. \[ \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{1}] (1) {}; \node[Dnode, label=below:{2}] (2) [right = of 1] {}; \node[Dnode, label=below:{3}] (3) [right = of 2] {}; \node[Dnode, label=below:{4}] (4) [right =of 3] {}; \foreach \y in {1,3} { \node[ynode] at (\y) {}; } \foreach \g in {2,4} { \node[gnode] at (\g) {}; } \draw (1)--(2) (3)--(4); \draw[double line] (2)--(3); \end{tikzpicture} \] Here, \colorbox{cyclecolor2!50!}{green} nodes have color $+$; \colorbox{cyclecolor1!50!}{orange} nodes have color $-$. This coloring gives a skew-symmetrizable matrix $\clusterfont{B}$ whose Cartan counterpart $C(\clusterfont{B})$ is of type $\dynkinfont{F}_4$. \[ \clusterfont{B} = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 2 & 0 \\ 0 & -1 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \qquad C(\clusterfont{B}) = \begin{pmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -2 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 2 \end{pmatrix}. \] The coloring on the Dynkin diagram of $\dynkinfont{E}_6$ shown on the left provides the bipartite quiver on the right.
\[ \begin{tikzpicture}[scale=.5, baseline=-.5ex] \tikzset{every node/.style={scale=0.7}} \node[Dnode, label=below:{$4$}] (1) {}; \node[Dnode, label=below:{$3$}] (3) [right=of 1] {}; \node[Dnode, label=below:{$1$}] (4) [right=of 3] {}; \node[Dnode, label=right:{$2$}] (2) [above=of 4] {}; \node[Dnode, label=below:{$5$}] (5) [right=of 4] {}; \node[Dnode, label=below:{$6$}] (6) [right=of 5]{}; \draw(1)--(3)--(4)--(5)--(6) (2)--(4); \foreach \y in {1,4,6} { \node[ynode] at (\y) {}; } \foreach \g in {2,3,5} { \node[gnode] at (\g) {}; } \begin{scope}[xshift = 12cm] \node[Dnode, label=below:{$4$}] (1) {}; \node[Dnode, label=below:{$3$}] (3) [right=of 1] {}; \node[Dnode, label=below:{$1$}] (4) [right=of 3] {}; \node[Dnode, label=right:{$2$}] (2) [above=of 4] {}; \node[Dnode, label=below:{$5$}] (5) [right=of 4] {}; \node[Dnode, label=below:{$6$}] (6) [right=of 5]{}; \draw[<-] (1)--(3); \draw[<-] (4)--(3); \draw[<-] (4)--(2); \draw[<-] (4)--(5); \draw[<-] (6)--(5); \foreach \y in {1,4,6} { \node[ynode] at (\y) {}; } \foreach \g in {2,3,5} { \node[gnode] at (\g) {}; } \end{scope} \end{tikzpicture} \] \end{example} Let $I_+$ and $I_-$ be two parts of the set of vertices of the Dynkin diagram given by a bipartite coloring; they are determined uniquely up to renaming. Consider the composition $\mu_{\quiver} = \mu_+ \mu_-$ of mutations, where \[ \mu_{\varepsilon} = \prod_{i \in I_{\varepsilon}} \mu_i \qquad \text{ for } \varepsilon \in \{ +, -\}, \] which is well-defined (cf. Remark~\ref{rmk_mutation_commutes}). We call $\mu_{\quiver}$ a \emph{Coxeter mutation}. By definition, for a bipartite skew-symmetrizable matrix $\clusterfont{B}$ or a bipartite quiver $\clusterfont{Q}$, we obtain \[ \mu_{\quiver}(\clusterfont{B}) = \clusterfont{B},\qquad \mu_{\quiver}(\clusterfont{Q}) = \clusterfont{Q}. \] The initial seed $\Sigma_{t_0} = \Sigma_0 = (\bfx_0, \clusterfont{\tilde{B}}_0)$ is included in the \textit{bipartite belt} consisting of the seeds $\Sigma_r = (\bfx_r, \clusterfont{\tilde{B}}_0)$ for $r \in \Z$ defined by \[ \Sigma_r = (\bfx_r, \clusterfont{\tilde{B}}_0) = \begin{cases} \mu_{\quiver}^r(\Sigma_0) & \text{ if } r > 0, \\ (\mu_- \mu_+)^{-r}(\Sigma_0) & \text{ if } r < 0. \end{cases} \] We write \[ \bfx_r = (x_{1;r},\dots,x_{n;r}) \quad \text{ for }r \in \Z. \] It is known from~\cite{FZ_Ysystem03} and~\cite{ReadingStella20} that both $\mu_+$ and $\mu_-$ act on the set $\Roots_{\ge -1}$ of almost positive roots and on the set $V(\exchange(\Phi))$ of vertices via the bijective correspondence~\eqref{equation_bijective_vars_alpostRoots_facets}. We summarize the properties of the action of the Coxeter mutation as follows. \begin{proposition}[{cf.~\cite[Propositions~2.5, 3.5, and~3.6]{FZ_Ysystem03} for finite type;~\cite[Propositions~5.4 and~5.14]{ReadingStella20} for affine type}] \label{prop_FZ_finite_type_Coxeter_element} Let $\Phi$ be a finite or affine root system of type~$\dynkinfont{Z}$. Let $\{(\mathbf x_t, \clusterfont{\tilde{B}}_t)\}_{t\in \mathbb T_n}$ be a cluster pattern of type $\dynkinfont{Z}$ and $\exchange(\Phi)$ its exchange graph. Then the following hold. \begin{enumerate} \item For $\ell \in [n]$ and $r \in \Z$, we denote by $\exchangesub{\Phi}{x_{\ell;r}}$ the induced subgraph of $\exchange(\Phi)$ consisting of seeds having the cluster variable $x_{\ell;r}$. Then, we have \[ \exchangesub{\Phi}{x_{\ell;r}} \cong \exchange(\Phi([n] \setminus \{\ell\})). \] \item Both $\mu_+$ and $\mu_-$ act on the exchange graph $\exchange(\Phi)$.
\item For any seed $(\bfx, \clusterfont{\tilde{B}}) \in \exchange(\Phi)$, there exists $r\in \Z$ such that \[ |\{ x_{1;r},\dots,x_{n;r}\} \cap \{ x_{1},\dots,x_{n} \}| \geq 2. \] Furthermore, if $\Phi$ is of finite type having even Coxeter number $h = 2e$, then $r \in \{0,1,\dots,e\}$. \end{enumerate} \end{proposition} As a direct consequence of Proposition~\ref{prop_FZ_finite_type_Coxeter_element}, we have the following lemma which will be used later. \begin{lemma}\label{lemma:normal form} Let $(\bfy_{t_0}, \clusterfont{B}_{t_0})$ be a $Y$-seed such that the Cartan counterpart $C(\clusterfont{B}_{t_0})$ is of finite or affine type. For a $Y$-seed $(\bfy, \clusterfont{B})$ in the seed pattern, there exist $r \in\Z$, $\ell \in [n]$, and $j_1,\dots,j_{L} \in [n] \setminus \{\ell\}$ such that the sequence $\mu_{j_1},\dots,\mu_{j_L}$ of mutations connects $\mu_{\quiver}^r(\bfy_{t_0}, \clusterfont{B}_{t_0})$ and $(\bfy, \clusterfont{B})$, that is, \[ (\bfy, \clusterfont{B}) = (\mu_{j_L} \cdots \mu_{j_1})(\mu_{\quiver}^r(\bfy_{t_0}, \clusterfont{B}_{t_0})). \] Furthermore, if $\Phi$ is of finite type and has even Coxeter number $h = 2e$, then $r \in \{0,1,\dots,e\}$. \end{lemma} \begin{proof} Since the exchange graph $\exchange(\{(\bfy_t, \clusterfont{B}_t)\}_{t\in \mathbb{T}_n})$ is the graph $\exchange(\Phi)$ by Proposition~\ref{prop_Y-pattern_exchange_graph}, it is enough to prove the claim in terms of seeds. Let $(\bfx, \clusterfont{\tilde{B}}) \in \exchange(\Phi)$ be a seed. By Proposition~\ref{prop_FZ_finite_type_Coxeter_element}(3), there exist $\ell \in [n]$ and $r \in \Z$ such that $x_{\ell;r} \in \{x_1,\dots,x_n\}$. Accordingly, both seeds $\mu_{\quiver}^r(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ and $(\bfx, \clusterfont{\tilde{B}})$ are contained in the induced subgraph $\exchangesub{\Phi}{x_{\ell;r}}$. Since the subgraph $\exchangesub{\Phi}{x_{\ell;r}}$ itself is the exchange graph of the root subsystem $\Phi([n] \setminus \{\ell\})$ by Proposition~\ref{prop_FZ_finite_type_Coxeter_element}(1), it is connected. Accordingly, the two seeds $\mu_{\quiver}^r(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0})$ and $(\bfx, \clusterfont{\tilde{B}})$ are connected without applying mutations at the vertex $\ell$, that is, there exists a sequence $j_1,\dots,j_L \in [n] \setminus \{\ell\}$ such that $(\bfx, \clusterfont{\tilde{B}}) = (\mu_{j_L} \cdots \mu_{j_1})(\mu_{\quiver}^r(\bfx_{t_0}, \clusterfont{\tilde{B}}_{t_0}))$ as desired. \end{proof} For a finite root system $\Phi$, the exchange graph $\exchange(\Phi)$ becomes the one-skeleton of an $n$-dimensional polytope $P(\Phi)$, called the \emph{generalized associahedron}. Moreover, there is a bijective correspondence between the set $\mathscr{F}(P(\Phi))$ of codimension-one faces, called \emph{facets}, of $P(\Phi)$ and the set of almost positive roots $\Roots_{\ge -1}$. We denote by $F_{\beta}$ the facet of the polytope $P(\Phi)$ corresponding to a root $\beta \in \Roots_{\ge -1}$. We demonstrate Proposition~\ref{prop_FZ_finite_type_Coxeter_element} for root systems of type $\dynkinfont{A}_3$ and $\dynkinfont{D}_4$. \begin{example} Consider the root system $\Phi$ of type $\dynkinfont{A}_3$. In this case, the Coxeter number is $4$, which is even (cf. Table~\ref{table_Coxeter_number}). In Table~\ref{table_A3_tau_action}, we present how $\mu_{\quiver}$ acts on the set of almost positive roots. Here, we use the convention that $I_+ = \{1,3\}$ and $I_- = \{2\}$.
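The row $r=1$ of the table can be reproduced by applying $\mu_{\quiver}$ once to the initial seed and reading off the denominators of the resulting cluster variables. A minimal sketch, assuming \texttt{sympy} (we apply $\mu_-$ first and then $\mu_+$; \texttt{mutate} implements the standard seed mutation):
\begin{verbatim}
import sympy as sp

x = list(sp.symbols('x1:4'))
# bipartite A3 exchange matrix for I+ = {1,3}, I- = {2}, cf. the rule above
B = [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]

def mutate(xs, B, k):
    plus = sp.Mul(*[xs[i] ** max(B[i][k], 0) for i in range(3)])
    minus = sp.Mul(*[xs[i] ** max(-B[i][k], 0) for i in range(3)])
    xs2 = list(xs)
    xs2[k] = sp.cancel((plus + minus) / xs[k])
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(3)] for i in range(3)]
    return xs2, B2

xs, Bc = mutate(x, B, 1)      # mu_-: mutate at vertex 2
for k in (0, 2):              # mu_+: mutate at vertices 1 and 3
    xs, Bc = mutate(xs, Bc, k)
print([sp.fraction(v)[1] for v in xs])   # [x1*x2, x2, x2*x3]
\end{verbatim}
The printed denominators encode the roots $\alpha_1+\alpha_2$, $\alpha_2$, and $\alpha_2+\alpha_3$, matching the row $r=1$ below.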
\begin{table}[b] \begin{tabular}{c|ccc} \toprule $r$ & $\mu_{\quiver}^r({-\alpha_1})$ & $ \mu_{\quiver}^r({-\alpha_2})$ & $ \mu_{\quiver}^r({-\alpha_3})$\\ \midrule $0$ & ${-\alpha_1}$ & ${-\alpha_2}$ & ${-\alpha_3}$ \\ $1$ & ${\alpha_1 + \alpha_2}$ & ${\alpha_2}$ &${\alpha_2 + \alpha_3}$ \\ $2$ & ${\alpha_3}$ & ${\alpha_1 + \alpha_2 + \alpha_3}$ & ${\alpha_1}$\\ \bottomrule \end{tabular} \caption{Computation of $\mu_{\quiver}^r({-\alpha_i})$ for type $\dynkinfont{A}_3$}\label{table_A3_tau_action} \end{table} The generalized associahedron of type $\dynkinfont{A}_3$ is presented in Figure~\ref{fig_asso_A3}. We label each codimension-one face by the corresponding almost positive root. The back-side facets are associated with the set of negative simple roots. As one may see, the face posets of $\mu_{\quiver}^r(F_{-\alpha_i})$ are the same as that of the generalized associahedron $P(\Phi ([n]\setminus \{i\}))$. Indeed, the facets $\mu_{\quiver}^r(F_{-\alpha_1})$ and $\mu_{\quiver}^r(F_{-\alpha_3})$ are pentagons, and the facets $\mu_{\quiver}^r(F_{-\alpha_2})$ are squares. For $(\bfx, \clusterfont{\tilde{B}}) = F_{-\alpha_1} \cap F_{-\alpha_2} \cap F_{-\alpha_3}$, we decorate the vertices $\{ \mu_{\quiver}^r(\bfx,\clusterfont{\tilde{B}}) \mid r = 0,1,2 \}$ with green. As one can see, the orbits of $F_{-\alpha_1}, F_{-\alpha_2}, F_{-\alpha_3}$ exhaust all vertices as claimed in Proposition~\ref{prop_FZ_finite_type_Coxeter_element}(3). \end{example} \begin{figure} \tdplotsetmaincoords{110}{-30} \begin{tikzpicture}% [tdplot_main_coords, scale= 2, scale=0.700000, back/.style={loosely dotted, thin}, edge/.style={color=black, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,fill=black,thick,anchor=base}, gvertex/.style={inner sep=1.2pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}] \coordinate (1) at (-0.50000, -1.50000, 2.00000); \coordinate (2) at (1.50000, 1.50000, -2.00000); \coordinate (3) at (0.50000, 0.50000, 1.00000); \coordinate (4) at (0.50000, 1.50000, 0.00000); \coordinate (5) at (1.50000, 1.50000, -1.00000); \coordinate (6) at (-0.50000, -0.50000, 2.00000); \coordinate (7) at (1.50000, 0.50000, 0.00000); \coordinate (8) at (1.50000, -1.50000, 0.00000); \coordinate (9) at (1.50000, -1.50000, -2.00000); \coordinate (10) at (-1.50000, -1.50000, -2.00000); \coordinate (11) at (-1.50000, -1.50000, 2.00000); \coordinate (12) at (-1.50000, 1.50000, -2.00000); \coordinate (13) at (-1.50000, 1.50000, 0.00000); \coordinate (14) at (-1.50000, -0.50000, 2.00000); \draw[edge,back] (9) -- (10); \draw[edge,back] (10) -- (11); \draw[edge,back] (10) -- (12); \node[vertex] at (10) {}; \draw (7)--(8) node[midway, sloped, above, yshift=1.3cm] {$\alpha_2 + \alpha_3$}; \draw (13)--(4) node[midway, sloped, above, yshift=1.3cm] {$\alpha_1 + \alpha_2$}; \draw[color=white] (13)--(2) node[midway, sloped, below, rotate = 45] {\textcolor{black}{$\alpha_1$}}; \draw[color=white] (2)--(8) node[midway, sloped, below, rotate = - 50] {\textcolor{black}{$\alpha_3$}}; \draw (14)--(6) node[midway, sloped, above, xshift=0.3cm] {$\alpha_2$}; \draw[color=white] (4)--(7) node[midway, sloped, above, yshift=-0.2cm] {\textcolor{black}{\scriptsize $\alpha_1 + \alpha_2 + \alpha_3$}}; \draw[edge] (1) -- (6); \draw[edge] (1) -- (8); \draw[edge] (1) -- (11); \draw[edge] (2) -- (5); \draw[edge] (2) -- (9); \draw[edge] (2) -- (12); \draw[edge] (3) -- (4); \draw[edge] (3) -- (6); \draw[edge] (3) -- (7); \draw[edge] (4) -- (5); \draw[edge] (4) -- (13); \draw[edge] (5) --
(7); \draw[edge] (6) -- (14); \draw[edge] (7) -- (8); \draw[edge] (8) -- (9); \draw[edge] (11) -- (14); \draw[edge] (12) -- (13); \draw[edge] (13) -- (14); \node[vertex] at (1) {}; \node[vertex] at (2) {}; \node[vertex] at (3) {}; \node[vertex] at (4) {}; \node[vertex] at (5) {}; \node[vertex] at (6) {}; \node[vertex] at (7) {}; \node[vertex] at (8) {}; \node[vertex] at (9) {}; \node[vertex] at (11) {}; \node[vertex] at (12) {}; \node[vertex] at (13) {}; \node[vertex] at (14) {}; \foreach \g in {10, 6, 5} { \node[gvertex] at (\g) {}; } \end{tikzpicture} \caption{The type $\dynkinfont{A}_3$ generalized associahedron}\label{fig_asso_A3} \end{figure} \begin{example} We consider the generalized associahedron of type $\dynkinfont{D}_4$ and present four facets corresponding to the negative simple roots in Figure~\ref{fig_asso_D4}. The facet corresponding to $-\alpha_2$ is combinatorially equivalent to $P(\Phi(\{1\})) \times P(\Phi(\{3\})) \times P(\Phi(\{4\}))$, which is a $3$-cube displayed on the boundary. The intersection of these four facets is a vertex sitting at the bottom, colored in green. The Coxeter mutation $\mu_{\quiver}$ acts on the face poset of the generalized associahedron; in particular, the four green vertices are in the same orbit. \end{example} \begin{remark}\label{remark:folding and Coxeter mutation} As seen in Example~\ref{example_folding_ADE}, bipartite colorings on quivers of type~$\dynkinfont{ADE}$ induce those on quivers of type~$\dynkinfont{BCFG}$. Accordingly, if a seed pattern of simply-laced type $\dynkinfont{Z}$ gives a seed pattern of type ${\dynkinfont{Z}^G}$ via the folding procedure, then the Coxeter mutation of type~${\dynkinfont{Z}^G}$ is compatible with that of type~$\dynkinfont{Z}$. More precisely, for a globally foldable $Y$-seed $(\bfy,\clusterfont{B})$ with respect to $G$ of type $\dynkinfont{Z}$ and its Coxeter mutation $\mu_{\quiver}^{\dynkinfont{Z}}$, we have \[ \mu_{\quiver}^{{\dynkinfont{Z}^G}}((\bfy,\clusterfont{B})^G) = (\mu_{\quiver}^{\dynkinfont{Z}}(\bfy,\clusterfont{B}))^G. \] Here, $\mu_{\quiver}^{{\dynkinfont{Z}^G}}$ is the Coxeter mutation on the seed pattern determined by $(\bfy,\clusterfont{B})^G$. Moreover, the Coxeter numbers of $\dynkinfont{Z}$ and ${\dynkinfont{Z}^G}$ coincide. Indeed, \[ \begin{split} & h(\dynkinfont{A}_{2n-1}) = h(\dynkinfont{B}_n) = 2n, \\ & h(\dynkinfont{D}_{n+1}) = h(\dynkinfont{C}_n) = 2n, \\ & h(\dynkinfont{E}_6) = h(\dynkinfont{F}_4) = 12, \\ & h(\dynkinfont{D}_4) = h(\dynkinfont{G}_2) = 6.
\end{split} \] \end{remark} \begin{figure} \subfigure[The generalized associahedron of type $\dynkinfont{D}_4$.]{ \centering \tdplotsetmaincoords{110}{260} \begin{tikzpicture}% [tdplot_main_coords,scale = 7, back/.style={loosely dotted, thin}, edge/.style={color=black}, cube/.style={color=red, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1.2pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}, vertex_normal/.style = {inner sep=0.5pt,circle,draw=black,fill=black,thick,anchor=base}, edge0/.style = {color=blue!50!red, very thick, dashed, opacity=0.7}, edge8/.style= {color=ForestGreen, very thick, opacity=0.7}, edge12/.style= {blue,dotted, very thick, opacity=0.7}, face1/.style = {fill=blue!20!white, fill opacity = 0.5}] \coordinate (0) at (0.6, 0.6, 0.6); \coordinate (1) at (0.42857142857142855, -0.42857142857142855, -0.42857142857142855); \coordinate (2) at (-0.13333333333333333, 0.2, -0.13333333333333333); \coordinate (3) at (0.375, -0.25, 0.25); \coordinate (4) at (0.3, 0.0, 0.0); \coordinate (5) at (0.09090909090909091, -0.09090909090909091, 0.09090909090909091); \coordinate (6) at (0.2222222222222222, -0.2222222222222222, 0.2222222222222222); \coordinate (7) at (0.42857142857142855, -0.2857142857142857, 0.42857142857142855); \coordinate (8) at (0.08333333333333333, 0.0, 0.0); \coordinate (9) at (0.0, 0.07692307692307693, 0.0); \coordinate (10) at (-0.07142857142857142, 0.07142857142857142, -0.07142857142857142); \coordinate (11) at (0.0, 0.0, -0.07692307692307693); \coordinate (12) at (-0.07692307692307693, 0.0, 0.0); \coordinate (13) at (0.0, -0.08333333333333333, 0.0); \coordinate (14) at (0.0, 0.0, 0.08333333333333333); \coordinate (15) at (-0.13333333333333333, 0.13333333333333333, -0.13333333333333333); \coordinate (16) at (0.0, 0.0, 0.3); \coordinate (17) at (0.25, -0.25, 0.375); \coordinate (18) at (0.2857142857142857, -0.42857142857142855, 0.42857142857142855); \coordinate (19) at (0.0, -0.3, 0.0); \coordinate (20) at (0.5, -0.5, 0.5); \coordinate (21) at (0.42857142857142855, -0.42857142857142855, 0.2857142857142857); \coordinate (22) at (0.25, -0.375, 0.25); \coordinate (23) at (0.0, 0.3, 0.3); \coordinate (24) at (0.3, 0.3, 0.0); \coordinate (25) at (0.42857142857142855, 0.42857142857142855, 0.42857142857142855); \coordinate (26) at (0.0, 0.23076923076923078, 0.0); \coordinate (27) at (0.3, 0.3, -0.3); \coordinate (28) at (-0.13333333333333333, 0.2, -0.2); \coordinate (29) at (0.3, 0.0, -0.3); \coordinate (30) at (0.0, 0.0, -0.23076923076923078); \coordinate (31) at (-0.13333333333333333, 0.13333333333333333, -0.2); \coordinate (32) at (0.0, -0.3, -0.3); \coordinate (33) at (0.6, 0.6, -0.6); \coordinate (34) at (0.6, -0.6, -0.6); \coordinate (35) at (0.6, -0.6, 0.6); \coordinate (36) at (-0.6, -0.6, 0.6); \coordinate (37) at (-0.6, 0.6, 0.6); \coordinate (38) at (-0.2, 0.2, -0.13333333333333333); \coordinate (39) at (-0.23076923076923078, 0.0, 0.0); \coordinate (40) at (-0.2, 0.13333333333333333, -0.13333333333333333); \coordinate (41) at (-0.3, 0.0, 0.3); \coordinate (42) at (-0.42857142857142855, -0.42857142857142855, 0.42857142857142855); \coordinate (43) at (-0.3, -0.3, 0.0); \coordinate (44) at (-0.3, 0.3, 0.3); \coordinate (45) at (-0.2, 0.2, -0.2); \coordinate (46) at (-0.2, 0.13333333333333333, -0.2); \coordinate (47) at (-0.3, -0.3, -0.3); \coordinate (48) at (-0.6, 0.6, -0.6); \coordinate (49) at (-0.6, -0.6, -0.6); \fill[red!60!blue, opacity=0.15] (37)--(48)--(49)--(36)--cycle {}; \fill[face1] 
(37)--(48)--(45)--(38)--(44)--cycle {}; \fill[face1] (37)--(48)--(33)--(0)--cycle{}; \fill[face1] (37)--(44)--(23)--(25)--(0)--cycle {}; \fill[face1] (44)--(23)--(26)--(2)--(38)--cycle {}; \fill[face1] (2)--(28)--(45)--(38)--cycle {}; \fill[face1] (45)--(28)--(27)--(33)--(48)--cycle {}; \fill[face1] (2)--(26)--(24)--(27)--(28)--cycle {}; \fill[face1] (25)--(23)--(26)--(24)--cycle; \fill[face1] (0)--(25)--(24)--(27)--(33)--cycle; \fill[green!10!white, opacity=0.7] (48)--(45)--(46)--(47)--(49)--(34)--(33)--cycle{}; \draw[edge] (3)--(4); \draw[edge] (5)--(6); \draw[edge] (4)--(8); \draw[edge] (10)--(15); \draw[edge] (14)--(16); \draw[edge] (16)--(17); \draw[edge] (13)--(19); \draw[edge] (1)--(21); \draw[edge] (19)--(22); \draw[edge] (16)--(23); \draw[edge] (4)--(24); \draw[edge] (7)--(25); \draw[edge] (9)--(26); \draw[edge] (4)--(29); \draw[edge] (11)--(30); \draw[edge] (19)--(32); \draw[edge] (20)--(35); \draw[edge] (12)--(39); \draw[edge] (16)--(41); \draw[edge] (18)--(42); \draw[edge] (19)--(43); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[edge0] (36)--(37); \draw[edge0] (39)--(40); \draw[edge0] (38)--(40); \draw[edge0] (39)--(41); \draw[edge0] (41)--(42); \draw[edge0] (36)--(42); \draw[edge0] (39)--(43); \draw[edge0] (42)--(43); \draw[edge0] (37)--(44); \draw[edge0] (38)--(44); \draw[edge0] (41)--(44); \draw[edge0] (38)--(45); \draw[edge0] (45)--(46); \draw[edge0] (40)--(46); \draw[edge0] (43)--(47); \draw[edge0] (46)--(47); \draw[edge0] (37)--(48); \draw[edge0] (45)--(48); \draw[edge0] (48)--(49); \draw[edge0] (47)--(49); \draw[edge0] (36)--(49); \draw[edge8] (27)--(28); \draw[edge8] (1)--(29); \draw[edge8] (27)--(29); \draw[edge8] (29)--(30); \draw[edge8] (28)--(31); \draw[edge8] (30)--(31); \draw[edge8] (30)--(32); \draw[edge8] (1)--(32); \draw[edge8] (27)--(33); \draw[edge8] (33)--(34); \draw[edge8] (1)--(34); \draw[edge8] (28)--(45); \draw[edge8] (45)--(46); \draw[edge8] (31)--(46); \draw[edge8] (32)--(47); \draw[edge8] (46)--(47); \draw[edge8] (45)--(48); \draw[edge8] (33)--(48); \draw[edge8] (48)--(49); \draw[edge8] (47)--(49); \draw[edge8] (34)--(49); \draw[edge12] (24)--(25); \draw[edge12] (23)--(25); \draw[edge12] (0)--(25); \draw[edge12] (2)--(26); \draw[edge12] (23)--(26); \draw[edge12] (24)--(26); \draw[edge12] (24)--(27); \draw[edge12] (2)--(28); \draw[edge12] (27)--(28); \draw[edge12] (0)--(33); \draw[edge12] (27)--(33); \draw[edge12] (0)--(37); \draw[edge12] (2)--(38); \draw[edge12] (37)--(44); \draw[edge12] (38)--(44); \draw[edge12] (23)--(44); \draw[edge12] (38)--(45); \draw[edge12] (28)--(45); \draw[edge12] (37)--(48); \draw[edge12] (45)--(48); \draw[edge12] (33)--(48); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[cube] (2)--(15); \draw[cube] (2)--(28); \draw[cube] (15)--(31); \draw[cube] (28)--(31); \draw[cube] (2)--(38); \draw[cube] (38)--(40); \draw[cube] (15)--(40); \draw[cube] (38)--(45); \draw[cube] (28)--(45); \draw[cube] (45)--(46); \draw[cube] (31)--(46); \draw[cube] (40)--(46); \draw[cube] (3)--(6); \draw[cube] (3)--(7); \draw[cube] (6)--(17); \draw[cube] (7)--(17); 
\draw[cube] (17)--(18); \draw[cube] (18)--(20); \draw[cube] (7)--(20); \draw[cube] (3)--(21); \draw[cube] (20)--(21); \draw[cube] (18)--(22); \draw[cube] (21)--(22); \draw[cube] (6)--(22); \draw[cube] (5)--(8); \draw[cube] (8)--(9); \draw[cube] (9)--(10); \draw[cube] (8)--(11); \draw[cube] (10)--(11); \draw[cube] (10)--(12); \draw[cube] (11)--(13); \draw[cube] (12)--(13); \draw[cube] (5)--(13); \draw[cube] (9)--(14); \draw[cube] (12)--(14); \draw[cube] (5)--(14); \foreach \x in {0,...,49}{ \node[vertex_normal] at (\x) {}; } \foreach \x in {48, 20, 15, 5}{ \node[vertex] at (\x) {}; } \end{tikzpicture} } \vspace{1em} \subfigure[$F_{-\alpha_1}$.]{ \centering \tdplotsetmaincoords{110}{260} \begin{tikzpicture}% [tdplot_main_coords, scale = 3, back/.style={loosely dotted, thin}, edge/.style={color=black}, cube/.style={color=red, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}, vertex_normal/.style = {inner sep=0.5pt,circle,draw=black,fill=black,thick,anchor=base}, edge0/.style = {color=blue!50!red, very thick, opacity=0.7}, edge8/.style= {color=ForestGreen, very thick, opacity=0.7}, edge12/.style= {blue,dotted, very thick, opacity=0.7}, face1/.style = {fill=blue!20!white, fill opacity = 0.5}] \coordinate (0) at (0.6, 0.6, 0.6); \coordinate (1) at (0.42857142857142855, -0.42857142857142855, -0.42857142857142855); \coordinate (2) at (-0.13333333333333333, 0.2, -0.13333333333333333); \coordinate (3) at (0.375, -0.25, 0.25); \coordinate (4) at (0.3, 0.0, 0.0); \coordinate (5) at (0.09090909090909091, -0.09090909090909091, 0.09090909090909091); \coordinate (6) at (0.2222222222222222, -0.2222222222222222, 0.2222222222222222); \coordinate (7) at (0.42857142857142855, -0.2857142857142857, 0.42857142857142855); \coordinate (8) at (0.08333333333333333, 0.0, 0.0); \coordinate (9) at (0.0, 0.07692307692307693, 0.0); \coordinate (10) at (-0.07142857142857142, 0.07142857142857142, -0.07142857142857142); \coordinate (11) at (0.0, 0.0, -0.07692307692307693); \coordinate (12) at (-0.07692307692307693, 0.0, 0.0); \coordinate (13) at (0.0, -0.08333333333333333, 0.0); \coordinate (14) at (0.0, 0.0, 0.08333333333333333); \coordinate (15) at (-0.13333333333333333, 0.13333333333333333, -0.13333333333333333); \coordinate (16) at (0.0, 0.0, 0.3); \coordinate (17) at (0.25, -0.25, 0.375); \coordinate (18) at (0.2857142857142857, -0.42857142857142855, 0.42857142857142855); \coordinate (19) at (0.0, -0.3, 0.0); \coordinate (20) at (0.5, -0.5, 0.5); \coordinate (21) at (0.42857142857142855, -0.42857142857142855, 0.2857142857142857); \coordinate (22) at (0.25, -0.375, 0.25); \coordinate (23) at (0.0, 0.3, 0.3); \coordinate (24) at (0.3, 0.3, 0.0); \coordinate (25) at (0.42857142857142855, 0.42857142857142855, 0.42857142857142855); \coordinate (26) at (0.0, 0.23076923076923078, 0.0); \coordinate (27) at (0.3, 0.3, -0.3); \coordinate (28) at (-0.13333333333333333, 0.2, -0.2); \coordinate (29) at (0.3, 0.0, -0.3); \coordinate (30) at (0.0, 0.0, -0.23076923076923078); \coordinate (31) at (-0.13333333333333333, 0.13333333333333333, -0.2); \coordinate (32) at (0.0, -0.3, -0.3); \coordinate (33) at (0.6, 0.6, -0.6); \coordinate (34) at (0.6, -0.6, -0.6); \coordinate (35) at (0.6, -0.6, 0.6); \coordinate (36) at (-0.6, -0.6, 0.6); \coordinate (37) at (-0.6, 0.6, 0.6); \coordinate (38) at (-0.2, 0.2, -0.13333333333333333); \coordinate (39) at (-0.23076923076923078, 0.0, 0.0); \coordinate (40) at (-0.2, 0.13333333333333333, 
-0.13333333333333333); \coordinate (41) at (-0.3, 0.0, 0.3); \coordinate (42) at (-0.42857142857142855, -0.42857142857142855, 0.42857142857142855); \coordinate (43) at (-0.3, -0.3, 0.0); \coordinate (44) at (-0.3, 0.3, 0.3); \coordinate (45) at (-0.2, 0.2, -0.2); \coordinate (46) at (-0.2, 0.13333333333333333, -0.2); \coordinate (47) at (-0.3, -0.3, -0.3); \coordinate (48) at (-0.6, 0.6, -0.6); \coordinate (49) at (-0.6, -0.6, -0.6); \fill[red!60!blue, opacity=0.15] (37)--(48)--(49)--(36)--cycle {}; \draw[edge] (3)--(4); \draw[edge] (5)--(6); \draw[edge] (4)--(8); \draw[edge] (10)--(15); \draw[edge] (14)--(16); \draw[edge] (16)--(17); \draw[edge] (13)--(19); \draw[edge] (1)--(21); \draw[edge] (19)--(22); \draw[edge] (16)--(23); \draw[edge] (4)--(24); \draw[edge] (7)--(25); \draw[edge] (9)--(26); \draw[edge] (4)--(29); \draw[edge] (11)--(30); \draw[edge] (19)--(32); \draw[edge] (20)--(35); \draw[edge] (12)--(39); \draw[edge] (16)--(41); \draw[edge] (18)--(42); \draw[edge] (19)--(43); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[edge8] (27)--(28); \draw[edge8] (1)--(29); \draw[edge8] (27)--(29); \draw[edge8] (29)--(30); \draw[edge8] (28)--(31); \draw[edge8] (30)--(31); \draw[edge8] (30)--(32); \draw[edge8] (1)--(32); \draw[edge8] (27)--(33); \draw[edge8] (33)--(34); \draw[edge8] (1)--(34); \draw[edge8] (28)--(45); \draw[edge8] (45)--(46); \draw[edge8] (31)--(46); \draw[edge8] (32)--(47); \draw[edge8] (46)--(47); \draw[edge8] (45)--(48); \draw[edge8] (33)--(48); \draw[edge8] (48)--(49); \draw[edge8] (47)--(49); \draw[edge8] (34)--(49); \draw[edge12] (24)--(25); \draw[edge12] (23)--(25); \draw[edge12] (0)--(25); \draw[edge12] (2)--(26); \draw[edge12] (23)--(26); \draw[edge12] (24)--(26); \draw[edge12] (24)--(27); \draw[edge12] (2)--(28); \draw[edge12] (27)--(28); \draw[edge12] (0)--(33); \draw[edge12] (27)--(33); \draw[edge12] (0)--(37); \draw[edge12] (2)--(38); \draw[edge12] (37)--(44); \draw[edge12] (38)--(44); \draw[edge12] (23)--(44); \draw[edge12] (38)--(45); \draw[edge12] (28)--(45); \draw[edge12] (37)--(48); \draw[edge12] (45)--(48); \draw[edge12] (33)--(48); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[cube] (2)--(15); \draw[cube] (2)--(28); \draw[cube] (15)--(31); \draw[cube] (28)--(31); \draw[cube] (2)--(38); \draw[cube] (38)--(40); \draw[cube] (15)--(40); \draw[cube] (38)--(45); \draw[cube] (28)--(45); \draw[cube] (45)--(46); \draw[cube] (31)--(46); \draw[cube] (40)--(46); \draw[cube] (3)--(6); \draw[cube] (3)--(7); \draw[cube] (6)--(17); \draw[cube] (7)--(17); \draw[cube] (17)--(18); \draw[cube] (18)--(20); \draw[cube] (7)--(20); \draw[cube] (3)--(21); \draw[cube] (20)--(21); \draw[cube] (18)--(22); \draw[cube] (21)--(22); \draw[cube] (6)--(22); \draw[cube] (5)--(8); \draw[cube] (8)--(9); \draw[cube] (9)--(10); \draw[cube] (8)--(11); \draw[cube] (10)--(11); \draw[cube] (10)--(12); \draw[cube] (11)--(13); \draw[cube] (12)--(13); \draw[cube] (5)--(13); \draw[cube] (9)--(14); \draw[cube] (12)--(14); \draw[cube] (5)--(14); \draw[edge0] (36)--(37); \draw[edge0] (39)--(40); \draw[edge0] (38)--(40); 
\draw[edge0] (39)--(41); \draw[edge0] (41)--(42); \draw[edge0] (36)--(42); \draw[edge0] (39)--(43); \draw[edge0] (42)--(43); \draw[edge0] (37)--(44); \draw[edge0] (38)--(44); \draw[edge0] (41)--(44); \draw[edge0] (38)--(45); \draw[edge0] (45)--(46); \draw[edge0] (40)--(46); \draw[edge0] (43)--(47); \draw[edge0] (46)--(47); \draw[edge0] (37)--(48); \draw[edge0] (45)--(48); \draw[edge0] (48)--(49); \draw[edge0] (47)--(49); \draw[edge0] (36)--(49); \foreach \x in {0,...,49}{ \node[vertex_normal] at (\x) {}; } \foreach \x in {48, 20, 15, 5}{ \node[vertex] at (\x) {}; } \end{tikzpicture} } \subfigure[$F_{-\alpha_3}$.]{ \centering \tdplotsetmaincoords{110}{260} \begin{tikzpicture}% [tdplot_main_coords,scale = 3, back/.style={loosely dotted, thin}, edge/.style={color=black}, cube/.style={color=red, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}, vertex_normal/.style = {inner sep=0.5pt,circle,draw=black,fill=black,thick,anchor=base}, edge0/.style = {color=blue!50!red, very thick, dashed, opacity=0.7}, edge8/.style= {color=ForestGreen, very thick, opacity=0.7}, edge12/.style= {blue,dotted, very thick, opacity=0.7}, edge12_/.style={blue, very thick, opacity = 0.7}, face1/.style = {fill=blue!20!white, fill opacity = 0.5}] \coordinate (0) at (0.6, 0.6, 0.6); \coordinate (1) at (0.42857142857142855, -0.42857142857142855, -0.42857142857142855); \coordinate (2) at (-0.13333333333333333, 0.2, -0.13333333333333333); \coordinate (3) at (0.375, -0.25, 0.25); \coordinate (4) at (0.3, 0.0, 0.0); \coordinate (5) at (0.09090909090909091, -0.09090909090909091, 0.09090909090909091); \coordinate (6) at (0.2222222222222222, -0.2222222222222222, 0.2222222222222222); \coordinate (7) at (0.42857142857142855, -0.2857142857142857, 0.42857142857142855); \coordinate (8) at (0.08333333333333333, 0.0, 0.0); \coordinate (9) at (0.0, 0.07692307692307693, 0.0); \coordinate (10) at (-0.07142857142857142, 0.07142857142857142, -0.07142857142857142); \coordinate (11) at (0.0, 0.0, -0.07692307692307693); \coordinate (12) at (-0.07692307692307693, 0.0, 0.0); \coordinate (13) at (0.0, -0.08333333333333333, 0.0); \coordinate (14) at (0.0, 0.0, 0.08333333333333333); \coordinate (15) at (-0.13333333333333333, 0.13333333333333333, -0.13333333333333333); \coordinate (16) at (0.0, 0.0, 0.3); \coordinate (17) at (0.25, -0.25, 0.375); \coordinate (18) at (0.2857142857142857, -0.42857142857142855, 0.42857142857142855); \coordinate (19) at (0.0, -0.3, 0.0); \coordinate (20) at (0.5, -0.5, 0.5); \coordinate (21) at (0.42857142857142855, -0.42857142857142855, 0.2857142857142857); \coordinate (22) at (0.25, -0.375, 0.25); \coordinate (23) at (0.0, 0.3, 0.3); \coordinate (24) at (0.3, 0.3, 0.0); \coordinate (25) at (0.42857142857142855, 0.42857142857142855, 0.42857142857142855); \coordinate (26) at (0.0, 0.23076923076923078, 0.0); \coordinate (27) at (0.3, 0.3, -0.3); \coordinate (28) at (-0.13333333333333333, 0.2, -0.2); \coordinate (29) at (0.3, 0.0, -0.3); \coordinate (30) at (0.0, 0.0, -0.23076923076923078); \coordinate (31) at (-0.13333333333333333, 0.13333333333333333, -0.2); \coordinate (32) at (0.0, -0.3, -0.3); \coordinate (33) at (0.6, 0.6, -0.6); \coordinate (34) at (0.6, -0.6, -0.6); \coordinate (35) at (0.6, -0.6, 0.6); \coordinate (36) at (-0.6, -0.6, 0.6); \coordinate (37) at (-0.6, 0.6, 0.6); \coordinate (38) at (-0.2, 0.2, -0.13333333333333333); \coordinate (39) at (-0.23076923076923078, 0.0, 0.0); \coordinate (40) at 
(-0.2, 0.13333333333333333, -0.13333333333333333); \coordinate (41) at (-0.3, 0.0, 0.3); \coordinate (42) at (-0.42857142857142855, -0.42857142857142855, 0.42857142857142855); \coordinate (43) at (-0.3, -0.3, 0.0); \coordinate (44) at (-0.3, 0.3, 0.3); \coordinate (45) at (-0.2, 0.2, -0.2); \coordinate (46) at (-0.2, 0.13333333333333333, -0.2); \coordinate (47) at (-0.3, -0.3, -0.3); \coordinate (48) at (-0.6, 0.6, -0.6); \coordinate (49) at (-0.6, -0.6, -0.6); \fill[face1] (37)--(48)--(45)--(38)--(44)--cycle {}; \fill[face1] (37)--(48)--(33)--(0)--cycle{}; \fill[face1] (37)--(44)--(23)--(25)--(0)--cycle {}; \fill[face1] (44)--(23)--(26)--(2)--(38)--cycle {}; \fill[face1] (2)--(28)--(45)--(38)--cycle {}; \fill[face1] (45)--(28)--(27)--(33)--(48)--cycle {}; \fill[face1] (2)--(26)--(24)--(27)--(28)--cycle {}; \fill[face1] (25)--(23)--(26)--(24)--cycle; \fill[face1] (0)--(25)--(24)--(27)--(33)--cycle; \draw[edge] (3)--(4); \draw[edge] (5)--(6); \draw[edge] (4)--(8); \draw[edge] (10)--(15); \draw[edge] (14)--(16); \draw[edge] (16)--(17); \draw[edge] (13)--(19); \draw[edge] (1)--(21); \draw[edge] (19)--(22); \draw[edge] (16)--(23); \draw[edge] (4)--(24); \draw[edge] (7)--(25); \draw[edge] (9)--(26); \draw[edge] (4)--(29); \draw[edge] (11)--(30); \draw[edge] (19)--(32); \draw[edge] (20)--(35); \draw[edge] (12)--(39); \draw[edge] (16)--(41); \draw[edge] (18)--(42); \draw[edge] (19)--(43); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[edge0] (36)--(37); \draw[edge0] (39)--(40); \draw[edge0] (38)--(40); \draw[edge0] (39)--(41); \draw[edge0] (41)--(42); \draw[edge0] (36)--(42); \draw[edge0] (39)--(43); \draw[edge0] (42)--(43); \draw[edge0] (37)--(44); \draw[edge0] (38)--(44); \draw[edge0] (41)--(44); \draw[edge0] (38)--(45); \draw[edge0] (45)--(46); \draw[edge0] (40)--(46); \draw[edge0] (43)--(47); \draw[edge0] (46)--(47); \draw[edge0] (37)--(48); \draw[edge0] (45)--(48); \draw[edge0] (48)--(49); \draw[edge0] (47)--(49); \draw[edge0] (36)--(49); \draw[edge8] (27)--(28); \draw[edge8] (1)--(29); \draw[edge8] (27)--(29); \draw[edge8] (29)--(30); \draw[edge8] (28)--(31); \draw[edge8] (30)--(31); \draw[edge8] (30)--(32); \draw[edge8] (1)--(32); \draw[edge8] (27)--(33); \draw[edge8] (33)--(34); \draw[edge8] (1)--(34); \draw[edge8] (28)--(45); \draw[edge8] (45)--(46); \draw[edge8] (31)--(46); \draw[edge8] (32)--(47); \draw[edge8] (46)--(47); \draw[edge8] (45)--(48); \draw[edge8] (33)--(48); \draw[edge8] (48)--(49); \draw[edge8] (47)--(49); \draw[edge8] (34)--(49); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[cube] (2)--(15); \draw[cube] (2)--(28); \draw[cube] (15)--(31); \draw[cube] (28)--(31); \draw[cube] (2)--(38); \draw[cube] (38)--(40); \draw[cube] (15)--(40); \draw[cube] (38)--(45); \draw[cube] (28)--(45); \draw[cube] (45)--(46); \draw[cube] (31)--(46); \draw[cube] (40)--(46); \draw[cube] (3)--(6); \draw[cube] (3)--(7); \draw[cube] (6)--(17); \draw[cube] (7)--(17); \draw[cube] (17)--(18); \draw[cube] (18)--(20); \draw[cube] (7)--(20); \draw[cube] (3)--(21); \draw[cube] (20)--(21); \draw[cube] (18)--(22); \draw[cube] 
(21)--(22); \draw[cube] (6)--(22); \draw[cube] (5)--(8); \draw[cube] (8)--(9); \draw[cube] (9)--(10); \draw[cube] (8)--(11); \draw[cube] (10)--(11); \draw[cube] (10)--(12); \draw[cube] (11)--(13); \draw[cube] (12)--(13); \draw[cube] (5)--(13); \draw[cube] (9)--(14); \draw[cube] (12)--(14); \draw[cube] (5)--(14); \draw[edge12_] (24)--(25); \draw[edge12_] (23)--(25); \draw[edge12_] (0)--(25); \draw[edge12_] (2)--(26); \draw[edge12_] (23)--(26); \draw[edge12_] (24)--(26); \draw[edge12_] (24)--(27); \draw[edge12_] (2)--(28); \draw[edge12_] (27)--(28); \draw[edge12_] (0)--(33); \draw[edge12_] (27)--(33); \draw[edge12_] (0)--(37); \draw[edge12_] (2)--(38); \draw[edge12_] (37)--(44); \draw[edge12_] (38)--(44); \draw[edge12_] (23)--(44); \draw[edge12_] (38)--(45); \draw[edge12_] (28)--(45); \draw[edge12] (37)--(48); \draw[edge12] (45)--(48); \draw[edge12] (33)--(48); \foreach \x in {0,...,49}{ \node[vertex_normal] at (\x) {}; } \foreach \x in {48, 20, 15, 5}{ \node[vertex] at (\x) {}; } \end{tikzpicture} } \subfigure[$F_{-\alpha_4}$.]{ \centering \tdplotsetmaincoords{110}{260} \begin{tikzpicture}% [tdplot_main_coords,scale = 3, back/.style={loosely dotted, thin}, edge/.style={color=black}, cube/.style={color=red, thick}, facet/.style={fill=blue!95!black,fill opacity=0.100000}, vertex/.style={inner sep=1pt,circle,draw=green!25!black,fill=green!75!black,thick,anchor=base}, vertex_normal/.style = {inner sep=0.5pt,circle,draw=black,fill=black,thick,anchor=base}, edge0/.style = {color=blue!50!red, very thick, dashed, opacity=0.7}, edge8/.style= {color=ForestGreen, very thick, opacity=0.7}, edge12/.style= {blue,dotted, very thick, opacity=0.7}, face1/.style = {fill=blue!20!white, fill opacity = 0.5}] \coordinate (0) at (0.6, 0.6, 0.6); \coordinate (1) at (0.42857142857142855, -0.42857142857142855, -0.42857142857142855); \coordinate (2) at (-0.13333333333333333, 0.2, -0.13333333333333333); \coordinate (3) at (0.375, -0.25, 0.25); \coordinate (4) at (0.3, 0.0, 0.0); \coordinate (5) at (0.09090909090909091, -0.09090909090909091, 0.09090909090909091); \coordinate (6) at (0.2222222222222222, -0.2222222222222222, 0.2222222222222222); \coordinate (7) at (0.42857142857142855, -0.2857142857142857, 0.42857142857142855); \coordinate (8) at (0.08333333333333333, 0.0, 0.0); \coordinate (9) at (0.0, 0.07692307692307693, 0.0); \coordinate (10) at (-0.07142857142857142, 0.07142857142857142, -0.07142857142857142); \coordinate (11) at (0.0, 0.0, -0.07692307692307693); \coordinate (12) at (-0.07692307692307693, 0.0, 0.0); \coordinate (13) at (0.0, -0.08333333333333333, 0.0); \coordinate (14) at (0.0, 0.0, 0.08333333333333333); \coordinate (15) at (-0.13333333333333333, 0.13333333333333333, -0.13333333333333333); \coordinate (16) at (0.0, 0.0, 0.3); \coordinate (17) at (0.25, -0.25, 0.375); \coordinate (18) at (0.2857142857142857, -0.42857142857142855, 0.42857142857142855); \coordinate (19) at (0.0, -0.3, 0.0); \coordinate (20) at (0.5, -0.5, 0.5); \coordinate (21) at (0.42857142857142855, -0.42857142857142855, 0.2857142857142857); \coordinate (22) at (0.25, -0.375, 0.25); \coordinate (23) at (0.0, 0.3, 0.3); \coordinate (24) at (0.3, 0.3, 0.0); \coordinate (25) at (0.42857142857142855, 0.42857142857142855, 0.42857142857142855); \coordinate (26) at (0.0, 0.23076923076923078, 0.0); \coordinate (27) at (0.3, 0.3, -0.3); \coordinate (28) at (-0.13333333333333333, 0.2, -0.2); \coordinate (29) at (0.3, 0.0, -0.3); \coordinate (30) at (0.0, 0.0, -0.23076923076923078); \coordinate (31) at (-0.13333333333333333, 
0.13333333333333333, -0.2); \coordinate (32) at (0.0, -0.3, -0.3); \coordinate (33) at (0.6, 0.6, -0.6); \coordinate (34) at (0.6, -0.6, -0.6); \coordinate (35) at (0.6, -0.6, 0.6); \coordinate (36) at (-0.6, -0.6, 0.6); \coordinate (37) at (-0.6, 0.6, 0.6); \coordinate (38) at (-0.2, 0.2, -0.13333333333333333); \coordinate (39) at (-0.23076923076923078, 0.0, 0.0); \coordinate (40) at (-0.2, 0.13333333333333333, -0.13333333333333333); \coordinate (41) at (-0.3, 0.0, 0.3); \coordinate (42) at (-0.42857142857142855, -0.42857142857142855, 0.42857142857142855); \coordinate (43) at (-0.3, -0.3, 0.0); \coordinate (44) at (-0.3, 0.3, 0.3); \coordinate (45) at (-0.2, 0.2, -0.2); \coordinate (46) at (-0.2, 0.13333333333333333, -0.2); \coordinate (47) at (-0.3, -0.3, -0.3); \coordinate (48) at (-0.6, 0.6, -0.6); \coordinate (49) at (-0.6, -0.6, -0.6); \fill[green!10!white, opacity=0.7] (48)--(45)--(46)--(47)--(49)--(34)--(33)--cycle{}; \draw[edge] (3)--(4); \draw[edge] (5)--(6); \draw[edge] (4)--(8); \draw[edge] (10)--(15); \draw[edge] (14)--(16); \draw[edge] (16)--(17); \draw[edge] (13)--(19); \draw[edge] (1)--(21); \draw[edge] (19)--(22); \draw[edge] (16)--(23); \draw[edge] (4)--(24); \draw[edge] (7)--(25); \draw[edge] (9)--(26); \draw[edge] (4)--(29); \draw[edge] (11)--(30); \draw[edge] (19)--(32); \draw[edge] (20)--(35); \draw[edge] (12)--(39); \draw[edge] (16)--(41); \draw[edge] (18)--(42); \draw[edge] (19)--(43); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[edge0] (36)--(37); \draw[edge0] (39)--(40); \draw[edge0] (38)--(40); \draw[edge0] (39)--(41); \draw[edge0] (41)--(42); \draw[edge0] (36)--(42); \draw[edge0] (39)--(43); \draw[edge0] (42)--(43); \draw[edge0] (37)--(44); \draw[edge0] (38)--(44); \draw[edge0] (41)--(44); \draw[edge0] (38)--(45); \draw[edge0] (45)--(46); \draw[edge0] (40)--(46); \draw[edge0] (43)--(47); \draw[edge0] (46)--(47); \draw[edge0] (37)--(48); \draw[edge0] (45)--(48); \draw[edge0] (48)--(49); \draw[edge0] (47)--(49); \draw[edge0] (36)--(49); \draw[edge12] (24)--(25); \draw[edge12] (23)--(25); \draw[edge12] (0)--(25); \draw[edge12] (2)--(26); \draw[edge12] (23)--(26); \draw[edge12] (24)--(26); \draw[edge12] (24)--(27); \draw[edge12] (2)--(28); \draw[edge12] (27)--(28); \draw[edge12] (0)--(33); \draw[edge12] (27)--(33); \draw[edge12] (0)--(37); \draw[edge12] (2)--(38); \draw[edge12] (37)--(44); \draw[edge12] (38)--(44); \draw[edge12] (23)--(44); \draw[edge12] (38)--(45); \draw[edge12] (28)--(45); \draw[edge12] (37)--(48); \draw[edge12] (45)--(48); \draw[edge12] (33)--(48); \draw[cube] (0)--(33); \draw[cube] (33)--(34); \draw[cube] (34)--(35); \draw[cube] (0)--(35); \draw[cube] (35)--(36); \draw[cube] (0)--(37); \draw[cube] (36)--(37); \draw[cube] (37)--(48); \draw[cube] (33)--(48); \draw[cube] (48)--(49); \draw[cube] (34)--(49); \draw[cube] (36)--(49); \draw[cube] (2)--(15); \draw[cube] (2)--(28); \draw[cube] (15)--(31); \draw[cube] (28)--(31); \draw[cube] (2)--(38); \draw[cube] (38)--(40); \draw[cube] (15)--(40); \draw[cube] (38)--(45); \draw[cube] (28)--(45); \draw[cube] (45)--(46); \draw[cube] (31)--(46); \draw[cube] (40)--(46); \draw[cube] (3)--(6); \draw[cube] (3)--(7); \draw[cube] (6)--(17); \draw[cube] (7)--(17); \draw[cube] (17)--(18); \draw[cube] (18)--(20); \draw[cube] (7)--(20); \draw[cube] (3)--(21); \draw[cube] 
(20)--(21); \draw[cube] (18)--(22); \draw[cube] (21)--(22); \draw[cube] (6)--(22); \draw[cube] (5)--(8); \draw[cube] (8)--(9); \draw[cube] (9)--(10); \draw[cube] (8)--(11); \draw[cube] (10)--(11); \draw[cube] (10)--(12); \draw[cube] (11)--(13); \draw[cube] (12)--(13); \draw[cube] (5)--(13); \draw[cube] (9)--(14); \draw[cube] (12)--(14); \draw[cube] (5)--(14); \draw[edge8] (27)--(28); \draw[edge8] (1)--(29); \draw[edge8] (27)--(29); \draw[edge8] (29)--(30); \draw[edge8] (28)--(31); \draw[edge8] (30)--(31); \draw[edge8] (30)--(32); \draw[edge8] (1)--(32); \draw[edge8] (27)--(33); \draw[edge8] (33)--(34); \draw[edge8] (1)--(34); \draw[edge8] (28)--(45); \draw[edge8] (45)--(46); \draw[edge8] (31)--(46); \draw[edge8] (32)--(47); \draw[edge8] (46)--(47); \draw[edge8] (45)--(48); \draw[edge8] (33)--(48); \draw[ForestGreen, thick, opacity = 0.7, dashed] (48)--(49); \draw[edge8] (47)--(49); \draw[edge8] (34)--(49); \foreach \x in {0,...,49}{ \node[vertex_normal] at (\x) {}; } \foreach \x in {48, 20, 15, 5}{ \node[vertex] at (\x) {}; } \end{tikzpicture} } \caption{The generalized associahedron of type $\dynkinfont{D}_4$ and facets corresponding to some negative simple roots $-\alpha_1$, $-\alpha_3$, and $-\alpha_4$.}\label{fig_asso_D4} \end{figure} In the remaining part of this section, we recall~\cite{FZ4_2007}, which studies the combinatorics of mutations in a more general setting. Let $\clusterfont{Q}$ be a bipartite quiver and let $I_+$ and $I_-$ be the bipartite decomposition of the vertex set of $\clusterfont{Q}$. Consider the composition $\mu_{\quiver} = \mu_+ \mu_-$ of a sequence of mutations where \[ \mu_{\varepsilon} = \prod_{i \in I_{\varepsilon}} \mu_i \qquad \text{ for } \varepsilon \in \{ +, -\}. \] We call $\mu_{\quiver}$ a \emph{Coxeter mutation} as before. We close this section by recalling the result~\cite[Theorem~8.8]{FZ4_2007} on the order of the Coxeter mutation on the cluster pattern. Recall from Proposition~\ref{prop_Y-pattern_exchange_graph} that for an exchange matrix $\clusterfont{\tilde{B}}_{t_0}$, if $\clusterfont{B}_{t_0}$ is skew-symmetric, then the exchange graph of a seed pattern $\{(\bfx_t,\clusterfont{\tilde{B}}_t)\}_{t\in \mathbb{T}_n}$ and that of a $Y$-pattern $\{(\bfy_t, \clusterfont{B}_{t})\}_{t\in\mathbb{T}_n}$ having algebraically independent variables $y_{1;t_0},\dots,y_{n;t_0}$ are the same. Accordingly, we obtain the following from~\cite[Theorem~8.8]{FZ4_2007}. \begin{lemma}[{cf. \cite[Theorem~8.8]{FZ4_2007}}]\label{lemma:order of coxeter mutation} Let $(\bfy_{t_0}, \clusterfont{B}_{t_0})$ be an initial $Y$-seed. Suppose that $\clusterfont{B}_{t_0} = \clusterfont{B}(\clusterfont{Q})$ for a bipartite quiver $\clusterfont{Q}$ and that $y_{1;t_0},\dots,y_{n;t_0}$ are algebraically independent. Then the set $\{ \mu_{\quiver}^r (\bfy_{t_0}, \clusterfont{B}_{t_0}) \}_{r \in \Z_{\geq 0}}$ of $Y$-seeds is finite if and only if $\clusterfont{B}_{t_0}$ is of finite type. Moreover, for such a quiver $\clusterfont{Q}$, the order of the $\mu_{\quiver}$-action is $(h+2)/2$ if $h$ is even, and $h+2$ otherwise, where $h$ is the corresponding Coxeter number. \end{lemma}
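To illustrate Lemma~\ref{lemma:order of coxeter mutation}, we record two small simply-laced cases, using only the standard values $h(\dynkinfont{A}_n)=n+1$ and $h(\dynkinfont{D}_n)=2n-2$ of the Coxeter numbers. \begin{example} For a bipartite quiver $\clusterfont{Q}$ of type $\dynkinfont{A}_2$ we have $h=3$, so the $\mu_{\quiver}$-action has order $h+2=5$; this recovers the pentagon periodicity of the $Y$-system of type $\dynkinfont{A}_2$. For $\clusterfont{Q}$ of type $\dynkinfont{D}_4$, whose generalized associahedron appears in Figure~\ref{fig_asso_D4}, we have $h=6$, so the order is $(h+2)/2=4$. \end{example}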
\subsection{\texorpdfstring{$N$}{N}-graphs and Legendrian weaves} \begin{definition}\cite[Definition~2.2]{CZ2020}\label{definition:N-graph} An $N$-graph $\ngraphfont{G}$ on a smooth surface $S$ is an $(N-1)$-tuple of graphs $(\ngraphfont{G}_1,\dots, \ngraphfont{G}_{N-1})$ satisfying the following conditions: \begin{enumerate} \item Each graph $\ngraphfont{G}_i$ is embedded, trivalent, possibly empty and not necessarily connected. \item Any consecutive pair of graphs $(\ngraphfont{G}_i,\ngraphfont{G}_{i+1})$, $1\leq i \leq N-2$, intersects only at hexagonal points depicted as in Figure~\ref{fig:hexagonal_point}. \item Any pair of graphs $(\ngraphfont{G}_i, \ngraphfont{G}_j)$ with $1\leq i,j\leq N-1$ and $|i-j|>1$ intersects transversely at edges. \end{enumerate} \end{definition} \begin{figure}[ht] \begin{tikzpicture} \begin{scope} \draw[dashed] (0,0) circle (1cm); \draw[red, thick] (60:1)--(0,0) (180:1)--(0,0) (-60:1)--(0,0); \draw[blue, thick] (0:1)--(0,0) (120:1)--(0,0) (240:1)--(0,0); \draw[thick,black,fill=white] (0,0) circle (0.05); \end{scope} \end{tikzpicture} \caption{A hexagonal point} \label{fig:hexagonal_point} \end{figure} Let $\pi_F:J^1S \cong T^*S\times\R\to S\times \R$ be the front projection; we call the image $\pi_F(\Lambda)$ of a Legendrian~$\Lambda\subset J^1S$ a \emph{wavefront}. Since $J^1S$ is equipped with the contact form $dz-p_x dx-p_y dy$, the coordinates~$(p_x,p_y)$ of the Legendrian $\Lambda$ are recovered from the $(x,y)$-slopes of the tangent plane $T_{(x,y,z)}\pi_F(\Lambda)$: \begin{align*} p_x&=\partial_x z(x,y),& p_y&=\partial_y z(x,y). \end{align*} To any $N$-graph $\ngraphfont{G}$ on a surface $S$ we associate a Legendrian surface $\Lambda(\ngraphfont{G})\subset J^1S$. Roughly speaking, we construct the Legendrian surface by weaving together wavefronts in $S \times \R$ constructed over local charts of $S$. Let $\ngraphfont{G}\subset S$ be an $N$-graph. A finite cover $\{U_i\}_{i\in I}$ of $S$ is called {\em $\ngraphfont{G}$-compatible} if \begin{enumerate} \item each $U_i$ is diffeomorphic to the open disk $\mathring{\mathbb{D}}^2$, \item $U_i \cap \ngraphfont{G}$ is connected, and \item $U_i \cap \ngraphfont{G}$ contains at most one vertex. \end{enumerate} For each $U_i$, we associate a wavefront $\Gamma(U_i)\subset U_i\times \R \subset S\times \R$. Note that there are only five types of nondegenerate local charts for any $N$-graph $\ngraphfont{G}$, as follows: \begin{enumerate}[Type 1] \item A chart without any graph component, whose corresponding wavefront becomes \[ \bigcup_{i=1,\dots,N}\mathring{\mathbb{D}}^2\times\{i\}\subset \mathring{\mathbb{D}}^2\times \R. \] \item A chart with a single edge. The corresponding wavefront is the union of the $\dynkinfont{A}_1^2$-germ along the two sheets $\mathring\mathbb{D}^2\times \{i\}$ and $\mathring\mathbb{D}^2\times\{i+1\}$, and trivial disks $\mathbb{D}^2\times\{j\}$, $j\in \{1,\dots,N\}\setminus\{i,i+1\}$. The local model of $\dynkinfont{A}_1^2$ comes from the origin of the singular surface \[ \Gamma(\dynkinfont{A}_1^2)=\{(x,y,z)\in \R^3 \mid x^2-z^2=0\}. \] See Figure~\ref{fig:A_1^2 germ}. \item A chart with two transversely intersecting edges. The wavefront consists of two $\dynkinfont{A}_1^2$-germs of $\mathring\mathbb{D}^2\times\{i,i+1\}$ and $\mathring\mathbb{D}^2\times\{j,j+1\}$, and trivial disks $\mathbb{D}^2\times\{k\}$, $k\in \{1,\dots,N\}\setminus\{i,i+1,j,j+1\}$.
\item A chart with a monochromatic trivalent vertex, whose wavefront is the union of the $\dynkinfont{D}_4^-$-germ, see \cite[\S2.4]{Arn1990}, and trivial disks $\mathbb{D}^2\times\{j\}$, $j\in \{1,\dots,N\}\setminus\{i,i+1\}$. The local model for the Legendrian singularity of type $\dynkinfont{D}_4^-$ is given by the image of a neighborhood of the origin under \begin{align*} \delta_4^-:\R^2\to \R^3:(x,y)\mapsto \left( x^2-y^2, 2xy, \frac{2}{3}(x^3-3xy^2) \right). \end{align*} See Figure~\ref{fig:D_4^- germ}. \item A chart with a bichromatic hexagonal point. The induced wavefront is the union of the $\dynkinfont{A}_1^3$-germ along the three sheets $\mathring\mathbb{D}^2\times \{*\}$, $*=i,i+1,i+2$, and the trivial disks $\mathbb{D}^2\times\{j\}$, $j\in \{1,\dots,N\}\setminus\{i,i+1,i+2\}$. The local model of $\dynkinfont{A}_1^3$ is given by the origin of the singular surface \[ \{(x,y,z)\in \R^3 \mid (x^2-z^2)(y-z)=0\}. \] See Figure~\ref{fig:A_1^3 germ}. \end{enumerate} \begin{figure}[ht] \subfigure[The germ of $A_1^2$\label{fig:A_1^2 germ}]{\makebox[0.3\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,scale=1] \begin{scope} \draw[blue, thick] (-1,0)--(1,0) node[black,above, midway] {$\dynkinfont{A}_1^2$}; \draw[thick] (-3/4,1)--(5/4,1); \draw[thick] (5/4,1)--(3/4,-1); \draw[thick] (3/4,-1)--(-5/4,-1); \draw[thick] (-5/4,-1)--(-3/4,1); \draw[thick] (-5/4,3/4)--(3/4,3/4); \draw[thick] (3/4,3/4)--(5/4,-3/4); \draw[thick] (5/4,-3/4)--(-3/4,-3/4); \draw[thick] (-3/4,-3/4)--(-5/4,3/4); \end{scope} \end{tikzpicture} $}} \subfigure[The germ of $A_1^3$\label{fig:A_1^3 germ}]{\makebox[0.3\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,scale=1] \begin{scope} \draw[red, thick] (0,0)--(1,0); \draw[red, thick] (-5/4,3/4)--(0,0); \draw[red, thick] (-3/4,1)--(0,0); \draw[blue, thick] (-1,0)--(0,0); \draw[blue, thick] (0,0)--(5/4,-3/4); \draw[blue, thick] (0,0)--(3/4,-1); \draw[thick] (-3/4,1)--(5/4,1); \draw[thick] (5/4,1)--(3/4,-1); \draw[thick] (3/4,-1)--(-5/4,-1); \draw[thick] (-5/4,-1)--(-3/4,1); \draw[thick,black,fill=white] (0,0) circle (0.05) node[above right] {$\dynkinfont{A}_1^3$}; \draw[thick] (-5/4,3/4)--(3/4,3/4); \draw[thick] (3/4,3/4)--(5/4,-3/4); \draw[thick] (5/4,-3/4)--(-3/4,-3/4); \draw[thick] (-3/4,-3/4)--(-5/4,3/4); \draw[thick] (-5/4,3/4)--(-3/4,1); \draw[thick] (-3/4,1)--(5/4,-3/4); \draw[thick] (5/4,-3/4)--(3/4,-1); \draw[thick] (3/4,-1)--(-5/4,3/4); \end{scope} \end{tikzpicture} $}} \subfigure[The germ of $D_4^-$\label{fig:D_4^- germ}]{\makebox[0.3\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,scale=1] \begin{scope} \draw[blue, thick, fill] (-1.4,0)--(0,0) circle (1.5pt); \draw[blue, thick] (0,0)--(1,1); \draw[blue, thick] (0,0)--(1/2,-1); \node[above] at (0,0) {$\dynkinfont{D}_4^-$}; \draw[thick] (-1.4,0) to[out=35,in=190] (1,1); \draw[dotted, thick] (-1.4,0) to[out=10,in=215] (1,1); \draw[thick] (-1.4,0) to[out=-35,in=170] (1/2,-1); \draw[thick] (-1.4,0) to[out=-10,in=145] (1/2,-1); \draw[dotted,thick] (1/2,-1) to[out=90,in=240] (1,1); \draw[thick] (1/2,-1) to[out=70,in=260] (1,1); \end{scope} \end{tikzpicture} $}} \caption{Three types of wavefronts of Legendrian singularities.} \label{fig:legendrian_singularities} \end{figure} \begin{figure}[ht] \begin{tikzpicture} \begin{scope} \draw[thick] \boundellipse{0,1}{1.25}{0.25}; \draw[blue, thick] (-1,0)--(1,0); \draw[thick] (-3/4,1/3)--(5/4,1/3); \draw[thick] (5/4,1/3)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,-1/3); \draw[thick] (-5/4,-1/3)--(-3/4,1/3); \draw[thick] (-5/4,3/12)--(3/4,3/12); \draw[thick] (3/4,3/12)--(5/4,-3/12); \draw[thick]
(5/4,-3/12)--(-3/4,-3/12); \draw[thick] (-3/4,-3/12)--(-5/4,3/12); \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[thick] \boundellipse{0,-2}{1.25}{0.25}; \draw[blue, thick] (-5/4,-2)--(5/4,-2); \draw[thick, ->] (0,-1.35)--(0,-1.9); \node at (1.5,1){\tiny $N$}; \node at (0,2/3){\tiny $\vdots$}; \node at (1.5,1/3){\tiny $i+1$}; \node at (1.5,-1/3){\tiny $i$}; \node at (0,-0.45){\tiny $\vdots$}; \node at (1.5,-1){\tiny $1$}; \end{scope} \begin{scope}[xshift=3.5cm] \begin{scope}[yshift=0.6cm] \draw[thick] (-5/4,-1/9)--(-1/4,1/3); \draw[thick] (1/4,-1/9)--(5/4,1/3); \draw[thick] (-5/4,-1/3)--(-1/4,1/9); \draw[thick] (1/4,-1/3)--(5/4,1/9); \draw[thick] (1/4,-1/3)--(-5/4,-1/9); \draw[thick] (-5/4,-1/3)--(1/4,-1/9); \draw[thick] (-1/4,1/9)--(5/4,1/3); \draw[thick] (5/4,1/9)--(-1/4,1/3); \draw[thick,yellow] (-1/2,-2/9)--(1/2,2/9); \end{scope} \begin{scope}[yshift=-0.75cm] \draw[blue, thick] (-1,0)--(1,0); \draw[thick] (-3/4,1/3)--(5/4,1/3); \draw[thick] (5/4,1/3)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,-1/3); \draw[thick] (-5/4,-1/3)--(-3/4,1/3); \draw[thick] (-5/4,3/12)--(3/4,3/12); \draw[thick] (3/4,3/12)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(-3/4,-3/12); \draw[thick] (-3/4,-3/12)--(-5/4,3/12); \end{scope} \draw[thick] \boundellipse{0,-2}{1.25}{0.25}; \draw[blue, thick] (-5/4,-2)--(5/4,-2); \draw[yellow, thick] (-0.6,-2-2/9)--(0.6,-2+2/9); \draw[thick, ->] (0,-1.35)--(0,-1.9); \node at (1.5,1){\tiny $j+1$}; \node at (1.5,1/3){\tiny $j$}; \node at (0,0){\tiny $\vdots$}; \node at (1.5,-1/3){\tiny $i+1$}; \node at (1.5,-1){\tiny $i$}; \end{scope} \begin{scope}[xshift=7cm] \draw[thick] \boundellipse{0,1}{1.25}{0.25}; \draw[blue, thick] (-1.25,0)--(0,0); \draw[blue, thick] (0,0)--(1.25,1/3); \draw[blue, thick] (0,0)--(1/2,-1/3); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \draw[thick] (-1.25,0) to[out=25,in=175] (1.25,1/3); \draw[dotted, thick] (-1.25,0) to[out=10,in=190] (1.25,1/3); \draw[thick] (-1.25,0) to[out=-25,in=180] (1/2,-1/3); \draw[thick] (-1.25,0) to[out=-10,in=160] (1/2,-1/3); \draw[dotted,thick] (1/2,-1/3) to[out=80,in=220] (1.25,1/3); \draw[thick] (1/2,-1/3) to[out=30,in=250] (1.25,1/3); \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[thick] \boundellipse{0,-2}{1.25}{0.25}; \draw[blue, thick] (-5/4,-2)--(0,-2); \draw[blue, thick] (0,-2)--(0.90,-1.825); \draw[blue, thick] (0,-2)--(1/2,-2.225); \draw[thick,blue,fill=blue] (0,-2) circle (0.05); \draw[thick, ->] (0,-1.35)--(0,-1.9); \node at (1.5,1){\tiny $N$}; \node at (0,2/3){\tiny $\vdots$}; \node at (1.5,1/3){\tiny $i+1$}; \node at (1.5,-1/3){\tiny $i$}; \node at (0,-0.45){\tiny $\vdots$}; \node at (1.5,-1){\tiny $1$}; \end{scope} \begin{scope}[xshift=10.5cm] \draw[thick] \boundellipse{0,1}{1.25}{0.25}; \draw[red, thick] (0,0)--(1,0); \draw[red, thick] (-5/4,3/12)--(0,0); \draw[red, thick] (-3/4,1/3)--(0,0); \draw[blue, thick] (-1,0)--(0,0); \draw[blue, thick] (0,0)--(5/4,-3/12); \draw[blue, thick] (0,0)--(3/4,-1/3); \draw[thick] (-3/4,1/3)--(5/4,1/3); \draw[thick] (5/4,1/3)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,-1/3); \draw[thick] (-5/4,-1/3)--(-3/4,1/3); \draw[thick,black,fill=white] (0,0) circle (0.05); \draw[thick] (-5/4,3/12)--(3/4,3/12); \draw[thick] (3/4,3/12)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(-3/4,-3/12); \draw[thick] (-3/4,-3/12)--(-5/4,3/12); \draw[thick] (-5/4,3/12)--(-3/4,1/3); \draw[thick] (-3/4,1/3)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,3/12); \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[thick] \boundellipse{0,-2}{1.25}{0.25}; \draw[blue, 
thick] (-5/4,-2)--(0,-2); \draw[blue, thick] (0,-2)--(0.90,-1.825); \draw[blue, thick] (0,-2)--(1/2,-2.225); \draw[red, thick] (5/4,-2)--(0,-2); \draw[red, thick] (0,-2)--(-0.90,-2.175); \draw[red, thick] (0,-2)--(-1/2,-1.775); \draw[thick,black,fill=white] (0,-2) circle (0.05); \draw[thick, ->] (0,-1.35)--(0,-1.9); \node at (1.5,1){\tiny $N$}; \node at (0,2/3){\tiny $\vdots$}; \node at (1.5,1/3){\tiny $i+2$}; \node at (1.5,0){\tiny $i+1$}; \node at (1.5,-1/3){\tiny $i$}; \node at (0,-0.45){\tiny $\vdots$}; \node at (1.5,-1){\tiny $1$}; \end{scope} \end{tikzpicture} \caption{Local charts for $N$-graphs of Types 2, 3, 4, and 5.} \label{fig:local_chart_3-graphs} \end{figure} \begin{definition}\cite[Definition~2.7]{CZ2020} Let $\ngraphfont{G}$ be an $N$-graph on a surface $S$. The {\em Legendrian weave}~$\Lambda(\ngraphfont{G})\subset J^1 S$ is an embedded Legendrian surface whose wavefront $\Gamma(\ngraphfont{G})\subset S\times \R$ is constructed by weaving the wavefronts $\{\Gamma(U_i)\}_{i\in I}$ from a $\ngraphfont{G}$-compatible cover $\{U_i\}_{i\in I}$ with respect to the gluing data given by $\ngraphfont{G}$. \end{definition} \begin{remark} Note that $\Lambda(\ngraphfont{G})$ is well-defined up to the choice of cover and up to planar isotopies. Let $\{\varphi_t\}_{t\in[0,1]}$ be a compactly supported isotopy of $S$. Then this induces a Legendrian isotopy of the Legendrian surfaces $\Lambda(\varphi_t(\ngraphfont{G}))\subset J^1 S$ relative to the boundary. \end{remark} We also list certain degenerate local models of $N$-graphs as follows: \begin{enumerate}[Type~D1] \item A chart with a double edge, whose wavefront consists of two $\dynkinfont{A}_1^2$-germs of $\mathring\mathbb{D}^2\times\{i,i+1\}$ and $\mathring\mathbb{D}^2\times\{j,j+1\}$ for $|i-j|>1$, and trivial disks $\mathbb{D}^2\times\{k\}$, $k\in \{1,\dots,N\}\setminus\{i,i+1,j,j+1\}$. See Figure~\ref{fig:degenerate type1}. \item A chart with a trichromatic graph $(\ngraphfont{G}_{i-1},\ngraphfont{G}_i,\ngraphfont{G}_{i+1})$ satisfying \label{degenerate_type2} \begin{itemize} \item each graph has a unique four-valent vertex, \item $\ngraphfont{G}_{i-1}$ and $\ngraphfont{G}_{i+1}$ are identical, and \item $\ngraphfont{G}_i$ and $\ngraphfont{G}_{i+1}$ intersect at the eight-valent vertex in an alternating way; see the middle one in Figure~\ref{fig:degenerate type2}. \end{itemize} \end{enumerate}
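\begin{remark} In a chart of Type~D1, the condition $|i-j|>1$ means that the two $\dynkinfont{A}_1^2$-germs occur on disjoint pairs of sheets. On the level of braid words this is the far-commutation relation $\sigma_i\sigma_j=\sigma_j\sigma_i$, which is why the double edge can be perturbed into two transversely intersecting single edges as in Figure~\ref{fig:degenerate type1}, with the order of the two crossings immaterial. \end{remark}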
For $i=2$, the wavefront corresponding to a chart of \ref{degenerate_type2} inside $\mathbb{D}^2\times \R$ consists of four disks $(\mathbb{D}_1,\dots,\mathbb{D}_4)$, which form the cone $C(\lambda)=\lambda\times[0,1]/\lambda\times\{0\}$ over the following Legendrian front $\lambda$ in $\mathbb{S}^1\times\R$: \[ (\sigma_{1,3}\sigma_2)^4=\vcenter{\hbox{ \begin{tikzpicture}[scale=0.8] \begin{scope} \foreach \x in {0,2,4,6} { \draw[thick] (\x,0) to[out=0,in=180] (\x+1,0.5); \draw[thick] (\x,0.5) to[out=0,in=180] (\x+1,0); } \draw[thick] (1,0) -- (2,0) (3,0)--(4,0) (5,0)--(6,0) (7,0)--(8,-0); \end{scope} \begin{scope}[xshift=1cm,yshift=0.5cm] \foreach \x in {0,2,4,6} { \draw[thick] (\x,0) to[out=0,in=180] (\x+1,0.5); \draw[thick] (\x,0.5) to[out=0,in=180] (\x+1,0); } \end{scope} \begin{scope}[yshift=1cm] \foreach \x in {0,2,4,6} { \draw[thick] (\x,0) to[out=0,in=180] (\x+1,0.5); \draw[thick] (\x,0.5) to[out=0,in=180] (\x+1,0); } \draw[thick] (1,0.5) -- (2,0.5) (3,0.5)--(4,0.5) (5,0.5)--(6,0.5) (7,0.5)--(8,0.5); \end{scope} \draw[red, dashed] (0,-0.25)--(0,1.75) (8,-0.25)--(8,1.75); \end{tikzpicture} }}, \] where $\sigma_{1,3}$ is a $4$-braid isotopic to $\sigma_1\sigma_3$ (or equivalently, $\sigma_3\sigma_1$) such that the two crossings $\sigma_1$ and $\sigma_3$ occur simultaneously. \begin{figure}[ht] \subfigure[Type 1\label{fig:degenerate type1}]{ $\begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (-3,0) -- (0,0); \draw[Dble={green and blue},line width=2] (3,0) -- (0,0); \end{tikzpicture} \stackrel{\text{perturb.}}{\longrightarrow} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[thick, blue] (15:3) -- (15:-3); \draw[thick, green] (-15:3) -- (-15:-3); \end{tikzpicture} \quad \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (-120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \end{tikzpicture} \stackrel{\text{perturb.}}{\longrightarrow} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[thick, green, fill] (110:3) -- (60:1) circle (2pt) (-130:3) -- (60:1) (10:3) -- (60:1); \draw[thick, blue, fill] (130:3) -- (-60:1) circle (2pt) (-110:3) -- (-60:1) (-10:3) -- (-60:1); \end{tikzpicture}$ } \subfigure[Type 2\label{fig:degenerate type2}]{ $ \begin{tikzpicture}[baseline=-.5ex,scale=1.5] \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture}
\stackrel{\text{perturb.}}{\longleftarrow} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \begin{scope} \draw[Dble={blue and green},line width=2] (0,0) -- (-45:3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (135:3); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \end{scope} \end{tikzpicture} = \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \begin{scope} \draw[Dble={blue and green},line width=2] (-45:1.5) -- (0,0); \draw[Dble={green and blue},line width=2] (45:1.5) -- (0,0); \draw[Dble={blue and green},line width=2] (135:1.5) -- (0,0); \draw[Dble={green and blue},line width=2] (-135:1.5) -- (0,0); \draw[Dble={blue and green},line width=2] (-45:1.5) -- (-45:3); \draw[Dble={green and blue},line width=2] (45:1.5) -- (45:3); \draw[Dble={blue and green},line width=2] (135:1.5) -- (135:3); \draw[Dble={green and blue},line width=2] (-135:1.5) -- (-135:3); \end{scope} \end{tikzpicture} \stackrel{\text{perturb.}}{\longrightarrow} \begin{tikzpicture}[baseline=-.5ex,scale=1.5] \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-sqrt(3)/2},1/2)--({+sqrt(3)/2},1/2); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--({+sqrt(3)/2},-1/2); \draw [blue, thick] (0,1/2)--(0,-1/2); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (-1/2,{-sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2}) to (1/2,{-sqrt(3)/2}); \draw [green, thick] (-1/2,0) to (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture} $} \caption{Local models for degenerate $N$-graphs and their perturbations} \label{figure:perturbation of degenerated Ngraphs} \end{figure} \begin{remark} The cone point neighborhood of the wavefront for the degenerate $N$-graph of~\ref{degenerate_type2} is diffeomorphic to the union of the four planes $\{z=x\}$, $\{z=-x\}$, $\{z=y\}$, and $\{z=-y\}$ in $\R^3$; see Figure~\ref{fig:degenerated N-graph}. Moreover, if we regard each $\ngraphfont{G}_i$ as a union of two transversely intersecting edges, then we have six edges in the local chart of the $N$-graph $(\ngraphfont{G}_1,\ngraphfont{G}_2,\ngraphfont{G}_3)$. On the other hand, each pair among the four disks in the wavefront forms an $\dynkinfont{A}_1^2$-germ, and these pairs correspond to the six (degenerate) edges. \end{remark}
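In terms of the slope formulas $p_x=\partial_x z$ and $p_y=\partial_y z$, one can check directly that this singular front lifts to an embedded Legendrian: the four planes have constant slopes $(p_x,p_y)=(\pm 1,0)$ and $(0,\pm 1)$, respectively, so their Legendrian lifts in $J^1\R^2$ are pairwise disjoint even though the planes intersect one another along lines through the origin in the front.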
\begin{figure}[ht] \begin{tikzpicture} \begin{axis}[domain=-1:1,y domain=-1:1] \draw[thick, blue] (0,0,0)--(-1,-1,-1); \addplot3[samples = 10,samples y = 10,mesh,draw = gray, thick] {x}; \addplot3[samples = 10,samples y = 10,mesh,draw = gray, thick] {-x}; \addplot3[samples = 10,samples y = 10,mesh,draw = gray, thick] {y}; \addplot3[samples = 10,samples y = 10,mesh,draw = gray, thick] {-y}; \addplot3 [domain=-1:0, blue, thick] (x, x, x); \addplot3 [domain=-1:0, blue, thick] (x, -x, x); \addplot3 [domain=0:1, blue, thick] (x, x, -x); \addplot3 [domain=0:1, blue, thick] (x, -x, -x); \addplot3 [domain=-1:1, red, thick] (x, 0, 0); \addplot3 [y domain=-1:1, red, thick] (0, y, 0); \addplot3 [domain=-1:0, green, thick] (x, x, -x); \addplot3 [domain=-1:0, green, thick] (x, -x, -x); \addplot3 [domain=0:1, green, thick] (x, x, x); \addplot3 [domain=0:1, green, thick] (x, -x, x); \end{axis} \end{tikzpicture} \caption{A wavefront for the degenerate $N$-graph.} \label{fig:degenerated N-graph} \end{figure} We obtain (regular) $N$-graphs from degenerate $N$-graphs via (generic) perturbation of the wavefront as depicted in Figure~\ref{figure:perturbation of degenerated Ngraphs}. The notion of an $N$-graph is useful in the study of Legendrian surfaces, because Legendrian isotopies of the Legendrian weave $\Lambda(\ngraphfont{G})$ can be encoded by combinatorial moves of $N$-graphs. \begin{theorem}\cite[Theorem~1.1]{CZ2020}\label{thm:N-graph moves and legendrian isotopy} Let $\ngraphfont{G}$ be a non-degenerate local $N$-graph. The combinatorial moves $\Move{I}\sim \Move{IV'}$ in Figure~\ref{fig:move1-6} induce Legendrian isotopies of $\Lambda(\ngraphfont{G})$. \end{theorem} We denote the equivalence class of an $N$-graph $\ngraphfont{G}$ up to the moves $\Move{I}\sim \Move{IV'}$ in Figure~\ref{fig:move1-6} by~$[\ngraphfont{G}]$. Let us also list the combinatorial moves \Move{DI} and \Move{DII} for Legendrian isotopies involving degenerate $N$-graphs as depicted in Figure~\ref{fig:move1-6}. \begin{corollary} Let $\ngraphfont{G}$ be a local degenerate $N$-graph. The combinatorial moves \Move{DI} and \Move{DII} in Figure~\ref{fig:move1-6} induce Legendrian isotopies of $\Lambda(\ngraphfont{G})$. \end{corollary} \begin{proof} It is straightforward to check that the moves \Move{DI} and \Move{DII} for degenerate $N$-graphs can be obtained by composing the perturbations in Figure~\ref{figure:perturbation of degenerated Ngraphs} with the moves in Figure~\ref{fig:move1-6}. See Appendix~\ref{appendix:DI and DII}.
\end{proof} \begin{figure}[ht] \begin{tikzpicture} \begin{scope \draw [dashed] (0,0) circle [radius=1] \draw [dashed] (3,0) circle [radius=1] \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{I}}; \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [red, thick] (2,0)--(4,0); \end{scope} \begin{scope}[xshift=7cm \draw [dashed] (0,0) circle [radius=1] \draw [dashed] (3,0) circle [radius=1] \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{II}}; \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{scope} \begin{scope}[yshift=-2.5cm \draw [dashed] (0,0) circle [radius=1]; \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{III}}; \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,red,fill=red] (0,1/2) circle (0.05); \draw[thick,red,fill=red] (0,-1/2) circle (0.05); \end{scope} \begin{scope}[yshift=-2.5cm, xshift=3cm \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [blue, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [blue, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw[thick,red,fill=red] (-1/2,0) circle (0.05); \draw[thick,red,fill=red] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); 
\draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{scope} \begin{scope}[xshift=7cm, yshift=-2.5cm \draw [dashed] (0,0) circle [radius=1]; \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{IV}}; \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{scope} \begin{scope}[xshift=10cm, yshift=-2.5cm \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-sqrt(3)/2},1/2)--({+sqrt(3)/2},1/2); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--({+sqrt(3)/2},-1/2); \draw [blue, thick] (0,1/2)--(0,-1/2); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (-1/2,{-sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2}) to (1/2,{-sqrt(3)/2}); \draw [green, thick] (-1/2,0) to (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{scope} \begin{scope}[xshift=0cm, yshift=-5cm \draw [dashed] (0,0) circle [radius=1]; \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{V}}; \draw [blue, thick] ({-sqrt(3)/2},1/2)to[out=0,in=180](0,-1/2); \draw [blue, thick] ({sqrt(3)/2},1/2)to[out=180,in=0](0,-1/2); \draw [yellow, thick] ({-sqrt(3)/2},-1/2)to[out=0,in=180](0,1/2); \draw [yellow, thick] ({sqrt(3)/2},-1/2)to[out=180,in=0](0,1/2); \end{scope} \begin{scope}[xshift=3cm, yshift=-5cm \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-sqrt(3)/2},1/2) to ({sqrt(3)/2},1/2); \draw [yellow, thick] ({-sqrt(3)/2},-1/2) to ({sqrt(3)/2},-1/2); \end{scope} \begin{scope}[xshift=7cm, yshift=-5cm \draw [dashed] (0,0) circle [radius=1]; \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{VI}}; \draw [blue, thick] ({-sqrt(3)/2},1/2) to (0,0) to(1,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2) to (0,0); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \draw[yellow, thick] (0,1) to[out=-135,in=135] (0,-1); \end{scope} \begin{scope}[xshift=10cm, yshift=-5cm \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-sqrt(3)/2},1/2) to (0,0) to(1,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2) to (0,0); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \draw[yellow, thick] (0,1) to[out=-45,in=45] (0,-1); \end{scope} \begin{scope}[xshift=0cm, yshift=-7.5cm \draw [dashed] (0,0) circle [radius=1]; \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{VI'}}; \draw [blue, thick] ({-1/2},{sqrt(3)/2}) to (0,0) to(1,0); \draw [blue, thick] ({-1/2},{-sqrt(3)/2}) to (0,0); \draw [red, thick] (-1,0) to (0,0) to(1/2,{sqrt(3)/2}); \draw [red, thick] (0,0) to(1/2,{-sqrt(3)/2}); 
\draw[thick,black,fill=white] (0,0) circle (0.05); \draw[yellow, thick] (0,1) to[out=-135,in=135] (0,-1); \end{scope} \begin{scope}[xshift=3cm, yshift=-7.5cm \draw [dashed] (0,0) circle [radius=1]; \draw [blue, thick] ({-1/2},{sqrt(3)/2}) to (0,0) to(1,0); \draw [blue, thick] ({-1/2},{-sqrt(3)/2}) to (0,0); \draw [red, thick] (-1,0) to (0,0) to(1/2,{sqrt(3)/2}); \draw [red, thick] (0,0) to(1/2,{-sqrt(3)/2}); \draw[thick,black,fill=white] (0,0) circle (0.05); \draw[yellow, thick] (0,1) to[out=-45,in=45] (0,-1); \end{scope} \begin{scope}[xshift=0cm, yshift=-10cm] \draw [dashed] (0,0) circle [radius=1] \draw[red, thick] (160:1)--(-0.5,0) -- (1,0) (200:1)--(-0.5,0) (0,1)--(0,-1); \draw[thick,red] (-1/2,0) circle (1pt); \draw[Dble={blue and green},line width=2] (0,0) -- (-45:1); \draw[Dble={green and blue},line width=2] (0,0) -- (45:1); \draw[Dble={blue and green},line width=2] (0,0) -- (135:1); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:1); \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{DI}}; \end{scope} \begin{scope}[xshift=3cm, yshift=-10cm] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[rounded corners,thick, red](0,1)--(0,-1) (150:1)--++(5/4,0)--(3/4,0) (210:1)--++(5/4,0)--(3/4,0) (3/4,0)--(1,0); \draw[Dble={green and blue},line width=2] (120:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (60:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (-120:1) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (-60:1) -- (0,-0.5); \draw[blue,line width=2] (-0.05,0.5) to[out=-135,in=135] (-0.05,-0.5); \draw[green,line width=2] (0,0.45) to[out=-135,in=135] (0,-0.45); \draw[blue,line width=2] (0.05,0.5) to[out=-45,in=45] (0.05,-0.5); \draw[green,line width=2] (0,0.45) to[out=-45,in=45] (0,-0.45); \draw[thick,red,fill=red] (3/4,0) circle (1pt); \end{scope} \begin{scope}[xshift=7cm, yshift=-10cm \draw [<->] (1.25,0) -- (1.75,0) node[midway, above] {\Move{DII}}; \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[thick, red](135:1)--(-45:1) (45:1)--(-135:1); \draw[Dble={blue and green},line width=2] (0,0) -- (-90:1); \draw[Dble={blue and green},line width=2] (0,0) -- (90:1); \draw[Dble={green and blue},line width=2] (0,0) -- (1,0); \draw[Dble={green and blue},line width=2] (0,0) -- (-0.5,0); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (155:1.2); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (205:1.2); \end{scope} \begin{scope}[xshift=10cm, yshift=-10cm] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[thick, red](120:1)--(0,0.5) to[out=-45,in=45] (0,-0.5) --(-120:1) (60:1)--(0,0.5) to[out=-135,in=135] (0,-0.5) --(-60:1); \draw[Dble={green and blue},line width=2] (0,0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,0.5) -- (0.3,0.5); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,-0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,-0.5) -- (0.3,-0.5); \draw[Dble={green and blue},line width=2] (0,-1) -- (0,-0.5); \draw[blue,line width=2] (0.3,-0.535) to[out=0,in=-135] (0.7,-0.035) (0.3,0.465) to[out=0,in=135] (0.7,-0.035) -- (1,-0.035); \draw[green,line width=2] (0.3,-0.465) to[out=0,in=-135] (0.7,0.035) (0.3,0.535) to[out=0,in=135] (0.7,0.035) -- (1,0.035); \end{scope} \end{tikzpicture} \caption{Combinatorial moves for Legendrian isotopies of surface 
$\Lambda(\ngraphfont{G})$. Here the pairs ({\color{blue} blue}, {\color{red} red}) and ({\color{red} red}, {\color{green} green}) are consecutive. Other pairs are not.} \label{fig:move1-6} \end{figure} \begin{definition}\label{definiton:freeness} An $N$-graph $\ngraphfont{G}$ on $S$ is called {\em free} if the induced Legendrian weave $\Lambda(\ngraphfont{G})\subset J^1S$ can be woven without interior Reeb chords. \end{definition} \begin{example}\cite[Example 7.3]{CZ2020}\label{ex:free N-graph} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be a $2$-graph such that $\mathbb{D}^2\setminus \ngraphfont{G}$ is simply connected relative to the boundary $\partial\mathbb{D}^2\cap(\mathbb{D}^2\setminus \ngraphfont{G})$. Then $\ngraphfont{G}$ is free if and only if $\ngraphfont{G}$ has no faces contained in~$\mathring\mathbb{D}^2$. Note that each such face admits at least one Reeb chord; see Figure~\ref{fig:N-graphs with Reeb chords}. \end{example} \begin{figure}[ht] \begin{tikzcd} \begin{tikzpicture} \draw [dashed] (0,0) circle [radius=1.5]; \draw [thick,blue] (-1.5,0)-- (-1,0) to[out=90,in=90] (1,0)--(1.5,0) (-1,0) to[out=-90,in=-90] (1,0); \draw [thick, blue, fill] (-1,0) circle (1.5pt) (1,0) circle (1.5pt); \end{tikzpicture} & \begin{tikzpicture} \draw [dashed] (0,0) circle [radius=1.5]; \draw [thick,blue] (90:1.5) -- (90:1) (210:1.5) -- (210:1) (-30:1.5) -- (-30:1) (90:1) -- (210:1) -- (-30:1)-- (90:1); \draw [thick, blue, fill] (90:1) circle (1.5pt) (210:1) circle (1.5pt) (-30:1) circle (1.5pt); \end{tikzpicture} \end{tikzcd} \caption{$N$-graphs with Reeb chords} \label{fig:N-graphs with Reeb chords} \end{figure} In particular, we have the following lemma, whose proof is omitted. \begin{lemma}\label{lemma:tree Ngraphs are free} Let $\ngraphfont{G}=(\ngraphfont{G}_1,\dots,\ngraphfont{G}_{N-1})$ be an $N$-graph on $\mathbb{D}^2$. Suppose that each $\ngraphfont{G}_i$ is a tree or empty. Then $\ngraphfont{G}$ is free. \end{lemma} \begin{comment} \begin{proof} Recall from Definition~\ref{def:free} that a $3$-graph $\ngraphfont{G}(a,b,c)$ is free if the Legendrian weave $\Lambda(\ngraphfont{G}(a,b,c))$ can be woven without Reeb chords. To investigate the Reeb chords of $\Lambda(\ngraphfont{G}(a,b,c))$ in $J^1\mathbb{D}^2$, let us consider the wavefront $\Gamma(\ngraphfont{G}(a,b,c))$ in $\mathbb{D}^2\times \R$. Label the sheets of the wavefront \begin{equation}\label{equation:wavefront decomposition} \Gamma(\ngraphfont{G}(a,b,c))=\bigcup_{i=1}^{3}\Gamma_i \end{equation} by the $z$-coordinate from bottom to top. Let $f_i:\mathbb{D}^2\to \R$ be a function whose graph becomes $\Gamma_i$, and let $h_{ij}:\mathbb{D}^2\to \R$ be the difference function given by $f_j-f_i$ for any $1\le i<j\le 3$. By construction, $h_{i\, i+1}^{-1}(0)$ gives the subgraph $\ngraphfont{G}_i(a,b,c)$ of $\ngraphfont{G}(a,b,c)=(\ngraphfont{G}_1, \ngraphfont{G}_2)$. The critical points of $h_{ij}$ on $\mathring{\mathbb{D}}^2\setminus \ngraphfont{G}(a,b,c)$ are the possible candidates for the Reeb chords. In other words, to guarantee that $\ngraphfont{G}(a,b,c)$ is free, it suffices to show that $h_{ij}$ has no critical point on $\mathring{\mathbb{D}}^2\setminus \ngraphfont{G}(a,b,c)$. Let us analyze local configurations of the gradients $\nabla h_{12}$, $\nabla h_{23}$. \begin{enumerate} \item Near the edge of $N$-graphs, the following are examples of local configurations of $\nabla h_{12}$, $\nabla h_{23}$, depending on the slopes of the Legendrian sheets.
\[ \begin{tikzpicture} \begin{scope} \draw[thick] \boundellipse{0,1}{1.25}{0.25}; \draw[blue, thick] (-1,0)--(1,0); \draw[thick] (-3/4,1/3)--(5/4,1/3); \draw[thick] (5/4,1/3)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,-1/3); \draw[thick] (-5/4,-1/3)--(-3/4,1/3); \draw[thick] (-5/4,3/12)--(3/4,3/12); \draw[thick] (3/4,3/12)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(-3/4,-3/12); \draw[thick] (-3/4,-3/12)--(-5/4,3/12); \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[blue, thick] (-5/4,-1)--(5/4,-1); \draw[thick, ->] (0,-0.35)--(0,-0.9); \node at (-1.5,1){\tiny $3$}; \node at (-1.5,1/3){\tiny $2$}; \node at (-1.5,-1/3){\tiny $1$}; \end{scope} \begin{scope}[xshift=10.5cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[blue, thick, dashed, opacity=0.5] (-1,0)--(1,0); \node at (0,-1.5) {$\nabla h_{23}$}; \draw[postaction={decorate}] (30:1) -- (0.5,0); \draw[postaction={decorate}] (60:1) -- (-0.1,0); \draw[postaction={decorate}] (90:1) -- (-0.6,0); \draw[postaction={decorate}] (120:1) -- (-1,0); \draw[postaction={decorate}] (60:1) -- (-0.1,0); \draw[postaction={decorate}] (-30:1) -- (0.5,0); \draw[postaction={decorate}] (-60:1) -- (-0.1,0); \draw[postaction={decorate}] (-90:1) -- (-0.6,0); \draw[postaction={decorate}] (-120:1) -- (-1,0); \end{scope} \begin{scope}[xshift=7cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[blue, thick, dashed, opacity=0.5] (-1,0)--(1,0); \node at (0,-1.5) {$\nabla h_{23}$}; \draw[postaction={decorate}] (30:1) -- (0.5,0); \draw[postaction={decorate}] (60:1) -- (-0.1,0); \draw[postaction={decorate}] (90:1) -- (-0.6,0); \draw[postaction={decorate}] (120:1) -- (-1,0); \draw[postaction={decorate}] (60:1) -- (-0.1,0); \draw[postaction={decorate}] (0.5,0) -- (-150:1); \draw[postaction={decorate}] (-0.1,0) -- (-160:1); \draw[postaction={decorate}] (-0.6,0) -- (-170:1); \draw[postaction={decorate}] (1,0) -- (-140:1); \draw[postaction={decorate}] (-20:1) -- (-120:1); \end{scope} \begin{scope}[xshift=3.5cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[blue, thick] (-1,0)--(1,0); \node at (0,-1.5) {$\nabla h_{12}$}; \draw[postaction={decorate}] (1.732*0.5,0) -- (30:1); \draw[postaction={decorate}] (0.5,0) -- (60:1); \draw[postaction={decorate}] (0,0) -- (90:1); \draw[postaction={decorate}] (-0.5,0) -- (120:1); \draw[postaction={decorate}] (-1.732*0.5,0) -- (150:1); \draw[postaction={decorate}] (1.732*0.5,0) -- (-30:1); \draw[postaction={decorate}] (0.5,0) -- (-60:1); \draw[postaction={decorate}] (0,0) -- (-90:1); \draw[postaction={decorate}] (-0.5,0) -- (-120:1); \draw[postaction={decorate}] (-1.732*0.5,0) -- (-150:1); \end{scope} \end{tikzpicture} \] Note that $\nabla h_{12}$, $\nabla h_{23}$ are not defined on the edge. Let $A$ and $B$ be two connected components of edge complement in the above neighborhood. Then $\nabla h_{12}|_A$ and $(\nabla h_{12}+\nabla h_{23})|_B$ can be extended to the edge smoothly. This is because the $z$-coordinate differences among the three Legendrian sheets are smooth. 
\item For the trivalent vertices in $\ngraphfont{G}_2$, we consider the following gradient vector field configurations: \[ \begin{tikzpicture} \begin{scope} \draw[thick] \boundellipse{0,1}{1.25}{0.25}; \draw[blue, thick] (-1.25,0)--(0,0); \draw[blue, thick] (0,0)--(1.25,1/3); \draw[blue, thick] (0,0)--(1/2,-1/3); \draw[thick,blue,dashed,fill=blue] (0,0) circle (0.05); \draw[thick] (-1.25,0) to[out=25,in=175] (1.25,1/3); \draw[dotted, thick] (-1.25,0) to[out=10,in=190] (1.25,1/3); \draw[thick] (-1.25,0) to[out=-25,in=180] (1/2,-1/3); \draw[thick] (-1.25,0) to[out=-10,in=160] (1/2,-1/3); \draw[dotted,thick] (1/2,-1/3) to[out=80,in=220] (1.25,1/3); \draw[thick] (1/2,-1/3) to[out=30,in=250] (1.25,1/3); \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[blue, thick] (-5/4,-1)--(0,-1); \draw[blue, thick] (0,-1)--(0.90,-0.825); \draw[blue, thick] (0,-1)--(1/2,-1.225); \draw[thick,blue,fill=blue] (0,-1) circle (0.05); \draw[thick, ->] (0,-0.35)--(0,-0.9); \node at (-1.5,1){\tiny $3$}; \node at (-1.5,1/3){\tiny $2$}; \node at (-1.5,-1/3){\tiny $1$}; \end{scope} \begin{scope}[xshift=7cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[thick, blue, dashed, fill=blue, opacity=0.5] (0,0) circle (2pt) -- (60:1) (0,0) -- (-60:1) (0,0) -- (180:1); \draw[postaction={decorate}] (170:1) -- (-0.5,0); \draw[postaction={decorate}] (160:1) -- (0,0); \draw[postaction={decorate}] (140:1) -- (60:0.5); \draw[postaction={decorate}] (-170:1) -- (-0.5,0); \draw[postaction={decorate}] (-160:1) -- (0,0); \draw[postaction={decorate}] (-140:1) -- (-60:0.5); \draw[postaction={decorate}] (60:0.5) -- (25:1); \draw[postaction={decorate}] (-60:0.5) -- (-25:1); \draw[postaction={decorate}] (0,0) -- (1,0); \node at (0,-1.5) {$\nabla h_{23}$}; \end{scope} \begin{scope}[xshift=3.5cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[thick, blue, fill=blue] (0,0) circle (2pt) -- (60:1) (0,0) -- (-60:1) (0,0) -- (180:1); \begin{scope} \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (60:0.5) to[out=-30,in=180] (15:1); \draw[postaction={decorate}] (-60:0.5) to[out=30,in=180] (-15:1); \end{scope} \begin{scope}[rotate=120] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (60:0.5) to[out=-30,in=180] (15:1); \draw[postaction={decorate}] (-60:0.5) to[out=30,in=180] (-15:1); \end{scope} \begin{scope}[rotate=240] \draw[postaction={decorate}] (0,0) -- (1,0); \draw[postaction={decorate}] (60:0.5) to[out=-30,in=180] (15:1); \draw[postaction={decorate}] (-60:0.5) to[out=30,in=180] (-15:1); \end{scope} \node at (0,-1.5) {$\nabla h_{12}$}; \end{scope} \end{tikzpicture} \] \item Near the hexagonal point, we consider the following model of gradient vector fields. 
\[ \begin{tikzpicture} \begin{scope} \begin{scope}[yshift=0.5cm,yscale=2] \draw[red, thick] (0,0)--(1,0) (-5/4,3/12)--(0,0) (-3/4,1/3)--(0,0); \draw[blue, thick] (-1,0)--(0,0) (0,0)--(5/4,-3/12) (0,0)--(3/4,-1/3); \draw[thick] (-3/4,1/3)--(5/4,1/3); \draw[thick] (5/4,1/3)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,-1/3); \draw[thick] (-5/4,-1/3)--(-3/4,1/3); \draw[thick] (-5/4,3/12)--(3/4,3/12); \draw[thick] (3/4,3/12)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(-3/4,-3/12); \draw[thick] (-3/4,-3/12)--(-5/4,3/12); \draw[thick] (-5/4,3/12)--(-3/4,1/3); \draw[thick] (-3/4,1/3)--(5/4,-3/12); \draw[thick] (5/4,-3/12)--(3/4,-1/3); \draw[thick] (3/4,-1/3)--(-5/4,3/12); \end{scope} \draw[thick,black,fill=white] (0,0.5) circle (2pt); \begin{scope}[yshift=0cm] \draw[thick] \boundellipse{0,-1}{1.25}{0.25}; \draw[blue, thick] (-5/4,-1)--(0,-1) (0,-1)--(0.90,-0.825) (0,-1)--(1/2,-1.225); \draw[red, thick] (5/4,-1)--(0,-1) (0,-1)--(-0.90,-1.175) (0,-1)--(-1/2,-0.775); \draw[thick,black,fill=white] (0,-1) circle (2pt); \draw[thick, ->] (0,-0.35)--(0,-0.9); \end{scope} \node at (-1.5,1){\tiny $3$}; \node at (-1.5,0.5){\tiny $2$}; \node at (-1.5,0){\tiny $1$}; \end{scope} \begin{scope}[xshift=3.5cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[thick, blue] (0,0)-- (60:1) (0,0)--(180:1) (0,0)--(-60:1); \draw[thick, red, dashed, opacity=0.5] (0,0)-- (1,0) (0,0)--(120:1) (0,0)--(-120:1); \draw[thick,black,fill=white] (0,0) circle (2pt); \begin{scope} \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} \begin{scope}[rotate=120] \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} \begin{scope}[rotate=240] \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} \node at (0,-1.5) {$\nabla h_{12}$}; \end{scope} \begin{scope}[xshift=7cm,decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[thick] (0,0) circle (1cm); \draw[thick, blue, dashed, opacity=0.5] (0,0)-- (60:1) (0,0)--(180:1) (0,0)--(-60:1); \draw[thick, red] (0,0)-- (1,0) (0,0)--(120:1) (0,0)--(-120:1); \draw[thick,black,fill=white] (0,0) circle (2pt); \begin{scope}[rotate=60] \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} \begin{scope}[rotate=180] \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} 
\begin{scope}[rotate=300] \draw[postaction={decorate}] (60:0.25) to (0:0.5); \draw[postaction={decorate}] (-60:0.25) to (0:0.5); \draw[postaction={decorate}] (60:0.5) to (0:1); \draw[postaction={decorate}] (-60:0.5) to (0:1); \draw[postaction={decorate}] (60:0.75) to (20:1); \draw[postaction={decorate}] (-60:0.75) to (-20:1); \end{scope} \node at (0,-1.5) {$\nabla h_{23}$}; \end{scope} \end{tikzpicture} \]
As in (1), the three Legendrian sheets near the hexagonal point are smooth with distinct slopes, so a certain combination of $\nabla h_{12}, \nabla h_{23}$, depending on the graph complement regions, can be smoothly extended to $\ngraphfont{G}_1$ or $\ngraphfont{G}_2$.
\end{enumerate}
The above configurations do not produce Reeb chords near edges, trivalent vertices, and hexagonal points. So it suffices to check the non-vanishing property of the gradient vector fields on the graph complement regions. By weaving the above local configurations near edges, vertices, and a hexagonal point, we consider the following gradient vector fields $\nabla h_{12}$ and $\nabla h_{23}$ defined on $\mathring\mathbb{D}^2\setminus(\ngraphfont{G}_1\cup \ngraphfont{G}_2)$. Then, by the construction of the gradients $\nabla h_{12}, \nabla h_{23}$ and the definition of Reeb chord, there are no Reeb chords connecting $\Gamma_i$ and $\Gamma_{i+1}$ for $i=1,2$. Now consider the gradient flow lines of $h_{12}+h_{23}$ to see the Reeb chords from $\Gamma_1$ to $\Gamma_3$. Without loss of generality, by tilting the slopes of the Legendrian sheets, we may assume that $\|\nabla h_{12}\|<\|\nabla h_{23}\|$ except on a sufficiently small neighborhood $U(\ngraphfont{G})$ of $\ngraphfont{G}$. Hence the above gradients $\nabla h_{12}, \nabla h_{23}$ induce a wavefront $\Gamma(\ngraphfont{G}(a,b,c))$ without interior Reeb chords and we are done.
\end{proof}
\end{comment}
Let us consider the Lagrangian projection $\pi_L:J^1S\cong T^*S\times \R \to T^*S$. Then the image
\[
L(\ngraphfont{G})\colonequals\pi_L(\Lambda(\ngraphfont{G}))
\]
of the Legendrian weave gives us an exact, possibly immersed Lagrangian surface in $T^*S$. The following lemma is a direct consequence of Theorem~\ref{thm:N-graph moves and legendrian isotopy} and Definition~\ref{definiton:freeness}.
\begin{lemma}
Let $\ngraphfont{G}$ and $\ngraphfont{G}'$ be two $N$-graphs on $S$. Then the following statements hold:
\begin{enumerate}
\item If $\ngraphfont{G}$ is free, then the Lagrangian surface $L(\ngraphfont{G})=\pi_L(\Lambda(\ngraphfont{G}))$ is exact and embedded.
\item If $[\ngraphfont{G}]=[\ngraphfont{G}']$, then the two Lagrangian surfaces
\[
L(\ngraphfont{G})=\pi_L(\Lambda(\ngraphfont{G}))\quad\text{ and }\quad L(\ngraphfont{G}')=\pi_L(\Lambda(\ngraphfont{G}'))
\]
in $T^*S$ are exact Lagrangian isotopic relative to boundary.
\end{enumerate}
\end{lemma}
\subsection{\texorpdfstring{$N$-graphs on $\mathbb{D}^2$ and $\mathbb{A}$}{N-graphs on disk and annulus}}
In this section, we consider Legendrian links in $\R^3$ or $\mathbb{S}^3$ and Lagrangian fillings in $\R^4$, and describe how to encode them in terms of $N$-graphs.
\subsubsection{Geometric setup}\label{sec:geometric setup}
Let us recall and collect basic facts about contact submanifolds of $\R^3$ and~$\R^5$, and their coordinate changes. Let $(\theta, p_\theta, z)$ be the coordinates of $J^1\mathbb{S}^1$ with the contact form $\alpha_{J^1\mathbb{S}^1}=dz-p_\theta d\theta$.
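Let us record a standard observation, which we will use implicitly when counting Reeb chords; it is a direct computation from the definitions above. Since $d\alpha_{J^1\mathbb{S}^1}=d\theta\wedge dp_\theta$, the vector field $\partial_z$ satisfies
\[
\alpha_{J^1\mathbb{S}^1}(\partial_z)=1,
\qquad
\iota_{\partial_z}\,d\alpha_{J^1\mathbb{S}^1}=0,
\]
so $\partial_z$ is the Reeb vector field and Reeb chords are vertical segments in the $z$-direction. In particular, a Reeb chord between two sheets of a wavefront, given locally as the graphs of functions $f_i$ and $f_j$, corresponds to a critical point of the difference $f_j-f_i$ with non-zero critical value; the same computation applies verbatim to $J^1S$ for a surface $S$.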
The Legendrian unknot $\lambda_{\rm unknot}$ in $J^1\mathbb{S}^1$ is given by
\[
\lambda_{\rm unknot}=\{ (\theta,0,0) \mid \theta \in \mathbb{S}^1\}\subset J^1\mathbb{S}^1.
\]
The symplectization of $J^1\mathbb{S}^1$ is
\[(J^1\mathbb{S}^1\times \R_s,d(e^s(dz-p_\theta d\theta))),\]
and its contactization becomes
\[ (J^1\mathbb{S}^1\times \R_s \times \R_t,dt+e^s(dz-p_\theta d\theta))\]
which is contactomorphic to $(J^1(\mathbb{S}^1\times \R_{r>0}), dw-p_\vartheta d\vartheta-p_r dr)$ under the strict contactomorphism~$\phi$ given by
\[
(\theta,p_\theta,z,s,t)\mapsto(\vartheta, r, p_\vartheta, p_r, w)=(\theta, e^s, e^s p_\theta, z, t+ e^s z).
\]
For each symplectization level $s=s_0$, the map $\phi$ induces a contact embedding of $J^1\mathbb{S}^1$ into $J^1(\mathbb{S}^1\times \R_{r>0})$, more precisely, into the hypersurface $J^1(\mathbb{S}^1 \times \R_{r>0})\cap \{r=e^{s_0}\}$.
Furthermore, there is a strict contactomorphism
\[
\psi \colon (J^1(\mathbb{S}^1\times \R_{r>0}), dw-p_\vartheta d\vartheta-p_r dr) \to (J^1(\R^2\setminus\{{\bf 0}\}),dw-y_1dx_1-y_2dx_2)
\]
defined by
\[
(x_1,x_2,y_1,y_2,w)=\left(r \cos\vartheta,r\sin\vartheta, p_r \cos\vartheta -\frac{\sin\vartheta}{r}p_\vartheta, p_r \sin\vartheta + \frac{\cos\vartheta}{r}p_\vartheta,w\right).
\]
By filling in the origin ${\bf 0}\in \R^2$, we have the following diagram:
\[
\begin{tikzcd}
J^1\mathbb{S}^1\times\mathbb{R}_s\times\mathbb{R}_t \arrow[r, "\cong", "\phi"'] \arrow[d,"\pi_t"] & J^1(\mathbb{S}^1\times\mathbb{R}_{r>0})\arrow[r,"\cong","\psi"'] & J^1(\mathbb{R}^2\setminus\{\mathbf{0}\})\arrow[r, hookrightarrow] & J^1\mathbb{R}^2=T^*\R^2\times \R_w \arrow[d,"\pi_L"]\\
J^1\mathbb{S}^1\times \R_s \arrow[rrr,hookrightarrow,"\Phi"] & & & T^*\R^2
\end{tikzcd}
\]
Here, the symplectic embedding $\Phi\colon J^1\mathbb{S}^1\times \R_s \hookrightarrow T^*\R^2$ is defined by
\begin{align*}
(\theta, p_\theta, z, s)\mapsto (x_1, x_2, y_1, y_2)=(e^s\cos\theta, e^s\sin\theta, z\cos\theta-p_\theta\sin\theta, z\sin\theta+p_\theta\cos\theta).
\end{align*}
On the other hand, we have another symplectomorphism
\begin{align*}
\varphi \colon (\mathbb{S}^3\times \R_u,d(e^u\alpha_{\mathbb{S}^3}))&\to (T^*\R^2\setminus\{({\bf 0},{\bf 0})\},dx_1\wedge dy_1+dx_2\wedge dy_2);\\
(z_1,z_2,u)&\mapsto e^{u/2}(r_1\cos\theta_1,r_2\cos\theta_2, r_1\sin\theta_1,r_2\sin\theta_2)
\end{align*}
where $\mathbb{S}^3$ is the unit sphere in $\bbC^2_{z_1,z_2}$, $z_1=r_1 e^{i\theta_1}, z_2=r_2 e^{i\theta_2}$, with the contact form
\begin{align*}
\alpha_{\mathbb{S}^3}=\frac{1}{2}r_1^2 d\theta_1 +\frac{1}{2} r_2^2d\theta_2,\quad r_1^2+r_2^2=1.
\end{align*}
So far, we have the following diagram of symplectic embeddings
\[
\begin{tikzcd}
J^1\mathbb{S}^1\times \R_s \arrow[rr, hookrightarrow, "\Phi"] \arrow[rd, hookrightarrow, "\Psi"'] & & T^*\R^2\setminus\{(\mathbf{0},\mathbf{0})\}\\
& \mathbb{S}^3\times\R_u \arrow[ru, "\cong"']
\end{tikzcd}
\]
where the map $\Psi(\theta, p_\theta, z, s) = (z_1, z_2, u)$ is defined by
\begin{align*}
z_1 &= \frac{e^s\cos\theta+ i (z\cos\theta-p_\theta\sin\theta)}{\sqrt{e^{2s}+z^2+p_\theta^2}},\\
z_2 &= \frac{e^s\sin\theta+ i (z\sin\theta+p_\theta\cos\theta)}{\sqrt{e^{2s}+z^2+p_\theta^2}},\\
e^u &= e^{2s}+z^2+p_\theta^2.
\end{align*}
Let us define $\iota:J^1\mathbb{S}^1\to \mathbb{S}^3$ as the composition of the inclusions $J^1\mathbb{S}^1\cong J^1\mathbb{S}^1\times\{s=0\}\to J^1\mathbb{S}^1\times\R_s$, $\Psi:J^1\mathbb{S}^1\times\R_s\to \mathbb{S}^3\times\R_u$ and the projection $\mathbb{S}^3\times\R_u\to \mathbb{S}^3$ so that
\[
\iota(\theta, p_\theta, z)\colonequals \left(
\frac{\cos\theta+i(z\cos\theta-p_\theta\sin\theta)}{\sqrt{1+z^2+p_\theta^2}},
\frac{\sin\theta+i(z\sin\theta+p_\theta\cos\theta)}{\sqrt{1+z^2+p_\theta^2}}
\right).
\]
Then the image of the Legendrian unknot $\lambda_{\rm unknot}\subset J^1\mathbb{S}^1$ becomes
\[
\{(z_1,z_2) \mid z_1=\cos\theta, z_2=\sin\theta, \theta\in\mathbb{S}^1\}\subset \mathbb{S}^3\subset \bbC^2.
\]
Recall the stereographic projection of $\mathbb{S}^3$ with respect to $(0,-i)\in\bbC^2$, and consider the corresponding image of $\lambda_{\rm unknot}$:
\begin{align*}
(\mathbb{S}^3\setminus \{(0,-i)\},\alpha_{\mathbb{S}^3}) &\to (\R^3,dz'+x'dy'-y'dx')\cong \bbC\times \R;\\
(z_1,z_2)& \mapsto \left(\frac{i z_1}{i+z_2},\frac{-{\rm Re}(z_2)}{|i+z_2|^2}\right);\\
(\cos\theta,\sin\theta)&\mapsto \left(\frac{\cos\theta}{1+\sin^2\theta},\frac{\cos\theta\sin\theta}{1+\sin^2\theta},\frac{-\sin\theta}{1+\sin^2\theta}\right).
\end{align*}
Under the strict contactomorphism
\begin{align*}
(\R^3,dz'+x'dy'-y'dx') &\to (J^1\R,dz-ydx);\\
(x',y',z') &\mapsto (x,y,z)=(x',2y',z'+x'y'),
\end{align*}
the image of $\lambda_{\rm unknot}$ becomes
\[
\left(
\frac{\cos\theta}{1+\sin^2\theta},\frac{2\cos\theta\sin\theta}{1+\sin^2\theta},\frac{-2\sin^3\theta}{(1+\sin^2\theta)^2}
\right)
\]
whose front projection looks as follows:
\[
\begin{tikzpicture}[baseline=-.5ex, scale=2]
\draw[->] (-1.25,0) -- (1.25,0) node[above] {$x$};
\draw[->] (0,-0.75) -- (0,0.75) node[right] {$z$};
\draw[thick, red] plot[domain=0:2*pi, samples=200] ({cos(\x r)/ (1+ sin(\x r)^2)}, {-2*sin(\x r)^3/(1+sin(\x r)^2)^2});
\end{tikzpicture}
\]
Let $\lambda\subset J^1\mathbb{S}^1$ be a Legendrian link. Then the image $\iota(\lambda)$ can be isotoped into a neighborhood of the Legendrian unknot in $\R^3$. In particular, if $\lambda$ is a closure of a positive braid $\beta$, then $\iota(\lambda)$ looks like a satellite link of the Legendrian unknot in $\R^3$.
We consider a Legendrian surface $\widehat\Lambda \subset J^1(\mathbb{S}^1\times \R_{r>0})$ having cylindrical ends so that for some $S_1>S_2$,
\begin{align*}
\widehat\Lambda \cap J^1(\mathbb{S}^1\times\R_{r\ge e^{S_1}}) &\cong \lambda_1\times\R_{s\ge S_1},&
\widehat\Lambda \cap J^1(\mathbb{S}^1\times\R_{r\le e^{S_2}}) &\cong \lambda_2\times\R_{s\le S_2}.
\end{align*}
Then the projection $L_{\widehat\Lambda}=\pi_L(\psi(\widehat\Lambda))$ of the surface $\widehat\Lambda$ becomes an exact Lagrangian cobordism in $\mathbb{S}^3\times\R_u$ from $\iota(\lambda_1)$ to $\iota(\lambda_2)$.
Similarly, let $\widehat\Lambda\subset J^1\R^2$ be a Legendrian surface having a cylindrical end. That is, for some $S\in \R$,
\[
\widehat\Lambda\cap J^1\R^2_{r\ge e^S}\cong \lambda \times \R_{s\ge S}.
\]
Then the projection $L_{\widehat\Lambda}=\pi_L(\widehat\Lambda)$ in $T^*\R^2\cong(\bbC^2, \omega_{\mathsf{st}})$ becomes an exact Lagrangian filling of $\iota(\lambda)$. Note that the Lagrangian $\pi_L(\widehat\Lambda)$ is embedded if and only if the Legendrian surface $\widehat\Lambda$ has no Reeb chords.
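Let us spell out the last assertion; it is the standard relation between Reeb chords and the Lagrangian projection, recorded here for later reference. The Reeb vector field of $dw-y_1dx_1-y_2dx_2$ on $J^1\R^2$ is $\partial_w$, so Reeb chords of $\widehat\Lambda$ are exactly the vertical segments joining two points of $\widehat\Lambda$ with the same image under $\pi_L$:
\[
\{\text{Reeb chords of } \widehat\Lambda\}
\longleftrightarrow
\{(p,p')\in \widehat\Lambda\times\widehat\Lambda \mid \pi_L(p)=\pi_L(p'),\ w(p)>w(p')\}
\longleftrightarrow
\{\text{double points of } \pi_L(\widehat\Lambda)\}.
\]
Moreover, since $dw=y_1dx_1+y_2dx_2$ holds along any Legendrian, the restriction of the $w$-coordinate gives a primitive of the Liouville form on $\pi_L(\widehat\Lambda)$, which is the exactness claimed above.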
\begin{lemma}\label{lem:legendrian and lagrangian}
Let $\widehat\Lambda$ and $\widehat\Lambda'$ be two Legendrian surfaces in $J^1\R^2$ without Reeb chords having identical cylindrical ends
\[
\widehat\Lambda\cap J^1\R^2_{r\ge e^S} \cong \lambda \times \R_{s\ge S} \cong \widehat\Lambda'\cap J^1\R^2_{r\ge e^S}
\]
for some $S\in\R$. If the exact embedded Lagrangian fillings $L_{\widehat\Lambda}=\pi_L(\widehat\Lambda)$ and $L_{\widehat\Lambda'}=\pi_L(\widehat\Lambda')$ of $\iota(\lambda)$ are exact Lagrangian isotopic, then $\widehat\Lambda$ and $\widehat\Lambda'$ are Legendrian isotopic.
\end{lemma}
On the other hand, any compact Legendrian surface $\Lambda\subset J^1\mathbb{D}^2$ can be extended to $\widehat\Lambda\subset J^1\R^2$ by attaching the cylindrical end $\partial\Lambda\times[1,\infty)$ in a smooth way. For two compact Legendrian surfaces $\Lambda, \Lambda'\subset J^1\mathbb{D}^2$, the extensions $\widehat\Lambda$ and $\widehat\Lambda'$ are Legendrian isotopic if and only if $\Lambda$ and $\Lambda'$ are Legendrian isotopic relative to boundary.
\begin{corollary}
Let $\lambda\subset J^1\mathbb{S}^1$ be a Legendrian link and $\Lambda, \Lambda'\subset J^1\mathbb{D}^2$ be two Legendrian surfaces without Reeb chords whose boundaries are $\lambda$. Then two exact embedded Lagrangian fillings $\pi_{L}(\Lambda)$ and $\pi_{L}(\Lambda')$ are exact Lagrangian isotopic relative to boundary if and only if $\Lambda$ and $\Lambda'$ are Legendrian isotopic relative to boundary without creating Reeb chords during the isotopy.
\end{corollary}
\begin{remark}
We are interested in exact Lagrangian fillings of Legendrian links up to \emph{exact Lagrangian isotopy} relative to boundary, an isotopy through exact Lagrangian fillings which fixes the Legendrian boundary. This is equivalent to exact Lagrangian fillings up to \emph{Hamiltonian isotopy}, which is an isotopy through Hamiltonian diffeomorphisms fixing the boundary. The same holds for Lagrangian cobordisms.
\end{remark}
We end this section by investigating certain actions on the symplectic manifold $\mathbb{S}^3\times \R_u$ and the induced actions on $J^1\mathbb{S}^1$. In particular, we are interested in actions on $\mathbb{S}^3\times \R_u$ preserving the $\R_u$-coordinate, the symplectization coordinate, so that actions on $\mathbb{S}^3$ determine actions on the symplectic manifold $\mathbb{S}^3\times \R_u$. Recall that $\mathbb{S}^3$ is the unit sphere in $\bbC^2$ with coordinates $z_1=r_1 e^{i\theta_1}, z_2=r_2 e^{i\theta_2}$ satisfying $r_1^2 + r_2^2=1$.
\paragraph{Rotation}
A symplectomorphism $R_{\theta_0} \colon \mathbb{S}^3\times \R_u \to \mathbb{S}^3\times \R_u$, called {\em rotation}, is defined by
\[
R_{\theta_0}(z_1, z_2,u)=(z_1\cos\theta_0 -z_2\sin\theta_0, z_1\sin\theta_0+z_2\cos\theta_0,u).
\]
Note that the restriction $R_{\theta_0}|_{\mathbb{S}^3}$ preserves the contact form $\alpha_{\mathbb{S}^3}$. Under the symplectic embedding $\Psi:J^1\mathbb{S}^1\times \R_s \hookrightarrow \mathbb{S}^3\times \R_u$, we have the following induced symplectomorphism
\[
J^1\mathbb{S}^1\times \R_s \to J^1\mathbb{S}^1\times \R_s,\quad (\theta,p_{\theta},z,s)\mapsto (\theta+\theta_0,p_{\theta},z,s).
\]
By restricting $R_{\theta_0}$ to $J^1\mathbb{S}^1$, we obtain
\[
J^1\mathbb{S}^1 \to J^1\mathbb{S}^1, \quad (\theta,p_{\theta},z)\mapsto (\theta+\theta_0,p_{\theta},z).
\]
We are especially interested in $\theta_0=\pi$ and $2\pi/3$.
They produce $\Z/2\Z$- and $\Z/3\Z$-actions on the symplectic manifold $\mathbb{S}^3\times \R_u$ and on Lagrangian fillings of satellite links of the Legendrian unknot, respectively.
\paragraph{Conjugation}
An anti-symplectic involution $\tau \colon \mathbb{S}^3\times\R_u \to \mathbb{S}^3\times\R_u$, which we call {\em conjugation}, is defined by
\[
(z_1,z_2,u)\mapsto (\bar z_1,\bar z_2 ,u).
\]
It is direct to check that $\tau$ reverses the sign of the symplectic form $\frac{i}{2}(dz_1\wedge d\bar z_1+dz_2\wedge d\bar z_2)$, and its restriction to $\mathbb{S}^3$ also reverses the sign of $\alpha_{\mathbb{S}^3}$. Again by the symplectic embedding~$\Psi$, the conjugation induces an action on $J^1\mathbb{S}^1\times \R_s$
\[
(\theta,p_\theta,z,s)\mapsto (\theta,-p_\theta,-z,s)
\]
whose restriction to $J^1\mathbb{S}^1$ becomes
\[
(\theta,p_\theta,z)\mapsto (\theta,-p_\theta,-z).
\]
This anti-symplectic involution naturally produces a $\Z/2\Z$-action on the symplectic manifold and on Lagrangian fillings, as do the rotations.
\begin{figure}[ht]
\subfigure[Rotation\label{figure:rotation unknot}]{
\begin{tikzpicture}[baseline=-.5ex, scale=2]
\draw[->] (-1.25,0) -- (1.25,0) node[above] {$x$};
\draw[->] (0,-0.75) -- (0,0.75) node[right] {$z$};
\draw[thick, red] plot[domain=0:2*pi, samples=200] ({cos(\x r)/ (1+ sin(\x r)^2)}, {-2*sin(\x r)^3/(1+sin(\x r)^2)^2});
\draw[->,blue] (1,-0.1) to[out=210,in=30] (0.75,-0.25);
\draw[->,blue] (0.2,-0.6) to[out=190,in=-10] (-0.2,-0.6);
\draw[->,blue] (-0.75,-0.25) to[out=150,in=-30] (-1,-0.1);
\begin{scope}[rotate=180]
\draw[->,blue] (1,-0.1) to[out=210,in=30] (0.75,-0.25);
\draw[->,blue] (0.2,-0.6) to[out=190,in=-10] (-0.2,-0.6);
\draw[->,blue] (-0.75,-0.25) to[out=150,in=-30] (-1,-0.1);
\end{scope}
\end{tikzpicture}
}
\subfigure[Conjugation\label{figure:conjugation unknot}]{
\begin{tikzpicture}[baseline=-.5ex, scale=2]
\draw[->] (-1.25,0) -- (1.25,0) node[above] {$x$};
\draw[->] (0,-0.75) -- (0,0.75) node[right] {$z$};
\draw[thick, red] plot[domain=0:2*pi, samples=200] ({cos(\x r)/ (1+ sin(\x r)^2)}, {-2*sin(\x r)^3/(1+sin(\x r)^2)^2});
\draw[<->,blue] (0.65,-0.1) -- (0.75,-0.25);
\draw[<->,blue] (0,-0.4) -- (0,-0.6);
\draw[<->,blue] (-0.65,-0.1) -- (-0.75,-0.25);
\begin{scope}[rotate=180]
\draw[<->,blue] (0.65,-0.1) -- (0.75,-0.25);
\draw[<->,blue] (0,-0.4) -- (0,-0.6);
\draw[<->,blue] (-0.65,-0.1) -- (-0.75,-0.25);
\end{scope}
\end{tikzpicture}
}
\caption{Rotations and conjugation near the Legendrian unknot.}
\label{fig:actions on the unknot}
\end{figure}
\begin{lemma}\label{lem:rotation and conjugation}
Let $R_{\theta_0}$ and $\tau$ be the rotation and the conjugation defined on $\mathbb{S}^3\times \R_u$ as above, respectively. Then the induced maps on the front projection $\pi_{F}:J^1\mathbb{S}^1\to\mathbb{S}^1\times\R$ become as follows:
\begin{align*}
R_{\theta_0}|_{\mathbb{S}^1\times \R}&:(\theta,z)\mapsto (\theta+\theta_0,z);\\
\tau|_{\mathbb{S}^1\times \R}&:(\theta,z)\mapsto (\theta,-z).
\end{align*}
\end{lemma}
\subsubsection{$N$-graphs on $\mathbb{D}^2$ and $\mathbb{A}$}\label{section:annular Ngraphs}
Let $\beta\subset J^1\R^1$ be a positive $N$-braid given by a word consisting of the generators $\sigma_1,\dots, \sigma_{N-1}$, and let $\lambda=\lambda_\beta\in J^1\mathbb{S}^1$ be a Legendrian link obtained as the closure of $\beta$, which can be regarded as a satellite of the Legendrian unknot in $\R^3$.
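For instance, for $N=2$ and $\beta=\sigma_1^3$, the closure $\lambda_\beta$ is a Legendrian representative of the right-handed trefoil. From the front projection described below one can check that
\[
tb(\lambda_\beta)=\ell(\beta)-N=3-2=1,
\]
where $\ell(\beta)$ denotes the number of crossings of the positive braid word~$\beta$: each crossing contributes $+1$ to the writhe, while the $N$ nested copies of the unknot front contribute $N$ right cusps. In particular, $\lambda_{\sigma_1^3}$ attains the maximal Thurston--Bennequin number of the trefoil.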
The front projection $\pi_F(\lambda)\subset \mathbb{S}^1\times \R$ of $\lambda$ consists of $N$ strands with double points corresponding to the braid word $\beta$. Hence, the Legendrian $\lambda$ gives us an $(N-1)$-tuple $(\lambda_1, \lambda_2,\dots, \lambda_{N-1})$ of subsets of $\mathbb{S}^1$, where $\lambda_i$ collects the points corresponding to the generator $\sigma_i$ in the braid word $\beta$.
Conversely, let $(\lambda_1,\dots, \lambda_{N-1})$ be an $(N-1)$-tuple of disjoint\footnote{This condition can be weakened as follows: $\lambda_i\cap \lambda_{i+1}=\varnothing$ for each $1\le i<N-1$.} subsets of $\mathbb{S}^1$. Then, from this data $(\lambda_1,\dots,\lambda_{N-1})$, one can build the Legendrian link $\lambda$ whose front projection is the $N$-fold cover of $\mathbb{S}^1$ in which the $i$-th and the $(i+1)$-st strands cross along the set $\lambda_i$.
Let $\ngraphfont{G}=(\ngraphfont{G}_1,\dots,\ngraphfont{G}_{N-1})$ be an $N$-graph on $\mathbb{D}^2$. The \emph{boundary} $\partial\ngraphfont{G}$ of $\ngraphfont{G}$ is the Legendrian link defined by the $N$-graph on $\mathbb{S}^1=\partial\mathbb{D}^2$ given by
\[
\partial\ngraphfont{G}=(\partial\ngraphfont{G}_1,\dots,\partial\ngraphfont{G}_{N-1}),\quad
\partial\ngraphfont{G}_i\colonequals \ngraphfont{G}_i\cap \mathbb{S}^1\subset \mathbb{S}^1.
\]
We say that $\ngraphfont{G}$ is \emph{of type} $\lambda$, or that $\lambda$ \emph{admits} an $N$-graph $\ngraphfont{G}$, if $\partial\ngraphfont{G}=\lambda$.
Let $\mathbb{A}$ be the oriented annulus with two boundary components $\partial_+\mathbb{A}$ and $\partial_-\mathbb{A}$. For an $N$-graph~$\ngraphfont{G}$ on $\mathbb{A}$, let $\partial_\pm\ngraphfont{G}\colonequals\ngraphfont{G}\cap\partial_\pm\mathbb{A}$ be the Legendrian links at the two boundary components $\partial_\pm\mathbb{A}$, respectively. We say that $\ngraphfont{G}$ is \emph{of type} $(\lambda_+, \lambda_-)$ if $\partial_\pm\ngraphfont{G}=\lambda_\pm$, respectively.
A typical example of annular $N$-graphs comes from Lagrangian cobordisms between Legendrian links which are closures of positive braids. In particular, for two closures $\lambda_1$ and $\lambda_2$ of positive braids $\beta_1$ and $\beta_2$, any sequence of Legendrian braid moves from $\lambda_2$ to $\lambda_1$ will give us a special annular $N$-graph $\ngraphfont{G}_{\lambda_2\lambda_1}$.\footnote{One may call the $N$-graph $\ngraphfont{G}_{\lambda_2\lambda_1}$ a \emph{strict concordance} since it is a union of cylinders.} Hence, for an $N$-graph $\ngraphfont{G}$ with $\partial\ngraphfont{G}=\lambda_1$, we have the $N$-graph $\ngraphfont{G}_{\lambda_2\lambda_1}\ngraphfont{G}$ with boundary
\[
\partial(\ngraphfont{G}_{\lambda_2\lambda_1}\ngraphfont{G})=\lambda_2.
\]
\begin{remark}
We are dealing with both Legendrian links $\lambda$ and surfaces $\Lambda$. To avoid confusion, we use the terms ``$\partial$-Legendrian isotopy'' and ``Legendrian isotopy'' for isotopies between Legendrian links and surfaces, respectively.
\end{remark}
Since the front of a closure of a positive braid in $J^1\mathbb{S}^1$ has no cusps, the possible $\partial$-Legendrian isotopies are generated by planar isotopies \Move{R0} and the third Reidemeister moves \Move{RIII} as follows:
\begin{center}
\begin{tikzpicture}
\begin{scope}
\draw[thick] (1,1) to[out=180,in=0] (0.2,0.2) to (-1,0.2);
\draw[thick] (1,0.2) to[out=180,in=0] (0.2,1) to (-1,1);
\node at (0,0.1) {$\vdots$};
\draw[thick] (1,-0.2) to (-0.2,-0.2) to[out=180,in=0] (-1,-1);
\draw[thick] (1,-1) to (-0.2,-1) to[out=180,in=0] (-1,-0.2);
\draw[thick,yellow,fill=yellow] (0.6,0.6) circle (0.05);
\draw[thick,blue,fill=blue] (-0.6,-0.6) circle (0.05);
\draw [<->] (1.5,0) -- (2,0) node[midway, above] {\Move{R0}};
\end{scope}
\begin{scope}[xshift=3.5cm]
\draw[thick] (-1,1) to[out=0,in=180] (-0.2,0.2) to (1,0.2);
\draw[thick] (-1,0.2) to[out=0,in=180] (-0.2,1) to (1,1);
\node at (0,0.1) {$\vdots$};
\draw[thick] (-1,-0.2) to (0.2,-0.2) to[out=0,in=180] (1,-1);
\draw[thick] (-1,-1) to (0.2,-1) to[out=0,in=180] (1,-0.2);
\draw[thick,yellow,fill=yellow] (-0.6,0.6) circle (0.05);
\draw[thick,blue,fill=blue] (0.6,-0.6) circle (0.05);
\end{scope}
\begin{scope}[xshift=7cm]
\draw[thick] (-1,-1) to (1,1);
\draw[thick] (-1,1) to (1,-1);
\draw[thick] (-1,0) to[out=0, in=180] (0,-1) to[out=0,in=180] (1,0);
\draw[thick,blue,fill=blue] (-0.5,-0.5) circle (0.05);
\draw[thick,blue,fill=blue] (0.5,-0.5) circle (0.05);
\draw[thick,red,fill=red] (0,0) circle (0.05);
\draw [<->] (1.5,0) -- (2,0) node[midway, above] {\Move{RIII}};
\end{scope}
\begin{scope}[xshift=10.5cm]
\draw[thick] (-1,-1) to (1,1);
\draw[thick] (-1,1) to (1,-1);
\draw[thick] (-1,0) to[out=0, in=180] (0,1) to[out=0,in=180] (1,0);
\draw[thick,red,fill=red] (-0.5,0.5) circle (0.05);
\draw[thick,red,fill=red] (0.5,0.5) circle (0.05);
\draw[thick,blue,fill=blue] (0,0) circle (0.05);
\end{scope}
\end{tikzpicture}
\end{center}
Therefore, any annular $N$-graph corresponding to a sequence of Reidemeister moves between Legendrian links is a concatenation of \emph{elementary annular $N$-graphs}, which are $\ngraphfont{G}_{\Move{R0}}$ and $\ngraphfont{G}_{\Move{RIII}}$ on the annulus $\mathbb{A}$ as depicted in Figure~\ref{fig:elementary annulus N-graph}. We call an annular $N$-graph \emph{tame} if it is a concatenation of elementary annular $N$-graphs.
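In terms of the boundary braid words, this dictionary can be made explicit; the following identification is implicit in the figures. The move \Move{R0} interchanges two distant letters and \Move{RIII} applies a braid relation,
\[
\sigma_i\sigma_j=\sigma_j\sigma_i \quad (|i-j|\ge 2),
\qquad
\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},
\]
so a tame annular $N$-graph may be regarded as the trace, drawn radially on $\mathbb{A}$, of a sequence of such substitutions applied to the braid word.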
\begin{figure}[ht] \subfigure[Annular $N$-graph \Move{R0}]{\makebox[0.45\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=0.7]; \draw [thick] (0,0) circle [radius=1.5]; \draw [thick, dotted] (260:1) arc (260:280:1); \draw [thick, blue] (120:0.7) to[out=120, in=-120] (60:1.5); \draw [thick, yellow] (60:0.7) to[out=60,in=-60] (120:1.5); \draw [thick, black] (30:0.7) -- (30:1.5) (150:0.7) -- (150:1.5) (0:0.7) -- (0:1.5) (180:0.7) -- (180:1.5); \node[below] at (0,-1.5) {$\ngraphfont{G}_{\Move{R0}}$}; \end{tikzpicture} \cdot \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=0.7]; \draw [thick, black] (30:0.5) -- (30:0.7) (150:0.5) -- (150:0.7) (0:0.5) -- (0:0.7) (180:0.5) -- (180:0.7); \draw [thick, yellow] (60:0.5) -- (60:0.7); \draw [thick, blue] (120:0.5) -- (120:0.7) ; \draw [thick, dotted] (250:0.6) arc (250:290:0.6); \draw [double] (0,0) circle [radius=0.5]; \node[below] at (0,-1.5) {$\ngraphfont{G}$}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=1.5]; \draw [thick, dotted] (260:1) arc (260:280:1); \draw [thick, blue] (120:0.7) to[out=120, in=-120] (60:1.5); \draw [thick, yellow] (60:0.7) to[out=60,in=-60] (120:1.5); \draw [thick, black] (30:0.7) -- (30:1.5) (150:0.7) -- (150:1.5) (0:0.7) -- (0:1.5) (180:0.7) -- (180:1.5); \draw [thick, black] (30:0.5) -- (30:0.7) (150:0.5) -- (150:0.7) (0:0.5) -- (0:0.7) (180:0.5) -- (180:0.7); \draw [thick, yellow] (60:0.5) -- (60:0.7); \draw [thick, blue] (120:0.5) -- (120:0.7) ; \draw [double] (0,0) circle [radius=0.5]; \node[below] at (0,-1.5) {$\ngraphfont{G}_{\Move{R0}}\cdot\ngraphfont{G}$}; \end{tikzpicture}$ }} \subfigure[Annular $N$-graph \Move{RIII}]{\makebox[0.45\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=0.7]; \draw [thick] (0,0) circle [radius=1.5]; \draw [thick, dotted] (260:1) arc (260:280:1); \draw [thick, blue] (60:0.7) to[out=60, in=-30] (90:1.1) (120:0.7) to[out=120, in=-150] (90:1.1) -- (90:1.5); \draw [thick, red] (90:0.7) -- (90:1.1) to[out=150,in=-60] (120:1.5) (90:1.1) to[out=30,in=-120] (60:1.5); \draw [thick, black] (30:0.7) -- (30:1.5) (150:0.7) -- (150:1.5) (0:0.7) -- (0:1.5) (180:0.7) -- (180:1.5); \node[below] at (0,-1.5) {$\ngraphfont{G}_{\Move{RIII}}$}; \end{tikzpicture} \cdot \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=0.7]; \draw [thick, black] (30:0.5) -- (30:0.7) (150:0.5) -- (150:0.7) (0:0.5) -- (0:0.7) (180:0.5) -- (180:0.7); \draw [thick, red] (90:0.5) -- (90:0.7); \draw [thick, blue] (60:0.5) -- (60:0.7) (120:0.5) -- (120:0.7) ; \draw [thick, dotted] (250:0.6) arc (250:290:0.6); \draw [double] (0,0) circle [radius=0.5]; \node[below] at (0,-1.5) {$\ngraphfont{G}$}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-.5ex,scale=0.75] \draw [thick] (0,0) circle [radius=1.5]; \draw [thick, dotted] (260:1) arc (260:280:1); \draw [thick, blue] (60:0.7) to[out=60, in=-30] (90:1.1) (120:0.7) to[out=120, in=-150] (90:1.1) -- (90:1.5); \draw [thick, red] (90:0.7) -- (90:1.1) to[out=150,in=-60] (120:1.5) (90:1.1) to[out=30,in=-120] (60:1.5); \draw [thick, black] (30:0.7) -- (30:1.5) (150:0.7) -- (150:1.5) (0:0.7) -- (0:1.5) (180:0.7) -- (180:1.5); \draw [thick, black] (30:0.5) -- (30:0.7) (150:0.5) -- (150:0.7) (0:0.5) -- (0:0.7) (180:0.5) -- (180:0.7); \draw [thick, red] (90:0.5) -- (90:0.7); \draw [thick, blue] (60:0.5) -- (60:0.7) (120:0.5) -- (120:0.7) ; \draw [double] (0,0) circle [radius=0.5]; 
\node[below] at (0,-1.5) {$\ngraphfont{G}_{\Move{RIII}}\cdot\ngraphfont{G}$};
\end{tikzpicture}$
}}
\caption{$\partial$-Legendrian isotopy and elementary annular $N$-graphs}
\label{fig:elementary annulus N-graph}
\end{figure}
\begin{example}
A \emph{rotational annular $N$-graph}, which has no vertices and rotates by a certain angle as depicted below, is tame.
\[
\begin{tikzpicture}[baseline=-.5ex, scale=0.75]
\draw [thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [rotate around={60:(0,0)},thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [rotate around={120:(0,0)},thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [rotate around={180:(0,0)},thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [rotate around={240:(0,0)},thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [rotate around={300:(0,0)},thick, red] (0.5*1/2,{1/2*sqrt(3)/2}) to[out=60,in=180] (1.5,0);
\draw [thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [rotate around={60:(0,0)}, thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [rotate around={120:(0,0)}, thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [rotate around={180:(0,0)}, thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [rotate around={240:(0,0)}, thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [rotate around={300:(0,0)}, thick, blue] ({1.5*sqrt(3)/2},{1.5*1/2}) to[out=-120, in=90] (0,0.5);
\draw [thick] (0,0) circle [radius=1.5];
\draw [thick] (0,0) circle [radius=0.5];
\end{tikzpicture}
\]
It is known that the rotational annular $N$-graph acts on the set of $N$-graphs for the Legendrian torus link $\lambda(n,m)$ of maximal Thurston--Bennequin number. This type of annular $N$-graph plays a crucial role in producing a sequence of distinct exact Lagrangian fillings of positive braid Legendrian links, see \cite{Kal2006, CG2020, GSW2020b}.
\end{example}
\begin{definition}\label{def:boundary Legendrian isotopic}
We say that two $N$-graphs $\ngraphfont{G}$ and $\ngraphfont{G}'$ with $\partial\ngraphfont{G}=\lambda_1$ and $\partial\ngraphfont{G}'=\lambda_2$ are \emph{$\partial$-Legendrian isotopic} if there exists a tame annular $N$-graph $\ngraphfont{G}_{\lambda_2\lambda_1}$ such that $[\ngraphfont{G}']= [\ngraphfont{G}_{\lambda_2\lambda_1}\ngraphfont{G}]$.
\end{definition}
\subsubsection{Stabilizations}
For a positive $N$-braid $\beta$ in $J^1\R^1$, a \emph{stabilization} $S(\beta)$ is a positive $(N+1)$-braid which satisfies the following:
\begin{enumerate}
\item Closures of $\beta$ and $S(\beta)$ are Legendrian isotopic in $\mathbb{S}^3$:
\[
\lambda_\beta \cong \lambda_{S(\beta)}.
\]
\item The braid $\beta$ can be recovered by forgetting a strand from $S(\beta)$.
\end{enumerate}
More precisely, the second condition is as follows: let $q_{i+}(\beta)$ and $q_{i-}(\beta)$ be braids obtained from~$\beta$ by forgetting the $i$-th strand from the left and from the right, respectively. Then $S(\beta)$ has a decomposition
\[
S(\beta)=\beta_1\sigma_j\beta_2
\]
and there exists an index $1\le i\le N+1$ such that the $i$-th strands from the left and from the right of $S(\beta)$ meet precisely at the crossing $\sigma_j$ in the middle of the decomposition $S(\beta)=\beta_1\sigma_j\beta_2$, and moreover
\[
\beta= q_{i+}(\beta_1) q_{i-}(\beta_2).
\] The most typical example of a stabilization is as follows: let $\beta_0$ be a positive $N$-braid and $\beta=\Delta_N\beta_0\Delta_N$, where $\Delta_N$ is the half-twist $N$-braid. Then it determines a Legendrian link $\lambda_\beta$ uniquely up to Legendrian isotopy, but the converse is not true. Indeed, there are infinitely many pairwise distinct positive braids whose rainbow closures are Legendrian isotopic to $\lambda_\beta$ in $\mathbb{S}^3$. In particular, for a positive $N$-braid $\beta_0$, a \emph{stabilization} $S(\beta_0)$ is a positive $(N+1)$-braid defined by $S(\beta_0) = \beta_0\sigma_N$, where $\beta_0$ in $S(\beta_0)$ is regarded as an $(N+1)$-braid by adding a trivial $(N+1)$-st strand. Let $S(\beta) = \Delta_{N+1}S(\beta_0)\Delta_{N+1}$. Then \begin{align*} S(\beta) &= \Delta_{N+1} (\beta_0 \sigma_N) \Delta_{N+1} =(\sigma_1\dots\sigma_N) (\Delta_N \beta_0 \Delta_N) (\sigma_N\dots\sigma_1) \sigma_1 \mathrel{\dot{=}} \beta (\sigma_N\dots\sigma_2\sigma_1^3\sigma_2\dots\sigma_N), \end{align*} where $\mathrel{\dot{=}}$ means the same up to cyclic permutation of braid words. \begin{figure}[ht] \[ \setlength1pc{2pt} \begin{array}{rcccc} \lambda_\beta&=& \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw (-1,-1.125) rectangle node[yshift=-.5ex] {$\beta_0$} (0.5,-0.375); \draw (-3,0) to[out=0,in=180] (-1,1) -- (1,1) to[out=0,in=180] (3,0); \draw (-3,0) to[out=0,in=180] (-1,-1) (0.5,-1) -- (1,-1) to[out=0,in=180] (3,0); \draw (-2.5,0) to[out=0,in=180] (-1,0.75) -- (1,0.75) to[out=0,in=180] (2.5,0); \draw (-2.5,0) to[out=0,in=180] (-1,-0.75) (0.5,-0.75) -- (1,-0.75) to[out=0,in=180] (2.5,0); \draw (-2,0) to[out=0,in=180] (-1,0.5) -- (1,0.5) to[out=0,in=180] (2,0); \draw (-2,0) to[out=0,in=180] (-1,-0.5) (0.5,-0.5) -- (1,-0.5) to[out=0,in=180] (2,0); \end{tikzpicture} &=& \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw[fill=white] (-1,-1.125) rectangle node[yshift=-.5ex] {$\beta_0$} (0.5,-0.375); \draw (-3.5,0.5) to[out=0,in=180] (-2.5, 1) -- (2.5,1) to[out=0,in=180] (3.5,0.5); \draw (-3.5,0.5) to[out=0,in=180] (-2.5,-0.5) -- (-1.75,-0.5) to[out=0,in=180] (-1, -1) (0.5,-1) to[out=0,in=180] (1.25, -0.5) to[out=0,in=180] (2,-0.5) -- (2.5,-0.5) to[out=0,in=180] (3.5,0.5); \draw (-3.5,0.25) to[out=0,in=180] (-2.5, 0.75) -- (2.5,0.75) to[out=0,in=180] (3.5,0.25); \draw (-3.5,0.25) to[out=0,in=180] (-2.5,-0.75) -- (-1.75,-0.75) to[out=0,in=180] (-1.375,-1) to[out=0,in=180] (-1, -0.75) (0.5,-0.75) to[out=0,in=180] (0.875, -1) to[out=0,in=180] (1.25,-0.75) -- (2.5,-0.75) to[out=0,in=180] (3.5,0.25); \draw (-3.5,0) to[out=0,in=180] (-2.5, 0.5) -- (2.5,0.5) to[out=0,in=180] (3.5,0); \draw (-3.5,0) to[out=0,in=180] (-2.5,-1) -- (-1.75,-1) to[out=0,in=180] (-1,-0.5) (0.5,-0.5) to[out=0,in=180] (1.25, -1) -- (2.5,-1) to[out=0,in=180] (3.5,0); \draw[dashed] (-1.75, -1.25) rectangle node[below=2.5ex] {$\beta$} (1.25, -0.25); \end{tikzpicture} \\ \lambda_{S(\beta)}&=& \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw (-1,-1.125) rectangle node[yshift=-.5ex] {$\beta_0$} (0.5,-0.375); \draw (-3,0) to[out=0,in=180] (-1,1) -- (1,1) to[out=0,in=180] (3,0); \draw (-3,0) to[out=0,in=180] (-1,-1) (0.5,-1) -- (1,-1) to[out=0,in=180] (3,0); \draw (-2.5,0) to[out=0,in=180] (-1,0.75) -- (1,0.75) to[out=0,in=180] (2.5,0); \draw (-2.5,0) to[out=0,in=180] (-1,-0.75) (0.5,-0.75) -- (1,-0.75) to[out=0,in=180] (2.5,0); \draw (-2,0) to[out=0,in=180] (-1,0.5) -- (1,0.5) to[out=0,in=180] (2,0); \draw (-2,0) to[out=0,in=180] (-1,-0.5) (1,-0.5) to[out=0,in=180] (2,0); \draw[thick,blue] (-1.5,0) to[out=0,in=180] 
(-1,0.25) -- (1,0.25) to[out=0,in=180] (1.5,0);
\draw[thick,blue] (-1.5,0) to[out=0,in=180] (-1,-0.25) (1,-0.25) to[out=0,in=180] (1.5,0);
\draw[thick,blue] (-1,-0.25) -- (0.5,-0.25) to[out=0,in=180] (1,-0.5) (0.5,-0.5) to[out=0,in=180] (1,-0.25);
\draw[dashed] (-1.25,-1.25) rectangle node[below=3ex] {$S(\beta_0)$} (1.25, 0);
\end{tikzpicture}
&=
\begin{tikzpicture}[baseline=-.5ex,scale=0.8]
\draw[fill=white] (-1,-1.125) rectangle node[yshift=-.5ex] {$\beta_0$} (0.5,-0.375);
\draw (-3.5,0.5) to[out=0,in=180] (-2.5, 1) -- (2.5,1) to[out=0,in=180] (3.5,0.5);
\draw (-3.5,0.5) to[out=0,in=180] (-2.5,-0.25) to[out=0,in=180] (-1.75, -0.5) to[out=0,in=180] (-1,-1) (0.5,-1) to[out=0,in=180] (1.25,-0.5) to[out=0,in=180] (2,-0.25) to[out=0,in=180] (2.5,-0.25) to[out=0,in=180] (3.5,0.5);
\draw (-3.5,0.25) to[out=0,in=180] (-2.5, 0.75) -- (2.5,0.75) to[out=0,in=180] (3.5,0.25);
\draw (-3.5,0.25) to[out=0,in=180] (-2.5,-0.5) to[out=0,in=180] (-1.75,-0.75) to[out=0,in=180] (-1.375, -1) to[out=0,in=180] (-1, -0.75) (0.5,-0.75) to[out=0,in=180] (0.875, -1) to[out=0,in=180] (1.25, -0.75) to[out=0,in=180] (2,-0.5) -- (2.5,-0.5) to[out=0,in=180] (3.5,0.25);
\draw (-3.5,0) to[out=0,in=180] (-2.5, 0.5) -- (2.5,0.5) to[out=0,in=180] (3.5,0);
\draw (-3.5,0) to[out=0,in=180] (-2.5,-0.75) to[out=0,in=180] (-1.75,-1) to[out=0,in=180] (-1,-0.5) (0.5, -0.5) to[out=0,in=180] (1.25, -1) to[out=0,in=180] (2,-0.75) (2.5,-0.75) to[out=0,in=180] (3.5,0);
\draw[thick,blue] (-3.5,-0.25) to[out=0,in=180] (-2.5, 0.25) -- (2.5,0.25) to[out=0,in=180] (3.5,-0.25);
\draw[thick,blue] (-3.5,-0.25) to[out=0,in=180] (-2.5, -1) to[out=0,in=180] (-1.75, -0.25) -- (1.25, -0.25) to[out=0,in=180] (2,-1) to[out=0,in=180] (2.5,-0.75) (2,-0.75) to[out=0,in=180] (2.5,-1) to[out=0,in=180] (3.5,-0.25);
\draw[dashed] (-2.5,-1.25) rectangle node[below=3ex] {$S(\beta)$} (2.5,0);
\end{tikzpicture}
\end{array}
\]
\caption{A stabilization $\lambda_{S(\beta)}$ of a Legendrian link $\lambda_\beta$}
\label{figure:stabilization of Legendrian}
\end{figure}
The Legendrian $\lambda_{S(\beta)}$ does depend on the braid word $\beta_0$. For example, for each pair of positive $N$-braids $\beta_0^{(1)}$ and $\beta_0^{(2)}$ with $\beta_0=\beta_0^{(1)}\beta_0^{(2)}$, let $\beta_0'=\beta_0^{(2)}\beta_0^{(1)}$ and $\beta' = \Delta_N\beta_0'\Delta_N$. Then the two Legendrian links $\lambda_\beta$ and $\lambda_{\beta'}$ are Legendrian isotopic but $\lambda_{S(\beta)}$ and $\lambda_{S(\beta')}$ are \emph{not} Legendrian isotopic in general. Therefore a stabilization of a Legendrian link $\lambda$ which is a closure of a positive braid may not be uniquely determined.
\begin{example}\label{example:stabilization of An}
Let $\beta(a,b,c)=\sigma_2\sigma_1^{a+1}\sigma_2\sigma_1^{b+1}\sigma_2\sigma_1^{c+1}$ and $\beta(\dynkinfont{A}_n)=\sigma_1^{n+3}$. Since $\beta(\dynkinfont{A}_n)=\sigma_1^{n+3} = \sigma_1^{c}\sigma_1^{b+2}$ whenever $b+c-1=n$, we have
\begin{align*}
S(\beta(\dynkinfont{A}_n))&=(\sigma_1\sigma_2)\sigma_1^{c}\sigma_1^{b+2}(\sigma_2\sigma_1)\sigma_1\mathrel{\dot{=}}\sigma_1^{b+2} (\sigma_2\sigma_1^3\sigma_2) \sigma_1^{c}
=\sigma_1^{b+1}\Delta_3 \sigma_1 \Delta_3 \sigma_1^{c-1}\\
&=\sigma_1^{b+1} \sigma_2\Delta_3^2\sigma_1^{c-1}
=\sigma_1^{b+1}\sigma_2\sigma_1^{c+1}\sigma_2\sigma_1^2\sigma_2
\mathrel{\dot{=}}\beta(1,b,c).
\end{align*}
Therefore $\beta(1,b,c)$ is a stabilization of $\beta(\dynkinfont{A}_n)$ for each $b+c-1=n$.
\end{example}
Let us define
\[
\tilde\beta_0(a,b,c)= (\sigma_2\sigma_{1,3}\sigma_2) \sigma_2^{a-1}\sigma_1^{b-1}\sigma_3^{c-1}\quad\text{ and }\quad
\tilde\beta(a,b,c)= (\sigma_2\sigma_{1,3}\sigma_2)^3 \sigma_2^{a-1} \sigma_1^{b+1}\sigma_3^{c+1},
\]
where $\sigma_{1,3}$ is a $4$-braid isotopic to $\sigma_1\sigma_3$ (or equivalently, $\sigma_3\sigma_1$) such that the two crossings $\sigma_1$ and $\sigma_3$ occur simultaneously. Then one can easily show that the closure of $\tilde\beta(a,b,c)$ is Legendrian isotopic to that of $\Delta_4\tilde\beta_0(a,b,c)\Delta_4$.
\begin{example}
The Legendrian $\lambda_{\tilde\beta_0(a,b,c)}$ is a stabilization of $\lambda_{\beta_0(a,b,c)}$ since
\begin{align*}
S(\beta_0(a,b,c)) &= \sigma_1\sigma_2^a \sigma_1^{b-1} \sigma_2 {\color{blue}\sigma_3}\sigma_2^{c-1}
=\sigma_1\sigma_2^a \sigma_1^{b-1}\sigma_3^{c-1} \sigma_2\sigma_3
\mathrel{\dot{=}}(\sigma_2 \sigma_{1,3} \sigma_2) \sigma_2^{a-1} \sigma_1^{b-1}\sigma_3^{c-1}
=\tilde\beta_0(a,b,c).
\end{align*}
\end{example}
In particular, for $b=c$, we have $4$-braids
\[
\tilde\beta_0(a,b,b)= (\sigma_2\sigma_{1,3}\sigma_2) \sigma_2^{a-1}\sigma_{1,3}^{b-1}\quad\text{ and }\quad
\tilde\beta(a,b,b)= (\sigma_2\sigma_{1,3}\sigma_2)^3 \sigma_2^{a-1}\sigma_{1,3}^{b+1},
\]
and denote $\tilde\beta(2,2,n-1)$ and $\tilde\beta(2,3,3)$ by $\tilde\beta(\dynkinfont{D}_{n+1})$ and $\tilde\beta(\dynkinfont{E}_6)$, respectively.
Recall the conjugation on $\bbC^2$, which turns links upside down; in terms of braid words, it interchanges $\sigma_i$ and $\sigma_{N-i}$ for each $N$-braid. Hence, for $4$-braids, it preserves $\sigma_{1,3}$. Therefore $\tilde\beta(a,b,b)$ is invariant under conjugation and so is $\lambda_{\tilde\beta(a,b,b)}$:
\begin{lemma}
The Legendrian $\lambda_{\tilde\beta(a,b,b)}$ is invariant under conjugation.
\end{lemma}
\begin{corollary}\label{corollary:invariance under conjugation}
The Legendrians $\lambda_{\tilde\beta(\dynkinfont{D}_{n+1})}$ and $\lambda_{\tilde\beta(\dynkinfont{E}_6)}$ are invariant under conjugation.
\end{corollary}
On the other hand, a stabilization $\lambda_{S(\beta)}$ of $\lambda_\beta$ will be represented by dots of $N$ colors in $\mathbb{S}^1$ while $\lambda_\beta$ uses only $(N-1)$ colors.
That is,
\[
\lambda_{S(\beta)} \longleftrightarrow
\begin{tikzpicture}[baseline=-.5ex, scale=0.6]
\draw (0,0) circle (2);
\curlybrace[]{90}{270}{2.2};
\draw (180:2.2) node[left] {$\beta$};
\draw[fill, violet] (75:2) circle (2pt) (-75:2) circle (2pt);
\draw[fill, yellow] (45:2) circle (2pt) (-45:2) circle (2pt);
\draw[fill, red] (30:2) circle (2pt) (-30:2) circle (2pt);
\draw[fill, blue] (15:2) circle (2pt) (-15:2) circle (2pt) (0:2) circle (2pt);
\draw (60:2.2) node[rotate=-30] {$\dots$} (-60:2.2) node[rotate=30] {$\dots$};
\end{tikzpicture}\subset J^1\mathbb{S}^1
\]
Then one can transfer an $N$-graph $\ngraphfont{G}$ for $\lambda_\beta$ into an $(N+1)$-graph $S(\ngraphfont{G})$ for $\lambda_{S(\beta)}$ as follows:
\begin{center}
\begin{tikzcd}
\ngraphfont{G}=\begin{tikzpicture}[baseline=-.5ex]
\draw [thick] (0,0) circle [radius=1];
\node at (0,0) {$\ngraphfont{G}_{(1,\dots,N)}$};
\end{tikzpicture} \arrow[leftrightarrow,"\mathrm{(S)}",r]&
\begin{tikzpicture}[baseline=-.5ex]
\draw (0,0) circle (1.5);
\draw[double] (0,1.5) -- (0,-1.5) arc (-90:-270:1.5);
\draw (-0.75,0) node {$\ngraphfont{G}_{(1,\dots,N)}$};
\draw[violet] (-75:1.5) to[out=105,in=-90] (0.25,0) to[out=90,in=-105] (75:1.5);
\draw[yellow] (-45:1.5) to[out=135,in=-90] (0.75,0) to[out=90,in=-135] (45:1.5);
\draw[red] (-30:1.5) to[out=150,in=-90] (1,0) to[out=90,in=-150] (30:1.5);
\draw (0.5,0) node {$\scriptstyle\cdots$};
\draw[blue] (-15:1.5) to[out=135,in=-90] (1.25,0) to[out=90,in=-150] (15:1.5) (1.25,0) -- (1.5,0);
\draw[fill,blue] (1.25,0) circle (1pt);
\end{tikzpicture}=S(\ngraphfont{G})
\end{tikzcd}
\end{center}
\subsubsection{Annular $N$-graphs and Legendrian loops}
Let $\beta, \beta_+, \beta_-\subset J^1\R^1$ be Legendrian positive $N$-braids. We denote by $\mathscr{N}\mathsf{graphs}(\beta)$ and $\mathscr{N}\mathsf{graphs}(\beta_+, \beta_-)$ the sets of equivalence classes of $N$-graphs on $\mathbb{D}^2$ and $\mathbb{A}$ satisfying boundary conditions given by the closure $\lambda_\beta$ or a pair of closures $(\lambda_{\beta_+}, \lambda_{\beta_-})$.
\begin{align*}
\mathscr{N}\mathsf{graphs}(\beta)&\colonequals \{[\ngraphfont{G}]\mid \ngraphfont{G}\text{ is an $N$-graph on $\mathbb{D}^2$ of type $\lambda_\beta$}\}\\
\mathscr{N}\mathsf{graphs}(\beta_+, \beta_-)&\colonequals \{[\ngraphfont{G}]\mid \ngraphfont{G}\text{ is an $N$-graph on $\mathbb{A}$ of type $(\lambda_{\beta_+}, \lambda_{\beta_-})$}\}.
\end{align*}
Here we implicitly assume that the starting point of each braid word is specified. It is then direct to check that these sets are invariant, up to bijection, under cyclic rotation of the braid words.
More precisely, for $N$-braids $\beta^{(1)}, \beta^{(2)}$ and $\beta^{(1)}_\pm, \beta^{(2)}_\pm$, the closures of $\beta^{(1)}\beta^{(2)}$ and $\beta^{(2)}\beta^{(1)}$ are identical in $J^1\mathbb{S}^1$ and there are one-to-one correspondences between sets of $N$-graphs
\begin{align}
\begin{split}
\mathscr{N}\mathsf{graphs}\left(\beta^{(1)}\beta^{(2)}\right) &\cong \mathscr{N}\mathsf{graphs}\left(\beta^{(2)}\beta^{(1)}\right),
\end{split}\\
\begin{split}
\mathscr{N}\mathsf{graphs}\left(\beta_+, \beta^{(1)}_-\beta^{(2)}_-\right) &\cong \mathscr{N}\mathsf{graphs}\left(\beta_+, \beta^{(2)}_-\beta^{(1)}_-\right),\\
\mathscr{N}\mathsf{graphs}\left(\beta^{(1)}_+\beta^{(2)}_+, \beta_-\right) &\cong \mathscr{N}\mathsf{graphs}\left(\beta^{(2)}_+\beta^{(1)}_+, \beta_-\right).
\end{split}
\label{equation:cyclic rotation of word}
\end{align}
Suppose that $\ngraphfont{G}_1\in\mathscr{N}\mathsf{graphs}(\beta_2,\beta_1)$ and $\ngraphfont{G}_2\in\mathscr{N}\mathsf{graphs}(\beta_3,\beta_2)$. Then the two $N$-graphs can be stacked in a natural way to obtain an annular $N$-graph, denoted by~$\ngraphfont{G}_2\ngraphfont{G}_1\in\mathscr{N}\mathsf{graphs}(\beta_3, \beta_1)$. On the other hand, for $\ngraphfont{G}\in\mathscr{N}\mathsf{graphs}(\beta)$ and $\ngraphfont{G}_1\in\mathscr{N}\mathsf{graphs}(\beta', \beta)$, the concatenation $\ngraphfont{G}_1\ngraphfont{G}\in\mathscr{N}\mathsf{graphs}(\beta')$ is well-defined by gluing along the boundary $\lambda_\beta$. Hence, we have two natural maps
\begin{align*}
\mathscr{N}\mathsf{graphs}(\beta_3,\beta_2)\times\mathscr{N}\mathsf{graphs}(\beta_2,\beta_1)&\to \mathscr{N}\mathsf{graphs}(\beta_3,\beta_1),\\
\mathscr{N}\mathsf{graphs}(\beta', \beta)\times\mathscr{N}\mathsf{graphs}(\beta) &\to \mathscr{N}\mathsf{graphs}(\beta').
\end{align*}
In particular, for each $\partial$-Legendrian isotopy from $\lambda'=\lambda_{\beta'}$ to $\lambda=\lambda_\beta$, we have a tame annular $N$-graph $\ngraphfont{G}_{\lambda'\lambda}\in\mathscr{N}\mathsf{graphs}(\beta',\beta)$. Moreover, we also have a tame annular $N$-graph $\ngraphfont{G}^{-1}_{\lambda'\lambda}$ obtained by flipping the annulus inside out, corresponding to the inverse isotopy from $\lambda$ to $\lambda'$. Hence, we have two maps that are inverse to each other
\[
\mathscr{N}\mathsf{graphs}(\beta) \to \mathscr{N}\mathsf{graphs}(\beta'),\quad\text{ and }\quad
\mathscr{N}\mathsf{graphs}(\beta')\to \mathscr{N}\mathsf{graphs}(\beta),
\]
defined by
\[
\ngraphfont{G}\mapsto \ngraphfont{G}_{\lambda'\lambda}\cdot\ngraphfont{G},\quad\text{ and }\quad
\ngraphfont{G}'\mapsto \ngraphfont{G}^{-1}_{\lambda'\lambda}\cdot\ngraphfont{G}',
\]
respectively.
Let $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ be the subset of tame annular $N$-graphs of type $(\beta,\beta)$.
\begin{lemma}
Let $\beta$ be a Legendrian positive $N$-braid. The set $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ becomes a group under concatenation, which acts on the set $\mathscr{N}\mathsf{graphs}(\beta)$.
\end{lemma}
\begin{proof}
It is easy to see that the set $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ is closed under concatenation, which is associative. The trivial $\partial$-Legendrian isotopy gives us the identity annular $N$-graph. Finally, for each $\ngraphfont{G}\in\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$, the $N$-graph $\ngraphfont{G}^{-1}$ plays the role of the inverse of $\ngraphfont{G}$ due to the Moves \Move{I} and \Move{V} of $N$-graphs in Figure~\ref{fig:move1-6}.
Hence $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ becomes a group acting on the set $\mathscr{N}\mathsf{graphs}(\beta)$ by concatenation, and so we are done.
\end{proof}
\begin{definition}[Legendrian loop]\label{definition:Legendrian loops}
Let $\lambda \subset (\R^3, \xi_{\rm st})$ be a Legendrian link and $\cL(\lambda)$ be the space of Legendrian links isotopic to $\lambda$. A {\em Legendrian loop} $\vartheta$ is a continuous map $\vartheta\colon(\mathbb{S}^1,{\rm pt})\to (\cL(\lambda), \lambda)$, and it is said to be \emph{tame} if the Legendrian $\vartheta(\theta)$ is a closure of a positive braid for each $\theta\in\mathbb{S}^1$.
\end{definition}
\begin{remark}
One can regard each Legendrian loop $\vartheta$ for $\lambda$ as an element of the fundamental group $\pi_1(\cL(\lambda), \lambda)$.
\end{remark}
Let $\lambda$ be the closure of a positive braid $\beta$. Then each tame Legendrian loop for $\lambda$ corresponds to a $\partial$-Legendrian isotopy from $\lambda$ to $\lambda$ and can be regarded as an element $\ngraphfont{G}_\vartheta$ in $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$. Conversely, any element $\ngraphfont{G}$ in $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ obviously defines a tame Legendrian loop $\vartheta_{\ngraphfont{G}}$. In summary, we have the following lemma.
\begin{lemma}\label{lemma:Legendrian loops and tame annular Ngraphs}
Let $\beta$ be a Legendrian positive $N$-braid. Then there is a one-to-one correspondence between $\mathscr{N}\mathsf{graphs}_0(\beta,\beta)$ and the set of homotopy classes of tame Legendrian loops for $\lambda=\lambda_\beta$. In particular, each tame Legendrian loop acts on $\mathscr{N}\mathsf{graphs}(\beta)$.
\end{lemma}
\subsection{One-cycles in Legendrian weaves}\label{sec:1-cycles in Legendrian weaves}
Let us recall from \cite{CZ2020} how to construct a seed from an $N$-graph~$\ngraphfont{G}$. Each one-cycle in $\Lambda(\ngraphfont{G})$ corresponds to a vertex of the quiver, and the monodromy along that cycle gives a coordinate function at that vertex. The quiver is obtained from the intersection data among one-cycles.
Moreover, there is an operation on $N$-graphs, called \emph{Legendrian mutation}, which is a counterpart of the mutation in the cluster structure. The Legendrian mutation is crucial in constructing and distinguishing $N$-graphs. In turn, these will give as many Lagrangian fillings of Legendrian links as seeds.
Let $\ngraphfont{G}\subset \mathbb{D}^2$ be a free $N$-graph and $\Lambda(\ngraphfont{G})$ be the induced Legendrian weave. We express one-cycles of $\Lambda(\ngraphfont{G})$ in terms of subgraphs of $\ngraphfont{G}$.
\begin{definition}
A subgraph $\sfT$ of a nondegenerate $N$-graph $\ngraphfont{G}$ is said to be \emph{admissible} if, at each vertex, it looks locally like one of the pictures depicted in Figure~\ref{fig:T cycle}. For a degenerate $N$-graph $\ngraphfont{G}$, a subgraph $\sfT$ is \emph{admissible} if so is its perturbation as a subgraph of the perturbation of $\ngraphfont{G}$. See Figure~\ref{figure:perturbation of admissible subgraphs}.
For an admissible subgraph $\sfT\subset\ngraphfont{G}$, let $\ell(\sfT)\subset\mathbb{D}^2$ be an oriented, immersed, labelled loop given by gluing paths whose local pictures look as depicted in Figure~\ref{fig:T cycle}.
\end{definition} \begin{figure}[ht] \subfigure[A trivalent vertex: case 1 \label{figure:loop near vertex1}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \clip (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to (2,0); \draw [blue, thick] (135:2) -- (0:0)--(0:2) (-135:2)--(0,0); \draw[->] (30:2) to[out=180,in=90] node[pos=0.5, above, sloped] {$\scriptstyle i+1$} node[pos=1, left] {$\scriptstyle i$} (-1,0); \draw[] (-1,0) to[out=-90,in=180] node[pos=0.5,below, sloped] {$\scriptstyle i+1$} (-30:2); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \end{tikzpicture} }}% \subfigure[A trivalent vertex: case 2 \label{figure:loop near vertex2}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \clip (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to (2,0) (-135:2)--(0,0); \draw [blue, thick] (135:2) -- (0:0)--(0:2) (-135:2)--(0,0); \draw[->] (30:2) to[out=180,in=90] node[pos=0.15, below] {$\scriptstyle i+1$} node[pos=1, above, sloped, rotate = 180] {$\scriptstyle i$} (-1,0); \draw[] (-1,0) to[out=-90,in=180] node[near end, above] {$\scriptstyle i+1$} (-30:2); \draw[->] (240:2) to[out=45,in=-45] node[pos=0.2, above,sloped] {$\scriptstyle i+1$} node[pos=1,above, sloped] {$\scriptstyle i$} (45:0.5); \draw[] (45:0.5) to[out=135,in=45] node[pos=0.85, below,sloped] {$\scriptstyle i+1$} (210:2); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \end{tikzpicture} }}% \subfigure[A trivalent vertex: case 3 \label{figure:loop near vertex3}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \clip (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to (2,0) (-135:2)--(0,0) (135:2) -- (0:0); \draw [blue, thick] (135:2) -- (0:0)--(0:2) (-135:2)--(0,0); \draw[->] (30:2) to[out=180,in=90] node[pos=0.2, below,sloped] {$\scriptstyle i+1$} node[pos=1, above, sloped, rotate = 180] {$\scriptstyle i$} (-1,0); \draw[] (-1,0) to[out=-90,in=180] node[pos=0.8, above,sloped] {$\scriptstyle i+1$} (-30:2); \draw[->] (250:2) to[out=45,in=-45] node[pos=0.2, above,sloped] {$\scriptstyle i+1$} node[pos=0.9,above, sloped] {$\scriptstyle i$} (45:0.5); \draw[] (45:0.5) to[out=135,in=45] node[pos=0.85, below,sloped] {$\scriptstyle i+1$} (210:2); \begin{scope} \draw[->] (150:2) to[out=-45,in=-135] node[pos=0.15, above,sloped] {$\scriptstyle i+1$} node[sloped, pos=1, below] {$\scriptstyle i$} (-45:0.25) ; \draw[] (-45:0.25) to[out=45,in=-45] node[pos=0.85, below,sloped] {$\scriptstyle i+1$} (120:2); \end{scope} \draw[thick,blue,fill=blue] (0,0) circle (0.05); \end{tikzpicture} }} \subfigure[A hexagonal vertex: case 1 \label{figure:loop near hexagon II1}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2]; \clip (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2,0) to (2,0); \draw[red, thick] (180:2) -- (0,0) -- (60:2) (0,0)-- (-60:2); \draw[blue, thick] (0:2) -- (0,0) -- (120:2) (0,0)-- (-120:2); \draw[thick,black,fill=white] (0,0) circle (0.05); \draw[->] (15:2) -- node[above, midway, pos=1]{$\scriptstyle i+2$} node[above, pos=0.3] {$\scriptstyle i+1$} (0,{2*sin(15)}); \draw[->] ({180+15}:2) -- node[below, pos=0.3]{$\scriptstyle i+2$} (0,{-2*sin(15)}) ; \draw[] (0,{2*sin(15)}) -- node[above, pos=0.7] 
{$\scriptstyle i+2$} ({180-15}:2); \draw[] (0,{-2*sin(15)}) -- node[below, midway, pos=0]{$\scriptstyle i+2$} node[below, pos=0.7] {$\scriptstyle i+1$} (-15:2); \end{tikzpicture} }} \subfigure[A hexagonal vertex: case 2 \label{figure:loop near hexagon II2}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \clip (0,0) circle [radius=2]; \begin{scope} \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:2); \draw[->] (340:2) to[out=160,in=-60] (30:0.75); \draw(270:0.75) to[out=0,in=140] (320:2); \end{scope} \begin{scope}[rotate=120] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:2); \draw[->] (340:2) to[out=160,in=-60] (30:0.75); \draw(270:0.75) to[out=0,in=140] (320:2); \end{scope} \begin{scope}[rotate=240] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:2); \draw[->] (340:2) to[out=160,in=-60] (30:0.75); \draw(270:0.75) to[out=0,in=140] (320:2); \end{scope} \node[rotate=0] at (300:1.4) {$\scriptstyle i+2$}; \node[rotate=-60] at (0:1.2) {$\scriptstyle i+2$}; \node[rotate=-60] at (60:1.2) {$\scriptstyle i+2$}; \node[rotate=60] at (120:1.2) {$\scriptstyle i+2$}; \node[rotate=60] at (180:1.2) {$\scriptstyle i+2$}; \node[rotate=0] at (240:1.4) {$\scriptstyle i+2$}; \draw[thick,black, fill=white] (0:0) circle (1.5pt); \end{tikzpicture} }} \subfigure[A hexagonal vertex: case 3 \label{figure:loop near hexagon II3}]{\makebox[.3\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \clip (0,0) circle [radius=2]; \begin{scope} \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:2); \draw (320:2) to[out=160,in=-60] (30:0.75); \draw[->] (340:2) to[out=140,in=0] (270:0.75); \end{scope} \begin{scope}[rotate=120] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:2); \draw (320:2) to[out=160,in=-60] (30:0.75); \draw[->] (340:2) to[out=140,in=0] (270:0.75); \end{scope} \begin{scope}[rotate=240] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:2); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:2); \draw (320:2) to[out=160,in=-60] (30:0.75); \draw[->] (340:2) to[out=140,in=0] (270:0.75); \end{scope} \node[rotate=-60] at (0:1) {$\scriptstyle i$}; \node[rotate=-60] at (60:1) {$\scriptstyle i$}; \node[rotate=60] at (120:1) {$\scriptstyle i$}; \node[rotate=60] at (180:1) {$\scriptstyle i$}; \node[rotate=0] at (250:1.1) {$\scriptstyle i$}; \node[rotate=0] at (290:1.1) {$\scriptstyle i$}; \node[rotate=-30] at (350:1.6) {$\scriptstyle i+1$}; \node[rotate=-30] at (310:1.6) {$\scriptstyle i+1$}; \node[rotate=30] at (230:1.6) {$\scriptstyle i+1$}; \node[rotate=30] at (190:1.6) {$\scriptstyle i+1$}; \node[rotate=0] at (65:1.6) {$\scriptstyle i+1$}; \node[rotate=0] at (115:1.6) {$\scriptstyle i+1$}; \draw[thick,black, fill=white] (0:0) circle (1.5pt); \end{tikzpicture}}} \caption{Local configurations on cycles and corresponding arcs of $\ngraphfont{G}\subset\mathbb{D}^2$} \label{fig:T cycle} \end{figure} \begin{figure} \[ \begin{tikzcd} 
\begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (-120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (0,0) -- (3,0); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (-120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (-120:3) -- (0,0); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (-120:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (-120:3) -- (0,0); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (120:3) -- (0,0); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] \\ \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-10:3) -- (-60:1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (10:3) -- (60:1); \draw[thick, blue, fill] (130:3) -- (-60:1) circle (2pt) (-110:3) -- (-60:1) (-10:3) -- (-60:1); \draw[thick, green, fill] (110:3) -- (60:1) circle (2pt) (-130:3) -- (60:1) (10:3) -- (60:1); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-10:3) -- (-60:1) (-110:3) -- (-60:1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (10:3) -- (60:1) (-130:3) -- (60:1); \draw[thick, blue, fill] (130:3) -- (-60:1) circle (2pt) (-110:3) -- (-60:1) (-10:3) -- (-60:1); \draw[thick, green, fill] (110:3) -- (60:1) circle (2pt) (-130:3) -- (60:1) (10:3) -- (60:1); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-10:3) -- (-60:1) (-110:3) -- (-60:1) (130:3) -- (-60:1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (10:3) -- (60:1) (-130:3) -- (60:1) (110:3) -- (60:1); \draw[thick, blue, fill] (130:3) -- (-60:1) circle (2pt) (-110:3) -- (-60:1) (-10:3) -- (-60:1); \draw[thick, green, fill] (110:3) -- (60:1) circle (2pt) (-130:3) -- (60:1) (10:3) -- (60:1); \end{tikzpicture} \end{tikzcd} \] \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-3,0) -- (3,0); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \begin{scope} \draw[Dble={blue and green},line width=2] (0,0) -- (-45:3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (135:3); 
\draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \end{scope} \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \begin{scope} \draw[Dble={blue and green},line width=2] (0,0) -- (-45:3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (135:3); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \end{scope} \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (-135:3) -- (45:3); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \begin{scope} \draw[Dble={blue and green},line width=2] (0,0) -- (-45:3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (135:3); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \end{scope} \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (-135:3) -- (0,0) (135:3) -- (0,0) (0:3) -- (0,0); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1/3] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor1, opacity=0.5, line width=5, line cap=round] (-3,0) -- (0,0) (0,-3)--(0,0); \draw[fill, red, thick] (-3,0) -- (3,0) (0,3)--(0,-3); \draw[Dble={blue and green},line width=2] (0,0) -- (-45:3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={blue and green},line width=2] (0,0) -- (135:3); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \draw[color=cyclecolor1, opacity=0.5, line width=7, line cap=round] (0,0)--(45:3); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] \\ \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1,0) -- (1,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (210:1) -- (180:1/2)--(90:1/2)--(60:1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (240:1) -- (270:1/2)--(0:1/2)--(30:1); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); 
\draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (210:1) -- (180:1/2)--(0:1) (150:1)--(180:1/2); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (240:1) -- (270:1/2)--(0:1/2)--(0:1) (120:1)--(90:1/2)--(0:1/2); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (180:1)--(180:1/2)--(90:1/2)--(60:1) (180:1/2) --(270:1/2)--(0:1/2)--(30:1) (270:1/2)--(270:1); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to (0,1/2) to (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to (0, -1/2) to (1/2,0); \draw [red, thick] (0,1) to (0,1/2); \draw [red, thick] (0,-1) to (0,-1/2); \draw [green, thick] (-1/2,{sqrt(3)/2}) to (0,1/2) to (0,-1/2) to (-1/2,-{sqrt(3)/2}); \draw [green, thick] (1/2,{sqrt(3)/2})--(0,1/2); \draw [green, thick] (1/2,-{sqrt(3)/2})--(0,-1/2); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{tikzpicture} \end{tikzcd} \] \caption{Local configurations on degenerate cycles and its perturbation.} \label{figure:perturbation of admissible subgraphs} \end{figure} The loop $\ell(\sfT)$ defines a unique lift $\tilde\ell(\sfT)\subset\Gamma(\ngraphfont{G})$ via $\pi_{\mathbb{D}^2}:\Gamma(\ngraphfont{G})\to\mathbb{D}^2$ so that each $s_j$-labelled arc in $\ell(\sfT)$ is contained in the $s_j$-th sheet of $\Gamma(\ngraphfont{G})$. Moreover, the immersed loop $\tilde\ell(\sfT)$ lifts uniquely to an embedded loop $\gamma(\sfT)$ in $\Lambda(\ngraphfont{G})$ via the front projection $\pi_F:\Lambda(\ngraphfont{G})\to\Gamma(\ngraphfont{G})$. 
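Schematically, the two lifts just described assemble into the chain
\[
\gamma(\sfT)\subset\Lambda(\ngraphfont{G})
\xrightarrow{\ \pi_F\ }
\tilde\ell(\sfT)\subset\Gamma(\ngraphfont{G})
\xrightarrow{\ \pi_{\mathbb{D}^2}\ }
\ell(\sfT)\subset\mathbb{D}^2,
\]
that is, $\pi_F(\gamma(\sfT))=\tilde\ell(\sfT)$ and $\pi_{\mathbb{D}^2}(\tilde\ell(\sfT))=\ell(\sfT)$.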
\begin{definition}[$\sfT$-cycle]\label{def:one-cycles} For an admissible subgraph $\sfT\subset\ngraphfont{G}$, we call the cycle $[\gamma(\sfT)]\in H_1(\Lambda(\ngraphfont{G});\Z)$ a \emph{$\sfT$-cycle}. \end{definition} \begin{example}[(Long) $\sfI$-cycles] For an edge $e$ of $\ngraphfont{G}$ connecting two trivalent vertices, let $\sfI(e)$ be the subgraph of $\ngraphfont{G}$ consisting of the single edge $e$. Then the cycle~$[\gamma(\sfI(e))]$ depicted in Figure~\ref{figure:I-cycle} is called an \emph{$\sfI$-cycle}. In general, a linear chain of edges $(e_1,e_2,\dots, e_n)$ satisfying \begin{itemize} \item $e_i$ connects a trivalent vertex and a hexagonal point for $i=1,n$; \item $e_i$ and $e_{i+1}$ meet at a hexagonal point in the opposite way (see Figure~\ref{figure:long I-cycle}) for $i=1,\dots, n-1$ \end{itemize} forms an admissible subgraph $\sfI(e_1,\dots, e_n)$, and the cycle $[\gamma(\sfI(e_1,\dots, e_n))]$ is called a \emph{long $\sfI$-cycle}. See Figure~\ref{figure:long I-cycle}. \end{example} \begin{example}[$\sfY$-cycles] Let $e_1,e_2,e_3$ be monochromatic edges joining a hexagonal point $h$ and trivalent vertices $v_i$ for $i=1,2,3$. Then the subgraph $\sfY(e_1,e_2,e_3)$ consisting of the three edges $e_1, e_2$ and $e_3$ is an admissible subgraph of $\ngraphfont{G}$ and it defines a cycle $[\gamma(\sfY(e_1,e_2,e_3))]$ called an \emph{upper} or \emph{lower}~\emph{$\sfY$-cycle} according to the relative position of the sheets that the edges represent. See Figures~\ref{figure:Y-cycle_1} and~\ref{figure:Y-cycle_2}. \end{example} \begin{figure}[ht] \subfigure[An $\sfI$-cycle $\gamma(\sfI(e))$\label{figure:I-cycle}]{\makebox[.4\textwidth]{ \begin{tikzpicture} \draw [dashed] (0,0) circle [radius=1.5] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,0) to (1/2,0); \draw [blue, thick] ({-3*sqrt(3)/4},3/4)--(-1/2,0); \draw [blue, thick] ({-3*sqrt(3)/4},-3/4)--(-1/2,0); \draw [blue, thick] ({3*sqrt(3)/4},3/4)--(1/2,0); \draw [blue, thick] ({3*sqrt(3)/4},-3/4)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0) node[above, midway] {$e$}; \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,blue,fill=blue] (1/2,0) circle (0.05); \draw[->] (1,0) node[right] {$\scriptstyle i$} to[out=90,in=0] (0,0.5) node[above] {$\scriptstyle i+1$} to[out=180,in=90] (-1,0) node[left] {$\scriptstyle i$} to[out=-90,in=180] (0,-0.5)node[below] {$\scriptstyle i+1$} to[out=0,in=-90] (1,0); \end{tikzpicture} }} \subfigure[A long $\sfI$-cycle $\gamma(\sfI(e_1,e_2))$\label{figure:long I-cycle}]{\makebox[.4\textwidth]{ \begin{tikzpicture} \draw[dashed] \boundellipse{0,0}{3}{1.5}; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1.5,0) to (1.5,0); \draw[red, thick] (0,0)--(1.35,1.35); \draw[red, thick] (0,0)--(1.35,-1.35); \draw[red, thick] (0,0)--(-1.5,0) node[above, midway] {$e_1$}; \draw[red, thick] (-1.5,0)--(-1.5-0.9,0.9); \draw[red, thick] (-1.5,0)--(-1.5-0.9,-0.9); \draw[blue, thick] (0,0)--(-1.35,1.35); \draw[blue, thick] (0,0)--(-1.35,-1.35); \draw[blue, thick] (0,0)--(1.5,0) node[above, midway] {$e_2$}; \draw[blue, thick] (1.5,0)--(1.5+0.9,0.9); \draw[blue, thick] (0,0)--(1.5,0)--(1.5+0.9,-0.9); \draw[thick,red,fill=red] (-1.5,0) circle (0.05); \draw[thick,blue,fill=blue] (1.5,0) circle (0.05); \draw[thick,black,fill=white] (0,0) circle (0.05); \draw[->] (2,0) node[right]{$\scriptstyle i$} to[out=90,in=0] (0,0.5) node[above]{$\scriptstyle i+2$} to[out=180,in=90] (-2,0) node[left]{$\scriptstyle i+1$} to[out=-90,in=180] (0,-0.5) node[below]{$\scriptstyle i+2$}
to[out=0,in=-90] (2,0); \node[above] at (1.5,0.5) {$\scriptstyle i+1$}; \node[below] at (1.5,-0.5) {$\scriptstyle i+1$}; \node[above] at (-1.5,0.5) {$\scriptstyle i+2$}; \node[below] at (-1.5,-0.5) {$\scriptstyle i+2$}; \end{tikzpicture}}} \subfigure[An upper $\sfY$-cycle $\gamma(\sfY(e_1,e_2,e_3))$\label{figure:Y-cycle_1}]{\makebox[.4\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \begin{scope} \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw[->] (270:0.5) to[out=0,in=-120] (330:1.5); \draw(330:1.5) to[out=60,in=-60] (30:0.5); \node[rotate=60] at (330:1.75) {$\scriptstyle i+1$}; \node[rotate=-30] at (290:1) {$\scriptstyle i+2$}; \node[rotate=-30] at (0:1) {$\scriptstyle i+2$}; \end{scope} \begin{scope}[rotate=120] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw[->] (270:0.5) to[out=0,in=-120] (330:1.5); \draw(330:1.5) to[out=60,in=-60] (30:0.5); \end{scope} \begin{scope}[rotate=240] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [blue, thick](0:0)--(270:2); \draw [red, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw[->] (270:0.5) to[out=0,in=-120] (330:1.5); \draw(330:1.5) to[out=60,in=-60] (30:0.5); \end{scope} \draw[thick, red, fill] (90:1) circle (1.5pt) (210:1) circle (1.5pt) (330:1) circle (1.5pt); \node at (90:1.75) {$\scriptstyle i+1$}; \node[rotate=-90] at (60:1) {$\scriptstyle i+2$}; \node[rotate=90] at (120:1) {$\scriptstyle i+2$}; \node[rotate=-60] at (210:1.75) {$\scriptstyle i+1$}; \node[rotate=30] at (180:1) {$\scriptstyle i+2$}; \node[rotate=30] at (250:1) {$\scriptstyle i+2$}; \draw[thick,black, fill=white] (0:0) circle (1.5pt); \end{tikzpicture}}} \subfigure[A lower $\sfY$-cycle $\gamma(\sfY(e_1,e_2,e_3))$\label{figure:Y-cycle_2}]{\makebox[.4\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,,scale=0.8] \draw [dashed] (0,0) circle [radius=2] \begin{scope} \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw (270:0.3) to[out=0,in=60] (330:1.5); \draw[->] (30:0.3) to[out=-60,in=-120] (330:1.5) ; \end{scope} \begin{scope}[rotate=120] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw (270:0.3) to[out=0,in=60] (330:1.5); \draw[->] (30:0.3) to[out=-60,in=-120] (330:1.5) ; \end{scope} \begin{scope}[rotate=240] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0:0) -- (-30:1); \draw [red, thick](0:0)--(270:2); \draw [blue, thick](0,0)--(330:1) (310:2)--(330:1)--(350:2); \draw (270:0.3) to[out=0,in=60] (330:1.5); \draw[->] (30:0.3) to[out=-60,in=-120] (330:1.5) ; \end{scope} \draw[thick, blue, fill] (90:1) circle (1.5pt) (210:1) circle (1.5pt) (330:1) circle (1.5pt); \node[rotate=60] at (330:1.75) {$\scriptstyle i$}; \node at (90:1.75) {$\scriptstyle i$}; \node[rotate=-60] at (210:1.75) {$\scriptstyle i$}; \node[right] at (80:1) {$\scriptstyle i+1$}; \node[left] at (100:1) {$\scriptstyle i+1$}; \node[above right] at (330:1) {$\scriptstyle i+1$}; \node[below left] at (330:1) {$\scriptstyle i+1$}; \node[above left] at 
(210:1) {$\scriptstyle i+1$}; \node[below right] at (210:1) {$\scriptstyle i+1$}; \node[above] at (30:0.3) {$\scriptstyle i$}; \node[right] at (30:0.3) {$\scriptstyle i$}; \node[above] at (150:0.3) {$\scriptstyle i$}; \node[left] at (150:0.3) {$\scriptstyle i$}; \node[below left] at (270:0.2) {$\scriptstyle i$}; \node[below right] at (270:0.2) {$\scriptstyle i$}; \draw[thick,black, fill=white] (0:0) circle (1.5pt); \end{tikzpicture}}} \caption{(Long) $\sfI$- and $\sfY$-cycles} \label{fig:I and Y cycle} \end{figure} One of the benefits of cycles arising from admissible subgraphs is that one can keep track of how cycles change under the $N$-graph moves described in Figure~\ref{fig:move1-6}, especially under Moves~\Move{I} and \Move{II}. Note that Move~\Move{III} can be decomposed into a sequence of Moves~\Move{I} and \Move{II}. Some of these changes are shown in Figure~\ref{fig:cycles under moves}. \begin{figure}[ht] \[ \begin{tikzcd}[row sep=0pc] \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1,0) to (1,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{I}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (2,0) to (4,0); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [red, thick] (2,0)--(4,0); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({-sqrt(3)/2},1/2)--(-1/2,0) -- (1/2,0) --({sqrt(3)/2},1/2); \begin{scope}[yscale=-1] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({-sqrt(3)/2},1/2)--(-1/2,0) -- (1/2,0) --({sqrt(3)/2},1/2); \end{scope} \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{I}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (2,1/2) to (4,1/2); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (2,-1/2) to (4,-1/2); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [red, thick] (2,0)--(4,0); \end{tikzpicture} \\
\begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{I}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [red, thick] (2,0)--(4,0); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,0) to (1,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (3.5,0) to (4,0); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} \\ \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle 
(0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (5/2,{sqrt(3)/2})--(3,1/2)--(3,1/2)--(3,-1/2) -- (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (3.5,0)--(4,0); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,{sqrt(3)/2}) -- (1/2,0)--({sqrt(3)/2},-1/2); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--({3+sqrt(3)/2},-1/2); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (3.5,0)--(3,1/2); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} \\ \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({3-sqrt(3)/2},1/2)--(3,1/2)--(3.5,0); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] 
(3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/2},1/2) -- (1/2,0)--({sqrt(3)/2},-1/2); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,0) -- (1/2,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1/2,{sqrt(3)/2}) -- (1/2,0)--(1,0); \draw [red, thick] (-1/2,{-sqrt(3)/2}) -- (1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({3+sqrt(3)/2},1/2)--(3,1/2) to[out=-150,in=150] (3,-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} \end{tikzcd} \] \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[dashed] (0,0) circle [radius=2] \draw[red, thick] (160:2)--(-1,0) -- (2,0) (200:2)--(-1,0) (0,2)--(0,-2); \draw[thick,red, fill=red] (-1,0) circle (2pt); \draw[Dble={blue and green},line width=2] (0,0) -- (-45:2); \draw[Dble={green and blue},line width=2] (0,0) -- (45:2); \draw[Dble={blue and green},line width=2] (0,0) -- (135:2); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:2); \draw[color=cyclecolor1, opacity=0.5, line width=7] (45:2) -- (-135:2); \end{tikzpicture} \arrow[d,"{\rm perturb.}"'] \arrow[r,leftrightarrow,"\Move{DI}"]& \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[rounded corners,color=cyclecolor1, opacity=0.5, line width=5, line cap=round](0,-0.5)--++(0.4,0)--(3/4,0); \draw[rounded corners,thick, red](0,1)--(0,-1) (150:1)--++(5/4,0)--(3/4,0) (210:1)--++(5/4,0)--(3/4,0) (3/4,0)--(1,0); \draw[Dble={green and blue},line width=2] (120:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (60:1) -- (0,0.5); \draw[Dble={blue and green},line width=2] (-120:1) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (-60:1) -- (0,-0.5); \draw[blue,line width=2] (-0.05,0.5) to[out=-135,in=135] (-0.05,-0.5); \draw[green,line width=2] (0,0.5) to[out=-135,in=135] (0,-0.5); \draw[blue,line width=2] (0.05,0.5) to[out=-45,in=45] (0.05,-0.5); \draw[green,line width=2] (0,0.5) to[out=-45,in=45] (0,-0.5); \draw[thick,red,fill=red] (3/4,0) circle (1pt); \draw[color=cyclecolor1, opacity=0.5, line width=7] 
(60:1) -- (0,0.5); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-120:1) -- (0,-0.5); \draw[color=cyclecolor1, opacity=0.5, line width=7] (0,0.5) to[out=-135,in=135] (0,-0.5); \end{tikzpicture} \arrow[d,"{\rm perturb.}"] & \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[color=cyclecolor1, opacity=0.5, line width=5] (45:1)-- (-135:1); \draw[thick, red](135:1)--(-45:1) (45:1)--(-135:1); \draw[Dble={blue and green},line width=2] (0,0) -- (-90:1); \draw[Dble={blue and green},line width=2] (0,0) -- (90:1); \draw[Dble={green and blue},line width=2] (0,0) -- (1,0); \draw[Dble={green and blue},line width=2] (0,0) -- (-0.5,0); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (155:1.2); \draw[Dble={green and blue},line width=2] (-0.5,0) -- (205:1.2); \end{tikzpicture} \arrow[d,"{\rm perturb.}"'] \arrow[r,leftrightarrow,"\Move{DII}"]& \begin{tikzpicture}[baseline=-.5ex,scale=1] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle [radius=1]; \draw[color=cyclecolor1, opacity=0.5, line width=5] (60:1)-- (0,0.5) to[out=-135,in=135] (0,-0.5) -- (-120:1); \draw[thick, red](120:1)--(0,0.5) to[out=-45,in=45] (0,-0.5) --(-120:1) (60:1)--(0,0.5) to[out=-135,in=135] (0,-0.5) --(-60:1); \draw[Dble={green and blue},line width=2] (0,0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,0.5) -- (0.3,0.5); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,-0.5) -- ++(-1,0); \draw[Dble={green and blue},line width=2] (0,-0.5) -- (0.3,-0.5); \draw[Dble={green and blue},line width=2] (0,-1) -- (0,-0.5); \draw[blue,line width=2] (0.3,-0.525) to[out=0,in=-135] (0.7,-0.025) (0.3,0.475) to[out=0,in=135] (0.7,-0.025) -- (1,-0.025); \draw[green,line width=2] (0.3,-0.475) to[out=0,in=-135] (0.7,0.025) (0.3,0.525) to[out=0,in=135] (0.7,0.025) -- (1,0.025); \draw[color=cyclecolor1, opacity=0.5, line width=5,rounded corners] (0,-0.5) -- (0.4,-0.5) -- (0.7,0); \end{tikzpicture} \arrow[d,"{\rm perturb.}"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \clip (0,0) circle [radius=2]; \draw[color=cyclecolor1, opacity=0.5, line width=5, line cap=round] (55:2) -- (0,1) -- (-1,0) -- (-145:2); \draw[color=cyclecolor2, opacity=0.5, line width=5, line cap=round] (35:2) -- (1,0) -- (0,-1) -- (-125:2); \draw[blue, thick] (145:2) -- (-1,0) -- (1,0) -- (35:2) (-145:2) -- (-1,0) (1,0) -- (-35:2); \draw[red, thick] (165:2) -- (-1.5,0) -- (-1,0) -- (0,1) -- (0,2) (0,1)-- (1,0) -- (2,0) (-165:2) -- (-1.5,0) (-1,0) -- (0,-1) -- (0,-2) (0,-1) -- (1,0); \draw[green, thick](125:2) -- (0,1) -- (55:2) (-125:2) -- (0,-1) -- (-55:2) (0,-1) -- (0,1) ; \draw[thick,red, fill=red] (-1.5,0) circle (2pt); \draw[thick,fill=white] (-1,0) circle (2pt) (1,0) circle (2pt) (0,-1) circle (2pt) (0,1) circle (2pt); \end{tikzpicture} \arrow[r,leftrightarrow,"\Move{II} \Move{VI}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \clip (0,0) circle [radius=2]; \draw[color=cyclecolor1, opacity=0.5, line width=5, line cap=round] (55:2) -- (0,1.5) -- (-1,1) -- (-1,-1) -- (-145:2) (-1,-1)-- (1,-1) -- (1.5,0); \draw[color=cyclecolor2, opacity=0.5, line width=5, line cap=round] (35:2) -- (1,1) -- (0,0.5) to[out=-135,in=135] (0,-0.5) -- (1,-1)-- (0,-1.5) -- (-125:2) (1,-1) -- (1.5,0); \draw[blue, thick] (145:2) -- (-1,1) -- (1,1) -- 
(35:2) (-145:2) -- (-1,-1) -- (1,-1) -- (-35:2) (-1,1) -- (-1,-1) (1,1) -- (1,-1); \draw[red, thick] (165:2) -- (-1,1) -- (0,1.5) -- (0,2) (-165:2) -- (-1,-1) -- (0,-1.5) -- (0,-2) (0,1.5) -- (1,1) -- (1.5,0) -- (2,0) (0,-1.5) -- (1,-1) -- (1.5,0) (-1,1) -- (0,0.5) -- (1,1) (-1,-1) -- (0,-0.5) -- (1,-1) (0,0.5) -- (0,-0.5); \draw[green, thick](125:2) -- (0,1.5) -- (55:2) (-125:2) -- (0,-1.5) -- (-55:2) (0,-1.5) -- (0,-0.5) (0,0.5) -- (0,1.5) (0,0.5) to[out=-135,in=135] (0,-0.5) (0,0.5) to[out=-45,in=45] (0,-0.5); \draw[thick,red, fill=red] (1.5,0) circle (2pt); \draw[thick,fill=white] (-1,1) circle (2pt) (-1,-1) circle (2pt) (1,1) circle (2pt) (1,-1) circle (2pt) (0,1.5) circle (2pt) (0,-1.5) circle (2pt) (0,0.5) circle (2pt) (0,-0.5) circle (2pt); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \clip (0,0) circle [radius=2]; \draw[color=cyclecolor1, opacity=0.5, line width=5,line cap=round] (45:2)-- (-135:2); \draw[blue, thick] (155:2) -- (-1,0.5) -- (-0.5, 0.5) -- (100:2) (-175:2) -- (-1,0.5) (-0.5, 0.5) -- (0.5,-0.5) -- (-10:2) (0.5,-0.5) -- (-80:2); \draw[red, thick] (135:2) -- (-0.5, 0.5) -- (0.5,0.5) -- (45:2) (-135:2) -- (-0.5, -0.5) -- (0.5,-0.5) -- (-45:2) (-0.5,0.5) -- (-0.5,-0.5) (0.5,0.5) -- (0.5,-0.5); \draw[green, thick] (175:2) -- (-1,-0.5) -- (-155:2) (-1,-0.5) -- (-0.5,-0.5) -- (0.5,0.5) -- (80:2) (-0.5,-0.5) -- (-100:2) (0.5,0.5) -- (10:2); \draw[thick,red, fill=red]; \draw[thick,blue, fill=blue] (-1,0.5) circle (2pt); \draw[thick,green, fill=green] (-1,-0.5) circle (2pt); \draw[thick,fill=white] (1/2,1/2) circle (2pt) (-1/2,1/2)circle (2pt) (1/2,-1/2)circle (2pt) (-1/2,-1/2) circle (2pt); \end{tikzpicture} \arrow[r,leftrightarrow,"\Move{II} \Move{VI}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \clip (0,0) circle [radius=2]; \draw[color=cyclecolor1, opacity=0.5, line width=5, line cap=round] (45:2)-- (0.5,1) -- (-0.5,0.5) -- (-0.5,-1) -- (-135:2) (-0.5,-0.5) -- (0.5,-0.5) -- (1,0.5) (-0.5,-1) -- (0.5,-1) -- (1,-0.5); \draw[blue, thick] (155:2) -- (-1/2,1) -- (100:2) (-1/2,1) -- (1/2, 1/2) (-175:2) -- (-1/2,-1/2) -- (1/2,-1) -- (-80:2) (-1/2,1/2) -- (1/2, 1/2) -- (-1/2,-1/2) (1/2,1/2) -- (1,-1/2) -- (-10:2) (1/2, -1) -- (1,-1/2); \draw[red, thick] (135:2) -- (-0.5,1) -- (-0.5,-1) -- (-135:2) (45:2) -- (0.5,1) -- (0.5,-1) -- (-45:2) (-0.5,1) -- (0.5,1) (-1/2,1/2) -- (0.5,1/2) (-1/2,-1/2) -- (1/2,-1/2) (-0.5,-1) -- (1/2,-1); \draw[green, thick] (175:2) -- (-1/2,1/2) -- (1/2,1) -- (80:2) (-155:2) -- (-1/2,-1) -- (-100:2) (-1/2,1/2) -- (1/2,-1/2) -- (-1/2,-1) (1/2,1) -- (1,1/2) -- (1/2,-1/2) (1,1/2) -- (10:2); \draw[thick,blue, fill=blue] (1,-0.5) circle (2pt); \draw[thick,red, fill=red] ; \draw[thick,green, fill=green](1,1/2) circle (2pt) ; \draw[thick,fill=white] (-1/2,1) circle (2pt) (-1/2,1/2)circle (2pt) (-1/2,-1/2)circle (2pt) (-1/2,-1)circle (2pt) (1/2,1)circle (2pt) (1/2,1/2) circle (2pt) (1/2,-1/2) circle (2pt) (1/2,-1) circle (2pt); \end{tikzpicture} \end{tikzcd} \] \caption{Cycles under Moves~\Move{I}, \Move{II}, \Move{DI} and \Move{DII}.} \label{fig:cycles under moves} \end{figure} \begin{remark} It is important to note that not every cycle can be represented by a subgraph. For example, the cycle on the left of the following picture can not be expressed by a subtree but it can be after Move~\Move{I}. 
\[ \begin{tikzcd} \gamma=\begin{tikzpicture}[baseline=-.5ex,scale=0.667] \draw [dashed] (0,0) circle [radius=1.5] \draw [blue, thick] ({-1.5*sqrt(2)/2},{1.5*sqrt(2)/2})--({sqrt(3)/2},1/2); \draw [blue, thick] ({-1.5*sqrt(2)/2},{-1.5*sqrt(2)/2})--({sqrt(3)/2},-1/2); \draw [blue, thick] ({sqrt(3)/2},1/2)--({sqrt(2)},1/2); \draw [blue, thick] ({sqrt(3)/2},-1/2)--({sqrt(2)},-1/2); \draw [blue, thick] ({sqrt(3)/2},1/2)--({sqrt(3)/2},{sqrt(6)/2}); \draw [blue, thick] ({sqrt(3)/2},-1/2)--({sqrt(3)/2},{-sqrt(6)/2}); \draw [red,thick] ({-1.5*2*sqrt(2)/3},{1.5*1/3}) to (-1,0); \draw [red,thick] ({-1.5*2*sqrt(2)/3},{-1.5*1/3}) to (-1,0); \draw [red, thick] (-1,0)--(1.5,0); \draw[thick,red,fill=red] (-1,0) circle (0.05); \draw[thick,blue,fill=blue] ({sqrt(3)/2},1/2) circle (0.05); \draw[thick,blue,fill=blue] ({sqrt(3)/2},-1/2) circle (0.05); \draw[->] (-1.2,0) arc (180:270:2 and 0.8) arc (-90:0:0.3) -- ++(0,0.5); \draw (-1.2,0) arc (180:90:2 and 0.8) arc (90:0:0.3) -- ++(0,-0.5); \end{tikzpicture} \arrow[leftrightarrow, r, "\Move{I}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.667] \begin{scope \draw [dashed] (0,0) circle [radius=1.5] \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1,0) to (0.5,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/2},1/2)--(1/2,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] ({-1.5*sqrt(2)/2},{1.5*sqrt(2)/2})--(-1/2,0); \draw [blue, thick] ({-1.5*sqrt(2)/2},{-1.5*sqrt(2)/2})--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--({sqrt(2)},1/2); \draw [blue, thick] ({sqrt(3)/2},-1/2)--({sqrt(2)},-1/2); \draw [blue, thick] ({sqrt(3)/2},1/2)--({sqrt(3)/2},{sqrt(6)/2}); \draw [blue, thick] ({sqrt(3)/2},-1/2)--({sqrt(3)/2},{-sqrt(6)/2}); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red,thick] ({-1.5*2*sqrt(2)/3},{1.5*1/3}) to (-1,0); \draw [red,thick] ({-1.5*2*sqrt(2)/3},{-1.5*1/3}) to (-1,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1.5,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \draw[thick,red,fill=red] (-1,0) circle (0.05); \draw[thick,blue,fill=blue] ({sqrt(3)/2},1/2) circle (0.05); \draw[thick,blue,fill=blue] ({sqrt(3)/2},-1/2) circle (0.05); \end{scope} \end{tikzpicture}=\gamma(\sfT) \end{tikzcd} \] On the other hand, there might be a one-cycle having two different subgraph presentations as follows: \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1,0) to (1,0); \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{tikzpicture} \arrow[equal, r, "\sim"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); 
\draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (2,0) -- (2.5,0) to[out=60,in=180] (3,1/2) to[out=0,in=120] (3.5,0)-- (4,0) (2.5,0) to[out=-60,in=-180] (3,-1/2) to[out=0,in=-120] (3.5,0); \begin{scope}[xshift=3cm] \draw [blue, thick] ({-sqrt(3)/2},1/2)--(-1/2,0); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--(-1/2,0); \draw [blue, thick] ({sqrt(3)/2},1/2)--(1/2,0); \draw [blue, thick] ({sqrt(3)/2},-1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0); \draw [red, thick] (-1,0)--(-1/2,0) to[out=60,in=180] (0,1/2) to[out=0,in=120] (1/2,0)--(1,0); \draw [red, thick] (-1/2,0) to[out=-60,in=180] (0, -1/2) to[out=0, in=-120] (1/2,0); \draw[thick,black,fill=white] (-1/2,0) circle (0.05); \draw[thick,black,fill=white] (1/2,0) circle (0.05); \end{scope} \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1] \clip (0,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1/2,{sqrt(3)/2}) -- (0,1/2)--(0,-1/2) -- (-1/2,{-sqrt(3)/2}); \draw [blue, thick] ({-sqrt(3)/2},1/2)--({sqrt(3)/2},1/2); \draw [blue, thick] ({-sqrt(3)/2},-1/2)--({sqrt(3)/2},-1/2); \draw [blue, thick] (0,1/2)--(0,-1/2); \draw [red, thick] (-0.5,{sqrt(3)/2})--(0,1/2) to[out=-150,in=150] (0,-1/2)--(-0.5,{-sqrt(3)/2}); \draw [red, thick] (0,1/2)--(0.5,0) -- (1,0); \draw [red, thick] (0,-1/2)--(0.5,0); \draw[thick,black,fill=white] (0,1/2) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \draw[thick,red,fill=red] (0.5,0) circle (0.05); \end{tikzpicture} \arrow[equal, r, "\sim"]& \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (3,0) circle [radius=1] \clip (3,0) circle (1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-0.5) (3,1/2) -- (3.5,0) -- (3,-1/2) (3,-0.5) -- (5/2,{-sqrt(3)/2}); \draw [blue, thick] ({3-sqrt(3)/2},1/2)--({3+sqrt(3)/2},1/2); \draw [blue, thick] ({3-sqrt(3)/2},-1/2)--({3+sqrt(3)/2},-1/2); \draw [blue, thick] (3,1/2)--(3,-1/2); \draw [red, thick] (5/2,{sqrt(3)/2})--(3,1/2) to[out=-150,in=150] (3,-1/2)--(5/2,{-sqrt(3)/2}); \draw [red, thick] (3,1/2)--(7/2,0) -- (4,0); \draw [red, thick] (3,-1/2)--(7/2,0); \draw[thick,black,fill=white] (3,1/2) circle (0.05); \draw[thick,black,fill=white] (3,-1/2) circle (0.05); \draw[thick,red,fill=red] (7/2,0) circle (0.05); \end{tikzpicture} \end{tikzcd} \] Therefore, choosing nice cycles in a consistent way is a somewhat subtle issue. \end{remark} \begin{definition}\label{def:good cycle} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph, and let $\Lambda(\ngraphfont{G})$ be the induced Legendrian surface in~$J^1\mathbb{D}^2$. A cycle $[\gamma]$ in $H_1(\Lambda(\ngraphfont{G}))$ is \emph{good} if $[\gamma]$ can be transformed to an $\sfI$-cycle in $H_1(\Lambda(\ngraphfont{G}'))$ for some~$[\ngraphfont{G}']=[\ngraphfont{G}]$. \end{definition} \begin{example} The following cycles are good.
\begin{enumerate} \item All (long) $\sfI$- and $\sfY$-cycles \[ \begin{tikzcd}[row sep=0pc] \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (90:1) to (0,0) (-30:1) to (0,0) (-150:1) to (0,0); \draw [blue, thick] (0,0) -- (90:1) -- (60:2) (90:1)-- (120:2) (0,0) -- (-30:1) -- (0:2) (-30:1)-- (-60:2) (0,0) -- (-150:1) -- (-120:2) (-150:1)-- (180:2); \draw [red, thick] (0,0)--(30:2) (0,0)--(150:2) (0,0)--(-90:2); \draw[thick,blue,fill=blue] (90:1) circle (2pt) (-30:1) circle (2pt) (-150:1) circle (2pt); \draw[thick,black,fill=white] (0,0) circle (2pt); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-150:1) -- (150:1) -- (30:1) -- (-30:1); \draw [blue, thick] (-150:1) -- (150:1) -- (120:2) (-30:1) -- (30:1) -- (60:2) (150:1) -- (30:1) (-30:1) -- (0:2) (-30:1)-- (-60:2) (-150:1) -- (-120:2) (-150:1)-- (180:2); \draw [red, thick] (150:1) to[out=45,in=180] (90:1) to[out=0,in=135] (30:1) (0,0)--(30:2) (0,0)--(150:2) (0,0)--(-90:2); \draw[thick,blue,fill=blue] (-30:1) circle (2pt) (-150:1) circle (2pt); \draw[thick,red,fill=red] (0,0) circle (2pt); \draw[thick,black,fill=white] (30:1) circle (2pt) (150:1) circle (2pt); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (120:0.5) -- (30:1) --(-30:1) ; \draw [blue, thick] (60:2) -- (30:1) -- (-30:1) -- (0:2) (-30:1) -- (-60:2) (30:1) -- (-150:1) -- (-120:2) (-150:1) -- (150:1)-- (180:2) (150:1) -- (120:2); \draw [red, thick] (30:2) -- (30:1) -- (-90:1) -- (-90:2) (-90:1) -- (-150:1) to[out=135,in=-135] (150:1) -- (150:2) (150:1) -- (120:0.5) -- (-150:1) (120:0.5) -- (30:1); \draw[thick,blue,fill=blue] (-30:1) circle (2pt); \draw[thick,red,fill=red] (-90:1) circle (2pt) (120:0.5) circle (2pt); \draw[thick,black,fill=white] (30:1) circle (2pt) (-150:1) circle (2pt) (150:1) circle (2pt); \end{tikzpicture} \arrow[leftrightarrow,r,"\Move{II}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw [dashed] (0,0) circle [radius=2]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0.5,0) -- (-0.5,0); \draw [blue, thick] (60:2) -- (30:1) -- (0:2) (30:1) -- (-30:1) -- (-60:2) (-30:1) -- (-150:1) -- (-120:2) (-150:1) -- (150:1) --(180:2) (150:1) -- (120:2); \draw [red, thick] (30:2) -- (30:1) -- (0:0.5) -- (180:0.5) -- (150:1) -- (150:2) (0:0.5) -- (-30:1) -- (-90:1) (180:0.5) -- (-150:1) -- (-90:1) -- (-90:2) (30:1) to[out=-45,in=45] (-30:1) (150:1) to[out=-135,in=135] (-150:1); \draw[thick,blue,fill=blue]; \draw[thick,red,fill=red] (-90:1) circle (2pt) (0:0.5) circle (2pt) (180:0.5) circle (2pt); \draw[thick,black,fill=white] (150:1) circle (2pt) (-150:1) circle (2pt) (30:1) circle (2pt) (-30:1) circle (2pt); \end{tikzpicture} \end{tikzcd} \] \item The cycle $\gamma(\sfT)$ for an admissible tree $\sfT$ without the local configurations depicted in Figures~\ref{figure:loop near vertex2} and \ref{figure:loop near vertex3}. \end{enumerate} \end{example} \begin{definition}\label{def:equiv on N-graph and N-basis} Let $(\ngraphfont{G}, \ngraphfont{B})$ and $(\ngraphfont{G}', \ngraphfont{B}')$ be pairs, each consisting of an $N$-graph and a set of good cycles.
We say that $(\ngraphfont{G}, \ngraphfont{B})$ and $(\ngraphfont{G}', \ngraphfont{B}')$ are \emph{equivalent} if $[\ngraphfont{G}]=[\ngraphfont{G}']$ and the induced isomorphism \[ H_1(\Lambda(\ngraphfont{G}))\cong H_1(\Lambda(\ngraphfont{G}')) \] identifies $\ngraphfont{B}$ with $\ngraphfont{B}'$. We denote the equivalence class of $(\ngraphfont{G}, \ngraphfont{B})$ by $[\ngraphfont{G}, \ngraphfont{B}]$. \end{definition} Let $(\ngraphfont{G}, \ngraphfont{B})$ be a pair of an $N$-graph and a set of (good) cycles. We define the conjugation $\overline{(\ngraphfont{G}, \ngraphfont{B})}$ to be the pair $(\bar{\ngraphfont{G}},\bar{\ngraphfont{B}})$, where $\bar{\ngraphfont{G}}$ is the conjugation of $\ngraphfont{G}$ and $\bar{\ngraphfont{B}}$ is the set of (good) cycles in $H_1(\Lambda(\bar{\ngraphfont{G}});\Z)$ consisting of the images of the cycles in $\ngraphfont{B}$ under the induced isomorphism $H_1(\Lambda(\ngraphfont{G});\Z)\to H_1(\Lambda(\bar{\ngraphfont{G}});\Z)$. \subsection{Flag moduli spaces}\label{sec:flag moduli spaces} We recall from \cite{CZ2020} a central algebraic invariant $\mathcal{M}(\ngraphfont{G})$ of the Legendrian weave $\Lambda(\ngraphfont{G})$. The main idea is to consider moduli spaces of constructible sheaves associated to $\Lambda(\ngraphfont{G})$. To introduce a legible model for such constructible sheaves, let us consider full flags, i.e., nested sequences of subspaces in $\bbC^N$: \[ \cF^\bullet \in \{(\cF^i)_{i=0}^N \mid \dim \cF^i=i,\ \cF^j\subset \cF^{j+1}, 1\leq j\leq N-1,\ \cF^N=\bbC^N \}. \] \begin{definition}\cite{CZ2020}\label{def:flag moduli space} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph. Let $\{F_i\}_{i\in I}$ be the set of closures of connected components of $\mathbb{D}^2\setminus \ngraphfont{G}$; we call each closure a \emph{face}. The \emph{framed flag moduli space} $\widetilde \cM(\ngraphfont{G})$ is the collection of tuples of \emph{flags} $\cF_{\Lambda(\ngraphfont{G})}=\{\cF^\bullet(F_i)\}_{i\in I}$ in $\bbC^N$ satisfying the following condition: if $F_1,F_2$ is a pair of faces sharing an edge in $\ngraphfont{G}_i$, then the corresponding flags $\cF^\bullet(F_1),\cF^\bullet(F_2)$ satisfy \begin{align}\label{equation:flag conditions} \begin{cases} \cF^j(F_1)=\cF^j(F_2), \qquad 0\leq j \leq N, \quad j\neq i;\\ \cF^i(F_1)\neq \cF^i(F_2). \end{cases} \end{align} The general linear group $\operatorname{GL}_N$ acts on $\widetilde{\cM}(\ngraphfont{G})$ by transforming all flags at once. The \emph{flag moduli space} of the $N$-graph $\ngraphfont{G}$ is defined as the quotient space (a stack, in general) \[ \cM(\ngraphfont{G})\colonequals\widetilde{\cM}(\ngraphfont{G})/\operatorname{GL}_N. \] \end{definition} For instance, when $N=2$, a flag in $\bbC^2$ is determined by the line $\cF^1\subset\bbC^2$, and condition~\eqref{equation:flag conditions} simply says that the lines assigned to two faces sharing an edge are distinct. From now on, we will regard the flags $\cF_{\Lambda(\ngraphfont{G})}$ as a formal parameter for the flag moduli space $\cM(\ngraphfont{G})$. Let $\Sh(\mathbb{D}^2 \times \R)$ be the category of \emph{constructible sheaves} on $\mathbb{D}^2\times \R$. Under the identification $J^1\mathbb{D}^2\cong T^{\infty,-}(\mathbb{D}^2\times \R)$, an $N$-graph $\ngraphfont{G}\subset \mathbb{D}^2$ gives a Legendrian \[ \Lambda(\ngraphfont{G})\subset J^1 \mathbb{D}^2 \cong T^{\infty,-}(\mathbb{D}^2\times \R) \subset T^\infty(\mathbb{D}^2\times \R).
\] This can be used to define a Legendrian isotopy invariant of $\Lambda(\ngraphfont{G})$, namely the full subcategory $\Sh_{\Lambda(\ngraphfont{G})}^1(\mathbb{D}^2 \times \R)_{0}$ of $\Sh(\mathbb{D}^2 \times \R)$ consisting of constructible sheaves \begin{itemize} \item whose singular support at infinity lies in $\Lambda(\ngraphfont{G}) \subset T^\infty(\mathbb{D}^2\times \R)$, \item whose microlocal rank is one, and \item which are zero near $\mathbb{D}^2\times \{-\infty\}$. \end{itemize} See \cite{CZ2020,GKS2012,STZ2017} for more details. \begin{theorem}[{\cite[Theorem~5.3]{CZ2020}}] The flag moduli space $\cM(\ngraphfont{G})$ is isomorphic to $\Sh_{\Lambda(\ngraphfont{G})}^1(\mathbb{D}^2\times\R)_0$. Hence $\cM(\ngraphfont{G})$ is a Legendrian isotopy invariant of $\Lambda(\ngraphfont{G})$. \end{theorem} \begin{remark} In fact, the theorem in \cite{CZ2020} is stated for any connected surface, not only for $\mathbb{D}^2$. \end{remark} \subsection{$Y$-seeds and Legendrian mutations}\label{sec:N-graphs and seeds} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph, and let $\ngraphfont{B}=\{[\gamma_1],\dots, [\gamma_n]\}$ be a set of good cycles. For two cycles $[\gamma_i]$ and $[\gamma_j]$, let $i([\gamma_i], [\gamma_j])$ be their algebraic intersection number in~$H_1(\Lambda(\ngraphfont{G}))$. In particular, if $\gamma_i$ is an $\sfI$-cycle $\gamma(\sfI(e))$ and $\gamma_j$ is a $\sfT$-cycle for some admissible subgraph $\sfT$, then \[ i([\gamma_i], [\gamma_j]) = \sum_{e'\in \sfT} i(e, e'), \] where $i(e,e')\in\{0,1,-1\}$ is defined as follows: \[ i(e,e') \colonequals \begin{cases} 0 & \text{ if }e=e'\text{ or }e\cap e'=\varnothing;\\ 1 & \text{ if }e' \text{ lies on the left side of }e;\\ -1 & \text{ if }e' \text{ lies on the right side of }e. \end{cases} \] Geometrically, representatives of $\gamma_i$ and $\gamma_j$ look locally as depicted in Figure~\ref{fig:I-cycle with orientation and intersections}. Their intersection $i([\gamma_i], [\gamma_j])$ is defined to be $+1$ using the counterclockwise rotation convention for the two tangent directions of the cycles $\gamma_i$ and $\gamma_j$ at the intersection point, as depicted in the third picture of Figure~\ref{fig:I-cycle with orientation and intersections}. Note that our convention is opposite to the one in~\cite{CZ2020}.
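For instance (a minimal illustration of the above formula), if exactly one edge $e'$ of $\sfT$ meets $e$ and $e'$ lies on the left side of $e$, then $i([\gamma_i], [\gamma_j]) = i(e,e') = 1$; if instead $\sfT$ has two edges meeting $e$ from opposite sides, their contributions $+1$ and $-1$ cancel, and the two cycles have algebraic intersection number zero.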
\begin{figure}[ht] \[ \def1pc{1pc} \begin{array}{cccc} \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [blue, thick] (0,0)--({cos(30)},{sin(30)}); \draw [blue, thick] (0,0)--({cos(-90)},{sin(-90)}) node[left, midway] {\color{black}$e$}; \draw [blue, thick] (0,0)--({cos(150)},{sin(150)}) node[above, midway, rotate=-30] {\color{black}$e'$}; \draw[thick,blue,fill=blue] (0,0) circle (0.05); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \draw [dashed] (0,0) circle [radius=1]; \clip (0,0) circle (1); \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (0,0) to (0,-1); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(150)},{sin(150)}); \draw [blue, thick] (0,0)--({cos(30)},{sin(30)}); \draw [blue, thick] (0,0)--({cos(-90)},{sin(-90)}); \draw [blue, thick] (0,0)--({cos(150)},{sin(150)}); \draw[thick,blue,fill=blue] (0,0) circle (0.05); \draw [color=cyclecolor2!50!black,->] (0.3,-1) -- node[midway,right] {\color{black}$\gamma_i$} (0.3, 0) arc (0:180:0.3 and 0.5) -- (-0.3,-0.9); \begin{scope}[rotate=-120] \draw [color=cyclecolor1!90!yellow,->] (0.2,-1) -- node[midway,below] {\color{black}$\gamma_j$} (0.2, 0) arc (0:180:0.2 and 0.5) -- (-0.2,-0.9); \end{scope} \draw [fill] (-168:0.3) circle (1pt); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \begin{scope} \draw [color=cyclecolor2!50!black,->] (-90:-0.5) -- (-90:0.7) node[left] {\color{black}$\gamma_i$}; \draw [color=cyclecolor1!90!yellow,->] (-30:-0.5) -- (-30:0.7) node[below right] {\color{black}$\gamma_j$}; \draw [fill] (0,0) circle (1pt); \draw [->] (-90:0.3) arc (-90:-30:0.3); \draw (0,0) node[above right] {$(+)$}; \end{scope} \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex] \tikzstyle{state}=[draw, circle, inner sep = 0.07cm] \tikzset{every node/.style={scale=0.7}} \node[state, label=above:{$i$}] (1) at (3,0) {}; \node[state, label=above:{$j$}] (2) [right = of 1] {}; \node[ynode] at (2) {}; \node[gnode] at (1) {}; \draw[<-] (2)--(1); \end{tikzpicture} \end{array} \] \caption{$\sfI$-cycles with intersections.} \label{fig:I-cycle with orientation and intersections} \end{figure}
\begin{definition} For each pair $(\ngraphfont{G}, \ngraphfont{B})$ consisting of an $N$-graph and a set of good cycles, we define a quiver $\clusterfont{Q}=\clusterfont{Q}(\ngraphfont{G},\ngraphfont{B})$ as follows: let $n=\#(\ngraphfont{B})$. Then \begin{enumerate} \item the set of vertices is $[n]$, and \item the $(i,j)$-entry $b_{i,j}$ of the exchange matrix $\clusterfont{\tilde{B}}(\clusterfont{Q})=(b_{i,j})$ is the algebraic intersection number between $[\gamma_i]$ and~$[\gamma_j]$, that is, \[ b_{i,j} = i([\gamma_i], [\gamma_j])\quad\text{for}\quad 1\le i, j\le n. \] \end{enumerate} \end{definition} In order to assign a coefficient to each one-cycle, let us review the microlocal monodromy functor from \cite{STZ2017} \[ \mmon_\Lambda:\Sh_\Lambda^\bullet \to\Loc^\bullet(\Lambda). \] In our case, this functor sends microlocal rank-one sheaves $\cF \in \cM(\ngraphfont{G})\cong \Sh_{\Lambda(\ngraphfont{G})}^1(\mathbb{D}^2\times \R)_0$, or equivalently, flags $\{\cF^\bullet(F_i)\}_{i \in I}\in\cM(\ngraphfont{G})$, to rank-one local systems $\mmon_{\Lambda(\ngraphfont{G})}(\cF)$ on the Legendrian surface $\Lambda(\ngraphfont{G})$.
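For concreteness, the passage from pairwise intersection numbers to the quiver $\clusterfont{Q}(\ngraphfont{G},\ngraphfont{B})$ can be sketched in code. The following Python fragment is a toy illustration under our own encoding; the function \texttt{inter} is an assumed black box returning the algebraic intersection numbers of the chosen cycles.
\begin{verbatim}
import itertools

def quiver_from_intersections(n, inter):
    """Exchange matrix b[i][j] = i([gamma_i],[gamma_j]) and arrow list.

    inter(i, j) returns the algebraic intersection number of the i-th
    and j-th chosen cycles; it must be skew-symmetric.  Our arrow
    convention: b[i][j] > 0 contributes b[i][j] arrows i -> j.
    """
    B = [[inter(i, j) for j in range(n)] for i in range(n)]
    assert all(B[i][j] == -B[j][i] for i in range(n) for j in range(n))
    arrows = [(i, j) for i, j in itertools.product(range(n), repeat=2)
              for _ in range(max(B[i][j], 0))]
    return B, arrows

# Toy example: two cycles meeting once positively give the A_2 quiver.
B, arrows = quiver_from_intersections(
    2, lambda i, j: j - i if abs(i - j) == 1 else 0)
print(B, arrows)  # [[0, 1], [-1, 0]] [(0, 1)]
\end{verbatim}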
Then the coefficients in the coefficient tuple $\bfy$ for the pair $(\ngraphfont{G}, \ngraphfont{B})$ are defined by \[ \bfy(\ngraphfont{G},\ngraphfont{B})=\left( \mmon_{\Lambda(\ngraphfont{G})}(-)([\gamma_1]), \dots, \mmon_{\Lambda(\ngraphfont{G})}(-)([\gamma_n])\right), \] where $\mmon_{\Lambda(\ngraphfont{G})}(-)([\gamma_j]):\cM(\ngraphfont{G})\to \bbC$. Let us denote the above assignment by \[ \Psi(\ngraphfont{G}, \ngraphfont{B})=(\bfy(\ngraphfont{G}, \ngraphfont{B}),\clusterfont{Q}(\ngraphfont{G},\ngraphfont{B})). \] By the Legendrian isotopy invariance of $\Sh_{\Lambda(\ngraphfont{G})}^1(\mathbb{D}^2\times \R)_0$ in \cite{GKS2012} and the functorial property of the microlocal monodromy functor $\mmon$ \cite{STZ2017}, the assignment $\Psi$ is well-defined up to isotopy of~$\Lambda(\ngraphfont{G})$. That is, if two pairs $(\Lambda(\ngraphfont{G}),\ngraphfont{B})$ and $(\Lambda(\ngraphfont{G}'),\ngraphfont{B}')$ are Legendrian isotopic, in particular if $[\ngraphfont{G}, \ngraphfont{B}]=[\ngraphfont{G}',\ngraphfont{B}']$, then they give the same seed via $\Psi$. \begin{theorem}\cite[\S7.2.1]{CZ2020}\label{thm:N-graph to seed} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph with a tuple $\ngraphfont{B}$ of cycles in $H_1(\Lambda(\ngraphfont{G}))$. Then the assignment of a $Y$-seed in a cluster structure \[ \Psi([\ngraphfont{G},\ngraphfont{B}])= (\bfy(\ngraphfont{G},\ngraphfont{B}),\clusterfont{Q}(\ngraphfont{G},\ngraphfont{B})) \] to the equivalence class $[\ngraphfont{G},\ngraphfont{B}]$ is well-defined. \end{theorem} As a corollary, the seed $\Psi(\ngraphfont{G}, \ngraphfont{B})$ can be used to distinguish a pair of Legendrian surfaces and hence, by Lemma~\ref{lem:legendrian and lagrangian}, a pair of Lagrangian fillings. \begin{corollary}\label{corollary:distinct seeds imples distinct fillings} Let $(\ngraphfont{G}, \ngraphfont{B})$ and $(\ngraphfont{G}', \ngraphfont{B}')$ be two pairs with the same boundary condition defining different seeds. Then the induced Lagrangian fillings $(\pi\circ\iota)(\Lambda(\ngraphfont{G}))$ and $(\pi\circ\iota)(\Lambda(\ngraphfont{G}'))$ bounding $\iota(\lambda)$ are not exact Lagrangian isotopic to each other. \end{corollary} The monodromy $\mmon_{\Lambda(\ngraphfont{G})}(\cF)$ along a loop $[\gamma]\in H_1(\Lambda(\ngraphfont{G}))$ can be obtained by restricting the constructible sheaf $\cF$ to a tubular neighborhood of $\gamma$. Let us investigate how the monodromy can be computed explicitly in terms of the flags $\{\cF^\bullet(F_i)\}_{i \in I}$. Let us consider an $\sfI$-cycle $[\gamma]$ represented by a loop $\gamma(e)$ for some monochromatic edge $e$ as in Figure~\ref{figure:I-cycle with flags}, and let us denote the faces around $\gamma(e)$ by $F_1,F_2,F_3,F_4$, respectively. Suppose that $e \subset \ngraphfont{G}_i$. Then, by the construction of the flag moduli space $\cM(\ngraphfont{G})$, the two-dimensional vector space $V\colonequals\cF^{i+1}(F_*)/\cF^{i-1}(F_*)$ is independent of $*=1,2,3,4$. Moreover, $\cF^{i}(F_*)/\cF^{i-1}(F_*)$ defines a one-dimensional subspace $v_*\subset V$ for $*=1,2,3,4$, satisfying \[ v_1\neq v_2 \neq v_3 \neq v_4 \neq v_1. \] Then the microlocal monodromy $\mmon_{\Lambda(\ngraphfont{G})}(\cF)$ along the one-cycle $[\gamma(e)]$ is given by the cross-ratio \[ \mmon_{\Lambda(\ngraphfont{G})}(\cF)([\gamma])\colonequals\langle v_1,v_2,v_3,v_4 \rangle=\frac{v_1 \wedge v_2}{v_2 \wedge v_3}\cdot\frac{v_3 \wedge v_4}{v_4\wedge v_1}.
\]
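The cross-ratio is elementary to compute once spanning vectors for the lines $v_*$ are chosen: writing $u\wedge v$ for the $2\times 2$ determinant $\det[u\,|\,v]$, the value is independent of all choices. The following Python sketch is purely illustrative (the sample vectors are our own arbitrary data); it verifies numerically that the cross-ratio is unchanged under rescaling the vectors and under a simultaneous $\operatorname{GL}_2$-action, so that it indeed descends to the quotient $\cM(\ngraphfont{G})$.
\begin{verbatim}
import numpy as np

def wedge(u, v):
    """u ^ v for u, v in C^2, i.e. the 2x2 determinant det[u | v]."""
    return u[0] * v[1] - u[1] * v[0]

def cross_ratio(v1, v2, v3, v4):
    """<v1,v2,v3,v4> = (v1^v2)/(v2^v3) * (v3^v4)/(v4^v1)."""
    return wedge(v1, v2) / wedge(v2, v3) * wedge(v3, v4) / wedge(v4, v1)

# One spanning vector for each of four lines in C^2 with consecutive
# lines distinct, as required by the flag condition.
v = [np.array([1, 0]), np.array([0, 1]), np.array([1, 1]), np.array([1, 2])]
r = cross_ratio(*v)

# Invariance under rescaling each vector and under a simultaneous
# GL_2-transformation: the value only depends on the point of M(G).
g = np.array([[2, 1], [1, 1]])
r2 = cross_ratio(g @ (3 * v[0]), g @ v[1], g @ (-2 * v[2]), g @ v[3])
assert np.isclose(r, r2)
print(r)  # 0.5
\end{verbatim}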
Suppose that the local flags $\{F_j\}_{j\in J}$ near the upper $\sfY$-cycle $[\gamma_U]$ look as in Figure~\ref{figure:Y-cycle with flags I}. Let $\ngraphfont{G}_i$ and $\ngraphfont{G}_{i+1}$ be the $N$-subgraphs in red and blue, respectively. Then the $3$-dimensional vector space $V=\cF^{i+2}(F_*)/\cF^{i-1}(F_*)$ is independent of $*\in J$. Now regard $a,b,c$ and $A,B,C$ as subspaces of $V$ of dimension one and two, respectively. Then the microlocal monodromy along the $\sfY$-cycle~$[\gamma_U]$ becomes \[ \mmon_{\Lambda(\ngraphfont{G})}(\cF)([\gamma_U])\colonequals\frac{A(c)B(a)C(b)}{A(b)B(c)C(a)}. \] Here, $B(a)$ can be seen as a pairing between a vector $v_a$ with $\langle v_a \rangle=a$ and a covector $w_B$ with $\langle w_B \rangle=B^\perp$. Now consider the lower $\sfY$-cycle $[\gamma_L]$, whose local flags are given as in Figure~\ref{figure:Y-cycle with flags II}. We have already seen in Figure~\ref{fig:I and Y cycle} that the orientation conventions for the upper and lower $\sfY$-cycles are different. Hence the microlocal monodromy along $[\gamma_L]$ follows the opposite orientation and becomes \[ \mmon_{\Lambda(\ngraphfont{G})}(\cF)([\gamma_L])\colonequals\frac{A(b)B(c)C(a)}{A(c)B(a)C(b)}. \]
\begin{figure}[ht] \begin{tikzcd} \subfigure[$\sfI$-cycle with flags.\label{figure:I-cycle with flags}]{ \begin{tikzpicture}[baseline=.5ex] \begin{scope} \draw [dashed] (0,0) circle [radius=1.5]; \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1/2,0) to (1/2,0); \draw [blue, thick] ({-3*sqrt(3)/4},3/4)--(-1/2,0); \draw [blue, thick] ({-3*sqrt(3)/4},-3/4)--(-1/2,0); \draw [blue, thick] ({3*sqrt(3)/4},3/4)--(1/2,0); \draw [blue, thick] ({3*sqrt(3)/4},-3/4)--(1/2,0); \draw [blue, thick] (-1/2,0)--(1/2,0) node[above, midway] {$\gamma$}; \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,blue,fill=blue] (1/2,0) circle (0.05); \node at (0,1) {$v_1$}; \node at (-1,0) {$v_2$}; \node at (0,-1) {$v_3$}; \node at (1,0) {$v_4$}; \end{scope} \end{tikzpicture}} & \subfigure[Upper $\sfY$-cycle with flags.\label{figure:Y-cycle with flags I}]{ \begin{tikzpicture}[baseline=.5ex] \begin{scope}[scale=0.9] \draw [dashed] (0,0) circle [radius=1.5]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(0-30)},{sin(0-30)}); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(120-30)},{sin(120-30)}); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(240-30)},{sin(240-30)}); \draw [blue, thick] ({1.5*cos(180-30)},{1.5*sin(180-30)})--(0,0)--({1.5*cos(60-30)},{1.5*sin(60-30)}); \draw [blue, thick] (0,0)--({1.5*cos(60+30)},{-1.5*sin(60+30)}); \draw [red, thick] ({1.5*cos(20-30)},{1.5*sin(20-30)})--({cos(0-30)},{sin(0-30)})--(0,0); \draw [red, thick] ({1.5*cos(20+30)},{-1.5*sin(20+30)})--({cos(0-30)},{sin(0-30)}); \draw [red, thick] ({1.5*cos(100-30)},{1.5*sin(100-30)})--({cos(120-30)},{sin(120-30)})--(0,0); \draw [red, thick] ({1.5*cos(140-30)},{1.5*sin(140-30)})--({cos(120-30)},{sin(120-30)}); \draw [red, thick] ({1.5*cos(220-30)},{1.5*sin(220-30)})--({cos(240-30)},{sin(240-30)})--(0,0); \draw [red, thick] ({1.5*cos(260-30)},{1.5*sin(260-30)})--({cos(240-30)},{sin(240-30)}); \draw[thick,red,fill=red] ({cos(0-30)},{sin(0-30)}) circle (0.05); \draw[thick,red,fill=red] ({cos(120-30)},{sin(120-30)}) circle (0.05); \draw[thick,red,fill=red] ({cos(240-30)},{sin(240-30)}) circle (0.05); \draw[thick,black,fill=white] (0,0) circle (0.05); \end{scope} \begin{scope}[yshift=-0.1cm] \node at (0,1.7) {$(b,B)$}; \node[rotate=60] at ({1.7*cos(-30)},{1.7*sin(-30)}) {$(a,A)$}; \node[rotate=-60] at ({1.7*cos(-150)},{1.7*sin(-150)}) {$(c,C)$};
\node[rotate=100] at ({1.7*cos(10)},{1.7*sin(10)}) {$(a,ab)$}; \node[rotate=-100] at ({1.7*cos(170)},{1.7*sin(170)}) {$(c,bc)$}; \node[rotate=20] at ({1.7*cos(-70)},{1.7*sin(-70)}) {$(a,ac)$}; \node[rotate=-20] at ({1.7*cos(-110)},{1.7*sin(-110)}) {$(c,ac)$}; \node[rotate=-40] at ({1.7*cos(50)},{1.7*sin(50)}) {$(b,ab)$}; \node[rotate=40] at ({1.7*cos(130)},{1.7*sin(130)}) {$(b,bc)$}; \draw[red] (0,0) node[xshift=0.4cm] {$\gamma_U$}; \end{scope} \end{tikzpicture}} & \subfigure[Lower $\sfY$-cycle with flags.\label{figure:Y-cycle with flags II}]{ \begin{tikzpicture}[baseline=.5ex] \begin{scope}[scale=0.9] \draw [dashed] (0,0) circle [radius=1.5]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(0-30)},{sin(0-30)}); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(120-30)},{sin(120-30)}); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) to ({cos(240-30)},{sin(240-30)}); \draw [red, thick] ({1.5*cos(180-30)},{1.5*sin(180-30)})--(0,0)--({1.5*cos(60-30)},{1.5*sin(60-30)}); \draw [red, thick] (0,0)--({1.5*cos(60+30)},{-1.5*sin(60+30)}); \draw [blue, thick] ({1.5*cos(20-30)},{1.5*sin(20-30)})--({cos(0-30)},{sin(0-30)})--(0,0); \draw [blue, thick] ({1.5*cos(20+30)},{-1.5*sin(20+30)})--({cos(0-30)},{sin(0-30)}); \draw [blue, thick] ({1.5*cos(100-30)},{1.5*sin(100-30)})--({cos(120-30)},{sin(120-30)})--(0,0); \draw [blue, thick] ({1.5*cos(140-30)},{1.5*sin(140-30)})--({cos(120-30)},{sin(120-30)}); \draw [blue, thick] ({1.5*cos(220-30)},{1.5*sin(220-30)})--({cos(240-30)},{sin(240-30)})--(0,0); \draw [blue, thick] ({1.5*cos(260-30)},{1.5*sin(260-30)})--({cos(240-30)},{sin(240-30)}); \draw[thick,blue,fill=blue] ({cos(0-30)},{sin(0-30)}) circle (0.05); \draw[thick,blue,fill=blue] ({cos(120-30)},{sin(120-30)}) circle (0.05); \draw[thick,blue,fill=blue] ({cos(240-30)},{sin(240-30)}) circle (0.05); \draw[thick,black,fill=white] (0,0) circle (0.05); \end{scope} \begin{scope}[yshift=-0.1cm] \node at (0,1.7) {$(b,B)$}; \node[rotate=60] at ({1.7*cos(-30)},{1.7*sin(-30)}) {$(a,A)$}; \node[rotate=-60] at ({1.7*cos(-150)},{1.7*sin(-150)}) {$(c,C)$}; \node[rotate=100] at ({1.7*cos(10)},{1.7*sin(10)}) {\small$(AB,A)$}; \node[rotate=-100] at ({1.7*cos(170)},{1.7*sin(170)}) {\small$(BC,C)$}; \node[rotate=20] at ({1.7*cos(-70)},{1.7*sin(-70)}) {\small$(AC,A)$}; \node[rotate=-20] at ({1.7*cos(-110)},{1.7*sin(-110)}) {\small$(AC,C)$}; \node[rotate=-40] at ({1.7*cos(50)},{1.7*sin(50)}) {\small$(AB,B)$}; \node[rotate=40] at ({1.7*cos(130)},{1.7*sin(130)}) {\small$(BC,B)$}; \draw[blue] (0,0) node[xshift=0.4cm] {$\gamma_L$}; \end{scope} \end{tikzpicture}} \end{tikzcd} \caption{$\sfI$- and $\sfY$-cycles with flags. Here $ab$ means the span of $a$ and $b$, and $AB$ means the intersection of $A$ and $B$.} \label{fig:I and Y cycle with flags} \end{figure}
Let us define an operation on $N$-graphs $\ngraphfont{G}$, called (\emph{Legendrian}) \emph{mutation}, which corresponds to a geometric operation on the induced Legendrian surface producing a surface smoothly isotopic, but not necessarily Legendrian isotopic, to $\Lambda(\ngraphfont{G})$; see \cite[Definition~4.19]{CZ2020}. Note that this operation is intimately related to the wall-crossing phenomenon~\cite{Aur2007}, Lagrangian surgery~\cite{Pol1991}, and quiver (or $Y$-seed) mutations~\cite{FZ1_2002}.
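Before defining the Legendrian counterpart, let us record the $Y$-seed mutation rule itself in executable form. The following Python sketch (using \texttt{sympy}) is our own illustration with one standard sign convention; the precise convention used in this paper is fixed in Remark~\ref{rmk_x_cluster_mutation}. It implements the mutation $\mu_k$ of a pair encoded by an exchange matrix and a tuple of $y$-variables, and checks that $\mu_k$ is an involution.
\begin{verbatim}
from sympy import symbols, simplify

def mutate(B, y, k):
    """Y-seed mutation mu_k of (y, B): B is a skew-symmetric integer
    matrix (list of lists), y a list of sympy expressions."""
    n = len(B)
    Bp = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j]
                           + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    yp = [1 / y[k] if j == k
          else y[j] * y[k] ** max(B[k][j], 0) * (1 + y[k]) ** (-B[k][j])
          for j in range(n)]
    return Bp, yp

# Quiver of type A2 (one arrow 1 -> 2) with initial coefficients.
B = [[0, 1], [-1, 0]]
ys = list(symbols('y1 y2', positive=True))

# Mutating twice at the same vertex recovers the initial Y-seed.
Bp1, yp1 = mutate(B, ys, 0)
Bp2, yp2 = mutate(Bp1, yp1, 0)
assert Bp2 == B and all(simplify(u - v) == 0 for u, v in zip(yp2, ys))
\end{verbatim}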
\begin{definition}\cite{CZ2020}\label{def:legendrian mutation} Let $\ngraphfont{G}$ be a (local) $N$-graph and $e\in \ngraphfont{G}_i\subset \ngraphfont{G}$ be an edge between two trivalent vertices corresponding to an $\sfI$-cycle $[\gamma]=[\gamma(e)]$. The mutation $\mu_\gamma(\ngraphfont{G})$ of $\ngraphfont{G}$ along $\gamma$ is obtained by applying the local change depicted in Figure~\ref{figure:I-mutation}. \end{definition} \begin{figure}[ht] \subfigure[Legendrian mutation along $\sfI$-cycle.\label{figure:I-mutation}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \begin{scope} \draw [dashed] (0,0) circle [radius=1]; \draw [->,yshift=.5ex] (1.25,0) -- (1.75,0) node[midway, above] {$\mu_\gamma$}; \draw [<-,yshift=-.5ex] (1.25,0) -- (1.75,0) node[midway, below] {$\mu_{\gamma'}$}; \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1/2,0) to (1/2,0); \draw [blue, thick] ({-1*sqrt(3)/2},1*1/2)--(-1/2,0); \draw [blue, thick] ({-1*sqrt(3)/2},-1*1/2)--(-1/2,0); \draw [blue, thick] ({1*sqrt(3)/2},1*1/2)--(1/2,0); \draw [blue, thick] ({1*sqrt(3)/2},-1*1/2)--(1/2,0); \draw [blue, thick] (-1/2,0)-- node[midway,above] {$\gamma$}(1/2,0); \draw[thick,blue,fill=blue] (-1/2,0) circle (0.05); \draw[thick,blue,fill=blue] (1/2,0) circle (0.05); \end{scope} \begin{scope}[xshift=3cm] \draw [dashed] (0,0) circle [radius=1]; \draw [color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (0,-1/2) to (0,1/2); \draw [blue, thick] (-1*1/2,{1*sqrt(3)/2}) to (0,1/2) to (0,-1/2) node[midway,right] {$\gamma'$} to (-1*1/2,-{1*sqrt(3)/2}); \draw [blue, thick] (1*1/2,{1*sqrt(3)/2})--(0,1/2); \draw [blue, thick] (1*1/2,-{1*sqrt(3)/2})--(0,-1/2); \draw[thick,blue,fill=blue] (0,1/2) circle (0.05); \draw[thick,blue,fill=blue] (0,-1/2) circle (0.05); \end{scope} \end{tikzpicture} }} \subfigure[Legendrian mutation along $\sfY$-cycle.\label{figure:Y-mutation}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \begin{scope} \draw [->,yshift=.5ex] (1.25,0) -- (1.75,0) node[midway, above] {$\mu_\gamma$}; \draw [<-,yshift=-.5ex] (1.25,0) -- (1.75,0) node[midway, below] {$\mu_{\gamma'}$}; \draw [dashed] (0,0) circle [radius=1]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,1/2) to (0,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/4},-1/4) to (0,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-{sqrt(3)/4},-1/4) to (0,0); \draw [blue, thick] (-1*1/2,{1*sqrt(3)/2}) to (0,1/2) to (0,0) node[right] {$\gamma$} to ({-sqrt(3)/4},-1/4) to (-1,0); \draw [blue, thick] (1/2,{1*sqrt(3)/2}) to (0,1/2) to (0,0) to ({sqrt(3)/4},-1/4) to (1,0); \draw [blue, thick] ({-sqrt(3)/4},-1/4) to ({-1/2},{-sqrt(3)/2}); \draw [blue, thick] ({sqrt(3)/4},-1/4) to ({1/2},{-sqrt(3)/2}); \draw [red, thick] (0,0) to ({sqrt(3)/2},1/2); \draw [red, thick] (0,0) to ({-sqrt(3)/2},1/2); \draw [red, thick] (0,0) to (0,-1); \draw[thick,blue,fill=blue] (0,1/2) circle (0.05); \draw[thick,blue,fill=blue] ({-sqrt(3)/4},-1/4) circle (0.05); \draw[thick,blue,fill=blue] ({sqrt(3)/4},-1/4) circle (0.05); \draw[thick,black,fill=white] (0,0) circle (0.05); \end{scope} \begin{scope}[xshift=3cm] \draw [dashed] (0,0) circle [radius=1]; \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,1/2) to (0,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5] ({sqrt(3)/4},-1/4) to (0,0); \draw [color=cyclecolor1, line cap=round, line width=5, opacity=0.5]
(-{sqrt(3)/4},-1/4) to (0,0); \draw [blue, thick] (-1/2,{sqrt(3)/2}) to ({-sqrt(3)/4},1/4) to (-1,0); \draw [blue, thick] (1/2,{sqrt(3)/2}) to ({sqrt(3)/4},1/4) to (1,0); \draw [blue, thick] ({-1/2},{-sqrt(3)/2}) to (0,-1/2) to ({1/2},{-sqrt(3)/2}); \draw [blue, thick] ({sqrt(3)/4},1/4) to (0,0); \draw [blue, thick] ({-sqrt(3)/4},1/4) to (0,0); \draw [blue, thick] (0,-1/2) to (0,0); \draw [red, thick] ({-sqrt(3)/4},1/4) to ({-sqrt(3)/4},-1/4) to (0,-1/2) to ({sqrt(3)/4},-1/4) to ({sqrt(3)/4},1/4) to (0,1/2) to ({-sqrt(3)/4},1/4); \draw [red, thick] ({sqrt(3)/4},1/4) to ({sqrt(3)/2},1/2); \draw [red, thick] ({-sqrt(3)/4},1/4) to ({-sqrt(3)/2},1/2); \draw [red, thick] (0,-1/2) to (0,-1); \draw [red, thick] ({-sqrt(3)/4},-1/4) to (0,0); \draw [red, thick] ({sqrt(3)/4},-1/4) to (0,0); \draw [red, thick] (0,1/2) to (0,0) node[right] {$\gamma'$}; \draw[thick,red,fill=red] (0,1/2) circle (0.05); \draw[thick,red,fill=red] ({-sqrt(3)/4},-1/4) circle (0.05); \draw[thick,red,fill=red] ({sqrt(3)/4},-1/4) circle (0.05); \draw[thick,black,fill=white] (0,0) circle (0.05); \draw[thick,black,fill=white] ({-sqrt(3)/4},1/4) circle (0.05); \draw[thick,black,fill=white] ({sqrt(3)/4},1/4) circle (0.05); \draw[thick,black,fill=white] (0,-1/2) circle (0.05); \end{scope} \end{tikzpicture} }} \subfigure[Legendrian mutation along degenerate $\sfI$-cycles\label{figure:degen I-cycle}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \begin{scope}[local bounding box=Before, xshift=0cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (0,0) node[above] {$\gamma_I$}; \draw[Dble={blue and green},line width=2] (-1.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (1.5,0) -- (0,0); \draw[Dble={green and blue},line width=2] (-1.5,0) -- ++(120:3); \draw[Dble={green and blue},line width=2] (-1.5,0) -- ++(-120:3); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(60:3); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(-60:3); \draw[color=cyclecolor1, opacity=0.5, line width=7,line cap=round] (-1.5,0) -- (1.5,0); \end{scope} \begin{scope}[local bounding box=After, xshift=9cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (0,0) node[right] {$\gamma'_I$}; \begin{scope}[rotate=-90] \draw[Dble={blue and green},line width=2] (-1.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (1.5,0) -- (0,0); \draw[Dble={green and blue},line width=2] (-1.5,0) -- ++(120:3); \draw[Dble={green and blue},line width=2] (-1.5,0) -- ++(-120:3); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(60:3); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(-60:3); \draw[color=cyclecolor1, opacity=0.5, line width=7,line cap=round] (-1.5,0) -- (1.5,0); \end{scope} \end{scope} \draw[->] ($(Before.east)+(0.75,0.25)$) -- ($(After.west)+(-0.75,0.25)$) node[midway, above] {$\mu_{\gamma_I}$}; \draw[<-] ($(Before.east)+(0.75,-0.25)$) -- ($(After.west)+(-0.75,-0.25)$) node[midway, below] {$\mu_{\gamma'_I}$}; \end{tikzpicture} }} \subfigure[Legendrian local mutation along degenerate $\sfI$-cycles\label{figure:local degen I-cycle}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \begin{scope}[local bounding box=Before, xshift=0cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (0,0) node[above] {$\gamma_I$}; \draw[Dble={blue and green},line width=2] (-1.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (3,0); \draw[Dble={green and blue},line width=2] 
(-1.5,0) -- ++(120:3); \draw[Dble={green and blue},line width=2] (-1.5,0) -- ++(-120:3); \draw[color=cyclecolor1, opacity=0.5, line width=7,line cap=round] (-1.5,0) -- (3,0); \end{scope} \begin{scope}[local bounding box=After, xshift=9cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (0,0) node[above] {$\gamma'_I$}; \draw[Dble={blue and green},line width=2] (150:3) -- (30:3); \draw[Dble={blue and green},line width=2, line cap=round] (-150:3) -- (-0.5,0); \draw[Dble={blue and green},line width=2] (-0.5,0) -- (-30:3); \draw[Dble={blue and green},line width=2] (-0.5,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7,line cap=round] (-0.5,0) -- (3,0); \end{scope} \draw[->] ($(Before.east)+(0.75,0)$) -- ($(After.west)+(-0.75,0)$) node[midway, above] {$\mu_{\gamma_I}$}; \end{tikzpicture} }} \subfigure[Legendrian local mutation along long $\sfI$-cycle\label{figure:local long I-cycle}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \begin{scope}[local bounding box=Before, xshift=0cm] \draw[red] (-2,0) node[above] {$\gamma$}; \draw[dashed] (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[thick, red] (-3,0) -- (3,0) (0,-3) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (45:3); \draw[Dble={green and blue},line width=2] (135:3) -- (0,0); \draw[Dble={green and blue},line width=2] (0,0) -- (-135:3); \draw[Dble={green and blue},line width=2] (-45:3) -- (0,0); \end{scope} \begin{scope}[local bounding box=After, xshift=9cm] \draw[red] (2,0) node[above] {$\gamma'$}; \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, opacity=0.5, line width=5] (-3,0) -- (3,0); \draw[thick, red] (-3,0) -- (3,0) (-3,1) -- (3,1) (-3,-1) -- (3,-1) (0,-3) -- (0,3); \draw[Dble={green and blue},line width=2] (0,1) -- (45:3); \draw[Dble={green and blue},line width=2] (135:3) -- (0,1); \draw[Dble={green and blue},line width=2] (0,-1) -- (-135:3); \draw[Dble={green and blue},line width=2] (-45:3) -- (0,-1); \draw[blue,line width=2] (-0.1,1) to[out=-180,in=180] (-0.1,0) (0.1,1) to[out=0,in=0] (0.1,0) (-0.1,-1) to[out=-180,in=180] (-0.1,0) (0.1,-1) to[out=0,in=0] (0.1,0); \draw[green,line width=2] (0,0.95) to[out=-180,in=180] (0,0.05) (0,0.95) to[out=0,in=0] (0,0.05) (0,-0.95) to[out=-180,in=180] (0,-0.05) (0,-0.95) to[out=0,in=0] (0,-0.05); \end{scope} \draw[->] ($(Before.east)+(0.75,0)$) -- ($(After.west)+(-0.75,0)$) node[midway, above] {$\mu_{\gamma}$}; \end{tikzpicture} }} \subfigure[Legendrian local mutation along degenerate long $\sfI$-cycles\label{figure:local degen long I-cycle}]{ \makebox[0.45\textwidth]{ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \begin{scope}[local bounding box=Before, xshift=0cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=blue!50!green!50] (-2,0) node[above] {$\gamma_I$}; \draw[thick, red] (45:3) -- (-135:3) (135:3) -- (-45:3); \draw[Dble={blue and green},line width=2] (-3,0) -- (0,0); \draw[Dble={blue and green},line width=2] (3,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) --(0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,0) -- (3,0); \end{scope} \begin{scope}[local bounding box=After, xshift=9cm] \draw[dashed] (0,0) circle (3); \clip (0,0) circle (3); \draw[thick, red, rounded corners] (45:3) -- (0,1) -- (-0.5, 0.5) -- (0.5, -0.5) -- (0,-1) -- (-135:3) (-45:3) -- (0,-1) -- (-0.5, -0.5) -- (0.5, 0.5) -- (0,1) -- (135:3); 
\draw[Dble={green and blue},line width=2] (0,3) --(0,1); \draw[Dble={green and blue},line width=2] (0,0.5) --(0,0); \draw[Dble={green and blue},line width=2] (0,-0.5) --(0,-1); \draw[Dble={green and blue},line width=2] (0,-3) -- (0,-1); \draw[Dble={green and blue},line width=2] (0,-0.5) --(0,0); \draw[Dble={green and blue},line width=2] (0,0.5) -- (0,1); \begin{scope} \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \draw[color=cyclecolor1, opacity=0.5, line width=7] (-3,0) -- (3,0); \end{scope} \begin{scope}[yshift=-1cm] \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \end{scope} \begin{scope}[yshift=1cm] \draw[Dble={green and blue},line width=2] (0,0) -- (-3,0); \draw[Dble={green and blue},line width=2] (0,0) -- (3,0); \end{scope} \draw[color=blue!50!green!50] (2,0) node[above] {$\gamma'_I$}; \end{scope} \draw[->] ($(Before.east)+(0.75,0)$) -- ($(After.west)+(-0.75,0)$) node[midway, above] {$\mu_{\gamma_I}$}; \end{tikzpicture} }} \caption{Legendrian (local) mutations at (degenerate, long) $\sfI$- and $\sfY$-cycles.} \label{fig:Legendrian mutation on N-graphs} \end{figure}
For a $\sfY$-cycle, the Legendrian mutation is depicted in Figure~\ref{figure:Y-mutation}. Note that the mutation at a $\sfY$-cycle can be decomposed into a sequence of Moves~\Move{I} and \Move{II} together with a mutation at an $\sfI$-cycle. One can easily verify the Legendrian (local) mutations on degenerate $N$-graphs shown in Figures~\ref{figure:degen I-cycle} and \ref{figure:local degen I-cycle} via perturbation. For Figures~\ref{figure:local long I-cycle} and \ref{figure:local degen long I-cycle}, see Appendix~\ref{appendix:local mutations}. Let us recall that our main purpose is to find exact embedded Lagrangian fillings of Legendrian links. The following proposition guarantees that Legendrian mutation preserves the embedding property of Lagrangian fillings. \begin{proposition}\cite[Lemma~7.4]{CZ2020} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be a free $N$-graph. Then the mutation $\mu(\ngraphfont{G})$ at any $\sfI$- or $\sfY$-cycle is again a free $N$-graph. \end{proposition} An important observation is that the Legendrian mutation on $(\ngraphfont{G},\ngraphfont{B})$ induces a $Y$-seed mutation on the induced seed $\Psi(\ngraphfont{G},\ngraphfont{B})$. \begin{proposition}[{\cite[\S7.2]{CZ2020}}]\label{proposition:equivariance of mutations} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph and $\ngraphfont{B}$ be a set of good cycles in $H_1(\Lambda(\ngraphfont{G}))$. Let $\mu_{\gamma_i}(\ngraphfont{G},\ngraphfont{B})$ be a Legendrian mutation of $(\ngraphfont{G},\ngraphfont{B})$ along a one-cycle $\gamma_i$. Then \[ \Psi(\mu_{\gamma_i}(\ngraphfont{G},\ngraphfont{B}))=\mu_{i}(\Psi(\ngraphfont{G},\ngraphfont{B})). \] Here, $\mu_{i}$ is the $Y$-seed mutation at the vertex $i$ \textup{(}cf. Remark~\ref{rmk_x_cluster_mutation}\textup{)}. \end{proposition} \begin{remark}\label{remark:boundary-Legendrian isotopy} Let $\lambda$ and $\lambda'$ be two isotopic closures of positive $N$-braids. By fixing an isotopy between them, we have an annular $N$-graph $\ngraphfont{G}_{\lambda \lambda'}$ which induces a bijection between the sets of $N$-graphs for $\lambda$ and $\lambda'$ by attaching $\ngraphfont{G}_{\lambda\lambda'}$.
This bijection is indeed equivariant under Legendrian mutation whenever the mutation is defined; that is, for $[\gamma] \in H_1(\Lambda(\ngraphfont{G}))$, \[ \mu_{\gamma}(\ngraphfont{G}_{\lambda \lambda'} \cdot \ngraphfont{G}) = \ngraphfont{G}_{\lambda \lambda'} \cdot \mu_{\gamma}(\ngraphfont{G}). \] In other words, two $\partial$-Legendrian isotopic $N$-graphs will generate equivariantly bijective sets of $N$-graphs under Legendrian mutations. \end{remark} \begin{remark}\label{remark:Stabilization} Similarly, a stabilization $S(\ngraphfont{G})$ of $\ngraphfont{G}$ will generate equivariantly bijective sets of $N$-graphs under Legendrian mutations as well, since the stabilized part of $S(\ngraphfont{G})$ is away from the chosen cycles and does not affect Legendrian mutability. \end{remark}
\subsubsection{Flags on $\lambda$} Let $\lambda=\lambda_\beta$ be a Legendrian in $J^1\mathbb{S}^1$. It determines an $(N-1)$-tuple $X=(X_1,\dots, X_{N-1})$ of points in $\mathbb{S}^1$ given by the letters $\sigma_1,\dots,\sigma_{N-1}$ of the braid word $\beta$. Let $\{f_j\}_{j\in J}$ be the set of closures of connected components of $\mathbb{S}^1\setminus X$. The flags $\mathcal{F}_\lambda=\{\cF_\lambda^\bullet(f_j)\}_{j\in J}$ in~$\bbC^N$ satisfying the conditions in \eqref{equation:flag conditions} and having trivial monodromy of $\{\cF^N_\lambda(f_j)\}_{j\in J}$ along~$\mathbb{S}^1$ will simply be called \emph{flags on $\lambda$}. It is well known that the moduli space $\cM(\lambda)$ of such flags $\mathcal{F}_\lambda$ up to $\operatorname{GL}_N$ is isomorphic to~$\Sh_\lambda^1(\R^2)_0$, which is a Legendrian isotopy invariant; see \cite[Theorem 1.1]{STZ2017}. As before, we will regard $\mathcal{F}_\lambda$ as a formal parameter for $\cM(\lambda)$. \begin{definition}\label{def:good N-graph} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be an $N$-graph, and let $\mathcal{F}_\lambda$ be flags adapted to $\lambda\subset J^1\partial\mathbb{D}^2$ given by $\partial\ngraphfont{G}$. An $N$-graph $\ngraphfont{G}$ is called \emph{deterministic} if, for each $\mathcal{F}_\lambda$, there exist unique flags $\cF\in \widetilde{\cM}(\ngraphfont{G})$ as in Definition~\ref{def:flag moduli space} satisfying $\cF|_{\partial\mathbb{D}^2}=\mathcal{F}_\lambda$. \end{definition} Note that $\ngraphfont{G}(a,b,c)$ in the introduction is deterministic in an obvious way. If an $N$-graph $\ngraphfont{G}\subset \mathbb{D}^2$ is deterministic and $[\ngraphfont{G}]=[\ngraphfont{G}']$, then so is $\ngraphfont{G}'$. \begin{proposition}\label{prop_mutation_preserves_deterministic} Let $\ngraphfont{G}\subset \mathbb{D}^2$ be a deterministic $N$-graph. Then, for any $\sfI$- or $\sfY$-cycle~$\gamma$, the mutation~$\mu_\gamma(\ngraphfont{G})$ is again a deterministic $N$-graph. \end{proposition} \begin{proof} The proof is straightforward from the notion of a deterministic $N$-graph in Definition~\ref{def:good N-graph} and the Legendrian mutation depicted in Figure~\ref{figure:I-mutation}. Note that the Legendrian mutation~$\mu_\gamma(\ngraphfont{G})$ at a $\sfY$-cycle $\gamma$ is also deterministic, since $\mu_\gamma(\ngraphfont{G})$ is a composition of Moves \Move{I} and \Move{II} and a mutation at an $\sfI$-cycle. \end{proof} \begin{remark}\label{remark:Poisson variety} When an $N$-graph $\ngraphfont{G}$ is deterministic, the coefficient tuple $\bfy$, originally defined on $\bbC[\cM(\ngraphfont{G})]$, can be restricted to the coordinate ring $\bbC[\cM(\lambda)]$ of the moduli space $\cM(\lambda)$ of flags on~$\lambda$.
It is important to note that the moduli space $\cM(\lambda)$ is actually a cluster Poisson variety (also called an $\cX$-cluster variety or a cluster $\cX$-variety) due to the result of Shen--Weng \cite[Theorem~1.7]{SW2019}. \end{remark}
\subsection{\texorpdfstring{$N$-graphs}{N-graphs} of finite or affine types} \subsubsection{Linear \texorpdfstring{$N$-graphs}{N-graphs}}\label{sec:linear} For $n\ge 1$, let us define positive $2$-braids \begin{align*} \beta_0(\dynkinfont{A}_n)&\colonequals \sigma_1^{n+1},& \beta(\dynkinfont{A}_n) &\colonequals \sigma_1^{n+3} = \Delta_2 \beta_0(\dynkinfont{A}_n) \Delta_2, \end{align*} where $\Delta_N$ is the half-twist braid of $N$-strands. Then we define $\lambda(\dynkinfont{A}_n)$ to be the rainbow closure of~$\beta_0(\dynkinfont{A}_n)$. \begin{align*} &\begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick] (-0.5,-0.25) -- (-1,-0.25) to[out=180,in=0] (-1.5,0) to[out=0,in=180] (-1,0.25) -- (1,0.25) to[out=0,in=180] (1.5,0) to[out=180,in=0] (1,-0.25) -- (0.5, -0.25); \draw[thick] (-0.5,-0.75) -- (-1,-0.75) to[out=180,in=0] (-2,0) to[out=0,in=180] (-1,0.75) -- (1,0.75) to[out=0,in=180] (2,0) to[out=180,in=0] (1,-0.75) -- (0.5,-0.75); \draw[thick] (-0.5,-0.85) rectangle (0.5,-0.15); \draw (0,-0.5) node {$\scriptstyle n+1$}; \draw[dashed] (-1,-1) rectangle (1,0) (0,-1) node[below] {$\beta_0(\dynkinfont{A}_n)$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick] (-1.5,-0.25) -- (-1,-0.25) (-1.5,0.25) -- (-1,0.25) (0.5,-0.25) -- (0,-0.25) (0.5,0.25) -- (0,0.25) (-1,-0.35) rectangle node {$r$} (0,0.35); \end{tikzpicture}&= \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick, rounded corners] (-1.5,0.25) -- (-1,0.25) -- (-0.5, -0.25) -- (-0.3, -0.25) (-1.5,-0.25) -- (-1,-0.25) -- (-0.5, 0.25) -- (-0.3, 0.25) (1.5,0.25) -- (1,0.25) -- (0.5, -0.25) -- (0.3, -0.25) (1.5,-0.25) -- (1,-0.25) -- (0.5, 0.25) -- (0.3, 0.25); \draw[line cap=round, dotted] (-0.3,0.25) -- (0.3,0.25) (-0.3,-0.25) -- node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_r$} (0.3,-0.25); \end{tikzpicture} \end{align*} One can easily check that the quiver $\clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)$, constructed from the \emph{brick diagram} of $\beta_0(\dynkinfont{A}_n)$ as described in \cite{GSW2020b}, looks as shown in Figure~\ref{figure:brick linear quiver}, and that there is a canonical $N$-graph with cycles $(\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{A}_n),\ngraphfont{B}^{\mathsf{brick}}(\dynkinfont{A}_n))$ on $\mathbb{D}^2$ as shown in Figure~\ref{figure:brick linear N-graph} such that \[ \clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)= \clusterfont{Q}(\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{A}_n), \ngraphfont{B}^{\mathsf{brick}}(\dynkinfont{A}_n)). \] The colors on the cycles in Figure~\ref{figure:brick linear N-graph} have nothing to do with the bipartite coloring.
\begin{figure}[ht] \subfigure[Quiver $\clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)$\label{figure:brick linear quiver}]{$ \clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)=\begin{tikzpicture}[baseline=-.5ex, scale=1.2] \draw[gray] (-3,0.25) -- (3,0.25) (-3,-0.25) -- (3,-0.25); \foreach \i in {-2, -1, 0, 1, 2} { \draw[gray] (\i, 0.25) -- (\i, -0.25) node[below] {$\sigma_1$}; } \draw (0,-0.5) node[below] {$\underbrace{\hphantom{\hspace{4.8cm}}}_{n+1}$}; \node[Dnode] (A1) at (-1.5,0) {}; \node[Dnode] (A2) at (-0.5,0) {}; \node (A3) at (0.5,0) {$\cdots$}; \node[Dnode] (A4) at (1.5,0) {}; \draw[->] (A1) -- (A2); \draw[->] (A2) -- (A3); \draw[->] (A3) -- (A4); \end{tikzpicture} $} \subfigure[$N$-graph $\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{A}_n)$\label{figure:brick linear N-graph}]{$ (\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{A}_n),\ngraphfont{B}^{\mathsf{brick}}(\dynkinfont{A}_n))= \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \draw[thick, rounded corners] (-3,-1) rectangle (2,1); \begin{scope} \clip (-3,-1) rectangle (2,1); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2,0) -- (-1,0) (0.5,0) -- (1,0); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1,0) -- (-0.5,0); \draw[thick, blue] (-3,0) -- (-0.5, 0) (0.5,0) -- (2,0); \foreach \x in {-2,-1,1} { \draw[thick, blue, fill] (\x, -1) -- (\x,0) circle (1pt); } \draw[thick, blue, dashed] (-0.5,0) -- (0.5,0); \end{scope} \draw (-0.5, -1) node[below] {$\underbrace{\hphantom{\hspace{4cm}}}_{n+1}$}; \end{tikzpicture} $} \caption{Linear quiver $\clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)$ and $N$-graph $\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{A}_n)$} \label{figure:bricks linear} \end{figure} Then the quiver $\clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_n)$ is mutation equivalent to the bipartite quiver $\clusterfont{Q}(\dynkinfont{A}_n)$ as depicted in Figure~\ref{figure:linear quiver}. 
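This mutation equivalence can be verified by brute force for small $n$. The following Python sketch (our own illustration, independent of the construction above) enumerates the mutation class of the exchange matrix of $\clusterfont{Q}^{\mathsf{brick}}(\dynkinfont{A}_3)$ up to simultaneous relabeling of the vertices and checks that it contains the bipartite orientation.
\begin{verbatim}
from itertools import permutations

def mutate(B, k):
    """Matrix mutation mu_k (same rule as in the earlier sketch)."""
    n = len(B)
    return tuple(tuple(-B[i][j] if k in (i, j)
                       else B[i][j] + (abs(B[i][k]) * B[k][j]
                                       + B[i][k] * abs(B[k][j])) // 2
                       for j in range(n)) for i in range(n))

def canonical(B):
    """Representative of B up to simultaneous permutation of indices."""
    n = len(B)
    return min(tuple(tuple(B[p[i]][p[j]] for j in range(n))
                     for i in range(n))
               for p in permutations(range(n)))

def mutation_class(B):
    start = tuple(map(tuple, B))
    seen, todo = {canonical(start)}, [start]
    while todo:
        C = todo.pop()
        for k in range(len(C)):
            D = mutate(C, k)
            if canonical(D) not in seen:
                seen.add(canonical(D))
                todo.append(D)
    return seen

linear = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]       # 1 -> 2 -> 3
bipartite = [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]    # 1 -> 2 <- 3
assert canonical(tuple(map(tuple, bipartite))) in mutation_class(linear)
\end{verbatim}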
\begin{figure}[ht] \subfigure[Bipartite linear quiver $\clusterfont{Q}(\dynkinfont{A}_n)$\label{figure:linear quiver}]{\makebox[.45\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,xscale=-1] \useasboundingbox (-1,-2.1) rectangle (2.1,2.1); \draw (2,0) node[left] {$\clusterfont{Q}(\dynkinfont{A}_n)=$}; \node[Dnode, ynode] (A1) at (-1,0) {}; \node[Dnode, gnode] (A2) at (0,0) {}; \node[Dnode, ynode] (A3) at (1,0) {}; \node[Dnode] (A4) at (2,0) {}; \draw[<-] (A1) -- (A2); \draw[->] (A2) -- (A3); \draw[dotted] (A3) -- (A4); \draw (0.5,-0.5) node {$\underbrace{\hphantom{\hspace{3cm}}}_n$}; \end{tikzpicture} $}} \subfigure[Linear $N$-graph $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$\label{figure:linear N-graph}]{\makebox[.45\textwidth]{$ \begin{tikzpicture}[baseline=-.5ex,xscale=0.6,yscale=0.6] \useasboundingbox (-3.5,-3.5) rectangle (3.5,3.5); \draw[thick] (0,0) circle (3); \clip (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) node[midway, above,color=black,sloped,opacity=1] {$\gamma_2$} (1, 0) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) node[midway, above, color=black, sloped,opacity=1] {$\gamma_1$} (-0.5, -0.5) -- (0, 0) (1.5, -0.5) -- (2.5, 0.5) node[midway, above, color=black, sloped,opacity=1] {$\gamma_n$}; \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (1,0) (0.5,0) node {$\scriptstyle\cdots$} (0,0) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \end{tikzpicture} $}} \caption{Bipartite linear quiver $\clusterfont{Q}(\dynkinfont{A}_n)$ and $N$-graph $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ with chosen cycles} \end{figure} \begin{definition}[Linear $N$-graphs] For $n\ge 1$, the \emph{linear $N$-graph} $(\ngraphfont{G}(\dynkinfont{A}_n), \ngraphfont{B}(\dynkinfont{A}_n))$ is the $2$-graph on $\mathbb{D}^2$ depicted in Figure~\ref{figure:linear N-graph}, which satisfies that \[ \clusterfont{Q}(\ngraphfont{G}(\dynkinfont{A}_n), \ngraphfont{B}(\dynkinfont{A}_n))=\clusterfont{Q}(\dynkinfont{A}_n). \] \end{definition} It is easy but important to note that the $N$-graph $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ has symmetries as follows: \begin{lemma} The $N$-graph $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ with cycles is invariant under the conjugation. Moreover, when $n$ is odd, it is invariant under $\pi$-rotation and the interchanges \[ \gamma_i \leftrightarrow \gamma_{n+1-i} \] for cycles $\gamma_i\in \ngraphfont{B}(\dynkinfont{A}_n)$. \end{lemma} \subsubsection{Tripod \texorpdfstring{$N$-graphs}{N-graphs}}\label{sec:tripods} For $a,b,c\ge 1$, we define a Legendrian link $\lambda(a,b,c)$, which is the closure of the braid $\beta(a,b,c)$ \begin{align*} \lambda(a,b,c) &= \lambda_{\beta(a,b,c)},& \beta(a,b,c)&=\sigma_2\sigma_1^{a+1}\sigma_2\sigma_1^{b+1}\sigma_2\sigma_1^{c+1}, \end{align*} where $\beta(a,b,c)$ is equivalent to the following: \begin{align*} \beta(a,b,c)&=\sigma_2\sigma_1^{a+1}\sigma_2\sigma_1^{b+1}\sigma_2\sigma_1^{c+1} =\sigma_2\sigma_1(\sigma_2\sigma_1)\sigma_2^a\sigma_1^{b-1}\sigma_2^c(\sigma_1\sigma_2)\sigma_1 =\Delta_3\sigma_1\sigma_2^a\sigma_1^{b-1}\sigma_2^c\Delta_3.
\end{align*} Hence $\lambda(a,b,c)$ in $J^1\mathbb{S}^1$ corresponds to the rainbow closure of the braid $\beta_0(a,b,c)=\sigma_1\sigma_2^a\sigma_1^{b-1}\sigma_2^c$. \begin{align*} &\begin{tikzpicture}[yshift=0.45cm,baseline=-.5ex,scale=0.6] \draw[thick] (-3,-1)--(-1.5,-1) (-0.5,-1) -- (1.5, -1) (2.5,-1) -- (3,-1); \draw[thick, rounded corners] (-3,-1.5) -- (-2.5,-1.5) -- (-2,-2) -- (0,-2) (-0.5,-1.5) -- (0, -1.5) (1, -1.5) -- (1.5,-1.5) (2.5, -1.5) -- (3,-1.5); \draw[thick, rounded corners] (-3,-2) -- (-2.5,-2) -- (-2,-1.5) -- (-1.5,-1.5) (1,-2) -- (3,-2); \draw[thick] (-1.5,-1.6) rectangle node {$\scriptstyle a$} (-0.5, -0.9) (0,-1.4) rectangle node {$\scriptstyle b-1$} (1,-2.1) (1.5,-0.9) rectangle node {$\scriptstyle c$} (2.5,-1.6); \draw[thick] (-3,-1) to[out=180,in=0] (-3.5,-0.75) to[out=0,in=180] (-3,-0.5) -- (3,-0.5) to[out=0,in=180] (3.5,-0.75) to[out=180,in=0] (3,-1); \draw[thick] (-3,-1.5) to[out=180,in=0] (-4,-0.75) to[out=0,in=180] (-3,0) -- (3,0) to[out=0,in=180] (4,-0.75) to[out=180,in=0] (3,-1.5); \draw[thick] (-3,-2) to[out=180,in=0] (-4.5,-0.75) to[out=0,in=180] (-3,0.5) -- (3,0.5) to[out=0,in=180] (4.5,-0.75) to[out=180,in=0] (3,-2); \draw[dashed] (-3,-2.2) rectangle (3,-0.8) (0,-2.2) node[below] {$\beta_0(a,b,c)$}; \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick] (-1.5,-0.25) -- (-1,-0.25) (-1.5,0.25) -- (-1,0.25) (0.5,-0.25) -- (0,-0.25) (0.5,0.25) -- (0,0.25) (-1,-0.35) rectangle node {$r$} (0,0.35); \end{tikzpicture}&= \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[thick, rounded corners] (-1.5,0.25) -- (-1,0.25) -- (-0.5, -0.25) -- (-0.3, -0.25) (-1.5,-0.25) -- (-1,-0.25) -- (-0.5, 0.25) -- (-0.3, 0.25) (1.5,0.25) -- (1,0.25) -- (0.5, -0.25) -- (0.3, -0.25) (1.5,-0.25) -- (1,-0.25) -- (0.5, 0.25) -- (0.3, 0.25); \draw[line cap=round, dotted] (-0.3,0.25) -- (0.3,0.25) (-0.3,-0.25) -- node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_r$} (0.3,-0.25); \end{tikzpicture} \end{align*} The quiver $\clusterfont{Q}^{\mathsf{brick}}(a,b,c)$ obtained from the brick diagram of $\beta_0(a,b,c)$ and the corresponding $N$-graph with cycles $(\ngraphfont{G}^{\mathsf{brick}}(a,b,c),\ngraphfont{B}^{\mathsf{brick}}(a,b,c))$ on $\mathbb{D}^2$ are shown in Figure~\ref{figure:bricks tripod}. That is, \begin{align*} \clusterfont{Q}^{\mathsf{brick}}(a,b,c)&= \clusterfont{Q}(\ngraphfont{G}^{\mathsf{brick}}(a,b,c), \ngraphfont{B}^{\mathsf{brick}}(a,b,c)). \end{align*} As before, the colors on the cycles in Figure~\ref{figure:brick tripod N-graphs} have nothing to do with the bipartite coloring.
\begin{figure}[ht] \subfigure[Quiver $\clusterfont{Q}^{\mathsf{brick}}(a,b,c)$\label{figure:brick tripod quiver}]{$ \clusterfont{Q}^{\mathsf{brick}}(a,b,c)=\begin{tikzpicture}[baseline=-.5ex,xscale=1.2, yscale=-1.2] \draw[gray] (-5,0.5) -- (5,0.5) (-5,0) -- (5,0) (-5,-0.5) -- (5,-0.5); \draw[gray] (-4.5,0) -- (-4.5,-0.5) node[above] {$\sigma_2$} (-4,0) -- (-4, 0.5) node[below] {$\sigma_1$} (-3.5, 0) -- (-3.5, 0.5) node[below] {$\sigma_1$} (-3, 0) -- (-3, 0.5) node[below] {$\sigma_1$} (-2.5,0.5) node[below] {$\cdots$} (-2,0) -- (-2,0.5) node[below] {$\sigma_1$} (-1.5,0) -- (-1.5,0.5) node[below] {$\sigma_1$}; \draw[gray] (-1,0) -- (-1, -0.5) node[above] {$\sigma_2$} (-0.5,0) -- (-0.5, -0.5) node[above] {$\sigma_2$} (0, 0) -- (0, -0.5) node[above] {$\sigma_2$} (0.5,-0.5) node[above] {$\cdots$} (1,0) -- (1,-0.5) node[above] {$\sigma_2$} (1.5,0) -- (1.5, -0.5) node[above] {$\sigma_2$}; \draw[gray] (2,0) -- (2,0.5) node[below] {$\sigma_1$} (2.5,0) -- (2.5,0.5) node[below] {$\sigma_1$} (3,0) -- (3,0.5) node[below] {$\sigma_1$} (3.5,0.5) node[below] {$\cdots$} (4,0) -- (4,0.5) node[below] {$\sigma_1$} (4.5,0) -- (4.5,0.5) node[below] {$\sigma_1$}; \draw (-2.75,0.5) node[yshift=-4ex] {$\underbrace{\hphantom{\hspace{3cm}}}_a$}; \draw (0.25,-0.5) node[yshift=4ex] {$\overbrace{\hphantom{\hspace{3cm}}}^{b-1}$}; \draw (3.25,0.5) node[yshift=-4ex] {$\underbrace{\hphantom{\hspace{3cm}}}_c$}; \node[Dnode] (A1) at (-3.75, 0.25) {}; \node[Dnode] (A2) at (-3.25, 0.25) {}; \node[Dnode] (A3) at (-1.75, 0.25) {}; \node[Dnode] (Aa) at (0.25, 0.25) {}; \node (Adots) at (-2.5,0.25) {$\cdots$}; \node[Dnode] (B1) at (2.25, 0.25) {}; \node[Dnode] (B2) at (2.75,0.25) {}; \node[Dnode] (Bb) at (4.25, 0.25) {}; \node (Bdots) at (3.5,0.25) {$\cdots$}; \node[Dnode] (C1) at (-2.75, -0.25) {}; \node[Dnode] (C2) at (-0.75, -0.25) {}; \node[Dnode] (C3) at (-0.25, -0.25) {}; \node[Dnode] (Cc) at (1.25, -0.25) {}; \node (Cdots) at (0.5, -0.25) {$\cdots$}; \draw[->] (A1) -- (A2); \draw[->] (A2) -- (Adots); \draw[->] (Adots) -- (A3); \draw[->] (A3) -- (Aa); \draw[->] (Aa) -- (B1); \draw[->] (B1) -- (B2); \draw[->] (B2) -- (Bdots); \draw[->] (Bdots) -- (Bb); \draw[->] (C1) -- (C2); \draw[->] (C2) -- (C3); \draw[->] (C3) -- (Cdots); \draw[->] (Cdots) -- (Cc); \draw[->] (Aa) -- (C1); \end{tikzpicture} $} \subfigure[$N$-graph $\ngraphfont{G}^{\mathsf{brick}}(a,b,c)$\label{figure:brick tripod N-graphs}]{$ (\ngraphfont{G}^{\mathsf{brick}}(a,b,c),\ngraphfont{B}^{\mathsf{brick}}(a,b,c))= \begin{tikzpicture}[baseline=-.5ex,scale=1.2] \draw[thick, rounded corners] (-4,-1) rectangle (3,1); \begin{scope} \clip (-4,-1) rectangle (3,1); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-3.5,0.5) -- (-1, 0.5) (-2.5,-0.5) -- (-2.25,-0.5) (-0.5, 0.5) -- (-0.25, 0.5) (1, -0.5) -- (1.5, -0.5) (2.25, -0.5) -- (2.5, -0.5); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-3,-0.5) -- (-2.5,-0.5) (-1.75, -0.5) -- (-1.5, -0.5) (-1, 0.5) -- (-0.5, 0.5) (0.25,0.5) -- (0.5,0.5) (1.5, -0.5) -- (1.75, -0.5); \draw[color=cyan, line cap=round, line width=5, opacity=0.5] (-1, 0.5) -- (-1, -0.5) (-1, -0.5) -- (1, -0.5) (-0.5, 0.5) -- (-0.5, -0.5) (0.5, -0.5) -- (0.5, 0.5) (-1.5,-0.5) -- (-1,-0.5); \draw[thick, blue] (-4,-0.5) -- (-2.25, -0.5) (-4, 0.5) -- (-0.25, 0.5) (-1.75, -0.5) -- (-1,-0.5) (-1, -0.5) -- (-0.25, -0.5) (0.25, -0.5) -- (1.75,-0.5) (0.25, 0.5) -- (3,0.5) (2.25,-0.5) -- (3,-0.5); \draw[thick, blue, fill] (-3.5, -0.5) -- (-3.5, 0.5) circle (1pt) (-3, -1) -- (-3, -0.5) circle (1pt) (-2.5, -1) -- (-2.5, -0.5) 
circle (1pt) (-1.5,-1) -- (-1.5,-0.5) circle (1pt) (-1, -0.5) -- (-1, 0.5) circle (1pt) (-0.5, -0.5) -- (-0.5, 0.5) circle (1pt) (0.5, -0.5) -- (0.5, 0.5) circle (1pt) (1, -1) -- (1, -0.5) circle (1pt) (1.5, -1) -- (1.5, -0.5) circle (1pt) (2.5, -1) -- (2.5,-0.5) circle (1pt); \draw[thick, blue, dotted] (-2.25, -0.5) -- (-1.5, -0.5) (-0.25, -0.5) -- (0.25, -0.5) (-0.25, 0.5) -- (0.25, 0.5) (1.75, -0.5) -- (2.25, -0.5); \draw[thick, red] (-3.5,-1) -- (-3.5,-0.5) (-1, -1) -- (-1, -0.5) (-0.5,-1) -- (-0.5, -0.5) (0.5, -1) -- (0.5,-0.5); \draw[thick, red, rounded corners] (-4,0) -- (-3.75,0) -- (-3.5,-0.5) (-3.5, -0.5) -- (-3.25, 0) -- (-1.25, 0) -- (-1, -0.5) (-1, -0.5) -- (-0.75, 0) -- (-0.5, -0.5) (-0.5, -0.5) -- (-0.25, 0) (0.25,0) -- (0.5,-0.5) (0.5, -0.5) -- (0.75, 0) -- (1, 0) -- (3,0); \draw[thick, red, dotted] (-0.25, 0) -- (0.25,0); \foreach \x in {-3.5, -1, -0.5, 0.5} { \draw[thick, fill=white] (\x,-0.5) circle (1pt) ; } \end{scope} \draw (-2.25, -1) node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_{a}$}; \draw (-.25, -1) node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_{b-1}$}; \draw (1.75, -1) node[below] {$\underbrace{\hphantom{\hspace{2cm}}}_{c}$}; \end{tikzpicture} $} \caption{Tripod quiver $\clusterfont{Q}^{\mathsf{brick}}(a,b,c)$ and $N$-graph $\ngraphfont{G}^{\mathsf{brick}}(a,b,c)$} \label{figure:bricks tripod} \end{figure} This quiver $\clusterfont{Q}^{\mathsf{brick}}(a,b,c)$ is mutation equivalent to the quiver $\clusterfont{Q}(a,b,c)$, called the \emph{tripod quiver of type $(a,b,c)$}, depicted in Figure~\ref{figure:tripod quiver}. \begin{definition}[Tripod $N$-graphs] For $a,b,c\ge 1$, the \emph{tripod $N$-graph} $(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))$ is a free $3$-graph on $\mathbb{D}^2$ depicted in Figure~\ref{figure:tripod N-graph}, which satisfies that \[ \clusterfont{Q}(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))= \clusterfont{Q}(a,b,c). 
\] \end{definition} \begin{figure}[ht] \subfigure[Tripod quiver $\clusterfont{Q}(a,b,c)$\label{figure:tripod quiver}]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \node[Dnode,ynode] (O) at (0,0) {}; \node[Dnode,gnode] (A1) at (60:1) {}; \node[Dnode,ynode] (A2) at (60:2) {}; \node[Dnode,gnode] (A3) at (60:3) {}; \node[Dnode] (An) at (60:4) {}; \node[Dnode,gnode] (B1) at (180:1) {}; \node[Dnode,ynode] (B2) at (180:2) {}; \node[Dnode,gnode] (B3) at (180:3) {}; \node[Dnode] (Bn) at (180:4) {}; \node[Dnode,gnode] (C1) at (-60:1) {}; \node[Dnode,ynode] (C2) at (-60:2) {}; \node[Dnode,gnode] (C3) at (-60:3) {}; \node[Dnode] (Cn) at (-60:4) {}; \draw[<-] (O)--(A1); \draw[<-] (A2) --(A1); \draw[<-] (A2)--(A3); \draw[dotted] (A3) -- (An); \draw[<-] (O)--(B1); \draw[<-] (B2)--(B1); \draw[<-] (B2)--(B3); \draw[dotted] (B3)--(Bn); \draw[<-] (O)--(C1); \draw[<-] (C2)--(C1); \draw[<-] (C2)--(C3); \draw[dotted] (C3)--(Cn); \draw (60:2.5) ++ (150:0.75) node[rotate=60]{$\overbrace{\hphantom{\hspace{1.8cm}}}^{a-1}$}; \draw (180:2.5) ++ (90:0.75) node[rotate=0]{$\overbrace{\hphantom{\hspace{1.8cm}}}^{b-1}$}; \draw (300:2.5) ++ (30:0.75) node[rotate=-60]{$\overbrace{\hphantom{\hspace{1.8cm}}}^{c-1}$}; \end{tikzpicture} } \subfigure[$(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))$\label{figure:tripod N-graph}]{ \begin{tikzpicture}[baseline=-.5ex,xscale=0.6,yscale=0.6] \useasboundingbox(-3.5,-3.5)rectangle(3.5,3.5); \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (50:1.5) -- (70:1.75) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \draw[red, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (100:3) (60:1) -- (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (220:3) (180:1) -- (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[blue, thick, dashed] (170:2) -- (180:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (340:3) (300:1) -- (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[blue, thick, dashed] (290:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \curlybrace[]{10}{110}{3.2}; \draw (300:3.5) node[rotate=30] {$c+1$}; \curlybrace[]{130}{230}{3.2}; \draw (180:3.5) node[rotate=90] {$b+1$}; \curlybrace[]{250}{350}{3.2}; \draw (60:3.5) node[rotate=-30] {$a+1$}; \end{tikzpicture} } \subfigure[$\overline{(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))}$\label{figure:switched tripod N-graph}]{ \begin{tikzpicture}[baseline=-.5ex,xscale=0.6,yscale=0.6] \useasboundingbox(-3.5,-3.5)rectangle(3.5,3.5); \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (50:1.5) -- (70:1.75) (170:1.5) -- (190:1.75) (290:1.5) -- 
(310:1.75); \draw[blue, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[red, thick, fill] (0,0) -- (60:1) circle (2pt) -- (100:3) (60:1) -- (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[red, thick, dashed] (50:2) -- (60:3); \draw[red, thick, fill] (0,0) -- (180:1) circle (2pt) -- (220:3) (180:1) -- (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[red, thick, dashed] (170:2) -- (180:3); \draw[red, thick, fill] (0,0) -- (300:1) circle (2pt) -- (340:3) (300:1) -- (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[red, thick, dashed] (290:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \curlybrace[]{10}{110}{3.2}; \draw (-60:3.5) node[rotate=30] {$c+1$}; \curlybrace[]{130}{230}{3.2}; \draw (180:3.5) node[rotate=90] {$b+1$}; \curlybrace[]{250}{350}{3.2}; \draw (-300:3.5) node[rotate=-30] {$a+1$}; \end{tikzpicture} } \caption{Tripod quiver and $N$-graphs with chosen cycles} \label{figure:tripod} \end{figure}
Moreover, the above two $N$-graphs are essentially equivalent as follows; the proof will be given in Appendix~\ref{appendix:Ngraph of type abc}. \begin{lemma} The $N$-graphs $\ngraphfont{G}^{\mathsf{brick}}(a,b,c)$ and $\ngraphfont{G}(a,b,c)$ are equivalent up to $\partial$-Legendrian isotopy and Legendrian mutations. \end{lemma} The $N$-graph $\ngraphfont{G}(a,b,c)$ is free by Lemma~\ref{lemma:tree Ngraphs are free} and is never invariant under the conjugation, which acts on the Legendrian $\lambda(a,b,c)$ by interchanging $\sigma_1$ and $\sigma_2$, so that $\overline{\lambda(a,b,c)}$ is the closure of $\overline{\beta(a,b,c)}$ \[ \overline{\beta(a,b,c)} = \sigma_1\sigma_2^{a+1}\sigma_1\sigma_2^{b+1}\sigma_1\sigma_2^{c+1}. \] The conjugation $\overline{(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))}$ is depicted in Figure~\ref{figure:switched tripod N-graph}. We have the following obvious observation: \begin{lemma} For each $a\ge 1$, the $N$-graph $\ngraphfont{G}(a,a,a)$ is invariant under $2\pi/3$-rotation. \end{lemma} On the other hand, if one of $a,b,c$ is $1$, then the quiver $\clusterfont{Q}(a,b,c)$ is of type $\dynkinfont{A}_n$. Indeed, as seen in Example~\ref{example:stabilization of An}, the Legendrian link $\lambda(1,b,c)$ is a stabilization of $\lambda(\dynkinfont{A}_n)$ for $n=b+c-1$, and the $N$-graph $\ngraphfont{G}(1,b,c)$ is a stabilization of $\ngraphfont{G}(\dynkinfont{A}_n)$. See Appendix~\ref{appendix:tripod with a=1 is of type An} for the proof. \begin{lemma}\label{lemma:stabilized An} The $N$-graph $\ngraphfont{G}(1,b,c)$ is a stabilization of $\ngraphfont{G}(\dynkinfont{A}_n)$ for $n=b+c-1$. \end{lemma} One consequence of this lemma is that the two $N$-graphs $\ngraphfont{G}(\dynkinfont{A}_n)$ and $\ngraphfont{G}(1,b,c)$ with $n=b+c-1$ will generate bijective sets of $N$-graphs under mutations, as seen in Remarks~\ref{remark:boundary-Legendrian isotopy} and \ref{remark:Stabilization}, where the bijection preserves mutations. Notice that the quivers $\clusterfont{Q}(a,b,c)$ together with $\clusterfont{Q}(\dynkinfont{A}_n)$ cover all quivers of finite type and some quivers of affine type. Indeed, for $1\le a\le b\le c$ and $n=a+b+c-2$, the quivers $\clusterfont{Q}(1,b,c)$ and $\clusterfont{Q}(\dynkinfont{A}_n)$ are of type $\dynkinfont{A}_n$, and the quivers $\clusterfont{Q}(2,2,n-2)$ and $\clusterfont{Q}(2,3,n-3)$ are of type $\dynkinfont{D}_n$ and $\dynkinfont{E}_n$, respectively. Moreover, $\clusterfont{Q}(3,3,3)$, $\clusterfont{Q}(2,4,4)$ and $\clusterfont{Q}(2,3,6)$ are of type $\widetilde{\dynE}_6, \widetilde{\dynE}_7$ and $\widetilde{\dynE}_8$, respectively.
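These statements follow from the well-known criterion for the tree underlying $\clusterfont{Q}(a,b,c)$: the type is finite if $1/a+1/b+1/c>1$ and affine if equality holds. The following Python sketch (our own illustration) reproduces the classification above from this criterion.
\begin{verbatim}
from fractions import Fraction

def tripod_type(a, b, c):
    """Dynkin type of the tripod quiver Q(a,b,c) with a+b+c-2 vertices,
    via the arm-length criterion for the underlying tree."""
    s = Fraction(1, a) + Fraction(1, b) + Fraction(1, c)
    n = a + b + c - 2
    a, b, c = sorted((a, b, c))
    if s > 1:
        if a == 1:
            return f'A{n}'
        return f'D{n}' if (a, b) == (2, 2) else f'E{n}'
    return f'affine E{n - 1}' if s == 1 else 'neither finite nor affine E'

for abc in [(2, 2, 4), (2, 3, 3), (2, 3, 4), (2, 3, 5),
            (3, 3, 3), (2, 4, 4), (2, 3, 6)]:
    print(abc, tripod_type(*abc))
# (2,2,4) D6, (2,3,3) E6, ..., (3,3,3) affine E6, ..., (2,3,6) affine E8
\end{verbatim}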
Hence, as seen in Table~\ref{table:short notations}, we denote the braids, Legendrians, quivers, and $N$-graphs with cycles by $\beta(\dynkinfont{Z})$, $\lambda(\dynkinfont{Z})$, $\clusterfont{Q}(\dynkinfont{Z})$ and $(\ngraphfont{G}(\dynkinfont{Z}),\ngraphfont{B}(\dynkinfont{Z}))$ for $\dynkinfont{Z}=\dynkinfont{D}_n, \dynkinfont{E}_n$ or $\widetilde{\dynE}_n$, respectively. \begin{table}[ht] \[ \renewcommand\arraystretch{1.5} \begin{array}{c||c|c|c|c} \toprule \dynkinfont{Z} & \beta(\dynkinfont{Z}) & \lambda(\dynkinfont{Z}) & \clusterfont{Q}(\dynkinfont{Z}) & (\ngraphfont{G}(\dynkinfont{Z}), \ngraphfont{B}(\dynkinfont{Z}))\\ \midrule \dynkinfont{D}_n, n\ge 4 & \beta(n-2,2,2) & \lambda(n-2,2,2) & \clusterfont{Q}(n-2,2,2) & (\ngraphfont{G}(n-2,2,2),\ngraphfont{B}(n-2,2,2))\\ \hline \dynkinfont{E}_6 & \beta(2,3,3) & \lambda(2,3,3) & \clusterfont{Q}(2,3,3) & (\ngraphfont{G}(2,3,3),\ngraphfont{B}(2,3,3))\\ \hline \dynkinfont{E}_7 & \beta(2,3,4) & \lambda(2,3,4) & \clusterfont{Q}(2,3,4) & (\ngraphfont{G}(2,3,4),\ngraphfont{B}(2,3,4))\\ \hline \dynkinfont{E}_8 & \beta(2,3,5) & \lambda(2,3,5) & \clusterfont{Q}(2,3,5) & (\ngraphfont{G}(2,3,5),\ngraphfont{B}(2,3,5))\\ \hline \widetilde{\dynE}_6 & \beta(3,3,3) & \lambda(3,3,3) & \clusterfont{Q}(3,3,3) & (\ngraphfont{G}(3,3,3),\ngraphfont{B}(3,3,3))\\ \hline \widetilde{\dynE}_7 & \beta(2,4,4) & \lambda(2,4,4) & \clusterfont{Q}(2,4,4) & (\ngraphfont{G}(2,4,4),\ngraphfont{B}(2,4,4))\\ \hline \widetilde{\dynE}_8 & \beta(2,3,6) & \lambda(2,3,6) & \clusterfont{Q}(2,3,6) & (\ngraphfont{G}(2,3,6),\ngraphfont{B}(2,3,6))\\ \bottomrule \end{array} \] \caption{Braids, Legendrians, quivers, and $N$-graphs of type $\dynkinfont{D}\dynkinfont{E}$ and $\widetilde{\dynE}$} \label{table:short notations} \end{table} \subsubsection{Degenerate $N$-graphs of type $\dynkinfont{D}_{n+1}, \dynkinfont{E}_6, \widetilde{\dynE}_6$ and $\widetilde{\dynE}_7$} For $a, b\ge 1$, we define positive braids $\tilde\beta_0(a,b,b)$ and $\tilde\beta(a,b,b)$ as follows: \begin{align*} \tilde\beta_0(a,b,b)&\colonequals \sigma_2^a\sigma_{1,3}^{b-1}\sigma_2\sigma_{1,3} \mathrel{\dot{=}}\sigma_1\sigma_2^a \sigma_1^{b-1} \sigma_3^{b-1}\sigma_2\sigma_3 =\sigma_1\sigma_2^a \sigma_1^{b-1}\sigma_2{\color{blue}\sigma_3} \sigma_2^{b-1} =S(\beta_0(a,b,b))\\ \tilde\beta(a,b,b)&\colonequals \sigma_2^{a+1}\sigma_{1,3}\sigma_2\sigma_{1,3}^{b}\sigma_2^2\sigma_{1,3}\sigma_2\sigma_{1,3}^2\\ &=\sigma_2^a(\sigma_2 \sigma_{1,3} \sigma_2) \sigma_{1,3}^b (\sigma_2 \sigma_{1,3})^3 \\ &=\sigma_2^a \sigma_{1,3}^{b-1} \sigma_2 \sigma_{1,3} (\sigma_2 \sigma_{1,3})^2 (\sigma_2 \sigma_{1,3})^2 \\ &\mathrel{\dot{=}}\Delta_4\tilde\beta_0(a,b,b)\Delta_4. \end{align*} Let $\tilde\lambda(a,b,b)$ be the closure of $\tilde\beta(a,b,b)$. Then since both $\tilde\beta_0(a,b,b)$ and $\tilde\beta(a,b,b)$ are invariant under the conjugation, so is $\tilde\lambda(a,b,b)$.
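The braid word identities above can be sanity-checked by computer. The following Python sketch is our own illustration, writing $\sigma_{1,3}=\sigma_1\sigma_3$; it verifies only necessary conditions, namely that the word lengths agree, that the two sides of the genuine equality have the same image in the symmetric group $S_4$, and that the two sides of $\mathrel{\dot{=}}$ have conjugate images. Of course this is far from a proof of braid equivalence.
\begin{verbatim}
def perm(word, n=4):
    """Image of a positive braid word in S_n; the letter i stands for
    sigma_i and acts by swapping the entries at positions i-1 and i."""
    p = list(range(n))
    for i in word:
        p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def cycle_type(p):
    seen, ct = set(), []
    for i in range(len(p)):
        c = 0
        while i not in seen:
            seen.add(i); i = p[i]; c += 1
        if c:
            ct.append(c)
    return tuple(sorted(ct))

a, b = 2, 3
s13 = [1, 3]                                   # sigma_{1,3}
lhs = [2] * a + s13 * (b - 1) + [2] + s13      # tilde beta_0(a,b,b)
mid = [1] + [2] * a + [1] * (b - 1) + [3] * (b - 1) + [2, 3]
rhs = [1] + [2] * a + [1] * (b - 1) + [2, 3] + [2] * (b - 1)

assert len(lhs) == len(mid) == len(rhs)        # lengths must agree
assert perm(mid) == perm(rhs)                  # the genuine equality
assert cycle_type(perm(lhs)) == cycle_type(perm(rhs))  # cyclic moves
\end{verbatim}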
The quiver $\clusterfont{Q}^{\mathsf{brick}}(\tilde\beta_0(a,b,b))$ and the $N$-graph $\tilde\ngraphfont{G}^{\mathsf{brick}}(a,b,b)$ look as follows: \begin{align*} \clusterfont{Q}^{\mathsf{brick}}(\tilde\beta_0(a,b,b))&=\begin{tikzpicture}[baseline=-.5ex,xscale=1.2, yscale=1.2] \draw[gray] (-5,0.75) -- ++(7.5,0) (-5,0.25) -- ++(7.5,0); \draw[gray] (-5,-0.75) -- ++(7.5,0) (-5,-0.25) -- ++(7.5,0); \draw[gray] (-1.5,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5) (-1,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5) (-0.5,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5) (0.5,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5) (1,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5) (2,0.75) node[above] {$\sigma_3$} -- ++(0,-0.5); \draw[gray] (-1.5,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5) (-1,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5) (-0.5,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5) (0.5,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5) (1,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5) (2,-0.75) node[below] {$\sigma_1$} -- ++(0,0.5); \draw[gray] (-4.5,0.25) node[above] {$\sigma_2$} -- ++(0,-0.5) (-4,0.25) node[above] {$\sigma_2$} -- ++(0,-0.5) (-3.5,0.25) node[above] {$\sigma_2$} -- ++(0,-0.5) (-2.5,0.25) node[above] {$\sigma_2$} -- ++(0,-0.5) (-2,0.25) node[above] {$\sigma_2$} -- ++(0,-0.5) (1.5,0.25) -- ++(0,-0.5); \node[Dnode] (A1) at (-4.25, 0) {}; \node[Dnode] (A2) at (-3.75, 0) {}; \node (Adots) at (-3, 0) {$\cdots$}; \node[Dnode] (A3) at (-2.25, 0) {}; \node[Dnode] (A4) at (0.75, 0) {}; \node[Dnode] (B1) at (-1.25,0.5) {}; \node[Dnode] (C1) at (-1.25,-0.5) {}; \node[Dnode] (B2) at (-.75,0.5) {}; \node[Dnode] (C2) at (-.75,-0.5) {}; \node (Bdots) at (0,0.5) {$\cdots$}; \node (Cdots) at (0,-0.5) {$\cdots$}; \node[Dnode] (B3) at (0.75,0.5) {}; \node[Dnode] (C3) at (0.75,-0.5) {}; \node[Dnode] (B4) at (1.5,0.5) {}; \node[Dnode] (C4) at (1.5,-0.5) {}; \draw[->] (A1) -- (A2); \draw[->] (A2) -- (Adots); \draw[->] (Adots) -- (A3); \draw[->] (A3) -- (A4); \draw[->] (B1) -- (B2); \draw[->] (C1) -- (C2); \draw[->] (B2) -- (Bdots); \draw[->] (C2) -- (Cdots); \draw[->] (Bdots) -- (B3); \draw[->] (Cdots) -- (C3); \draw[->] (B3) -- (B4); \draw[->] (C3) -- (C4); \draw[->] (B4) -- (A4); \draw[->] (C4) -- (A4); \end{tikzpicture} \\ \tilde\ngraphfont{G}^{\mathsf{brick}}(a,b,b)&= \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw[rounded corners=5,thick] (-7, -2.5) rectangle (7, 2.5); \draw (-4.5, -2.5) node[below] {$\underbrace{\hphantom{\hspace{2.4cm}}}_{a}$}; \draw (0.5, -2.5) node[below] {$\underbrace{\hphantom{\hspace{2.4cm}}}_{b-1}$}; \clip[rounded corners=5] (-7, -2.5) rectangle (7, 2.5); \draw[thick, red, rounded corners] (-7, 0.5) -- ++(4, 0) -- ++(2, -1) -- ++(3, 0) -- ++(2, 1) -- ++(2, -1) -- ++(1,0); \draw[thick, red, rounded corners] (-7, -1.5) -- ++(4, 0) ++(0,0) -- ++(2, 3) -- ++(3,0) -- ++(2,-3) ++(0,0) -- ++(2,3) -- ++(1,0); \draw[thick, red, fill] (-7, -1.5) ++(1,0) circle (2pt) -- +(0,-1) ++(1,0) circle (2pt) -- +(0,-1) ++(2,0) circle (2pt) -- +(0,-1) ++(7,0) circle (2pt) -- +(0,-1) ; \draw (-7, -2) ++(3,0) node {$\cdots$}; \draw[Dble={green and blue},line width=2, line cap=round] (-7,-0.5) -- ++(4,0); \draw[Dble={green and blue},line width=2, line cap=round] (-3,-0.5) -- ++(1,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (-2,0) -- (-1,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (-1,0.5) -- ++(3,0); \draw[Dble={green and blue},line width=2, line cap=round] (3,0) -- ++(1,-0.5); \draw[Dble={blue and green},line width=2, line cap=round] (2,0.5) -- ++(1,-0.5); \draw[Dble={green
and blue},line width=2, line cap=round] (3,0) -- ++(1,-0.5); \draw[Dble={green and blue},line width=2, line cap=round] (4,-0.5) -- ++(1,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (5,0) -- ++(1,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (6,0.5) -- ++(1,0); \draw[Dble={blue and green},line width=2, line cap=round] (-7,1.5) -- ++(4,0); \draw[Dble={blue and green},line width=2, line cap=round] (-3,1.5) -- ++(1,-1.5); \draw[Dble={green and blue},line width=2] (-2,0) -- ++(1,-1.5); \draw[Dble={green and blue},line width=2, line cap=round] (-1,-1.5) -- ++(3,0); \draw[Dble={green and blue},line width=2] (-1,-1.5) -- ++(0,-1); \draw[Dble={green and blue},line width=2] (0,-1.5) -- ++(0,-1); \draw[Dble={green and blue},line width=2] (2,-1.5) -- ++(0,-1); \draw[Dble={green and blue},line width=2] (2,-1.5) -- ++(1,1.5); \draw[Dble={blue and green},line width=2, line cap=round] (3,0) -- ++(1,1.5); \draw[Dble={blue and green},line width=2, line cap=round] (4,1.5) -- ++(1,-1.5); \draw[Dble={green and blue},line width=2] (5,0) -- ++(1,-1.5); \draw[Dble={green and blue},line width=2] (6,-1.5) -- ++(1,0); \draw[Dble={green and blue},line width=2, line cap=round] (6,-1.5) -- ++(0,-1); \draw (1, -2) node {$\cdots$}; \end{tikzpicture}\\ &\stackrel{\rm perturb}{\rightarrow} \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw[rounded corners=5,thick] (-7, -2.5) rectangle (7, 2.5); \clip[rounded corners=5] (-7, -2.5) rectangle (7, 2.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5, rounded corners] (-6,-1.5) -- (-5,-1.5) (-3.5,-1.5)--(-3,-1.5) (-1,-1.5)--(0,-1.5) (2,-1.5)--(2.5,-0.5)--(2.5,0.5)--(4,1.5)--(5.5,0.5) -- (5.5,-0.5) -- (6,-1.5) (-1.5,-2)--(-0.5,-2) (1.5,-2)--(2.5,-2)--(3.5,-0.5)--(3.5,0.5)--(4,1)--(4.5,0.5)--(4.5,-0.5)--(5.5,-2); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5, rounded corners] (-5,-1.5)--(-4.5,-1.5) (0,-1.5)--(0.5,-1.5) (1.5,-1.5)--(2,-1.5) (-0.5,-2)--(0,-2) (1,-2)--(1.5,-2) (-3,-1.5)--(-2.5,-0.5)--(-1.5,0.5)--(-1,1.5)--(2,1.5)--(2.5,0.5)--(3.5,-0.5)--(4,-1.5); \draw[thick, blue, rounded corners] (-7,2) -- (-2.5,2) -- (-1.5,0.5) (2.5, 0.5) -- (4,1.5) -- (5.5,0.5) (1.5,-2)--(2.5,-2)--(3.5,-0.5); \draw[thick, blue] (-7,-0.5) -- (-2.5,-0.5) -- (-1.5,-2) --(1.5,-2) (3.5,-0.5) -- (4.5,-0.5) -- (5.5,-2) -- (7,-2) (-2.5,-0.5) -- (-1.5,0.5) -- (2.5,0.5) -- (3.5,-0.5) (4.5,-0.5) -- (5.5,0.5) -- (7,0.5) (-1.5,-2) -- (-1.5,-2.5) (-0.5,-2) -- (-0.5,-2.5) (1.5,-2) -- (1.5,-2.5) (5.5,-2) -- (5.5,-2.5) ; \draw[thick, red, rounded corners] (-1.5,0.5) -- (-1,1.5) -- (2,1.5) -- (2.5,0.5) (5.5,0.5)-- (6,1.5) -- (7,1.5) (-1.5,-0.5) -- (-1,-1) -- (2,-1) -- (2.5,-0.5) (5.5,-0.5)-- (6,-1) -- (7,-1); \draw[thick, red] (-7,0.5) -- (-1.5,0.5) (2.5,0.5) -- (5.5,0.5) (-7,-1.5) -- (-3,-1.5) -- (-2.5,-0.5) -- (-1.5,-0.5) (2.5,-0.5) -- (3.5,-0.5) -- (4,-1.5) -- (4.5,-0.5) -- (5.5,-0.5) (-6,-1.5) -- (-6,-2.5) (-5,-1.5) -- (-5,-2.5)(-3,-1.5) -- (-3,-2.5) (-2.5,0.5)--(-2.5,-0.5) (-1.5,0.5)-- (-1.5,-0.5) (2.5,0.5)--(2.5,-0.5) (3.5,0.5)-- (3.5,-0.5) (4.5,0.5)--(4.5,-0.5) (5.5,0.5)-- (5.5,-0.5) (4,-1.5) -- (4,-2.5); \draw[thick, green, rounded corners] (-7,1.5) -- (-3,1.5) -- (-2.5,0.5) (-7,0)-- (-3,0) -- (-2.5,0.5) (3.5,0.5)--(4,1)--(4.5,0.5); \draw[thick, green] (-2.5,0.5)--(-1.5,-0.5) -- (2.5,-0.5) -- (3.5,0.5) (4.5,0.5)--(5.5,-0.5)--(7,-0.5) (-1.5,-0.5) -- (-1,-1.5) -- (2,-1.5) -- (2.5,-0.5) (5.5,-0.5)--(6,-1.5) --(7,-1.5) (-1,-1.5)--(-1,-2.5) (0,-1.5)--(0,-2.5) (2,-1.5)--(2,-2.5) (6,-1.5)--(6,-2.5); \draw[thick, blue, fill] (-1.5,-2) 
circle (2pt) (-0.5,-2) circle (2pt) (1.5,-2) circle (2pt) (5.5,-2) circle (2pt) ; \draw[thick, red, fill] (-6,-1.5) circle (2pt) (-5,-1.5) circle (2pt) (-3,-1.5) circle (2pt) (4,-1.5) circle (2pt) ; \draw[thick, green, fill] (-1,-1.5) circle (2pt) (0,-1.5) circle (2pt) (2,-1.5) circle (2pt) (6,-1.5) circle (2pt) ; \draw[fill=white, thick] (-2.5,0.5) circle (2pt) (-1.5,0.5) circle (2pt) (2.5,0.5) circle (2pt) (3.5,0.5) circle (2pt) (4.5,0.5) circle (2pt) (5.5,0.5) circle (2pt) (-2.5,-0.5) circle (2pt) (-1.5,-0.5) circle (2pt) (2.5,-0.5) circle (2pt) (3.5,-0.5) circle (2pt) (4.5,-0.5) circle (2pt) (5.5,-0.5) circle (2pt); \end{tikzpicture} \end{align*} \begin{definition}[Degenerate $4$-graphs for $\tilde\lambda(a,b,b)$] We define a degenerate $4$-graph $\tilde\ngraphfont{G}(a,b,b)$ for $\tilde\lambda(a,b,b)$ as depicted in the left of Figure~\ref{figure:degenerated 4-graph}. \end{definition} \begin{figure}[ht] \[ \begin{tikzcd}[ampersand replacement=\&] \tilde\ngraphfont{G}(a,b,b)\colonequals \begin{tikzpicture}[baseline=-.5ex, scale=0.8] \draw (0,0) circle (3); \curlybrace[]{200}{250}{3.2}; \draw (225:3.5) node[rotate=-45] {$a+1$}; \curlybrace[]{-30}{30}{3.2}; \draw (0:3.5) node[rotate=-90] {$b$}; \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.25,0); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.25,0) ++(45:0.5) -- ++(2,-2); \draw[Dble={green and blue},line width=2,dashed] (1.25,0) ++(45:0.5) ++(-45:0.5) -- ++(2,2); \end{tikzpicture}\arrow[r,"\text{Perturb.}"] \& \begin{tikzpicture}[baseline=-.5ex, scale=0.8] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (-2.5,0.25) -- (1.25, 0.25) (-2.5,-0.25) -- (1.25, -0.25) (1.25, 0.25) ++(45:0.5) -- ++(-45:0.5) (1.25, -0.25) ++(45:0.5) -- ++(-45:0.5) (-135:1.5) -- ++(-0.5,0) (-135:1.5) ++(-0.5,-0.5) -- ++(-0.5,0) ; \draw[cyclecolor2, opacity=0.5, line width=5, line cap=round] (1.25, 0.25) -- ++(45:0.5) (1.25, -0.25) -- ++(45:0.5) (45:2.5) -- (-135:1.5) (-135:1.5) ++(-0.5,0) -- ++(0,-0.5) ; \draw[fill, red, thick] (3,-3) -- (0.25,-0.25) (-0.25,0.25) -- (-3,3) (0.25,0.25) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (-0.25,-0.25) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick] (-0.25,-0.25) rectangle (0.25,0.25); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-2); \draw[green, thick, fill] (-2.5,-0.25) circle (2pt) -- ++(-1,-1); \draw[green, thick] (-2.5,-0.25) -- ++(-1,1); \draw[green, thick] (-2.5,-0.25) -- (-0.25,-0.25) -- ++(0,-3); 
\draw[blue, thick, fill] (-2.5,0.25) circle (2pt) -- ++(-1,-1); \draw[blue, thick] (-2.5,0.25) -- ++(-1,1); \draw[blue, thick] (-2.5,0.25) -- (-0.25,0.25) -- ++(0,3); \draw[green, thick] (0.25, 0.25) -- (-0.25, -0.25); \draw[green, thick, fill] (0.25,0.25) -- ++(0,3); \draw[green, thick, fill] (0.25,0.25) -- ++(1,0); \draw[green, thick, fill] (1.25,0.25) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (1.25,0.25) -- ++(2,2); \draw[green, thick, fill] (1.25,0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[green, thick, fill, dashed] (1.25,0.25) ++(45:0.5) ++(-45:0.5) circle (2pt) -- ++(2,2); \draw[blue, thick] (-0.25, 0.25) -- (0.25, -0.25); \draw[blue, thick, fill] (0.25,-0.25) -- ++(1,0); \draw[blue, thick, fill] (0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (1.25,-0.25) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill] (1.25,-0.25) -- ++(2,2); \draw[blue, thick, fill] (1.25,-0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill, dashed] (1.25,-0.25) ++(45:0.5) ++(-45:0.5) circle (2pt) -- ++(2,2); \draw[fill=white, thick] (0.25, 0.25) circle (2pt) (0.25, -0.25) circle (2pt) (-0.25, -0.25) circle (2pt) (-0.25, 0.25) circle (2pt); \end{tikzpicture} \end{tikzcd} \] \caption{Degenerate $4$-graphs $\tilde\ngraphfont{G}(a,b,b)$ and cycles in the perturbation} \label{figure:degenerated 4-graph} \end{figure} \begin{lemma}\label{lemma:tripod to degenerated Ngraph} The $N$-graph $\tilde\ngraphfont{G}(a,b,b)$ is equivalent to $\tilde\ngraphfont{G}^{\mathsf{brick}}(a,b,b)$ up to $\partial$-Legendrian isotopy and Legendrian mutations. \end{lemma} \begin{proof} It is straightforward to check that we obtain the following degenerate $N$-graph from $\tilde\ngraphfont{G}^{\mathsf{brick}}(a,b,b)$ by applying a sequence of Move \Move{DI}. \[ \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \draw[rounded corners=5,thick] (-7, -2.5) rectangle (7, 2.5); \clip[rounded corners=5] (-7, -2.5) rectangle (7, 2.5); \draw[red, thick, rounded corners] (-7,0.5) -- (-3.5,0.5) (-2.5,0.5)--(-2,0.5) (-7,-1.5)--(-6,-0.5)--(-6,2)--(-3.5,2) (-2.5,2)--(3,2)--(3,0.5) (-2,0.5)--(-1,-0)--(2,0)--(4,1)--(5,0.5) (5,0.5)--(5.5,1)--(6,0.5) (5,0.5)--(5.5,0)--(6,0.5) (-6,-2.5)--(-5,-1.5)--(-5,2) (-5,-2.5)--(-4,-1.5)--(-4,2) (-3,-2.5)--(-2,-1.5)--(-2,2); \draw[red, thick, dashed] (-3.5,0.5)--(-2.5,0.5) (-3.5,2)--(-2.5,2); \draw[red, thick] (3,0.5)--(4,-1.5)--(5,0.5) (7,1)--(6,0.5)--(7,-0.5) (4,-1.5)--(4,-2.5); \draw[Dble={blue and green},line width=2] (5,0.5)--(5.5,0.5); \draw[Dble={green and blue},line width=2] (5.5,0.5)--(6,0.5); \draw[Dble={green and blue},line width=2, line cap=round] (-6,0.5)--(-5.5,0); \draw[Dble={blue and green},line width=2, line cap=round] (-6,0.5)--(-5.5,1); \draw[Dble={green and blue},line width=2, line cap=round] (-5,0.5)--(-4.5,0); \draw[Dble={blue and green},line width=2, line cap=round] (-5,0.5)--(-4.5,1); \draw[Dble={green and blue},line width=2, line cap=round] (-4,0.5)--(-3.5,0); \draw[Dble={blue and green},line width=2, line cap=round] (-4,0.5)--(-3.5,1); \draw[Dble={green and blue},line width=2, line cap=round] (-2,0.5)--(-1,-1.5); \draw[Dble={blue and green},line width=2, line cap=round] (-2,0.5)--(-1,1); \draw[Dble={blue and green},line width=2, line cap=round] (3,0.5)--(4,2); \draw[Dble={green and blue},line width=2, line cap=round] (4,0)--(5,0.5); \draw[Dble={green and blue},line width=2] (5,0.5)--(5.5,2); \draw[Dble={green and blue},line width=2] (3,0.5)--(4,0); \draw[Dble={blue and green},line width=2] (-7,1.5)--(-6,0.5); \draw[Dble={green and blue},line width=2] 
(-7,-0.5)--(-6,0.5); \draw[Dble={green and blue},line width=2] (-5.5,0)--(-5,0.5); \draw[Dble={blue and green},line width=2] (-5.5,1)--(-5,0.5); \draw[Dble={green and blue},line width=2] (-4.5,0)--(-4,0.5); \draw[Dble={blue and green},line width=2] (-4.5,1)--(-4,0.5); \draw[Dble={green and blue},line width=2] (-2.5,0)--(-2,0.5); \draw[Dble={blue and green},line width=2] (-2.5,1)--(-2,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (4,2)--(5.5,2); \draw[Dble={blue and green},line width=2] (5.5,2)--(6,0.5); \draw[Dble={blue and green},line width=2] (6,0.5)--(7,0.5); \draw[Dble={green and blue},line width=2, line cap=round] (5,-1.5)--(6,-2.5); \draw[Dble={green and blue},line width=2, line cap=round] (6,-0.5)--(7,-1.5); \draw[Dble={green and blue},line width=2] (5,0.5)--(5,-1.5); \draw[Dble={green and blue},line width=2] (6,0.5)--(6,-0.5); \draw[Dble={blue and green},line width=2, line cap=round] (-1,1)--(2,1); \draw[Dble={green and blue},line width=2] (-1,-1.5)--(0.5,-1.5); \draw[Dble={green and blue},line width=2,dashed] (0.5,-1.5)--(1.5,-1.5); \draw[Dble={green and blue},line width=2] (1.5,-1.5)--(2,-1.5); \draw[Dble={green and blue},line width=2] (2,-1.5)--(3,0.5); \draw[Dble={blue and green},line width=2] (2,1)--(3,0.5); \draw[Dble={blue and green},line width=2] (-1,-2.5)--(-1,-1.5); \draw[Dble={blue and green},line width=2] (0,-2.5)--(0,-1.5); \draw[Dble={blue and green},line width=2] (2,-2.5)--(2,-1.5); \draw[Dble={blue and green},line width=2,dashed] (-3.5,1)--(-2.5,1); \draw[Dble={green and blue},line width=2,dashed] (-3.5,0)--(-2.5,0); \fill[opacity=0.1, rounded corners=5] (-8,-3)--(-1.5,-3)--(-1.5,1.5)--(-6.5,1.5)--(-6.5,3)--(-8,3) (8,-3)--(8,3)--(6,3)--(6,1.5)--(4.5,1.5)--(4.5,-3); \draw[fill=red, thick,red] (-5, 2) circle (2pt) (-4, 2) circle (2pt) (-2, 2) circle (2pt) (4,-1.5) circle (2pt); \end{tikzpicture} \] If we ignore the shaded regions, whose union is tame under perturbation (see \S~\ref{section:annular Ngraphs}), then it is clear that the resulting $N$-graph becomes $\tilde\ngraphfont{G}(a,b,b)$ in Figure~\ref{figure:degenerated 4-graph} after a sequence of Legendrian mutations. \end{proof} \begin{remark} Note that the $4$-graph $\tilde{\ngraphfont{G}}(a,b,b)$ is indeed a stabilization of the tripod $3$-graph $\ngraphfont{G}(a,b,b)$ up to $\partial$-Legendrian isotopy and Legendrian mutations. \end{remark} In particular, when $(a,b,b) = (n-1,2,2), (2,3,3), (3,3,3)$ and $(2,4,4)$, we denote $\tilde\beta_0(a,b,b)$ and $\tilde\beta(a,b,b)$ by $\tilde\beta_0(\dynkinfont{Z})$ and $\tilde\beta(\dynkinfont{Z})$ for $\dynkinfont{Z}=\dynkinfont{D}_{n+1}, \dynkinfont{E}_6, \widetilde{\dynE}_6$ and $\widetilde{\dynE}_7$, respectively. \begin{align*} \tilde\beta_0(\dynkinfont{D}_{n+1})&\colonequals\tilde\beta_0(n-1,2,2),& \tilde\beta_0(\dynkinfont{E}_6)&\colonequals\tilde\beta_0(2,3,3),& \tilde\beta_0(\widetilde{\dynE}_6)&\colonequals\tilde\beta_0(3,3,3),& \tilde\beta_0(\widetilde{\dynE}_7)&\colonequals\tilde\beta_0(2,4,4),\\ \tilde\beta(\dynkinfont{D}_{n+1})&\colonequals\tilde\beta(n-1,2,2),& \tilde\beta(\dynkinfont{E}_6)&\colonequals\tilde\beta(2,3,3),& \tilde\beta(\widetilde{\dynE}_6)&\colonequals\tilde\beta(3,3,3),& \tilde\beta(\widetilde{\dynE}_7)&\colonequals\tilde\beta(2,4,4). \end{align*} Similarly, we denote their closures and $N$-graphs by $\tilde\lambda(\dynkinfont{Z})$ and $\tilde\ngraphfont{G}(\dynkinfont{Z})$.
Namely, \begin{align*} \tilde\lambda(\dynkinfont{D}_{n+1})&\colonequals\tilde\lambda(n-1,2,2),& \tilde\lambda(\dynkinfont{E}_6)&\colonequals\tilde\lambda(2,3,3),& \tilde\lambda(\widetilde{\dynE}_6)&\colonequals\tilde\lambda(3,3,3),& \tilde\lambda(\widetilde{\dynE}_7)&\colonequals\tilde\lambda(2,4,4)\\ \tilde\ngraphfont{G}(\dynkinfont{D}_{n+1})&\colonequals\tilde\ngraphfont{G}(n-1,2,2),& \tilde\ngraphfont{G}(\dynkinfont{E}_6)&\colonequals\tilde\ngraphfont{G}(2,3,3),& \tilde\ngraphfont{G}(\widetilde{\dynE}_6)&\colonequals\tilde\ngraphfont{G}(3,3,3),& \tilde\ngraphfont{G}(\widetilde{\dynE}_7)&\colonequals\tilde\ngraphfont{G}(2,4,4). \end{align*} The degenerate $N$-graphs and the perturbed $N$-graphs with cycles listed above are depicted in Table~\ref{table:degenerated 4-graphs}. \begin{table}[ht] \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c|c|c|c} \toprule $\dynkinfont{Z}$ & $\dynkinfont{D}_{n+1}$ & $\dynkinfont{E}_6$ & $\widetilde{\dynE}_6$ & $\widetilde{\dynE}_7$\\ \midrule $\tilde\ngraphfont{G}(\dynkinfont{Z})$ & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \useasboundingbox (-3, -3.5) rectangle (3, 3.5); \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (2.5,0); \draw[Dble={green and blue},line width=2] (2.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (2.5,0) -- ++(2,2); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-0.5,0) ++(0,-0.5) circle (2pt) -- ++(-1, 0); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] 
(-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-0.5,0) circle (2pt) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.5,0) ++(45:0.5) ++(-45:0.5) -- ++(45:1); \end{tikzpicture}\\ \hline Perturb. & \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \useasboundingbox (-3, -3.5) rectangle (3, 3.5); \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (-2.5,0.25) -- (2.5, 0.25) (-2.5,-0.25) -- (2.5, -0.25) (-135:1.5) -- ++(-0.5,0) (-135:1.5) ++(-0.5,-0.5) -- ++(-0.5,0) ; \draw[cyclecolor2, opacity=0.5, line width=5, line cap=round] (45:2.5) -- (-135:1.5) (-135:1.5) ++(-0.5,0) -- ++(0,-0.5) ; \draw[fill, red, thick] (3,-3) -- (0.25,-0.25) (-0.25,0.25) -- (-3,3) (0.25,0.25) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (-0.25,-0.25) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick] (-0.25,-0.25) rectangle (0.25,0.25); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-2); \draw[green, thick, fill] (-2.5,-0.25) circle (2pt) -- ++(-1,-1); \draw[green, thick] (-2.5,-0.25) -- ++(-1,1); \draw[green, thick] (-2.5,-0.25) -- (-0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (-2.5,0.25) circle (2pt) -- ++(-1,-1); \draw[blue, thick] (-2.5,0.25) -- ++(-1,1); \draw[blue, thick] (-2.5,0.25) -- (-0.25,0.25) -- ++(0,3); \draw[green, thick] (0.25, 0.25) -- (-0.25, -0.25); \draw[green, thick, fill] (0.25,0.25) -- ++(0,3); \draw[green, thick, fill] (0.25,0.25) -- ++(2.25,0); \draw[green, thick, fill] (2.5,0.25) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (2.5,0.25) -- ++(2,2); \draw[blue, thick] (-0.25, 0.25) -- (0.25, -0.25); \draw[blue, thick, fill] (0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (0.25,-0.25) -- ++(2.25,0); \draw[blue, thick, fill] (2.5,-0.25) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill] (2.5,-0.25) -- ++(2,2); \draw[fill=white, thick] (0.25, 0.25) circle (2pt) (0.25, -0.25) circle (2pt) (-0.25, -0.25) circle (2pt) (-0.25, 0.25) circle (2pt); \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); 
\clip (0,0) circle (3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (-2.5,0.25) -- (1.25, 0.25) (-2.5,-0.25) -- (1.25, -0.25) (-135:1.5) -- ++(-0.5,0) ; \draw[cyclecolor2, opacity=0.5, line width=5, line cap=round] (1.25, 0.25) -- ++(45:0.5) (1.25, -0.25) -- ++(45:0.5) (45:2.5) -- (-135:1.5) ; \draw[fill, red, thick] (3,-3) -- (0.25,-0.25) (-0.25,0.25) -- (-3,3) (0.25,0.25) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (-0.25,-0.25) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) ; \draw[red, thick] (-0.25,-0.25) rectangle (0.25,0.25); \draw[green, thick, fill] (-2.5,-0.25) circle (2pt) -- ++(-1,-1); \draw[green, thick] (-2.5,-0.25) -- ++(-1,1); \draw[green, thick] (-2.5,-0.25) -- (-0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (-2.5,0.25) circle (2pt) -- ++(-1,-1); \draw[blue, thick] (-2.5,0.25) -- ++(-1,1); \draw[blue, thick] (-2.5,0.25) -- (-0.25,0.25) -- ++(0,3); \draw[green, thick] (0.25, 0.25) -- (-0.25, -0.25); \draw[green, thick, fill] (0.25,0.25) -- ++(0,3); \draw[green, thick, fill] (0.25,0.25) -- ++(1,0); \draw[green, thick, fill] (1.25,0.25) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (1.25,0.25) -- ++(2,2); \draw[green, thick, fill] (1.25,0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[blue, thick] (-0.25, 0.25) -- (0.25, -0.25); \draw[blue, thick, fill] (0.25,-0.25) -- ++(1,0); \draw[blue, thick, fill] (0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (1.25,-0.25) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill] (1.25,-0.25) -- ++(2,2); \draw[blue, thick, fill] (1.25,-0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[fill=white, thick] (0.25, 0.25) circle (2pt) (0.25, -0.25) circle (2pt) (-0.25, -0.25) circle (2pt) (-0.25, 0.25) circle (2pt); \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (-2.5,0.25) -- (1.25, 0.25) (-2.5,-0.25) -- (1.25, -0.25) (-135:1.5) -- ++(-0.5,0) ; \draw[cyclecolor2, opacity=0.5, line width=5, line cap=round] (1.25, 0.25) -- ++(45:0.5) (1.25, -0.25) -- ++(45:0.5) (45:2.5) -- (-135:1.5) (-135:1.5) ++(-0.5,0) -- ++(0,-0.5) ; \draw[fill, red, thick] (3,-3) -- (0.25,-0.25) (-0.25,0.25) -- (-3,3) (0.25,0.25) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (-0.25,-0.25) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) ; \draw[red, thick] (-0.25,-0.25) rectangle (0.25,0.25); \draw[green, thick, fill] (-2.5,-0.25) circle (2pt) -- ++(-1,-1); \draw[green, thick] (-2.5,-0.25) -- ++(-1,1); \draw[green, thick] (-2.5,-0.25) -- (-0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (-2.5,0.25) circle (2pt) -- ++(-1,-1); \draw[blue, thick] (-2.5,0.25) -- ++(-1,1); \draw[blue, thick] (-2.5,0.25) -- (-0.25,0.25) -- ++(0,3); \draw[green, thick] (0.25, 0.25) -- (-0.25, -0.25); \draw[green, thick, fill] (0.25,0.25) -- ++(0,3); \draw[green, thick, fill] (0.25,0.25) -- ++(1,0); \draw[green, thick, fill] (1.25,0.25) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (1.25,0.25) -- ++(2,2); \draw[green, thick, fill] (1.25,0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[blue, thick] (-0.25, 0.25) -- (0.25, -0.25); \draw[blue, thick, fill] (0.25,-0.25) -- ++(1,0); \draw[blue, thick, fill] (0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (1.25,-0.25) circle (2pt) -- ++(2,-2); \draw[blue, thick, 
fill] (1.25,-0.25) -- ++(2,2); \draw[blue, thick, fill] (1.25,-0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[fill=white, thick] (0.25, 0.25) circle (2pt) (0.25, -0.25) circle (2pt) (-0.25, -0.25) circle (2pt) (-0.25, 0.25) circle (2pt); \end{tikzpicture}& \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[cyclecolor1, opacity=0.5, line width=5, line cap=round] (-2.5,0.25) -- (1.25, 0.25) (-2.5,-0.25) -- (1.25, -0.25) (1.25, 0.25) ++(45:0.5) -- ++(-45:0.5) (1.25, -0.25) ++(45:0.5) -- ++(-45:0.5) (-135:1.5) -- ++(-0.5,0) ; \draw[cyclecolor2, opacity=0.5, line width=5, line cap=round] (1.25, 0.25) -- ++(45:0.5) (1.25, -0.25) -- ++(45:0.5) (45:2.5) -- (-135:1.5) ; \draw[fill, red, thick] (3,-3) -- (0.25,-0.25) (-0.25,0.25) -- (-3,3) (0.25,0.25) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (-0.25,-0.25) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) ; \draw[red, thick] (-0.25,-0.25) rectangle (0.25,0.25); \draw[green, thick, fill] (-2.5,-0.25) circle (2pt) -- ++(-1,-1); \draw[green, thick] (-2.5,-0.25) -- ++(-1,1); \draw[green, thick] (-2.5,-0.25) -- (-0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (-2.5,0.25) circle (2pt) -- ++(-1,-1); \draw[blue, thick] (-2.5,0.25) -- ++(-1,1); \draw[blue, thick] (-2.5,0.25) -- (-0.25,0.25) -- ++(0,3); \draw[green, thick] (0.25, 0.25) -- (-0.25, -0.25); \draw[green, thick, fill] (0.25,0.25) -- ++(0,3); \draw[green, thick, fill] (0.25,0.25) -- ++(1,0); \draw[green, thick, fill] (1.25,0.25) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (1.25,0.25) -- ++(2,2); \draw[green, thick, fill] (1.25,0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[green, thick, fill] (1.25,0.25) ++(45:0.5) ++(-45:0.5) circle (2pt) -- ++(2,2); \draw[blue, thick] (-0.25, 0.25) -- (0.25, -0.25); \draw[blue, thick, fill] (0.25,-0.25) -- ++(1,0); \draw[blue, thick, fill] (0.25,-0.25) -- ++(0,-3); \draw[blue, thick, fill] (1.25,-0.25) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill] (1.25,-0.25) -- ++(2,2); \draw[blue, thick, fill] (1.25,-0.25) ++(45:0.5) circle (2pt) -- ++(2,-2); \draw[blue, thick, fill] (1.25,-0.25) ++(45:0.5) ++(-45:0.5) circle (2pt) -- ++(2,2); \draw[fill=white, thick] (0.25, 0.25) circle (2pt) (0.25, -0.25) circle (2pt) (-0.25, -0.25) circle (2pt) (-0.25, 0.25) circle (2pt); \end{tikzpicture}\\ \bottomrule \end{tabular} \caption{Degenerate $4$-graphs $\tilde\ngraphfont{G}(\dynkinfont{D}_{n+1}), \tilde\ngraphfont{G}(\dynkinfont{E}_6), \tilde\ngraphfont{G}(\widetilde{\dynE}_6)$ and $\tilde\ngraphfont{G}(\widetilde{\dynE}_7)$ and cycles in the perturbations} \label{table:degenerated 4-graphs} \end{table} \begin{remark}\label{remark:degenerated Ngraph of type A} As observed in Lemma~\ref{lemma:stabilized An}, one can work with $\clusterfont{Q}(1,n,n)$ and $\ngraphfont{G}(1,n,n)$ for type $\dynkinfont{A}_{2n-1}$ instead of $\clusterfont{Q}(\dynkinfont{A}_{2n-1})$ and $\ngraphfont{G}(\dynkinfont{A}_{2n-1})$. Therefore, by Lemma~\ref{lemma:tripod to degenerated Ngraph}, we obtain a degenerate $N$-graph~$\tilde\ngraphfont{G}(\dynkinfont{A}_{2n-1})$, which is obviously invariant under the conjugation.
\end{remark} \subsubsection{\texorpdfstring{$N$-graphs}{N-graphs} of type \texorpdfstring{$\widetilde{\dynD}_n$}{affine Dn}} Let us start by defining the Legendrian link $\lambda(\widetilde{\dynD}_n)$ of type $\widetilde{\dynD}_n$ \begin{align*} \lambda({\widetilde{\dynD}}_n)= \begin{tikzpicture}[baseline=0ex,scale=0.8] \draw[thick] (0,0) to[out=0,in=180] (1,-0.5) to[out=0,in=180] (2,-1) to[out=0,in=180] (3,-0.5) to[out=0,in=180] (4,0) to[out=0,in=180] (9,0); \draw[thick] (0,-0.5) to[out=0,in=180] (1,0) to[out=0,in=180] (3,0) to[out=0,in=180] (4,-0.5) (5,-0.5) to[out=0,in=180] (6,-0.5) to[out=0,in=180] (7,-1) to[out=0,in=180] (8,-0.5) to[out=0,in=180] (9,-0.5); \draw[thick] (0,-1) to[out=0,in=180] (1,-1) to[out=0,in=180] (2,-0.5) to[out=0,in=180] (3,-1) to[out=0,in=180] (4,-1) (5,-1) to[out=0,in=180] (6,-1.5) to[out=0,in=180] (8,-1.5) to[out=0,in=180] (9,-1); \draw[thick] (0,-1.5) to[out=0,in=180] (5,-1.5) to[out=0,in=180] (6,-1) to[out=0,in=180] (7,-0.5) to[out=0,in=180] (8,-1) to[out=0,in=180] (9,-1.5); \draw[thick] (4,-0.4) rectangle node {$\scriptstyle{n-4}$} (5, -1.1); \draw[thick] (0,0) to[out=180,in=0] (-0.5,0.25) to[out=0,in=180] (0,0.5) to[out=0,in=180] (9,0.5) to[out=0,in=180] (9.5,0.25) to[out=180,in=0] (9,0); \draw[thick] (0,-0.5) to[out=180,in=0] (-1,0.25) to[out=0,in=180] (0,0.75) to[out=0,in=180] (9,0.75) to[out=0,in=180] (10,0.25) to[out=180,in=0] (9,-0.5); \draw[thick] (0,-1) to[out=180,in=0] (-1.5,0.25) to[out=0,in=180] (0,1) to[out=0,in=180] (9,1) to[out=0,in=180] (10.5,0.25) to[out=180,in=0] (9,-1); \draw[thick] (0,-1.5) to[out=180,in=0] (-2,0.25) to[out=0,in=180] (0,1.25) to[out=0,in=180] (9,1.25) to[out=0,in=180] (11,0.25) to[out=180,in=0] (9,-1.5); \end{tikzpicture} \end{align*} as the rainbow closure of the positive braid $\beta_0(\widetilde{\dynD}_n)$ \[ \beta_0({\widetilde{\dynD}}_n)=\sigma_3\sigma_2\sigma_2\sigma_3 \sigma_2^{n-4} \sigma_1 \sigma_2 \sigma_2 \sigma_1. \] Then the Legendrian link $\lambda({\widetilde{\dynD}}_n)$ admits the brick quiver diagram $\clusterfont{Q}^{\mathsf{brick}}({\widetilde{\dynD}}_n)$. 
\begin{align*} \clusterfont{Q}^{\mathsf{brick}}({\widetilde{\dynD}}_n)&= \begin{tikzpicture}[baseline=-5.5ex] \draw[gray] (0,0) to (8,0) (0,-0.5) to (8,-0.5) (0,-1) to (8,-1) (0,-1.5) to (8,-1.5); \draw[gray] (0.5,0) node[above] {$\sigma_3$} to (0.5,-0.5) (2.5,0) node[above] {$\sigma_3$} to (2.5,-0.5) (1,-0.5) to (1,-1) node[below] {$\sigma_2$} (2,-0.5) to (2,-1)node[below] {$\sigma_2$} (3,-0.5) node[above] {$\sigma_2$} to (3,-1) (5,-0.5) node[above] {$\sigma_2$} to (5,-1) (6,-0.5) node[above] {$\sigma_2$} to (6,-1) (7,-0.5) node[above] {$\sigma_2$} to (7,-1) (5.5,-1) to (5.5,-1.5)node[below] {$\sigma_1$} (7.5,-1) to (7.5,-1.5) node[below] {$\sigma_1$} (4,-0.75) node (dots) {$\cdots$}; \draw[thick,fill] (1.5,-0.25) circle (1pt) node (D1) {} (1.5,-0.75) circle (1pt) node (D2) {} (2.5,-0.75) circle (1pt) node (D3) {} (5.5,-0.75) circle (1pt) node (D4) {} (6.5,-0.75) circle (1pt) node (D5) {} (6.5,-1.25) circle (1pt) node (D6) {} ; \draw (4,-0.5) node[yshift=5ex] {$\overbrace{\hphantom{\hspace{2.5cm}}}^{n-4}$}; \draw[thick,->] (D2) to (D3); \draw[thick,->] (D3) to (dots); \draw[thick,->] (dots) to (D4); \draw[thick,->] (D4) to (D5); \draw[thick,->] (D3) to (D1); \draw[thick,->] (D6) to (D4); \end{tikzpicture} \end{align*} \begin{align*} (\ngraphfont{G}^{\mathsf{brick}}(\widetilde{\dynD}_n),\ngraphfont{B}^{\mathsf{brick}}(\widetilde{\dynD}_n))= \begin{tikzpicture}[baseline=10ex, xscale=0.6, yscale=0.4] \draw[rounded corners=5, thick] (0, 0) rectangle (14, 7); \clip[rounded corners=5] (0, 0) rectangle (14, 7); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (1,5)--(4,5) (10,5)--(13,5) (2,1)--(3,1) (5,1)--(6,1) (8,1)--(9,1) (11,1)--(12,1); \draw[color=cyan, line cap=round, line width=5, opacity=0.5] (3,1)--(5,1) (4,1)--(4,5) (9,1)--(11,1) (10,1)--(10,5) (6,1)--(6.5,1) (7.5,1)--(8,1); \draw[blue,thick,rounded corners] (0,3)--(6,3)--(8,2)--(9,2)--(10,1) (10,1)--(11,2)--(12,2)--(13,1) (13,1)--(13.5,2)--(14,2) (0,5)--(6,5)--(8,6)--(14,6) (1,3)--(1,5) (4,3)--(4,5) (10,0)--(10,1) (13,0)--(13,1); \draw[red,thick,rounded corners] (0,1)--(6.5,1) (7.5,1)--(14,1) (0,4)--(0.5,4)--(1,3) (1,3)--(2,4)--(3,4)--(4,3) (4,3)--(5,4)--(9,4)--(10,3) (10,3)--(11,4)--(12,4)--(13,3) (13,3)--(13.5,4)--(14,4) (2,0)--(2,1) (3,0)--(3,1) (5,0)--(5,1) (6,0)--(6,1) (8,0)--(8,1) (9,0)--(9,1) (11,0)--(11,1) (12,0)--(12,1) (1,1)--(1,3) (4,1)--(4,3) (10,1)--(10,3) (13,1)--(13,3); \draw[red,thick, dashed] (6.5,1)--(7.5,1); \draw[green,thick,rounded corners] (0,2)--(0.5,2)--(1,1) (1,1)--(2,2)--(3,2)--(4,1) (4,1)--(5,2)--(6,2)--(8,3)--(14,3) (0,6)--(6,6)--(8,5)--(14,5) (1,0)--(1,1) (4,0)--(4,1) (10,3)--(10,5) (13,3)--(13,5); \draw[fill, blue, thick] (1,5) circle (2pt) (4,5) circle (2pt); \draw[fill, red, thick] (2,1) circle (2pt) (3,1) circle (2pt) (5,1) circle (2pt) (6,1) circle (2pt) (8,1) circle (2pt) (9,1) circle (2pt) (11,1) circle (2pt) (12,1) circle (2pt); \draw[fill, green, thick] (10,5) circle (2pt) (13,5) circle (2pt); \draw[fill=white, thick] (1,1) circle (2pt) (4,1) circle (2pt) (10,1) circle (2pt) (13,1) circle (2pt) (1,3) circle (2pt) (4,3) circle (2pt) (10,3) circle (2pt) (13,3) circle (2pt); \end{tikzpicture} \end{align*} In $J^1\mathbb{S}^1$, the Legendrian link $\lambda(\widetilde{\dynD}_n)$ is presented by the closure of $\beta(\widetilde{\dynD}_n)$ \begin{align*} \beta(\widetilde{\dynD}_{n})&=\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_1^k\sigma_3\right) \cdot\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_1^\ell\sigma_3\right)\\
&\mathrel{\dot{=}}\sigma_1^2\sigma_2\sigma_1^3\sigma_2\sigma_3\sigma_1^k{\color{blue}\sigma_2\sigma_1}\cdot \sigma_1^2\sigma_2\sigma_1^3\sigma_2\sigma_3\sigma_1^\ell{\color{blue}\sigma_2\sigma_1}\\ &=\sigma_1{\color{blue}\sigma_2\sigma_1\sigma_2}\sigma_1^2\sigma_2\sigma_3{\color{blue}\sigma_2\sigma_1\sigma_2^k}\cdot \sigma_1^2\sigma_2\sigma_1^3\sigma_2{\color{red}\sigma_1^\ell\sigma_3}\sigma_2\sigma_1\\ &=\sigma_1\sigma_2\sigma_1\sigma_2\sigma_1^2{\color{blue}\sigma_3\sigma_2\sigma_3}\sigma_1\sigma_2^k \cdot \sigma_1^2\sigma_2\sigma_1^2{\color{red}\sigma_2^\ell\sigma_1\sigma_2}\sigma_3\sigma_2\sigma_1\\ &=\sigma_1\sigma_2\sigma_1\sigma_2{\color{blue}\sigma_3\sigma_1^2}\sigma_2{\color{blue}\sigma_1\sigma_3}\sigma_2^k \cdot \sigma_1^2\sigma_2\sigma_1^2{\color{red}\sigma_2^\ell\sigma_1\sigma_2}\sigma_3\sigma_2\sigma_1\\ &=\sigma_1\sigma_2\sigma_1\sigma_2\sigma_3{\color{blue}\sigma_2\sigma_1\sigma_2^2}\sigma_3\sigma_2^k \cdot \sigma_1{\color{blue}\sigma_2\sigma_1\sigma_2}\sigma_1\sigma_2^\ell\sigma_1\sigma_2\sigma_3\sigma_2\sigma_1\\ &=\sigma_1\sigma_2\sigma_1{\color{blue}\sigma_3\sigma_2\sigma_3}\sigma_1\sigma_2^2\sigma_3\sigma_2^k \cdot \sigma_1\sigma_2\sigma_1{\color{blue}\sigma_1^\ell\sigma_2\sigma_1}\sigma_1\sigma_2\sigma_3\sigma_2\sigma_1\\ &=\sigma_1\sigma_2\sigma_1\sigma_3\sigma_2{\color{blue}\sigma_1\sigma_3}\sigma_2^2\sigma_3\sigma_2^k \cdot {\color{blue}\sigma_2^\ell\sigma_1\sigma_2}{\color{blue}\sigma_2\sigma_1\sigma_2}\sigma_1\sigma_2\sigma_3\sigma_2\sigma_1\\ &=\Delta_4\beta_0(\widetilde{\dynD}_{n})\Delta_4, \end{align*} where $k=\lfloor \frac{n-3}2\rfloor$ and $\ell=\lfloor \frac{n-4}2\rfloor$. \begin{definition}[$N$-graph of type $\widetilde{\dynD}_{n}$] We define a free $4$-graph $\ngraphfont{G}(\widetilde{\dynD}_{n})$ for $\lambda(\widetilde{\dynD}_n)$ as depicted in Figure~\ref{figure:4-graph of type affine Dn}.
\end{definition} \begin{figure}[ht] \subfigure[$\ngraphfont{G}(\widetilde{\dynD}_{2n}), n\ge 2$] { \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[rounded corners=5, thick] (-6.5, -2.5) rectangle (6.5, 2.5); \draw (0.5, -2.5) node[above] {$\cdots$} (-0.5, -2.5) node[below] {$\underbrace{\hphantom{\hspace{3cm}}}_{n-2}$}; \draw (1.5, 2.5) node[below] {$\cdots$} (0.5, 2.5) node[above] {$\overbrace{\hphantom{\hspace{3cm}}}^{n-2}$}; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) (-3.5, 0) -- (-4.5, -1) (-1.5, 0) -- (-0.5, 0) (0.5, 0) -- (1.5, 0) (3.5, 0) -- (2.5, 0) (3.5, 0) -- (4.5, 1) (3.5, 0) -- (4.5, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-4.5, 1) -- (-5.5, 1) (-4.5, -1) -- (-4.5, -1.75) (-1.5, 0) -- (-2.5, 0) (-0.5, 0) -- (0.5, 0) (1.5, 0) -- (2.5, 0) (4.5, 1) -- (4.5, 1.75) (4.5, -1) -- (5.5, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \draw[thick, green] (-2.5, 2.5) -- (0,0); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (3.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} } \draw[thick, fill=white] (-3.5, 0) circle (2pt) (3.5, 0) circle (2pt); \end{tikzpicture} } \subfigure[$\ngraphfont{G}(\widetilde{\dynD}_{2n+1}), n\ge 2$] { \begin{tikzpicture}[baseline=-.5ex,scale=0.6] \draw[rounded corners=5, thick] (-6, -2.5) rectangle (6, 2.5); \draw (-1, -2.5) node[above] {$\cdots$} (1, -2.5) node[above] {$\cdots$} (0, -2.5) node[below] {$\underbrace{\hphantom{\hspace{3cm}}}_{n-1}$}; \draw (0, 2.5) node[below] {$\cdots$} (0, 2.5) node[above] {$\overbrace{\hphantom{\hspace{1.5cm}}}^{n-2}$}; \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3, 0) -- (-2, 0) (-3, 0) -- (-4, 1) (-3, 0) -- (-4, -1) (-1, 0) -- (-0, 0) (1, 0) -- (2, 0) (4, -1) -- (4, -1.75) (4, 1) -- (5, 1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (0, 0) -- (1, 0) (3, 0) -- (2, 0) (3, 0) -- (4, 1) (3, 0) -- (4, -1) (-4, 1) -- (-5, 1) (-4, -1) -- (-4, -1.75) (-1, 0) -- (-2, 0) ; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[thick, green] (-2.5, 2.5) -- (2.5,-2.5); \foreach \i in {1, -1} { \begin{scope}[xshift=\i*0.5cm, xscale=\i] \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-1.5, 2.5) -- (-1.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (0, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \draw[thick, fill=white] (-3.5, 0) circle (2pt); \end{scope} } \end{tikzpicture} } \caption{$N$-graph of type $\widetilde{\dynD}_n$} \label{figure:4-graph of type affine Dn} \end{figure} \begin{lemma}\label{lemma:Ngraphs of affine Dn} The $N$-graphs $\ngraphfont{G}(\widetilde{\dynD}_n)$ and $\ngraphfont{G}^{\mathsf{brick}}(\widetilde{\dynD}_n)$ are equivalent up to $\partial$-Legendrian isotopy and Legendrian 
mutations. \end{lemma} The pictorial proof of the lemma will be given in Appendix~\ref{appendix:Ngraph of type affine Dn}. As before, the freeness is obvious since $\ngraphfont{G}(\widetilde{\dynD}_n)$ consists of trees. Moreover, $\ngraphfont{G}(\widetilde{\dynD}_{2n})$ has a $\pi$-rotation symmetry. That is, we obtain the following lemma. \begin{lemma} The pair $(\ngraphfont{G}(\widetilde{\dynD}_{2n}), \ngraphfont{B}(\widetilde{\dynD}_{2n}))$ is invariant under the $\pi$-rotation. \end{lemma} \subsubsection{The degenerate $4$-graph of type $\widetilde{\dynD}_4$} The final $N$-graph we introduce is the degenerate $4$-graph $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ of type $\widetilde{\dynD}_4$ defined as follows: \[ \tilde\ngraphfont{G}(\widetilde{\dynD}_4)= \begin{tikzpicture}[baseline=-.5ex, scale=0.8] \draw (0,0) circle (3); \clip (0,0) circle (3); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[fill, red, thick] (3,-3) -- (-3,3) (0,0) -- (45:2) circle (2pt) (45:2) -- ++(0,3) (45:2) -- ++(3,0) (45:2) ++ (0.75,0) circle (2pt) -- ++(0,2) ; \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (2,0); \draw[Dble={green and blue},line width=2] (2,0) -- ++(-45:2); \draw[Dble={green and blue},line width=2] (2,0) -- ++(45:2); \end{scope} } \end{tikzpicture} \] which defines a bipartite quiver of type $\widetilde{\dynD}_4$. The boundary Legendrian link $\tilde\lambda(\widetilde{\dynD}_4)$ is the closure of \begin{align*} \tilde\beta(\widetilde{\dynD}_4)&=\sigma_2\sigma_{1,3}\sigma_2\sigma_{1,3}^2\sigma_2^3\sigma_{1,3}\sigma_2\sigma_{1,3}^2\sigma_2^2\\ &=\Delta_4 \sigma_{1,3} \sigma_2^2 \Delta_4 \sigma_{1,3}\sigma_2^2\\ &=\sigma_3\Delta_4 \sigma_3 \sigma_2^2 \sigma_{1,3}\sigma_2^2 \Delta_4\\ &\mathrel{\dot{=}}\Delta_4 \sigma_3\sigma_2^2 \sigma_3 \sigma_1 \sigma_2^2\Delta_4 \sigma_3\\ &=\Delta_4 \beta_0(\widetilde{\dynD}_4) \Delta_4. \end{align*} \begin{lemma} The $4$-graph $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ is equivalent to the $4$-graph $\ngraphfont{G}(\widetilde{\dynD}_4)$ up to $\partial$-Legendrian isotopy and Legendrian mutations. \end{lemma} See Appendix~\ref{appendix:affine D4} for the proof. \subsubsection{Exchange graphs corresponding to linear or tripod \texorpdfstring{$N$-graphs}{N-graphs}} Notice that the $N$-graphs $\ngraphfont{G}(\dynkinfont{Z})$ and $\ngraphfont{G}^{\mathsf{brick}}(\dynkinfont{Z})$ are deterministic for $\dynkinfont{Z} = \dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD}, \widetilde{\dynE}$. Therefore, the coefficients in $\bfy(\ngraphfont{G}(\dynkinfont{Z}),\ngraphfont{B}(\dynkinfont{Z}))$ are defined in $\bbC[\cM(\lambda(\dynkinfont{Z}))]$. Here, $\cM(\lambda)$ is the moduli space of flags on $\lambda$, which turns out to be a cluster Poisson variety as seen in Remark~\ref{remark:Poisson variety}. On the other hand, one can show that the variables $\{X_a\}_{a\in I_\lambda}$ constructed by Shen--Weng in~\cite[\S 3.2]{SW2019} coincide with the coefficients in the coefficient tuple $\bfy(\ngraphfont{G}^{\mathsf{brick}}(a,b,c),\ngraphfont{B}^{\mathsf{brick}}(a,b,c))$ or $\bfy(\ngraphfont{G}^{\mathsf{brick}}(n),\ngraphfont{B}^{\mathsf{brick}}(n))$. Moreover, these coefficients are algebraically independent. In summary, we have the following corollary, which is a direct consequence of the above discussion, Proposition~\ref{prop_Y-pattern_exchange_graph}, and~\eqref{eq_exchange_graphs_are_the_same}.
\begin{corollary}\label{corollary:algebraic independence} Let $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ be either $(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))$ or $(\ngraphfont{G}(n), \ngraphfont{B}(n))$ of type $\dynkinfont{Z}$, and let $(\bfy_{t_0},\clusterfont{B}_{t_0})=\Psi(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ and $\clusterfont{B}_{t_0}=\clusterfont{B}(\clusterfont{Q}(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0}))$. Then the exchange graph of the $Y$-pattern given by the initial $Y$-seed $(\bfy_{t_0},\clusterfont{B}_{t_0})$ is the same as the exchange graph $\exchange(\Phi)$ of the root system~$\Phi$ of type $\dynkinfont{Z}$. \end{corollary} \subsection{Legendrian Coxeter mutations} For a bipartite quiver $\clusterfont{Q}$, the vertex set decomposes into two subsets $I_+$ and $I_-$ such that every arrow is oriented from $I_+$ to $I_-$. Let $\mu_+$ and $\mu_-$ be the compositions of the mutations at each and every vertex in $I_+$ and in $I_-$, respectively. The Coxeter mutation~$\mu_{\quiver}$ and its inverse $\mu_{\quiver}^{-1}$ are the compositions \begin{align*} \mu_\clusterfont{Q}&=\prod_{i\in I_+}\mu_i \cdot \prod_{i\in I_-} \mu_i,& \mu_\clusterfont{Q}^{-1}&=\prod_{i\in I_-}\mu_i \cdot \prod_{i\in I_+} \mu_i. \end{align*} Note that $\prod_{i\in I_+}\mu_i$ does not depend on the order in which the mutations $\mu_i$, $i\in I_+$, are composed, since no two vertices in $I_+$ are adjacent; the same holds for $I_-$. \begin{remark}\label{rmk_mutation_convention} For any sequence $\mu$ of mutations, we will use the right-to-left convention. Namely, the rightmost mutation will be applied first to the quiver $\clusterfont{Q}$. \end{remark} Similarly, we define the Legendrian Coxeter mutation, which will be denoted by $\mu_\ngraph$, on a bipartite $N$-graph $\ngraphfont{G}$ as follows: \begin{definition}[Legendrian Coxeter mutation] For a bipartite $N$-graph $\ngraphfont{G}$ with decomposed sets of cycles $\ngraphfont{B}=\ngraphfont{B}_+\cup\ngraphfont{B}_-$, we define the \emph{Legendrian Coxeter mutation} $\mu_\ngraph$ and its inverse $\mu_\ngraph^{-1}$ as the compositions of Legendrian mutations \begin{align*} \mu_\ngraphfont{G}&=\prod_{\gamma\in \ngraphfont{B}_+}\mu_\gamma \cdot \prod_{\gamma\in \ngraphfont{B}_-}\mu_\gamma,& \mu_\ngraphfont{G}^{-1}&=\prod_{\gamma\in \ngraphfont{B}_-}\mu_\gamma \cdot \prod_{\gamma\in \ngraphfont{B}_+}\mu_\gamma. \end{align*} \end{definition} It is worth mentioning that each $\mu_\ngraphfont{G}^{\pm1}$ does not depend on the order of mutations if the cycles in each of $\ngraphfont{B}_\pm$ are pairwise disjoint. Since each Legendrian mutation is an involution, this directly implies that $\mu_\ngraphfont{G}^{-1}$ is indeed the inverse of $\mu_\ngraphfont{G}$. Note that all cycles in each of $\ngraphfont{B}_\pm(\dynkinfont{Z})$ for $\dynkinfont{Z} =\dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD}, \widetilde{\dynE}$ are pairwise disjoint, as seen in Figures~\ref{figure:linear N-graph}, \ref{figure:tripod N-graph}, and \ref{figure:4-graph of type affine Dn}.
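Before applying this to $N$-graphs, let us illustrate the Coxeter mutation on the quiver level in the smallest nontrivial case; this is a routine check which we include only for the reader's convenience. For the bipartite quiver of type $\dynkinfont{A}_3$ with $I_+=\{2\}$ and $I_-=\{1,3\}$, the right-to-left convention gives $\mu_\clusterfont{Q}=\mu_2\cdot(\mu_1\mu_3)$, and
\[
(1 \longleftarrow 2 \longrightarrow 3)
\xrightarrow{\;\mu_1\mu_3\;}
(1 \longrightarrow 2 \longleftarrow 3)
\xrightarrow{\;\mu_2\;}
(1 \longleftarrow 2 \longrightarrow 3).
\]
Indeed, every mutation above occurs at a sink, which creates no new arrows and merely reverses the incident ones; hence $\mu_\clusterfont{Q}$ preserves any bipartite quiver itself, while acting nontrivially on the attached $Y$-seeds and cycles.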
\subsubsection{Legendrian Coxeter mutation for linear $N$-graphs} \begin{lemma}\label{lemma:Legendriam Coxeter mutation of type An} The effect of the Legendrian Coxeter mutation on $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ is the clockwise $\frac{2\pi}{n+3}$-rotation, and therefore \[ \mu_\ngraph(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))= \ngraphfont{C}(\dynkinfont{A}_n)(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n)), \] where $\ngraphfont{C}(\dynkinfont{A}_n)$ is the following annular $N$-graph, called the \emph{Coxeter padding} of type $\dynkinfont{A}_n$: \begin{equation}\label{equation:Coxeter padding of type An} \ngraphfont{C}(\dynkinfont{A}_n)= \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {45, 90, ..., 360} { \draw[blue, thick] (\i:5) to[out=\i-180,in=\i+45] (\i+45:3); } \end{tikzpicture} \end{equation} \end{lemma} \begin{proof} We may assume that the Legendrian Coxeter mutation $\mu_\ngraph$ is represented by the sequence \[ \mu_\ngraph=\mu_+\mu_-=(\mu_{\gamma_2}\mu_{\gamma_4}\mu_{\gamma_6}\cdots)(\mu_{\gamma_1}\mu_{\gamma_3}\mu_{\gamma_5}\cdots). \] Then the action of $\mu_\ngraph$ on $\ngraphfont{G}(\dynkinfont{A}_n)$ is as depicted in Figure~\ref{figure:Legendrian Coxeter mutation on An}, which is nothing but the clockwise $\frac{2\pi}{n+3}$-rotation of the original $N$-graph $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ as claimed. The last statement is obvious as seen in Figure~\ref{figure:coxeter padding of type An}. \end{proof} \begin{figure}[ht] \subfigure[Legendrian Coxeter mutation for $\ngraphfont{G}(\dynkinfont{A}_n)$\label{figure:Legendrian Coxeter mutation on An}]{ \begin{tikzcd}[ampersand replacement=\&] \begin{tikzpicture}[baseline=-.5ex,xscale=0.6,yscale=0.6] \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5, 0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \draw (0,-3) node[below] {$(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$}; \end{tikzpicture} \arrow[r,|->,"\mu_-"] \& \begin{tikzpicture}[baseline=-.5ex, xscale=0.6,yscale=0.6] \begin{scope}[yscale=-1] \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5, 0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \end{scope} \draw (0,-3) node[below] {$(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$}; \end{tikzpicture}
\arrow[r,|->,"\mu_+"] \& \begin{tikzpicture}[baseline=-.5ex, xscale=0.6,yscale=0.6] \begin{scope}[rotate=-45] \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5, 0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \end{scope} \draw (0,-3) node[below] {$\mu_\ngraphfont{G}(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$}; \end{tikzpicture} \end{tikzcd} } \subfigure[Coxeter padding $\ngraphfont{C}(\dynkinfont{A}_n)$ of type $\dynkinfont{A}_n$\label{figure:coxeter padding of type An}]{$ \begin{tikzpicture}[baseline=-.5ex, xscale=0.6,yscale=0.6] \begin{scope}[rotate=-45] \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5, 0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \end{scope} \draw (0,-3) node[below] {$\mu_\ngraphfont{G}(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-.5ex,xscale=0.6,yscale=0.6] \draw[thick] (0,0) circle (5); \foreach \i in {45, 90, ..., 360} { \draw[blue, thick] (\i:5) to[out=\i-180,in=\i+45] (\i+45:3); } \draw[thick] (0,0) circle (3); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (0:3) -- (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5, 0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \draw (0,-5) node[below] {$\ngraphfont{C}(\dynkinfont{A}_n)(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$}; \end{tikzpicture} $} \caption{Legendrian Coxeter mutation $\mu_\ngraph$ on $(\ngraphfont{G}(\dynkinfont{A}_n), \ngraphfont{B}(\dynkinfont{A}_n))$} \end{figure} \begin{remark}\label{rmk_order_of_Coxeter_mutation} The order of the Coxeter mutation is $(n+3)/2$ if $n$ is odd and $n+3$ otherwise. Since the Coxeter number $h=n+1$ for $\dynkinfont{A}_n$, this verifies Lemma~\ref{lemma:order of coxeter mutation} in this case. \end{remark} \subsubsection{Legendrian Coxeter mutation for tripod $N$-graphs} Let us consider the Legendrian Coxeter mutation for tripod $N$-graphs.
By the mutation convention mentioned in Remark~\ref{rmk_mutation_convention}, for each tripod $\ngraphfont{G}(a,b,c)$, we always take the mutation at the central $\sfY$-cycle $\gamma$ first. After the Legendrian mutation on $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ at $\gamma$, we have the $N$-graph on the left in Figure~\ref{figure:center mutation}. Then there are three shaded regions to which we can apply the generalized push-through moves, so that we obtain the $N$-graph on the right in Figure~\ref{figure:center mutation}. \begin{figure}[ht] \subfigure[\label{figure:center mutation}After the mutation at the central vertex]{ \begin{tikzcd}[ampersand replacement=\&] \begin{tikzpicture}[baseline=-.5ex,xscale=0.8, yscale=0.8] \begin{scope} \fill[opacity=0.1](85:3) to[out=-90,in=150] (60:1.3) arc (60:-60:0.3) arc (120:240:0.7) to[out=-30,in=150] (-25:3) arc (-25:85:3); \end{scope} \begin{scope}[rotate=120] \fill[opacity=0.1](85:3) to[out=-90,in=150] (60:1.3) arc (60:-60:0.3) arc (120:240:0.7) to[out=-30,in=150] (-25:3) arc (-25:85:3); \end{scope} \begin{scope}[rotate=240] \fill[opacity=0.1](85:3) to[out=-90,in=150] (60:1.3) arc (60:-60:0.3) arc (120:240:0.7) to[out=-30,in=150] (-25:3) arc (-25:85:3); \end{scope} \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (50:1.5) to[out=-60,in=60] (0:1) -- (-60:1) (70:1.75) -- (50:2) (170:1.5) to[out=60,in=180] (120:1) -- (60:1) (190:1.75) -- (170:2) (290:1.5) to[out=180,in=300] (240:1) -- (180:1) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (50:1.5) -- (70:1.75) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \draw[blue, thick] (0,0) -- (0:1) (0,0) -- (120:1) (0,0) -- (240:1); \draw[red, thick, fill] (0,0) -- (60:1) circle (2pt) (0,0) -- (180:1) circle (2pt) (0,0) -- (300:1) circle (2pt); \draw[red, thick] (0:1) -- (0:3) (120:1) -- (120:3) (240:1) -- (240:3); \draw[red, thick] (60:1) -- (120:1) -- (180:1) -- (240:1) -- (300:1) -- (0:1) -- cycle; \draw[blue, thick] (100:3) to[out=-80,in=60] (120:1) (-20:3) to[out=-200,in=-60] (0:1) (220:3) to[out=40,in=180] (240:1); \draw[blue, thick] (50:1.5) to[out=-60,in=60] (0:1) (170:1.5) to[out=60,in=180] (120:1) (290:1.5) to[out=180,in=300] (240:1); \draw[blue, thick, fill] (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \draw[blue, thick, fill] (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[blue, thick, dashed] (170:2) -- (180:3); \draw[blue, thick, fill] (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[blue, thick, dashed] (290:2) -- (300:3); \draw[thick,fill=white] (0:1) circle (2pt) (120:1) circle (2pt) (240:1) circle (2pt); \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture}\arrow[r,"\Move{II^*}"]\& \begin{tikzpicture}[baseline=-.5ex,xscale=0.6, yscale=0.6] \draw[thick] (0,0) circle (5cm); \draw[dashed] (0,0) circle (3cm); \fill[opacity=0.1, even odd rule] (0,0) circle (3) (0,0) circle (5); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \begin{scope}[shift=(60:0.5)] \fill[opacity=0.1, rounded corners] (0,0) -- (0:2) arc (0:120:2) -- cycle; \end{scope} \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (70:1.5) (90:1.75) -- (70:2);
\draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (70:1.5) -- (90:1.75); \draw[blue, thick, rounded corners] (0,0) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[red, thick, fill] (0,0) -- (60:1) circle (2pt) (60:1) -- (70:1.5) circle (2pt) -- (90:1.75) circle (2pt) -- (70:2) circle (2pt); \draw[red, thick, dashed, rounded corners] (70:2) -- (60:2.8) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[red, thick, rounded corners] (70:1.5) -- (40:2.8) -- (40:3.3) to[out=-20,in=200] (20:4) (70:2) -- (80:2.8) -- (80:3.3) to[out=20,in=240] (60:4) (90:1.75) -- (100:2.8) -- (100:3.3) to[out=40,in=260] (80:4); \draw[red, thick, rounded corners] (60:1) -- (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[red, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[blue, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[blue, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[blue, thick, dashed] (40:4) to[out=160,in=-60] (60:4) (40:4) -- (40:5); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \draw[fill=white, thick] (0,0) circle (2pt); } \end{tikzpicture} \end{tikzcd}} \subfigure[\label{figure:coxeter mutation}After Legendrian Coxeter mutation]{$ \begin{tikzpicture}[baseline=-.5ex,xscale=0.6, yscale=0.6] \draw[thick] (0,0) circle (5cm); \draw[dashed] (0,0) circle (3cm); \fill[opacity=0.1, even odd rule] (0,0) circle (3) (0,0) circle (5); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (50:1.5) -- (70:1.75); \draw[blue, thick, rounded corners] (0,0) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[red, thick, fill] (0,0) -- (60:1) circle (2pt) (60:1) -- (50:1.5) circle (2pt) -- (70:1.75) circle (2pt) -- (50:2) circle (2pt); \draw[red, thick, dashed, rounded corners] (50:2) -- (60:2.8) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[red, thick, rounded corners] (50:2) -- (40:2.8) -- (40:3.3) to[out=-20,in=200] (20:4) (70:1.75) -- (80:2.8) -- (80:3.3) to[out=20,in=240] (60:4) (60:1) -- (100:2.8) -- (100:3.3) to[out=40,in=260] (80:4); \draw[red, thick, rounded corners] (50:1.5) -- (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[red, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[blue, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[blue, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[blue, thick, dashed] (40:4) to[out=160,in=-60] (60:4) (40:4) -- (40:5); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \draw[fill=white, thick] (0,0) circle (2pt); } \end{tikzpicture} $} \caption{Legendrian Coxeter mutation for $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$} \end{figure} Notice that in each triangular shaded region, the $N$-subgraph looks like the $N$-graph of type $\dynkinfont{A}_{a-1}, \dynkinfont{A}_{b-1}$, or $\dynkinfont{A}_{c-1}$. 
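In a schematic shorthand of ours (used only in this paragraph), write $\mu_\ngraph^{\dynkinfont{A}_m}$ for the Legendrian Coxeter mutation of the $\dynkinfont{A}_m$-type $N$-subgraph supported in one of these shaded regions. The discussion below can then be summarized by the factorization
\[
\mu_\ngraph \;=\; \left(\mu_\ngraph^{\dynkinfont{A}_{a-1}}\,\mu_\ngraph^{\dynkinfont{A}_{b-1}}\,\mu_\ngraph^{\dynkinfont{A}_{c-1}}\right)\circ \mu_\gamma,
\]
where the three factors in the parentheses commute with one another since they are supported in pairwise disjoint shaded regions.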
Moreover, the mutations corresponding to the rest of the sequence compose to Legendrian Coxeter mutations of type $\dynkinfont{A}_{a-1}$, $\dynkinfont{A}_{b-1}$, and $\dynkinfont{A}_{c-1}$, which are essentially the same as the clockwise rotations by Lemma~\ref{lemma:Legendriam Coxeter mutation of type An}. Therefore, the result of the Legendrian Coxeter mutation is as depicted in Figure~\ref{figure:coxeter mutation}. The resulting $N$-graph is very similar to the original $N$-graph $\ngraphfont{G}(a,b,c)$: indeed, the inside is identical to $\ngraphfont{G}(a,b,c)$ but with the colors switched, which is by definition the conjugation $\overline{\ngraphfont{G}(a,b,c)}$. The complement of $\overline{\ngraphfont{G}(a,b,c)}$ in $\mu_\ngraph(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ is an annular $N$-graph. \begin{definition}[Coxeter padding of type $(a,b,c)$] For each triple $a,b,c$, the annular $N$-graph depicted in Figure~\ref{figure:coxeter padding} is denoted by $\ngraphfont{C}(a,b,c)$ and called the \emph{Coxeter padding} of type $(a,b,c)$. We also denote by $\overline{\ngraphfont{C}(a,b,c)}$ the Coxeter padding with the colors switched, which is the conjugation of $\ngraphfont{C}(a,b,c)$. \end{definition} \begin{figure}[ht] \subfigure[$\ngraphfont{C}(a,b,c)$]{\makebox[0.48\textwidth]{ $ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[blue, thick, rounded corners] (0:3) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[red, thick, dashed, rounded corners] (60:3) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[red, thick, rounded corners] (40:3) -- (40:3.3) to[out=-20,in=200] (20:4) (80:3) -- (80:3.3) to[out=20,in=240] (60:4) (100:3) -- (100:3.3) to[out=40,in=260] (80:4); \draw[red, thick, rounded corners] (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[red, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[blue, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[blue, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[blue, thick, dashed] (40:4) -- (40:5) (40:4) to[out=160,in=-60] (60:4); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \curlybrace[]{10}{110}{5.2}; \draw (60:5.5) node[rotate=-30] {$a+1$}; \curlybrace[]{130}{230}{5.2}; \draw (180:5.5) node[rotate=90] {$b+1$}; \curlybrace[]{250}{350}{5.2}; \draw (300:5.5) node[rotate=30] {$c+1$}; } \end{tikzpicture}$ }} \subfigure[$\overline{\ngraphfont{C}(a,b,c)}$]{\makebox[0.48\textwidth]{ $ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[red, thick, rounded corners] (0:3) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[blue, thick, dashed, rounded corners] (60:3) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[blue, thick, rounded corners] (40:3) -- (40:3.3) to[out=-20,in=200] (20:4) (80:3) -- (80:3.3) to[out=20,in=240] (60:4) (100:3) -- (100:3.3) to[out=40,in=260] (80:4); \draw[blue, thick, rounded corners] (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[blue, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[red, thick] (20:5) -- (20:4)
to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[red, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[red, thick, dashed] (40:4) -- (40:5) (40:4) to[out=160,in=-60] (60:4); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \curlybrace[]{10}{110}{5.2}; \draw (60:5.5) node[rotate=-30] {$a+1$}; \curlybrace[]{130}{230}{5.2}; \draw (180:5.5) node[rotate=90] {$b+1$}; \curlybrace[]{250}{350}{5.2}; \draw (300:5.5) node[rotate=30] {$c+1$}; } \end{tikzpicture}$ }} \\ \subfigure[$\ngraphfont{C}(a,b,c)^{-1}$]{\makebox[0.48\textwidth]{ $ \begin{tikzpicture}[baseline=-.5ex,xscale=-0.5,yscale=0.5,rotate=60] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[red, thick, rounded corners] (0:3) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[blue, thick, dashed, rounded corners] (60:3) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[blue, thick, rounded corners] (40:3) -- (40:3.3) to[out=-20,in=200] (20:4) (80:3) -- (80:3.3) to[out=20,in=240] (60:4) (100:3) -- (100:3.3) to[out=40,in=260] (80:4); \draw[blue, thick, rounded corners] (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[blue, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[red, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[red, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[red, thick, dashed] (40:4) -- (40:5) (40:4) to[out=160,in=-60] (60:4); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \curlybrace[]{10}{110}{5.2}; \draw (60:5.5) node[rotate=-30] {$a+1$}; \curlybrace[]{130}{230}{5.2}; \draw (180:5.5) node[rotate=30] {$c+1$}; \curlybrace[]{250}{350}{5.2}; \draw (300:5.5) node[rotate=90] {$b+1$}; } \end{tikzpicture}$ }} \subfigure[$\overline{\ngraphfont{C}(a,b,c)}^{-1}$]{\makebox[0.48\textwidth]{ $ \begin{tikzpicture}[baseline=-.5ex,xscale=-0.5, yscale=0.5,rotate=60] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {1,2,3} { \begin{scope}[rotate=\i*120] \draw[blue, thick, rounded corners] (0:3) -- (0:3.4) to[out=-75,in=80] (-40:4); \draw[red, thick, dashed, rounded corners] (60:3) -- (60:3.3) to[out=0,in=220] (40:4) (40:4) to[out=120,in=-20] (60:4); \draw[red, thick, rounded corners] (40:3) -- (40:3.3) to[out=-20,in=200] (20:4) (80:3) -- (80:3.3) to[out=20,in=240] (60:4) (100:3) -- (100:3.3) to[out=40,in=260] (80:4); \draw[red, thick, rounded corners] (20:3) -- (20:3.5) to[out=-70,in=50] (-40:4) (20:4) to[out=-50,in=120] (0:4.5) -- (0:5); \draw[red, thick] (20:4) to[out=100,in=-40] (40:4) (60:4) to[out=140,in=0] (80:4); \draw[blue, thick] (20:5) -- (20:4) to[out=140,in=-80] (40:4) (60:5) -- (60:4) to[out=180,in=-40] (80:4) -- (80:5); \draw[blue, thick, rounded corners] (20:4) to[out=-70,in=100] (-20:4.5) -- (-20:5); \draw[blue, thick, dashed] (40:4) -- (40:5) (40:4) to[out=160,in=-60] (60:4); \draw[fill=white, thick] (20:4) circle (2pt) (40:4) circle (2pt) (60:4) circle (2pt) (80:4) circle (2pt) (-40:4) circle (2pt); \end{scope} \curlybrace[]{10}{110}{5.2}; \draw (60:5.5) node[rotate=-30] {$a+1$}; \curlybrace[]{130}{230}{5.2}; \draw (180:5.5) node[rotate=30] {$c+1$}; \curlybrace[]{250}{350}{5.2}; \draw (300:5.5) node[rotate=90] {$b+1$}; } 
\end{tikzpicture}$ }} \caption{Coxeter paddings $\ngraphfont{C}(a,b,c)$, $\bar\ngraphfont{C}(a,b,c)$ and their inverses.} \label{figure:coxeter padding} \end{figure} Notice that the two Coxeter paddings $\ngraphfont{C}(a,b,c)$ and $\overline{\ngraphfont{C}(a,b,c)}$ can be glued without any ambiguity, so we can pile up the Coxeter paddings $\ngraphfont{C}(a,b,c)$ and $\overline{\ngraphfont{C}(a,b,c)}$ alternately as many times as we want. We also define the concatenation of the Coxeter padding $\overline{\ngraphfont{C}(a,b,c)}$ on the pair $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ as the pair $(\ngraphfont{G}', \ngraphfont{B}')$ such that \begin{enumerate} \item the $N$-graph $\ngraphfont{G}'$ is obtained by gluing $\overline{\ngraphfont{C}(a,b,c)}$ on $\ngraphfont{G}(a,b,c)$, and \item the set $\ngraphfont{B}'$ of cycles is the set of $\sfI$- and $\sfY$-cycles identified with $\ngraphfont{B}(a,b,c)$ in a canonical way. \end{enumerate} \begin{proposition}\label{proposition:effect of Legendrian Coxeter mutation} Let $(\ngraphfont{G}, \ngraphfont{B}) = (\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))$. The Legendrian Coxeter mutation on $(\ngraphfont{G}, \ngraphfont{B})$ or $\overline{(\ngraphfont{G},\ngraphfont{B})}$ is given as the concatenation \begin{align*} \mu_\ngraph(\ngraphfont{G}, \ngraphfont{B}) &= \ngraphfont{C}\overline{(\ngraphfont{G},\ngraphfont{B})},& \mu_\ngraph^{-1}(\ngraphfont{G}, \ngraphfont{B}) &= \bar\ngraphfont{C}^{-1}\overline{(\ngraphfont{G},\ngraphfont{B})},& \mu_\ngraph\overline{(\ngraphfont{G},\ngraphfont{B})} &= \bar \ngraphfont{C} (\ngraphfont{G}, \ngraphfont{B}),& \mu_\ngraph^{-1}\overline{(\ngraphfont{G},\ngraphfont{B})} &= \ngraphfont{C}^{-1} (\ngraphfont{G}, \ngraphfont{B}), \end{align*} where $\ngraphfont{C}=\ngraphfont{C}(a,b,c)$, $\bar\ngraphfont{C}=\overline{\ngraphfont{C}(a,b,c)}$. In general, for $r\ge 0$, we have \begin{align*} \mu_\ngraph^r(\ngraphfont{G},\ngraphfont{B}) &= \begin{cases} \ngraphfont{C}\bar\ngraphfont{C}\cdots \bar\ngraphfont{C} (\ngraphfont{G},\ngraphfont{B})& \text{ if }r\text{ is even},\\ \ngraphfont{C}\bar\ngraphfont{C}\cdots \ngraphfont{C} \overline{(\ngraphfont{G},\ngraphfont{B})}& \text{ if }r\text{ is odd}. \end{cases}\\ \mu_\ngraph^{-r}(\ngraphfont{G},\ngraphfont{B}) &= \begin{cases} \bar\ngraphfont{C}^{-1}\ngraphfont{C}^{-1}\cdots \ngraphfont{C}^{-1} (\ngraphfont{G},\ngraphfont{B})& \text{ if }r\text{ is even},\\ \bar\ngraphfont{C}^{-1}\ngraphfont{C}^{-1}\cdots \bar\ngraphfont{C}^{-1} \overline{(\ngraphfont{G},\ngraphfont{B})}& \text{ if }r\text{ is odd}. \end{cases} \end{align*} \end{proposition} \begin{proof} This follows directly from the above observation. \end{proof} It is important to note that this proposition holds only when we take the Legendrian Coxeter mutation on the standard $N$-graph $\ngraphfont{G}(a,b,c)$ with the cycles $\ngraphfont{B}(a,b,c)$; otherwise, the Legendrian Coxeter mutation cannot be expressed as simply as above. Let $(\ngraphfont{G}, \ngraphfont{B})$ be a pair of a deterministic $N$-graph and a set of good cycles. Suppose that the quiver~$\clusterfont{Q}(\ngraphfont{G},\ngraphfont{B})$ is bipartite and the Legendrian Coxeter mutation $\mu_\ngraph(\ngraphfont{G},\ngraphfont{B})$ is realizable. Then, by Proposition~\ref{proposition:equivariance of mutations}, we have \[ \Psi(\mu_\ngraph(\ngraphfont{G},\ngraphfont{B})) = \mu_{\quiver}(\Psi(\ngraphfont{G},\ngraphfont{B})). \] In particular, for quivers of type $\dynkinfont{A}_n$ or tripods, we have the following corollary.
\begin{corollary}\label{corollary:Coxeter mutations} For each $n\ge 1$ and $a,b,c\ge 1$, the Legendrian Coxeter mutation $\mu_\ngraph$ on $(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))$ or $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ corresponds to the Coxeter mutation $\mu_{\quiver}$ on $\clusterfont{Q}(\dynkinfont{A}_n)$ or $\clusterfont{Q}(a,b,c)$, respectively. In other words, \begin{align*} \Psi(\mu_\ngraph(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n))) &= \mu_{\quiver}(\Psi(\ngraphfont{G}(\dynkinfont{A}_n),\ngraphfont{B}(\dynkinfont{A}_n)));\\ \Psi(\mu_\ngraph(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))) &= \mu_{\quiver}(\Psi(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))). \end{align*} \end{corollary} \begin{theorem}\label{theorem:infinite fillings} For $a,b,c\ge 1$ with $\frac 1a+\frac1b+\frac1c\le 1$, the Legendrian knot or link $\lambda(a,b,c)$ in $J^1\mathbb{S}^1$ admits infinitely many distinct exact embedded Lagrangian fillings. \end{theorem} \begin{proof} By Proposition~\ref{proposition:effect of Legendrian Coxeter mutation}, the effect of the Legendrian Coxeter mutation on $(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c))$ is just to attach the Coxeter padding to $(\bar\ngraphfont{G}(a,b,c),\bar\ngraphfont{B}(a,b,c))$. In particular, as mentioned earlier, the iterated Legendrian Coxeter mutation \[ \mu_\ngraph^r(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c)) \] is well-defined for each $r\in\mathbb{Z}$. Each of these $N$-graphs defines a Legendrian weave $\Lambda(\mu_\ngraph^r(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c)))$, whose Lagrangian projection is a Lagrangian filling \[ L_r(a,b,c)\colonequals(\pi\circ\iota)(\Lambda(\mu_\ngraph^r(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c)))) \] as desired. Therefore it suffices to prove that the Lagrangians $L_r(a,b,c)$ for $r\ge 0$ are pairwise distinct up to exact Lagrangian isotopy when $\frac1a+\frac1b+\frac1c\le 1$. Now suppose that $\frac1a+\frac1b+\frac1c\le1$, or equivalently, that $\clusterfont{Q}(a,b,c)$ is of infinite type, that is, not of finite Dynkin type (cf. Definition~\ref{def_quiver_of_type_X}(1)). Then the order of the Coxeter mutation is infinite by Lemma~\ref{lemma:order of coxeter mutation}, and so is the order of the Legendrian Coxeter mutation by Corollary~\ref{corollary:Coxeter mutations}. In particular, the set \[ \left\{\Psi(\mu_\ngraph^r(\ngraphfont{G}(a,b,c), \ngraphfont{B}(a,b,c)))\mid r\in\mathbb{Z}\right\} \] consists of infinitely many pairwise distinct $Y$-seeds in the $Y$-pattern for $\clusterfont{Q}(a,b,c)$. Hence, by Corollary~\ref{corollary:distinct seeds imples distinct fillings}, we obtain pairwise distinct Lagrangian fillings $L_r(a,b,c)$. \end{proof} \begin{remark} For Legendrian links of non-$\dynkinfont{ADE}$ type, many examples admitting infinitely many distinct Lagrangian fillings have been given by a number of researchers and groups; a non-exhaustive list includes \cite{CG2020, CN2021, CZ2020, GSW2020b}. \end{remark} \subsubsection{Legendrian Coxeter mutations for $N$-graphs of type $\widetilde{\dynD}_n$} We will perform the Legendrian Coxeter mutation $\mu_\ngraphfont{G}$ on $(\ngraphfont{G}(\widetilde{\dynD}_n), \ngraphfont{B}(\widetilde{\dynD}_n))$ in order to give a pictorial proof of Proposition~\ref{proposition:coxeter realization D-type}. Before taking these mutations, we first introduce a useful operation on $N$-graphs, called the \emph{move}~$\mathrm{(Z)}$, described below.
\[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,1)--(-1,1) (-1,-1)--(-1,-2); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (-1,1)--(0,0) (-1,-1)--(0,0) (0,0)--(1,0); \draw[red, thick] (0,3) -- (0,-3) (0,0) -- (-3,0); \draw[blue,thick, fill] (0,0) -- (-1,1) circle (2pt) -- +(0,2) (-1,1) -- ++(-1,0) circle (2pt) -- +(-1,0) (-2,1) -- +(0,2); \draw[blue,thick, fill] (0,0) -- (-1,-1) circle (2pt) -- +(-2,0) (-1,-1) -- ++(0,-1) circle (2pt) -- +(0,-1) (-1,-2) -- +(-2,0); \draw[blue,thick] (0,0) -- (1,0); \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture}\arrow[r,"\mathrm{(II)}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,1)--(-1,1) (-1,-2)--(1,0)--(1,1); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5] (-1,1)--(0,0) (0,0)--(2,0); \draw[red, thick, fill] (1,3) -- (1,1) circle (2pt) -- (0,0) (1,1) -- (1,0) (1,0) -- (1,-3) (1,0) -- (-3,0); \node at (-0.5,0.5)[above right] {$\gamma$}; \draw[blue, thick] (0,0) to[out=30,in=150] (1,0); \draw[blue,thick, fill] (0,0) -- (-1,1) circle (2pt) -- +(0,2) (-1,1) -- ++(-1,0) circle (2pt) -- +(-1,0) (-2,1) -- +(0,2) (-1,-2) circle (2pt); \draw[blue,thick] (-1,-3) -- (-1,-2) -- (1,0) (-1,-2) -- (-3,-2); \draw[blue,thick,rounded corners](0,0) -- (-1,-1) -- +(-2,0); \draw[blue,thick] (1,0) -- (2,0); \draw[thick, fill=white] (0,0) circle (2pt) (1,0) circle (2pt); \end{tikzpicture}\arrow[r,"\mu_\gamma"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,2)--(-1,1) (0,-2)--(1,-1)--(1,0) to[out=120,in=-120] (1,1) -- (1,2); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (-1,1)--(0,1)--(1,0)--(2,0); \draw[red, thick, fill] (1,3) -- (1,-3) (-3,0) -- (-1,0) (1,2) circle (2pt) -- (0,2); \draw[red, thick] (1,1) -- (0,2) -- (0,1) -- (1,0) (0,1) -- (-1,0) -- (1,-1); \draw[blue, thick, fill] (0,2) -- (-1,3) (0,1) -- (-1,1) circle (2pt) -- (-2,2) circle (2pt) -- (-2,3) (-1,1) -- (-1,0) (-2,2) -- (-3,2) (1,-1) -- (0,-2) circle (2pt) -- (-3,-2) (0,-2) -- (0,-3); \draw[blue,thick, rounded corners] (-1,0) -- (-2,-1) -- (-3,-1) ; \draw[blue, thick] (1,1) -- (2,1) (1,0) -- (2,0) (1,-1) -- (2,-1); \draw[blue, thick] (-1,0) to[out=15,in=-105] (0,1) (0,2) to[out=-15,in=105] (1,1) (1,1) to[out=-120,in=120] (1,0) (1,0) to[out=-120,in=120] (1,-1) (0,2) to[out=-60,in=60] (0,1); \draw[thick, fill=white] (-1,0) circle (2pt) (1,0) circle (2pt) (0,1) circle (2pt) (0,2) circle (2pt) (1,1) circle (2pt) (1,-1) circle (2pt); \end{tikzpicture}\arrow[r,"\mathrm{(II)}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,2)--(-1,1) (0,-2)--(1,-1)--(1,0)--(0.67, 0.67); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (-1,1)--(0,1)--(1,0)--(2,0); \draw[red, thick, fill] (1,3)--(1,-3) (-3,0) -- (-1,0) (-1,0) -- (1,-1) (1,1) -- (0,1); \draw[red, thick] (-1,0) -- (0,1) -- (1,0); \draw[blue, thick, fill] (1,1) -- (0.67, 0.67) circle (2pt) -- (0,1) (0.67, 0.67) -- (1,0); \draw[blue, thick] (-1,0) to[out=15, in=-105] (0,1) (1,0) to[out=-120,in=120] (1,-1); \draw[blue, thick, fill] (0,1) -- (-1,1) circle (2pt) -- (-2,2) circle (2pt) -- (-2,3) (-1,0) -- (-1,1) (-2,2) -- (-3,2); \draw[blue, thick] (1,1) -- (0,2) -- (-1,3); \draw[blue, thick] (1,1) -- ++(1,0) (1,0) -- ++(1,0) (1,-1) -- ++(1,0); \draw[blue, thick, fill] (1,-1) -- (0,-2) circle (2pt) -- (-3,-2) (0,-2) -- 
(0,-3) ; \draw[blue,thick, rounded corners] (-1,0) -- (-2,-1) -- (-3,-1) ; \draw[thick, fill=white] (-1,0) circle (2pt) (0,1) circle (2pt) (1,-1) circle (2pt) (1,0) circle (2pt) (1,1) circle (2pt); \end{tikzpicture}\arrow[d, "\mathrm{(II)}"]\\ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5, line cap=round] (1,0.5) -- (1,1.5) (0.5,-1) --(1.5,-1) ; \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5] (1,0.5) -- (2,0) --(1.5,-1) (2,0) -- (3,0) ; \draw[red, thick] (2,3) -- (2,-3) (2,2) -- (0,2) -- (-2,0) -- (-3,0) (2,-2) -- (0,-2) -- (-2,0) (0,2) -- (0,-2) (0,0) -- (2,0) ; \draw[blue, thick] (-3,1) -- (-2,0) to[out=15,in=-105] (0,2) -- (-1,3) (0,2) -- (1,1.5) -- (2,2) -- (3,2) (1,3) -- (2,2) (1,1.5) -- (1,0.5) -- (2,0) -- (3,0) (1,0.5) -- (0,0) -- (0.5,-1) -- (1.5,-1) -- (2,0) (0,0) to[out=-120,in=120] (0,-2) -- (-1,-3) (0,-2) -- (0.5,-1) (1.5,-1) -- (2,-2) -- (3,-2) (2,-2) -- (1,-3) (-3,-1) -- (-2,0) ; \draw[thick, fill=white] (-2,0) circle (2pt) (0,2) circle (2pt) (0,-2) circle (2pt) (0,0) circle (2pt) (2,2) circle (2pt) (2,0) circle (2pt) (2,-2) circle (2pt); \draw[blue, thick, fill] (1,0.5) circle (2pt) (1,1.5) circle (2pt) (0.5,-1) circle (2pt) (1.5,-1) circle (2pt) ; \end{tikzpicture} & \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,2)--(-1,2)--(-1,0)--(0,-1) (0,0.5)--(0,1.5); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (0,-1)--(1,0)--(2,0) (0,0.5)--(1,0); \draw[red, thick] (1,3)--(1,-3) (-3,0) -- (-1,2)--(1,2) (-1,2) --(-1,0)--(1,0) (-1,0) to[out=-90,in=180] (1,-2); \draw[blue, thick] (-3,2) -- (-2,2) -- (-2,3) (-2,2) -- (-1,2) -- (0,1.5) -- (1,2) -- (0,3) (2,2)--(1,2) (0,1.5)--(0,0.5)--(1,0)--(2,0) (0,0.5)--(-1,0) -- (-2,-3) (0,-3) --(1,-2) -- (2,-2) (1,-2) -- (-1,0) (0,-1) -- (1,0) (-1,2) -- (-3,-2) ; \draw[blue, thick, fill] (-2,2) circle (2pt) (0,1.5) circle (2pt) (0,0.5) circle (2pt) (0,-1) circle (2pt) ; \draw[thick, fill=white] (-1,2) circle (2pt) (-1,0) circle (2pt) (1,2) circle (2pt) (1,0) circle (2pt) (1,-2) circle (2pt) ; \end{tikzpicture}\arrow[l,"\mathrm{(II)^2}"] & \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] (-2,2)--(-1,1) (0.5, 0.75) -- (0.5,0.25); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (-1,1)--(0,1)--(0,0) to[out=-30,in=-120] (1,0) --(2,0) (1,0) -- (0.5,0.25); \draw[red, thick, fill] (1,3)--(1,-3) (-3,0) -- (-1,0) (-1,0) -- (0,-1) (1,1) -- (0,1) (0,0) -- (0,-1); \draw[red, thick] (-1,0) -- (0,1) -- (0,0) -- (1,0) (0,-1) -- (1,-1); \draw[blue, thick, fill] (1,1) -- (0.5, 0.75) circle (2pt) -- (0.5,0.25) circle (2pt) -- (1,0) (0.5, 0.75) -- (0,1) (0.5, 0.25) -- (0,0); \draw[blue, thick] (-1,0) to[out=15, in=-105] (0,1) (0,0) to[out=-120,in=120] (0,-1) (0,-1) to[out=30, in=150] (1,-1) (0,0) to[out=-30, in=-150] (1,0); \draw[blue, thick, fill] (0,1) -- (-1,1) circle (2pt) -- (-2,2) circle (2pt) -- (-2,3) (-1,0) -- (-1,1) (-2,2) -- (-3,2); \draw[blue, thick] (1,1) -- (0,2) -- (-1,3); \draw[blue, thick] (1,1) -- ++(1,0) (1,0) -- ++(1,0) (1,-1) -- ++(1,0); \draw[blue, thick] (0,-1) -- (-2,-3) (1,-1) -- (-1,-3) (-1,0) -- (-3,-2); \draw[thick, fill=white] (-1,0) circle (2pt) (0,-1) circle (2pt) (0,0) circle (2pt) (0,1) circle (2pt) (1,-1) circle (2pt) (1,0) circle (2pt) (1,1) circle (2pt); \end{tikzpicture}\arrow[l,"\mathrm{(I,II)^*}"]& \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[cyclecolor2, line cap=round, line width=5, opacity=0.5] 
(-2,2)--(-1,1) (1,-0.5)--(1,0)--(0.67, 0.67); \draw[cyclecolor1, line cap=round, line width=5, opacity=0.5,] (-1,1)--(0,1)--(1,0)--(2,0); \draw[red, thick, fill] (1,3)--(1,-3) (-3,0) -- (-1,0) (-1,0) -- (0,-1) (1,1) -- (0,1) (1,-0.5) circle (2pt) -- (0,-1); \draw[red, thick] (-1,0) -- (0,1) -- (1,0) (0,-1) -- (1,-1); \draw[blue, thick, fill] (1,1) -- (0.67, 0.67) circle (2pt) -- (0,1) (0.67, 0.67) -- (1,0) (1,0) -- (0,-1); \draw[blue, thick] (-1,0) to[out=15, in=-105] (0,1) (0,-1) to[out=20, in=160] (1,-1); \draw[blue, thick, fill] (0,1) -- (-1,1) circle (2pt) -- (-2,2) circle (2pt) -- (-2,3) (-1,0) -- (-1,1) (-2,2) -- (-3,2); \draw[blue, thick] (1,1) -- (0,2) -- (-1,3); \draw[blue, thick] (1,1) -- ++(1,0) (1,0) -- ++(1,0) (1,-1) -- ++(1,0); \draw[blue, thick] (0,-1) -- (-2,-3) (1,-1) -- (-1,-3) (-1,0) -- (-3,-2); \draw[thick, fill=white] (-1,0) circle (2pt) (0,-1) circle (2pt) (0,1) circle (2pt) (1,-1) circle (2pt) (1,0) circle (2pt) (1,1) circle (2pt); \end{tikzpicture}\arrow[l,"\mathrm{(II)}"] \end{tikzcd} \] \begin{remark} We call this operation a \emph{move}, but the reader should be warned that, unlike the moves considered so far, it does not induce an equivalence of $N$-graphs, since it involves the mutation $\mu_\gamma$. \end{remark} One important observation is that one can apply the move $\mathrm{(Z)}$ instead of the Legendrian mutation~$\mu_\gamma$ at the $\sfY$-like cycle\footnote{We use the loose terminology `$\sfY$-like cycle' since the global shape of $\gamma$ is unknown; the meaning should nonetheless be clear, and we omit the details.}~$\gamma$, and after the move, the $\sfY$-like cycle remains a $\sfY$-like cycle while $\sfI$-cycles remain $\sfI$-cycles. For example, let us consider $(\ngraphfont{G}(\widetilde{\dynD}_4), \ngraphfont{B}(\widetilde{\dynD}_4))$. Then the Legendrian Coxeter mutation $\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_4), \ngraphfont{B}(\widetilde{\dynD}_4))$ is obtained by the mutation $\mu_{\gamma_1}$ followed by the composition $\mu_{\gamma_2}\mu_{\gamma_3}\mu_{\gamma_4}\mu_{\gamma_5}$. See Figure~\ref{figure:Legendrian Coxeter mutation for affine D4}.
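Although no computer verification is needed for any of the proofs, the infinite order of the Coxeter mutation can be observed experimentally at the level of quivers. The following Python sketch is ours and purely illustrative: the helper names \texttt{mutate} and \texttt{coxeter} are hypothetical, we fix one of the two standard conventions for the Coxeter mutation (mutate at all sources first, then at the sink; the other convention yields its inverse), and we track $C$-matrices, which separate tropical $Y$-seeds, as a computable stand-in for the $Y$-seeds used in the proof of Theorem~\ref{theorem:infinite fillings}. The sketch applies the Fomin--Zelevinsky matrix mutation to the extended exchange matrix of the bipartite star quiver $\clusterfont{Q}(\widetilde{\dynD}_4)$ with central vertex $\gamma_1$ and leaves $\gamma_2,\dots,\gamma_5$.
\begin{verbatim}
import numpy as np

def mutate(M, k):
    # Fomin--Zelevinsky mutation of an extended exchange matrix M
    # in direction k: negate row and column k; otherwise add
    # sgn(M[i,k]) * max(M[i,k] * M[k,j], 0).
    new = M.copy()
    m, n = M.shape
    for i in range(m):
        for j in range(n):
            if i == k or j == k:
                new[i, j] = -M[i, j]
            else:
                new[i, j] = M[i, j] + np.sign(M[i, k]) * max(M[i, k] * M[k, j], 0)
    return new

# Star quiver of affine D4: vertex 0 is the center (a sink),
# vertices 1..4 are the leaves (sources).
n = 5
B = np.zeros((n, n), dtype=int)
for leaf in range(1, n):
    B[leaf, 0], B[0, leaf] = 1, -1    # one arrow: leaf -> center

M = np.vstack([B, np.eye(n, dtype=int)])  # extend by C = Id

def coxeter(M):
    # Mutations within one color class commute (the leaves are
    # pairwise non-adjacent), so the order below is irrelevant.
    for k in range(1, n):   # first all sources ...
        M = mutate(M, k)
    return mutate(M, 0)     # ... then the sink

seen = set()
for r in range(30):
    key = M[n:, :].tobytes()   # the C-matrix of the current seed
    assert key not in seen     # no repetition so far
    seen.add(key)
    M = coxeter(M)
print("30 Coxeter iterates with pairwise distinct C-matrices")
\end{verbatim}
Running the same loop on a tripod quiver $\clusterfont{Q}(a,b,c)$ with $\frac1a+\frac1b+\frac1c\le1$ gives the analogous experimental check for Theorem~\ref{theorem:infinite fillings}; for a quiver of finite $\dynkinfont{ADE}$ type, by contrast, the Coxeter mutation is known to have finite order, so the loop would eventually revisit a $C$-matrix.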
\begin{figure}[ht] \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-4, -2.5) rectangle (4, 2.5); \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (1, 0) node[near start, above, color=black, sloped,opacity=1] {$\gamma_1$} (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) node[midway, below, color=black,opacity=1] {$\gamma_2$} (-2, -1) -- (-2, -1.75) node[midway, right=-1ex, color=black,opacity=1] {$\gamma_3$} (2, 1) -- (2, 1.75) node[midway, left=-1ex, color=black,opacity=1] {$\gamma_4$} (2, -1) -- (3, -1) node[midway, above, color=black,opacity=1] {$\gamma_5$} ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{tikzpicture} \arrow[r,"\mu_{\gamma_1}"] \arrow[rd, "\mu_\ngraph"', bend right] & \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-8, -4) rectangle (8, 4); \clip[rounded corners=5] (-8, -4) rectangle (8, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[blue, thick] (-4, 1) -- ++(-1, -1) (-4, -1) -- ++(-1, 1) to[out=-120, in=120] ++(0,-3) (-5, 3) to[out=-105,in=30] (-7,0) (-2, -2.5) -- ++(1, -0.5) -- +(-1, -1) ++(0,0) -- ++(2, 0) -- ++(1, 0.5) (-3, -2.5) -- ++(-2, -0.5) -- ++(-1, -1) (-4, 1.75) -- ++(-1, 1.25) -- ++(-1, 1) (-2, 2.5) -- ++(1, 0.5) -- ++(-1, 1) (-8, 1) -- ++(1, -1) -- ++(-1, -1) ; \draw[red, thick] (-1, -2.5) -- ++(0, -0.5) -- +(0, -1) ++(0,0) -- ++(-4, 0) -- ++(0, 3) -- +(1,0) ++(0,0) -- ++(0,3) -- ++(4,0) -- +(0, 1) ++(0,0) -- ++(0, -0.5) (-5, -3) -- ++(-2, 3) -- +(-1, 0) ++(0,0) -- ++(2, 3) ; \draw[green, thick] (0, 4) -- (0, 2.5); \draw[fill=white, thick] (-5, 0) circle (2pt) (-7, 0) circle (2pt) (-5, -3) circle (2pt) (-1, -3) circle (2pt) (1, -3) circle (2pt) (-5, 3) circle (2pt) ; \end{scope} } \begin{scope}[yscale=-1] \draw[rounded corners=5, thick] (-4, -2.5) rectangle (4, 2.5); \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) (-2, -1) -- (-2, -1.75) (2, 1) -- (2, 1.75) (2, -1) -- (3, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{scope} \end{tikzpicture} 
\arrow[d,"\mu_{\gamma_2}\mu_{\gamma_3}\mu_{\gamma_4}\mu_{\gamma_5}"] \\ & \begin{tikzpicture}[baseline=-.5ex,xscale=0.4, yscale=-0.4] \draw[rounded corners=5, thick] (-8, -4) rectangle (8, 4); \clip[rounded corners=5] (-8, -4) rectangle (8, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[blue, thick] (-4, 1) -- ++(-1, -1) (-4, -1) -- ++(-1, 1) to[out=120, in=-120] ++(0,3) (-5, -3) to[out=105,in=-30] (-7,0) (-2, -2.5) -- ++(1, -0.5) -- +(-1, -1) ++(0,0) -- ++(2, 0) -- ++(1, 0.5) (-3, -2.5) -- ++(-2, -0.5) -- ++(-1, -1) (-4, 1.75) -- ++(-1, 1.25) -- ++(-1, 1) (-2, 2.5) -- ++(1, 0.5) -- ++(-1, 1) (-8, 1) -- ++(1, -1) -- ++(-1, -1) ; \draw[red, thick] (-1, -2.5) -- ++(0, -0.5) -- +(0, -1) ++(0,0) -- ++(-4, 0) -- ++(0, 3) -- +(1,0) ++(0,0) -- ++(0,3) -- ++(4,0) -- +(0, 1) ++(0,0) -- ++(0, -0.5) (-5, -3) -- ++(-2, 3) -- +(-1, 0) ++(0,0) -- ++(2, 3) ; \draw[green, thick] (0, 4) -- (0, 2.5); \draw[fill=white, thick] (-5, 0) circle (2pt) (-7, 0) circle (2pt) (-5, -3) circle (2pt) (-1, -3) circle (2pt) (1, -3) circle (2pt) (-5, 3) circle (2pt) ; \end{scope} } \begin{scope}[yscale=-1] \draw[rounded corners=5, thick] (-4, -2.5) rectangle (4, 2.5); \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) (-2, -1) -- (-2, -1.75) (2, 1) -- (2, 1.75) (2, -1) -- (3, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{scope} \end{tikzpicture} \end{tikzcd} \] \caption{Legendrian Coxeter mutation for $\ngraphfont{G}(\widetilde{\dynD}_4)$} \label{figure:Legendrian Coxeter mutation for affine D4} \end{figure} Therefore, $\mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_4), \ngraphfont{B}(\widetilde{\dynD}_4))$ is the same as the concatenation \[ \mu_\ngraph(\ngraphfont{G}(\widetilde{\dynD}_4), \ngraphfont{B}(\widetilde{\dynD}_4))=\ngraphfont{C}(\widetilde{\dynD}_4)(\ngraphfont{G}(\widetilde{\dynD}_4),\ngraphfont{B}(\widetilde{\dynD}_4,)), \] where the annular $N$-graph $\ngraphfont{C}(\widetilde{\dynD}_4)$ looks as follows: \[ \ngraphfont{C}(\widetilde{\dynD}_4)= \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[rounded corners=5, thick] (-8, -4) rectangle (8, 4); \clip[rounded corners=5] (-8, -4) rectangle (8, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[blue, thick] (-4, 1) -- ++(-1, -1) (-4, -1) -- ++(-1, 1) to[out=-120, in=120] ++(0,-3) (-5, 3) to[out=-105,in=30] (-7,0) (-2, -2.5) -- ++(1, -0.5) -- +(-1, -1) ++(0,0) -- ++(2, 0) -- ++(1, 0.5) (-4, -1.75) -- ++(-1, -1.25) -- ++(-1, -1) (-3, 2.5) -- ++(-2, 0.5) -- ++(-1, 1) (-2, 2.5) -- ++(1, 0.5) -- ++(-1, 1) (-8, 1) -- ++(1, -1) -- ++(-1, -1) ; \draw[red, thick] (-1, -2.5) -- ++(0, -0.5) -- +(0, -1) ++(0,0) -- ++(-4, 0) -- ++(0, 3) -- +(1,0) ++(0,0) -- ++(0,3) -- ++(4,0) -- +(0, 1) ++(0,0) -- ++(0, -0.5) (-5, -3) -- ++(-2, 3) -- +(-1, 0) ++(0,0) -- ++(2, 3) ; \draw[green, 
thick] (0, 4) -- (0, 2.5); \draw[fill=white, thick] (-5, 0) circle (2pt) (-7, 0) circle (2pt) (-5, -3) circle (2pt) (-1, -3) circle (2pt) (1, -3) circle (2pt) (-5, 3) circle (2pt) ; \end{scope} } \draw[rounded corners=5, thick] (-4, -2.5) rectangle (4, 2.5); \end{tikzpicture} \] In general, for the $N$-graph $(\ngraphfont{G}(\widetilde{\dynD}_n), \ngraphfont{B}(\widetilde{\dynD}_n))$, the Legendrian Coxeter mutation is the same as the concatenation of the Coxeter padding of type $\ngraphfont{C}^{\pm1}(\widetilde{\dynD}_n)$, which is an annular $N$-graph depicted in Figure~\ref{figure:coxeter paddings for affine D}. \begin{figure}[ht] \subfigure[$\ngraphfont{C}({\widetilde{\dynD}}_{n})$]{\makebox[.49\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[thick, rounded corners=5] (-9,-4) rectangle (9, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \begin{scope}[yscale=-1] \draw[thick, red] (-3, 4) -- ++(0, -2) (-3, -4) -- ++(0, 2) (-3, 3) -- ++(-3, 0) -- ++(0, -6) -- ++(3,0) (-6, 0) -- ++(1, 0) (-6, 3) -- ++(-2, -3) -- ++(2, -3) (-8, 0) -- ++(-1, 0) ; \draw[thick, blue] (-4, 4) -- ++(1, -1) -- ++(-1, -1) (-4, -4) -- ++(1, 1) -- ++(-1, 1) (-7, 4) -- ++(1, -1) -- ++(1.5, -1.5) (-7, -4) -- ++(1, 1) -- ++(1.5, 1.5) (-6, 0) -- ++(1, 1) (-6, 0) -- ++(1,-1) (-8, 0) -- ++(-1, 1) (-8, 0) -- ++(-1, -1) (-8, 0) to[out=-30, in=105] ++(2, -3) (-6, 3) to[out=-120, in=120] (-6, 0) ; \end{scope} \draw[thick, blue, rounded corners] (-3, 3) -- ++(2, 0) -- ++(0, -1) (-1, 4) -- ++(0, -1) -- ++(1,0) (2, 2) -- ++(0, 1) -- ++(-1, 0) (-2, -4) -- ++(0, 1) -- ++(-1, 0) ; \draw[thick, blue, dashed] (0, 3) -- ++(1, 0) ; \draw[thick, green] (-2, 4) -- ++(0,-2) ; \draw[thick, fill=white] (-3, 3) circle (2pt) (-3, -3) circle (2pt) (-6, 3) circle (2pt) (-6, 0) circle (2pt) (-6, -3) circle (2pt) (-8, 0) circle (2pt) ; \end{scope} } \draw[thick, rounded corners=5, fill=white] (-5,-2) rectangle (5, 2); \draw (0.5, 4) node[above=0ex] {$\overbrace{\hphantom{\hspace{1.6cm}}}^{\ell=\left\lfloor \frac{n-4}2\right\rfloor}$}; \draw (-0.5, -4) node[below=0ex] {$\underbrace{\hphantom{\hspace{1.6cm}}}_{k=\left\lfloor \frac{n-3}2\right\rfloor}$}; \end{tikzpicture} }} \subfigure[$\ngraphfont{C}({\widetilde{\dynD}}_{n})^{-1}$]{\makebox[.49\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[thick, rounded corners=5] (-9,-4) rectangle (9, 4); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[thick, red] (-3, 4) -- ++(0, -2) (-3, -4) -- ++(0, 2) (-3, 3) -- ++(-3, 0) -- ++(0, -6) -- ++(3,0) (-6, 0) -- ++(1, 0) (-6, 3) -- ++(-2, -3) -- ++(2, -3) (-8, 0) -- ++(-1, 0) ; \draw[thick, blue] (-4, 4) -- ++(1, -1) -- ++(-1, -1) (-4, -4) -- ++(1, 1) -- ++(-1, 1) (-7, 4) -- ++(1, -1) -- ++(1.5, -1.5) (-7, -4) -- ++(1, 1) -- ++(1.5, 1.5) (-6, 0) -- ++(1, 1) (-6, 0) -- ++(1,-1) (-8, 0) -- ++(-1, 1) (-8, 0) -- ++(-1, -1) (-8, 0) to[out=-30, in=105] ++(2, -3) (-6, 3) to[out=-120, in=120] (-6, 0) ; \draw[thick, blue, rounded corners] (-3, 3) -- ++(2, 0) -- ++(0, 1) (-1, 2) -- ++(0, 1) -- ++(1,0) (2, 4) -- ++(0, -1) -- ++(-1, 0) (-2, -2) -- ++(0, -1) -- ++(-1, 0) ; \draw[thick, blue, dashed] (0, 3) -- ++(1, 0) ; \draw[thick, green] (-2, 4) -- ++(0,-2) ; \draw[thick, fill=white] (-3, 3) circle (2pt) (-3, -3) circle (2pt) (-6, 3) circle (2pt) (-6, 0) circle (2pt) (-6, -3) circle (2pt) (-8, 0) circle (2pt) ; \end{scope} } \draw[thick, rounded corners=5, fill=white] (-5,-2) rectangle (5, 2); \draw (0.5, 4) node[above=0ex] {$\overbrace{\hphantom{\hspace{1.6cm}}}^{\ell=\left\lfloor \frac{n-4}2\right\rfloor}$}; \draw (-0.5, -4) 
node[below=0ex] {$\underbrace{\hphantom{\hspace{1.6cm}}}_{k=\left\lfloor \frac{n-3}2\right\rfloor}$}; \end{tikzpicture} }} \caption{Coxeter paddings $\ngraphfont{C}(\widetilde{\dynD}_n)^{\pm1}$} \label{figure:coxeter paddings for affine D} \end{figure} \begin{proposition}\label{proposition:coxeter realization D-type} For any $r\in\Z$, the Legendrian Coxeter mutation $\mu_\ngraphfont{G}^r$ on the pair $(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$ is given by piling up $|r|$ copies of the Coxeter padding $\ngraphfont{C}(\widetilde{\dynD}_n)^{\pm1}$. That is, \begin{align*} \mu_\ngraphfont{G}^{r}(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n)) = \begin{cases} \ngraphfont{C}(\widetilde{\dynD}_n)\ngraphfont{C}(\widetilde{\dynD}_n)\cdots\ngraphfont{C}(\widetilde{\dynD}_n)(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n)) & r\ge 0;\\ \ngraphfont{C}(\widetilde{\dynD}_n)^{-1}\ngraphfont{C}(\widetilde{\dynD}_n)^{-1}\cdots \ngraphfont{C}(\widetilde{\dynD}_n)^{-1}(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n)) & r<0. \end{cases} \end{align*} \end{proposition} \begin{corollary}\label{cor:coxeter realization D-type} For any $r\in \Z$, the Legendrian Coxeter mutation $\mu_\ngraphfont{G}^r(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$ is realizable by an $N$-graph together with a set of good cycles. \end{corollary} Note that the Coxeter paddings are obtained from the Coxeter mutations $\mu_\ngraphfont{G}^{\pm1}$ conjugated by a sequence of Move (II). For notational clarity, it is worth mentioning that $\ngraphfont{C}(\widetilde{\dynD}_n)$ and $\ngraphfont{C}(\widetilde{\dynD}_n)^{-1}$ are inverses of each other with respect to the concatenation introduced in Section~\ref{section:annular Ngraphs}.
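In other words, both concatenations are equivalent, after $N$-graph moves, to the trivial annular $N$-graph; writing $\simeq$ for this equivalence (a notation we use only here), we have
\[
\ngraphfont{C}(\widetilde{\dynD}_n)\,\ngraphfont{C}(\widetilde{\dynD}_n)^{-1} \;\simeq\; \ngraphfont{C}(\widetilde{\dynD}_n)^{-1}\,\ngraphfont{C}(\widetilde{\dynD}_n) \;\simeq\; \text{(trivial annular $N$-graph)},
\]
which is verified pictorially below in the case $n=4$.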
For example, one can present the Coxeter padding $\ngraphfont{C}(\widetilde{\dynD}_n)^{\pm1}$ as follows: \begin{align*} \ngraphfont{C}(\widetilde{\dynD}_n)&= \begin{tikzpicture}[baseline=-.5ex,yscale=1, scale=0.8] \begin{scope} \draw[thick] (0,1) -- (7,1) (0,-1) -- (7,-1); \draw[dashed] (0,1) -- (0,-1); \draw[thick, blue] (0,0) -- (0.75,0) -- (1,1) (0.75,0) -- (1,-1) (1.5,1) --(1.75,0) -- (1.5,-1) (1.75,0) to[out=0, in=90] (2.5,-0.5) -- (2,-1) (2.5,-0.5) --(3,-1) (2,1) -- (2.5,0.5) -- (3,1) (2.5,0.5) to[out=-90,in=180] (3.25,0) -- (3.5,1) (3.25,0) -- (3.5,-1) (4,1) -- (4.25,0) -- (4,-1) {[rounded corners](4.25,0) -- (5.5,0) -- (5.5, -1)} {[rounded corners](5.5, 1) -- (5.5,0) -- (6,0)} {[rounded corners](6.5, 0) -- (7,0) -- (7,-1)} ; \draw[thick, blue, dashed] (6,0) -- (6.5, 0) ; \draw[thick, green] (5, 1) -- (5, -1); \draw[thick, red] (0.5,1) -- (0.75,0) -- (0.5,-1) (0.75,0) -- (1.75,0) -- (2.5,0.5) -- (3.25,0) -- (2.5,-0.5) -- (1.75,0) (2.5,1) -- (2.5,0.5) (2.5,-1) -- (2.5,-0.5) (3.25,0) -- (4.25,0) -- (4.5,1) (4.25,0) -- (4.5,-1) ; \draw[thick, fill=white] (0.75,0) circle (2pt) (1.75,0) circle (2pt) (2.5,0.5) circle (2pt) (2.5,-0.5) circle (2pt) (3.25,0) circle (2pt) (4.25,0) circle (2pt) ; \end{scope} \begin{scope}[xshift=7cm] \draw[thick] (0,1) -- (7.5,1) (0,-1) -- (7.5,-1); \draw[dashed] (7.5,1) -- (7.5,-1); \draw[thick, blue] {[rounded corners] (0, 1) -- (0, 0) -- (0.75, 0)} (0.75,0) -- (1,1) (0.75,0) -- (1,-1) (1.5,1) --(1.75,0) -- (1.5,-1) (1.75,0) to[out=0, in=90] (2.5,-0.5) -- (2,-1) (2.5,-0.5) --(3,-1) (2,1) -- (2.5,0.5) -- (3,1) (2.5,0.5) to[out=-90,in=180] (3.25,0) -- (3.5,1) (3.25,0) -- (3.5,-1) (4,1) -- (4.25,0) -- (4,-1) {[rounded corners](4.25,0) -- (5.5,0) -- (5.5, -1)} {[rounded corners](5.5, 1) -- (5.5,0) -- (6,0)} {[rounded corners](6.5, 0) -- (7,0) -- (7,-1)} {[rounded corners](7, 1) -- (7,0) -- (7.5,0)} ; \draw[thick, blue, dashed] (6,0) -- (6.5, 0) ; \draw[thick, green] (5, 1) -- (5, -1); \draw[thick, red] (0.5,1) -- (0.75,0) -- (0.5,-1) (0.75,0) -- (1.75,0) -- (2.5,0.5) -- (3.25,0) -- (2.5,-0.5) -- (1.75,0) (2.5,1) -- (2.5,0.5) (2.5,-1) -- (2.5,-0.5) (3.25,0) -- (4.25,0) -- (4.5,1) (4.25,0) -- (4.5,-1) ; \draw[thick, fill=white] (0.75,0) circle (2pt) (1.75,0) circle (2pt) (2.5,0.5) circle (2pt) (2.5,-0.5) circle (2pt) (3.25,0) circle (2pt) (4.25,0) circle (2pt) ; \end{scope} \end{tikzpicture}\\ \ngraphfont{C}(\widetilde{\dynD}_4)^{-1}&= \begin{tikzpicture}[baseline=-.5ex, yscale=-1, scale=0.8] \begin{scope} \draw[thick] (0,1) -- (7,1) (0,-1) -- (7,-1); \draw[dashed] (0,1) -- (0,-1); \draw[thick, blue] (0,0) -- (0.75,0) -- (1,1) (0.75,0) -- (1,-1) (1.5,1) --(1.75,0) -- (1.5,-1) (1.75,0) to[out=0, in=90] (2.5,-0.5) -- (2,-1) (2.5,-0.5) --(3,-1) (2,1) -- (2.5,0.5) -- (3,1) (2.5,0.5) to[out=-90,in=180] (3.25,0) -- (3.5,1) (3.25,0) -- (3.5,-1) (4,1) -- (4.25,0) -- (4,-1) {[rounded corners](4.25,0) -- (5.5,0) -- (5.5, -1)} {[rounded corners](5.5, 1) -- (5.5,0) -- (6,0)} {[rounded corners](6.5, 0) -- (7,0) -- (7,-1)} ; \draw[thick, blue, dashed] (6,0) -- (6.5, 0) ; \draw[thick, green] (5, 1) -- (5, -1); \draw[thick, red] (0.5,1) -- (0.75,0) -- (0.5,-1) (0.75,0) -- (1.75,0) -- (2.5,0.5) -- (3.25,0) -- (2.5,-0.5) -- (1.75,0) (2.5,1) -- (2.5,0.5) (2.5,-1) -- (2.5,-0.5) (3.25,0) -- (4.25,0) -- (4.5,1) (4.25,0) -- (4.5,-1) ; \draw[thick, fill=white] (0.75,0) circle (2pt) (1.75,0) circle (2pt) (2.5,0.5) circle (2pt) (2.5,-0.5) circle (2pt) (3.25,0) circle (2pt) (4.25,0) circle (2pt) ; \end{scope} \begin{scope}[xshift=7cm] \draw[thick] (0,1) -- (7.5,1) (0,-1) -- (7.5,-1); 
\draw[dashed] (7.5,1) -- (7.5,-1); \draw[thick, blue] {[rounded corners] (0, 1) -- (0, 0) -- (0.75, 0)} (0.75,0) -- (1,1) (0.75,0) -- (1,-1) (1.5,1) --(1.75,0) -- (1.5,-1) (1.75,0) to[out=0, in=90] (2.5,-0.5) -- (2,-1) (2.5,-0.5) --(3,-1) (2,1) -- (2.5,0.5) -- (3,1) (2.5,0.5) to[out=-90,in=180] (3.25,0) -- (3.5,1) (3.25,0) -- (3.5,-1) (4,1) -- (4.25,0) -- (4,-1) {[rounded corners](4.25,0) -- (5.5,0) -- (5.5, -1)} {[rounded corners](5.5, 1) -- (5.5,0) -- (6,0)} {[rounded corners](6.5, 0) -- (7,0) -- (7,-1)} {[rounded corners](7, 1) -- (7,0) -- (7.5,0)} ; \draw[thick, blue, dashed] (6,0) -- (6.5, 0) ; \draw[thick, green] (5, 1) -- (5, -1); \draw[thick, red] (0.5,1) -- (0.75,0) -- (0.5,-1) (0.75,0) -- (1.75,0) -- (2.5,0.5) -- (3.25,0) -- (2.5,-0.5) -- (1.75,0) (2.5,1) -- (2.5,0.5) (2.5,-1) -- (2.5,-0.5) (3.25,0) -- (4.25,0) -- (4.5,1) (4.25,0) -- (4.5,-1) ; \draw[thick, fill=white] (0.75,0) circle (2pt) (1.75,0) circle (2pt) (2.5,0.5) circle (2pt) (2.5,-0.5) circle (2pt) (3.25,0) circle (2pt) (4.25,0) circle (2pt) ; \end{scope} \end{tikzpicture} \end{align*} Then it is direct to check that the concatenations $\ngraphfont{C}(\widetilde{\dynD}_4) \ngraphfont{C}(\widetilde{\dynD}_4)^{-1}$ and $\ngraphfont{C}(\widetilde{\dynD}_4)^{-1} \ngraphfont{C}(\widetilde{\dynD}_4)$ become trivial annulus $N$-graphs after a sequence of Move (I). The same holds for all $n\geq 4$. \subsubsection{Legendrian Coxeter mutations for degenerate $N$-graphs} For degenerate $N$-graphs $\tilde\ngraphfont{G}(a,b,b)$ and $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$, the Legendrian Coxeter mutations are as depicted in Figure~\ref{figure:Legendrian Coxeter mutations for degenerate Ngraphs}. \begin{figure}[ht] \subfigure[$\mu_\ngraphfont{G}(\tilde\ngraphfont{G}(a,b,b))$]{$ \begin{tikzcd}[ampersand replacement=\&, column sep=1pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.25,0); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.25,0) ++(45:0.5) -- ++(2,-2); \draw[Dble={green and blue},line width=2,dashed] (1.25,0) ++(45:0.5) ++(-45:0.5) -- ++(2,2); \end{tikzpicture} \arrow[r,"\mu_\ngraphfont{G}"]\& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (-1,3) -- (-1,1); \draw[Dble={green and blue},line width=2] (-1,0.5) -- (-1,0); \draw[Dble={green and blue},line width=2] (-1,-0.5) -- (-1,-1); \draw[Dble={green and blue},line width=2] (-1,0.5) -- (-1,1); \draw[Dble={green and blue},line width=2] (-1,-0.5) -- (-1,0); \draw[Dble={green and blue},line width=2] (-1,1) -- (0,1); \draw[Dble={green and blue},line width=2] (-1,0) -- 
(0,0); \draw[Dble={blue and green},line width=2] (0,-1) -- (1,-1); \draw[Dble={blue and green},line width=2] (0,0) -- (1,0); \draw[Dble={blue and green},line width=2] (-2,1) -- (-1,1); \draw[Dble={blue and green},line width=2] (-2,0) -- (-1,0); \draw[Dble={blue and green},line width=2] (-3,-1) -- (-1,-1); \draw[Dble={blue and green},line width=2] (-3,0.5) -- (-2.5,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (-2.5,0.5) -- (-2,0); \draw[Dble={blue and green},line width=2, line cap=round] (-2.5,0.5) -- (-2,1); \draw[Dble={green and blue},line width=2] (1,-3) -- (1,-1); \draw[Dble={blue and green},line width=2] (1,-1) -- (1,-0.5); \draw[Dble={blue and green},line width=2] (1,1) -- (1,0.5); \draw[Dble={blue and green},line width=2] (1,0) -- (1,0.5); \draw[Dble={blue and green},line width=2] (1,0) -- (1,-0.5); \draw[Dble={green and blue},line width=2] (0,-1) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0.5); \draw[Dble={green and blue},line width=2] (1,1) -- (3,1); \draw[Dble={green and blue},line width=2] (1,0) -- (1.5,0); \draw[Dble={green and blue},line width=2] (1,-1) -- (1.5,-1); \draw[Dble={green and blue},line width=2, line cap=round] (1.5,0) -- (2,-0.5); \draw[Dble={green and blue},line width=2, line cap=round] (1.5,-1) -- (2,-0.5); \draw[Dble={green and blue},line width=2] (2,-0.5) -- (3,0.5); \draw[Dble={green and blue},line width=2] (2.25,-0.25) -- (3,-1); \draw[Dble={green and blue},line width=2, dashed] (2.5,-0.5) -- (3,0); \draw[blue, line width=2] (0,1) to[out=30, in=150] (1,1); \begin{scope}[yshift=.5ex]\draw[green, line width=2] (0,1) to[out=30, in=150] (1,1);\end{scope} \draw[blue, line width=2] (0,1) to[out=-30, in=-150] (1,1); \begin{scope}[yshift=-.5ex]\draw[green, line width=2] (0,1) to[out=-30, in=-150] (1,1);\end{scope} \draw[blue, line width=2] (-1,-1) to[out=30, in=150] (0,-1); \begin{scope}[yshift=.5ex]\draw[green, line width=2] (-1,-1) to[out=30, in=150] (0,-1);\end{scope} \draw[blue, line width=2] (-1,-1) to[out=-30, in=-150] (0,-1); \begin{scope}[yshift=-.5ex]\draw[green, line width=2] (-1,-1) to[out=-30, in=-150] (0,-1);\end{scope} \draw[red, thick, rounded corners] (-1,1) -- (-3,3) (-1,1) -- (0,3) (-1,1) to[out=-120,in=120] (-1,0) (-1,1) to[out=-60,in=60] (-1,0) (-1,0) to[out=-120,in=120] (-1,-1) (-1,0) to[out=-60,in=60] (-1,-1) (-1,-1) -- (0,-1) (0,1) to[out=-120,in=120] (0,0) (0,1) to[out=-60,in=60] (0,0) (0,0) to[out=-120,in=120] (0,-1) (0,0) to[out=-60,in=60] (0,-1) (0,1) -- (1,1) (1,1) to[out=-120,in=120] (1,0) (1,1) to[out=-60,in=60] (1,0) (1,0) to[out=-120,in=120] (1,-1) (1,0) to[out=-60,in=60] (1,-1) (1,-1) -- (3,-3) (1,-1) -- (0, -3) (0,1) -- (0, 1.5) -- (0.5, 1.5) (1,1) -- (1, 1.5) -- (0.5, 1.5) (0.5, 1.5) -- (0.5, 3) (-1, -1) -- (-1, -1.5) (-1, -1.5) -- (-0.5, -1.5) (-0.5, -1.5) -- (-0.5, -3) (0, -1) -- (0, -1.5) -- (-0.5, -1.5) (-1, -1.5) -- (-1, -3) (-1, -2) -- (-3, -2) ; \draw[red, thick, dashed] (-1.5, -2) -- (-1.5, -3); \draw[red, thick, fill] (0.5, 1.5) circle (2pt) (-0.5, -1.5) circle (2pt) (-1, -1.5) circle (2pt) (-1, -2) circle (2pt) (-1.5, -2) circle (2pt) ; \end{tikzpicture} \end{tikzcd} $ } \subfigure[$\mu_\ngraphfont{G}(\tilde\ngraphfont{G}(\widetilde{\dynD}_4))$]{$ \begin{tikzcd}[ampersand replacement=\&, column sep=1pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[fill, 
red, thick] (3,-3) -- (-3,3) (0,0) -- (45:2) circle (2pt) (45:2) -- ++(0,3) (45:2) -- ++(3,0) (45:2) ++ (0.75,0) circle (2pt) -- ++(0,2) ; \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (2,0); \draw[Dble={green and blue},line width=2] (2,0) -- ++(-45:2); \draw[Dble={green and blue},line width=2] (2,0) -- ++(45:2); \end{scope} } \end{tikzpicture} \arrow[r,"\mu_\ngraphfont{G}"]\& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw (0,0) circle (3); \clip (0,0) circle (3); \draw[Dble={green and blue},line width=2] (-1,3) -- (-1,1); \draw[Dble={green and blue},line width=2] (-1,0.5) -- (-1,0); \draw[Dble={green and blue},line width=2] (-1,-0.5) -- (-1,-1); \draw[Dble={green and blue},line width=2] (-1,0.5) -- (-1,1); \draw[Dble={green and blue},line width=2] (-1,-0.5) -- (-1,0); \draw[Dble={green and blue},line width=2] (-1,1) -- (0,1); \draw[Dble={green and blue},line width=2] (-1,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,-1) -- (1,-1); \draw[Dble={blue and green},line width=2] (0,0) -- (1,0); \draw[Dble={blue and green},line width=2] (-2,1) -- (-1,1); \draw[Dble={blue and green},line width=2] (-2,0) -- (-1,0); \draw[Dble={blue and green},line width=2] (-3,-1) -- (-1,-1); \draw[Dble={blue and green},line width=2] (-3,0.5) -- (-2.5,0.5); \draw[Dble={blue and green},line width=2, line cap=round] (-2.5,0.5) -- (-2,0); \draw[Dble={blue and green},line width=2, line cap=round] (-2.5,0.5) -- (-2,1); \draw[Dble={green and blue},line width=2] (1,-3) -- (1,-1); \draw[Dble={blue and green},line width=2] (1,-1) -- (1,-0.5); \draw[Dble={blue and green},line width=2] (1,1) -- (1,0.5); \draw[Dble={blue and green},line width=2] (1,0) -- (1,0.5); \draw[Dble={blue and green},line width=2] (1,0) -- (1,-0.5); \draw[Dble={green and blue},line width=2] (0,-1) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,-0.5); \draw[Dble={green and blue},line width=2] (0,0) -- (0,0.5); \draw[Dble={green and blue},line width=2] (0,1) -- (0,0.5); \draw[Dble={green and blue},line width=2] (1,1) -- (3,1); \draw[Dble={green and blue},line width=2] (1,0) -- (2,0); \draw[Dble={green and blue},line width=2] (1,-1) -- (2,-1); \draw[Dble={green and blue},line width=2, line cap=round] (2,0) -- (2.5,-0.5); \draw[Dble={green and blue},line width=2, line cap=round] (2,-1) -- (2.5,-0.5); \draw[Dble={green and blue},line width=2] (2.5,-0.5) -- (3,-0.5); \draw[blue, line width=2] (0,1) to[out=30, in=150] (1,1); \begin{scope}[yshift=.5ex]\draw[green, line width=2] (0,1) to[out=30, in=150] (1,1);\end{scope} \draw[blue, line width=2] (0,1) to[out=-30, in=-150] (1,1); \begin{scope}[yshift=-.5ex]\draw[green, line width=2] (0,1) to[out=-30, in=-150] (1,1);\end{scope} \draw[blue, line width=2] (-1,-1) to[out=30, in=150] (0,-1); \begin{scope}[yshift=.5ex]\draw[green, line width=2] (-1,-1) to[out=30, in=150] (0,-1);\end{scope} \draw[blue, line width=2] (-1,-1) to[out=-30, in=-150] (0,-1); \begin{scope}[yshift=-.5ex]\draw[green, line width=2] (-1,-1) to[out=-30, in=-150] (0,-1);\end{scope} \draw[red, thick, rounded corners] (-1,1) -- (-3,3) (-1,1) -- (0,3) (-1,1) to[out=-120,in=120] (-1,0) (-1,1) to[out=-60,in=60] (-1,0) (-1,0) to[out=-120,in=120] (-1,-1) (-1,0) to[out=-60,in=60] (-1,-1) (-1,-1) -- (0,-1) (0,1) to[out=-120,in=120] (0,0) (0,1) to[out=-60,in=60] (0,0) (0,0) to[out=-120,in=120] (0,-1) (0,0) to[out=-60,in=60] (0,-1) (0,1) -- (1,1) (1,1) to[out=-120,in=120] (1,0) (1,1) to[out=-60,in=60] (1,0) (1,0) to[out=-120,in=120] (1,-1) (1,0) 
to[out=-60,in=60] (1,-1) (1,-1) -- (3,-3) (1,-1) -- (0, -3) (0,1) -- (0, 1.5) -- (0.5, 1.5) (1,1) -- (1, 3) (1, 1.5) -- (0.5, 1.5) (0.5, 1.5) -- (0.5, 3) (-1, -1) -- (-1, -1.5) (-1, -1.5) -- (-0.5, -1.5) (-0.5, -1.5) -- (-0.5, -3) (0, -1) -- (0, -1.5) -- (-0.5, -1.5) (-1, -1.5) -- (-1, -3) ; \draw[red, thick, fill] (0.5, 1.5) circle (2pt) (-0.5, -1.5) circle (2pt) (-1, -1.5) circle (2pt) (1, 1.5) circle (2pt) ; \end{tikzpicture} \end{tikzcd} $} \caption{Legendrian Coxeter mutations for degenerate $N$-graphs} \label{figure:Legendrian Coxeter mutations for degenerate Ngraphs} \end{figure} Then by using \Move{DI} and \Move{DII} several times, one can show easily that the Legendrian Coxeter mutations are equivalent to $N$-graphs depicted below: \[ \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (6); \draw (0,0) circle (3); \begin{scope} \clip (0,0) circle (3); \draw[fill, red, thick] (3,-3) -- (0,0) (0,0) -- (-3,3) (0,0) -- (45:2.5) circle (2pt) (45:2.5) -- ++(0,3) (45:2.5) -- ++(3,0) (0,0) -- (-135:1.5) circle (2pt) (-135:1.5) -- ++(0,-3) (-135:1.5) -- ++(-3,0) (-135:1.5) ++ (-.5,0) circle (2pt) -- ++(0,-2) (-135:1.5) ++ (-.5,-.5) circle (2pt) -- ++(-2,0) (-135:1.5) ++ (-1,-.5) circle (2pt); \draw[red, thick, dashed] (-135:1.5) ++ (-1,-.5) -- ++(0,-2); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,-1); \draw[Dble={green and blue},line width=2] (-2.5,0) -- ++(-1,1); \draw[Dble={blue and green},line width=2] (-2.5,0) -- (0,0); \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={blue and green},line width=2] (0,0) -- (0,-3); \draw[Dble={green and blue},line width=2] (0,0) -- (1.25,0); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,-2); \draw[Dble={green and blue},line width=2] (1.25,0) -- ++(2,2); \draw[Dble={green and blue},line width=2] (1.25,0) ++(45:0.5) -- ++(2,-2); \draw[Dble={green and blue},line width=2,dashed] (1.25,0) ++(45:0.5) ++(-45:0.5) -- ++(2,2); \end{scope} \clip (0,0) circle (6); \draw[Dble={blue and green},line width=2] (186:6) -- (205:4.5); \draw[Dble={blue and green},line width=2] (174:6) -- (185:4.5); \draw[Dble={green and blue},line width=2] (90:6) -- (135:5.5); \draw[Dble={green and blue},line width=2] (135:5.5) -- (185:4.5); \draw[red, thick, rounded corners] (135:6) -- (160:5.5) -- (185:4.5) (60:6) -- (120:5) -- (185:4.5) ; \draw[Dble={green and blue},line width=2] (185:4.5) -- (190:3); \begin{scope}[rotate=5] \draw[blue, line width=2] (180:4.4) to[out=-60,in=100] (190:4.2); \draw[green, line width=2] (190:4.2) to[out=-80,in=80] (200:4.4); \draw[green, line width=2] (180:4.5) to[out=-60,in=100] (190:4.3); \draw[blue, line width=2] (190:4.3) to[out=-80,in=80] (200:4.5); \draw[red, thick] (180:4.5) arc (180:200:4.5); \begin{scope}[rotate=20] \draw[blue, line width=2] (180:4.5) to[out=-60,in=80] (200:4.5); \draw[green, line width=2] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick] (180:4.5) arc (180:200:4.5); \end{scope} \begin{scope}[rotate=40] \draw[blue, line width=2, dashed] (180:4.5) to[out=-60,in=80] (200:4.5); \draw[green, line width=2, dashed] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2, dashed] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2, dashed] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick, dashed] (180:4.5) arc (180:200:4.5); \end{scope} \begin{scope}[rotate=60] \draw[blue, line width=2] (180:4.5) 
to[out=-60,in=80] (200:4.5); \draw[green, line width=2] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick] (180:4.5) arc (180:200:4.5); \end{scope} \end{scope} \draw[Dble={green and blue},line width=2] (265:4.5) -- (-90:3); \draw[red, thick, rounded corners] (265:4.5) -- (-45:3); \draw[blue, line width=2, rounded corners] (265:4.5) -- (315:3.5) -- (329:3); \draw[green, line width=2, rounded corners] (265:4.6) -- (316:3.6) -- (334:3); \draw[red, thick] (185:4.5) -- (201:3) (200:6) -- (205:4.5) -- (211:3) (228:6) -- (245:4.5) -- (238:3) (242:6) -- (265:4.5) -- (249:3) ; \draw[red, thick, dashed] (214:6) -- (225:4.5) -- (226:3) ; \draw[Dble={blue and green},line width=2] (-30:4.5) -- (-90:6); \draw[Dble={green and blue},line width=2] (-30:4.5) -- (-30:6); \draw[Dble={blue and green},line width=2, line cap=rect] (-18:3) -- (-30:4.5); \draw[red, thick, rounded corners] (-30:4.5) -- (-104:6) (-30:4.5) -- (-45:6) ; \begin{scope}[rotate=-20] \draw[blue, line width=2] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2] (5:4.45) arc (5:20:4.45); \draw[green, line width=2] (-10:4.55) arc (-10:5:4.55); \draw[blue, line width=2] (5:4.55) arc (5:20:4.55); \draw[red, thick] (-10:4.5) to[out=50,in=-40] (20:4.5); \draw[red, thick] (-10:4.5) to[out=120,in=-110] (20:4.5); \end{scope} \begin{scope}[rotate=10] \draw[blue, line width=2, dashed] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2, dashed] (5:4.45) arc (5:20:4.45); \draw[green, line width=2, dashed] (-10:4.55) arc (-10:5:4.55); \draw[blue, line width=2, dashed] (5:4.55) arc (5:20:4.55); \draw[red, thick, dashed] (-10:4.5) to[out=50,in=-40] (20:4.5); \draw[red, thick, dashed] (-10:4.5) to[out=120,in=-110] (20:4.5); \end{scope} \begin{scope}[rotate=40] \draw[blue, line width=2] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2] (5:4.45) arc (5:20:4.45); \draw[green, line width=2] (-10:4.55) arc (-10:5:4.55); \draw[blue, line width=2] (5:4.55) arc (5:20:4.55); \draw[red, thick] (-10:4.5) to[out=50,in=-40] (20:4.5); \end{scope} \draw[Dble={blue and green},line width=2, dashed] (18:3) -- (-0:4.5); \draw[Dble={green and blue},line width=2, dashed] (-0:4.5) -- (-10:6); \draw[Dble={blue and green},line width=2] (28:3) -- (30:4.5); \draw[Dble={green and blue},line width=2] (30:4.5) -- (10:6); \draw[Dble={green and blue},line width=2] (60:4.5) -- (30:6); \draw[Dble={blue and green},line width=2] (90:3) -- (60:4.5); \draw[red, thick] (36:3) -- (30:4.5) (54:3) -- (60:4.5) (60:4.5) -- (40:6) ; \draw[red, thick, rounded corners] (135:3) -- (120:3.5) -- (60:4.5); \draw[Dble={green and blue},line width=2] (170:3) -- (120:4.5); \draw[Dble={green and blue},line width=2] (120:4.5) -- (60:4.5); \end{tikzpicture} \qquad \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (6); \draw (0,0) circle (3); \begin{scope} \draw (0,0) circle (3); \clip (0,0) circle (3); \foreach \r in {0, 180} { \begin{scope}[rotate=\r] \draw[fill, red, thick] (3,-3) -- (-3,3) (0,0) -- (45:2) circle (2pt) (45:2) -- ++(0,3) (45:2) -- ++(3,0) (45:2) ++ (0.75,0) circle (2pt) -- ++(0,2) ; \draw[Dble={blue and green},line width=2] (0,0) -- (0,3); \draw[Dble={green and blue},line width=2] (0,0) -- (2,0); \draw[Dble={green and blue},line width=2] (2,0) -- ++(-45:2); \draw[Dble={green and blue},line width=2] (2,0) -- ++(45:2); \end{scope} } \end{scope} \clip (0,0) circle (6); \foreach \r in {0,180} { \begin{scope}[rotate=\r] \draw[red, 
thick, rounded corners] (135:6) -- (180:4.5) (-45:3) -- (-75:3.5) -- (-120:4.5) (180:4.5) -- (120:4.5) -- (75:6) ; \draw[Dble={blue and green},line width=2] (175:6) -- (180:4.5); \draw[Dble={blue and green},line width=2] (185:6) -- (210:4.5); \draw[Dble={green and blue},line width=2] (180:4.5) -- (195:3); \draw[Dble={blue and green},line width=2] (180:4.5) -- (135:5); \draw[Dble={blue and green},line width=2] (135:5) -- (90:6); \draw[Dble={green and blue},line width=2] (-15:3) -- (-45:3.5); \draw[Dble={green and blue},line width=2] (-45:3.5) -- (-75:4); \draw[Dble={green and blue},line width=2] (-75:4) -- (-120:4.5); \begin{scope}[rotate=-15] \draw[blue, line width=2] (195:4.4) to[out=-45,in=120] (210:4.1); \draw[green, line width=2] (210:4.1) to[out=-60,in=105] (225:4.4); \draw[green, line width=2] (195:4.5) to[out=-45,in=120] (210:4.2); \draw[blue, line width=2] (210:4.2) to[out=-60,in=105] (225:4.5); \draw[red, thick] (195:4.5) arc (195:225:4.5); \begin{scope}[rotate=30] \draw[green, line width=2] (195:4.4) to[out=-45,in=105] (225:4.4); \draw[blue, line width=2] (195:4.5) to[out=-45,in=105] (225:4.5); \draw[green, line width=2] (195:4.6) to[out=-105,in=165] (225:4.6); \draw[blue, line width=2] (195:4.5) to[out=-105,in=165] (225:4.5); \draw[red, thick] (195:4.5) arc (195:225:4.5); \end{scope} \end{scope} \draw[Dble={green and blue},line width=2] (240:4.5) -- (-90:3); \draw[red, thick] (180:4.5) -- (208:3) (195:6) -- (210:4.5) -- (224:3) (225:6) -- (240:4.5) -- (242:3) ; \end{scope} } \end{tikzpicture} \] Therefore one can conclude that the effect of the Legendrian Coxeter mutation on each degenerate $N$-graph $\tilde\ngraphfont{G}(a,b,b)$ or $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ is equivalent to attaching an annular $N$-graph which defines the Coxeter padding $\tilde\ngraphfont{C}(a,b,b)$ or $\tilde\ngraphfont{C}(\widetilde{\dynD}_4)$. \begin{proposition}\label{proposition:coxeter realization denegerate type} Let $(\tilde\ngraphfont{G}, \ngraphfont{B})$ be either $(\tilde\ngraphfont{G}(a,b,b), \ngraphfont{B}(a,b,b))$ or $(\tilde\ngraphfont{G}(\widetilde{\dynD}_4), \ngraphfont{B}(\widetilde{\dynD}_4))$. Then for each $r\in\Z$, the Legendrian Coxeter mutation $\mu_\ngraphfont{G}^r$ on the pair $(\tilde\ngraphfont{G}, \ngraphfont{B})$ is given as \[ \mu_\ngraphfont{G}^{r}(\tilde\ngraphfont{G},\ngraphfont{B}) = \begin{cases} \tilde\ngraphfont{C}\tilde\ngraphfont{C}\cdots\tilde\ngraphfont{C}(\tilde\ngraphfont{G},\ngraphfont{B})& r\ge 0;\\ \tilde\ngraphfont{C}^{-1}\tilde\ngraphfont{C}^{-1}\cdots\tilde\ngraphfont{C}^{-1}(\tilde\ngraphfont{G},\ngraphfont{B})& r< 0.
\end{cases} \] where $\tilde\ngraphfont{C}$ is either $\tilde\ngraphfont{C}(a,b,b)$ or $\tilde\ngraphfont{C}(\widetilde{\dynD}_4)$, which are the degenerate annular $N$-graphs defined as follows: \begin{align*} \tilde\ngraphfont{C}(a,b,b)&= \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (6); \clip (0,0) circle (6); \draw[Dble={blue and green},line width=2] (186:6) -- (205:4.5); \draw[Dble={blue and green},line width=2] (174:6) -- (185:4.5); \draw[Dble={green and blue},line width=2] (90:6) -- (135:5.5); \draw[Dble={green and blue},line width=2] (135:5.5) -- (185:4.5); \draw[red, thick, rounded corners] (135:6) -- (160:5.5) -- (185:4.5) (60:6) -- (120:5) -- (185:4.5) ; \draw[Dble={green and blue},line width=2] (185:4.5) -- (190:3); \begin{scope}[rotate=5] \draw[blue, line width=2] (180:4.4) to[out=-60,in=100] (190:4.2); \draw[green, line width=2] (190:4.2) to[out=-80,in=80] (200:4.4); \draw[green, line width=2] (180:4.5) to[out=-60,in=100] (190:4.3); \draw[blue, line width=2] (190:4.3) to[out=-80,in=80] (200:4.5); \draw[red, thick] (180:4.5) arc (180:200:4.5); \begin{scope}[rotate=20] \draw[blue, line width=2] (180:4.5) to[out=-60,in=80] (200:4.5); \draw[green, line width=2] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick] (180:4.5) arc (180:200:4.5); \end{scope} \begin{scope}[rotate=40] \draw[blue, line width=2, dashed] (180:4.5) to[out=-60,in=80] (200:4.5); \draw[green, line width=2, dashed] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2, dashed] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2, dashed] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick, dashed] (180:4.5) arc (180:200:4.5); \end{scope} \begin{scope}[rotate=60] \draw[blue, line width=2] (180:4.5) to[out=-60,in=80] (200:4.5); \draw[green, line width=2] (180:4.4) to[out=-60,in=80] (200:4.4); \draw[blue, line width=2] (180:4.5) to[out=-120,in=140] (200:4.5); \draw[green, line width=2] (180:4.6) to[out=-120,in=140] (200:4.6); \draw[red, thick] (180:4.5) arc (180:200:4.5); \end{scope} \end{scope} \draw[Dble={green and blue},line width=2] (265:4.5) -- (-90:3); \draw[red, thick, rounded corners] (265:4.5) -- (-45:3); \draw[blue, line width=2, rounded corners] (265:4.5) -- (315:3.5) -- (329:3); \draw[green, line width=2, rounded corners] (265:4.6) -- (316:3.6) -- (334:3); \draw[red, thick] (185:4.5) -- (201:3) (200:6) -- (205:4.5) -- (211:3) (228:6) -- (245:4.5) -- (238:3) (242:6) -- (265:4.5) -- (249:3) ; \draw[red, thick, dashed] (214:6) -- (225:4.5) -- (226:3) ; \draw[Dble={blue and green},line width=2] (-30:4.5) -- (-90:6); \draw[Dble={green and blue},line width=2] (-30:4.5) -- (-30:6); \draw[Dble={blue and green},line width=2, line cap=rect] (-18:3) -- (-30:4.5); \draw[red, thick, rounded corners] (-30:4.5) -- (-104:6) (-30:4.5) -- (-45:6) ; \begin{scope}[rotate=-20] \draw[blue, line width=2] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2] (5:4.45) arc (5:20:4.45); \draw[green, line width=2] (-10:4.55) arc (-10:5:4.55); \draw[blue, line width=2] (5:4.55) arc (5:20:4.55); \draw[red, thick] (-10:4.5) to[out=50,in=-40] (20:4.5); \draw[red, thick] (-10:4.5) to[out=120,in=-110] (20:4.5); \end{scope} \begin{scope}[rotate=10] \draw[blue, line width=2, dashed] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2, dashed] (5:4.45) arc (5:20:4.45); \draw[green, line width=2, dashed] (-10:4.55) arc (-10:5:4.55); \draw[blue, line
width=2, dashed] (5:4.55) arc (5:20:4.55); \draw[red, thick, dashed] (-10:4.5) to[out=50,in=-40] (20:4.5); \draw[red, thick, dashed] (-10:4.5) to[out=120,in=-110] (20:4.5); \end{scope} \begin{scope}[rotate=40] \draw[blue, line width=2] (-10:4.45) arc (-10:5:4.45); \draw[green, line width=2] (5:4.45) arc (5:20:4.45); \draw[green, line width=2] (-10:4.55) arc (-10:5:4.55); \draw[blue, line width=2] (5:4.55) arc (5:20:4.55); \draw[red, thick] (-10:4.5) to[out=50,in=-40] (20:4.5); \end{scope} \draw[Dble={blue and green},line width=2, dashed] (18:3) -- (-0:4.5); \draw[Dble={green and blue},line width=2, dashed] (-0:4.5) -- (-10:6); \draw[Dble={blue and green},line width=2] (28:3) -- (30:4.5); \draw[Dble={green and blue},line width=2] (30:4.5) -- (10:6); \draw[Dble={green and blue},line width=2] (60:4.5) -- (30:6); \draw[Dble={blue and green},line width=2] (90:3) -- (60:4.5); \draw[red, thick] (36:3) -- (30:4.5) (54:3) -- (60:4.5) (60:4.5) -- (40:6) ; \draw[red, thick, rounded corners] (135:3) -- (120:3.5) -- (60:4.5); \draw[Dble={green and blue},line width=2] (170:3) -- (120:4.5); \draw[Dble={green and blue},line width=2] (120:4.5) -- (60:4.5); \draw[fill=white] (0,0) circle (3); \end{tikzpicture} & \tilde\ngraphfont{C}(\widetilde{\dynD}_4)&= \begin{tikzpicture}[baseline=-.5ex, scale=0.4] \draw (0,0) circle (6); \draw (0,0) circle (3); \clip (0,0) circle (6); \foreach \r in {0,180} { \begin{scope}[rotate=\r] \draw[red, thick, rounded corners] (135:6) -- (180:4.5) (-45:3) -- (-75:3.5) -- (-120:4.5) (180:4.5) -- (120:4.5) -- (75:6) ; \draw[Dble={blue and green},line width=2] (175:6) -- (180:4.5); \draw[Dble={blue and green},line width=2] (185:6) -- (210:4.5); \draw[Dble={green and blue},line width=2] (180:4.5) -- (195:3); \draw[Dble={blue and green},line width=2] (180:4.5) -- (135:5); \draw[Dble={blue and green},line width=2] (135:5) -- (90:6); \draw[Dble={green and blue},line width=2] (-15:3) -- (-45:3.5); \draw[Dble={green and blue},line width=2] (-45:3.5) -- (-75:4); \draw[Dble={green and blue},line width=2] (-75:4) -- (-120:4.5); \begin{scope}[rotate=-15] \draw[blue, line width=2] (195:4.4) to[out=-45,in=120] (210:4.1); \draw[green, line width=2] (210:4.1) to[out=-60,in=105] (225:4.4); \draw[green, line width=2] (195:4.5) to[out=-45,in=120] (210:4.2); \draw[blue, line width=2] (210:4.2) to[out=-60,in=105] (225:4.5); \draw[red, thick] (195:4.5) arc (195:225:4.5); \begin{scope}[rotate=30] \draw[green, line width=2] (195:4.4) to[out=-45,in=105] (225:4.4); \draw[blue, line width=2] (195:4.5) to[out=-45,in=105] (225:4.5); \draw[green, line width=2] (195:4.6) to[out=-105,in=165] (225:4.6); \draw[blue, line width=2] (195:4.5) to[out=-105,in=165] (225:4.5); \draw[red, thick] (195:4.5) arc (195:225:4.5); \end{scope} \end{scope} \draw[Dble={green and blue},line width=2] (240:4.5) -- (-90:3); \draw[red, thick] (180:4.5) -- (208:3) (195:6) -- (210:4.5) -- (224:3) (225:6) -- (240:4.5) -- (242:3) ; \end{scope} } \draw[fill=white] (0,0) circle (3); \end{tikzpicture} \end{align*} \end{proposition} \subsection{Legendrian loops}\label{sec:legendrian loop} Recall the Legendrian loops defined in Definition~\ref{definition:Legendrian loops}. The goal of this section is to interpret the Legendrian Coxeter paddings in terms of tame Legendrian loops. Obviously, the Legendrian Coxeter padding for $\dynkinfont{A}_n$ depicted in \eqref{equation:Coxeter padding of type An} is tame.
Moreover, it corresponds to the tame $\partial$-Legendrian isotopy which moves the very first generator $\sigma_1$ to the rightmost position along the closure part of $\lambda(\dynkinfont{A}_n)$ as follows: \[ \ngraphfont{C}(\dynkinfont{A}_n)= \begin{tikzpicture}[baseline=-.5ex,scale=0.4] \draw[thick] (0,0) circle (5) (0,0) circle (3); \foreach \i in {45, 90, ..., 360} { \draw[blue, thick] (\i:5) to[out=\i-180,in=\i+45] (\i+45:3); } \end{tikzpicture} \longleftrightarrow \begin{tikzpicture}[baseline=-.5ex, scale=1.5] \draw[thick] (-3, -0.75) to[out=0,in=180] (-2.5, -0.25) -- (-2.25, -0.25); \draw[white, line width=5] (-3, -0.25) to[out=0,in=180] (-2.5, -0.75); \draw[thick] (-3, -0.25) to[out=0,in=180] (-2.5, -0.75) -- (-2.25, -0.75); \draw[thick] (-2.25, -0.125) rectangle (-1.25, -0.875) (-1.75, -0.5) node {$n+1$}; \draw[thick] (-1.25, -0.25) -- (-1, -0.25); \draw[thick] (-1.25, -0.75) -- (-1, -0.75); \draw[thick] (-1, -0.25) to[out=0,in=180] (0,0.75) arc (90:-90:0.75); \draw[thick] (-1, -0.75) to[out=0,in=180] (0,0.25) arc (90:-90:0.25); \draw[white, line width=5] (-1,0.25) to[out=0,in=180] (0, -0.75); \draw[white, line width=5] (-1,0.75) to[out=0,in=180] (0, -0.25); \draw[thick] (-3, -0.25) arc (-90:-270:0.25) -- (-1,0.25) to[out=0,in=180] (0, -0.75); \draw[thick] (-3, -0.75) arc (-90:-270:0.75) -- (-1,0.75) to[out=0,in=180] (0, -0.25); \draw[thick, violet, dashed] (-2.75, -0.5) circle (0.25); \draw[thick, violet, dashed, ->] (-3,-0.5) arc (-90:-270:0.5) -- (-1, 0.5) to[out=0,in=180] (0, -0.5) arc (-90:90:0.5) to[out=180,in=0] (-1, -0.5); \end{tikzpicture} =\lambda(\dynkinfont{A}_n) \] \begin{lemma} Legendrian Coxeter paddings of type $(a,b,c)$ and $\widetilde{\dynD}$ are tame. \end{lemma} \begin{proof} We provide decompositions of the Coxeter paddings $\ngraphfont{C}(a,b,c)$ and $\ngraphfont{C}(\widetilde{\dynD}_4)$ into sequences of elementary annular $N$-graphs in Figures~\ref{fig:coxeter padding affine E is tame} and~\ref{fig:coxeter padding affine D4 is tame}, respectively. We omit other cases. 
\end{proof} \begin{figure}[ht] \subfigure[$\ngraphfont{C}(a,b,c)$\label{fig:coxeter padding affine E is tame}]{$ \ngraphfont{C}(a,b,c)=\begin{tikzpicture}[baseline=-.5ex, xscale=-1, scale=0.8] \foreach \x in {0, 3, 6} { \begin{scope}[xshift=\x cm] \draw[thick] (0,0)--(3.5,0) (0,1)--(3.5,1) (0,2)--(3.5,2) (0,3)--(3.5,3) (0,4)--(3.5,4) ; \draw[thick,red] (0.5,0) -- (0.5,3) to[out=90,in=-150] (1,3.5) --(1,4) (1,0) -- (1,2) to[out=90,in=-150] (1.5,2.5) to (1.5,3) to[out=90,in=-30] (1,3.5) (2,0) to[out=90,in=-150] (2.5,0.5) to (2.5,1) to[out=90,in=-30] (2,1.5) (3,0) to[out=90,in=-30] (2.5,0.5) ; \draw[thick,red,dashed] (1.5,0) to (1.5,1) to[out=90,in=-150] (2,1.5) to (2,2) to[out=90,in=-30] (1.5,2.5) ; \begin{scope}[rotate around={180:(1.75,2)}] \draw[thick,blue] (0.5,0) -- (0.5,3) to[out=90,in=-150] (1,3.5) --(1,4) (1,0) -- (1,2) to[out=90,in=-150] (1.5,2.5) to (1.5,3) to[out=90,in=-30] (1,3.5) (2,0) to[out=90,in=-150] (2.5,0.5) to (2.5,1) to[out=90,in=-30] (2,1.5) (3,0) to[out=90,in=-30] (2.5,0.5) ; \draw[thick,blue,dashed] (1.5,0) to (1.5,1) to[out=90,in=-150] (2,1.5) to (2,2) to[out=90,in=-30] (1.5,2.5) ; \end{scope} \draw[thick, fill=white] (1,3.5) circle (2pt) (1.5,2.5) circle (2pt) (2,1.5) circle (2pt) (2.5,0.5) circle (2pt) ; \end{scope} } \draw[dashed] (0,-3)--(0,4) (9.5,-3)--(9.5,4); \draw[thick] (0,-1) -- (9.5,-1) (0,-2) -- (9.5,-2) (0,-3) -- (9.5,-3); \begin{scope}[xscale=-1, xshift=-9.5cm] \draw[thick, red] (0.5, 0) to[out=-90, in=0] (0,-0.5) (1.5, 0) to[out=-90, in=90] (1,-1) (2.5, 0) to[out=-90, in=90] (2,-1) (3, 0) to[out=-90, in=90] (2.5,-1) (3.5, 0) to[out=-90, in=90] (3,-1) (4.5, 0) to[out=-90, in=90] (4,-1) (5.5, 0) to[out=-90, in=90] (5,-1) (6, 0) to[out=-90, in=90] (5.5,-1) (6.5, 0) to[out=-90, in=90] (6,-1) (7.5, 0) to[out=-90, in=90] (7,-1) (8.5, 0) to[out=-90, in=90] (8,-1) (9, 0) to[out=-90, in=90] (8.5,-1) (9.5, -0.5) to[out=180,in=90] (9,-1); \draw[thick, red, dashed] (2, 0) to[out=-90,in=90] (1.5, -1) (5, 0) to[out=-90,in=90] (4.5, -1) (8, 0) to[out=-90,in=90] (7.5, -1); \draw[thick, blue] (1,0) to[out=-90,in=90] (0.5,-1) (4,0) to[out=-90,in=90] (3.5,-1) (7,0) to[out=-90,in=90] (6.5,-1); \begin{scope}[yshift=-1cm] \draw[thick, blue] (0.5, 0) to[out=-90, in=0] (0,-0.5) (3.5, 0) to[out=-90, in=90] (3,-1) (6.5, 0) to[out=-90, in=90] (6,-1) (9.5, -0.5) to[out=180,in=90] (9,-1); \draw[thick, red] (1,0) to[out=-90,in=90] (0.5,-1) (2, 0) to[out=-90,in=90] (1.5, -1) (2.5, 0) to[out=-90, in=90] (2,-1) (3, 0) to[out=-90, in=90] (2.5,-1) (4,0) to[out=-90,in=90] (3.5,-1) (5, 0) to[out=-90,in=90] (4.5, -1) (5.5, 0) to[out=-90, in=90] (5,-1) (6, 0) to[out=-90, in=90] (5.5,-1) (7,0) to[out=-90,in=90] (6.5,-1) (8, 0) to[out=-90,in=90] (7.5, -1) (8.5, 0) to[out=-90, in=90] (8,-1) (9, 0) to[out=-90, in=90] (8.5,-1); \draw[thick, red, dashed] (1.5, 0) to[out=-90, in=90] (1,-1) (4.5, 0) to[out=-90, in=90] (4,-1) (7.5, 0) to[out=-90, in=90] (7,-1); \end{scope} \begin{scope}[yshift=-2cm] \draw[thick, red] (0.5, 0) to[out=-90, in=0] (0,-0.5) (1.5, 0) to[out=-90, in=90] (1,-1) (2, 0) to[out=-90,in=90] (1.5, -1) (2.5, 0) to[out=-90, in=90] (2,-1) (3.5, 0) to[out=-90, in=90] (3,-1) (4.5, 0) to[out=-90, in=90] (4,-1) (5, 0) to[out=-90,in=90] (4.5, -1) (5.5, 0) to[out=-90, in=90] (5,-1) (6.5, 0) to[out=-90, in=90] (6,-1) (7.5, 0) to[out=-90, in=90] (7,-1) (8, 0) to[out=-90,in=90] (7.5, -1) (8.5, 0) to[out=-90, in=90] (8,-1) (9.5, -0.5) to[out=180,in=90] (9,-1); \draw[thick, red, dashed] (1,0) to[out=-90,in=90] (0.5,-1) (4,0) to[out=-90,in=90] (3.5,-1) (7,0) to[out=-90,in=90] (6.5,-1); 
\draw[thick, blue] (3, 0) to[out=-90, in=90] (2.5,-1) (9, 0) to[out=-90, in=90] (8.5,-1) (6, 0) to[out=-90, in=90] (5.5,-1); \end{scope} \end{scope} \draw [decorate,decoration={brace,amplitude=10pt}] (3.2,4.1) -- (0.3,4.1) node [black,midway,yshift=0.5cm] {$c+2$} ; \draw [decorate,decoration={brace,amplitude=10pt}] (6.2,4.1) -- (3.3,4.1) node [black,midway,yshift=0.5cm] {$b+2$} ; \draw [decorate,decoration={brace,amplitude=10pt}] (9.2,4.1) -- (6.3,4.1) node [black,midway,yshift=0.5cm] {$a+2$} ; \end{tikzpicture} $} \subfigure[$\ngraphfont{C}({\widetilde{\dynD}}_n)$\label{fig:coxeter padding affine D4 is tame} ]{$ \begin{aligned} \phantom{^{-1}}\ngraphfont{C}(\widetilde{\dynD}_n)&= \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \begin{scope}[yshift=-6.5cm] \foreach \i in {9,8,7,6,5,4} { \draw[thick] (0,\i)-- ++(14.5,0); } \draw[dashed] (0,9) -- (0,4) (14.5,9) -- (14.5, 4) ; \foreach \i in {0, 7} { \begin{scope}[xshift=\i cm] \draw[violet, line width=5, opacity=0.5] (2.5, 9) -- ++(0,-1) to[out=-90,in=90] ++(-1,-1) to[out=-90,in=90] ++(-1,-1) -- ++(0,-1) (5.5, 9) to[out=-90, in=90] ++(-0.5,-1) to[out=-90,in=90] ++(-1,-1) to[out=-90,in=90] ++(-1,-1) -- ++(0,-1) to[out=-90,in=90] ++(-0.5, -1) ; \draw[blue, thick] (2, 9) to[out=-90, in=150] (2.5,8.5) (3, 9) to[out=-90, in=30] (2.5,8.5) (2.5, 8.5) -- (2.5, 8); \draw[red, thick] (2, 8) to[out=90, in=-150] (2.5,8.5) (3, 8) to[out=90, in=-30] (2.5,8.5) (2.5, 8.5) -- (2.5, 9); \draw[thick, fill=white] (2.5, 8.5) circle (2pt); \draw[thick, blue] (5.5, 9) to[out=-90, in=90] (5, 8); \foreach \x in {0.5, 4.5} { \draw[thick, red] (\x, 9) -- ++(0, -1) ; } \foreach \x in {1, 1.5, 3.5, 4, 6, 7} { \draw[thick, blue] (\x,9) -- ++(0, -1) ; } \draw[thick, green] (5, 9) to[out=-90, in=90] (5.5, 8); \draw[blue, thick] (1.5, 8) to[out=-90, in=150] (2,7.5) (2.5, 8) to[out=-90, in=30] (2,7.5) (2, 7.5) -- (2, 7); \draw[red, thick] (1.5, 7) to[out=90, in=-150] (2,7.5) (2.5, 7) to[out=90, in=-30] (2,7.5) (2, 7.5) -- (2, 8); \draw[thick, fill=white] (2, 7.5) circle (2pt); \draw[blue, thick] (4, 8) to[out=-90, in=150] (4.5,7.5) (5, 8) to[out=-90, in=30] (4.5,7.5) (4.5, 7.5) -- (4.5, 7); \draw[red, thick] (4, 7) to[out=90, in=-150] (4.5,7.5) (5, 7) to[out=90, in=-30] (4.5,7.5) (4.5, 7.5) -- (4.5, 8); \draw[thick, fill=white] (4.5, 7.5) circle (2pt); \foreach \x in {0.5, 3} { \draw[thick, red] (\x, 8) -- ++(0, -1) ; } \foreach \x in {1, 3.5, 6, 7} { \draw[thick, blue] (\x,8) -- ++(0, -1) ; } \draw[thick, green] (5.5, 8) -- ++(0, -1); \draw[red, thick] (0.5, 7) to[out=-90, in=150] (1,6.5) (1.5, 7) to[out=-90, in=30] (1,6.5) (1, 6.5) -- (1, 6); \draw[blue, thick] (0.5, 6) to[out=90, in=-150] (1,6.5) (1.5, 6) to[out=90, in=-30] (1,6.5) (1, 6.5) -- (1, 7); \draw[thick, fill=white] (1, 6.5) circle (2pt); \draw[red, thick] (3, 7) to[out=-90, in=150] (3.5,6.5) (4, 7) to[out=-90, in=30] (3.5,6.5) (3.5, 6.5) -- (3.5, 6); \draw[blue, thick] (3, 6) to[out=90, in=-150] (3.5,6.5) (4, 6) to[out=90, in=-30] (3.5,6.5) (3.5, 6.5) -- (3.5, 7); \draw[thick, fill=white] (3.5, 6.5) circle (2pt); \foreach \x in {2.5, 5} { \draw[thick, red] (\x, 7) -- ++(0, -1) ; } \foreach \x in {2, 4.5, 6, 7} { \draw[thick, blue] (\x,7) -- ++(0, -1) ; } \draw[thick, green] (5.5, 7) -- ++(0, -1); \draw[red, thick] (2.5, 6) to[out=-90, in=150] (3,5.5) (3.5, 6) to[out=-90, in=30] (3,5.5) (3, 5.5) -- (3, 5); \draw[blue, thick] (2.5, 5) to[out=90, in=-150] (3,5.5) (3.5, 5) to[out=90, in=-30] (3,5.5) (3, 5.5) -- (3, 6); \draw[thick, fill=white] (3, 5.5) circle (2pt); \foreach \x in {1, 5} { \draw[thick, red] (\x,6) 
-- ++(0, -1) ; } \foreach \x in {0.5, 1.5, 2, 4, 4.5, 6, 7} { \draw[thick, blue] (\x,6) -- ++(0, -1) ; } \draw[thick, green] (5.5, 6) -- ++(0, -1); \foreach \x in {1, 3, 5} { \draw[thick, red] (\x,5) to[out=-90, in=90] ++(-0.5, -1) ; } \foreach \x in {1.5, 2, 2.5, 3.5, 4, 4.5, 6, 7} { \draw[thick, blue] (\x,5) to[out=-90, in=90] ++(-0.5, -1) ; } \draw[thick, green] (5.5, 5) to[out=-90, in=90] ++(-0.5, -1); \draw (6.125,4) node[above] {$\cdots$}; \end{scope} } \clip(0, 4) rectangle (14.5, 9); \draw[violet, line width=5, opacity=0.5] (0.5, 5) to[out=-90, in=0] (0, 4.5); \draw[violet, line width=5, opacity=0.5] (7.5, 5) to[out=-90, in=90] (7, 4); \draw[violet, line width=5, opacity=0.5] (14.5, 4.5) to[out=180, in=90] (14, 4); \draw[blue, thick] (0.5, 5) to[out=-90, in=0] (0, 4.5); \draw[blue, thick] (7.5, 5) to[out=-90, in=90] (7, 4); \draw[blue, thick] (14.5, 4.5) to[out=180, in=90] (14, 4); \end{scope} \end{tikzpicture} \end{aligned} $} \caption{A sequence of elementary annulus $N$-graphs for Legendrian Coxeter paddings} \end{figure} Then we may translate the sequence of Reidemeister moves induced by $\bar\ngraphfont{C}(a,b,c) \ngraphfont{C}(a,b,c)$ into the Legendrian loop $\vartheta(a,b,c)$ depicted as in Figure~\ref{fig:legendrian loop of E_intro}. Note that the left column of the loop diagram corresponds to $\ngraphfont{C}(a,b,c)$ while the right column corresponds to $\bar\ngraphfont{C}(a,b,c)$. \begin{figure}[ht] \[ \begin{tikzcd}[ampersand replacement=\&] \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,-0.2) rectangle node {$\scriptstyle{a-1}$} (1.8,1.2) (2.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (2.8,2.2) (3.2,-0.2) rectangle node {$\scriptstyle{b-1}$} (4.8,1.2) (5.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (5.8,2.2) (6.2,-0.2) rectangle node {$\scriptstyle{c-1}$} (7.8,1.2) (8.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (8.8,2.2); \end{tikzpicture} \arrow[r] \& \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (0.8,2.2) (1.2,-0.2) rectangle node {$\scriptstyle{a-1}$} (2.8,1.2) (3.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (3.8,2.2) (4.2,-0.2) rectangle node {$\scriptstyle{b-1}$} (5.8,1.2) (6.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (6.8,2.2) (7.2,-0.2) rectangle node 
{$\scriptstyle{c-1}$} (8.8,1.2); \end{tikzpicture} \arrow[d] \\ \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,-0.2) rectangle node {$\scriptstyle{a-1}$} (1.8,1.2) (2.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (2.8,2.2) (3.2,-0.2) rectangle node {$\scriptstyle{b-1}$} (4.8,1.2) (5.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (5.8,2.2) (6.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (6.8,2.2) (7.2,0.8) rectangle node {$\scriptstyle{c-1}$} (8.8,2.2); \end{tikzpicture} \arrow[u] \& \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,0.8) rectangle node {$\scriptstyle{a-1}$} (1.8,2.2) (2.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (2.8,2.2) (3.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (3.8,2.2) (4.2,-0.2) rectangle node {$\scriptstyle{b-1}$} (5.8,1.2) (6.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (6.8,2.2) (7.2,-0.2) rectangle node {$\scriptstyle{c-1}$} (8.8,1.2); \end{tikzpicture} \arrow[d] \\ \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,-0.2) rectangle node {$\scriptstyle{a-1}$} (1.8,1.2) (2.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (2.8,2.2) (3.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (3.8,2.2) (4.2,0.8) rectangle node {$\scriptstyle{b-1}$} (5.8,2.2) (6.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (6.8,2.2) (7.2,0.8) rectangle node {$\scriptstyle{c-1}$} (8.8,2.2); \end{tikzpicture} \arrow[u] \& \begin{tikzpicture}[baseline=5ex,scale=0.4] \draw[thick](0,0) -- (9,0) (0,1) -- (9,1) (0,2) -- (9,2) (0,3) -- (9,3) (0,3.5) -- (9,3.5) (0,4) -- (9,4) (0,0) to[out=180,in=180] (0,4) (0,1) to[out=180,in=180] (0,3.5) (0,2) to[out=180,in=180] (0,3) (9,0) to[out=0,in=180] (12,3) (9,1) 
to[out=0,in=180] (12,3.5) (9,2) to[out=0,in=180] (12,4) (12,4) to[out=0,in=0] (12,0) (12,3.5) to[out=0,in=0] (12,1) (12,3) to[out=0,in=0] (12,2); \draw[line width=3, white] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick] (9,4) to[out=0,in=180] (12,2) (9,3.5) to[out=0,in=180] (12,1) (9,3) to[out=0,in=180] (12,0); \draw[thick,fill=white] (0.2,0.8) rectangle node {$\scriptstyle{a-1}$} (1.8,2.2) (2.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (2.8,2.2) (3.2,0.8) rectangle node {$\scriptstyle{b-1}$} (4.8,2.2) (5.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (5.8,2.2) (6.2,0.8) rectangle node {$\scriptstyle{c-1}$} (7.8,2.2) (8.2,-0.2) rectangle node {$\scriptstyle{\Delta}$} (8.8,2.2); \end{tikzpicture} \arrow[l] \end{tikzcd} \] \caption{A Legendrian loop $\vartheta(a,b,c)$ induced from the Legendrian Coxeter mutation $\mu_\ngraphfont{G}^{2}$ on $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$.} \label{fig:legendrian loop of E_intro} \end{figure} In order to see the effect of the Legendrian Coxeter mutation of type $\widetilde{\dynD}_n$ efficiently, let us present it by a sequence of braid moves, keeping track of the braid words shaded in violet as follows: \[ \begin{tikzcd}[column sep = -8pt, row sep = -5pt] \beta({\widetilde{\dynD}}_n) &=
&\sigma_2&\sigma_1&\sigma_1&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\color{red}\sigma_1&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_1^{k-1}&\sigma_3&\sigma_2&\sigma_1&\sigma_1&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\color{red}\sigma_1&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_1^{\ell-1}&\sigma_3 \\ &= &\sigma_2&\sigma_1&\color{red}\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\color{red}\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_1^{k-1}&\sigma_3&\sigma_2&\sigma_1&\color{red}\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\color{red}\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_1^{\ell-1}&\sigma_3 \\ &= &\color{red}\sigma_2&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_2&\color{red}\sigma_2&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_2&\sigma_1^{k-1}&\sigma_3&\color{red}\sigma_2&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_2&\color{red}\sigma_2&\color{red}\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_2&\sigma_1^{\ell-1}&\sigma_3 \\ &= &\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{k-1}&\sigma_3&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{\ell-1}&\sigma_3 \\ &\mathrel{\dot{=}} &\sigma_2&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{k-1}&\color{red}\sigma_3&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{\ell-1}&\color{red}\sigma_3&\color{violet}\circled{\text{$\sigma_1$}} \\ &= &\sigma_2&\sigma_1&\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\color{red}\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{k-1}&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_3&\sigma_2&\sigma_1&\sigma_1&\color{red}\sigma_2&\color{violet}\circled{\text{$\sigma_1$}}&\color{red}\sigma_2&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{\ell-1}&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_3 \\ &= &\sigma_2&\sigma_1&\sigma_1&\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{k-1}&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_3&\sigma_2&\sigma_1&\sigma_1&\sigma_1&\color{violet}\circled{\text{$\sigma_2$}}&\sigma_1&\sigma_1&\sigma_1&\sigma_2&\sigma_1^{\ell-1}&\color{violet}\circled{\text{$\sigma_1$}}&\sigma_3 &=\beta(\widetilde{\dynD}_n) \end{tikzcd} \] The corresponding annular $N$-graph is depicted in Figure~\ref{fig:coxeter padding affine D4 is tame}. Finally, the effect of Coxeter padding $\ngraphfont{C}({\widetilde{\dynD}}_n)$ onto $\beta({{\widetilde{\dynD}}_n})$ can be presented as a Legendrian loop $\vartheta(\widetilde{\dynD}_n)$, which is a composition \[ \vartheta(\widetilde{\dynD}_n) = \varphi \vartheta_0(\widetilde{\dynD}_n) \varphi^{-1} \] as depicted in Figure~\ref{fig:legendrian loop of D_intro}. 
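Each step of the computation above uses only the commutation move $\sigma_1\sigma_3=\sigma_3\sigma_1$, the braid move $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$, and a single cyclic rotation at the step marked $\mathrel{\dot=}$. For readers who would like to double-check the manipulation by machine, the following Python sketch (our own illustration, not part of the argument) verifies the two local moves in the $4$-strand braid group via the faithful Artin action on the free group $F_4$, and confirms that the first and the last words of the computation agree letter for letter once the powers $\sigma_1^{k-1}$ and $\sigma_1^{\ell-1}$ are expanded; here \texttt{beta\_first} and \texttt{beta\_last} are our transcriptions of the displayed words.
\begin{verbatim}
def reduce_word(w):
    # freely reduce a word in F_4; letters are nonzero ints, -g = inverse of g
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

def artin(i, w):
    # Artin automorphism of sigma_i:
    #   x_i -> x_i x_{i+1} x_i^{-1},  x_{i+1} -> x_i,  x_j -> x_j otherwise
    img = []
    for g in w:
        j = abs(g)
        if j == i:
            piece = [i, i + 1, -i]
        elif j == i + 1:
            piece = [i]
        else:
            piece = [j]
        if g < 0:
            piece = [-a for a in reversed(piece)]
        img.extend(piece)
    return reduce_word(img)

def act(braid, w):
    # apply the automorphisms of the letters of a positive braid word in turn
    for s in braid:
        w = artin(s, w)
    return w

def same_braid(u, v):
    # equality test in B_4: compare the images of the free generators
    return all(act(u, [x]) == act(v, [x]) for x in (1, 2, 3, 4))

assert same_braid([1, 2, 1], [2, 1, 2])   # braid move
assert same_braid([1, 3], [3, 1])         # commutation move

def beta_first(k, l):                     # first displayed word
    blk = [2, 1, 1, 1, 2, 1, 1, 1, 2, 1]
    return blk + [1]*(k-1) + [3] + blk + [1]*(l-1) + [3]

def beta_last(k, l):                      # last displayed word
    blk = [2, 1, 1, 1, 2, 1, 1, 1, 2]
    return blk + [1]*(k-1) + [1, 3] + blk + [1]*(l-1) + [1, 3]

for k in range(1, 5):
    for l in range(1, 5):
        assert beta_first(k, l) == beta_last(k, l)
print("braid moves verified; the loop closes up")
\end{verbatim}
Since the Artin action is faithful, the asserted equalities are genuine identities in the braid group and not merely checks of necessary invariants.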
\begin{figure}[ht] \[ \begin{tikzcd}[row sep=2pc] \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \begin{scope} \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1.5, 0) -- ++(1, -1) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(1, 1) -- ++(1.5, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle k-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) [sharp corners]-- ++(0.75, 0.75) [rounded corners]-- ++(-0.25, 0.25) -- ++(-1.5, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.25) [rounded corners]-- ++(0.25, 0.25) -- ++(1.5, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.75, 0.75); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) -- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \end{scope} \begin{scope}[xshift=7cm] \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1, 1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(1, 1) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle \ell-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) [sharp corners]-- ++(0.25, 0.25) [rounded corners]-- ++(-0.75, 0.75) -- ++(-1, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.75) [rounded corners]-- ++(0.75, 0.75) -- ++(1, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.5, 0) -- ++(-0.25, 0.25); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) -- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \end{scope} \draw[thick] (-7, 0.5) arc (90:270:0.5) (-7, 1) arc (90:270:1); \draw[thick, blue] (-7, 1.5) arc (90:270:1.5) (-7, 2) arc (90:270:2); \fill[blue, opacity=0.1] (-7, 1.5) arc (90:270:1.5) -- (-7, -2) arc (-90:-270:2); \begin{scope} \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) 
to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \begin{scope}[yscale=-1] \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \draw[thick] (10, 2) arc (90:-90:2) (10, 1.5) arc (90:-90:1.5); \draw[thick, blue] (10, 1) arc (90:-90:1) (10, 0.5) arc (90:-90:0.5); \fill[blue, opacity=0.1] (10, 1) arc (90:-90:1) -- ++(0, 0.5) arc (-90:90:0.5); \end{tikzpicture} \arrow[d,"\varphi"', xshift=-.5ex] \arrow[from=d,"\varphi^{-1}"', xshift=.5ex]\\ \begin{tikzpicture}[baseline=-.5ex, scale=0.6] \begin{scope} \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1, 1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(1, 1) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle k-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) [sharp corners]-- ++(0.25, 0.25) [rounded corners]-- ++(-0.75, 0.75) -- ++(-1, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.75) [rounded corners]-- ++(0.75, 0.75) -- ++(1, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.5, 0) -- ++(-0.25, 0.25); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) -- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \draw[thick, violet, dashed] (-4.25, -1.75) circle (0.25) (-1.75, -1.75) circle (0.25) ; \draw[thick, violet, dashed, ->, rounded corners] (-2, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.25, 0); \draw[thick, violet, dashed, ->, rounded corners] (-4.5, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.75, 0) arc (-90:-270:1.75) -- ++(14, 0) to[out=0, in=180] ++(3, -2.5) arc (-90:90:0.75) to[out=180, in=0] ++(-3, -2.5); \end{scope} \begin{scope}[xshift=7cm] \draw[thick] (-7, 0.5) -- ++(7,0) (-7, 1) -- ++(7,0); \draw[thick, blue] (-7, 1.5) -- ++(7,0) (-7, 2) -- ++(7,0); \fill[blue, opacity=0.1] (-7, 1.5) -- ++(7,0) -- ++(0, 0.5) -- ++(-7, 0); \draw[thick, rounded corners] (-7, -0.5) -- ++(0.5, 0) -- ++(4.5, 0) -- ++(0.5, -0.5) -- ++(1,0) -- ++(0.5, 0); \draw[thick, rounded corners] (-7, -1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1, 1) -- ++(0.5, 0) -- ++(1, -1) -- ++(1.5, 1.5) -- ++(1, 0) -- ++(0.5, 0); \draw[thick, blue, rounded corners] (-7, -1.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, 0); \draw[thick, blue, 
rounded corners] (-7, -2) -- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) -- ++(1, 1) -- ++(1, 0) -- ++(1, -1) -- ++(0.5, 0); \draw[thick, blue, fill=blue!10] (-1, -2.175) rectangle ++(1, 0.75) node[pos=.5] {$\scriptstyle \ell-1$}; \fill[blue, opacity=0.1] (-7, -2) [rounded corners]-- ++(0.5, 0) -- ++(0.5, 0) -- ++(0.5, 0.5) -- ++(0.5, -0.5) -- ++(0.5, 0) [sharp corners]-- ++(0.25, 0.25) [rounded corners]-- ++(-0.75, 0.75) -- ++(-1, 0) -- ++(-0.5, -0.5) -- ++(-0.5, 0); \fill[blue, opacity=0.1] (-4.25, -1.75) [rounded corners]-- ++(0.75, 0.75) -- ++(1, 0) [sharp corners]-- ++(0.75, -0.75) [rounded corners]-- ++(-0.25, -0.25) -- ++(-0.5, 0) -- ++(-0.5, 0.5) -- ++(-0.5, -0.5) -- ++(-0.5, 0) -- ++(-0.25, 0.25); \fill[blue, opacity=0.1] (-1, -1.5) -- ++(-0.5, 0) -- ++(-0.25, -0.25) -- ++(0.25, -0.25) -- ++(0.5, 0); \draw[thick, violet, dashed] (-4.25, -1.75) circle (0.25) (-1.75, -1.75) circle (0.25) ; \draw[thick, violet, dashed, ->, rounded corners] (-2, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.25, 0); \draw[thick, violet, dashed, ->, rounded corners] (-4.5, -1.75) -- ++(-0.25, 0) -- ++(-0.5, 0.5) -- ++(-0.5, 0) -- ++(-0.5, -0.5) -- ++(-0.75, 0); \end{scope} \draw[thick] (-7, 0.5) arc (90:270:0.5) (-7, 1) arc (90:270:1); \draw[thick, blue] (-7, 1.5) arc (90:270:1.5) (-7, 2) arc (90:270:2); \fill[blue, opacity=0.1] (-7, 1.5) arc (90:270:1.5) -- (-7, -2) arc (-90:-270:2); \begin{scope} \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \begin{scope}[yscale=-1] \draw[thick] (7, 1) to[out=0, in=180] ++(3, -2.5) (7, 0.5) to[out=0, in=180] ++(3, -2.5); \draw[thick, blue] (7, 2) to[out=0, in=180] ++(3, -2.5) (7, 1.5) to[out=0, in=180] ++(3, -2.5); \fill[blue, opacity=0.1] (7, 2) to[out=0, in=180] ++(3, -2.5) -- (10, -1) to[out=180, in=0] ++(-3, 2.5); \end{scope} \draw[thick] (10, 2) arc (90:-90:2) (10, 1.5) arc (90:-90:1.5); \draw[thick, blue] (10, 1) arc (90:-90:1) (10, 0.5) arc (90:-90:0.5); \fill[blue, opacity=0.1] (10, 1) arc (90:-90:1) -- ++(0, 0.5) arc (-90:90:0.5); \draw (1.5,-2) node[below] {$\vartheta_0(\widetilde{\dynD}_n)$}; \end{tikzpicture} \end{tikzcd} \] \caption{A Legendrian loop $\vartheta(\widetilde{\dynD}_n)=\varphi\vartheta_0(\widetilde{\dynD}_n)\varphi^{-1}$ induced from the Legendrian Coxeter mutation $\mu_\ngraphfont{G}$ on $(\ngraphfont{G}(\widetilde{\dynD}_n),\ngraphfont{B}(\widetilde{\dynD}_n))$.} \label{fig:legendrian loop of D_intro} \end{figure} \begin{theorem}\label{thm:legendrian loop} The square $\mu_\ngraphfont{G}^{\pm 2}$ of the Legendrian Coxeter mutation on $(\ngraphfont{G}(a,b,c),\ngraphfont{B}(a,b,c))$ and the Legendrian Coxeter mutation $\mu_\ngraphfont{G}^{\pm1}$ on $(\ngraphfont{G}(\widetilde{\dynD}),\ngraphfont{B}(\widetilde{\dynD}))$ induce tame Legendrian loops $\vartheta(a,b,c)$ and $\vartheta(\widetilde{\dynD})$ in Figures~\ref{fig:legendrian loop of E_intro} and \ref{fig:legendrian loop of D_intro}, respectively. \end{theorem} \subsection{Lagrangian fillings} In this section, we will prove one of our main theorems, on `as many exact embedded Lagrangian fillings as seeds', as follows: \begin{theorem}\label{theorem:seed many fillings} Let $\lambda$ be a Legendrian knot or link of type~$\dynkinfont{ADE}$ or type $\widetilde{\dynD}\widetilde{\dynE}$.
Then it admits as many exact embedded Lagrangian fillings as the number of seeds in the seed pattern of the same type. \end{theorem} Indeed, this theorem follows from considering the following general question. \begin{question}\label{question_CZ} For a given $N$-graph $\ngraphfont{G}$ with a chosen set $\ngraphfont{B}$ of cycles, can we perform Legendrian mutations as many times as we want? Or equivalently, after applying a mutation $\mu_k$ on $(\ngraphfont{G}, \ngraphfont{B})$, is the set $\mu_k(\ngraphfont{B})$ still good in $\mu_k(\ngraphfont{G})$? \end{question} This question has been raised previously in \cite[Remark~7.13]{CZ2020}. One of the main reasons the question is nontrivial is the potential difference between the geometric and the algebraic intersection numbers of two cycles. Instead of attacking Question~\ref{question_CZ} directly, we will prove the following: \begin{proposition}\label{proposition:realizability} For $\dynkinfont{Z} = \dynkinfont{A},\dynkinfont{D},\dynkinfont{E},\widetilde{\dynD}, \widetilde{\dynE}$, let $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})=(\ngraphfont{G}(\dynkinfont{Z}), \ngraphfont{B}(\dynkinfont{Z}))$. Suppose that $(\bfy, \clusterfont{B})$ is a $Y$-seed in the $Y$-pattern given by the initial $Y$-seed $(\bfy_{t_0},\clusterfont{B}_{t_0}) = \Psi(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$. Then $\lambda(\dynkinfont{Z})$ admits an $N$-graph $(\ngraphfont{G}, \ngraphfont{B})$ on $\mathbb{D}^2$ with $\partial \ngraphfont{G} = \lambda(\dynkinfont{Z})$ such that \[ \Psi(\ngraphfont{G}, \ngraphfont{B}) = (\bfy, \clusterfont{B}). \] \end{proposition} With the aid of this proposition, one can prove Theorem~\ref{theorem:seed many fillings}. \begin{proof}[Proof of Theorem~\ref{theorem:seed many fillings}] Let $\lambda$ be given as above. Then, by Proposition~\ref{proposition:realizability}, we obtain a set of pairs of $N$-graphs and good cycles which is in one-to-one correspondence via $\Psi$ with the set of $Y$-seeds in the $Y$-pattern of type $\dynkinfont{Z}$. Hence no two of the Lagrangian fillings coming from these $N$-graphs are exact Lagrangian isotopic by Corollary~\ref{corollary:distinct seeds imples distinct fillings}. Finally, by Corollary~\ref{corollary:algebraic independence}, there is a one-to-one correspondence between the set of $Y$-seeds and that of seeds, which completes the proof. \end{proof} \subsubsection{Proof of Proposition~\ref{proposition:realizability}} We argue by induction on the rank $n$ of the root system~$\Phi(\dynkinfont{Z})$. The initial step is either \[ (\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})=(\ngraphfont{G}(1,1,1),\ngraphfont{B}(1,1,1))\quad\text{ or }\quad (\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})=(\ngraphfont{G}(\dynkinfont{A}_1),\ngraphfont{B}(\dynkinfont{A}_1)). \] Since there are no obstructions to mutations on these $N$-graphs, the initial step of the induction is complete. Now suppose that $n\ge2$. By the induction hypothesis, we assume that the assertion holds for each type $\dynkinfont{Z}' =\dynkinfont{A}, \dynkinfont{D}, \dynkinfont{E}, \widetilde{\dynD}, \widetilde{\dynE}$ having rank strictly smaller than $n$. Let $(\bfy, \clusterfont{B})$ be a $Y$-seed of type $\dynkinfont{Z}$. By Lemma~\ref{lemma:normal form}, there exist $r\in\Z$ and a sequence $\mu_{j_1},\dots,\mu_{j_L}$ of mutations such that \[ (\bfy, \clusterfont{B})= \mu'((\mu_\clusterfont{Q})^r(\bfy_{t_0},\clusterfont{B}_{t_0})),\quad \mu'=\mu_{j_L}\dots\mu_{j_1}, \] where the indices $j_1,\dots,j_L$ miss at least one index $i$.
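Before proceeding, it may be helpful to see the $Y$-pattern operations just used in a concrete form. The following Python/\texttt{sympy} sketch, a minimal illustration of ours under the standard Fomin--Zelevinsky mutation conventions and not used anywhere in the proof, implements $Y$-seed mutation and checks three basic facts in small examples: mutation at a fixed vertex is an involution; in type $\dynkinfont{A}_2$ the alternating mutations return the initial labelled $Y$-seed after ten steps, that is, twice around the pentagon; and mutating a bipartite quiver once at every vertex of one class of the bipartition and then once at every vertex of the other, here for the star-shaped quiver of type $\dynkinfont{D}_4$, preserves the exchange matrix, which is what makes the iterate $(\mu_\clusterfont{Q})^r$ meaningful.
\begin{verbatim}
import sympy as sp

def mutate(B, y, k):
    # Y-seed mutation at vertex k (Fomin-Zelevinsky conventions):
    #   y_k' = 1/y_k,
    #   y_j' = y_j * y_k^{max(b_kj,0)} * (1 + y_k)^{-b_kj}  for j != k,
    #   b_ij' = -b_ij if k in {i,j},
    #           b_ij + (|b_ik| b_kj + b_ik |b_kj|)/2 otherwise.
    n = len(y)
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k])*B[k][j] + B[i][k]*abs(B[k][j]))//2
           for j in range(n)] for i in range(n)]
    y2 = list(y)
    y2[k] = 1/y[k]
    for j in range(n):
        if j != k:
            y2[j] = sp.cancel(y[j] * y[k]**max(B[k][j], 0)
                              * (1 + y[k])**(-B[k][j]))
    return B2, y2

def equal_seed(s, t):
    return s[0] == t[0] and all(sp.simplify(p - q) == 0
                                for p, q in zip(s[1], t[1]))

a, b = sp.symbols('a b', positive=True)
seed0 = ([[0, 1], [-1, 0]], [a, b])

# (1) mutation at a fixed vertex is an involution
assert equal_seed(mutate(*mutate(*seed0, 0), 0), seed0)

# (2) type A_2: alternating mutations return the initial labelled
#     Y-seed after ten steps (twice around the pentagon)
seed = seed0
for step in range(10):
    seed = mutate(*seed, step % 2)
assert equal_seed(seed, seed0)

# (3) bipartite star quiver underlying D_4: vertex 0 on one side of
#     the bipartition, vertices 1, 2, 3 on the other; mutating at one
#     side and then at the other preserves the exchange matrix B.
y = list(sp.symbols('y0:4', positive=True))
B = [[0, -1, -1, -1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
seed = (B, y)
for k in (1, 2, 3, 0):
    seed = mutate(*seed, k)
assert seed[0] == B
print("involution, A_2 periodicity and Coxeter mutation verified")
\end{verbatim}
In particular, the third check reflects, at the level of exchange matrices, the earlier geometric statement that the Legendrian Coxeter mutation merely attaches a Coxeter padding while leaving the boundary data unchanged.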
It suffices to prove that the $N$-graph \[ (\ngraphfont{G}, \ngraphfont{B}) =\mu'((\mu_\ngraph)^r(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})) \] is well-defined. Notice that by Lemma~\ref{lemma:Legendriam Coxeter mutation of type An}, Propositions~\ref{proposition:effect of Legendrian Coxeter mutation} and \ref{proposition:coxeter realization D-type}, the Legendrian Coxeter mutation $\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ is realizable so that \[ \Psi(\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})) =(\mu_\clusterfont{Q})^r(\bfy_{t_0},\clusterfont{B}_{t_0}). \] Since $\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ is the concatenation of Coxeter paddings on the initial $N$-graph $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$, it suffices to prove that the Legendrian mutation $\mu'(\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0}))$ is realizable, which is equivalent to the realizability of $\mu'(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$. By assumption, the indices $j_1,\dots, j_L$ miss the index $i$ and therefore the sequence of mutations $\mu_{j_1},\dots,\mu_{j_L}$ can be performed inside the subgraph of the exchange graph $\exchange(\Phi(\dynkinfont{Z}))$, which is isomorphic to $\exchange(\Phi(\dynkinfont{Z} \setminus \{i\}))$. Here, by abuse of notation, we denote by $\dynkinfont{Z}$ the Dynkin diagram of type $\dynkinfont{Z}$. Moreover, we denote by $\Phi(\dynkinfont{Z} \setminus \{i\})$ the root system corresponding to the Dynkin diagram~$\dynkinfont{Z} \setminus \{i\}$. Then the root system $\Phi(\dynkinfont{Z}\setminus \{i\})$ is not necessarily irreducible and may be decomposed into $\Phi(\dynkinfont{Z}^{(1)}), \dots, \Phi(\dynkinfont{Z}^{(\ell)})$ for $\dynkinfont{Z}\setminus\{i\} = \dynkinfont{Z}^{(1)}\cup\cdots\cup\dynkinfont{Z}^{(\ell)}$ so that \begin{align*} \Phi(\dynkinfont{Z}\setminus\{i\}) &\cong \Phi(\dynkinfont{Z}^{(1)})\times\cdots\times\Phi(\dynkinfont{Z}^{(\ell)}),\\ \exchange(\dynkinfont{Z}\setminus\{i\}) &\cong \exchange(\dynkinfont{Z}^{(1)})\times\cdots\times\exchange(\dynkinfont{Z}^{(\ell)}),\\ \clusterfont{Q}_{t_0}\setminus\{i\}&\cong \clusterfont{Q}^{(1)}\amalg\cdots\amalg\clusterfont{Q}^{(\ell)}, \end{align*} where the subquiver $\clusterfont{Q}^{(k)}$ is of type $\dynkinfont{Z}^{(k)}$. Moreover, the composition $\mu'$ of mutations can be decomposed into sequences $\mu^{(1)},\dots, \mu^{(\ell)}$ of mutations on $\clusterfont{Q}^{(1)},\dots,\clusterfont{Q}^{(\ell)}$. Similarly, we may decompose the $N$-graph $(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ into $N$-subgraphs \[ (\ngraphfont{G}^{(1)}, \ngraphfont{B}^{(1)}),\dots,(\ngraphfont{G}^{(\ell)}, \ngraphfont{B}^{(\ell)}) \] along $\gamma_i\in\ngraphfont{B}_{t_0}$, which are the restrictions of $(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ onto sub-disks $\mathbb{D}^{(k)}\subset \mathbb{D}^2$ as follows: \begin{enumerate} \item For $\lambda=\lambda(\dynkinfont{A}_n)$, we have the following two cases (Figure~\ref{figure:decomposition of linear Ngraphs}): \begin{enumerate} \item If $\gamma_i$ corresponds to a leaf, then we have the $2$-subgraph $(\ngraphfont{G}(\dynkinfont{A}_{n-1}), \ngraphfont{B}(\dynkinfont{A}_{n-1}))$. \item If $\gamma_i$ corresponds to a bivalent vertex, then for some $1\le r,s$ with $r+s+1=n$, we have two $2$-subgraphs $(\ngraphfont{G}(\dynkinfont{A}_r),\ngraphfont{B}(\dynkinfont{A}_r))$ and $(\ngraphfont{G}(\dynkinfont{A}_s),\ngraphfont{B}(\dynkinfont{A}_s))$.
\end{enumerate} \begin{figure}[ht] \subfigure[At a leaf]{\makebox[0.4\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw(0,3) node[above] {$(\ngraphfont{G}(\dynkinfont{A}_{n-1}),\ngraphfont{B}(\dynkinfont{A}_{n-1}))$}; \draw(0,-3) node[below] {$\mathbb{D}^{(1)}$}; \draw[thick] (0,0) circle (3); \clip (0,0) circle (3); \fill[opacity=0.1] (157.5:3) arc (157.5:-112.5:3); \draw[dashed, line width=7] (-2.5,-0.5) -- (-1.5, 0.5); \draw[color=cyclecolor1!50, line cap=round, line width=5] (-2.5,-0.5) -- (-1.5, 0.5); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0.5, 0.5) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, fill] (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5,0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \draw[blue, thick] (-2.5,-0.5) -- (-180:3); \draw[blue, thick, dashed] (3,0) -- (2.5,0.5); \end{tikzpicture} }} \subfigure[At a bivalent vertex]{\makebox[0.4\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw(-3,0) node[above, rotate=90] {$(\ngraphfont{G}(\dynkinfont{A}_{r}),\ngraphfont{B}(\dynkinfont{A}_{r}))$}; \draw(3,0) node[above, rotate=-90] {$(\ngraphfont{G}(\dynkinfont{A}_{s}),\ngraphfont{B}(\dynkinfont{A}_{s}))$}; \draw(240:3) node[below left] {$\mathbb{D}^{(1)}$}; \draw(60:3) node[above right] {$\mathbb{D}^{(2)}$}; \draw[thick] (0,0) circle (3); \clip (0,0) circle (3); \fill[opacity=0.1] (105:3) arc (105:-60:3) (120:3) arc (120:285:3); \draw[dashed, line width=7] (-0.5, -0.5) -- (0.5, 0.5); \draw[color=cyclecolor1!50, line cap=round, line width=5] (-0.5, -0.5) -- (0.5, 0.5); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (-1.5,0.5) -- (-0.5, -0.5) (0.5, 0.5) -- (1.5, -0.5); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (-2.5,-0.5) -- (-1.5, 0.5) (-0.5, -0.5) -- (0, 0) (1.5, -0.5) -- (2.5, 0.5); \draw[blue, thick, dashed] (3,0) -- (2.5, 0.5) (-2.5, -0.5) -- (-3,0); \draw[blue, thick, fill] (2.5,0.5) circle (2pt) -- (45:3) (2.5,0.5) -- (1.5,-0.5) circle (2pt) -- (-45:3) (1.5,-0.5) -- (0.5,0.5) circle (2pt) -- (90:3) (0.5,0.5) -- (-0.5, -0.5) circle (2pt) -- (-90:3) (-0.5, -0.5) -- (-1.5, 0.5) circle (2pt) -- (135:3) (-1.5, 0.5) -- (-2.5, -0.5) circle (2pt) -- (-135:3); \end{tikzpicture} }} \caption{Decompositions of $\ngraphfont{G}(\dynkinfont{A}_n)$} \label{figure:decomposition of linear Ngraphs} \end{figure} \item For $\lambda=\lambda(a,b,c)$, we have the following three cases (Figure~\ref{figure:decomposition of tripod Ngraphs}): \begin{enumerate} \item If $\gamma_i$ corresponds to the central vertex, then we have three $3$-subgraphs $(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{a-1}), \ngraphfont{B}_{(3)}(\dynkinfont{A}_{a-1}))$, $(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{b-1}),\ngraphfont{B}_{(3)}(\dynkinfont{A}_{b-1}))$, and $(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{c-1}), \ngraphfont{B}_{(3)}(\dynkinfont{A}_{c-1}))$. \item If $\gamma_i$ corresponds to a bivalent vertex, then for some $1\le r,s$ with $r+s+1=a$, up to permuting indices $a,b,c$, we have two $3$-subgraphs $(\ngraphfont{G}_{(3)}(\dynkinfont{A}_s),\ngraphfont{B}_{(3)}(\dynkinfont{A}_s))$ and $(\ngraphfont{G}(r,b,c),\ngraphfont{B}(r,b,c))$. 
\item Otherwise, if $\gamma_i$ corresponds to a leaf, then up to permuting indices $a,b,c$, we have the $3$-subgraph $(\ngraphfont{G}(a-1,b,c), \ngraphfont{B}(a-1,b,c))$. \end{enumerate} \begin{figure}[ht] \subfigure[At the central vertex]{$ \begin{aligned} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \begin{scope} \foreach \i in {1, 2, 3} { \begin{scope}[rotate={\i*120-120}] \draw (60:4) node {$\mathbb{D}^{(\i)}$}; \clip(0,0) circle (3); \begin{scope}[shift=(60:0.5)] \fill[opacity=0.1, rounded corners] (0:3.5) -- (0,0) -- (120:3.5) arc (120:0:3.5); \end{scope} \end{scope} } \end{scope} \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (50:1.5) -- (70:1.75) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \begin{scope} \clip(0,0) circle (3); \draw[line width=7, dashed] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1); \draw[color=cyclecolor1!50, line cap=round, line width=5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1); \foreach \i in {0,120,240} { \begin{scope}[rotate=\i] \draw[red, thick] (0,0) -- (0:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (100:3) (60:1) -- (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \end{scope} } \end{scope} \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture}&\to \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \foreach \i in {0,120,240} { \begin{scope}[rotate=\i,shift=(60:0.5)] \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (70:1.75) -- (50:2) ; \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (50:1.5) -- (70:1.75); \draw[thick](0:3) -- (0,0) -- (120:3) arc (120:0:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (100:3) (60:1) -- (50:1.5) circle (2pt) -- (20:3) (50:1.5) -- (70:1.75) circle (2pt) -- (80:3) (70:1.75) -- (50:2) circle (2pt) -- (40:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \end{scope} } \draw (1,4) node {$(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{a-1}),\ngraphfont{B}_{(3)}(\dynkinfont{A}_{a-1}))$}; \draw (180:4) node[rotate=90] {$(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{b-1}),\ngraphfont{B}_{(3)}(\dynkinfont{A}_{b-1}))$}; \draw (1,-4) node {$(\ngraphfont{G}_{(3)}(\dynkinfont{A}_{c-1}),\ngraphfont{B}_{(3)}(\dynkinfont{A}_{c-1}))$}; \end{tikzpicture} \end{aligned} $} \subfigure[At a bivalent vertex]{$ \begin{aligned} \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \begin{scope} \clip (0,0) circle (3); \begin{scope}[shift=(60:0.25)] \fill[opacity=0.1] (95:3) -- (60:1.25) -- (25:3) arc (25:95:3); \end{scope} \fill[opacity=0.1] (95:3) -- (60:1.25) -- (25:3) arc (25:-265:3); \end{scope} \draw (60:4) node {$\mathbb{D}^{(1)}$}; \draw (-120:4) node {$\mathbb{D}^{(2)}$}; \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (70:0.5) -- (60:1) (60:1.75) -- (50:2) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (70:0.5) (0,0) -- (180:1) (0,0) -- (300:1) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \draw[line width=7, dashed] (60:1.75) -- (60:1); \draw[color=cyclecolor1!50, line cap=round, line width=5] (60:1.75) -- (60:1); \draw[red, thick] 
(0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (70:0.5) circle (2pt) -- (105:3) (70:0.5) -- (60:1) circle (2pt) -- (15:3) (60:1) -- (60:1.75) circle (2pt) -- (75:3) (60:1.75) -- (50:2) circle (2pt) -- (45:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (220:3) (180:1) -- (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[blue, thick, dashed] (170:2) -- (180:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (340:3) (300:1) -- (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[blue, thick, dashed] (290:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \end{tikzpicture}&\to \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[thick] (0,0) circle (3cm); \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (60:1) -- (50:1.5) (180:1) -- (170:1.5) (190:1.75) -- (170:2) (300:1) -- (290:1.5) (310:1.75) -- (290:2); \draw[color=cyclecolor1, line cap=round, line width=5, opacity=0.5] (0,0) -- (60:1) (0,0) -- (180:1) (0,0) -- (300:1) (170:1.5) -- (190:1.75) (290:1.5) -- (310:1.75); \draw[red, thick] (0,0) -- (0:3) (0,0) -- (120:3) (0,0) -- (240:3); \draw[blue, thick, fill] (0,0) -- (60:1) circle (2pt) -- (90:3) (60:1) -- (50:1.5) circle (2pt) -- (30:3) (50:1.5) -- (60:3); \draw[blue, thick, fill] (0,0) -- (180:1) circle (2pt) -- (220:3) (180:1) -- (170:1.5) circle (2pt) -- (140:3) (170:1.5) -- (190:1.75) circle (2pt) -- (200:3) (190:1.75) -- (170:2) circle (2pt) -- (160:3); \draw[blue, thick, dashed] (170:2) -- (180:3); \draw[blue, thick, fill] (0,0) -- (300:1) circle (2pt) -- (340:3) (300:1) -- (290:1.5) circle (2pt) -- (260:3) (290:1.5) -- (310:1.75) circle (2pt) -- (320:3) (310:1.75) -- (290:2) circle (2pt) -- (280:3); \draw[blue, thick, dashed] (290:2) -- (300:3); \draw[thick, fill=white] (0,0) circle (2pt); \draw (0,-3.5) node {$(\ngraphfont{G}(r,b,c),\ngraphfont{B}(r,b,c))$}; \begin{scope}[xshift=7cm,yshift=-2cm] \draw[color=cyclecolor2, line cap=round, line width=5, opacity=0.5] (70:1.5) -- (50:2); \draw[thick](0:3) -- (0,0) -- (120:3) arc (120:0:3); \draw[blue, thick, fill] (0,0) -- (70:1.5) circle (2pt) -- (90:3) (70:1.5) -- (50:2) circle (2pt) -- (30:3); \draw[blue, thick, dashed] (50:2) -- (60:3); \draw (0.5,-1.5) node {$(\ngraphfont{G}_{(3)}(\dynkinfont{A}_s),\ngraphfont{B}_{(3)}(\dynkinfont{A}_s))$}; \end{scope} \end{tikzpicture} \end{aligned} $} \caption{Decomposition of $\ngraphfont{G}(a,b,c)$} \label{figure:decomposition of tripod Ngraphs} \end{figure} \item For $\lambda=\lambda(\widetilde{\dynD}_n)$, we have the following four cases (Figure~\ref{figure:decomposition of Ngraph of type affine Dn}): \begin{enumerate} \item If $n=4$ and $\ell=1$, then we have four $4$-graphs of type $\dynkinfont{A}_1$. \item If $n\ge 5$ and $\ell\in\{1, n-1\}$, then we have three $4$-graphs of type $\dynkinfont{A}_1, \dynkinfont{A}_1$ and $(2,2, n-4)$. \item If $n\ge 6$ and $\ell\in\{4,\dots,n-2\}$, then for some $r+s=n-3$, we have two $4$-graphs of type $(2,2,r)$ and $(2,2,s)$. \item If $\ell \in\{2,3, n, n+1\}$, then we have the $4$-graph $(\ngraphfont{G}'(\dynkinfont{D}_n),\ngraphfont{B}'(\dynkinfont{D}_n))$.
\end{enumerate} \begin{figure}[ht] \subfigure[At the central vertex of $\widetilde{\dynD}_4$]{\makebox[0.47\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \draw (-4.5, -1.5) node[rotate=90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw (-4.5, 1.5) node[rotate=90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw (4.5, -1.5) node[rotate=-90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw (4.5, 1.5) node[rotate=-90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw(-2.5, 2.5) node[above] {$\mathbb{D}^{(1)}$}; \draw(2.5, 2.5) node[above] {$\mathbb{D}^{(2)}$}; \draw(2.5, -2.5) node[below] {$\mathbb{D}^{(3)}$}; \draw(-2.5, -2.5) node[below] {$\mathbb{D}^{(4)}$}; \clip[rounded corners=5] (-4, -2.5) rectangle (4, 2.5); \fill[rounded corners=5, opacity=0.1](-1.5, 3.5) rectangle (-5, 0.5); \fill[rounded corners=5, opacity=0.1](-1.5, -3.5) rectangle (-5, -0.5); \fill[rounded corners=5, opacity=0.1](1.5, 3.5) rectangle (5, 0.5); \fill[rounded corners=5, opacity=0.1](1.5, -3.5) rectangle (5, -0.5); \draw[line width=7, dashed] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[color=cyclecolor1!50, line cap=round, line width=5] (-1, 0) -- (1, 0) (-1, 0) -- (-2, 1) (-1, 0) -- (-2, -1) (1, 0) -- (2, 1) (1, 0) -- (2, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-2, 1) -- (-3, 1) (-2, -1) -- (-2, -1.75) (2, 1) -- (2, 1.75) (2, -1) -- (3, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \begin{scope}[xshift=2.5cm] \draw[thick, green] (-2.5, 2.5) -- ++(0,-2.5); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} \end{scope} } \draw[thick, fill=white] (-1, 0) circle (2pt) (1, 0) circle (2pt); \end{tikzpicture} }} \subfigure[At a trivalent vertex]{\makebox[0.47\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \draw (-6.5, -1.5) node[rotate=90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw (-6.5, 1.5) node[rotate=90] {$\ngraphfont{G}_{(4)}(\dynkinfont{A}_1)$}; \draw (6.5, 0) node[rotate=-90] {$\ngraphfont{G}_{(4)}(2,2,n-4)$}; \draw(-4.5, 2.5) node[above] {$\mathbb{D}^{(1)}$}; \draw(-4.5, -2.5) node[below] {$\mathbb{D}^{(2)}$}; \draw(2, 2.5) node[above] {$\mathbb{D}^{(3)}$}; \clip[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \fill[rounded corners=5, opacity=0.1](-3.5, 3.5) rectangle (-7, 0.5); \fill[rounded corners=5, opacity=0.1](-3.5, -3.5) rectangle (-7, -0.5); \fill[rounded corners=5, opacity=0.1](-2.25, 3.5) rectangle (7, -2); \draw[dashed, line width=7] (-3, 0) -- (-2, 0) (-3, 0) -- (-4, 1) (-3, 0) -- (-4, -1) ; \draw[color=cyclecolor1!50, line cap=round, line width=5] (-3, 0) -- (-2, 0) (-3, 0) -- (-4, 1) (-3, 0) -- (-4, -1) ; \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-1, 0) -- (-0, 0) (1, 0) -- (2, 0) (4, -1) -- (4, -1.75) (4, 1) -- (5, 1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (0, 0) -- (1, 0) (3, 0) -- (2, 0) (3, 0) -- (4, 1) (3, 0) -- (4, -1) (-4, 1) -- (-5, 1) (-4, -1) -- (-4, -1.75) (-1, 0) -- (-2, 0) ; \draw[thick, green, rounded corners] (-2.5, 2.5) -- (-2.5, -2.25) -- (2.5, -2.25) -- (2.5, -2.5); \foreach \i in {1,
-1} { \begin{scope}[xshift=\i*0.5cm, xscale=\i] \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-1.5, 2.5) -- (-1.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (0, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \draw[thick, fill=white] (-3.5, 0) circle (2pt); \end{scope} } \end{tikzpicture} }} \subfigure[At a bivalent vertex]{\makebox[0.47\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw (-7, 0) node[rotate=90] {$\ngraphfont{G}_{(4)}(2,2,r)$}; \draw (7, 0) node[rotate=-90] {$\ngraphfont{G}_{(4)}(2,2,s)$}; \draw(-4.5, -2.5) node[below] {$\mathbb{D}^{(1)}$}; \draw(4.5, 2.5) node[above] {$\mathbb{D}^{(2)}$}; \clip[rounded corners=5] (-6.5, -2.5) rectangle (6.5, 2.5); \draw[dashed, line width=7] (-0.5, 0) -- (0.5, 0); \draw[color=cyclecolor2!50, line cap=round, line width=5] (-0.5, 0) -- (0.5, 0); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3.5, 0) -- (-2.5, 0) (-3.5, 0) -- (-4.5, 1) (-3.5, 0) -- (-4.5, -1) (-1.5, 0) -- (-0.5, 0) (0.5, 0) -- (1.5, 0) (3.5, 0) -- (2.5, 0) (3.5, 0) -- (4.5, 1) (3.5, 0) -- (4.5, -1) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (-4.5, 1) -- (-5.5, 1) (-4.5, -1) -- (-4.5, -1.75) (-1.5, 0) -- (-2.5, 0) (1.5, 0) -- (2.5, 0) (4.5, 1) -- (4.5, 1.75) (4.5, -1) -- (5.5, -1) ; \foreach \i in {0, 180} { \begin{scope}[rotate=\i] \fill[rounded corners=5, opacity=0.1](-0.25, 2) rectangle (-7, -3); \draw[thick, green, rounded corners] (-2.5, 2.5) -- (-2.5, 2.25) -- (0, 2.25) -- (0,0); \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) (1.5, -2.5) -- (1.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (3.5, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \end{scope} } \draw[thick, fill=white] (-3.5, 0) circle (2pt) (3.5, 0) circle (2pt); \end{tikzpicture} }} \subfigure[At a leaf]{\makebox[0.47\textwidth]{ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \draw (-6.5, 0) node[rotate=90] {$\ngraphfont{G}'(\dynkinfont{D}_n)$}; \draw(-3, -2.5) node[below] {$\mathbb{D}^{(1)}$}; \clip[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \fill[opacity=0.1, rounded corners=5] (4.5, 2) rectangle (-7, -3); \draw[dashed, line width=7] (4, 1) -- (5, 1); \draw[color=cyclecolor1!50, line cap=round, line width=5] (4, 1) -- (5, 1); \draw[cyclecolor1, opacity=0.5, line cap=round, line width=5] (-3, 0) -- (-2, 0) (-3, 0) -- (-4, 1) (-3, 0) -- (-4, -1) (-1, 0) -- (-0, 0) (1, 0) -- (2, 0) (4, -1) -- (4, -1.75) ; \draw[cyclecolor2, opacity=0.5, line cap=round, line width=5] (0, 0) -- (1, 0) (3, 0) -- (2, 0) (3, 0) -- (4, 1) (3, 0) -- (4, -1) (-4, 1) -- (-5, 1) (-4, -1) -- (-4, -1.75) (-1, 0) -- (-2, 0) ; \draw[thick, green, rounded corners] (-2.5, 2.5) -- ++(0,-0.25) -- ++(5,0) -- (2.5,-2.5); \foreach \i in {1, -1} { \begin{scope}[xshift=\i*0.5cm, xscale=\i] \draw[thick, red] (-3.5, -2.5) -- (-3.5, 
2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-1.5, 2.5) -- (-1.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (0, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \draw[thick, fill=white] (-3.5, 0) circle (2pt); \end{scope} } \end{tikzpicture} }} \caption{Decompositions of $\ngraphfont{G}(\widetilde{\dynD}_n)$} \label{figure:decomposition of Ngraph of type affine Dn} \end{figure} \end{enumerate} Here, $\ngraphfont{G}_{(3)}(\dynkinfont{A}_{n'}), \ngraphfont{G}_{(4)}(\dynkinfont{A}_{n'})$ and $\ngraphfont{G}_{(4)}(a',b',c')$ are the $3$- and $4$-graphs obtained from $\ngraphfont{G}(\dynkinfont{A}_{n'})$ and $\ngraphfont{G}(a',b',c')$ by adding trivial planes at the top. Hence, the realizabilities of Legendrian mutations on $\ngraphfont{G}_{(3)}(\dynkinfont{A}_{n'}), \ngraphfont{G}_{(4)}(\dynkinfont{A}_{n'})$ and $\ngraphfont{G}_{(4)}(a',b',c')$ are the same as those on $\ngraphfont{G}(\dynkinfont{A}_{n'})$ and $\ngraphfont{G}(a',b',c')$. Except for the very last case (3d), all the other cases are reduced to either linear or tripod $N$-graphs with strictly lower rank. Hence, by the induction hypothesis, any composition $\mu^{(k)}$ of mutations on $\clusterfont{Q}^{(k)}$ for $1\leq k\leq \ell$ can be realized as a composition of Legendrian mutations on $(\ngraphfont{G}^{(k)},\ngraphfont{B}^{(k)})$. This guarantees the realizability of $\mu'(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$. For the case (3d), one can apply a sequence of moves $\Move{II}$ and $\Move{II^*}$ on $(\ngraphfont{G}'(\dynkinfont{D}_n), \ngraphfont{B}'(\dynkinfont{D}_n))$ as follows: \[ \begin{tikzcd} \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (3, 3) -- (3, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (3, 3) -- (3, -3) -- (-0.5, -3); \draw[green, thick] (0, 3) -- (0, -3); \draw[red, thick] (1, 3) -- (1, -3); \draw[red, thick] (1,0) -- (3, 0); \draw[blue, thick, fill] (-0.5,0) -- (1, 0) (1,0) -- ++(1, 1) circle (2pt) ++(0,0) -- +(0,2) ++(0,0) -- +(1,0) (1,0) -- ++(1, -1) circle (2pt) ++(0,0) -- +(1,0) ++(0,0) -- ++(0,-1) circle (2pt) -- +(1,0) ++(0,0) -- ++(0, -1) ; \draw[fill=white, thick] (1,0) circle (2pt); \end{tikzpicture} \arrow[r, "\Move{II}"] & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (4, 3) -- (4, -3) -- (-0.5, -3); \draw[green, thick] (0, 3) -- (0, -3); \draw[red, thick, fill] (1, 3) -- (1, 0) (1,0) -- (1.5, -0.5) circle (2pt) -- (1.5, -3) (1.5, -0.5) -- (2, 0); \draw[red, thick] (1,0) to[out=30, in=150] (2,0) -- ++(2,0); \draw[blue, thick, fill] (-0.5,0) -- (2, 0) (1,0) -- (3,3) (2,0) -- (4, 2) (2,0) -- (3,-1) circle (2pt) -- +(2,0) ++(0,0) -- ++(0,-1) circle (2pt) -- +(2,0) ++(0,0) -- ++(0,-1) ; \draw[fill=white, thick] (1,0) circle (2pt) (2,0) circle (2pt); \end{tikzpicture} \arrow[r, "\Move{II^*}"] & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \draw[blue, thick, rounded corners]
(-0.5, 0) -- (0,0) (0,0) -- ++(1,1) -- ++(1,0) (0,0) -- (0.5, -0.5) (0.5, -0.5) -- ++(0.5, 0.5) -- ++(1,0) (0.5,-0.5) -- ++(0.5, -0.5) -- ++(1,0) (4,0) to[out=120, in=-120] ++(0,1) (4,-1) to[out=120, in=-120] ++(0,1) (2,-1) -- (3,-1.5) -- (4, -1) (2,-1) to[out=60, in=-60] ++(0,1) (2,0) to[out=60, in=-60] ++(0,1) ++(0,0) -- +(2,2) ++(2,0) -- +(2,2) (4, 1) -- ++(2,-2) (4, 0) -- ++(2,-2) (4, -1) -- ++(2,-2) ; \draw[blue, thick, fill] (0,0) circle (2pt) (0.5,-0.5) circle (2pt); \draw[green, thick] (1.5, 3) -- (1.5, -3); \draw[red, thick] (2, 3) -- (2,1) (2,0) to[out=120, in=-120] ++(0,1) (2,-1) to[out=120, in=-120] ++(0,1) (2,-1) -- ++(2,0) (2,0) -- ++(2,0) (2,1) -- ++(2,0) (4,-1) to[out=60, in=-60] ++(0,1) (4,0) to[out=60, in=-60] ++(0,1) -- ++(2,0) (3, -2) -- (3, -3) ; \draw[red, thick, fill] (3,-2) circle (2pt); \draw[red, thick, rounded corners] (2,-1) -- (2,-2) -- (4, -2) -- (4, -1) ; \draw[fill=white, thick] (2,0) circle (2pt) (2,1) circle (2pt) (2,-1) circle (2pt) (4,0) circle (2pt) (4,1) circle (2pt) (4,-1) circle (2pt) ; \end{tikzpicture} \arrow[r, "\Move{II}"] & \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick, dashed] (-0.5, 3) -- (-0.5, -3); \draw[thick, rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \clip[rounded corners=5] (-0.5, 3) -- (6, 3) -- (6, -3) -- (-0.5, -3); \fill[opacity=0.1, rounded corners=5] (-1,2.5) -- (1, 2.5) -- (1, -0.5) -- (3.5, -0.5) -- (3.5, -1.5) -- (1, -1.5) -- (1, -2.5) -- (-1, -2.5) -- (-1, -4) -- (7, -4) -- (7, 4) -- (-1, 4); \draw[blue, thick, rounded corners] (-0.5, 0) -- (0,0) (0,0) -- ++(1,1) -- ++(1,0) (0,0) -- (0.5, -0.5) (0.5, -0.5) -- ++(0.5, 0.5) -- ++(1,0) (0.5,-0.5) -- ++(0, -2.25) -- ++(1.75,0) -- ++(0.75,0.75) (4,0) to[out=120, in=-120] ++(0,1) (2,0) -- (3,-1) (3,-1) -- (4, 0) (3,-1) -- (3,-2) (2,0) to[out=60, in=-60] ++(0,1) (2,1) -- (4,3) (4,1) -- (6,3) (4,1) -- (6,-1) (4,0) -- (6,-2) (3,-2) -- (4, -3) ; \draw[blue, thick, fill] (0,0) circle (2pt) (0.5,-0.5) circle (2pt) (3,-1) circle (2pt); \draw[green, thick] (1.5, 3) -- (1.5, -3); \draw[red, thick] (2, 3) -- (2,1) (2,0) to[out=120, in=-120] ++(0,1) (2,0) -- ++(2,0) (2,1) -- ++(2,0) (4,0) to[out=60, in=-60] ++(0,1) -- ++(2,0) (3, -2) -- (3, -3) ; \draw[red, thick, rounded corners] (2,0) -- (2,-2) -- (4, -2) -- (4, 0) ; \draw[fill=white, thick] (2,0) circle (2pt) (2,1) circle (2pt) (3,-2) circle (2pt) (4,0) circle (2pt) (4,1) circle (2pt) ; \end{tikzpicture} \end{tikzcd} \] Then in the last picture, the shaded part corresponds to a tame annular $N$-graph and so the $N$-graph $(\ngraphfont{G}'(\dynkinfont{D}_n), \ngraphfont{B}'(\dynkinfont{D}_n))$ is $\partial$-Legendrian isotopic to the following $N$-graph \[ \begin{tikzpicture}[baseline=-.5ex,scale=0.5] \draw[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \clip[rounded corners=5] (-6, -2.5) rectangle (6, 2.5); \begin{scope}[xshift=0.5cm] \draw[thick, red] (-3.5, -2.5) -- (-3.5, 2.5) (-6.5, 0) -- (-3.5, 0) ; \draw[thick, blue, fill] (-2.5, -2.5) -- (-2.5,0) circle (2pt) (-1.5, 2.5) -- (-1.5,0) circle (2pt) (-0.5, -2.5) -- (-0.5,0) circle (2pt) ; \draw[thick, blue, fill] (-3.5, 0) -- (0, 0) (-3.5, 0) -- (-4.5, 1) circle (2pt) -- (-4.5, 2.5) (-4.5, 1) -- (-6.5, 1) (-5.5, 1) circle (2pt) -- (-5.5, 2.5) (-3.5, 0) -- (-4.5, -1) circle (2pt) -- (-4.5, -2.5) (-4.5, -1) -- (-6.5, -1) (-4.5, -1.73) circle (2pt) -- (-6.5, -1.73) ; \draw[thick, fill=white] (-3.5, 0) circle (2pt); \end{scope} \draw[blue, thick, fill] (0.5,0) --(1, 0) circle (2pt) -- (1, 3) (1, 0) -- (2, 0) circle (2pt) -- (2, -3) (2,0) -- (2.5,0); 
\begin{scope}[xshift=3cm] \draw[blue, thick, fill] (-0.5, 0) -- (0,0) circle (2pt) -- (0,3) (0,0) -- (1,0) circle (2pt) -- (1,-3) (1,0) -- (3,0); \draw[green, thick, rounded corners] (1.5, -3) -- (1.5, -0.5) -- (3, -0.5); \draw[red, thick, rounded corners] (2, -3) -- (2, -1) -- (3, -1); \draw[blue, thick, rounded corners] (2.5, -3) -- (2.5, -1.5) -- (3, -1.5); \draw[blue, thick, fill] (2.5, -2) circle (2pt) -- (3, -2); \end{scope} \end{tikzpicture} \] which is a stabilization of $(\ngraphfont{G}(\dynkinfont{D}_n), \ngraphfont{B}(\dynkinfont{D}_n))=(\ngraphfont{G}(2,2,n-2), \ngraphfont{B}(2,2, n-2))$. Therefore the induction hypothesis completes the proof. \begin{remark} It is not claimed above that the two mutations $\mu'$ and $\mu_\ngraph$ commute. Indeed, if we first mutate $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ via $\mu'$, then the result may not look like either $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ or $\overline{(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})}$ and hence $\mu_\ngraph$ will not work as expected. Besides, it is not even clear whether $\mu_\ngraph \mu'(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ is realizable. \end{remark} \subsection{Group actions on \texorpdfstring{$N$}{N}-graphs} For each triple $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$, we first consider the $G$-action on each $N$-graph of type $\dynkinfont{Z}$. \subsubsection{Rotation action} Let $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$ be one of five cases in the first row of Table~\ref{table:foldings}. We will denote the generator of $G=\Z/2\Z$ or $\Z/3\Z$ by $\tau$, which acts on $N$-graphs $\ngraphfont{G}$ by $\pi$- or $2\pi/3$-rotation, respectively. Notice that for each $\dynkinfont{Z}=\dynkinfont{A}_{2n-1}, \dynkinfont{D}_4, \widetilde{\dynE}_6, \widetilde{\dynD}_{2n\ge6},$ or $\widetilde{\dynD}_4$, we may assume that the Legendrian $\lambda(\dynkinfont{Z})$ in $J^1\mathbb{S}^1$ is invariant under the $\pi$-rotation (for $\dynkinfont{A}_{2n-1}$, $\widetilde{\dynD}_{2n\ge6}$ and $\widetilde{\dynD}_4$) or the $2\pi/3$-rotation (for $\dynkinfont{D}_4$ and $\widetilde{\dynE}_6$) since the braid $\beta(\dynkinfont{Z})$ representing $\lambda(\dynkinfont{Z})$ has the corresponding rotation symmetry as follows: \begin{align*} \beta(\dynkinfont{A}_{2n-1})&=\left(\sigma_1^{n+1}\right)^2,& \beta(\dynkinfont{D}_4) &=\left(\sigma_2\sigma_1^3\right)^3,& \beta(\widetilde{\dynE}_6)&=\left(\sigma_2\sigma_1^4\right)^3,\\ \beta(\widetilde{\dynD}_{2n})&=\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_3\sigma_1^{n-2}\right)^2, \quad n\ge 3,& \beta(\widetilde{\dynD}_4)&=\left(\sigma_2\sigma_1^3\sigma_2\sigma_1^3\sigma_2\sigma_3\right)^2. \end{align*} Now the generator $\tau$ acts on the set $\mathscr{N}\mathsf{graphs}(\lambda(\dynkinfont{Z}))$ of equivalence classes of $N$-graphs with cycles whose boundary is precisely $\lambda(\dynkinfont{Z})$. Indeed, for each $(\ngraphfont{G}, \ngraphfont{B})$ in $\mathscr{N}\mathsf{graphs}(\lambda(\dynkinfont{Z}))$, we have \[ \tau\cdot(\ngraphfont{G}, \ngraphfont{B})= \begin{cases} R_\pi (\ngraphfont{G}, \ngraphfont{B}) & \text{ if }\tau\in \Z/2\Z;\\ R_{2\pi/3} (\ngraphfont{G}, \ngraphfont{B}) & \text{ if }\tau\in \Z/3\Z, \end{cases} \] where $R_\theta$ is the induced action on $N$-graphs with cycles from the $\theta$-rotation on $\mathbb{D}^2$. See Figure~\ref{figure:action on Ngraph of type A}.
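To see the claimed invariance at the level of braid words, note that (under our reading convention for the boundary braid, used here only as a heuristic) the $2\pi/m$-rotation of $\mathbb{D}^2$ shifts a cyclic braid word by one $m$-th of its length, and hence fixes every word which is an $m$-th power. For instance, writing $w=\sigma_2\sigma_1^3$ we have
\[
\beta(\dynkinfont{D}_4)=w\,w\,w \longmapsto w\,w\,w=\beta(\dynkinfont{D}_4)
\]
under the cyclic shift induced by $R_{2\pi/3}$, and similarly each braid of the form $w^2$ in the list above is fixed by the shift induced by $R_\pi$.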
\begin{figure}[ht] \subfigure[$\Z/2\Z$-action on $(\ngraphfont{G}, \ngraphfont{B})$\label{figure:action on Ngraph of type A}]{ \begin{tikzcd}[ampersand replacement=\&] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick] (0,0) circle (3); \fill[orange, opacity=0.2] (-90:3) arc(-90:90:3) -- cycle; \fill[violet, opacity=0.1] (90:3) arc(90:270:3) -- cycle; \foreach \i in {0,...,8} { \draw[blue,thick] ({\i*45}:3) -- ({\i*45}:2); } \draw[double] (0,0) node {$(\ngraphfont{G},\ngraphfont{B})$} circle (2); \end{tikzpicture} \arrow[r,"\tau",yshift=.5ex]\& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick] (0,0) circle (3); \fill[violet, opacity=0.1] (90:3) arc(90:-90:3) -- cycle; \fill[orange, opacity=0.2] (-90:3) arc(-90:-270:3) -- cycle; \foreach \i in {0,...,8} { \draw[blue,thick] ({\i*45}:3) -- ({\i*45}:2); } \draw[double] (0,0) node[rotate=180] {$(\ngraphfont{G},\ngraphfont{B})$} circle (2); \end{tikzpicture} \arrow[l, "\tau", yshift=-.5ex] \end{tikzcd} } \subfigure[$\Z/3\Z$-action on $(\ngraphfont{G}, \ngraphfont{B})$]{ \begin{tikzcd}[ampersand replacement=\&] \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick] (0,0) circle (3); \fill[orange, opacity=0.2] (0,0) -- (0:3) arc(0:120:3) -- cycle; \fill[violet, opacity=0.1] (0,0) -- (-120:3) arc(-120:0:3) -- cycle; \fill[blue, opacity=0.1] (0,0) -- (120:3) arc (120:240:3) -- cycle; \foreach \i in {0, 120, 240} { \begin{scope}[rotate=\i] \draw[blue, thick] (30:3) -- (30:2) (60:3) -- (60:2) (90:3) -- (90:2); \draw[red, thick] (0:3) -- (0:2); \end{scope} } \draw[double] (0,0) node {$(\ngraphfont{G},\ngraphfont{B})$} circle (2); \end{tikzpicture} \arrow[r,"\tau"]\& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick] (0,0) circle (3); \fill[orange, opacity=0.2] (0,0) -- (120:3) arc(120:240:3) -- cycle; \fill[violet, opacity=0.1] (0,0) -- (0:3) arc(0:120:3) -- cycle; \fill[blue, opacity=0.1] (0,0) -- (-120:3) arc (-120:0:3) -- cycle; \foreach \i in {0, 120, 240} { \begin{scope}[rotate=\i] \draw[blue, thick] (30:3) -- (30:2) (60:3) -- (60:2) (90:3) -- (90:2); \draw[red, thick] (0:3) -- (0:2); \end{scope} } \draw[double] (0,0) node[rotate=120] {$(\ngraphfont{G},\ngraphfont{B})$} circle (2); \end{tikzpicture} \arrow[r,"\tau"]\& \begin{tikzpicture}[baseline=-.5ex, scale=0.5] \draw[thick] (0,0) circle (3); \fill[orange, opacity=0.2] (0,0) -- (-120:3) arc(-120:0:3) -- cycle; \fill[violet, opacity=0.1] (0,0) -- (120:3) arc(120:240:3) -- cycle; \fill[blue, opacity=0.1] (0,0) -- (0:3) arc (0:120:3) -- cycle; \foreach \i in {0, 120, 240} { \begin{scope}[rotate=\i] \draw[blue, thick] (30:3) -- (30:2) (60:3) -- (60:2) (90:3) -- (90:2); \draw[red, thick] (0:3) -- (0:2); \end{scope} } \draw[double] (0,0) node[rotate=-120] {$(\ngraphfont{G},\ngraphfont{B})$} circle (2); \end{tikzpicture} \arrow[ll,"\tau", bend left=35] \end{tikzcd} } \caption{Rotation actions on $N$-graphs} \label{figure:rotation action} \end{figure} \subsubsection{Conjugation action} Assume that $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$ is one of five cases in the second row of Table~\ref{table:foldings}. We denote the generator for $G=\Z/2\Z$ by $\eta$. 
Then, as before, the Legendrian $\tilde\lambda(\dynkinfont{Z})$ is represented by the braid $\tilde\beta(\dynkinfont{Z})$, which is invariant under the conjugation as follows: \begin{align*} \tilde\beta(\dynkinfont{D}_{n+1})&=\sigma_2^n \sigma_{1,3}\sigma_2\sigma_{1,3}^3 \sigma_2\sigma_{1,3}\sigma_2^2\sigma_{1,3},& \tilde\beta(\dynkinfont{E}_6)&=\sigma_2^3 \sigma_{1,3}\sigma_2\sigma_{1,3}^4 \sigma_2\sigma_{1,3}\sigma_2^2\sigma_{1,3},\\ \tilde\beta(\widetilde{\dynE}_6)&=\sigma_2^4 \sigma_{1,3}\sigma_2\sigma_{1,3}^4 \sigma_2\sigma_{1,3}\sigma_2^2\sigma_{1,3},& \tilde\beta(\widetilde{\dynE}_7)&=\sigma_2^3 \sigma_{1,3}\sigma_2\sigma_{1,3}^5 \sigma_2\sigma_{1,3}\sigma_2^2\sigma_{1,3},& \tilde\beta(\widetilde{\dynD}_4)&=(\sigma_2\sigma_{1,3}\sigma_2\sigma_{1,3}^2)^2. \end{align*} Therefore, the generator $\eta$ acts on the set $\mathscr{N}\mathsf{graphs}(\tilde\lambda(\dynkinfont{Z}))$ by conjugation. That is, for each $(\ngraphfont{G}, \ngraphfont{B})\in\mathscr{N}\mathsf{graphs}(\tilde\lambda(\dynkinfont{Z}))$, we have \[ \eta\cdot (\ngraphfont{G}, \ngraphfont{B}) = \overline{(\ngraphfont{G}, \ngraphfont{B})}. \] \begin{remark} One may consider the conjugation-invariant degenerate $N$-graph $\tilde\ngraphfont{G}(\dynkinfont{A}_{2n-1})$ instead of the rotation-invariant $N$-graph $\ngraphfont{G}(\dynkinfont{A}_{2n-1})$ as seen earlier in Remark~\ref{remark:degenerated Ngraph of type A}. Then it can be checked that these two actions are identical. \end{remark} \begin{remark} The degenerate $N$-graph $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$ admits the $\pi$-rotation action as well, which is essentially equivalent to the conjugation action on $\tilde\ngraphfont{G}(\widetilde{\dynD}_4)$. We omit the details. \end{remark} \subsection{Invariant \texorpdfstring{$N$}{N}-graphs and Lagrangian fillings} Throughout this section, we assume that $(\dynkinfont{Z}, G, {\dynkinfont{Z}^G})$ is one of the triples in Table~\ref{table:foldings}. For an $N$-graph with cycles $(\ngraphfont{G}, \ngraphfont{B})$ with $\ngraphfont{G}$ in $\mathscr{N}\mathsf{graphs}(\lambda(\dynkinfont{Z}))$ or $\mathscr{N}\mathsf{graphs}(\tilde\lambda(\dynkinfont{Z}))$, we say that $(\ngraphfont{G}, \ngraphfont{B})$ is \emph{$G$-invariant} if for each $g\in G$, \[ g\cdot(\ngraphfont{G}, \ngraphfont{B}) = (\ngraphfont{G}, \ngraphfont{B}). \] Namely, \begin{enumerate} \item the $N$-graph $\ngraphfont{G}$ is invariant under the action of $g$, \item the sets of cycles $\ngraphfont{B}$ and $g(\ngraphfont{B})$ are identical up to relabeling $\gamma \leftrightarrow g(\gamma)$ for $\gamma\in\ngraphfont{B}$. \end{enumerate} The following statements are obvious but important observations. \begin{lemma}\label{lemma:Lagrangian fillings with symmetry} For a free $N$-graph $\ngraphfont{G}$, let \[ L(\ngraphfont{G})\colonequals(\pi\circ\iota)(\Lambda(\ngraphfont{G})) \] be the Lagrangian surface defined by $\ngraphfont{G}$ in $\mathbb{C}^2$. \begin{enumerate} \item If $\ngraphfont{G}$ is invariant under the $\theta$-rotation, then $L(\ngraphfont{G})$ is invariant under the $\theta$-rotation in $\mathbb{C}^2$ \[ (z_1,z_2)\mapsto (z_1\cos(\theta) +z_2\sin(\theta), -z_1\sin(\theta)+z_2\cos(\theta)). \] \item If $\ngraphfont{G}$ is invariant under the conjugation, then $L(\ngraphfont{G})$ is invariant under the antisymplectic involution in $\mathbb{C}^2$ \[ (z_1,z_2)\mapsto (\bar z_1, \bar z_2).
\] \end{enumerate} \end{lemma} \begin{lemma}\label{lemma:initial Ngraphs are G-invariant} The $N$-graphs $\ngraphfont{G}(\dynkinfont{Z})$ for $\dynkinfont{Z}=\dynkinfont{A}_{2n-1}, \dynkinfont{D}_4, \widetilde{\dynE}_6, \widetilde{\dynD}_{2n\ge 6}, \widetilde{\dynD}_4$ and the degenerate $N$-graphs $\tilde\ngraphfont{G}(\dynkinfont{Z})$ for $\dynkinfont{Z}=\dynkinfont{D}_{n+1}, \dynkinfont{E}_6, \widetilde{\dynE}_6, \widetilde{\dynE}_7, \widetilde{\dynD}_4$ are all invariant under the $G$-action. \end{lemma} \begin{lemma}\label{lemma:mutation preserves invariance} Suppose that $g\in G$ acts on $(\ngraphfont{G}, \ngraphfont{B})$. If the Legendrian mutation $\mu_{\gamma}(\ngraphfont{G}, \ngraphfont{B})$ is realizable, then \[ \mu_{g(\gamma)}\left(g\cdot(\ngraphfont{G},\ngraphfont{B})\right) = g\cdot\left(\mu_{\gamma}(\ngraphfont{G},\ngraphfont{B})\right). \] In particular, for a $G$-orbit $I\subset\ngraphfont{B}$ consisting of pairwise disjoint cycles, if $(\ngraphfont{G},\ngraphfont{B})$ is $G$-invariant and the Legendrian orbit mutation $\mu_{I}(\ngraphfont{G},\ngraphfont{B})$ is realizable, then $\mu_I(\ngraphfont{G},\ngraphfont{B})$ is $G$-invariant as well. \end{lemma} On the other hand, if we have a $G$-invariant $N$-graph $(\ngraphfont{G}, \ngraphfont{B})$ with cycles, it gives us a $G$-admissible quiver $\clusterfont{Q}(\ngraphfont{G}, \ngraphfont{B})$. \begin{lemma}\label{lemma:G-invariant Ngraphs imply G-admissible quivers} Let $(\ngraphfont{G}, \ngraphfont{B})$ be a $G$-invariant $N$-graph with cycles. Then the quiver $\clusterfont{Q}(\ngraphfont{G}, \ngraphfont{B})$ is $G$-admissible. \end{lemma} \begin{proof} By definition of $G$-invariance of $(\ngraphfont{G}, \ngraphfont{B})$, it is obvious that the quiver $\clusterfont{Q}=\clusterfont{Q}(\ngraphfont{G}, \ngraphfont{B})$ is $G$-invariant. On the other hand, since $\dynkinfont{Z}$ is either a finite or an affine Dynkin diagram, the $G$-invariance of the quiver $\clusterfont{Q}$ implies the $G$-admissibility of $\clusterfont{Q}$ by Theorem~\ref{thm_invariant_seeds_form_folded_pattern}. \end{proof} \begin{proposition}\label{proposition:G-invariant Ngraphs} For each $Y$-seed $(\bfy',\clusterfont{B}')$ of type ${\dynkinfont{Z}^G}$, there exists a $G$-invariant $N$-graph with cycles $(\ngraphfont{G}, \ngraphfont{B})$ of type $\dynkinfont{Z}$ such that \[ \Psi(\ngraphfont{G}, \ngraphfont{B})^G = (\bfy',\clusterfont{B}'). \] \end{proposition} \begin{proof} For each $\dynkinfont{Z}$, let $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ be the $N$-graph with cycles defined as follows: \begin{align*} (\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0}) \colonequals \begin{cases} (\ngraphfont{G}(\dynkinfont{Z}),\ngraphfont{B}(\dynkinfont{Z})) & \text{ if } \dynkinfont{Z}=\dynkinfont{A}_{2n-1}, \dynkinfont{D}_4, \widetilde{\dynE}_6, \widetilde{\dynD}_{2n\ge 6}, \widetilde{\dynD}_4, G\text{ acts as rotation};\\ (\tilde\ngraphfont{G}(\dynkinfont{Z}),\tilde\ngraphfont{B}(\dynkinfont{Z})) & \text{ if } \dynkinfont{Z}=\dynkinfont{D}_{n+1}, \dynkinfont{E}_6, \widetilde{\dynE}_6, \widetilde{\dynE}_7, \widetilde{\dynD}_4, G\text{ acts as conjugation}. \end{cases} \end{align*} We regard the $Y$-seed defined by $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ as the initial seed $(\bfy_{t_0}, \clusterfont{B}_{t_0})$ \[ (\bfy_{t_0}, \clusterfont{B}_{t_0})=\Psi(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0}).
\] As seen in Lemmas~\ref{lemma:initial Ngraphs are G-invariant} and \ref{lemma:G-invariant Ngraphs imply G-admissible quivers}, $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ is $G$-invariant and so is the quiver $\clusterfont{Q}(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$. Therefore, we have the folded seed $(\bfy_{t_0}, \clusterfont{B}_{t_0})^G$ which plays the role of the initial seed of the $Y$-pattern of type~${\dynkinfont{Z}^G}$. Let $(\bfy', \clusterfont{B}')$ be a $Y$-seed of the $Y$-pattern of type ${\dynkinfont{Z}^G}$. By Lemma~\ref{lemma:normal form}, there exist $r\in \Z$ and a sequence of mutations $\mu_{j_1}^{\dynkinfont{Z}^G}, \dots, \mu_{j_L}^{\dynkinfont{Z}^G}$ such that \[ (\bfy', \clusterfont{B}') = (\mu_{j_L}^{\dynkinfont{Z}^G}\cdots\mu_{j_1}^{\dynkinfont{Z}^G}) ((\mu_\clusterfont{Q}^{\dynkinfont{Z}^G})^r((\bfy_{t_0}, \clusterfont{B}_{t_0})^G)). \] Moreover, the indices $j_1,\dots, j_L$ miss at least one index, say $i$. Then Theorem~\ref{thm_invariant_seeds_form_folded_pattern} implies the existence of the $G$-admissible $Y$-seed $(\bfy, \clusterfont{B})$ of type $\dynkinfont{Z}$ such that $(\bfy,\clusterfont{B})^G=(\bfy',\clusterfont{B}')$ and \[ (\bfy,\clusterfont{B}) = (\mu_{I_L}^\dynkinfont{Z}\cdots\mu_{I_1}^\dynkinfont{Z}) ((\mu_\clusterfont{Q}^\dynkinfont{Z})^r(\bfy_{t_0},\clusterfont{B}_{t_0})), \] where $I_k$ is the $G$-orbit corresponding to $j_k$ for each $1\le k\le L$. It suffices to prove that the $N$-graph \[ (\ngraphfont{G}, \ngraphfont{B})=(\mu_{I_L}\cdots\mu_{I_1})((\mu_\ngraph)^r(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})) \] is well-defined and $G$-invariant so that $(\bfy, \clusterfont{B})=\Psi(\ngraphfont{G},\ngraphfont{B})$ is $G$-admissible by Proposition~\ref{proposition:equivariance of mutations} as desired. By Lemma~\ref{lemma:Legendriam Coxeter mutation of type An}, Propositions~\ref{proposition:effect of Legendrian Coxeter mutation}, \ref{proposition:coxeter realization D-type} and \ref{proposition:coxeter realization denegerate type}, the Legendrian Coxeter mutation $\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ is realizable so that \[ \Psi(\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})) =(\mu_\clusterfont{Q})^r(\bfy_{t_0},\clusterfont{B}_{t_0}). \] Since $\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ is the concatenation of Coxeter paddings on the initial $N$-graph $(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$, it suffices to prove that the Legendrian mutation $(\mu_{I_L}\cdots\mu_{I_1})(\mu_\ngraph^r(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0}))$ is realizable, which is equivalent to the realizability of $(\mu_{I_L}\cdots\mu_{I_1})(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$. On the other hand, since the indices $j_1,\dots, j_L$ miss the index $i$, the orbits $I_1,\dots, I_L$ miss the orbit $I$ corresponding to $i$. In other words, the sequence of mutations $\mu_{I_1},\dots,\mu_{I_L}$ can be performed inside the subgraph of the exchange graph $\exchange(\Phi(\dynkinfont{Z}))$, which is isomorphic to $\exchange(\Phi(\dynkinfont{Z} \setminus I))$. Then the root system $\Phi(\dynkinfont{Z}\setminus I)$ is decomposed into $\Phi(\dynkinfont{Z}^{(1)}), \dots, \Phi(\dynkinfont{Z}^{(\ell)})$, where $\dynkinfont{Z}\setminus I = \dynkinfont{Z}^{(1)}\cup\cdots\cup\dynkinfont{Z}^{(\ell)}$. Moreover, the sequence of mutations $\mu_{I_1},\dots,\mu_{I_L}$ can be decomposed into sequences $\mu^{(1)},\dots, \mu^{(\ell)}$ of mutations on $\dynkinfont{Z}^{(1)},\dots,\dynkinfont{Z}^{(\ell)}$.
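For concreteness, consider the triple $(\dynkinfont{A}_5, \Z/2\Z, \dynkinfont{B}_3)$, with the vertices of $\dynkinfont{A}_5$ labelled by $1,\dots,5$ along the path and the generator acting by $j\mapsto 6-j$; this labelling is chosen only for the purpose of this illustration. The $G$-orbits are $\{1,5\}$, $\{2,4\}$ and $\{3\}$, and removing one of them yields
\[
\dynkinfont{A}_5\setminus\{1,5\}\cong\dynkinfont{A}_3, \qquad
\dynkinfont{A}_5\setminus\{2,4\}\cong\dynkinfont{A}_1\cup\dynkinfont{A}_1\cup\dynkinfont{A}_1, \qquad
\dynkinfont{A}_5\setminus\{3\}\cong\dynkinfont{A}_2\cup\dynkinfont{A}_2.
\]
In the last case the two $\dynkinfont{A}_2$-components are interchanged by $G$, and any sequence of orbit mutations avoiding the orbit $\{3\}$ acts diagonally on the product $\exchange(\Phi(\dynkinfont{A}_2))\times\exchange(\Phi(\dynkinfont{A}_2))$.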
Similarly, we may decompose the $N$-graph $(\ngraphfont{G}_{t_0}, \ngraphfont{B}_{t_0})$ into $N$-subgraphs \[ (\ngraphfont{G}^{(1)}, \ngraphfont{B}^{(1)}),\dots,(\ngraphfont{G}^{(\ell)}, \ngraphfont{B}^{(\ell)}) \] along cycles in $I\subset\ngraphfont{B}_{t_0}$ as done in the previous section. Then the Legendrian mutation $(\mu_{I_L}\cdots\mu_{I_1})(\ngraphfont{G}_{t_0},\ngraphfont{B}_{t_0})$ is realizable if and only if so is $\mu^{(j)}(\ngraphfont{G}^{(j)},\ngraphfont{B}^{(j)})$ for each $1\le j\le \ell$. This can be done by induction on the rank of the root system and so the $N$-graph $(\ngraphfont{G}, \ngraphfont{B})$ with $\Psi(\ngraphfont{G},\ngraphfont{B})=(\bfy,\clusterfont{B})$ is well-defined. Finally, the $G$-invariance of $(\ngraphfont{G}, \ngraphfont{B})$ follows from Lemma~\ref{lemma:mutation preserves invariance}. \end{proof} \begin{theorem}[Folding of $N$-graphs]\label{thm:folding of N-graphs} The following hold: \begin{enumerate} \item The Legendrian $\lambda(\dynkinfont{A}_{2n-1})$ has $\binom{2n}{n}$ Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\dynkinfont{B}_n$. \item The Legendrian $\lambda(\dynkinfont{D}_{4})$ has $8$ Lagrangian fillings which are invariant under the $2\pi/3$-rotation and admit the $Y$-pattern of type $\dynkinfont{G}_2$. \item The Legendrian $\lambda(\widetilde{\dynE}_{6})$ has Lagrangian fillings which are invariant under the $2\pi/3$-rotation and admit the $Y$-pattern of type $\widetilde{\dynG}_2$. \item The Legendrian $\lambda(\widetilde{\dynD}_{2n})$ with $n\ge 3$ has Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\widetilde{\dynB}_n$. \item The Legendrian $\lambda(\widetilde{\dynD}_4)$ has Lagrangian fillings which are invariant under the $\pi$-rotation and admit the $Y$-pattern of type $\widetilde{\dynC}_2$. \item The Legendrian $\tilde\lambda(\dynkinfont{E}_{6})$ has $105$ Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{F}_4$. \item The Legendrian $\tilde\lambda(\dynkinfont{D}_{n+1})$ has $\binom{2n}{n}$ Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{C}_n$. \item The Legendrian $\tilde\lambda(\widetilde{\dynE}_{6})$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{E}_6^{(2)}$. \item The Legendrian $\tilde\lambda(\widetilde{\dynE}_{7})$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\widetilde{\dynF}_4$. \item The Legendrian $\tilde\lambda(\widetilde{\dynD}_4)$ has Lagrangian fillings which are invariant under the antisymplectic involution and admit the $Y$-pattern of type $\dynkinfont{A}_5^{(2)}$. \end{enumerate} \end{theorem} \begin{proof} Let $(\dynkinfont{Z}, G,{\dynkinfont{Z}^G})$ be one of the triples in Table~\ref{table:foldings}. By Proposition~\ref{proposition:G-invariant Ngraphs}, each $Y$-seed of the $Y$-pattern of type ${\dynkinfont{Z}^G}$ is realizable by a $G$-invariant $N$-graph, which gives us a Lagrangian filling with a certain symmetry by Lemma~\ref{lemma:Lagrangian fillings with symmetry}. This completes the proof. \end{proof}
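Let us also record a consistency check, not needed elsewhere, for the explicit counts appearing in the finite type cases above: by Fomin and Zelevinsky's enumeration of seeds in finite type, the exchange graph of a cluster pattern (equivalently, of a $Y$-pattern) of finite type $\Phi$ with exponents $e_1,\dots,e_n$ and Coxeter number $h$ has
\[
N(\Phi)=\prod_{i=1}^{n}\frac{e_i+h+1}{e_i+1}
\]
vertices. For $\dynkinfont{B}_n$ and $\dynkinfont{C}_n$ this number equals $\binom{2n}{n}$, for $\dynkinfont{G}_2$ (with $h=6$ and exponents $1,5$) it equals $\frac{8}{2}\cdot\frac{12}{6}=8$, and for $\dynkinfont{F}_4$ (with $h=12$ and exponents $1,5,7,11$) it equals $\frac{14}{2}\cdot\frac{18}{6}\cdot\frac{20}{8}\cdot\frac{24}{12}=105$, matching the numbers of invariant Lagrangian fillings in (1), (2), (6) and (7).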
\section{Introduction} The theory of $C^*$-algebras provides a vast noncommutative generalisation of the theory of locally compact topological spaces, and by imposing suitable additional structures, one obtains noncommutative (or quantum) analogues of more sophisticated topological spaces. Two very successful examples of this phenomenon are the theory of quantum groups \cite{kustermans-vaes-C*-lc, wor:cpqgrps}, which are noncommutative analogues of topological groups, and Connes' notion of spectral triples \cite{Con:NCG}, which are the noncommutative counterparts to (spin) Riemannian manifolds. In the same vein, it is also very natural to ask for a noncommutative generalisation of ordinary metric spaces, and Rieffel's seminal work \cite{Rie:MSS, Rie:GHD} provides a very satisfactory answer to this question. Rieffel's fundamental insight is that the right noncommutative counterpart to a metric on a compact topological space is a certain densely defined seminorm on a unital $C^*$-algebra, and he dubbed these structures \emph{compact quantum metric spaces}. Over the past 20 years, ample examples of compact quantum metric spaces have emerged, and the theory has been developed in several different directions through the works of many hands; see \cite{LatAgu:AF, BMR:DSS, HSWZ:STC, Ker:MQG, Lat:QGH, Li:GH-dist, Rie:MSA} and references therein. One of the most important features of the theory is that it admits a generalisation of the classical Gromov-Hausdorff distance \cite{edwards-GH-paper, gromov-groups-of-polynomial-growth-and-expanding-maps}, known as the \emph{quantum Gromov-Hausdorff distance} \cite{Rie:GHD}, which allows one to study the theory of quantum metric spaces from an analytic point of view, and thus ask questions pertaining to continuity and convergence. As an example of this, Rieffel showed in \cite{Rie:GHD} that the noncommutative tori $A_{\theta}$ admit a natural quantum metric structure and that they vary continuously in the deformation parameter $\theta$ with respect to the quantum Gromov-Hausdorff distance, and in \cite{Rie:MSG} that the so-called fuzzy spheres also admit a compact quantum metric structure with respect to which they converge to the classical 2-sphere $S^2$ as their linear dimension tends to infinity; for many more examples in this direction see for instance \cite{aguilar:thesis, kaad-kyed, Lat:AQQ, LatPack:Solenoids}. \\ The definition of compact quantum metric spaces is inspired by Connes' theory of noncommutative geometry, and the latter is therefore a natural source of many interesting examples, which may be viewed as the noncommutative counterparts of Riemannian manifolds when these are considered merely as metric spaces with their Riemannian metric. Despite a continuous effort over at least 30 years, it has proven quite difficult to reconcile the theory of quantum groups with Connes' noncommutative geometry, \cite{CoMo:TST, Maj:QNG}. In fact, even for the most fundamental example, Woronowicz' quantum $SU(2)$, there are still several competing candidates for good spectral triples, and it is not known which of these provide quantum $SU(2)$ with a quantum metric space structure, \cite{BiKu:DQQ,DLSSV:DOS,KaSe:TST,KRS:RFH,NeTu:DCQ}.
However, just as $SU(2)$ has the classical $2$-sphere as a homogeneous space, its quantised counterpart $SU_q(2)$ also has a quantised $2$-sphere $S_q^2$, known as the standard Podle{\'s} sphere, as a ``homogeneous space'' \cite{Pod:QS}, and the work of D\k{a}browski and Sitarz \cite{DaSi:DSP} provides $S_q^2$ with a spectral triple, which was shown in \cite{AgKa:PSM} to turn $S_q^2$ into a compact quantum metric space. This result provides the first genuine quantum analogue of a Riemannian geometry on $S_q^2$, and the most pertinent question to investigate at this point is therefore whether the quantised $2$-spheres $S_q^2$ converge to the classical round $2$-sphere as the deformation parameter $q$ tends to 1. The present paper answers this question in the affirmative: \begin{theoremletter}[{see Theorem \ref{thm:podles-converging-to-classical}}]\label{mainthm:A} As $q$ tends to 1, the Podle{\'s} spheres $S_q^2$ converge to the classical $2$-sphere $S^2$ in the quantum Gromov-Hausdorff distance. \end{theoremletter} One of the main ingredients in the proof of Theorem \ref{mainthm:A} is a quantum analogue of Rieffel's convergence result for fuzzy spheres mentioned above. More precisely, we introduce a sequence of finite dimensional compact quantum metric spaces $(F^N_q)_{N\in \mathbb{N}}$, which play the role of quantised counterparts to the classical fuzzy spheres, and prove the following result: \begin{theoremletter}[{see Theorem \ref{thm:fuzzy-to-podles}}]\label{mainthm:B} For each $q \in (0,1]$ the sequence of quantised fuzzy spheres $\left(F_q^N\right)_{N\in \mathbb{N}}$ converges to $S_q^2$ with respect to the quantum Gromov-Hausdorff distance. \end{theoremletter} For $q=1$, Theorem \ref{mainthm:B} provides a variation of Rieffel's result \cite[Theorem 3.2]{Rie:MSG}, in that it shows that the finite dimensional quantum metric spaces $F_1^N$ converge to the classical round 2-sphere as $N$ tends to infinity. Along the way, we also prove (see Proposition \ref{prop:quantum-fuzzy-to-classical-fuzzy}) that for fixed $N\in \mathbb{N}$, the 1-parameter family $(F_q^N)_{q\in (0,1]}$ of compact quantum metric spaces varies continuously in the quantum Gromov-Hausdorff distance. \\ The rest of the paper is structured as follows: in Section \ref{sec:prelim} we give a detailed introduction to quantum $SU(2)$, the noncommutative geometry of the standard Podle{\'s} sphere and compact quantum metric spaces, while Section \ref{sec:quantum-fuzzy-spheres} is devoted to introducing the quantised fuzzy spheres mentioned above. In Section \ref{sec:qGH-convergence} we carry out the main analysis and prove our convergence results. \subsection{Acknowledgements} The authors gratefully acknowledge the financial support from the Independent Research Fund Denmark through grant no.~9040-00107B and 7014-00145B. Furthermore, they would like to thank Marc Rieffel for pointing out the reference \cite{Sain:thesis}, and the anonymous referees for their careful reading of the manuscript. \section{Preliminaries}\label{sec:prelim} \subsection{Quantum $SU(2)$} Let us fix a $q \in (0,1]$. We consider the universal unital $C^*$-algebra $C(SU_q(2))$ with two generators $a$ and $b$ subject to the relations \[ \begin{split} & ba = q ab \quad b^* a = q ab^* \quad bb^* = b^* b \\ & a^* a + q^2 bb^* = 1 = aa^* + bb^* . \end{split} \] This unital $C^*$-algebra is referred to as \emph{quantum $SU(2)$} and was introduced by Woronowicz in \cite{Wor:UAC}.
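Before proceeding, let us record the classical case, a well-known observation which we include for orientation: for $q=1$ the relations above state exactly that $a$ and $b$ are normal, commute with each other and with each other's adjoints, and satisfy $a^*a+b^*b=1$. The resulting commutative $C^*$-algebra has Gelfand spectrum
\[
\big\{(\alpha,\beta)\in\mathbb{C}^2 \ \big| \ |\alpha|^2+|\beta|^2=1\big\}\cong SU(2), \qquad (\alpha,\beta)\longleftrightarrow \ma{cc}{\bar\alpha & -\beta \\ \bar\beta & \alpha},
\]
so that $C(SU_1(2))\cong C(SU(2))$, with $a$ and $b$ becoming the matrix coordinates appearing in the fundamental corepresentation unitary displayed below.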
Notice here that we are conforming with the notation from \cite{AgKa:PSM,DaSi:DSP}, which is also known as Majid's lexicographic convention, see \cite{Maj:NRS}. With these conventions the fundamental corepresentation unitary takes the form \[ u = \ma{cc}{ a^* & - qb \\ b^* & a } . \] We let $\C O(SU_q(2)) \subseteq C(SU_q(2))$ denote the unital $*$-subalgebra generated by $a$ and $b$ and refer to this unital $*$-subalgebra as the \emph{coordinate algebra}. The coordinate algebra can be given the structure of a Hopf $*$-algebra where the coproduct $\Delta \colon \C O(SU_q(2)) \to \C O(SU_q(2)) \otimes \C O(SU_q(2))$, antipode $S \colon \C O(SU_q(2)) \to \C O(SU_q(2))$ and counit $\epsilon \colon \C O(SU_q(2)) \to \mathbb{C}$ are defined on the fundamental unitary by \[ \Delta(u) = u \otimes u, \quad S(u) = u^* \quad \T{and} \quad \epsilon(u) = \ma{cc}{1 & 0 \\ 0 & 1} . \] For $q \neq 1$, we also consider the universal unital $*$-algebra $\C U_q( \G{su}(2))$ with generators $e,f,k,k^{-1}$ satisfying the relations \[ k k^{-1} = 1 = k^{-1} k \, \, , \, \, \, ek = qke \, \, , \, \, \, kf = qfk \, \, \T{ and } \, \, \, \frac{k^2 - k^{-2}}{q - q^{-1}} = fe - ef \] and with involution defined by $e^* = f$, $f^* = e$ and $k^* = k$. We refer to this unital $*$-algebra as the \emph{quantum enveloping algebra}. The quantum enveloping algebra also becomes a Hopf $*$-algebra with comultiplication, antipode and counit determined by \begin{align} \label{eq:envhopf} \Delta(e) &= e \otimes k + k^{-1} \otimes e & S(e) &= -q^{-1}e & \epsilon(e) &= 0 \notag \\ \Delta(f) &= f \otimes k + k^{-1} \otimes f & S(f) &= - qf & \epsilon(f) &= 0 \\ \Delta(k) &= k \otimes k& S(k) &= k^{-1} & \epsilon(k) &= 1 \notag \end{align} We are here again conforming with the notations from \cite{DaSi:DSP}. The quantum enveloping algebra $\C U_q(\G{su}(2))$ is seen to be isomorphic, as a Hopf algebra, to $\E{\u{U}}_q(\T{sl}_2)$ with generators $E,F,K$ from Klimyk and Schm\"udgen \cite[Chapter 3]{KlSc:QGR}, by using the dictionary $e \mapsto F$, $f \mapsto E$, $k \mapsto K$. For $q = 1$, we furthermore consider the \emph{universal enveloping Lie algebra} $\C U(\G{su}(2))$ with generators $e,f,h$ satisfying the relations \[ [h,e] = -2e \quad [h,f] = 2f \quad [f,e] = h \] and with involution defined by $e^* = f$, $f^* = e$ and $h^* = h$. It too becomes a Hopf $*$-algebra with comultiplication, antipode and counit given by \begin{align}\label{eq:envlie} \Delta(e) &= e \otimes 1 + 1 \otimes e & S(e) &= - e & \epsilon(e) &= 0 \notag \\ \Delta(f) &= f \otimes 1 + 1 \otimes f& S(f) &= - f & \epsilon(f) &= 0 \\ \Delta(h) &= h \otimes 1 + 1 \otimes h & S(h) &= - h & \epsilon(h) &= 0 \notag \end{align} Notice that $\C O(SU_q(2))$ agrees with the classical coordinate algebra $\C O(SU(2))$ for $q = 1$. However, for the quantum enveloping algebra the relationship is slightly more subtle: we obtain the classical universal enveloping Lie algebra by formally putting $h := 2 \log(k)/\log(q)$ so that $k = e^{\log(q) h/2}$ and then letting $\log(q)$ tend to zero. For more information on these matters, we refer to \cite[Section 3.1.3]{KlSc:QGR}. In order to unify our notation in the rest of the paper we apply the convention that $k = 1 \in \C U(\G{su}(2)) =: \C U_1(\G{su}(2))$.
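To see, at least formally, how the substitution $k=e^{\log(q)h/2}$ recovers the relations of $\C U(\G{su}(2))$, one may argue as follows (a heuristic computation, treating $k=q^{h/2}$ as a function of $h$): the relation $ek=qke$ becomes $e\, q^{h/2}=q^{(h+2)/2}\, e$, which is the exponentiated form of $eh=(h+2)e$, i.e.~of $[h,e]=-2e$, and similarly $kf=qfk$ corresponds to $[h,f]=2f$. Moreover, writing $q=e^t$ we obtain
\[
\frac{k^2-k^{-2}}{q-q^{-1}}=\frac{q^h-q^{-h}}{q-q^{-1}}=\frac{\sinh(th)}{\sinh(t)}\xrightarrow{\ t\to0\ } h,
\]
so that the remaining relation indeed becomes $fe-ef=[f,e]=h$ in the limit.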
\\ The algebras $ \C O(SU_q(2))$ and $\C U_q(\G{su}(2))$ are linked by a non-degenerate dual pairing of Hopf $*$-algebras $\inn{\cdot,\cdot} \colon \C U_q(\G{su}(2)) \times \C O(SU_q(2)) \to \mathbb{C}$, which for $q \neq 1$ is given by \begin{align*} \inn{k,a} &= q^{1/2} & \inn{e,a} &= 0 & \inn{f,a} &=0 \\ \inn{k,a^*} & = q^{-1/2} & \inn{e,a^*}&=0 & \inn{f,a^*} &=0 \\ \inn{k,b} & = 0 & \inn{e,b} &= -q^{-1} & \inn{f,b} &=0 \\ \inn{k,b^*} &=0 & \inn{e,b^*} &=0 & \inn{f,b^*} &= 1 \end{align*} In the case where $q = 1$, this pairing is determined by \begin{align*} \inn{h,a} &= 1 & \inn{e,a} &= 0 & \inn{f,a} &=0 \\ \inn{h,a^*} & = -1 & \inn{e,a^*}&=0 & \inn{f,a^*} &=0 \\ \inn{h,b} & = 0 & \inn{e,b} &= -1 & \inn{f,b} &=0 \\ \inn{h,b^*} &=0 & \inn{e,b^*} &=0 & \inn{f,b^*} &= 1 \end{align*} See \cite[Chapter 4, Theorem 21]{KlSc:QGR} for more details. \\ The above dual pairing of Hopf $*$-algebras yields a left and a right action of $\C U_q(\G{su}(2))$ on $\C O(SU_q(2))$. For each $\eta \in \C U_q(\G{su}(2))$ the corresponding (linear) endomorphisms of $\C O(SU_q(2))$ are defined by \[ \delta_\eta(x) := (\inn{\eta,\cdot} \otimes 1)\Delta(x) \qquad \T{ and } \qquad \partial_\eta(x) := (1 \otimes \inn{\eta,\cdot})\Delta(x), \] respectively. \\ As it turns out, we shall need a rather detailed description of the irreducible representations of $\C U_q(\G{su}(2))$, and to this end the following notation is convenient: for $q \in (0,1]$ and $n \in \mathbb{N}$, we define \[ \inn{n} := \sum_{m = 0}^{n-1} q^{2m} . \] We also put $\inn{0} := 0$. For $q=1$ we of course have $\inn{n}=n$, and for $q \neq 1$, the relationship with the usual $q$-integers $[n] := \tfrac{q^n - q^{-n}}{q - q^{-1}}$ is given by $\inn{n} = q^{n-1} [n]$. \\ For each $n \in \mathbb{N}_0$ we have an irreducible $*$-representation of $\C U_q(\G{su}(2))$ on the Hilbert space $\mathbb{C}^{n+1}$ with standard orthonormal basis $\{e_j\}_{j = 0}^n$. This irreducible $*$-representation is given on generators by \[ \begin{split} \sigma_n(k)(e_j) & = q^{j-n/2} \cdot e_j \\ \sigma_n(e)(e_j) &= q^{\frac{1 - n}{2}} \sqrt{ \inn{n - j +1} \inn{j} } \cdot e_{j-1} \\ \sigma_n(f)(e_j) & = q^{\frac{1-n}{2}} \sqrt{ \inn{n-j} \inn{j+1} } \cdot e_{j + 1} \end{split} \] in the case where $q \neq 1$ and by \[ \begin{split} \sigma_n(h)(e_j) & = (2j -n) \cdot e_j \\ \sigma_n(e)(e_j) &= \sqrt{ (n - j + 1) j } \cdot e_{j-1} \\ \sigma_n(f)(e_j) & = \sqrt{ (n-j) (j+1) } \cdot e_{j + 1} \end{split} \] in the case where $q = 1$, see \cite[Chapter 3, Theorem 13]{KlSc:QGR}. The above sequence of irreducible $*$-representations together with the non-degenerate pairing $\inn{\cdot,\cdot} \colon \C U_q(\G{su}(2)) \times \C O(SU_q(2))\to \mathbb{C}$ gives rise to a complete set of irreducible corepresentation unitaries $u^n \in \mathbb{M}_{n+1}( \C O(SU_q(2)))$, $n \in \mathbb{N}_0$. Indeed, the entries in $u^n$ are characterised by the identity \[ \sigma_n(\eta)(e_j) = \sum_{i = 0}^n \inn{ \eta, u^n_{ij}} \cdot e_i , \] which holds for all $\eta \in \C U_q(\G{su}(2))$ and all $j \in \{0,1,\ldots,n\}$, see \cite[Chapter 4, Proposition 16 \& 19]{KlSc:QGR} for this. We record that $u^0 = 1$ and $u^1 = u$. Notice here that we are applying a different convention than Klimyk and Schm\"udgen \cite{KlSc:QGR} who denote the unitary corepresentations by $\{t^l\}_{l \in \frac{1}{2} \mathbb{N}_0}$. The relationship with our notation can be summarised by the identities $u^n_{ij} = t^{n/2}_{i-n/2,j-n/2}$ for $n \in \mathbb{N}_0$ and $i,j \in \{0,1,\ldots,n\}$. 
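As a small consistency check on our conventions (included only for the reader's convenience), one may compare the case $n=1$ with the dual pairing above: since $u^1=u$ we have $u^1_{00}=a^*$, $u^1_{01}=-qb$, $u^1_{10}=b^*$ and $u^1_{11}=a$, and for $q\neq1$ the defining identity $\sigma_1(\eta)(e_j)=\sum_{i}\inn{\eta,u^1_{ij}}\cdot e_i$ reproduces
\[
\inn{k,a^*}=q^{-1/2},\qquad \inn{k,a}=q^{1/2},\qquad \inn{e,-qb}=(-q)\cdot(-q^{-1})=1,\qquad \inn{f,b^*}=1,
\]
in agreement with $\sigma_1(k)(e_j)=q^{j-1/2}\cdot e_j$, $\sigma_1(e)(e_1)=e_0$ and $\sigma_1(f)(e_0)=e_1$, using that $\inn{1}=1$.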
\\ When $q \neq 1$, we obtain from the definition of the irreducible corepresentation unitaries the following formulae \begin{equation}\label{eq:exppai} \begin{split} \inn{k,u^n_{ij}} & = \delta_{ij} \cdot q^{j -n/2} \\ \inn{e,u^n_{ij}} &= \delta_{i,j - 1} \cdot q^{\frac{1 - n}{2}} \sqrt{ \inn{n - j +1} \inn{j} } \\ \inn{f,u^n_{ij}} & = \delta_{i,j + 1} \cdot q^{\frac{1-n}{2}} \sqrt{ \inn{n-j} \inn{j+1} } \end{split} \end{equation} describing the pairing between the entries of the unitaries and the generators for $\C U_q(\G{su}(2))$. For $q = 1$, we have the same formulae for $\inn{e, u^n_{ij}}$ and $\inn{f,u^n_{ij}}$ but we moreover have that $\inn{h,u^n_{ij}} = \delta_{ij} \cdot (2j - n)$.\\ The left multiplication of the generators of $\C O(SU_q(2))$ on the entries of the irreducible unitary corepresentations $u^n$ is computed explicitly here below, using the convention that $u_{ij}^n:=0$ if $n < 0$ or $(i,j) \notin \{0,1,\ldots,n\}^2$: \begin{equation}\label{eq:leftmult} \begin{split} a^* \cdot u^n_{ij} & = q^{i + j} \tfrac{ \sqrt{\inn{n - i + 1} \inn{n-j+1}}}{ \inn{n+1}} \cdot u^{n+1}_{ij} + \tfrac{\sqrt{\inn{i} \inn{j}}}{\inn{n+1}} \cdot u^{n-1}_{i-1,j-1} \\ b^* \cdot u^n_{ij} & = q^j \tfrac{ \sqrt{\inn{i+1}\inn{n-j + 1}}}{\inn{n+1}} \cdot u^{n+1}_{i+1,j} - q^{i+1} \tfrac{ \sqrt{\inn{n-i} \inn{j}}}{\inn{n+1}} \cdot u^{n-1}_{i,j-1} \\ a \cdot u^n_{ij} & = \tfrac{ \sqrt{\inn{i + 1} \inn{j+1}}}{ \inn{n+1}} \cdot u^{n+1}_{i+1,j+1} + q^{i+j+2}\tfrac{\sqrt{\inn{n-i} \inn{n-j}}}{\inn{n+1}} \cdot u^{n-1}_{ij} \\ b \cdot u^n_{ij} & = - q^{i-1}\tfrac{\sqrt{\inn{j+1} \inn{n-i +1}}}{\inn{n+1}} \cdot u^{n+1}_{i,j+1} + q^j \tfrac{ \sqrt{\inn{n -j} \inn{i}}}{ \inn{n+1}} \cdot u^{n-1}_{i-1,j} . \end{split} \end{equation} These formulae can be derived from the $q$-Clebsch-Gordan coefficients and we refer the reader to \cite[Section 3]{DLSSV:DOS} and \cite[Chapter 3.4]{KlSc:QGR} for more information on these matters.\\ The Haar state $h \colon C(SU_q(2)) \to \mathbb{C}$ on the $C^*$-completion is determined by the identities \[ h(1) =1 \quad \T{and} \quad h(u^n_{ij}) = 0, \] for all $n \in \mathbb{N}$ and all $i,j \in \{0,1,\ldots,n\}$, see \cite[Chapter 4, Equation (50)]{KlSc:QGR}. The Haar state is a twisted trace on $\C O(SU_q(2))$ with respect to the algebra automorphism $\nu \colon \C O(SU_q(2)) \to \C O(SU_q(2))$, which on the matrix coefficients is given by \begin{align}\label{eq:modular-function} \nu(u^n_{ij}) = q^{2(n - i - j)} \cdot u^n_{ij}. \end{align} That is, we have that \begin{align}\label{eq:modular} h(x y) = h(\nu(y) x) \end{align} for all $x,y \in \C O(SU_q(2))$, see \cite[Chapter 4, Proposition 15]{KlSc:QGR}. The Haar state is faithful and we let $L^2(SU_q(2))$ denote the Hilbert space completion of the $C^*$-algebra $C(SU_q(2))$ with respect to the induced inner product \[ \inn{x,y} := h(x^* y), \quad x,y \in C(SU_q(2)) . \] The corresponding GNS-representation is denoted by \[ \rho \colon C(SU_q(2)) \to \B B( L^2(SU_q(2))) . \] The entries of the irreducible unitary corepresentations $u^n_{ij}$ yield an orthogonal basis for $L^2(SU_q(2))$. Their norms are determined by \begin{equation}\label{eq:haarmatrixI} \inn{u^n_{ij},u^n_{ij}} = h( (u^n_{ij})^* u^n_{ij}) = \frac{q^{2(n-i)}}{\inn{n + 1} } , \end{equation} see \cite[Chapter 4, Theorem 17]{KlSc:QGR}. It is also convenient to record that \begin{equation}\label{eq:haarmatrixII} \inn{ (u^n_{ij})^* , (u^n_{ij})^*} = h( u^n_{ij} (u^n_{ij})^*) = \frac{q^{2j}}{\inn{n + 1} } .
\end{equation} Finally, we remark that for each $\eta \in \C U_q(\G{su}(2))$ one has $h\circ \delta_\eta=h\circ \partial_\eta=\eta(1)\cdot h$ which follows directly from the bi-invariance of the Haar state. \subsection{The D\k{a}browski-Sitarz spectral triple} In the previous section we saw that the dual pairing gives rise to an action of $\C U_q(\G{su}(2))$ on $\C O(SU_q(2))$ by linear endomorphisms, so in particular we obtain three linear maps \[ \partial_e, \partial_f, \partial_k \colon \C O(SU_q(2)) \to \C O(SU_q(2)) \] from the generators $e,f,k \in \C U_q(\G{su}(2))$ of the quantum enveloping algebra. Using the pairing between $\C U_q(\G{su}(2))$ and $\C O(SU_q(2))$ together with the formulas \eqref{eq:envhopf} one sees that $\partial_k$ is an algebra automorphism and that $\partial_e$ and $\partial_f$ are twisted derivations, in the sense that \begin{equation}\label{eq:derder} \begin{split} \partial_e(x \cdot y) & = \partial_e(x) \partial_k(y) + \partial_k^{-1}(x) \partial_e(y), \\ \partial_f(x \cdot y) & = \partial_f(x) \partial_k(y) + \partial_k^{-1}(x) \partial_f(y) \end{split} \end{equation} for all $x,y \in \C O(SU_q(2))$; i.e.~the twist is determined by the algebra automorphism $\partial_k$. The behaviour of our three operations with respect to the involution on $\C O(SU_q(2))$ is also determined by \eqref{eq:envhopf} and the fact that we have a dual pairing of Hopf $*$-algebras: \begin{equation}\label{eq:derstar} \partial_e(x^*) = - q^{-1} \cdot \partial_f(x)^* \, \, , \, \, \, \partial_f(x^*) = -q \partial_e(x)^* \, \, \T{and} \, \, \, \partial_k(x^*) = \partial_k^{-1}(x)^* \end{equation} for all $x \in \C O(SU_q(2))$. For $q = 1$ we emphasise that our conventions imply that $\partial_k = \partial_1 = \T{id} \colon \C O(SU(2)) \to \C O(SU(2))$. In this case, both $\partial_e$ and $\partial_f$ are simply derivations on $\C O(SU(2))$, but we also have a third interesting derivation namely $\partial_h \colon \C O(SU(2)) \to \C O(SU(2))$ coming from the third generator $h \in \C U(\G{su}(2))$. The interaction between $\partial_h$ and the involution is encoded by the formula $\partial_h(x^*) = - \partial_h(x)^*$. \\ It is convenient to specify the explicit formulae \begin{align}\label{eq:derexp} \partial_k(a) &= q^{1/2} a & \partial_e(a) &= b^* & \partial_f(a) &= 0 \notag \\ \partial_k(a^*) &= q^{-1/2} a^* & \partial_e(a^*) &= 0 & \partial_f(a^*) &= -q b\\ \partial_k(b) &= q^{1/2} b & \partial_e(b) &= -q^{-1} a^* & \partial_f(b)&=0 \notag\\ \partial_k(b^*) &= q^{-1/2} b^* & \partial_e(b^*)&=0& \partial_f(b^*) &= a \notag \end{align} explaining the behaviour of the algebra automorphism $\partial_k$ and the two twisted derivations $\partial_e$ and $\partial_f$ on the generators for the coordinate algebra $\C O(SU_q(2))$. For $q = 1$, our extra derivation $\partial_h \colon \C O(SU(2)) \to \C O(SU(2))$ is given explicitly on the generators by \[ \partial_h(a) = a \, \, , \, \, \, \partial_h(b) = b \, \, , \, \, \, \partial_h(a^*) = - a^* \, \, \, \T{and} \, \, \, \partial_h(b^*) = - b^* . \] For each $n \in \mathbb{Z}$, we let $\C A_n \subseteq \C O(SU_q(2))$ denote the $n$'th spectral subspace coming from the strongly continuous circle action $\sigma_L \colon S^1 \times \C O(SU_q(2)) \to \C O(SU_q(2))$ determined on the generators by $(z,a) \mapsto z \cdot a$ and $(z,b) \mapsto z \cdot b$. Thus, we let \[ \C A_n := \big\{ x \in \C O(SU_q(2)) \mid \sigma_L(z,x) = z^n \cdot x \, \, , \, \, \, \forall z \in S^1 \big\} . 
\] In particular, we have the fixed point algebra $\C A_0 \subseteq \C O(SU_q(2))$, which is referred to as the \emph{coordinate algebra} for the standard \emph{Podle\'s sphere} and we apply the notation \[ \C O(S_q^2) := \C A_0 = \big\{ x \in \C O(SU_q(2)) \mid \sigma_L(z,x) = x \, \, , \, \, \, \forall z \in S^1 \big\} . \] The \emph{standard Podle\'s sphere} is defined as the $C^*$-completion of $\C O(S_q^2)$ using the $C^*$-norm inherited from $C(SU_q(2))$ and we apply the notation $C(S_q^2) \subseteq C(SU_q(2))$ for this unital $C^*$-algebra. As the name suggests, the standard Podle\'s sphere was introduced by Podle\'s in \cite{Pod:QS} together with a whole range of ``non-standard'' Podle\'s spheres which we are not considering here. We shall also refer to the (standard) Podle{\'s} sphere as the \emph{quantised 2-sphere} or the \emph{$q$-deformed 2-sphere}, whenever linguistically convenient. \\ The coordinate algebra $\C O(S_q^2)$ is generated by the elements \[ A := bb^* \quad B = ab^* \quad B^* = ba^* \] and the following set \[ \big\{ A^i B^j, A^i (B^*)^k \mid i,j \in \mathbb{N}_0 \, , \, \, k \in \mathbb{N} \big\} \] constitutes a vector space basis for this coordinate algebra, see \cite[Theorem 1.2]{Wor:UAC}. For $q \neq 1$, it therefore follows that $\partial_k$ fixes $\C O(S_q^2)$, and another application of \cite[Theorem 1.2]{Wor:UAC} shows that this is actually an alternative description of the coordinate algebra: \[ \C O(S_q^2) = \big\{ x \in \C O(SU_q(2)) \mid \partial_k(x) = x \big\} . \] In terms of the irreducible unitary corepresentations $\{u^n\}_{n = 0}^\infty$, the coordinate algebra $\C O(S_q^2)$ can be described as \begin{equation}\label{eq:midcol} \T{span}\big\{ u^{2m}_{im} \mid m \in \mathbb{N}_0 \, , \,\, i \in \{0,1,\ldots,2m\} \big\} . \end{equation} Indeed, one may use the description of the pairing with $k \in \C U_q(\G{su}(2))$ from \eqref{eq:exppai} to obtain the formula $\partial_k(u^n_{ij}) = q^{j -n/2} u^n_{ij}$ for all $n \in \mathbb{N}_0$ and all $i,j \in \{0,1,\ldots,n\}$. \\ We let $H_n$ denote the Hilbert space completion of $\C A_n$ with respect to the inner product inherited from $L^2(SU_q(2))$ and we put $H_+ := H_1$ and $H_- := H_{-1}$. We consider the direct sum $H_+ \oplus H_-$ as a $\mathbb{Z}/2\mathbb{Z}$-graded Hilbert space with grading operator $\gamma = \ma{cc}{1 & 0 \\ 0 & -1}$. The restriction of the GNS-representation $\rho \colon C(SU_q(2)) \to \B B\big(L^2(SU_q(2))\big)$ to the standard Podle\'s sphere then provides us with an injective, even $*$-homomorphism \[ \pi \colon C(S_q^2) \to \B B(H_+ \oplus H_-) \quad \T{ given by } \quad \pi(x) := \ma{cc}{\rho(x)\vert_{H_+} & 0 \\ 0 & \rho(x)\vert_{H_{-}}} . \] The definition of the circle action together with the identities in \eqref{eq:derder} and \eqref{eq:derexp} entail that \begin{equation}\label{eq:dercirc} \sigma_L(z, \partial_e(x)) = z^{-2} \partial_e\big( \sigma_L(z,x) \big) \quad \T{and} \quad \sigma_L(z, \partial_f(x)) = z^2 \partial_f\big( \sigma_L(z,x) \big) \end{equation} for all $z \in S^1$ and $x \in \C O(SU_q(2))$. In particular, we obtain two unbounded operators \[ \C E \colon \C A_1 \to H_- \quad \T{and} \quad \C F \colon \C A_{-1} \to H_+ \] agreeing with the restrictions $\partial_e \colon \C A_1 \to \C A_{-1}$ and $\partial_f \colon \C A_{-1} \to \C A_1$ followed by the relevant inclusions. 
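\\ Two elementary consistency checks on the formulae of this section can be automated. First, unitarity of $u^n$ gives $\sum_{i = 0}^n (u^n_{ij})^* u^n_{ij} = 1$, so the squared norms $q^{2(n-i)}/\inn{n+1}$ from \eqref{eq:haarmatrixI} must sum to $1$ over $i$, i.e. $\sum_{i = 0}^n q^{2(n-i)} = \inn{n+1}$. Secondly, the assignment of $\sigma_L$-degrees to the generators reduces the claims $A, B, B^* \in \C A_0$ and the degree shifts in \eqref{eq:dercirc} to bookkeeping. The following sketch (an illustration only, in plain Python) records both checks.
\begin{verbatim}
def bracket(n, q):
    # <n> = sum_{m=0}^{n-1} q^{2m}
    return sum(q ** (2 * m) for m in range(n))

# unitarity of u^n together with h(1) = 1 and eq. (haarmatrixI) forces
# sum_{i=0}^{n} q^{2(n-i)} = <n+1>:
q, n = 0.6, 5
assert abs(sum(q ** (2 * (n - i)) for i in range(n + 1))
           - bracket(n + 1, q)) < 1e-12

# sigma_L-degrees of the generators: deg(a) = deg(b) = 1 and
# deg(a*) = deg(b*) = -1; a monomial lies in A_m iff its degrees sum to m.
DEG = {'a': 1, 'b': 1, 'a*': -1, 'b*': -1}
degree = lambda word: sum(DEG[letter] for letter in word)

# A = b b*, B = a b* and B* = b a* lie in the fixed point algebra A_0:
assert all(degree(w) == 0 for w in (['b', 'b*'], ['a', 'b*'], ['b', 'a*']))

# eq. (dercirc): partial_e lowers the sigma_L-degree by two, as seen on
# the generators, e.g. partial_e(a) = b* with deg(b*) = deg(a) - 2:
assert degree(['b*']) == degree(['a']) - 2
\end{verbatim}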
An application of the identities \[ h(\partial_e(x) ) = 0 = h(\partial_f(x)) , \quad x \in \C O(SU_q(2)) , \] together with the identities in \eqref{eq:derder} and \eqref{eq:derstar} shows that $\C F \subseteq \C E^*$ and $\C E \subseteq \C F^*$. We let \[ E \colon \T{Dom}(E) \to L^2(SU_q(2)) \quad \T{and} \quad F \colon \T{Dom}(F) \to L^2(SU_q(2)) \] denote the closures of $\C E$ and $\C F$, respectively. The $q$-deformed \emph{Dirac operator} $D_q \colon \T{Dom}(D_q) \to H_+ \oplus H_-$ is the odd unbounded operator given by \[ D_q := \ma{cc}{0 & F \\ E & 0} \quad \T{with} \quad \T{Dom}(D_q) := \T{Dom}(E) \oplus \T{Dom}(F), \] and the main result from \cite{DaSi:DSP} is the following: \begin{thm}\cite[Theorem 8]{DaSi:DSP}\label{t:spectrip} The triple $(\C O(S_q^2), H_+ \oplus H_-, D_q)$ is an even spectral triple. \end{thm} The commutator with the Dirac operator $D_q \colon \T{Dom}(D_q) \to H_+ \oplus H_-$ induces a $*$-derivation $\partial \colon \C O(S_q^2) \to \B B(H_+ \oplus H_-)$, and since $D_{q}$ is odd, the $2\times 2$-matrix representing $\partial(x)$ is off-diagonal, and we denote it as follows: \[ \partial(x) = \ma{cc}{0 & \partial_2(x) \\ \partial_1(x) & 0}. \] The $*$-derivation $\partial \colon \C O(S_q^2) \to \B B(H_+ \oplus H_-)$ is closable since the Dirac operator $D_q$ is selfadjoint and therefore in particular closed. For $x\in \C O(S_q^2)$, an application of the twisted Leibniz rule \eqref{eq:derder} yields that $\partial_1(x)=q^{1/2}\rho(\partial_e(x))\vert_{H_+}$ and $\partial_2(x)=q^{-1/2}\rho(\partial_f(x))\vert_{H_-}$, and in the sequel we will therefore often think of $\partial_1$ and $\partial_2$ as the derivations \[ q^{1/2}\partial_e, q^{-1/2}\partial_f \colon \C O(S_q^2)\to \C O(SU_q(2)) \] rather than their representations as bounded operators. \begin{remark} In \cite{DaSi:DSP} the even spectral triple on $\C O(S_q^2)$ is also equipped with an extra antilinear operator $J \colon H_+ \oplus H_- \to H_+ \oplus H_-$ (the reality operator). It is then verified that the even spectral triple is in fact both real and $\C U_q(\G{su}(2))$-equivariant and that these properties determine the spectral triple up to a non-trivial scalar $z \in \mathbb{C} \setminus \{0\}$. The description of the Dirac operator in terms of the dual pairing of Hopf $*$-algebras, which we are using here, can be found in \cite[Section 3]{NeTu:LFQ}. \end{remark} \begin{remark}\label{r:dirac} In the case where $q = 1$, it can be proved that the direct sum of Hilbert spaces $H_+ \oplus H_-$ agrees with the $L^2$-sections of the spinor bundle $S_+ \oplus S_- \to S^2$ on the $2$-sphere. Letting $\Gamma^\infty(S_+ \oplus S_-)$ denote the smooth sections of the spinor bundle it can moreover be verified that the unbounded selfadjoint operator $D_1 \colon \T{Dom}(D_1) \to H_+ \oplus H_-$ agrees with the closure of the Dirac operator $\C D : \Gamma^\infty(S_+ \oplus S_-) \to \Gamma^\infty(S_+ \oplus S_-)$ upon considering $\C D$ as an unbounded operator on $H_+ \oplus H_-$; see e.g.~\cite[Section 3.5]{Friedrich:Dirac}. For more information on the spin geometry of the $2$-sphere, we refer the reader to \cite[Chapter 9A]{GrVaFi:ENG}. \end{remark} \subsection{Compact quantum metric spaces}\label{subsec:cqms} In this section we gather the necessary background material concerning compact quantum metric spaces. These are the natural noncommutative analogues of classical compact metric spaces, and were introduced by Rieffel \cite{Rie:MSS, Rie:GHD} around the year 2000.
The basic idea is that the noncommutative counterpart to a classical metric is captured by a certain seminorm, the domain of which can be chosen in several ways, leading to slight variations of the same theory. Rieffel's original theory \cite{Rie:GHD} is formulated in the language of order unit spaces, but one may equally well take a $C^*$-algebraic setting as the point of departure \cite{Li:CQG, Li:GH-dist, Rie:MSS}. We will here present a generalisation of the $C^*$-algebraic setting and instead take concrete operator systems as our starting point.\\ Let $X$ be a concrete operator system; thus, for our purposes, $X$ is a closed subspace of a specified unital $C^*$-algebra such that $X$ is stable under the adjoint operation and contains the unit. The operator system $X$ has a state space $\C S(X)$ consisting of all the positive linear functionals preserving the unit. \begin{dfn}\label{def:cqms} A \emph{compact quantum metric space} is a concrete operator system $X$ equipped with a seminorm $L\colon X \to [0,\infty]$ satisfying the following: \begin{itemize} \item[(i)] One has $L(x)=0$ if and only if $x\in \mathbb{C}\cdot 1$. \item[(ii)] The set $\T{Dom}(L):=\{x\in X \mid L(x)<\infty\}$ is dense in $X$ and $L$ satisfies that $L(x^*)=L(x)$ for all $x \in X$. \item[(iii)] The function $\rho_L(\mu,\nu):=\sup\{|\mu(x)-\nu(x)| \mid L(x)\leq 1\}$ defines a metric on the state space $\C S(X)$ which metrises the weak$^*$-topology. \end{itemize} In this case, the seminorm $L$ is referred to as a \emph{Lip-norm} and the corresponding metric is referred to as the \emph{Monge-Kantorovi\v{c} metric}. \end{dfn} A compact quantum metric space $(X,L)$ has an associated order unit space $A:=\{x\in X_{{\operatorname{sa}}} \mid L(x)<\infty \}= X_{\T{sa}}\cap \T{Dom}(L)$, where both the order and the unit are inherited from the ambient unital $C^*$-algebra. In the setting of order unit spaces, the notion of a state also makes sense, and we note that the restriction map provides an identification $\C S(X)\cong \C S(A)$. Moreover, it is easy to verify that $(A,L\vert_A)$ is an \emph{order unit compact quantum metric space} in the sense of Rieffel; see \cite{Rie:MSA, Rie:MSS, Rie:GHD}. The restriction of $L$ to $A$ defines a Monge-Kantorovi\v{c} metric on $\C S(A)$ by the obvious modification of the formula in (iii), and the requirement that $L$ be invariant under the involution implies that the identification of state spaces $\C S(X) \cong \C S(A)$ becomes an isometry for the Monge-Kantorovi\v{c} metrics; for more details on these matters, see \cite[Section 2]{kaad-kyed}.\\ The canonical commutative example upon which the above definition is modelled arises by considering a compact metric space $(M,d)$ and its associated $C^*$-algebra $C(M)$, which can be endowed with a seminorm by setting \[ L_d(f):=\sup\left\{\frac{|f(p)-f(q)|}{d(p,q)} \mid p,q\in M, p\neq q\right\}. \] Then $L_d(f)$ is finite exactly when $f$ is Lipschitz continuous, in which case $L_d(f)$ is the Lipschitz constant. By results of Kantorovi\v{c} and Rubin\v{s}te\u{\i}n \cite{KaRu:FSE,KaRu:OSC}, one has that $\rho_{L_d}$ metrises the weak$^*$-topology on $\C S(C(M))$, so that $(C(M),L_d)$ is indeed a compact quantum metric space, and that the restriction of $\rho_{L_d}$ to $M\subseteq \C S(C(M))$ agrees with the metric $d$. \\ At first glance, it might seem like a difficult task to verify that the function $\rho_L$ metrises the weak$^*$-topology, but this can in fact be reduced to a compactness question, as Theorem \ref{t:totallybdd} below shows.
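\\ Before turning to that criterion, we remark that the commutative example becomes completely explicit when $M$ is finite: the Monge-Kantorovi\v{c} distance is then the value of a finite-dimensional linear programme. The following sketch is an illustration only (assuming a Python environment with NumPy and SciPy available); it computes $\rho_{L_d}$ between two Dirac states on a three-point metric space and confirms that it recovers $d$, in line with the Kantorovi\v{c}-Rubin\v{s}te\u{\i}n results quoted above.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# a three-point metric space (M, d)
d = np.array([[0.0, 1.0, 1.5],
              [1.0, 0.0, 2.0],
              [1.5, 2.0, 0.0]])
n = d.shape[0]

def mk_distance(mu, nu):
    # maximise (mu - nu)(f) over f with L_d(f) <= 1, i.e. subject to
    # f(x) - f(y) <= d(x, y) for all x != y; states annihilate the
    # constants, so we may normalise f(x_0) = 0
    A_ub, b_ub = [], []
    for x in range(n):
        for y in range(n):
            if x != y:
                row = np.zeros(n)
                row[x], row[y] = 1.0, -1.0
                A_ub.append(row)
                b_ub.append(d[x, y])
    res = linprog(-(mu - nu), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 0.0)] + [(None, None)] * (n - 1))
    return -res.fun  # linprog minimises, so flip the sign back

# for Dirac states the Monge-Kantorovic distance recovers the metric:
delta_0 = np.array([1.0, 0.0, 0.0])
delta_1 = np.array([0.0, 1.0, 0.0])
assert abs(mk_distance(delta_0, delta_1) - d[0, 1]) < 1e-9
\end{verbatim}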
\begin{thm}[{\cite[Theorem 1.8]{Rie:MSA}}]\label{t:totallybdd} Let $X$ be a concrete operator system and $L\colon X\to [0,\infty]$ a seminorm satisfying (i) and (ii) from Definition \ref{def:cqms}. Then $(X,L)$ is a compact quantum metric space if and only if the image of the Lip-unit ball $\{x\in X\mid L(x)\leq 1\}$ under the quotient map $X\to X/\mathbb{C}\cdot 1$ is totally bounded for the quotient norm. \end{thm} Note, in particular, that this implies that if $X$ is finite dimensional, then any seminorm satisfying (i) and (ii) from Definition \ref{def:cqms} automatically provides $X$ with a quantum metric structure; we will use this fact repeatedly without further reference in the sequel. \\ One of the many pleasant features of the theory of compact quantum metric spaces is that it allows for a noncommutative analogue of the classical Gromov-Hausdorff distance between compact metric spaces \cite{gromov-groups-of-polynomial-growth-and-expanding-maps, edwards-GH-paper}, which we now recall. If $(X,L_X)$ and $(Y,L_Y)$ are two compact quantum metric spaces, denote by $A$ and $B$ the associated order unit compact quantum metric spaces $A:=X_{\T{sa}} \cap \T{Dom}(L_X)$ and $B:=Y_{\T{sa}}\cap \T{Dom}(L_Y)$. A finite (real) seminorm $L\colon A\oplus B \to[0,\infty)$ is called \emph{admissible} if $(A\oplus B, L)$ is an order unit compact quantum metric space and if the associated quotient seminorms on $A$ and $B$ agree with $L_X\vert_A$ and $L_Y\vert_B$, respectively. For any such $L$, one obtains isometric embeddings of state spaces: \begin{align*} (\C S(X), \rho_{L_X}) &\cong (\C S(A), \rho_{L_X\vert_A}) \hookrightarrow (\C S (A\oplus B), \rho_L) \\ (\C S(Y), \rho_{L_Y}) &\cong (\C S(B), \rho_{L_Y\vert_B}) \hookrightarrow (\C S (A\oplus B), \rho_L) \end{align*} and hence one can consider the \emph{Hausdorff distance} ${\operatorname{dist}}_{\T{H}}^{\rho_L}(\C S(X), \C S(Y))$ (see \cite{Hausdorff-grundzuge}). The \emph{quantum Gromov-Hausdorff distance} between $(X,L_X)$ and $(Y,L_Y)$ is then defined as \[ {\operatorname{dist}}_{\T{Q}} ((X,L_X); (Y,L_Y)):= \inf \left\{ {\operatorname{dist}}_{\T{H}}^{\rho_L}(\C S(X), \C S(Y)) \mid L\colon A\oplus B\to [0,\infty) \T{ admissible} \right\} . \] We remark that this is simply a rephrasing of Rieffel's original definition from \cite{Rie:GHD}, where everything is formulated in terms of order unit spaces, in the sense that ${\operatorname{dist}}_{\T{Q}}((X,L_X); (Y,L_Y)):={\operatorname{dist}}_{\T{Q}}((A,L_X\vert_A); (B,L_Y\vert_B))$. In \cite{Rie:GHD}, Rieffel showed that ${\operatorname{dist}}_{\T{Q}}$ is symmetric and satisfies the triangle inequality, and that distance zero is equivalent to the existence of a Lip-norm preserving isomorphism of order unit spaces between the (completions of the) quantum metric spaces in question; see \cite{Rie:GHD} for details. Over the past 20 years, several refinements of the quantum Gromov-Hausdorff distance have been proposed \cite{Ker:MQG, Lat:QGH, Li:CQG} for which distance zero actually implies Lip-isometric isomorphism at the $C^*$-level. A very successful such refinement is Latr{\'e}moli{\`e}re's notion of \emph{quantum propinquity}, which has been developed in a series of influential papers \cite{Lat:AQQ, Lat:BLD, Lat:DGH, Lat:QGH}.
In the present text, however, we will only consider Rieffel's original notion, and shall therefore not elaborate further on the quantum propinquity.\\ Rieffel's definition of compact quantum metric spaces draws inspiration from Connes' noncommutative geometry, and the latter is therefore, not surprisingly, a source of interesting examples of compact quantum metric spaces; see \cite[Chapter 6]{Con:NCG} and \cite{Con:CFH}. Concretely, if $(\C A, H, D)$ is a unital spectral triple with $\C A$ sitting as a dense unital $*$-subalgebra of the unital $C^*$-algebra $A \subseteq \B B(H)$, then one obtains a seminorm $L_D\colon A\to [0,\infty]$ by setting \begin{equation}\label{eq:semispec} L_D(a):= \fork{ccc}{ \| \overline{[D,a]} \| & \T{for} & a \in \C A \\ \infty & \T{for} & a \notin \C A} . \end{equation} The main result of \cite{AgKa:PSM} is the following: \begin{thm}\label{t:specmetpod} For each $q \in (0,1)$, the seminorm $L_{D_q}$ arising from the D\k{a}browski-Sitarz spectral triple $(\C O(S_q^2),H_+ \oplus H_-,D_q)$ turns $C(S_q^2)$ into a compact quantum metric space. \end{thm} \begin{remark}\label{r:lipspec} For a unital spectral triple $(\C A,H,D)$ there is also a \emph{maximal} domain for the associated seminorm called the \emph{Lipschitz algebra} and denoted by $A^{{\operatorname{Lip}}} \subseteq A$. This dense unital $*$-subalgebra consists of all elements $a \in A$ satisfying that $a(\T{Dom}(D)) \subseteq \T{Dom}(D)$ and that $[D,a] \colon \T{Dom}(D) \to H$ extends to a bounded operator on $H$. The corresponding seminorm is denoted by $L_D^{\T{max}}$. In general, the Lipschitz algebra does not agree with the domain of the closure of the derivation $\partial \colon \C A \to \B B(H)$ given by $\partial(a) := \overline{[D,a]}$. As a consequence, we do not know whether it holds that, if $(A,L_D)$ is a compact quantum metric space, then $(A,L_D^{\T{max}})$ is a compact quantum metric space. The converse can, however, be verified immediately by an application of Theorem \ref{t:totallybdd}. It is an interesting problem to investigate the relationship between $(A,L_D)$ and $(A,L_D^{\T{max}})$ -- in particular whether the quantum Gromov-Hausdorff distance between the two is equal to zero. In \cite[Theorem 8.3]{AgKa:PSM} it is actually proved (for $q \in (0,1)$) that $(C(S_q^2),L_{D_q}^{\T{max}})$ is a compact quantum metric space and in \cite[Theorem A]{AKK:DistZero} we show that the quantum Gromov-Hausdorff distance between $(C(S_q^2),L_{D_q})$ and $(C(S_q^2),L_{D_q}^{\T{max}})$ is indeed zero. Thus, the main convergence result in Theorem \ref{mainthm:A} holds true also when the seminorm $L_{D_q}$ is replaced by its maximal counterpart $L_{D_q}^{\T{max}}$. \end{remark} \begin{remark} In the case where $q = 1$ we have discussed earlier in Remark \ref{r:dirac} that the unbounded selfadjoint operator $D_1 \colon \T{Dom}(D_1) \to H_+ \oplus H_-$ agrees with the closure of the Dirac operator $\C D \colon \Gamma^\infty(S_+ \oplus S_-) \to H_+ \oplus H_-$ coming from the spin geometry of the $2$-sphere. We therefore know from \cite[Proposition 1]{Con:CFH} that $L_{D_1}(f)$ agrees with the Lipschitz constant of $f$ associated with the round metric on the $2$-sphere when $f \in \C O(S^2)$ (and outside of $\C O(S^2)$, $L_{D_1}$ takes the value $\infty$, by definition). In particular, we obtain that the Monge-Kantorovi\v{c} metric coming from the seminorm $L_{D_1} \colon C(S^2) \to [0,\infty]$ agrees with the Monge-Kantorovi\v{c} metric coming from the round metric on the $2$-sphere.
We may thus conclude that $(C(S^2),L_{D_1})$ is a compact quantum metric space as well. \end{remark} In the recent papers \cite{walter-connes:truncations, walter:GH-convergence} the notion of a unital spectral triple was extended by replacing the $C^*$-algebra by an operator system. For such an operator system spectral triple $(\C X,H,D)$ one may again form a seminorm using the formula \eqref{eq:semispec} and it then makes sense to ask whether the data $(X,L_D)$ is a compact quantum metric space. We shall see examples of this phenomenon in our analysis of quantum fuzzy spheres here below, see Section \ref{ss:quafuz}. \section{Quantum fuzzy spheres}\label{sec:quantum-fuzzy-spheres} In this section we introduce the key ingredients needed to prove the convergence of the Podle\'s spheres towards the classical round sphere. Our proof of convergence proceeds via a finite dimensional approximation procedure involving a quantum version of the fuzzy spheres. We are going to consider these quantum fuzzy spheres as finite dimensional operator system spectral triples sitting inside the D\k{a}browski-Sitarz spectral triple for the corresponding Podle\'s sphere. The operation which links the quantum fuzzy spheres to the Podle\'s sphere is then provided by a quantum analogue of the classical Berezin transform. We are now going to describe all these ingredients, and once this is carried out the present section culminates in a proof that the quantum Berezin transform is a Lip-norm contraction. Let us once and for all fix an $N \in \mathbb{N}$ and a deformation parameter $q \in (0,1]$. \subsection{The quantum Berezin transform} Our first ingredient is a quantum analogue of the Berezin transform. This is going to be a positive unital map $\beta_N \colon C(S_q^2) \to C(S_q^2)$ which has a finite dimensional image. The aim of this section is to introduce the Berezin transform and compute its image. We define the state $h_N \colon C(S_q^2) \to \mathbb{C}$ by the formula \begin{align}\label{eq:statedef} h_N(x) := \inn{N+1} \cdot h\big( (a^*)^N x a^N \big) . \end{align} Remark that $h_N$ is indeed unital since $(a^*)^N = u^N_{00}$ and therefore \[ h_N(1) = \inn{N+1} \cdot h\big( u^N_{00} \cdot (u^N_{00})^*\big) = 1 \] by the identity in \eqref{eq:haarmatrixII}. In the definition here below, we let $\Delta \colon C(S_q^2) \to C(SU_q(2)) \otimes_{\T{min}} C(S_q^2)$ denote the left coaction of quantum $SU(2)$ on the Podle\'s sphere. This coaction comes from the restriction of the coproduct $\Delta \colon C(SU_q(2)) \to C(SU_q(2)) \otimes_{\T{min}} C(SU_q(2))$ to the Podle\'s sphere $C(S_q^2) \subseteq C(SU_q(2))$. \begin{dfn} The \emph{quantum Berezin transform} in degree $N \in \mathbb{N}$ is the positive unital map $ \beta_N \colon C(S_q^2) \to C(S_q^2)$ given by $\beta_N(x) := (1 \otimes h_N) \Delta(x) $. \end{dfn} We notice that $\beta_N$ would a priori take values in $C(SU_q(2))$, but the following lemma shows that $\beta_N \colon C(S_q^2) \to C(SU_q(2))$ does indeed factorise through $C(S_q^2)$. Recall to this end that the elements $u^{2m}_{im}$ for $m \in \mathbb{N}_0$ and $i \in \{0,1,\ldots,2m\}$ form a vector space basis for the coordinate algebra $\C O(S_q^2)$. \begin{lemma}\label{l:altI} For every $m \in \mathbb{N}_0$ and $i \in \{0,1,2,\ldots,2m\}$ we have the formula \[ \beta_N(u^{2m}_{im} ) = u^{2m}_{im} \cdot h_N(u^{2m}_{mm}) . \] \end{lemma} \begin{proof} Let $m \in \mathbb{N}_0$ and $i \in \{0,1,2,\ldots,2m\}$ be given.
We compute that \[ \begin{split} \beta_N( u^{2m}_{im}) & = (1 \otimes h_N) \Delta(u^{2m}_{im}) = \sum_{l = 0}^{2m} u^{2m}_{il} \cdot h_N( u^{2m}_{lm}) \\ & = \sum_{l = 0}^{2m} u^{2m}_{il} \cdot h\big( (a^* )^N u^{2m}_{lm} a^N \big) \inn{N+1} = u^{2m}_{im} \cdot h_N(u^{2m}_{mm}) , \end{split} \] where the last identity follows since the Haar state $h \colon \C O(SU_q(2)) \to \mathbb{C}$ is invariant under the algebra automorphism $\nu \colon \C O(SU_q(2)) \to \C O(SU_q(2))$. \end{proof} \begin{remark} The (unpublished) paper \cite{Sain:thesis} also provides a version of the Berezin transform for certain quantum homogeneous spaces, but the setting is that of Kac type quantum groups and the constructions therefore do not directly apply to $SU_q(2)$ and $S_q^2$ when $q\neq 1$. \end{remark} For classical spaces, the Berezin transform is a well studied object (cf. \cite{Sch:BTQ} and references therein), and as we shall now see, our quantum Berezin transform exactly recovers the classical construction when $q=1$. So let us for a little while assume that $q = 1$. Recall first that the irreducible corepresentation $u^N\in \mathbb{M}_{N+1}(C(SU(2)))$ is actually the same as an irreducible representation $u^N\colon SU(2) \to U(\mathbb{C}^{N+1})$. Let $\operatorname{Tr}$ denote the trace on $\mathbb{M}_{N+1}(\mathbb{C})$ without normalisation so that $\operatorname{Tr}(1) = N + 1$. We choose $P\in \mathbb{M}_{N+1}(\mathbb{C})$ to be the rank one projection with $P_{00} = 1$ and all other entries equal to zero. Viewing $C(S^2)$ as the fixpoint algebra $C(SU(2))^{S^1}$, the classical Berezin transform $b_N \colon C(S^2) \to C(S^2)$ is given by (cf.~\cite[Section 3.3.1]{walter:GH-convergence} or \cite[Section 2]{Rie:MSG}) \[ b_N(f)(g):= (N+1)\int_{SU(2)} f(g \cdot x^{-1})H_N(x) d\lambda(x), \quad { g \in SU(2)}, \] where { $H_N$} denotes the density $H_N(x):=\operatorname{Tr}\big(P u^N(x)P u^N(x)^*\big)$ and $\lambda$ is the Haar probability measure on $SU(2)$. We remark that the trace property implies that $H_N(x)=H_N(x^{-1})$ which together with the unimodularity of $SU(2)$ gives \[ b_N(f)(g)=(N+1)\int_{SU(2)} f(g \cdot x)H_N(x) d\lambda(x), \quad { g \in SU(2)}. \] An element $x$ in $SU(2)$ is (in the first irreducible representation) given by a complex matrix $\begin{pmatrix} \bar{z_1} & -z_2 \\ \bar{z_2} & {z_1}\\ \end{pmatrix}$ and the functions $a$ and $b$ then correspond to mapping the matrix to $z_1$ and $z_2$, respectively. Recall also that we have chosen the irreducible corepresentations $u^N$ so that $u^N_{00}=(a^*)^N$. A direct computation now shows that \[ \operatorname{Tr}\big(P u^N(x)P u^N(x)^*\big)=\bar{z_1}^Nz_1^N =(a^*)^N(x) a^N(x). \] Since the Haar state $h$ at $q=1$ is given by integration against $\lambda$, we obtain from this that \begin{align*} (N+1)\int_{SU(2)} f(g \cdot x)H_N(x) d\lambda(x) &= (N+1)\int_{SU(2)} \Delta(f)(g,x)(a^*)^N(x) a^N(x) d\lambda(x) \\ &=\inn{N+1} \cdot h\big( \Delta(f)(g,-)(a^*)^Na^N \big)\\ &=(1\otimes h_N)(\Delta(f))(g) , \end{align*} whenever $g$ belongs to $SU(2)$. This shows that our quantum Berezin transform $\beta_N$ agrees with the classical Berezin transform $b_N$ when $q=1$.\\ We now return to the more general setting where the deformation parameter $q$ belongs to $(0,1]$. We apply the convention that $u_{ij}^n = 0$ whenever $n < 0$ or $n \in \mathbb{N}_0$ and $(i,j) \notin \{0,1,\ldots,n\}^2$. 
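\\ The classical identification just carried out can also be tested numerically. In the sketch here below (an illustration only, assuming NumPy; we realise $u^N$ as the $N$-th symmetric power of the fundamental representation, a standard model for the irreducible representations of $SU(2)$, with our own choice of orthonormal basis) we check that $H_N(x) = |z_1|^{2N}$ at randomly sampled points of $SU(2)$, and that $(N+1) H_N$ integrates to $1$ against the Haar probability measure $\lambda$.
\begin{verbatim}
import numpy as np
from math import comb

def sym_pow(g, N):
    # N-th symmetric power of a 2x2 matrix g, in the orthonormal basis
    # e_j = sqrt(binom(N, j)) w1^{N-j} w2^j of degree N polynomials,
    # where g acts by the substitution
    # (w1, w2) -> (g00 w1 + g10 w2, g01 w1 + g11 w2)
    U = np.zeros((N + 1, N + 1), dtype=complex)
    for j in range(N + 1):
        p1 = np.array([comb(N - j, i) * g[0, 0] ** (N - j - i) * g[1, 0] ** i
                       for i in range(N - j + 1)])
        p2 = np.array([comb(j, i) * g[0, 1] ** (j - i) * g[1, 1] ** i
                       for i in range(j + 1)])
        c = np.convolve(p1, p2)  # coefficients ordered by w2-degree
        for i in range(N + 1):
            U[i, j] = c[i] * np.sqrt(comb(N, j) / comb(N, i))
    return U

rng = np.random.default_rng(1)
N = 6
for _ in range(5):
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)  # Haar measure on SU(2) = uniform measure on S^3
    z1, z2 = v[0] + 1j * v[1], v[2] + 1j * v[3]
    g = np.array([[np.conj(z1), -z2], [np.conj(z2), z1]])
    U = sym_pow(g, N)
    assert np.allclose(U @ U.conj().T, np.eye(N + 1))  # u^N(x) is unitary
    # H_N(x) = Tr(P u^N(x) P u^N(x)^*) = |u^N(x)_{00}|^2 = |z1|^{2N}:
    assert np.isclose(abs(U[0, 0]) ** 2, abs(z1) ** (2 * N))

# Monte Carlo check that (N+1) H_N integrates to 1 over SU(2):
w = rng.normal(size=(200_000, 4))
w /= np.linalg.norm(w, axis=1, keepdims=True)
print((N + 1) * np.mean((w[:, 0] ** 2 + w[:, 1] ** 2) ** N))  # approx. 1
\end{verbatim}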
\begin{lemma}\label{l:altII} The image of $\beta_N \colon C(S_q^2) \to C(S_q^2)$ agrees with the linear span: \[ \T{span}_{\mathbb{C}}\big\{ u^{2m}_{im} \mid m \in \{0,1,\ldots,N\} \, , \, \, i \in \{0,1,\ldots,2m\} \big\} . \] In particular, we have that $\T{Im}(\beta_N) \subseteq \C O(S_q^2)$ and that $\T{Dim}( \T{Im}(\beta_N)) = (N + 1)^2$. \end{lemma} \begin{proof} Let first $n \in \mathbb{N}_0$ and $i,j \in \{0,1,\ldots,n\}$ be given. Applying the formulae from \eqref{eq:leftmult} we obtain that \[ \begin{split} a^N \cdot u^n_{ij} & = \sum_{k = 0}^N \lambda_{n,i,j}(k) \cdot u_{i+k,j+k}^{n + 2k - N} \quad \T{and} \\ (a^*)^N \cdot u^n_{ij} & = \sum_{k = 0}^N \mu_{n,i,j}(k) \cdot u_{i -k,j-k}^{n-2k + N} , \end{split} \] where all the coefficients appearing are strictly positive. In particular, we may find strictly positive coefficients such that \[ a^N (a^*)^N \cdot u^n_{ij} = \sum_{k = -N}^N \alpha_{n,i,j}(k) \cdot u^{n + 2k}_{i + k,j+k} . \] Let now $m \in \mathbb{N}_0$ be given. Since $h(u^n_{ij}) = 0$ for all $n > 0$ and $h(u^0_{00}) = 1$ we obtain that \[ \begin{split} h_N(u^{2m}_{m,m}) & = h\big( (a^*)^N u^{2m}_{m,m} a^N \big) \cdot \inn{N+1} = h\big( a^N (a^*)^N \cdot u^{2m}_{m,m}\big) q^{-2N} \cdot \inn{N+1} \\ & = \sum_{k = -N}^N \alpha_{2m,m,m}(k) \cdot h( u^{2m + 2k}_{m + k,m+k} ) q^{-2N} \cdot \inn{N+1} \\ & = \fork{ccc}{ 0 & \T{for} & m > N \\ \alpha_{2m,m,m}(-m) q^{-2N} \cdot \inn{N+1} & \T{for} & m \leq N } . \end{split} \] An application of Lemma \ref{l:altI} then proves the result of the present lemma. \end{proof} \subsection{Quantum fuzzy spheres}\label{ss:quafuz} Our second ingredient is a quantum analogue of the fuzzy spheres, which we will introduce in this section, and afterwards equip each of them with an operator system spectral triple. These operator system spectral triples provide each of the quantum fuzzy spheres with the structure of a compact quantum metric space. Moreover, we are going to link the quantum fuzzy spheres to the Podle\'s spheres by showing that the image of the Berezin transform in degree $N$ agrees with the quantum fuzzy sphere in degree $N$, thus obtaining natural quantum analogues of classical results, see \cite{walter-connes:truncations, Rie:MSG, walter:GH-convergence}. \begin{dfn}\label{d:quantumfuzz} We define the {\it quantum fuzzy sphere} in degree $N \in \mathbb{N}$ as the $\mathbb{C}$-linear span \[ F_q^N := \T{span}_\mathbb{C}\big\{ A^i B^j, A^i (B^*)^j \mid i,j \in \mathbb{N}_0 \, , \, \, i + j \leq N \big\} \subseteq C(S_q^2) . \] \end{dfn} We immediately remark that the vector space dimension of $F_q^N$ agrees with $(N + 1)^2$ which in turn is the dimension of the classical fuzzy sphere $M_{N+1}(\mathbb{C})$. Since $F_q^N \subseteq C(S_q^2)$ is closed, unital and stable under the adjoint operation we may think of the quantum fuzzy sphere as a concrete operator system. Moreover, since $(\C O(S_q^2), H_+ \oplus H_-, D_q)$ is an even unital spectral triple we immediately obtain an even operator system spectral triple $( F_q^N, H_+ \oplus H_-, D_q)$ for the quantum fuzzy spheres. In particular, we may equip $F_q^N$ with the seminorm $L_{D_q} \colon F_q^N \to [0,\infty)$ defined by \[ L_{D_q}(x) := \max\{ \| \partial_1(x) \| , \| \partial_2(x) \| \} . \] Thus, $L_{D_q}$ on the quantum fuzzy sphere is just the restriction of the seminorm on $C(S_q^2)$ arising from the D\k{a}browski-Sitarz spectral triple. Since we already know that $L_{D_q}$ is a Lip-norm on $C(S_q^2)$ we immediately obtain that $L_{D_q}$ is also a Lip-norm on $F_q^N$. 
\] We summarise this observation in a lemma: \begin{lemma} The pair $(F_q^N,L_{D_q})$ is a compact quantum metric space. \end{lemma} We shall now see that the quantum fuzzy sphere in degree $N$ agrees with the image of the Berezin transform $\beta_N \colon C(S_q^2) \to C(S_q^2)$. \begin{lemma}\label{lem:fuzzy-equal-image} It holds that $F_q^N = \T{Im}(\beta_N)$. \end{lemma} \begin{proof} Since the vector space dimension of $F_q^N$ agrees with the vector space dimension of $\T{Im}(\beta_N)$ it suffices to show that $F_q^N \subseteq \T{Im}(\beta_N)$ (see Lemma \ref{l:altII}). Moreover, since $\beta_N(x^*) = \beta_N(x)^*$ we only need to show that $A^k B^l \in \T{Im}(\beta_N)$ for all $k,l \in \mathbb{N}_0$ with $k + l \leq N$. However, using that $A = b b^*$ and $B = a b^*$ we see from \eqref{eq:leftmult} that \[ A^k B^l = (b b^*)^k (ab^*)^l \in \T{span}_{\mathbb{C}}\big\{ u^n_{ij} \mid n \leq 2(k+l) \, , \, \, i,j \in \{0,1,\ldots,n\} \big\}. \] Moreover, since $A^k B^l \in \C O(S_q^2)$ we must in fact have that \[ A^k B^l \in \T{span}_{\mathbb{C}}\big\{ u^{2m}_{im} \mid m \leq k + l \, , \, \, i \in \{0,1,\ldots,2m \} \big\} , \] see \eqref{eq:midcol}. Since $k + l \leq N$ we now obtain the result of the present lemma by applying Lemma \ref{l:altII}. \end{proof} When $q=1$ the classical fuzzy sphere in degree $N$ is, by definition, given as $\mathbb{M}_{N+1}(\mathbb{C})$ and the classical Berezin transform agrees with the composition $b_N = \sigma_N \circ \breve \sigma_N$, where $\sigma_N\colon \mathbb{M}_{N+1}(\mathbb{C}) \to C(S^2)$ is the so-called covariant Berezin symbol and $\breve \sigma_N$ is its adjoint (see e.g.~\cite[Section 3.3.1]{walter:GH-convergence} or \cite[Section 2]{Rie:MSG}). In the quantised setting we have only defined the composition, thus leaving out a treatment of the covariant Berezin symbol. However, since the quantum Berezin transform at $q=1$ agrees with the classical Berezin transform we obtain that $F^N_1=\sigma_N\big(\mathbb{M}_{N+1}(\mathbb{C})\big)$. Using our seminorm $L_{D_1} \colon F^N_1 \to [0,\infty)$ we therefore also obtain a seminorm on the classical fuzzy sphere $\mathbb{M}_{N+1}(\mathbb{C})$ by deeming the covariant Berezin symbol to be a Lip-norm isometry. At least a priori, this seminorm is different from the one considered by Rieffel in \cite{Rie:MSG} and the one arising from the Grosse-Pre\v{s}najder Dirac operator considered in \cite{Bar:MFF, GrPr:Dirac-fuzzy, walter:GH-convergence}. \subsection{Derivatives of the Berezin transform}\label{s:der} Our aim in this section is to show that the quantum Berezin transform is a Lip-norm contraction, and we are going to achieve this for elements in the coordinate algebra $\C O(S_q^2)$. In Proposition \ref{p:derVI} here below we shall thus see that $L_{D_q}(\beta_N(x)) \leq L_{D_q}(x)$ for all $x \in \C O(S_q^2)$. It turns out that the derivation $\partial : \C O(S_q^2) \to \mathbb{M}_2\big( \C O(SU_q(2))\big)$ coming from the D\k{a}browski-Sitarz spectral triple does not have any good equivariance properties with respect to the Berezin transform, and this makes the proof of the inequality $L_{D_q}(\beta_N(x)) \leq L_{D_q}(x)$ a delicate matter. Our strategy is to conjugate the derivation $\partial$ by the fundamental corepresentation unitary $u \in \mathbb{M}_2( \C O(SU_q(2)))$ and thereby obtain an operation which is equivariant with respect to the Berezin transform.
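\\ The outcome of this strategy will be the identity $u \, \partial(x) \, u^* = \delta(x)$ of Proposition \ref{p:derV} here below, where $\delta$ is a twisted derivation built from the right action of $\C U_q(\G{su}(2))$. Already at $q = 1$ this is a non-trivial pointwise identity between matrices of functions on $SU(2)$, and it can be tested numerically by evaluating the entries computed in the proof of Proposition \ref{p:derV} at sample points. The sketch here below is an illustration only (assuming NumPy); the symbolic expressions are those of the proof of Proposition \ref{p:derV}, specialised to $q = 1$, where $a$ and $b$ become the coordinate functions $z_1$ and $z_2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10):
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)  # a point of SU(2), i.e. of S^3
    z1, z2 = v[0] + 1j * v[1], v[2] + 1j * v[3]

    # u = [[a*, -b], [b*, a]] at q = 1, evaluated at the point:
    u = np.array([[np.conj(z1), -z2], [np.conj(z2), z1]])

    # partial(A), partial(B) and delta(A), delta(B) at q = 1, taken
    # from the proof of Proposition p:derV:
    dA = np.array([[0, z1 * z2], [-np.conj(z2) * np.conj(z1), 0]])
    deltaA = np.array([[0, z2 * np.conj(z1)], [-z1 * np.conj(z2), 0]])
    dB = np.array([[0, z1 ** 2], [np.conj(z2) ** 2, 0]])
    deltaB = np.array([[-z1 * np.conj(z2),
                        abs(z1) ** 2 - abs(z2) ** 2],
                       [0, z1 * np.conj(z2)]])

    # the identity u partial(x) u^* = delta(x) on the generators:
    for dX, deltaX in ((dA, deltaA), (dB, deltaB)):
        assert np.allclose(u @ dX @ u.conj().T, deltaX)
\end{verbatim}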
It is in fact possible to describe the conjugated derivation $u \partial u^*$ entirely in terms of the \emph{right action} of the quantum enveloping algebra, even though the derivation $\partial$ comes from the \emph{left action} of the quantum enveloping algebra. To this end, recall that the right action on $\C O(SU_q(2))$ of an element $g \in \C U_q(\G{su}(2))$ is given by the linear endomorphism $\delta_g \colon \C O(SU_q(2)) \to \C O(SU_q(2))$ defined by $\delta_g(x) := (\inn{g,\cdot} \otimes 1) \Delta(x)$. Remark that $\delta_g(x) \in \C O(S_q^2)$ for all $x \in \C O(S_q^2)$ since $\Delta(\C O(S_q^2)) \subseteq \C O(SU_q(2)) \otimes \C O(S_q^2)$. \begin{lemma}\label{l:derI} Let $g \in \C U_q(\G{su}(2))$. It holds that \[ \delta_g \beta_N(x) = \beta_N \delta_g(x) \] for all $x \in \C O(S_q^2)$. \end{lemma} \begin{proof} Remark first that $\T{Im}(\beta_N) \subseteq \C O(S_q^2)$ so that it makes sense to look at the composition $\delta_g \beta_N$, see Lemma \ref{l:altII}. Then by coassociativity of the coproduct $\Delta \colon \C O(SU_q(2)) \to \C O(SU_q(2)) \otimes \C O(SU_q(2))$ we have that \[ \begin{split} \delta_g \beta_N(x) & = (\inn{g,\cdot} \otimes 1)(\Delta \otimes h_N)\Delta(x) = (\inn{g,\cdot} \otimes 1)(1 \otimes 1 \otimes h_N)(1 \otimes \Delta)\Delta(x) \\ & = (1 \otimes h_N)\Delta(\inn{g,\cdot} \otimes 1)\Delta(x) = \beta_N \delta_g(x) \end{split} \] for all $x \in \C O(S_q^2)$. This proves the lemma. \end{proof} We are particularly interested in the three linear maps $\delta_e, \delta_f$ and $\delta_k \colon \C O(SU_q(2)) \to \C O(SU_q(2))$. We record that $\delta_k$ is an algebra automorphism whereas $\delta_e$ and $\delta_f$ are twisted derivations, meaning that \begin{equation}\label{eq:delder} \begin{split} \delta_e(x \cdot y) & = \delta_e(x) \cdot \delta_k(y) + \delta_k^{-1}(x) \cdot \delta_e(y) \quad \T{and} \\ \delta_f(x \cdot y) & = \delta_f(x) \cdot \delta_k(y) + \delta_k^{-1}(x) \cdot \delta_f(y) \end{split} \end{equation} for all $x,y \in \C O(SU_q(2))$. The compatibility between our three operations and the involution on $\C O(SU_q(2))$ is described by the identities \begin{equation}\label{eq:delstar} \delta_e(x^*) = -q^{-1} \cdot \delta_f(x)^* \, \, , \, \, \, \delta_f(x^*) = -q \cdot \delta_e(x)^* \, \, \T{and} \, \, \, \delta_k(x^*) = \delta_k^{-1}(x)^* . \end{equation} In the case where $q = 1$, we emphasise that our conventions imply that $\delta_k$ agrees with the identity automorphism of $\C O(SU(2))$, and hence we obtain that $\delta_e$ and $\delta_f$ are derivations on $\C O(SU(2))$ in the usual sense of the word. However, for $q = 1$ we also have the interesting derivation $\delta_h \colon \C O(SU(2)) \to \C O(SU(2))$ coming from the third generator $h \in \C U(\G{su}(2))$. This extra derivation relates to the adjoint operation via the formula $\delta_h(x^*) = - \delta_h(x)^*$.\\ It is convenient to record the formulae: \begin{align}\label{eq:delexp} \delta_k(a) &= q^{1/2} \cdot a& \delta_e(a) &= 0 & \delta_f(a) &= -q \cdot b \notag \\ \delta_k(a^*) &= q^{-1/2} \cdot a^*& \delta_e(a^*) &= b^* &\delta_f(a^*) &= 0 \\ \delta_k(b) &= q^{-1/2} \cdot b & \delta_e(b) &= -q^{-1} \cdot a &\delta_f(b) &=0 \notag\\ \delta_k(b^*) &= q^{1/2} \cdot b^* & \delta_e(b^*)&=0& \delta_f(b^*)& = a^* \notag \end{align} Moreover, for $q = 1$ we in addition have that \[ \delta_h(a) = a \, \, , \, \, \, \delta_h(b) = -b \, \, , \, \, \, \delta_h(a^*) = -a^* \, \, \T{and } \, \, \delta_h(b^*) = b^* . 
\] We define the algebra automorphism $\tau := \delta_k \partial_k \colon \C O(SU_q(2)) \to \C O(SU_q(2))$ and notice that it follows from the defining commutation relations in $\C O(SU_q(2))$ that \begin{equation}\label{eq:taucommu} b x = \tau(x) b \quad \T{and} \quad b^* x = \tau(x) b^* \quad \T{for all } x \in \C O(SU_q(2)) . \end{equation} Clearly, $\tau$ agrees with $\delta_k$ when restricted to $\C O(S_q^2)$. In general, when $\theta \colon \C O(SU_q(2)) \to \C O(SU_q(2))$ is an algebra automorphism, we shall apply the notation \[ [y,x]_\theta := y x - \theta(x) y \] for the twisted commutator between two elements $x$ and $y \in \C O(SU_q(2))$. With this notation we now obtain: \begin{lemma}\label{l:deriII} We have the identities \[ [a^*,x]_{\delta_k} = (1 - q^2) q^{1/2} b \partial_e(x) \quad \mbox{and} \quad [a, x]_{\delta_k} = (1 - q^2) q^{-3/2} b^* \partial_f(x) \] for all $x \in \C O(S_q^2)$. \end{lemma} \begin{proof} The operation $x \mapsto [a^*,x]_{\delta_k}$ satisfies a twisted Leibniz rule, meaning that \[ [a^*, x y]_{\delta_k} = [a^*,x]_{\delta_k} y + \delta_k(x) [a^*, y]_{\delta_k} , \] for all $x, y \in \C O(S_q^2)$. It moreover follows from \eqref{eq:taucommu} that the operation $x \mapsto b \partial_e(x)$ satisfies the same twisted Leibniz rule, so that \[ b \partial_e(xy) = b \partial_e(x) y + \delta_k(x) b \partial_e(y) , \] for all $x,y \in \C O(S_q^2)$. In order to prove the first identity of the lemma, it thus suffices to check that \[ a^* x - \delta_k(x) a^* = (1 - q^2) q^{1/2} b \partial_e(x) \] for $x \in \{A,B,B^*\}$. This can be done in a straightforward fashion using that \begin{align}\label{eq:partial-on-generators} \partial_e(A) &= - q^{-1/2} b^* a^* \notag\\ \partial_e(B) &= q^{-1/2} (b^*)^2\\ \partial_e(B^*) &= -q^{-3/2} (a^*)^2 \notag , \end{align} which can be seen from \eqref{eq:derexp}. The second identity of the lemma follows by a similar argument, or, alternatively, from the first identity by applying the involution. \end{proof} We recall that \[ u := u^1 = \ma{cc}{a^* & -qb \\ b^* & a} \in \mathbb{M}_2\big( \C O(SU_q(2)) \big) \] denotes the fundamental corepresentation unitary and that $\partial \colon \C O(S_q^2) \to \mathbb{M}_2(\C O(SU_q(2)))$ denotes the derivation \[ \partial = \ma{cc}{0 & \partial_2 \\ \partial_1 & 0} = \ma{cc}{0 & q^{-1/2} \partial_f \\ q^{1/2} \partial_e & 0} . \] \begin{lemma}\label{l:deriIII} We have the identities \[ [u, x]_{\delta_k} = (1 - q^2) \ma{cc}{0 & b \\ q^{-1} b^* & 0} \partial(x) \quad \mbox{and} \quad [u^*, \delta_{k^{-1}}(x)]_{\delta_k} = (1 - q^2) \partial(x) \ma{cc}{0 & q^{-1} b \\ b^* & 0} \] for all $x \in \C O(S_q^2)$. \end{lemma} \begin{proof} Let $x \in \C O(S_q^2)$ be given. By definition of the fundamental corepresentation unitary $u$, the identities in \eqref{eq:taucommu}, and Lemma \ref{l:deriII} we see that \[ \begin{split} u x - \delta_k(x) u & = \ma{cc}{ [a^*,x]_{\delta_k} & -q [b,x]_{\delta_k} \\ \,[b^*,x]_{\delta_k} & [a,x]_{\delta_k}} \\ & = (1 - q^2) \ma{cc}{ b \partial_1(x) & 0 \\ 0 & q^{-1} b^* \partial_2(x)} = (1 - q^2) \ma{cc}{0 & b \\ q^{-1} b^* & 0} \partial(x) . \end{split} \] This proves the first identity of the lemma. The remaining identity then follows from the computation \[ \begin{split} (1-q^2)\partial(x) \ma{cc}{0 & q^{-1} b \\ b^* & 0} & = (q^2 - 1) \Big( \ma{cc}{0 & b \\ q^{-1} b^* & 0} \partial(x^*) \Big)^* = ( \delta_k(x^*) u - u x^* )^* \\ & = u^* \delta_k^{-1}(x) - x u^* = [u^* ,\delta_k^{-1}(x)]_{\delta_k} .
\qedhere \end{split} \] \end{proof} We are now ready to show that the operation $x \mapsto u \partial(x) u^*$ is a twisted derivation on $\C O(S_q^2)$. \begin{prop}\label{p:deri} It holds that \[ u \partial(xy) u^* = u \partial(x) u^* \delta_k(y) + \delta_{k^{-1}}(x) u \partial(y) u^* \] for all $x,y \in \C O(S_q^2)$. \end{prop} \begin{proof} Let $x,y \in \C O(S_q^2)$ be given. We compute that \[ \begin{split} u \partial(xy) u^* & = u \partial(x) y u^* + u x \partial(y) u^* \\ & = u \partial(x) u^* \delta_k(y) + \delta_{k^{-1}}(x) u \partial(y) u^* - u \partial(x) [u^*,\delta_k(y)]_{\delta_{k^{-1}}} + [u,x]_{\delta_{k^{-1}}} \partial(y) u^* , \end{split} \] so (upon conjugating with $u$) we have to show that \[ u^* [u,x]_{\delta_{k^{-1}}} \partial(y) = \partial(x) [u^*,\delta_{k}(y)]_{\delta_{k^{-1}}} u . \] Notice now that \[ \ma{cc}{0 & q^{-1}b \\ b^* & 0} u = \ma{cc}{q^{-1} bb^* & ab \\ q^{-1} a^* b^* & - qb^* b} = u^* \ma{cc}{0 & b \\ q^{-1} b^* & 0} . \] Thus, applying Lemma \ref{l:deriIII} we obtain that \[ \begin{split} u^* [u,x]_{\delta_{k^{-1}}} \partial(y) & = (x - u^* \delta_{k^{-1}}(x) u) \partial(y) = - [u^*,\delta_{k^{-1}}(x)]_{\delta_k} u \partial(y) \\ & = (q^2 - 1) \partial(x) \ma{cc}{0 & q^{-1} b \\ b^* & 0} u \partial(y) = (q^2 - 1) \partial(x) u^* \ma{cc}{0 & b \\ q^{-1} b^* & 0} \partial(y) \\ & = - \partial(x) u^* [u,y]_{\delta_k} = \partial(x) [u^*,\delta_k(y)]_{\delta_{k^{-1}}} u . \end{split} \] This proves the proposition. \end{proof} In order to finish our computation of the conjugated derivation $x \mapsto u \partial(x) u^*$ we introduce two twisted derivations $\delta_1, \delta_2\colon \C O(S_q^2) \to \C O(S_q^2)$ by setting \[ \delta_1 := q^{1/2} \delta_e \quad \T{and} \quad \delta_2 := q^{-1/2} \delta_f. \] For $q \neq 1$, we furthermore define the twisted derivation $\delta_3 := \frac{\delta_k - \delta_k^{-1}}{q - q^{-1}} \colon \C O(S_q^2) \to \C O(S_q^2)$ and for $q = 1$ we simply put $\delta_3 := \frac12\delta_h \colon \C O(S^2) \to \C O(S^2)$. Here, and below, the adjective ``twisted'' is to be understood in the sense of the following Leibniz type rule: \[ \delta_i(xy)=\delta_i(x)\delta_k(y) + \delta_{k^{-1}}(x)\delta_i(y), \] for $x,y\in \C O(S_q^2)$ and $i\in \{1,2,3\}$; that this holds follows from \eqref{eq:delder}. For $q \in (0,1]$ we now assemble this data into a twisted derivation $\delta \colon \C O(S_q^2) \to \mathbb{M}_2( \C O(S_q^2))$ by the formula \[ \delta(x) := \ma{cc}{ -\delta_3(x) & \delta_2(x) \\ \delta_1(x) & \delta_3(x)} . \] We record that $\delta(x^*) = - \delta(x)^*$, see \eqref{eq:delstar} for this. \begin{prop}\label{p:derV} It holds that $u \partial(x) u^* = \delta(x)$ for all $x \in \C O(S_q^2)$. \end{prop} \begin{proof} Using Proposition \ref{p:deri} we see that the operations $x \mapsto u \partial(x) u^*$ and $x \mapsto \delta(x)$ satisfy the same twisted Leibniz rule and they also behave in the same way with respect to the adjoint operation. It therefore suffices to verify the required identity on the generators $A,B \in \C O(S_q^2)$. 
This can be carried out by a straightforward computation using \eqref{eq:partial-on-generators} and \eqref{eq:delexp} together with the defining relations for $\C O(SU_q(2))$, indeed: \[ \begin{split} u \partial(A) u^* & = \ma{cc}{a^* & - qb \\ b^* & a} \ma{cc}{0 & ab \\ -b^* a^* & 0} \ma{cc}{a & b \\ -qb^* & a^*} \\ & = \ma{cc}{0 & a^* a ba^* + qbb^* a^* b \\ -q b^* a bb^* - ab^* a^* a & 0} \\ &= \ma{cc}{0 & ba^* \\ -ab^* & 0} = \delta(A), \quad \T{and} \\ u \partial(B) u^* & = \ma{cc}{a^* & - qb \\ b^* & a} \ma{cc}{0 & q^{-1} a^2 \\ (b^*)^2 & 0} \ma{cc}{a & b \\ -qb^* & a^*} \\ & = \ma{cc}{-a^* a^2 b^* - q b(b^*)^2 a & q^{-1} a^* a^2 a^* - q b^2(b^*)^2 \\ 0 & q^{-1} b^* a^2 a^* + a (b^*)^2 b} \\ & = \ma{cc}{-ab^* & q^{-1}aa^* -qbb^* \\ 0 & ab^* } = \delta(B) . \qedhere \end{split} \] \end{proof} We may now show that the Berezin transform is a Lip-norm contraction: \begin{prop}\label{p:derVI} Let $q \in (0,1]$ and $N \in \mathbb{N}$. The Berezin transform $\beta_N \colon C(S_q^2) \to C(S_q^2)$ is a Lip-norm contraction in the sense that \[ \| \partial \beta_N(x) \| \leq \| \partial(x) \| \] for all $x \in \C O(S_q^2)$. \end{prop} \begin{proof} Since $u \in \mathbb{M}_2\big( \C O(SU_q(2))\big)$ is unitary we obtain from Proposition \ref{p:derV} that \[ \| \partial \beta_N(x) \| = \| u \cdot \partial \beta_N(x) \cdot u^* \| = \| \delta \beta_N(x) \| . \] Then, since $\beta_N \colon C(S_q^2) \to C(S_q^2)$ is a complete contraction we get from Lemma \ref{l:derI} that \[ \| \delta \beta_N(x) \| = \| \beta_N \delta(x) \| \leq \| \delta(x) \| = \| \partial(x) \| . \] This proves the result of the proposition. \end{proof} \section{Quantum Gromov-Hausdorff convergence}\label{sec:qGH-convergence} The aim of the present section is to prove our main convergence results, namely that the quantum fuzzy spheres $F_q^N$ converge to the Podle{\'s} sphere $S_q^2$ as the matrix size $N$ grows, and that the Podle{\'s} spheres $S_q^2$ converge to the classical 2-sphere (with its round metric) as $q$ tends to 1. However, before approaching the actual convergence results, quite a bit of preparatory analysis is needed. As it turns out, the key to the convergence results is that the quantum Berezin transform $\beta_N$ provides a good approximation of the identity map on the Lip unit balls, and we prove this in the sections to follow. In the following section we provide the essential upper bound on $\|\beta_N(x)-x\|$ for $x$ in the Lip unit ball, and in the next section we prove that this upper bound indeed goes to zero as $N$ tends to infinity. \subsection{Approximation of the identity}\label{ss:approx} Throughout this section we let $N \in \mathbb{N}_0$ and $q \in (0,1]$ be fixed. We let { $\epsilon \colon C(S_q^2) \to \mathbb{C}$} denote the restriction of the counit to the Podle\'s sphere and we recall that { $h_N \colon C(S_q^2) \to \mathbb{C}$} is the state given by $h_N(x) := h\big((a^*)^N x a^N \big) \cdot \inn{N+1}$. We start out by proving an equivariance property for our derivations $\partial_1,\partial_2 \colon \C O(S_q^2) \to \C O(SU_q(2))$. \begin{lemma}\label{l:approxI} Let $x \in \C O(S_q^2)$. It holds that \[ (1 \otimes \partial_1)\Delta(x) = \Delta \partial_1(x) \quad \mbox{and} \quad (1 \otimes \partial_2) \Delta(x) = \Delta \partial_2(x) . \] \end{lemma} \begin{proof} First note that since $\C O(S_q^2)$ is a left $\C O(SU_q(2))$-comodule the formulae in the lemma are well defined. 
As explained in Section \ref{sec:prelim}, the derivations $\partial_1, \partial_2 \colon \C O(S_q^2) \to \C O(SU_q(2))$ are given by \[ \partial_1(y) = q^{1/2} (1 \otimes \inn{e,\cdot}) \Delta(y) \quad \T{and} \quad \partial_2(y) = q^{-1/2} (1 \otimes \inn{f,\cdot}) \Delta(y) \] for all $y \in \C O(S_q^2)$. Using the coassociativity of $\Delta \colon \C O(SU_q(2)) \to \C O(SU_q(2)) \otimes \C O(SU_q(2))$ we then obtain that \[ (1 \otimes \partial_1) \Delta(x) = q^{1/2}(1 \otimes 1 \otimes \inn{e,\cdot}) (1 \otimes \Delta) \Delta(x) = q^{1/2}(\Delta \otimes \inn{e,\cdot}) \Delta(x) = \Delta(\partial_1(x)) . \] A similar proof applies when $\partial_1$ is replaced by $\partial_2$. \end{proof} The next lemma is a consequence of Lemma \ref{l:approxI} above and standard properties of the minimal tensor product: \begin{lemma}\label{l:approxII} Let $\phi \colon C(SU_q(2)) \to \mathbb{C}$ be a bounded linear functional. It holds that \[ L_{D_q}\big( (\phi \otimes 1) \Delta(x) \big) \leq \| \phi \| \cdot L_{D_q}(x) \] for all $x \in \C O(S_q^2)$. \end{lemma} We let $d_q(h_N,\epsilon) \in [0,\infty)$ denote the Monge-Kantorovi\v{c} distance between the two states $h_N,\epsilon \colon C(S_q^2) \to \mathbb{C}$. Recall that this is defined as \[ d_q(h_N,\epsilon):=\sup\big\{ |h_N(x)-\epsilon(x)| \mid x\in \C O(S_q^2), L_{D_q}(x)\leq 1\big\} . \] \begin{prop}\label{p:approxIII} We have the inequality \[ \| x - \beta_N(x) \| \leq d_q(h_N,\epsilon) \cdot L_{D_q}(x) \] for all $x \in \C O(S_q^2)$. \end{prop} \begin{proof} Let $x \in \C O(S_q^2)$ be given. We record that $x = (1 \otimes \epsilon) \Delta(x)$ and hence \[ x - \beta_N(x) = \big( 1 \otimes ( \epsilon - h_N) \big) \Delta(x) . \] Notice now that $x - \beta_N(x) \in C(S_q^2) \subseteq \B B\big( L^2(SU_q(2)) \big)$ and let $\eta , \zeta \in L^2(SU_q(2))$ with $\| \eta \| , \| \zeta \| \leq 1$ be given. We let $\phi_{\eta,\zeta} \colon C(SU_q(2)) \to \mathbb{C}$, $\phi_{\eta,\zeta}(y) := \inn{\eta,\rho(y)\zeta}$, denote the associated contractive linear functional, and compute that \[ \phi_{\eta,\zeta}( x - \beta_N(x) ) = (\epsilon - h_N)( \phi_{\eta,\zeta} \otimes 1) \Delta(x) . \] Using the definition of the Monge-Kantorovi\v{c} distance together with Lemma \ref{l:approxII} we then obtain that \[ \begin{split} \big| \phi_{\eta,\zeta}( x - \beta_N(x) ) \big| & \leq d_q(h_N,\epsilon) \cdot L_{D_q}\big( ( \phi_{\eta,\zeta} \otimes 1) \Delta(x) \big) \\ & \leq d_q(h_N,\epsilon) \cdot \| \phi_{\eta,\zeta} \| \cdot L_{D_q}(x) \leq d_q(h_N,\epsilon) \cdot L_{D_q}(x) . \end{split} \] Taking the supremum over vectors $\eta$ and $\zeta \in L^2(SU_q(2))$ with $\| \eta \|, \| \zeta \| \leq 1$ we obtain that \[ \| x - \beta_N(x) \| \leq d_q(h_N,\epsilon) \cdot L_{D_q}(x) . \qedhere \] \end{proof} \subsection{Convergence to the counit} Throughout this section we let $q \in (0,1]$ be fixed. We see from Proposition \ref{p:approxIII} that in order to establish that the Berezin transform provides a good approximation of the identity map on the Lip unit ball, we only need to verify that $\lim_{N\to \infty}d_q(h_N, \epsilon)=0$. Since we already know that $(C(S_q^2),L_{D_q})$ is a compact quantum metric space, the convergence in the Monge-Kantorovi\v{c} metric is equivalent to convergence in the weak$^*$-topology. We apply the notation $\C O(A,1) \subseteq C(S_q^2)$ for the smallest unital $*$-subalgebra containing the element $A \in C(S_q^2)$. \begin{prop}\label{prop:convergence-to-counit} Let $q\in (0,1]$.
The sequence of states $\{h_N\}_{N = 0}^\infty$ converges to $\epsilon : C(S_q^2) \to \mathbb{C}$ in the weak$^*$-topology and hence $\displaystyle \lim_{N\to\infty} d_q(h_N,\epsilon)=0$. \end{prop} \begin{proof} It suffices to show that \begin{equation}\label{eq:limitstates} \lim_{N\to \infty} h_N(x)=\epsilon(x), \end{equation} for all $x$ in the norm-dense unital $*$-subalgebra $\C O(S_q^2) \subseteq C(S_q^2)$. Notice next that $h_N(x) = 0 = \epsilon(x)$ whenever $x = y B^i$ or $x = y (B^*)^i$ for some $i \in \mathbb{N}$ and some $y \in \C O(A,1)$. Indeed, this is clear for the counit, and for the state $h_N$ it follows since the Haar state is invariant under the modular automorphism $\nu : \C O(SU_q(2)) \to \C O(SU_q(2))$. When proving \eqref{eq:limitstates} we may thus restrict our attention to the case where $x \in \C O(A,1)$. Using next that \[ \C O(A,1) = \T{span}_{\mathbb{C}} \big\{ (a^*)^k a^k \mid k \in \mathbb{N}_0 \big\} \] we only need to show that $h_N\big((a^*)^k a^k\big)$ converges to $\epsilon\big( (a^*)^k a^k\big) = 1$ for all $k \in \mathbb{N}_0$. But this follows from the computation here below, where we use the fact that $(a^*)^{N+k}=u_{00}^{N+k}$ together with \eqref{eq:haarmatrixII}: \begin{align*} h_N\left((a^*)^k a^k\right)&= \inn{N+1} h\big( (a^*)^{N+k} a^{N+k} \big) = \tfrac{ \inn{N+1}}{ \inn{N + k + 1}} = 1 - q^{2(N+1)} \tfrac{ \inn{k} }{ \inn{N + k + 1}} \underset{N\to \infty}{ \longrightarrow} 1 . \qedhere \end{align*} \end{proof} \begin{cor}\label{cor:fuzzy-approx} Let $q\in (0,1]$. For each $\varepsilon>0$ there exists an $N_0\in \mathbb{N}_0$ such that \[ \|x-\beta_N(x)\|\leq \varepsilon \cdot L_{D_q}(x) \] for all $N \geq N_0$ and all $x \in \C O(S_q^2)$. \end{cor} \begin{proof} By Proposition \ref{p:approxIII} we have the inequality \[ \| x - \beta_N(x) \| \leq d_q(h_N,\epsilon) \cdot L_{D_q}(x), \] and by Proposition \ref{prop:convergence-to-counit} we have that $\lim_{N\to \infty} d_q(h_N,\epsilon)=0$. \end{proof} The above corollary in combination with Proposition \ref{p:derVI} now makes it an easy task to show that the quantum fuzzy spheres $F^N_q$ converge to the Podle{\'s} sphere $C(S_q^2)$ as $N$ approaches infinity, and we carry out the details of this argument in Section \ref{subsec:quantum-fuzzy-to-podles}. However, in order to show that $C(S_q^2)$ converges to $C(S^2)$ as $q$ tends to $1$, we need to estimate the distance $d_q(h_N,\epsilon)$ in a suitably uniform manner with respect to the deformation parameter $q$. As a first step in this direction we show that the states $h_N$ and $\epsilon \colon C(S_q^2) \to \mathbb{C}$ may be restricted to the unital $C^*$-subalgebra $C^*(A,1) \subseteq C(S_q^2)$ without changing their Monge-Kantorovi\v{c} distance. This will work to our advantage since we already carried out a careful analysis of the compact quantum metric space $\big( C^*(A,1), L_{D_q} \big)$ in \cite{GKK:QI}. In fact, $C^*(A,1)$ is a commutative unital $C^*$-algebra and the Lip-norm $L_{D_q} \colon C^*(A,1) \to [0,\infty]$ comes from an explicit metric on the spectrum of the selfadjoint positive operator $A$.
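\\ Parenthetically, the elementary rate appearing in the proof of Proposition \ref{prop:convergence-to-counit} is instructive to inspect numerically: for $q < 1$ the moments $h_N\big( (a^*)^k a^k \big)$ approach $1$ at the geometric rate $q^{2(N+1)}$, whereas at $q = 1$ the same formula yields $(N+1)/(N+k+1)$, so the rate degrades as $q \to 1$; this is one way of seeing why the distance $d_q(h_N,\epsilon)$ has to be estimated uniformly in $q$. A minimal sketch (an illustration only, in plain Python) checking the algebraic identity from the proof:
\begin{verbatim}
def bracket(n, q):
    # <n>_q = sum_{m=0}^{n-1} q^{2m} = (1 - q^{2n}) / (1 - q^2) for q != 1
    return n if q == 1 else (1 - q ** (2 * n)) / (1 - q ** 2)

for q in (0.3, 0.8, 1.0):
    for N in (1, 10, 100):
        for k in (1, 2, 5):
            lhs = bracket(N + 1, q) / bracket(N + k + 1, q)
            rhs = 1 - q ** (2 * (N + 1)) * bracket(k, q) / bracket(N + k + 1, q)
            assert abs(lhs - rhs) < 1e-12  # the identity from the proof
    # h_N((a*)^k a^k) for k = 3 and growing N: fast for q < 1, slow at q = 1
    print(q, [bracket(N + 1, q) / bracket(N + 4, q) for N in (1, 10, 100)])
\end{verbatim}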
We shall give more details on the compact quantum metric space $\big( C^*(A,1), L_{D_q} \big)$ in the next section.\\ Letting $i : C^*(A,1) \to C(S_q^2)$ denote the inclusion we specify that \begin{equation}\label{eq:metricres} \begin{split} d_q(h_N \circ i, \epsilon \circ i) & := \sup\big\{ | h_N(x) - \epsilon(x) | \mid x \in \C O(A,1) \, , \, \, L_{D_q}(x) \leq 1 \big\} \quad \T{and} \\ d_q(h_N,\epsilon) & := \sup\big\{ | h_N(x) - \epsilon(x) | \mid x \in \C O(S_q^2) \, , \, \, L_{D_q}(x) \leq 1 \big\} . \end{split} \end{equation} In order to prove that these two quantities agree, we define the strongly continuous circle action $\sigma_R \colon S^1 \times C(SU_q(2)) \to C(SU_q(2))$ by the formulae \[ \sigma_R(z,a) := z a \quad \T{and} \quad \sigma_R(z,b) := z^{-1} b . \] This circle action gives rise to the operation $\Phi_0 \colon C(SU_q(2)) \to C(SU_q(2))$ given by \[ \Phi_0(x) := \frac{1}{2 \pi} \int_0^{2\pi} \sigma_R(e^{it},x) \, dt . \] For $z \in S^1$ and $x \in \C O(S_q^2)$ we record the relation \[ \sigma_R(z, \partial(x)) = \partial( \sigma_R(z,x)), \] which can be proved by a direct computation on the generators $A, B$ and $B^*$ and an application of the Leibniz rule. Using that $\partial \colon \C O(S_q^2) \to \mathbb{M}_2\big(C(SU_q(2))\big)$ is closable (see the discussion after Theorem \ref{t:spectrip}) we then obtain the identity \[ \partial( \Phi_0(x) ) = \Phi_0( \partial(x) ) \] for all $x \in \C O(S_q^2)$. Remark that the restriction of $\Phi_0$ to $C(S_q^2)$ yields a conditional expectation onto $C^*(A,1)$ which maps the coordinate algebra $\C O(S_q^2)$ onto $\C O(A,1)$. These observations immediately yield the next result: \begin{lemma}\label{l:conmetII} Let $q \in (0,1]$. We have the inequality \[ L_{D_q}( \Phi_0(x)) \leq L_{D_q}(x) \] for all $x \in \C O(S_q^2)$. \end{lemma} As alluded to above, we have the following: \begin{lemma}\label{l:conmetIII} Let $q \in (0,1]$. It holds that $d_q( h_N,\epsilon) = d_q( h_N \circ i,\epsilon \circ i)$ for all $N \in \mathbb{N}_0$. \end{lemma} \begin{proof} Let $N \in \mathbb{N}_0$ be given. The inequality $d_q(h_N \circ i, \epsilon \circ i) \leq d_q(h_N,\epsilon)$ is clearly satisfied. To prove the remaining inequality, let $x \in \C O(S_q^2)$ with $L_{D_q}(x) \leq 1$ be given. Using Lemma \ref{l:conmetII} we then obtain that $L_{D_q}(\Phi_0(x))\leq 1$. Next, since $h_N\big( \sigma_R(z, x) \big) = h_N(x)$ and $\epsilon\big( \sigma_R(z,x) \big) = \epsilon(x)$ for all $z \in S^1$, it follows that $h_N(x)= h_N( \Phi_0(x))$ and $\epsilon(x) =\epsilon(\Phi_0(x) )$ and hence that \[ \big| h_N(x) - \epsilon(x) \big| = \big| (h_N \circ i) (\Phi_0(x)) - (\epsilon \circ i)(\Phi_0(x)) \big| \leq d_q( h_N \circ i, \epsilon \circ i) . \] This proves the present lemma. \end{proof} \subsection{Uniform approximation of the identity}\label{ss:conmet} In this section we no longer consider $q \in (0,1]$ to be fixed, but rather treat it as a variable deformation parameter. For this reason we decorate our generators $a,b \in \C O(SU_q(2))$ and $A,B \in \C O(S_q^2)$ with an extra subscript, e.g. writing $A_q$ instead of $A$. Likewise, we put \[ \inn{n}_q := \inn{n} = \sum_{m = 0}^{n-1} q^{2m} . \] We do, however, fix a $\delta \in (0,1)$ and restrict our attention to the case where $q \in [\delta,1]$. \\ We start out by briefly reviewing some of the results obtained in \cite{GKK:QI}. The unital $C^*$-subalgebra $C^*(A_q,1) \subseteq C(S_q^2)$ is commutative and is therefore isomorphic to the continuous functions on the spectrum of $A_q$.
\subsection{Uniform approximation of the identity}\label{ss:conmet} In this section we are no longer considering $q \in (0,1]$ to be fixed, but rather as a variable deformation parameter. For this reason we decorate our generators $a,b \in \C O(SU_q(2))$ and $A,B \in \C O(S_q^2)$ with an extra subscript, e.g. writing $A_q$ instead of $A$. Likewise, we put \[ \inn{n}_q := \inn{n} = \sum_{m = 0}^{n-1} q^{2m} , \] so that $\inn{n}_q = \frac{1 - q^{2n}}{1 - q^2}$ for $q \neq 1$ and $\inn{n}_1 = n$. We do, however, fix a $\delta \in (0,1)$ and restrict our attention to the case where $q \in [\delta,1]$. \\ We begin by briefly reviewing some of the results obtained in \cite{GKK:QI}. The unital $C^*$-subalgebra $C^*(A_q,1) \subseteq C(S_q^2)$ is commutative and is therefore isomorphic to the continuous functions on the spectrum of $A_q$. For $q \neq 1$ this spectrum is given by $X_q := \{ q^{2m} \mid m \in \mathbb{N}_0 \} \cup \{0\} \subseteq [0,1]$, and for $q = 1$ the spectrum $X_q$ agrees with the whole closed unit interval $[0,1]$. From now on we tacitly identify the $C^*$-algebra $C^*(A_q,1)$ with the $C^*$-algebra $C(X_q)$. It was proved in \cite{GKK:QI} that the restriction of the seminorm $L_{D_q} \colon C(S_q^2) \to [0,\infty]$ to $C^*(A_q,1)$ agrees with the Lipschitz constant seminorm arising from a metric $\rho_q : X_q \times X_q \to [0,\infty)$. For $q \neq 1$ this metric is given by the explicit formula \begin{align}\label{eq:metric-formula} \rho_q(q^{2m}, q^{2l}) = \Big| \sum_{k = m}^\infty \frac{(1 - q^2) q^k}{\sqrt{1 - q^{2(k+1)}}} - \sum_{k = l}^\infty \frac{(1 - q^2) q^k}{\sqrt{1 - q^{2(k+1)}}} \Big| \end{align} for all $m,l \in \mathbb{N}_0$. For $q = 1$, the metric is given by \[ \rho_1(s,t) = \big| \arcsin(2s-1) - \arcsin(2t-1) \big| \quad \T{for all } s,t \in [0,1] . \] We let $h_q \colon C(X_q) \to \mathbb{C}$ denote the restriction of the Haar state to the $C^*$-subalgebra $C^*(A_q,1) \subseteq C(SU_q(2))$. For $q = 1$ we remark that $h_1$ agrees with the usual Riemann integral on $C( [0,1])$. \\ Let $N \in \mathbb{N}_0$. As mentioned earlier, we are interested in estimating the Monge-Kantorovi\v{c} distance between the two states $h_N$ and $\epsilon \colon C(S_q^2) \to \mathbb{C}$ in a suitably uniform manner with respect to the deformation parameter $q \in [\delta,1]$. We now present an estimate on this quantity which involves the metric $\rho_q \colon X_q \times X_q \to [0,\infty)$. \begin{lemma}\label{l:metricest} Let $q \in [\delta,1]$ and $N \in \mathbb{N}_0$. We have the estimate \[ d_q( \epsilon, h_N) \leq \frac{\inn{N+1}_q}{q^{2N}} \cdot h_q\big( a_q^N (a_q^*)^N \cdot \rho_q(-,0)\big) . \] \end{lemma} \begin{proof} By Lemma \ref{l:conmetIII} we have that $d_q(\epsilon,h_N) = d_q(\epsilon \circ i,h_N \circ i)$, where $i \colon C^*(A_q,1) \to C(S_q^2)$ denotes the inclusion. The composition $\epsilon \circ i \colon C^*(A_q,1) \to \mathbb{C}$ agrees with the pure state on $C(X_q)$ given by evaluation at $0 \in X_q$. Let now $\xi \in C(X_q)$ be given. Since the restriction $L_{D_q} \colon C(X_q) \to [0,\infty]$ agrees with the Lipschitz constant seminorm coming from the metric $\rho_q \colon X_q \times X_q \to [0,\infty)$ we obtain that \[ \begin{split} \big| \epsilon(\xi) - h_N(\xi) \big| & = \big| h_N\big( \xi(0) - \xi \big) \big| \leq h_N\big( | \xi(0) - \xi| \big) \leq L_{D_q}(\xi) \cdot h_N\big( \rho_q( -,0)\big) \\ & = L_{D_q}(\xi) \cdot \frac{\inn{N+1}_q}{q^{2N}} \cdot h_q\big( a_q^N (a_q^*)^N \cdot \rho_q(-,0) \big) , \end{split} \] where the last identity uses the definition of the state $h_N \colon C(S_q^2) \to \mathbb{C}$ together with the twisted tracial property of the Haar state, see \eqref{eq:statedef} and \eqref{eq:modular}. The result of the lemma now follows from the definition of the quantity $d_q(\epsilon \circ i,h_N \circ i) = d_q(\epsilon,h_N)$ as recalled in \eqref{eq:metricres}. \end{proof}
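It may be instructive to spell out the simplest instance of this estimate. For $N = 0$ the prefactor $\frac{\inn{1}_q}{q^0}$ equals $1$ and $a_q^0 (a_q^*)^0 = 1$, so Lemma \ref{l:metricest} reduces to
\[
d_q( \epsilon, h_0) \leq h_q\big( \rho_q(-,0) \big) ,
\]
which is the quantum analogue of the classical fact that the Monge-Kantorovi\v{c} distance between a Radon probability measure $\mu$ on a compact metric space $(X,\rho)$ and the point mass at a point $y \in X$ is bounded above by $\int_X \rho(s,y) \, d\mu(s)$; taking $\xi = \rho(-,y)$ in the supremum defining the distance shows that this classical bound is even attained.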
We now carry out a more detailed analysis of the right hand side of the estimate appearing in Lemma \ref{l:metricest}. First of all we treat the dependency of the (restriction of the) Haar state $h_q \colon C(X_q) \to \mathbb{C}$ on the deformation parameter $q \in [\delta,1]$. Our next lemma can also be deduced from \cite[Page 195]{Bla:DCH}, but for the convenience of the reader we provide a short self-contained argument. Consider the polynomial algebra $\mathbb{C}[x,y]$ as a unital $*$-subalgebra of $C\big( [\delta,1] \times [0,1] \big)$ and define the linear map $H \colon \mathbb{C}[x,y] \to C( [\delta,1])$ by \[ H( x^j y^k )(q) := \frac{q^j}{ \inn{k + 1}_q} . \] \begin{lemma}\label{l:conmetIV} The linear map $H \colon \mathbb{C}[x,y] \to C( [\delta,1])$ extends to a norm-contraction $H \colon C( [\delta,1] \times [0,1]) \to C([\delta,1])$ such that $H( \xi)(q) = h_q( \xi(q, - ) |_{X_q} )$ for all $q \in [\delta,1]$. \end{lemma} \begin{proof} Fix first $q \in [\delta,1]$. By definition of $H \colon \mathbb{C}[x,y] \to C([\delta,1])$ we have that \[ H(x^j y^k)(q) = \frac{q^j}{ \inn{k+1}_q} = q^j \cdot h_q( A_q^k) = h_q\big( (x^j \cdot y^k)(q ,-)|_{X_q} \big) \] for all $j,k \in \mathbb{N}_0$. Remark that the formula for $h_q(A_q^k)$ can be found in \cite[Chapter 4, Equation (51)]{KlSc:QGR}. Then, using that $h_q \colon C(X_q) \to \mathbb{C}$ is a state for every $q \in [\delta,1]$, we obtain that \[ \begin{split} \| H(\xi) \|_{\infty} & = \sup_{q \in [\delta,1]} \big| H(\xi)(q) \big| = \sup_{q \in [\delta,1]} \big| h_q( \xi(q ,\cdot ) |_{X_q}) \big| \leq \sup_{ (q,s) \in [\delta,1] \times X_q} \big| \xi(q,s) \big| \leq \| \xi \|_\infty \end{split} \] for all $\xi \in \mathbb{C}[x,y]$. The result of the lemma now follows since $\mathbb{C}[x,y] \subseteq C( [\delta,1] \times [0,1])$ is dense in supremum norm by the Stone-Weierstrass theorem. \end{proof} Continuing our treatment of the right hand side of the estimate in Lemma \ref{l:metricest}, we now analyze how the functions $\rho_q(-,0) \colon X_q \to [0,\infty)$ depend on the deformation parameter $q \in [\delta,1]$. We record that \begin{equation}\label{eq:distzero} \begin{split} \rho_q(q^{2m},0) & = \sum_{k = m}^\infty \frac{(1 - q^2) q^k}{\sqrt{1 - q^{2(k+1)}}} = \sum_{k = 0}^\infty \frac{(1 - q^2) q^{k+m}}{\sqrt{1 - q^{2(k + m +1)}}} \end{split} \end{equation} whenever $q \neq 1$ and $m \in \mathbb{N}_0$, see \eqref{eq:metric-formula}. In order to deal with these expressions we let $(q,s) \in [\delta,1) \times [0,1]$ and define the continuous decreasing function $\zeta_{q,s} \colon (-1,\infty) \to [0,\infty)$ by the formula \[ \zeta_{q,s}(x) := \frac{ (1 - q^2) q^x \cdot \sqrt{s} }{\sqrt{1 - s q^{2(x + 1)}}} . \] Each of these functions can be estimated from above as follows: \[ \zeta_{q,s}(x) \leq \frac{q^x}{\sqrt{1 - q^{2(x + 1)}} } \leq \frac{q^x}{ \sqrt{1 - q^2}} \quad \T{for all } x \geq 0 . \] We may thus introduce the function $f \colon [\delta,1] \times [0,1] \to [0,\infty)$ by putting \begin{equation}\label{eq:conmetI} f(q,s) := \fork{ccc}{ \sum_{k = 0}^\infty \zeta_{q,s}(k) & \T{for} & q \neq 1 \\ 2 \arcsin( \sqrt{s}) & \T{for} & q = 1 } . \end{equation} Comparing with the formula for the metric in \eqref{eq:distzero} we immediately see that \begin{equation}\label{eq:metricfunc} f(q,q^{2m}) = \sum_{k = 0}^\infty \zeta_{q,q^{2m}}(k) = \sum_{k = 0}^\infty \frac{ (1 - q^2) q^{k+m} }{\sqrt{1 - q^{2(k + m + 1)}}} = \rho_q(q^{2m},0) \end{equation} whenever $q \neq 1$ and $m \in \mathbb{N}_0$. Similarly, for $q = 1$ we have that \begin{equation}\label{eq:metricfuncII} f(1,s) = 2 \arcsin(\sqrt{s}) = \arcsin(2s-1) + \frac{\pi}{2} = \rho_1(s,0) \quad \T{for all } s \in [0,1]. \end{equation}
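Let us note explicitly why the series in \eqref{eq:conmetI} converges; this is implicit in the estimate on $\zeta_{q,s}$ recorded above. Indeed, for $q \in [\delta,1)$ we obtain the geometric bound
\[
f(q,s) = \sum_{k = 0}^\infty \zeta_{q,s}(k) \leq \sum_{k = 0}^\infty \frac{q^k}{\sqrt{1 - q^2}} = \frac{1}{(1-q)\sqrt{1 - q^2}} < \infty \quad \T{for all } s \in [0,1] ,
\]
and the same bound on the tails shows that the partial sums $\big\{ \sum_{k = 0}^m \zeta_{-,-}(k) \big\}_{m = 0}^\infty$ converge uniformly on compact subsets of $[\delta,1) \times [0,1]$; this fact will be used in the proof of the next lemma.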
\begin{lemma}\label{l:conmetI} The function $f \colon [\delta,1] \times [0,1] \to [0,\infty)$ is continuous. \end{lemma} \begin{proof} We focus on proving continuity at a point of the form $(1,s_0)$ for a fixed $s_0 \in [0,1]$, since the continuity of the restriction $f\vert_{[\delta,1) \times [0,1]}$ follows from the fact that the sequence of partial sums $\{ \sum_{k=0}^m \zeta_{-,-}(k)\}_{m = 0}^\infty$ converges in supremum norm to $f$ on compact subsets of $[\delta,1) \times [0,1]$. We remark that for each fixed $(q,s) \in [\delta,1) \times [0,1]$ the function $\gamma_{q,s} \colon (-1,\infty) \to \mathbb{R}$ given by \[ \gamma_{q,s} \colon x \mapsto \frac{1 - q^2}{\ln(q) \cdot q} \cdot \arcsin( \sqrt{s} \cdot q^{x + 1} ) \] is an antiderivative of $\zeta_{q,s} \colon (-1,\infty) \to \mathbb{R}$. Moreover, since $\zeta_{q,s} \colon (-1,\infty) \to \mathbb{R}$ is positive and decreasing we obtain the estimates \[ \int_0^\infty \zeta_{q,s}(x) dx \leq \sum_{k = 0}^\infty \zeta_{q,s}(k) \leq \int_{-1}^\infty \zeta_{q,s}(x) dx . \] In order to compute the above integrals we record the following formulae: \[ \begin{split} & \lim_{x \to \infty} \gamma_{q,s}(x) = 0 \quad \quad \quad \quad \quad \quad \gamma_{q,s}(0) = \frac{1 - q^2}{\ln(q) \cdot q} \arcsin(\sqrt{s} \cdot q) \\ & \lim_{x \to -1} \gamma_{q,s}(x) = \frac{1 - q^2}{\ln(q) \cdot q} \arcsin(\sqrt{s}) . \end{split} \] We thereby obtain the estimates \[ -\frac{1 - q^2}{\ln(q) \cdot q} \arcsin(\sqrt{s} \cdot q) \leq f(q,s) \leq -\frac{1 - q^2}{\ln(q) \cdot q} \arcsin(\sqrt{s}) \] for all $(q,s) \in [\delta,1) \times [0,1]$. The continuity of the function $f \colon [\delta,1] \times [0,1] \to [0,\infty)$ at the fixed point $(1,s_0) \in [\delta,1] \times [0,1]$ now follows by noting that \[ \lim_{q \to 1} \frac{1 - q^2}{\ln(q) \cdot q} = - 2 , \] so that both the lower and the upper bound converge to $2 \arcsin(\sqrt{s_0}) = f(1,s_0)$ as $(q,s)$ tends to $(1,s_0)$. \end{proof} We are now ready for the final step regarding the continuity properties of the right hand side of the estimate in Lemma \ref{l:metricest}. For each $q \in [\delta,1]$ and $N \in \mathbb{N}_0$, we compute that \[ a_q^N (a_q^*)^N = (1 - q^{-2(N-1)} A_q) \cdot (1 - q^{-2(N-2)} A_q) \cdot\ldots\cdot (1 - A_q) . \] We then define the continuous function $g_N \colon [\delta,1] \times [0,1] \to \mathbb{R}$ by \begin{equation}\label{eq:conmetII} g_N(q,s) := \frac{\inn{N+1}_q }{q^{2N}} \cdot (1 - q^{-2(N-1)} \cdot s) \cdot (1 - q^{-2(N-2)} s) \cdot\ldots\cdot (1 - s) \end{equation} and note that $g_N(q,-)\vert_{X_q} = \frac{\inn{N + 1}_q}{q^{2N}} \cdot a_q^N (a_q^*)^N$.
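To make \eqref{eq:conmetII} concrete, consider the case $N = 1$: the product formula above then consists of the single factor $1 - A_q$, i.e. $a_q a_q^* = 1 - A_q$, and correspondingly
\[
g_1(q,s) = \frac{\inn{2}_q}{q^2} \cdot (1 - s) = \frac{(1 + q^2)(1 - s)}{q^2} .
\]
We also remark that, although $g_N(q,-)$ may change sign on $[0,1]$ when $q \neq 1$ and $N \geq 2$, its restriction to $X_q$ is automatically non-negative, being the Gelfand transform of the positive element $\frac{\inn{N+1}_q}{q^{2N}} \cdot a_q^N (a_q^*)^N$.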
The next result summarises what we have obtained so far and is thus a consequence of Lemma \ref{l:metricest}, Lemma \ref{l:conmetIV} and Lemma \ref{l:conmetI} together with \eqref{eq:metricfunc}, \eqref{eq:metricfuncII} and \eqref{eq:conmetII}: \begin{lemma}\label{l:conmetV} For each $N \in \mathbb{N}_0$ and each $q \in [\delta,1]$ we have the estimate \[ d_q( \epsilon,h_N) \leq H( f \cdot g_N)(q) . \] \end{lemma} We finish this section by proving two lemmas culminating in a uniform estimate on the distance between the Berezin transform and the identity map. \begin{lemma}\label{l:conmetVI} For each $q \in [\delta,1]$, it holds that $\displaystyle \lim_{N \to \infty} H(f \cdot g_N)(q) = 0$. \end{lemma} \begin{proof} Let $q \in [\delta,1]$ be given. We first remark that it follows from the definition in \eqref{eq:conmetI} (see also \eqref{eq:metricfunc}) that $f(q,0) = 0$. Since the restriction of the counit to $C^*(A_q,1) \cong C(X_q)$ is given by evaluation at $0\in X_q$, this translates into the identity $\epsilon\big(f(q,-)\vert_{X_q}\big)=0$. Moreover, we notice that \[ H(f \cdot g_N)(q)= h_q\big( g_N(q,-)\vert_{X_q} \cdot f(q,-)\vert_{X_q}\big) = h_N\big( f(q,-)\vert_{X_q} \big) . \] The result of the lemma now follows since the sequence of states $\{h_N\}_{N = 0}^\infty$ converges to the counit $\epsilon \colon C(S_q^2) \to \mathbb{C}$ in the weak$^*$-topology by Proposition \ref{prop:convergence-to-counit}, and $f(q,-)\vert_{X_q}$ belongs to $C^*(A_q,1) \subseteq C(S_q^2)$. \end{proof} \begin{lemma}\label{l:conmetVII} For each $\varepsilon > 0$, each $q_0 \in [\delta,1]$ and each $N_0 \in \mathbb{N}_0$ there exists an $N \geq N_0$ and an open interval $I \subseteq \mathbb{R}$ with $q_0 \in I$ such that \[ \| x - \beta_N(x) \| \leq \varepsilon \cdot L_{D_q}(x) \] for all $q \in I \cap [\delta,1]$ and all $x \in \C O(S_q^2)$. \end{lemma} \begin{proof} Let $\varepsilon > 0$, $q_0 \in [\delta,1]$ and $N_0 \in \mathbb{N}_0$ be given. By Lemma \ref{l:conmetVI} we may choose an $N \geq N_0$ such that $H(f \cdot g_N)(q_0) < \varepsilon/2$. Using Lemma \ref{l:conmetI} and Lemma \ref{l:conmetIV}, we see that $H(f \cdot g_N) \in C([\delta,1])$. We may therefore choose an open interval $I \subseteq \mathbb{R}$ with $q_0 \in I$ such that $\big| H(f \cdot g_N)(q) - H(f \cdot g_N)(q_0)\big| < \varepsilon/2$ for all $q \in I \cap [\delta,1]$. Combining these observations with Lemma \ref{l:conmetV} and Proposition \ref{p:approxIII} we then obtain that \[ \| x - \beta_N(x) \| \leq d_q(h_N,\epsilon) \cdot L_{D_q}(x) \leq \big| H(f \cdot g_N)(q) \big| \cdot L_{D_q}(x) \leq \varepsilon \cdot L_{D_q}(x) \] for all $q \in I \cap [\delta,1]$ and all $x \in \C O(S_q^2)$. This ends the proof of the lemma. \end{proof} \subsection{Quantum fuzzy spheres converge to the fuzzy sphere} In this section we prove that the quantum fuzzy spheres vary continuously in the deformation parameter $q$; in particular, they converge to the classical fuzzy sphere as $q$ tends to $1$. As remarked in Section \ref{ss:quafuz}, rather than thinking of the fuzzy sphere as a matrix algebra we will consider its image $F^N_1:=\sigma_N(\mathbb{M}_{N+1}(\mathbb{C}))$ under the covariant Berezin symbol $\sigma_N \colon \mathbb{M}_{N+1}(\mathbb{C})\to C(S^2)$, see Lemma \ref{lem:fuzzy-equal-image} and the discussion after this lemma. We recall that each of the quantum fuzzy spheres $F^N_q \subseteq C(S_q^2)$ is equipped with the Lip-norm arising from restricting the Lip-norm $L_{D_q} \colon C(S_q^2) \to [0,\infty]$, and in this way each $F^N_q$ becomes a compact quantum metric space. \begin{prop}\label{prop:quantum-fuzzy-to-classical-fuzzy} Let $N \in \mathbb{N}_0$. The quantum fuzzy spheres $\left(F^N_q\right)_{q\in (0,1]}$ vary continuously in the parameter $q$ with respect to the quantum Gromov-Hausdorff distance. \end{prop} \begin{proof} Let $\delta \in (0,1)$ be given. By \cite[Proposition 7.1]{Bla:DCH}, there exists a continuous field of $C^*$-algebras over $[\delta,1]$ with total space $C(SU_\bullet(2))$ such that $C(SU_q(2))$ agrees with the fibre at $q \in [\delta,1]$. There exist continuous sections of this field $A_\bullet, B_\bullet \in C(SU_\bullet(2))$ mapping to the generators $A_q$ and $B_q \in C(S_q^2) \subseteq C(SU_q(2))$ under the quotient map $\T{ev}_q \colon C(SU_\bullet(2)) \to C(SU_q(2))$ for each $q \in [\delta,1]$.
Let us define the subspace \[ V := \T{span}_\mathbb{C}\big\{ A^i_\bullet B_\bullet^j, A_\bullet^i (B_\bullet^*)^j \mid i,j \in \mathbb{N}_0 \, , \, \, i + j \leq N \big\} \subseteq C(SU_\bullet(2)) \] and let $V_{{\operatorname{sa}}} \subseteq V$ denote the real part $V_{{\operatorname{sa}}} := \big\{ x + x^* \mid x \in V \big\}$, so that $V_{{\operatorname{sa}}}$ becomes a real vector space of dimension $(N + 1)^2$ containing the unit $1$ from $C(SU_\bullet(2))$ (indeed, $V$ is invariant under the adjoint operation, and the two spanning families share the $N+1$ monomials with $j = 0$, leaving $2\binom{N+2}{2} - (N+1) = (N+1)^2$ distinct monomials). \\ We remark that it follows from Definition \ref{d:quantumfuzz} that the image $\T{ev}_q(V_{{\operatorname{sa}}})$ agrees with the order unit space $(F_q^N)_{{\operatorname{sa}}}$ for each $q \in [\delta,1]$. Therefore, upon defining $\| y\|_q := \| \T{ev}_q(y) \|$ for each $q \in [\delta,1]$ we obtain a continuous field $\{ \| \cdot \|_q\}_{q \in [\delta,1]}$ of order unit norms on $(V_{{\operatorname{sa}}},1)$. For each $q \in [\delta,1]$ we now record the formulae $\partial_1(A_q) = -b_q^* a_q^*$, $\partial_1(B_q) = (b^*_q)^2$ and $\partial_1(B_q^*) = - q^{-1} (a_q^*)^2$. Since we have continuous sections $a_\bullet, b_\bullet \in C(SU_\bullet(2))$ (mapping to the generators of $C(SU_q(2))$ under the quotient map $\T{ev}_q$) we then obtain a continuous family of Lip-norms $\{ L_q \}_{q \in [\delta,1]}$ on $V_{{\operatorname{sa}}}$ defined by $L_q(y) := L_{D_q}( \T{ev}_q(y))$ for each $q \in [\delta,1]$. The assumptions in \cite[Theorem 11.2]{Rie:GHD} are thereby fulfilled, and since $\delta \in (0,1)$ was arbitrary this implies the claimed continuity result. \end{proof} \subsection{Quantum fuzzy spheres converge to the Podle\'s sphere}\label{subsec:quantum-fuzzy-to-podles} In this section we fix a $q \in (0,1]$ and show that the quantum fuzzy spheres $F^N_q$ converge to $C(S_q^2)$ as $N$ tends to infinity. This follows directly from our analysis of the quantum Berezin transform and the following lemma, which is certainly folklore, but does not seem to be directly available in the literature. We remark that when the quantum Gromov-Hausdorff distance is replaced by Latr{\'e}moli{\`e}re's quantum propinquity, the statement can be found in \cite[Theorem 6.3]{Lat:QGH}, but for the benefit of the reader, we include a proof for the corresponding statement in our setting. We emphasise that the seminorm $K \colon Y \to [0,\infty]$ appearing in the statement need not agree with $L|_Y$ -- the seminorms $K$ and $L$ only agree on the dense domain of $K$. \begin{lemma}\label{lem:order-unit-version-of-frederics-result} Let $(X,L)$ be a compact quantum metric space and let $Y\subseteq X$ be a sub-operator system equipped with a seminorm $K \colon Y \to [0,\infty]$ with dense $*$-invariant domain $\T{Dom}(K)\subseteq \T{Dom}(L)$ such that $K(y) = L(y)$ for all $y \in \T{Dom}(K)$. Let $\varepsilon>0$. If for every $x\in X$ there exists $y\in Y$ such that $K(y)\leq L(x)$ and $\|x-y\| \leq \varepsilon L(x)$, then ${\operatorname{dist}}_{\T{Q}}\big((X,L); (Y,K) \big)\leq \varepsilon$. \end{lemma} Note that an application of Theorem \ref{t:totallybdd} gives that $(Y,K)$ automatically becomes a compact quantum metric space, so that the statement in the lemma indeed makes sense. \begin{proof} As explained in Section \ref{subsec:cqms}, the compact quantum metric spaces $(X,L)$ and $(Y,K)$ give rise to order unit compact quantum metric spaces, denoted by $A:=\{x\in X_{{\operatorname{sa}}} \mid L(x)<\infty\}$ and $B:=\{y\in Y_{\operatorname{sa}} \mid K(y)<\infty\}$.
In fact, since \[ {\operatorname{dist}}_{\T{Q}}\big((X,L); (Y,K)\big)= {\operatorname{dist}}_{\T{Q}}\big((A,L); (B ,K)\big) , \] one may as well pass to the order unit setting when considering matters related to the quantum Gromov-Hausdorff distance. \\ We define a seminorm $L_\varepsilon$ on $A \oplus B$ by setting \[ L_\varepsilon(a,b):=\max\left\{L(a), \, K(b), \, \tfrac{1}{\varepsilon}\|a-b\| \right\}. \] By construction, $L_\varepsilon(a,b)=0$ if and only if $(a,b)=t(1,1)$ for some $t\in \mathbb{R}$. By assumption, for each $a\in A$ there exists $b \in B$ such that $K(b)\leq L(a)$ and $\|a-b\|\leq \varepsilon L(a)$, and hence \[ L_\varepsilon(a,b) \leq L(a). \] Conversely, if $b \in B$ we trivially have that $L_\varepsilon(b,b) \leq L(b)$ (using now that $K(b) = L(b)$). This proves that the assumptions in \cite[Theorem 5.2]{Rie:GHD} are fulfilled, and from this we obtain that $L_\varepsilon \colon A\oplus B \to [0,\infty)$ is an admissible Lip-norm. \\ Next, denote by $\pi_1\colon A\oplus B \to A$ and $\pi_2\colon A\oplus B \to B$ the natural projections, and note that by \cite[Proposition 3.1]{Rie:GHD}, the dual maps $\pi_1^* \colon \C S(A) \to \C S(A \oplus B)$ and $\pi_2^*\colon \C S(B) \to \C S(A \oplus B)$ are isometries for the associated Monge-Kantorovi\v{c} metrics. Given $\psi \in \C S(B)$, we extend $\psi$ to a state $\tilde{\psi}$ on $A$ using the Hahn-Banach theorem, see \cite[Chapter II (1.10)]{alfsen-book}. For $(a,b) \in A\oplus B$ with $L_\varepsilon (a,b) \leq 1$, we have $\| b-a\|\leq \varepsilon$ and thus \[ \big|\pi_2^*(\psi)(a,b)-\pi_1^*(\tilde{\psi})(a,b)\big| = \big|\psi(b)-\tilde{\psi}(a) \big| = \big|\tilde{\psi}(b-a)\big| \leq \|b-a\| \leq \varepsilon. \] Hence $\rho_{L_{\varepsilon}}\big(\pi_2^*(\psi), \pi_1^*(\tilde{\psi})\big)\leq \varepsilon$. Conversely, given $\varphi \in \C S(A)$ we have $\varphi\vert_B \in \C S(B)$ and an analogous computation proves the inequality $\rho_{L_{\varepsilon}}\big(\pi_2^*(\varphi\vert_B), \pi_1^*(\varphi)\big)\leq \varepsilon$. In summary, we have shown that \[ {\operatorname{dist}}_H^{\rho_{L_\varepsilon}}\big(\pi_1^*(\C S(A)), \pi_2^*(\C S(B))\big)\leq \varepsilon, \] and therefore ${\operatorname{dist}}_{\T{Q}}\big((A,L);(B,K) \big)\leq \varepsilon$ as desired. \end{proof} Our result regarding convergence of the quantum fuzzy spheres towards the Podle\'s sphere can now be stated and proved. We emphasise that the domain of our Lip-norm $L_{D_q} \colon C(S_q^2) \to [0,\infty]$ is given by the coordinate algebra $\C O(S_q^2)$. \begin{thm}\label{thm:fuzzy-to-podles} For each $q\in (0,1]$ it holds that $\displaystyle\lim_{N\to \infty} {\operatorname{dist}}_{\T{Q}}\big(F^N_q; C(S_q^2)\big)=0$. \end{thm} \begin{proof} This follows from Proposition \ref{p:derVI}, Corollary \ref{cor:fuzzy-approx} and Lemma \ref{lem:order-unit-version-of-frederics-result}. \end{proof} Note that when $q=1$, Theorem \ref{thm:fuzzy-to-podles} gives a variation of Rieffel's original result \cite[Theorem 3.2]{Rie:MSG}, but since our Lip-norm on $F^N_1$ is a priori different from the one considered in \cite{Rie:MSG}, we do not recover the classical result verbatim. \subsection{The Podle\'s spheres converge to the sphere} In this section we prove the main result of the paper, which at this point follows rather easily from the analysis carried out in the previous sections. \begin{thm}\label{thm:podles-converging-to-classical} For any $q_0\in(0,1]$ one has $\displaystyle\lim_{q\to q_0} {\operatorname{dist}}_{\T{Q}}\big( C(S_q^2); C(S_{q_0}^2)\big)=0$.
\end{thm} \begin{proof} Let $\varepsilon>0$ be given. By Proposition \ref{p:derVI}, Lemma \ref{l:conmetVII} and Lemma \ref{lem:order-unit-version-of-frederics-result} there exists an open interval $I$ containing $q_0$ and an $N\in \mathbb{N}_0$ such that ${\operatorname{dist}}_{\T{Q}}(F_q^N; C(S_q^2))<\varepsilon/3$ for all $q\in I\cap (0,1]$. Upon shrinking $I$ if needed, Proposition \ref{prop:quantum-fuzzy-to-classical-fuzzy} shows that we may assume ${\operatorname{dist}}_{\T{Q}}(F^N_q; F^N_{q_0})<\varepsilon/3$ for all $q\in I \cap (0,1]$. Given $q \in I \cap (0,1]$, the inequality ${\operatorname{dist}}_{\T{Q}}\big(C(S_q^2);C(S_{q_0}^2) \big)<\varepsilon$ now follows from the triangle inequality for the quantum Gromov-Hausdorff distance, see \cite[Theorem 4.3]{Rie:GHD}. \end{proof} \begin{remark} Theorem \ref{thm:podles-converging-to-classical} is, in reality, a result about the coordinate algebras $\C O(S_q^2)$, $q \in (0,1]$, insofar as these are exactly the domains of the Lip-norms $L_{D_q} \colon C(S_q^2) \to [0,\infty]$. These coordinate algebras are the natural domains from a Hopf-algebraic point of view. Another natural (and much larger) domain would be the Lipschitz algebra $C^{{\operatorname{Lip}}}(S_q^2)$, and, in fact, the above convergence result holds true for the Lip-norms $L_{D_q}^{\T{max}} \colon C(S_q^2) \to [0,\infty]$ with domain $C^{{\operatorname{Lip}}}(S_q^2)$ discussed in Remark \ref{r:lipspec}. This result relies on Theorem \ref{thm:podles-converging-to-classical} but requires a substantial amount of extra analysis, and we therefore defer the details to the separate paper \cite{AKK:DistZero}. In fact, we show in \cite[Theorem A]{AKK:DistZero} that the quantum Gromov-Hausdorff distance between $(C(S_q^2), L_{D_q})$ and $(C(S_q^2), L_{D_q}^{\T{max}})$ is equal to zero for each $q \in (0,1]$. \end{remark} \bibliographystyle{plain}